
Can We Declare the Death of “Personal Truth”?

photograph of dictionary entry for "truth"

According to Google Trends, searches for “your truth” and “speaking your truth” began a noticeable rise around the mid-2010s, likely as a response to the MeToo movement. At the time, the concept of speaking a personal truth was met with controversy. Just a few months ago, actress Cate Blanchett ridiculed the concept, and now, with the discussion of Prince Harry’s various “personal truths,” it has made its way back into the news. But surely if the pandemic has taught us anything, it’s that facts do matter and misinformation is a growing problem. Can we finally put an end to a concept that might be more harmful than helpful?

Before we consider the problems with the concept of personal truth and the idea of speaking one’s own truth, we should consider the uses and insights such a concept does provide. It isn’t a surprise that the concept of personal truth took on a new prominence in the wake of MeToo. The concept of personal truth emerged in response to a problem where women were not believed or taken seriously in their reports of sexual harassment and sexual assault, prompting a call for the public to “believe women.” It can be powerful to affirm “your” truth in the face of a skeptical world that refuses to take seriously your account as representing “the” truth. As Garance Franke-Ruta explains, “sometimes you know something is real and happened and is wrong, even if the world says it’s just the way things are.”

Oprah helped popularize the concept when she used it during the Golden Globes ceremony, and her example demonstrates another important aspect of the concept. Oprah had a difficult childhood, living in poverty and suffering abuse from her family, and the notion that she was “destined for greatness” was considered to be “her” truth. Many feel a connection to such “personal truths” because they allow people who are rarely heard to tell their story and connect their individual experiences to systemic issues.

In philosophy, standpoint theory holds that an individual’s perspectives are shaped by their social experiences and that marginalized people have a unique perspective in light of their particular experiences of power relations.

Sandra Harding’s concept of “strong objectivity” holds that by focusing on the perspectives of those who are marginalized from knowledge production, we can produce more objective knowledge. Thus, by focusing on what many might very well call “their truths” (in other words, what they claim to be true in contrast to the claims of those who are not marginalized), we might achieve greater objectivity.

On the other hand, even if we recognize the value of such experiential accounts, and even if we recognize that there is a problem when people who are abused aren’t believed, it still doesn’t mean that there is any such thing as personal or subjective truth. There seems to be a growing attitude that people are entitled to believe whatever they want individually. But “personal truth” is a contradiction in terms. To understand why, we can look to John Dewey’s “The Problem of Truth,” which investigates truth not only as a logical concept but as a social one as well.

Truth is supposed to be authoritative. If I tell you something is my opinion, nothing follows from that. If, on the other hand, I state that my opinion is true, then the claim takes on an authority that forces others to evaluate for themselves whether they believe it is true or false. As Dewey explains, “The opposite of truth is not error, but lying, the willful misleading of others.” To represent things as they are

is to represent them in ways that maintain a common understanding; to misrepresent them is to injure—whether wilfully or no—the conditions of common understanding … understanding is a social necessity because it is a prerequisite of all community of action.

Dewey’s point is that truth developed as a social concept that became necessary for social groups to function. This is important because truth and accountability go hand in hand. When we represent something as the truth, we are making a public statement. To say that something is true means that the claim we are making can be assessed by anyone who might investigate it (with enough training and resources) – it means others can reproduce one’s results and corroborate one’s findings. Something held merely in private, on the other hand, has no truth value. As Dewey explains,

So far as a person’s way of feeling, observing and imagining and stating are not connected with social consequences, so far they have no more to do with truth and falsity than his dreams and reveries. A man’s private affairs are his private affairs, and that is all there is to be said of them. Being nobody else’s business, it is absurd to regard them as either true or false.

While figuratively it can be beneficial to talk about personal truths, ethically it is far more problematic. While many may (rightfully) criticize cultural relativism, at least with cultural relativism, you still have public accountability because culture is the benchmark for truth. In the end, “truth” requires verification. We do not get to claim that something is true until it has survived empirical testing from an ever-growing community of fellow knowers. To claim that something is “true” prior to this, based on individual experience alone, is to take something that rightly belongs to the community. It negates the possibility of delusion or poor interpretation since no one gets to question it. Thus, asserting something to be true on one’s own account is anti-social.

If truth is meant to be publicly accessible, and if you are expected to be accountable for things you claim to be true in light of this, then the concept of personal or private truth negates this. If something is true, then it is true in light of evidence that extends beyond yourself. Thus, if something is true then there is nothing “personal” about it, and if it is merely personal, it can’t be “true.” Figurative language is nice, but people growing up today hearing about “personal” truths in the media risk becoming increasingly confused about the nature of truth, evidence, and reasoning.

As we collectively grapple with growing problems like misinformation, polarization, and conspiracy theories, it is hypocritical to condemn these things while simultaneously encouraging people to embrace their own personal truths. This notion erases the difference between what is true and what is delusional, and it fails to recognize “truth” as a properly social and scientific value. It’s high time we let this concept die.

AI and Pure Science

Pixelated image of a man's head and shoulders made up of pink and purple squares

In September 2019, four researchers wrote to the academic publisher Wiley to request that it retract a scientific paper relating to facial recognition technology. The request was made not because the research was wrong or reflected bad methodology, but rather because of how the technology was likely to be used. The paper discussed the process by which algorithms were trained to detect faces of Uyghur people, a Muslim minority group in China. While these researchers believed publishing the paper presented an ethical problem, Wiley defended the article, noting that it was about a specific technology, not about the application of that technology. This event raises a number of important questions, but, in particular, it demands that we consider whether there is an ethical boundary between pure science and applied science when it comes to AI development – that is, whether we can so cleanly separate knowledge from use as Wiley suggested.

The 2019 article for the journal WIREs Data Mining and Knowledge Discovery discusses discoveries made by the research team in its work on ethnic-group facial recognition, which included datasets of Chinese Uyghur, Tibetan, and Korean students at Dalian University. In response, a number of researchers, disturbed that academics had tried to build such algorithms, called for the article to be retracted. China has been condemned for its heavy surveillance and mass detention of Uyghurs, and this study and a number of other studies, some scientists claim, are helping to make this surveillance and oppression more effective. As Richard Van Noorden reports, there has been a growing push by some scientists to get the scientific community to take a firmer stance against unethical facial-recognition research, denouncing not only controversial uses of the technology but its research foundations as well. They call on researchers to avoid working with firms or universities linked to unethical projects.

For its part, Wiley has defended the article, noting, “We are aware of the persecution of the Uyghur communities … However, this article is about a specific technology and not an application of that technology.” In other words, Wiley seems to be adopting an ethical position based on the long-held distinction between pure and applied science. This distinction is old, tracing back to the time of Francis Bacon and the first scientific societies as part of a compromise between the state and scientists. As Robert Proctor reports, “the founders of the first scientific societies promised to ignore moral concerns” in return for funding and freedom of inquiry, with science agreeing to keep out of political and religious matters. In keeping with Bacon’s urging that we pursue science “for its own sake,” many began to distinguish “pure” science, interested in knowledge and truth for their own sake, from applied science, which uses engineering to put scientific knowledge to work in securing various social goods.

In the 20th century, the division between pure and applied science was used as a rallying cry for scientific freedom and against “politicizing science.” This took place against a historical backdrop in which chemists had facilitated great suffering in World War I and physicists far more in World War II. Maintaining the political neutrality of science was thought to make it more objective by ensuring value-freedom. The notion that science requires freedom was touted by well-known physicists like Percy Bridgman, who argued,

The challenge to the understanding of nature is a challenge to the utmost capacity in us. In accepting the challenge, man can dare to accept no handicaps. That is the reason that scientific freedom is essential and that artificial limitations of tools or subject matter are unthinkable.

For Bridgman, science just wasn’t science unless it was pure. He explains, “Popular usage lumps under the single word ‘science’ all the technological activities of engineering and industrial development, together with those of so-called ‘pure science.’ It would clarify matters to reserve the word science for ‘pure’ science.” For Bridgman, it is society that must decide how to use a discovery rather than the discoverer, and thus it is society’s responsibility to determine how to use pure science rather than the scientists’. As such, Wiley’s argument seems to echo Bridgman’s. There is nothing wrong with developing the technology of facial recognition in and of itself; if China wishes to use that technology to oppress people, that’s China’s problem.

On the other hand, many have argued that the supposed distinction between pure and applied science is not ethically sustainable. Indeed, many such arguments were driven by the reaction to the uses of science during the world wars. Janet Kourany, for example, has argued that science and scientists have moral responsibilities because of the harms that science has caused, because science is supported through taxes and consumer spending, and because society is shaped by science. Heather Douglas has argued that scientists shoulder the same moral responsibilities as the rest of us not to engage in reckless or negligent research, and that, due to the highly technical nature of the field, it is not reasonable for the rest of society to carry those responsibilities for scientists. While the kind of pure knowledge that Bridgman or Bacon favored has value, that value needs to be weighed against other goods like basic human rights, quality of life, and environmental health.

In other words, the distinction between pure and applied science is ethically problematic. As John Dewey argues, the distinction is a sham because science is always connected to human concerns. He notes,

It is an incident of human history, and a rather appalling incident, that applied science has been so largely made equivalent for use for private and economic class purposes and privileges. When inquiry is narrowed by such motivation or interest, the consequence is in so far disastrous both to science and to human life.

Perhaps this is why many scientists do not accept Wiley’s argument for refusing retraction; discovery doesn’t happen in a vacuum. It isn’t as if we don’t know why the Chinese government has an interest in this technology. So, at what point does such research become morally reckless given the very likely consequences?

This is also why debate around this case has centered on the issue of informed consent. Critics charge that the Uyghur students who participated in the study were likely not fully informed of its purposes and thus could not provide truly informed consent. The fact that informed consent is relevant at all, which Wiley admits, seems to undermine its entire argument, as informed consent in this case appears explicitly tied to how the technology will be used. If informed consent is ethically required, this is not a case where we can simply consider pure research with no regard to its application. And these considerations prompted scientists like Yves Moreau to argue that all unethical biometric research should be retracted.

But regardless of how we think about these specifics, this case serves to highlight a much larger issue: given the large number of ethical issues associated with AI and its potential uses, we need to dedicate much more of our time and attention to the question of whether certain forms of research should be considered forbidden knowledge. Do AI scientists and developers have moral responsibilities for their work? Is it more important to develop this research for its own sake, or are there other ethical goods that should take precedence?

The Democratic Limits of Public Trust in Science

photograph of Freedom Convoy trucks

It isn’t every day that Canada makes international headlines for civil unrest and disruptive protests. But the protests begun last month in Ottawa by the “Freedom Convoy” have inspired similar protests around the world and led the Canadian government to declare a national emergency and seek special powers to handle the crisis. But what exactly is the crisis that the nation faces? Is it a far-right, conspiratorial, anti-vaccination movement threatening to overthrow the government? Or is it the government’s infringement on rights in the name of “trusting the experts”?

It is easy to take the view that the protests are wrong. First, we must acknowledge that the position the truckers are taking in protesting the mandate is fairly silly. For starters, even if they were successful at getting the Canadian federal government to change its position, the United States also requires that truckers be vaccinated to cross the border, so the point is moot. I also won’t defend the tactics used in the protests, including the noise, blocking bridges, etc. However, several people in Canada have pinned part of the blame for the protests on the government, and Justin Trudeau in particular, for politicizing the issue of vaccines and creating a divisive political atmosphere.

First, it is worth noting that Canada has lately relied more on restrictive lockdown measures than other countries, much of it driven by the need to keep hospitals from being overrun. However, this is owing to long-term systemic fragility in the healthcare sector, particularly a lack of ICU beds, prompting many – including one of Trudeau’s own MPs – to call for reform to healthcare funding to expand capacity instead of relying so much on lockdown measures. One would think that this would be a topic of national conversation, with the public wondering why the government hasn’t done anything about this situation since the beginning of the pandemic. But instead, the Trudeau government has chosen to focus only on a policy of increasing vaccination rates, claiming that it is following “the best science” and “the best public health advice.”

Is there, however, a possibility that the government is hoping that, with enough people vaccinated and enough lockdown measures, it can keep the healthcare system from collapsing, wait for the pandemic to blow over, and escape without having to address such long-term problems? Maybe, maybe not. But it certainly casts any advice offered or decisions made by the government in a very different light. Indeed, one of the problems with expert advice (as I’ve previously discussed here, here, and here) is that it is subject to inductive risk concerns, and so the use of expert advice must be democratically informed.

For example, if we look at a model used by Canada’s federal government, we will note how often its projections are based on different assumptions about what could happen. The model itself may be driven by a number of unstated assumptions which may or may not be reasonable. It is up to politicians to weigh the risks of getting it wrong, not simply to treat experts as if they are infallible. This is important because the value judgments inherent in risk assessment – about the reasonableness of our assumptions as well as the consequences of getting it wrong and potentially overrunning the healthcare system – are what ultimately determine what restriction measures the government will enact. But this requires democratic debate and discussion. This is where failure of democratic leadership breeds long-term mistrust in expert advice.

It is reasonable to ask questions about what clear metrics a government might use before ending a lockdown, or to ask if there is strong evidence for the effectiveness of a vaccine mandate. But for the public, not all of whom enjoy the benefit of an education in science, it is not so clear what is and is not a reasonable question. The natural place for such a discussion would be the elected Parliament, where representatives might press the government for answers. Unfortunately, defense of the protest in any form in Parliament is vilified, with the opposition being told they stand with “people who wave swastikas.” Prime Minister Trudeau has denounced the entire group as a “small fringe minority” holding “unacceptable views,” even branding them “Nazis.” However, some MPs have voiced concern about the tone and rhetoric involved in lumping together everyone who has a doubt about the mandate or vaccine.

This divisive attitude has been called out by one of Trudeau’s own MPs who said that people who question existing policies should not be demonized by their Prime Minister, noting “It’s becoming harder and harder to know when public health stops and where politics begins,” adding, “It’s time to stop dividing Canadians and pitting one part of the population against another.” He also called on the Federal government to establish clear and measurable targets.

Unfortunately, if you ask the federal government a direct question like “Is there a federal plan being discussed to ease out mandates?” you will be told that:

there have been moments throughout the pandemic where we have eased restrictions and those decisions have always been made guided by the best available advice that we’re getting from public health experts. And of course, going forward we will continue to listen to the advice that we get from our public health officials.

This is not democratic accountability (and it is not scientific accountability either). “We’re following the science” or “We’re following the experts” is not good enough. Anyone who actually understands the science will know that this is more a slogan than a meaningful claim.

There is also a bit of history at play. In 1970, Trudeau’s father Pierre invoked the War Measures Act during a crisis that resulted in the kidnapping and murder of a cabinet minister. It resulted in the roundup and arrest of hundreds of people without warrant or charge. This week the Prime Minister has invoked the successor to that legislation for the first time in Canadian history because…trucks. The police were having trouble moving the trucks because they couldn’t get tow trucks to help clear blocked border crossings. Now, while we can grant that the convoy has been a nuisance and has illegally blocked bridges, we’ve also seen the convoy complying with court-ordered injunctions on honking and the convoy organizers opposing violence, with no major acts of violence taking place. While there was a rather odd proposal that the convoys could form a “coalition” with the parliamentary opposition to form a new government, I suspect that this owes more to a failure to understand how Canada’s system of government works than to a serious attempt to, as some Canadian politicians would claim, “overthrow the government.”

The point is that this is an issue that started with a government not being transparent and accountable, abusing the democratic process in the name of science, and taking advantage of the situation to demonize and delegitimize the opposition. It is in the face of this, and in the face of uncertainty about the intentions of the convoy, and after weeks of not acting sooner to ameliorate the situation, that the government claims that a situation has arisen that, according to the Emergencies Act, is a “threat to the security of Canada…that is so serious as to be a national emergency.” Not only is there room for serious doubt as to whether the convoy situation has reached such a level, but this is taking place in a context of high tension where the government and the media have demonstrated a willingness to overgeneralize and demonize a minority, lobbing as many poisoning-the-well fallacies as possible and misrepresenting the nature of science. The fact that in this political moment the government seeks greater power is a recipe for abuse of power.

In a democracy, where not everyone enjoys the chance to understand what a model is, how models are made, or how reliable (and unreliable) they can be, citizens have a right to know more about how their government is making use of expert advice in limiting individual freedom. The politicization of the issue using the rhetoric of “following the science,” as well as the government’s slow response and opaque reasoning, have only served to make it more difficult for the public to understand the nature of the problem we face. Our public discourse has been stunted by the narrowing of our policy conversations to vaccination and the risk posed by the “alt right.” But there is a much bigger, much more real problem here: the call to “trust the experts” can serve just as easily as a rallying cry for rationality as it can as a political tool for demonizing entire groups of people to justify taking away their rights.

‘Don’t Look Up’ and “Trust the Science”

photograph of "Evidence over Ignorance" protest sign

A fairly typical review of “Don’t Look Up” reads as follows: “The true power of this film, though, is in its ferocious, unrelenting lampooning of science deniers.” I disagree. This film exposes the unfortunate limits of the oft-repeated imperative of the coronavirus and climate-change era: “Trust the Science.” McKay and Co. probe a kind of epistemic dysfunction, one that underlies many of our fiercest moral and political disagreements. Contrary to how it’s been received, the film speaks to the lack of a generally agreed-upon method for arriving at our beliefs about how the world is and whom we should trust.

As the film opens, we are treated to a warm introduction to our two astronomers and shown a montage of the scientific and mathematical processes they use to arrive at their horrific conclusion that a deadly comet will collide with Earth in six months. Surely, you might be thinking, this film tells us exactly whom to believe and trust from the outset! It tells us to “Trust the Scientists,” to “Trust the Science!”

Here’s a preliminary problem with trying to follow that advice. It’s not as if we all conduct scientific experiments ourselves whenever we accept scientific facts. Practically, we have to rely on the testimony of others to tell us what the science says — so whom do we believe? Which scientists and which science?

In the film, this decision is straightforward for us. In fact, we’re not given much of a choice. But in real life, things are harder. Brilliantly, the complexity of real life is (perhaps unintentionally) reflected in the film itself.

Imagine you’re a sensible person, a Science-Truster. You go to the CDC to get your coronavirus data, to the IPCC to get your climate change facts. If you’re worried about a comet smashing into Earth, you might think to yourself something like, “I’m going to go straight to the organization whose job it is to look at the scientific evidence, study it, and come to conclusions; I’ll trust what NASA says. The head of NASA certainly sounds like a reliable, expert source in such a scenario.” What does the head of NASA tell the public in “Don’t Look Up”? She reports that the comet is nothing to worry about.

Admittedly, McKay provides the audience a clear reason to ignore the NASA head’s misleading testimony about the comet. She is revealed to be a political hire and an anesthesiologist rather than an astronomer. “Trust the Science” has a friend, “Trust the Experts,” and the head of NASA doesn’t qualify as an expert on this topic. So far, so good, for the interpretation of the film as endorsing “Trust the Science” as an epistemic doctrine. It’s clear why so many critics misinterpret the film this way.

But, while it’s easy enough to miss amid the increasingly frantic plot, the plausibility of Trust the Science falls apart as the film progresses. Several Nobel-prize-winning, Ivy League scientists throw their support behind the (doomsday-causing) plan of a tech billionaire to bring the wealth of the comet safely to Earth in manageable chunks. They assure the public that the plan is safe. Even one of our two scientific heroes repeats the false but reassuring line on a talk show, to the hosts’ delight.

Instead of being a member of the audience with privileged information about whom you should trust, imagine being an average Joe in the film’s world at this point. All you could possibly know is that some well-respected scientists claim we need to destroy or divert the comet at all costs. Meanwhile, other scientists, equally if not more well-respected, claim we can safely bring the mineral-rich comet to Earth in small chunks. What does “Trust the Science” advise “Don’t Look Up” average Joe? Nothing. The advice simply can’t be followed. It offers no guidance on what to believe or whom to listen to.

How could you decide what to believe in such a scenario? Assuming you, like most of us, lack the expertise to adjudicate the topic on the scientific merits, you might start investigating the incentives of the scientists on both sides of the debate. You might study who is getting paid by whom, who stands to gain from saying what. And this might even lead you to the truth — that the pro-comet-impact scientists are bought and paid for by the tech-billionaire and are incentivized to ignore, or at least minimize, the risk of mission failure. But this approach to belief-formation certainly doesn’t sound like Trusting the Science anymore. It sounds closer to conspiracy theorizing.

Speaking of conspiracy theories, in a particularly fascinating scene, rioters confront one of our two astronomers with the conspiracy theory that the elites have built bunkers because they don’t really believe the comet is going to be survivable (at least, not without a bunker). Our astronomer dismissively tells the mob this theory is false, that the elites are “not that competent.” This retort nicely captures the standard rationalistic, scientific response to conspiracy theories; everything can be explained by incompetence, so there’s no need to invoke conspiracy. But, as another reviewer has noticed, later on in the film “we learn that Tech CEO literally built a 2,000 person starship in less than six months so he and the other elites could escape.” It turns out the conspiracy theory was more or less correct, if not in the exact details. The rationalistic, scientific debunking and dismissal of conspiracy is proven entirely wrong. We would have done better trusting the conspiracy theorist than trusting the scientist.

Ultimately, the demand that we “Trust the Science” turns out to be both un-followable (as soon as scientific consensus breaks down, since we don’t know which science or scientists to listen to), and unreliable (as shown when the conspiracy theorist turns out to be correct). The message this film actually delivers about “Trust the Science” is this: it’s not good enough!

The Moral and Political Importance of “Trust the Science”

Let’s now look at why any of this matters, morally speaking.

Cultures have epistemologies. They have established ways for their members to form beliefs that are widely accepted as the right ways within those cultures. That might mean that people generally accept, for example, a holy text as the ultimate source of authority about what to believe. But in our own society, currently, we lack this. We don’t have a dominant, shared authority or a commonly accepted way to get the right beliefs. We don’t have a universally respected holy book to appeal to, not even a Walter Cronkite telling us “That’s the way it is.” We can’t seem to agree on what to believe or whom to listen to, or even what kinds of claims have weight. Enter “Trust the Science”: a candidate heuristic that just might be acceptable to members of a technologically developed, scientifically advanced, and (largely) secularized society like ours. If our society could collectively agree that, in cases of controversy, everyone should Trust the Science, we might expect the emergence of more of a consensus on the basic facts. And that consensus, in turn, may resolve many of our moral and political disagreements.

This final hope isn’t a crazy one. Many of our moral and political disagreements are based on disagreements about the basic facts. Why do Democrats tend to support mandatory masks, vaccines, and other coronavirus-related restrictions, while Republicans tend to oppose them? Much of it is probably explained by the fact that, as a survey of 35,000 Americans found, “Republicans consistently underestimate risks [of coronavirus], while Democrats consistently overestimate them.” In other words, the fact that both sides have false beliefs partly explains their moral and political disagreements. Clearly, none of us are doing well on our own at figuring out whom we can trust to give truthful, undistorted information. But perhaps, if we all just followed the “Trust the Science” heuristic, we would reach enough agreement about the basic facts to make some progress on these moral and political questions.

Perhaps unintentionally, “Don’t Look Up” presents a powerful case against this hopeful, utopian answer to the deep divisions in our society. Trusting the Science can’t play the unifying role we might want it to; it can’t form the basis of a new, generally agreed upon secular epistemic heuristic for our society. “Don’t Look Up” is not the simple “pro-science,” “anti-science-denier” film many have taken it to be. It’s far more complicated, ambivalent, and interesting.

The Texas Heartbeat Act and Linguistic Clarity

black-and-white photograph of Texas State Capitol Building

On September 1st, S.B. 8, otherwise known as the Texas Heartbeat Act, came into force. This Act bars abortions once fetal cardiac activity is detectable by ultrasound. While the specific point at which this activity can be identified is challenging to pin down, it most often occurs around the six-week mark. Past this point, the Act allows private citizens to sue anyone who offers abortions or ‘aids and abets’ a procedure – this includes everyone from abortion providers to taxi drivers taking people to clinics. If the suit is successful, not only can the claimant recover their legal fees, but they also receive $10,000 – all paid by the defendant.

The introduction of this law raises numerous concerns. These include (but are certainly not limited to) whether private citizens should be rewarded for enforcing state law, the fairness of the six-week mark given that most people won’t know they’re pregnant at this point, the lack of an exception for pregnancies resulting from rape or incest, and whether the law is even constitutional. However, in this piece, I want to draw attention to the Act’s language. Specifically, I want to look at two key terms: ‘fetal heartbeat’ and ‘fetus.’

Fetal Heartbeat

At multiple points within the Act, reference is made to the fetal heartbeat requiring detection. This concept is so central to the Act that not only does heartbeat feature in its title, but it is also the very first definition provided – “(1) ‘Fetal heartbeat’ means cardiac activity or the steady and repetitive rhythmic contraction of the fetal heart within the gestational sac.” You would think that such terminology is correct and accurate. After all, accuracy is essential for all pieces of legislation, let alone one with such crucial and intimate ramifications. Indeed, the Act itself indicates that the term is appropriate as, in the Legislative Findings section, it states, “(1) fetal heartbeat has become a key medical predictor that an unborn child will reach live birth.”

However, there exists here a problem. For something to have a heartbeat, it must first have the valves whose opening and closing results in the tell-tale ‘thump-thump’; no valves, no heartbeat. While this may seem obvious (indeed, I think it is), it appears to be something the Act’s creators have… overlooked.

At six weeks, the point at which cardiac activity is typically detectable and abortions become prohibited, a fetus doesn’t have these valves. While a rudimentary structure will be present, typically developing into a heart, this structure doesn’t create a heartbeat. So, if you put a stethoscope on a pregnant person’s stomach at this point, you wouldn’t hear the beating of a heart. Indeed, when someone goes in for an ultrasound and hears something sounding like a heartbeat, that sound is created by the ultrasound machine based upon the cardiac activity it detects. As such, the Heartbeat Act concerns itself with something that is entirely incapable of producing a heartbeat.

For some, this may seem like a semantic issue. After all, the Act clarifies what it considers a fetal heartbeat when it conflates it with cardiac activity. You may think that I’m being overly picky and that the two amount to roughly the same thing at the end of the day. You might argue that while this activity may not result in the same noise you would hear in a fully developed person, it still indicates a comparable biological function. However, the term heartbeat is emotively loaded in a way that cardiac activity isn’t, and this loading is essential to the discussion at hand.

For centuries, a heartbeat (alongside breath) was the defining quality that signified life. Thus, someone was dead when their heart irrevocably stopped beating. However, with developments in medical technologies, most notably transplantation, this cardiopulmonary definition of death became less useful. After all, undergoing a heart transplant means that, at some point, you’ll lack a heartbeat. Yet saying that such a person is dead would seem counterintuitive, as the procedure aims to, and typically does, save the organ’s recipient. As a result, definitions of death started to focus more on the brain.

By saying that cardiac activity is synonymous with a heartbeat, the creators of the Act seek to draw upon this historical idea of the heartbeat as essential for life. By appealing to the emotive idea that a heartbeat is detectable at six weeks, an attempt is made to draw the Act’s ethical legitimacy not from scientific accuracy but an emotional force. Doing so anthropomorphizes something which is not a person. The phrase fetal heartbeat seeks to utilize our familiarity with the coupling of personhood and that tell-tale ‘thump-thump.’ But it is important to remember that the entity in question here does not have a heartbeat. Heck, cardiac activity, which is at its core electrical activity, doesn’t even indicate a functional cardiovascular system or a functional heart.

Fetus

So far in this piece, I have used the same terminology as the Act to describe the entity in question, that being the word ‘fetus.’ However, much like ‘fetal heartbeat,’ the Act’s use of this term is inaccurate and smuggles in deceptive emotive rhetoric. Unlike ‘fetal heartbeat,’ however, ‘fetus’ is at least a scientific term.

There are, roughly speaking, three stages of prenatal development: (i) germinal, where the entity is nothing more than a clump of cells (0 – 2 weeks); (ii) embryonic, where the cell clump starts to take on a human form (3 – 8 weeks); and (iii) fetal, where further refinement and development occur (9 weeks – birth).

I’m sure you can already spot the issue here. If cardiac activity typically occurs around the six-week mark, at which point the Act prohibits abortions, then this places the boundary squarely in the embryonic, not the fetal, stage. Thus, using the term ‘fetus’ throughout the Act is scientifically inaccurate at best, and dangerously misleading at worst. Once again, you might wonder why this matters and think I’m making a bigger deal of this than it needs to be. After all, it’s only a couple of weeks out of step with the scientific consensus. However, as is the case with ‘fetal heartbeat’ (a term that is now doubly inaccurate, as it refers to neither a fetus nor a heartbeat), the term ‘fetus’ comes packaged with emotional baggage.

Describing the developing entity as a fetus evokes images of a human-like being, one that resembles how we are after birth, and makes it easier to ascribe it some degree of comparable moral worth. But this is not the case. An embryo, around the six-week point, may possess some human-like features. However, it is far from visually comparable to a fully formed person, and it is this point that the Act’s language obfuscates. Describing the embryo as a fetus is to draw upon the imagery the latter evokes: to make you think of a baby-like being developing in a womb and to push the belief that abortion is a form of murder.

Wrapping it up

It would seem a reasonable claim that accuracy is essential in our philosophical reasoning and our legal proceedings. We want to understand the world as it is and create systems that are best suited for the challenges thrown at them. Key to this is the use of appropriate language. Whether deliberate or not, inaccurate terminology makes it harder to act morally, as inappropriate assumptions often lead to inappropriate results.

The moral status of the embryo and fetus is a topic that has been debated for centuries, and I would not expect it to be unanimously resolved anytime soon. However, using incorrect language as a means of eliciting a response built solely on the passions is undoubtedly not going to help. Laws need to describe the things they are concerned with accurately, and the Texas Heartbeat Act fails in this task.

Ivermectin, Hydroxychloroquine, and the Dangers of Scientific Preprints

photograph of "In Evidence We Trust" protest sign

There is a new drug of choice among those who have refused to get vaccinated for COVID-19, or are otherwise looking for alternative treatments: ivermectin, an antiparasitic drug that is used primarily in farm animals. The drug recently made headlines in the U.S. after a judge in Ohio ordered a hospital to treat a patient with it, and a number of countries in Latin America and Europe have begun using it, as well. It is not the first time that a drug that was developed for something else entirely was touted as the new miracle cure for COVID-19: hydroxychloroquine, an anti-malarial, was an early favorite for alternative treatments from former president Trump, despite the FDA’s statement that it had no real effect on patients with COVID-19, and indeed could be very dangerous when used improperly. The FDA has recently issued a statement to a similar effect when it comes to ivermectin, warning that the drug can be “highly toxic in humans.”

It is not surprising that there has been continued interest in alternative treatments for COVID-19: given the existence of vaccine skepticism and various surrounding conspiracy theories, people who do not trust the science of vaccinations, for one reason or another, will look for other ways of fighting the disease. What is perhaps surprising is why this particular drug was chosen as the new alternative treatment. There is, after all, seemingly no good reason to think that a horse de-wormer would be effective at killing the coronavirus. So where did this idea come from?

Not, it turns out, from nowhere. As was the case with hydroxychloroquine, the U.S.-based health analytics company Surgisphere produced a study that purported to show that ivermectin was effective at treating COVID-19, albeit in just “a handful of in vitro and observational studies.” The study was not published in any peer-reviewed outlet, but was instead uploaded as a preprint.

A preprint is a “version of a scientific manuscript posted on a public server prior to formal review”: it’s meant to be a way of rapidly disseminating results to the scientific community at large. Preprints can have significant benefits when it comes to getting one’s results out quickly: peer-review can be a lengthy process, and during a global pandemic, time is certainly of the essence. At the same time, there are a number of professional and ethical considerations that surround the use of preprints in the scientific community.

For example, a recent study on preprints released during the pandemic found a “remarkably low publication rate” for sampled papers, with one potential explanation being that “some preprints have lower quality and will not be able to endure peer-reviewing.” Others have cautioned that while the use of preprints has had positive effects in the physical sciences, when it comes to the medical sciences there is potentially more reason to be concerned: given that developments in medical science are typically of much more interest to the general public, “Patients may be exposed to early, unsubstantiated claims relevant to their conditions, while lacking the necessary context in which to interpret [them].” Indeed, this seems to be what happened with regard to alternative treatments for COVID-19, which have been uploaded online amongst an explosion of new preprint studies.

Additional problems arise when it comes to the use of medical preprints in the media. Another recent study found that while it was common practice for online media outlets to link to preprints, said preprints were often framed inconsistently: media outlets often failed to mention that the preprints had not been peer reviewed, instead simply referring to them as “research.” While the authors of the study were encouraged that discussions of preprints in the media could foster “greater awareness of the scientific uncertainty associated with health research findings,” they were again concerned that failing to appropriately frame preprint studies risked misleading readers into thinking that the relevant results were accepted in the scientific community.

So what should we take away from this? We have seen that there are clear benefits to the general practice of publishing scientific preprints online, and that in health crises in particular the rapid dissemination of scientific results can produce faster progress. At the same time, preprints making claims that are not adequately supported by the evidence can get picked up by members of the general public, as well as the media, who may be primarily concerned with breaking new “scientific discoveries” without properly contextualizing the results or doing their due diligence regarding the reliability of the source. Certainly, then, there is an obligation on the part of media outlets to do better: given that many preprints do not survive peer review, it is important for the media to note, when they do refer to preprint studies, that the results are provisional.

It’s not clear, though, whether highlighting the distinction would make much of a difference in the grand scheme of things. For instance, in response to the FDA’s statement that there is no scientific basis for studying the effects of ivermectin on COVID-19, Kentucky senator Rand Paul stated that it was really a “hatred for Trump” that stood in the way of investigating the drug, and not, say, the fact that the preprint study did not stand up to scientific scrutiny. It seems unlikely that, for someone like Paul, the difference between preprints and peer-reviewed science is a relevant one when it comes to pushing a political narrative.

Nevertheless, a better understanding of the difference between preprints and peer-reviewed science could still be beneficial when helping people make decisions about what information to believe. While some preprints certainly do go on to pass peer review, if the only basis that one has for some seemingly implausible medical claims is a preprint study, it is worth approaching those claims with skepticism.

Aesop and the Unvaccinated: On Messaging and Rationality

cartoon image of scorpion on frog's back

Aesop once shared a fable about a scorpion and a frog. The scorpion asked a frog to ferry him across a pond. The frog was reluctant because he feared the scorpion’s sting. But the scorpion appealed to the frog’s intellect and pointed out that if he did sting the frog, the scorpion would surely drown as well. So, the frog agreed to the request. But, as expected, about halfway across the pond, the frog felt an awful pain and, before they both died, asked the scorpion why. The scorpion replied that he really couldn’t help it, saying, “it’s in my nature to sting.”

Why did the frog make that irrational decision, even though he knew better? Fables typically have a moral for us to learn, and this one is no different: make rational decisions. Unfortunately, we make irrational decisions all of the time, even if, in the animal kingdom, we are known as the rational ones.

As of this writing, about 50% of the U.S. population is vaccinated. Since it is estimated that between 70% and 90% of the population will need to be vaccinated against the COVID-19 virus to reach herd immunity, we have a long way to go. But the vaccination rate overall has slowed significantly. We watched the vaccination rate begin to plateau in late June and early July, at about the same time that the more deadly Delta variant began to ravage the unvaccinated. Now, with new cases rising each day across the country, one wonders why anyone would put off getting the vaccine.

Explanations for this phenomenon abound; some believe that vaccine hesitancy is to blame. Early on in the rollout of the three major vaccines available in the U.S., many were “hesitant” because they wanted more information about the vaccines. Were the vaccines safe? If so, like most medications, they probably were not safe for everyone, so for whom were the vaccines not safe? Where would people go to get the vaccines? What costs would be involved? These are rational questions the population was asking; they may have been gathering facts to make rational decisions. Or were they?

Humans aren’t really known for our ability to be consistent when it comes to making rational decisions. Some of those same people who get flu shots every fall and make sure their children receive needed vaccinations as infants and again prior to the start of school still don’t want to take the COVID vaccine. All despite the fact that approximately 99% of COVID deaths in America occur among the unvaccinated. It seems irrational not to avail oneself of this life-saving intervention.

Even some government officials — in those areas where the vaccination rate is low and the spread of the variant is high — are growing more outspoken about their constituents’ health decisions. Senate minority leader Mitch McConnell (R-KY) has publicly reiterated his call for those who can be vaccinated to do so. (His state, Kentucky, has a lower-than-average vaccination rate.) The Governor of Alabama, Kay Ivey, recently said that this is now an epidemic of the unvaccinated in her state, further stating that you just can’t teach “common sense.”

But alongside these pleas are plenty of name-calling, finger-pointing, and blaming — all of which may be smokescreens for the fact that we don’t really know how to message the vaccine’s appeal to remaining holdouts. We continue to assume that humans are consistent in making rational choices, and when we believe they have not done so, we have a tendency to throw up our hands. We think that stupid decisions are made by stupid people. The truth, however, is that we aren’t consistent in making rational choices; irrationality abounds, and it has nothing to do with stupidity. The same people who buy lottery tickets also buy insurance. Why? Cognitive science and the felicific calculus of Jeremy Bentham may both give us a peek into why we make decisions as we do, whether they are rational ones or not.

In the 18th century, Bentham formulated the “felicific calculus,” which stated that an event can be assigned a value (typically numeric) as to its utility or worth. That worth was measured in terms of the amount of happiness or pleasure the event would bring people; the more happiness, the better the decision that caused it, and the more rational it would be seen to be. This mathematical algorithm measured pleasure or pain in terms of several facets; among them were the pleasure or pain’s intensity, its duration, the probability of its occurrence (and recurrence), and the number of people affected. While being mathematically sound, philosophically appealing in many ways, and rational, for most day-to-day decisions the calculus was impractical. Adapting a thought experiment originally posed by cognitive scientist and mathematician Amos Tversky, however, may help us understand from a cognitive perspective why people are so inconsistent when making decisions.

Example 1. Let’s say that your local health department has projected that 600 people will get the Delta variant of COVID-19 in your hometown of 6,000 people.

There is a proposed treatment, A, and if applied it will save 200 people. 

There is another proposed treatment, B, and if applied, there is 1 chance in 3 that 600 people will be saved, and 2 chances in 3 that no one will be saved.

Which treatment would you choose?

When presented with the original problem, most people chose treatment A, where it is certain that 200 people will live.

Example 2. Now, let’s say that the health department again predicts that 600 people in your hometown of 6,000 will get the Delta variant of COVID-19.

There are 2 treatments, A and B.

If treatment A is applied, 400 people will die.

If treatment B is applied, there are 2 chances in 3 that all 600 will be lost, and 1 chance in 3 that no one will be lost.

Which treatment would you choose?

When presented with the original problem, most people chose treatment B.

Notice, however, that the expected outcome is the same in each case: 200 people survive. Despite this, in case one, treatment A was chosen as the better alternative, while in case two, treatment B was chosen. Why, when the probabilities and outcomes are the same, did A get chosen one time and B the other? It’s the way the cases are presented, or framed. In the first scenario, the options are presented in terms of lives saved (gains), and in the second they are framed in terms of lives lost (losses). When outcomes are framed as gains, we prefer the sure thing; when they are framed as losses, we take the gamble.
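The equivalence is easy to verify. Here is the expected-value arithmetic, sketched in Python using the numbers from the two examples above:

```python
# Expected survivors out of 600, using the numbers from the examples above.
ex1_A = 200                          # Example 1, treatment A: 200 saved for certain
ex1_B = (1/3) * 600 + (2/3) * 0      # Example 1, treatment B: 200 saved on average

ex2_A = 600 - 400                    # Example 2, treatment A: 400 die, so 200 survive
ex2_B = (2/3) * 0 + (1/3) * 600      # Example 2, treatment B: 200 survive on average

print(ex1_A, ex1_B, ex2_A, ex2_B)    # 200 200.0 200 200.0
```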

Currently, public messaging regarding vaccinations focuses on lives lost rather than the number of lives saved. If we reframe messaging to focus on lives saved (gains) instead of lives lost (losses), the application of Tversky’s thought experiment might get us over the hump and on our way to achieving herd immunity. The felicific calculus of Bentham applies as well; perhaps a mathematical algorithm makes more sense to us homo sapiens in this case. Think of the number of people who would experience happiness and pleasure instead of pain over a long period of time, plus the freedom from worry that the Delta variant could re-infect us. Correctly framing the message seems to be one effective and scientific way to help people manage the inherent irrationality that comes with being human.
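As a rough illustration of what such an algorithm might look like, here is a minimal sketch of a Bentham-style score in Python. The facets follow Bentham’s list above (intensity, duration, probability, and the number of people affected), but the numeric scales and the vaccination example are invented for illustration; they are not part of Bentham’s formulation.

```python
# A minimal, illustrative sketch of a Bentham-style felicific score.
# The facets follow Bentham's list; the scales and example numbers are invented.

def felicific_score(intensity, duration, probability, extent):
    """Score one anticipated pleasure (positive) or pain (negative).

    intensity:   strength of the feeling (negative for pain)
    duration:    how long it lasts, e.g., in days
    probability: chance it actually occurs, from 0 to 1
    extent:      number of people affected
    """
    return intensity * duration * probability * extent

def total_utility(consequences):
    """Sum the scores of an action's anticipated consequences."""
    return sum(felicific_score(*c) for c in consequences)

# A hypothetical framing of the vaccination decision, with made-up numbers:
vaccinate = [(-2, 1, 0.9, 1),     # brief soreness: mild, short, likely
             (5, 365, 0.95, 1)]   # protection and peace of mind: lasting, near certain
decline   = [(-9, 14, 0.05, 1)]   # severe illness: intense, two weeks, small chance

print(total_utility(vaccinate) > total_utility(decline))  # True, on these numbers
```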

What Morgellons Disease Teaches Us about Empathy

photograph of hand lined with ants

For better or for worse, COVID-19 has made conditions ripe for hypochondria. Recent studies show a growing aversion to contagion, even as critics like Derek Thompson decry what he calls “the theater of hygiene,” the soothing but performative (and mostly ineffectual) obsession with sanitizing every surface we touch. Most are, not unjustifiably, terrified of contracting real diseases, but for nearly two decades, a small fraction of Americans have battled an unreal condition with just as much fervor and anxiety as the contemporary hypochondriac. This affliction is known as Morgellons, and it provides a fascinating study in the limits of empathy, epistemology, and modern medical science. How do you treat an illness that does not exist, and is it even ethical to provide treatment, knowing it might entrench your patient further in their delusion?

Those who suffer from Morgellons report a nebulous cluster of symptoms, but the overarching theme is invasion. They describe (and document extensively, often obsessively) colorful fibers and flecks of crystal sprouting from their skin. Others report the sensation of insects or unidentifiable parasites crawling through their body, and some hunt for mysterious lesions only visible beneath a microscope. All of these symptoms are accompanied by extreme emotional distress, which is only exacerbated by the skepticism and even derision of medical professionals.

In 2001, stay-at-home mother Mary Leitao noticed strange growths on her toddler’s mouth. She initially turned to medical professionals for answers, but they couldn’t find anything wrong with the boy, and one eventually suggested that she might be suffering from Munchausen’s-by-proxy. She rejected this diagnosis and began trawling through historical sources for anything that resembled her son’s condition. Leitao eventually stumbled across 17th-century English doctor and polymath Sir Thomas Browne, who offhandedly describes in a letter to a friend “that Endemial Distemper of little Children in Languedock, called the Morgellons, wherein they critically break out with harsh hairs on their Backs, which takes off the unquiet Symptoms of the Disease, and delivers them from Coughs and Convulsions.” Leitao published a book on her experiences in 2002, and others who suffered from a similar condition were brought together for the first time. This burgeoning community found a home in online forums and chat rooms. In 2006, the Charles E. Holman Foundation, which describes itself as a “grassroots activist organization that supports research, education, diagnosis, and treatment of Morgellons disease,” began hosting in-person conferences for Morgies, as some who suffer from Morgellons affectionately call themselves. Joni Mitchell is perhaps the most famous of the afflicted, but it’s difficult to say exactly how many people have this condition.

No peer-reviewed study has been able to conclusively prove the disease is real. When fibers are analyzed, they’re found to be from sweaters and t-shirts. A brief 2015 essay on the treatment of delusional parasitosis published by the British Medical Journal notes that Morgellons usually appears at the nexus between mental illness, substance abuse, and other underlying neurological disorders. But that doesn’t necessarily mean the ailment isn’t “real.” When we call a disease real, we mean that it has an identifiable biological cause, usually a parasite or bacterium, something that will show up in blood tests and X-rays. Mental illness is far more difficult to prove than a parasitic infestation, but no less real for that.

In a 2010 book on culturally-specific mental illness, Ethan Watters interviewed medical anthropologist Janis Hunter Jenkins, who explained to him that “a culture provides its members with an available repertoire of affective and behavioural responses to the human condition, including illness.” For example, Victorian women suffering from “female hysteria” exhibited symptoms like fainting, increased sexual desire, and anxiety because those symptoms indicated distress in a way that made their pain legible to culturally-legitimated medical institutions. This does not mean mental illness is a conscious performance that we can stop at any time; it’s more of a cipherous language that the unconscious mind uses to outwardly manifest distress.

What suffering does Morgellons make manifest? We might say that the condition indicates a fear of losing bodily autonomy, or a perceived porous boundary between self and other. Those who experience substance abuse often feel like their body is not their own, which further solidifies the link between Morgellons and addiction. Of course, one can interpret these fibers and crystals to death, and this kind of analysis can only take us so far; it may not be helpful to those actually suffering. Regardless of what they mean, the emergence of strange foreign objects from the skin is often experienced as a relief. In her deeply empathetic essay on Morgellons, writer Leslie Jamison explains that in Sir Thomas Browne’s account, outward signs of Morgellons were a boon to the afflicted. “Physical symptoms,” Jamison says, “can offer their own form of relief—they make suffering visible.” Morgellons provides physical proof that something is wrong without forcing the afflicted to view themselves as mentally ill, which is perhaps why some cling so tenaciously to the label.

Medical literature has attempted to grapple with this deeply-rooted sense of identification. The 2015 essay from the British Medical Journal recommends recruiting the patient’s friends and family to create a treatment plan. It also advises doctors not to validate or completely dispel their patient’s delusion, and provides brief scripts that accomplish that end. In short, they must “acknowledge that the patient has the right to have a different opinion to you, but also that he or she shall acknowledge that you have the same right.” This essay makes evident the difficulties doctors face when they encounter Morgellons, but its emphasis on empathy is important to highlight.

In many ways, the story of Morgellons runs parallel to the rise of the anti-vaccination movement. Both groups were spearheaded by mothers with a deep distrust of medical professionals, both have fostered a sense of community and shared identity amongst the afflicted, and both legitimate themselves through faux-scientific conferences. The issue of bodily autonomy is at the heart of each movement, as well as an epistemic challenge to medical science. And of course, both movements have attracted charlatans and snake-oil salesmen, looking to make a quick buck off expensive magnetic bracelets and other high-tech panaceas. While the anti-vax movement is by far the more visible and dangerous of the two, both movements test the limits of our empathy. We can acknowledge that people (especially from minority communities, who have historically been mistreated by the medical establishment) have good reason to mistrust doctors, and try to acknowledge their pain while also embracing medical science. Ultimately, the story of Morgellons may provide a valuable roadmap for doctors attempting to combat vaccine misinformation.

As Jamison says, Morgellons disease forces us to ask “what kinds of reality are considered prerequisites for compassion. It’s about this strange sympathetic limbo: Is it wrong to speak of empathy when you trust the fact of suffering but not the source?” These are worthwhile questions for those within and without the medical profession, as we all inevitably bump up against other realities that differ from our own.

Educating Professionals

photograph of graduation caps thrown in the air

Universities around the country have, in the last century, shifted their focus from a traditional liberal arts curriculum to an increasingly “practical” or vocational form of education. There is a popular conception that the purpose of higher education is some form of job-training. A cursory Google search will produce a number of articles asking whether college is a “sound investment,” or whether college graduates make more money than their peers who elect to forego college for work. Virtually every one of these articles defines the worth of a college degree in purely economic terms. There is little room to deny that, in our modern liberal democracy, making money is a practical necessity. Yet, I think there is something deeply confused about the attempt to reduce the value of education generally — and higher education specifically — to the economic gains that come from education. I have argued elsewhere that conflating the so-called “practicality” of education with the “vocationality” of education is a conceptual mistake, so I will not rehearse those arguments here.

Instead, I intend to discuss a related problem present in the ways we conceive of the nature, purpose, and value of higher education. Following the 2008 recession, there was a marked shift in students’ and educators’ priorities toward STEM (science, technology, engineering, and mathematics) fields. People seem to see STEM fields as a means to a professional end — scientists, engineers, and folks in tech tend to make money, and that’s something people in a precarious economic environment want. We can see the need for economic stability reflected in every aspect of the university, including many college and university mission and vision statements.

It is not difficult to see the ways in which gaining technical proficiency in biology or engineering, for example, will prepare students for a career. However, what some students and educators fail to recognize is that even areas within sciences that most directly correlate to in-demand jobs need the humanities. In preparing a guest lecture on engineering ethics, I looked into the nature of professional ethics generally. This led me to think about the nature of a profession and why it is important that certain professions have ethical guidelines by which practitioners must abide. The word “profession” is derived from the late Latin professus, which roughly means “to profess one’s vows.” One might wonder what a profession of one’s vows has to do with a “profession” as we consider it today. The answer is surprisingly straightforward — in the monastic tradition, monks were asked to make a public declaration of their commitment to living a more just, ethical life in light of their training. Accordingly, they would profess their commitment to living according to this higher standard. Such dedications bled over into highly skilled and highly specialized trades — as jobs require increasingly specific training, it becomes increasingly important that the people who take on these skilled positions profess to self-govern according to higher standards, if only because the number of people who have the knowledge to provide a check on them has become vanishingly small. There can be little doubt that technicians at every level need to behave ethically, but with a larger peer group, there are more individuals, and more opportunities to recognize and correct potential abuse. As William F. May powerfully states, “if knowledge is power, then ignorance is powerlessness. Although it is possible to devise structures that limit the opportunities for abuse of specialized knowledge, ultimately one needs to cultivate virtue in those who wield that relatively inaccessible power.”

It is not difficult to see how we can take this idea of professionalism as tied with virtue and apply it to higher education today. Let’s take the example of our engineering students. Within the field of engineering, there are different fields of sub-specialization, the result of which is a (relatively) small number of professional peers — those with the specialized knowledge to recognize and correct potential problems before they become catastrophic. The fact that students in a senior-level engineering class already have narrowly defined expertise that differs from peers in the same class highlights the need for a curriculum that instills ethics early on.

This problem becomes more acute as students graduate and enter the profession. As the number of engineers who have the specific knowledge necessary to evaluate the choices made by any given engineer is so small, we must rely on the engineers themselves to abide by a higher standard — especially in light of the public-facing nature of the work engineers undertake. Engineering is a profession, and as such we need engineers who profess to, and actually do, live and work according to a higher standard. Such a profession requires more than mere compliance with a code of conduct. As Michael Pritchard notes, “professional codes of ethics are not static documents. But even if a code is relatively unchanging, it is not simply an algorithm for decision making. It must be applied – which calls for independent judgment on the part of the professional.” In light of this understanding of the nature and demands of professionalism, I propose that universities insist upon an increased emphasis on the humanities — those fields whose value is less directly connected to vocational outcomes and more easily connected to the development of character, personhood, and civic responsibility. Humanistic fields are just as valuable as more vocationally-directed fields, even to those vocationally-directed fields themselves.

According to a recent report from the Bureau of Labor Statistics, many institutions were ill-prepared to handle the influx of people looking for STEM degrees following the 2008 recession. The BLS additionally cautions that the pandemic is likely to cause another STEM surge, offering us another opportunity to shape industries and mold the next wave of future professionals. In considering how to do this, and how to do it well, it should be clear from what I’ve said that we need to emphasize the connections between the humanities and STEM fields. While we often like to think of science as purely descriptive and divorced from considerations of value (moral, aesthetic, or otherwise), that is simply not an accurate, or at any rate a complete, picture. The ultimate aims of science are, I suggest, intrinsically value-laden. I don’t have room here to defend this claim, but for a careful discussion, see Heather Douglas’ Science, Policy, and the Value-Free Ideal (especially chapters 4, 5, and 8). For now, let’s return to our example of engineering students. In my discussions with students, many report that they went into engineering with high-minded goals about improving the quality of life for those around them. They see the end for the sake of which they pursue STEM not as mere financial stability, but as the betterment of human lives; yet most report that they have had little or no formal education in ethics or value theory. The narrow scope of their education illustrates that colleges and universities are not doing enough to truly prepare students for the non-technical aspects of their chosen profession. The solution, I propose, is to return to a more well-rounded form of education, one that emphasizes the humanities and integrates humanistic education with STEM fields.

We do not need technically proficient but ethically unimaginative or inflexible workers to fill the needs of our consumer economy; rather, we need professionals understood in the broad sense I’ve described. We need to cultivate and encourage our students to commit to living according to the highest standards of moral virtue. As Rena Beatrice Goldstein argues,

“Virtue enables a person to navigate challenging human encounters in many spheres, and virtue curricula can help students learn to navigate well by practicing virtue in different environments. It takes time to develop virtues like open-mindedness. Indeed, being open-minded with strangers in the civic domain may require different motivations than being open-minded with one’s peers, family, or friends. Practicing virtues in a variety of domains can help students develop the right motivations, which may be different in different domains.”

I propose that we see the next STEM push as an opportunity to re-emphasize our commitment to all of the core values of higher education: personal growth, civic responsibility, and professional excellence. When we consider “professional excellence,” we must build into that concept a healthy understanding of, and respect for, the stable virtues cultivated through sustained humanistic study.

The Ethics of Self-Citation

image of man in top hat on pedestal with "EGO" sash

In early 2021, the Swiss Academies of Arts and Sciences (SAAS) published an updated set of standards for academic inquiry; among other things, this new “Code of Conduct for Scientific Integrity” aims to encourage high expectations for academic excellence and to “help build a robust culture of scientific integrity that will stand the test of time.” Notably, whereas the Code’s previous version (published in 2008) treated “academic misconduct” simply as a practice based on spreading deceptive misinformation (either intentionally or due to negligence), the new document expands that definition to include a variety of bad habits in academia.

In addition to falsifying or misrepresenting one’s data — including various forms of plagiarism (one of the most familiar academic sins) — the following is a partial list of practices the SAAS will now also consider “academic misconduct”:

  • Failing to adequately consider the expert opinions and theories that make up the current body of knowledge and making incorrect or disparaging statements about divergent opinions and theories;
  • Establishing or supporting journals or platforms lacking proper quality standards;
  • Unjustified and/or selective citation or self-citation;
  • Failing to consider and accept possible harm and risks in connection with research work; and
  • Enabling funders and sponsors to influence the independence of the research methodology or the reporting of research findings.

Going forward, if Swiss academics perform or publish research failing to uphold these standards, they might well find themselves sanctioned or otherwise punished.

To some, these guidelines might seem odd: why, for example, would a researcher attempting to write an academic article not “adequately consider the expert opinions and theories that make up the current body of knowledge” on the relevant topic? Put differently: why would someone seek to contribute to “the current body of knowledge” without knowing that body’s shape?

As Katerina Guba, the director of the Center for Institutional Analysis of Science and Education at the European University at St. Petersburg, explains, “Today, scholars have to publish much more than they did to get an academic position. Intense competition leads to cutting ethical corners apart from the three ‘cardinal sins’ of research conduct — falsification, fabrication and plagiarism.” Given the painful state of the academic job market, researchers can easily find incentives to pad their CVs and puff up their resumes in an attempt to save time and make themselves look better than their peers vying for interviews.

So, let’s talk about self-citation.

In general, self-citation is simply the practice of an academic who cites their own work in later publications they produce. Clearly, this is not necessarily ethically problematic: indeed, in many cases, it might well be required for a researcher to cite themselves in order to be clear about the source of their data, the grounding of their argument, the development of the relevant dialectical exchange, or many other potential reasons — and the SAAS recognizes this. Notice that the new Code warns against “unjustified and/or selective citation or self-citation” — so, when is self-citation unjustified and/or unethical?

Suppose that Moe is applying for a job and lists a series of impressive-sounding awards on his resume; when the hiring manager double-checks Moe’s references, she confirms that Moe did indeed receive the awards of which he boasts. But the manager also learns that one of Moe’s responsibilities at his previous job was selecting the winners of the awards in question — that is to say, Moe gave the awards to himself.

The hiring manager might be suspicious of at least two possibilities regarding Moe’s awards:

  1. It might be the case that Moe didn’t actually deserve the awards and abused his position as “award-giver” to personally profit, or
  2. It might be the case that Moe could have deserved the awards, but ignored other deserving (potentially more-deserving) candidates for the awards that he gave to himself.

Because citation metrics of publications are now a prized commodity among academics, self-citation practices can raise precisely the same worries. Consider the h-index: the largest number h such that h of a researcher’s publications have each been cited at least h times in other publications. In short, the h-index claims to offer a handily quantified measurement of how “influential” someone has been on their academic field.

But, as C. Thi Nguyen has pointed out, these sorts of quantifications not only reduce complicated social phenomena (like “influence”) to thinned-out oversimplifications, but they can be gamified or otherwise manipulated by clever agents who know how to play the game in just the right way. Herein lies one of the problems of self-citations: an unscrupulous academic can distort their own h-index scores (and other such metrics) to make them look artificially larger (and more impressive) by intentionally “awarding themselves” with citations, just as Moe granted himself awards in the first scenario above.
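To make the worry concrete, here is a minimal sketch (my own illustration with hypothetical citation counts, nothing drawn from Nguyen or the SAAS) of how the h-index is computed and how a single well-placed self-citation can inflate it:

    # Minimal sketch of the h-index; citation counts are hypothetical.
    def h_index(citation_counts):
        """Largest h such that h papers have at least h citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank  # the top rank papers all have at least rank citations
            else:
                break
        return h

    print(h_index([9, 7, 6, 5, 4, 3]))  # 4

    # One self-citation to the fifth paper (4 -> 5 citations) lifts the
    # researcher's h-index from 4 to 5.
    print(h_index([9, 7, 6, 5, 5, 3]))  # 5

Nothing in the metric itself distinguishes the self-awarded citation from an earned one, which is precisely what makes the score so easy to game.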

But, perhaps even more problematic than this, self-citations limit the scope of a researcher’s attention when they are purporting to contribute to the wider academic conversation. Suppose that I’m writing an article about some topic and, rather than review the latest literature on the subject, I instead just cite my own articles from several years (or several decades) ago: depending on the topic, it could easily be the case that I am missing important arguments, observations, or data that have been made in the interim. Just like Moe in the second scenario, I would have ignored other worthy candidates for citation to instead give the attention to myself — and, in this case, the quality of my new article would suffer as a result.

For example, consider a forthcoming article in the Monash Bioethics Review titled “Can ‘Eugenics’ Be Defended?” Co-written by a panel of six authors, many of whom are well-known in their various fields, the 8-page article’s reference list includes a total of 34 citations — 14 of these references (41%) were authored by one or more of the article’s six contributors (and 5 of them are from the lead author, making him the most-cited researcher on the reference list). While the argument of this particular publication is indeed controversial, my present concern is restricted to the article’s form, rather than its contentious content: the exhibited preference to self-cite seems to have led the authors to ignore almost all bioethicists or philosophers of disability who disagree with their (again, extremely controversial) thesis (save for one reference to an interlocutor of this new publication and one citation of a magazine article). While this new piece repeatedly cites questions that Peter Singer (one of the six co-authors) asked in the early 2000s, it fails to cite any philosophers who have spent several decades providing answers to those very questions, thereby reducing the possible value of its purported contributions to the academic discourse. Indeed, self-citation is not the only dysgenic element of this particular publication, but it is one trait that attentive authors should wish to cull from the herd of academic bad habits.

Overall, recent years have seen increased interest among academics in the sociological features of their disciplinary metrics, with several studies and reports being issued about the nature and practice of self-citation (notably, male academics — or at least those without “short, disrupted, or diverse careers” — seem to be far more likely to self-cite, as are those under pressure to meet certain quantified productivity expectations). In response, some have proposed additional metrics to specifically track self-citations, alternate metrics intended to be more balanced, and upending the culture of “curated scorekeeping” altogether. The SAAS’s move to specifically highlight self-citation’s potential as professional malpractice is another attempt to limit self-serving habits that can threaten the credibility of academic claims to knowledge writ large.

Ultimately, much like the increased notice that “p-hacking” has recently received in wider popular culture — and indeed, the similar story we can tell about at least some elements of “fake news” development online — it might be time to have a similarly widespread conversation about how people should and should not use citations.

Ethical Considerations in the Lab-Leak Theory

3D image of Covid-19 virus cells

President Biden announced recently that he would be launching an investigation into the origin of the coronavirus. While the standard narrative over much of the course of the pandemic has been that it was initially transmitted to humans via contact with animals in Wuhan, China – thought by many to be bats, although there have also been theories that pangolins could have been involved – a second possibility has also been entertained, namely that the virus originated in a virology lab. Indeed, this was one of the favorite theories of Donald Trump, who, on several occasions, simply stated that the virus originated in a lab, although he failed to provide any evidence for his assertions. The so-called “lab-leak” theory soon took on the status of a conspiracy theory: it was explicitly rejected by numerous scientists, and its association with Trump and other members of the alt-right greatly hindered any credibility that the theory may have had within the scientific community. With Trump out of office, however, questions about the plausibility of the theory have resurfaced, and there has been enough pressure for Biden to open the investigation.

Should Biden have opened his investigation into the lab-leak theory? While it might seem like a question that can be answered by considering the science – i.e., by looking at whether there is good evidence for the theory, whether expert scientific opinion considers it a plausible hypothesis, etc. – there are other ethical factors that we should consider, as well.

Here’s one sense in which it seems that such an investigation is worthwhile: it is always worthwhile to try to learn the truth. Now, there are a lot of truths that we might think really don’t add that much value to our lives – I can spend a lot of time counting the number of blades of grass on my lawn, for example, and at the end of a very long day will possess a shiny new true belief, but hardly anyone would think that I had spent my time wisely. The COVID-19 pandemic, however, is of substantial importance, and so learning about where it came from may seem like an investigation that is worth pursuing for its own sake.

At the same time, there are also potential practical benefits to learning the truth of the matter about the origin of COVID-19. The pandemic has raised many questions about how we should react to the next one, and what we can do to prevent it. Making sure that we have the correct theory of the origin of the virus would then no doubt be useful when thinking about responses to future outbreaks. So here are two points in favor of conducting the investigation: we can learn the truth of something important, and we might be able to become better prepared for similar events in the future.

However, there are also some potential drawbacks. Specifically, there have been concerns that, especially during the previous administration, the impetus for discussing the lab-leak theory was not an attempt to make sure that one’s science was correct, but to find a scapegoat. The theory comes in two different forms. According to one version, the virus was intentionally released from the lab, for whatever reason. If this were to be the case, then there would be a definitive place to direct one’s blame. This version of the theory, however, falls predominantly within the realm of conspiracy theory. The other, more popular version states that while the virus originated in a lab, its transmission into the surrounding population was an accident. Even if this is the case, though, it would seem to represent an act of negligence, and thus the lab, the scientists, and the government would be blameworthy for it.

One of the early criticisms of Trump’s endorsement of the lab-leak theory was that, given that it was driven by the search for someone to blame rather than by the evidence, he was fanning the flames of anti-Asian racism. Indeed, as Trump insisted on the truth of the theory without evidence and consistently referred to the coronavirus as the “China virus,” incidents of anti-Asian racism increased over the course of the pandemic in the U.S.

Here, then, is a concern with Biden’s investigation: opening an official investigation into the lab-leak theory gives legitimacy to a view that has been considered by many to be little more than a conspiracy theory, which may again result in an increase in incidents of anti-Asian racism. Given the potential ethically problematic results of the inquiry, we can then ask: is it worth it?

What is perhaps encouraging is that Biden’s investigation seems to be motivated more by dissent within parts of the scientific community than by the political search for a scapegoat. We might still be concerned, however, that people will not be good at distinguishing between the versions of the theory under consideration. As noted above, there are two versions of the lab-leak theory, one more distinctly conspiratorial than the other. By giving credence to the view that the virus accidentally leaked from the lab, however, the investigation may be interpreted as giving more credence to the conspiratorial version as well.

This is not to say that the investigation is a bad idea. Instead, it should remind us that inquiry is never conducted in a vacuum, and that which questions are worth investigating may depend not solely on the evidence, but on the ethical consequences of doing so.

Climate Services, Public Policy, and the Colorado

photograph of Colorado River landscape

What does the Colorado River Compact of 1922 have to do with ethical issues in the philosophy of science? Democracy, that’s what! This week The Colorado Sun reported that the Center for Colorado River Studies issued a white paper urging reform to river management in light of climate change to make the Colorado River basin more sustainable. They argue that the Upper Colorado River Commission’s projections for water use are inflated, and that this makes planning the management of the basin more difficult given the impact of climate change.

Under a 1922 agreement among seven U.S. states, the rights to use water from the Colorado River basin are divided into an upper division — Colorado, New Mexico, Utah, and Wyoming — and a lower division — Nevada, Arizona, and California. Each division was apportioned a set amount with the expectation being that the upper division would take longer to develop than the lower division. The white paper charges that the UCRC is relying on inflated water usage projections for the upper division despite demand for development in the upper basin being flat for three decades. In reality, however, the supply of water is far lower than projected in 1922, and climate change has exacerbated the issue. In fact, the supply has shrunk so much that upper basin states have taken efforts to reduce water consumption so that they do not violate the agreement with lower basin states. As the Sun reported, “If it appears contradictory that the upper basin is looking at how to reduce water use while at the same time clinging to a plan for more future water use, that’s because it is.”

To see how this illustrates an ethical problem in philosophy of science, we need to first examine inductive risk. While it is a common enough view that science has nothing to do with values, a consensus has formed among philosophers of science in the past decade that not only does science use values, but that this is a good thing. Science never deals with certainty but with inductive generalizations based on statistical modelling. Because one can never be certain, one can always be wrong. Inductive risk involves considering the ethical harms that would follow should one’s conclusions turn out to be wrong. For example, if there is a 90% chance that it will not rain, you may be inclined to wear your expensive new shoes. But if the unlikely 10% case obtains, your expensive new shoes will get ruined in the rain. In a case like this, you need to evaluate two factors at the same time: how important are the consequences of being wrong, and, in light of this judgment, how confident do you need to be in your conclusion? If your shoes cost $1000 and ruin very easily, you may want a level of confidence close to 95% or 99% before leaving home. On the other hand, if your shoes are cheap and easy to replace, you may be happy to go outside with a 50% chance of rain.
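The underlying trade-off can be put as a simple expected-cost calculation. As a rough sketch (my own illustration with made-up prices, not a formula from the philosophers discussed here): acting on a conclusion is rational only when your confidence exceeds cost / (cost + benefit), so the costlier the error, the higher the confidence you should demand:

    # Rough sketch of the inductive-risk trade-off; prices are made up.
    def required_confidence(cost_if_wrong, benefit_if_right):
        """Smallest probability p of being right at which acting has
        non-negative expected value: p * benefit - (1 - p) * cost >= 0."""
        return cost_if_wrong / (cost_if_wrong + benefit_if_right)

    # Cheap, easily replaced shoes: a coin flip's worth of confidence will do.
    print(required_confidence(cost_if_wrong=20, benefit_if_right=20))    # 0.5

    # $1000 shoes that ruin easily: demand roughly 98% confidence in "no rain".
    print(required_confidence(cost_if_wrong=1000, benefit_if_right=20))  # ~0.98

The same arithmetic scales up to policy-relevant science: the graver the consequences of a wrong conclusion, the more evidence we ought to require before acting on it.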

When dealing with what philosophers call socially-relevant or policy-relevant science, the same inductive risk concerns arise. In an inductive risk situation, we need to make value judgments about how important the consequences of being wrong are, and how accurate we thus ought to be. But what values should be used? According to many philosophers of science, when dealing with socially-relevant science, only democratically-endorsed values are legitimate. The reason for this is straightforward: if values are going to be used that affect public policy-making, then the people should select those values; leaving the choice to scientists or other private interests would give them undue power and influence in policy-making.

This brings us back to the Colorado River. A new area of climate science known as “climate services” aims to make climate data more usable for social decision-making by ensuring that the needs of users are central to the collection and analysis of data. Typically, such climate data is not organized to suit the needs of stakeholders and decision-makers. For example, Colorado River Basin managers employed climate services from state and national agencies to create model-based projections of Lake Mead’s ability to supply water. In a recent paper, Wendy Parker and Greg Lusk have explored how inductive risk concerns allow for the use of values in the “co-production” of climate services jointly between providers and users. This means that insofar as inductive risk is a concern, the values of the user can affect everything from model creation to the selection of data to the ultimate conclusions reached. Thus, if a group wished to develop land in the Colorado basin and sought the use of climate services, then the values of that group could affect the information and data that is used and what policies take effect.

According to Greg Lusk, however, this is potentially a problem: if any user who pays for climate services can use their own values to shape scientifically-informed policy-making, then the requirement that those values be democratically endorsed is violated. He notes:

“Users could refer to anyone, including government agencies, public interest groups, private industry, or political parties …. The aims or values of these groups are not typically established through democratic mechanisms that secure representative participation and are unlikely to be indicative of the general public’s desires. Yet, the information that climate service providers supplies to users is typically designed to be useful for social and political decision making.”

It is worth noting, for example, that the white paper issued by the Center for Colorado River Studies was funded by the Walton Family Foundation, the USGS Southwest Climate Adaptation Science Center, the Utah Water Research Laboratory, and various other private donors and grants. This report could affect policy makers’ decisions. None of this suggests that the research is biased or bad, but to whatever extent values can influence such reports, and such reports affect policy-making, we should question whose values are playing what roles in information-based policy-making.

In other words, there is an ethical dilemma. On the one hand, climate services can offer major advantages to help users of all kinds prepare for, mitigate, adapt to, or plan development in light of climate change. On the other hand, scientific material designed to be relevant for policy-making, yet heavily influenced by non-democratically endorsable values, can be hugely influential and can affect what we consider to be good data-driven policy. As Lusk notes,

“According to the democratic view, then, the employment of users’ values in climate services would often be illegitimate, and scientists should ignore those values in favor of democratically endorsed ones, to ensure users do not have undue influence over social decision making.”

Of course, even if we accept the democratic view, the problem of defining what “democratically endorsable” means remains. As the events of the past year remind us, democracy is about more than just voting for a representative. In an age of polarization where the values endorsed may swing radically every four years, or where there is disagreement among various elected governments, deciding which values are endorsable becomes extremely difficult, and ensuring that they are used becomes even more impracticable. Thus, deciding what place democracy has in science remains an important moral question for philosophers of science, but even more so for the public.

The Morality of the Arts vs. Science Distinction

image of child architect with hard hat standing in front of sketch of city skyline

If one pursues post-secondary education, is it better to study the arts or to focus on the sciences? Given the career opportunities and prestige attached to the sciences, the choice to pursue the arts instead has become a common source of mockery. But what makes the arts different from the sciences? And do how and why we make such distinctions have ethical ramifications?

What is the difference between the liberal arts and the sciences? The concept of “the arts” stretches back to antiquity, when ‘art’ designated a human skill. These skills were used to make things that are artificial (human-made): artifacts. Later, the concept of the liberal arts was used to designate the kind of education required for a free citizen (the term “liberal” designating freedom rather than a political ideology) to take part in civil life. Today, the arts may refer to fine arts (like painting or music) as well as liberal arts such as various humanities (philosophy, history, literature, linguistics, etc.) and social sciences (like sociology, economics, or political science). These are now held in contrast to the STEM fields (science, technology, engineering, and mathematics).

The distinction made between the arts and the sciences takes on a moral character when the conversation drifts towards what kinds of education we think are important for meeting the needs of modern society. The distinction goes beyond merely what governments or universities claim the difference is, for it is also a distinction made by potential students, parents, taxpayers, employers, and society at large. How does society make that distinction? A quick internet search for the relevant distinctions suggests a tendency to emphasize the objective nature of science and the subjective nature of the arts. Science is about finding truth about the world, whereas the arts focus on finding subjective representations according to cultural and historical influences. The sciences are technical, precise, and quantitative. The arts are qualitative, vague, and focus less on right or wrong answers, and thus are thought to lack the rigor of the sciences.

These kinds of sharp distinctions reinforce the idea that the liberal arts are not really worth pursuing, and that higher education should be about gaining the skills needed for the workforce and securing high-paying jobs. To add to this, the distinction has been a flashpoint of an ongoing culture war, as the large number of liberal arts memes and critical comments on the internet attests. The result has been severe cuts in liberal arts education, the elimination of staff and services, and even the elimination of majors. To some this may be progress. If the liberal arts and humanities are subjective, if there is little objective truth to be discovered, then they may not be worth saving.

Justin Stover of the University of Edinburgh, for example, believes that there is no case to be made for the humanities. While defenders of the humanities may argue that they are means of improving and expressing our ideas, that they provide skills that are relevant and transferable to other fields and pursuits, or that they are a search for values, Stover believes that these benefits are hollow. He points out that study in the humanities isn’t necessary for actual artistic expression. While studies in obscure languages or cultures may foster useful skills for careers outside of the academy, these are mere by-products and not something that makes a strong case for such study in itself.

In addressing the matter of value, Stover notes,

“’values’ is a hard thing to put in a long diachronic frame because it is not clear that there is any analogous notion in any culture besides our own. Values can hardly be a necessary component of the humanities — much less the central core — if there was no notion of them for most of the humanities’ history […] values might have a lot to do with Golden Age Spanish literature; but what have they to do with historical linguistics?”

Stover suggests alternatively that study in the humanities fulfills a social function by creating a prestigious class of people who share certain tastes and manners of judgment, but that ultimately there is no non-question-begging justification for the humanities. He notes, “The humanities do not need to make a case within the university because the humanities are the heart of the university.” One cannot justify the importance of the humanities from outside of the perspective of the humanities.

The moral concern on this issue is less about the morality of defending a liberal arts education compared to a science education than about how we are making the distinction itself. Are we talking about methods? Disciplinary norms? The texts? The teaching? Stover’s argument relies on understanding the humanities as an essentially different thing from the sciences. But are there actually good reasons to make these distinctions? Anyone who has studied logic, linguistics, or economics knows how technical those fields can be. By the same token, several studies of the sciences reveal the importance that aesthetic taste can have not only on individual scientists, but on whole scientific communities. The response of scientific communities to the COVID-19 pandemic — disagreements about treatment protocols, publication concerns about observations of the disease, and so on — reveals that the notion that science is a purely objective affair while the arts are purely subjective is more of a slogan than a reality.

Values are not a mere “notion” of university professors and academics. While Stover doesn’t clarify what he means by values, I would suggest that values are at the heart of the liberal arts and humanities — a ‘value’ at its core simply denotes what people take to be important and worth pursuing. My morning coffee is important to me, I pursue it, I prize it, it has value. The humanities have always been a matter of addressing the issues that humans consider important. So the answer to the question of what values have to do with historical linguistics is “a lot.” Languages change over time to reflect the problems, interests, and desires that humans have; linguistic change is a reflection of what is important, what is valued by a society and why.

But if this is the case, then science and the many STEM fields are not immune from this either. What we choose to focus on in science, technology, and engineering reveals what we care about, what we value (knowledge of climate change, for example, has changed how we value the environment). The notion that the humanities can only aspire to the subjective with only secondary benefits in other areas is a moral failure in thinking. Science is not isolated from society, nor should it be. By the same token, a method and style that focuses on empirical verification and experimentation over subjective elements can improve what the humanities can produce and help us focus on what is important.

In addressing the cross section of human interest and scientific method, philosopher John Dewey notes,

“Science through its physical technological consequences is now determining the relations which human beings, severally and in groups, sustain to one another. If it is incapable of developing moral techniques which will also determine these relations, the split in modern culture goes so deep that not only democracy but all civilized values are doomed.”

The distinction between the arts and the sciences is not essential or absolute, but one of our own creation that reflects our own limited thinking. Any art, just like science, can aspire toward critical, experimental objectivity of some degree, just as any scientific or engineering pursuit should be understood in terms of its role in the larger human project. The more we try to separate them, the more detrimental it will be to both. The problem of whether there is a case to be made for the arts disappears once we drop the notion that there is complete separation — the more important and interesting moral problem becomes how we might best improve the methods of inquiry that are vital for both.

Bad Science, Bad Science Reporting

3D image of human face with several points of interest circled

It tends to be that only the juiciest of developments in the sciences become newsworthy: while important scientific advances are made on a daily basis, the general public hear about only a small fraction of them, and the ones we do hear about do not necessarily reflect the best science. Case in point: a recent study that made headlines for having developed an algorithm that could detect perceived trustworthiness in faces. The algorithm used as inputs a series of portraits from the 16th to the 19th centuries, along with participants’ judgments of how trustworthy they found the depicted faces. The authors then claimed that there was a significant increase in trustworthiness over the period of time they investigated, which they attributed to lower levels of societal violence and greater economic development. With an algorithm thus developed, they then applied it to some modern-day faces, comparing Donald Trump to Joe Biden, and Meghan Markle to Queen Elizabeth II, among others.

It is perhaps not surprising, then, that once the media got wind of the study that articles with names like “Meghan Markle looks more trustworthy than the Queen” and “Trust us, it’s the changing face of Britain” began popping up online. Many of these articles read the same: they describe the experiment, show some science-y looking pictures of faces with dots and lines on them, and then marvel at how the paper has been published in Nature Communications, a top journal in the sciences.

However, many have expressed serious worries with the study. For instance, some have noted how the paper’s treatment of its subject matter – in this case, portraits from hundreds of years ago – is uninformed by any kind of art history, and that the belief that there was a marked decrease in violence over that time is uninformed by any history at all. Others note how the inputs into the algorithm are exclusively portraits of white faces, leading some to make the charge that the authors were producing a racist algorithm. Finally, many have noted the very striking similarity between what the authors are doing and the long-debunked studies of phrenology and physiognomy, which purported to show that the shape of one’s skull and the nature of one’s facial features, respectively, were indicative of one’s personality traits.

There are many ethical concerns that this study raises. As some have noted already, developing an algorithm in this manner could be used as a basis for making racist policy decisions, and would seem to lend credence to a form of “scientific racism.” While these problems are all worth discussing, here I want to focus on a different issue, namely how a study lambasted by so many, with so many glaring flaws, made its way to the public eye (of course, there is also the question of how the paper got accepted in such a reputable journal in the first place, but that’s a whole other issue).

Part of the problem comes down to how the results of scientific studies are communicated, with the potential for miscommunications and misinterpretations along the way. Consider again how those numerous websites clamoring for clicks with tales of the trustworthiness of political figures got their information in the first place, which was likely from a newswire service. Here is how ScienceDaily summarized the study:

“Scientists revealed an increase in facial displays of trustworthiness in European painting between the fourteenth and twenty-first centuries. The findings were obtained by applying face-processing software to two groups of portraits, suggesting an increase in trustworthiness in society that closely follows rising living standards over the course of this period.”

Even this brief summary is misleading. First, to say that scientists “revealed” something implies a level of certainty and definitiveness in their results. Of course, all results of scientific studies are qualified: no experiment will ever claim to be 100% certain of its results, or that, when measuring different variables, there is a definitive cause-and-effect relationship between them. The summary does qualify this a little bit – in saying that the study “suggests” an increase in trustworthiness. But it is misleading for another reason, namely that the study does not purport to measure actual trustworthiness, but perceptions of trustworthiness.

Of course, a study about an algorithm measuring what people think trustworthiness looks like is not nearly as exciting as a trustworthiness detection machine. And perhaps because the difference can be easily overlooked, or because the latter is likely to garner much more attention than the former, the mistake shows up in several of the outlets reporting it. For example:

Meghan was one and a half times more trustworthy than the Queen, according to researchers.

Consultants from PSL Analysis College created an algorithm that scans faces in painted portraits and pictures to find out the trustworthiness of the individual.

Meghan Markle has a more “trustworthy” face than the Queen, a new study claims.

From Boris Johnson to Meghan Markle – the algorithm that rates trustworthiness.

Again, the problem here is that the study never made the claim that certain individuals were, in fact, more trustworthy than others. But that news outlets and other sites report it as such compounds worries that one might employ the results of the study to reach unfounded conclusions about who is trustworthy and who isn’t.

So there are problems here at three different levels: first, with the nature and design of the study itself; second, with the way that newswire services summarized the results, making them seem more certain than they really were; and third, with the way that sites using those summaries presented the results in order to make the study look more interesting and legitimate than it really was, without raising any of the many concerns expressed by other scientists. All of these problems compound to produce the worry that the results of the study could be misinterpreted and misused.

While there are well-founded ethical concerns about how the study itself was conducted, it is important not to ignore what happens after the studies are finished and their results disseminated to the public. The moral onus is not only on the scientists themselves, but also on those reporting on the results of scientific studies.

The Dangerous Allure of Conspiracy Theories

photograph of QAnon sign at rally

Once again, the world is on fire. Every day seems to bring a new catastrophe, another phase of a slowly unfolding apocalypse. We naturally intuit that spontaneous combustion is impossible, so a sinister individual (or a sinister group of individuals) must be responsible for the presence of evil in the world. Some speculate that the most recent bout of wildfires in California was ignited by a giant laser (though no one can agree on who fired the lasers in the first place), while others across the globe set 5G towers ablaze out of fear that this frightening new technology was created by a malevolent organization to hasten the spread of coronavirus. Events as disparate as the recent explosion in Beirut and the rise in income inequality have been subsumed into a vast web of conspiracy and intrigue. Conspiracy theorists see themselves as crusaders against the arsonists at the very pinnacle of society, and are taking to internet forums to demand retribution for perceived wrongs.

The conspiracy theorists’ framework for making sense of the world is a dangerously attractive one. Despite mainstream disdain for nutjobs in tinfoil hats, conspiracy theories (and those who unravel them) have been glamorized in pop culture through films like The Matrix and The Da Vinci Code, both of which involve a single individual unraveling the lies perpetuated by a malevolent but often invisible cadre of villains. Real-life conspiracy theorists also model themselves after the archetypal detective of popular crime fiction. This character possesses authority to sort truth from untruth, often in the face of hostility or danger, and acts as an agent for the common good.

But in many ways, the conspiracy theorist is the inverse of the detective; the latter operates within the system of legality, often working directly for the powers-that-be, which requires an implicit trust in authority. They usually hunt down someone who has broken the law, and who is therefore on the fringes of the system. Furthermore, the detective gathers empirical evidence which forms the justification for their pursuit. The conspiracy theorist, on the other hand, is on the outside looking in, and displays a consistent mistrust of both the state and the press as sources of truth. Though conspiracy theorists ostensibly obsess over paper trails and blurry photographs, their evidence (which is almost always misconstrued or fabricated) doesn’t matter nearly as much as the conclusion. As Michael Barkun explains in A Culture of Conspiracy: Apocalyptic Visions in Contemporary America,

the more sweeping a conspiracy theory’s claims, the less relevant evidence becomes …. This paradox occurs because conspiracy theories are at their heart nonfalsifiable. No matter how much evidence their adherents accumulate, belief in a conspiracy theory ultimately becomes a matter of faith rather than proof.

In that sense, most conspiracy theorists are less concerned with uncovering the truth than confirming what they already believe. This is supported by a 2016 study, which identifies partisanship as a crucial factor in measuring how likely someone is to buy into conspiracy theories. The researchers determined that “political socialization and psychological traits are likely the most important influences” on whether or not someone will find themselves watching documentaries on ancient aliens or writing lengthy Facebook posts about lizard people masquerading as world leaders. For example, “Republicans are the most likely to believe in the media conspiracy followed by Independents and Democrats. This is because Republicans have for decades been told by their elites that the media are biased and potentially corrupt.” The study concludes that people from both ends of the political spectrum can be predisposed to see a conspiracy where there isn’t one, but partisanship is ultimately a more important predictor of whether a person will believe a specific theory than any other factor. In other words, Democrats rarely buy into conspiracy theories about their own party, and vice versa with Republicans. The enemy is never one of us.

It’s no wonder the tinfoil-hat mindset is so addictive. It’s like being in a hall of mirrors, where all you can see is your own flattering image repeated endlessly. Michael J. Wood suggests in another 2016 study that “people who are aware of past malfeasance by powerful actors in society might extrapolate from known abuses of power to more speculative ones,” or that “people with more conspiracist world views might be more likely to seek out information on criminal acts carried out by officials in the past, while those with less conspiracist world views might ignore or reject such information.” It’s a self-fulfilling prophecy, fed by a sense of predetermined mistrust that is only confirmed by every photoshopped UFO. Conspiracy theories can be easily adapted to suit our own personal needs, which further fuels the narcissism. As one recent study on a conspiracy theory involving Bill Gates, coronavirus, and satanic cults points out,

there’s never just one version of a conspiracy theory — and that’s part of their power and reach. Often, there are as many variants on a given conspiracy theory as there are theorists, if not more. Each individual can shape and reshape whatever version of the theory they choose to believe, incorporating some narrative elements and rejecting others.

This mutable quality makes conspiracy theories personal, as easily integrated into our sense of self as any hobby or lifestyle choice. Even worse, the very nature of social media amplifies the potency of conspiracy theories. The study explains that

where conspiracists are the most engaged users on a given niche topic or search term, they both generate content and effectively train recommendation algorithms to recommend the conspiracy theory to other users. This means that, when there’s a rush of interest, as precipitated in this case by the Covid-19 crisis, large numbers of users may be driven towards pre-existing conspiratorial content and narratives.

The more people fear something, the more likely an algorithm will be to offer them palliative conspiracy theories, and the echo chamber grows even more.

Both of the studies previously mentioned suggest that there is a predisposition to believe in conspiracy theories that transcends political alliance, but where does that predisposition come from? It seems most likely that conspiracy beliefs are driven by anxiety, paranoia, feelings of powerlessness, and a desire for authority. That desire for authority is especially evident at gatherings of flat-earthers, a group that consistently mimics the tone and language of academic conferences. Conspiracies rely on what Barkun called “stigmatized knowledge,” or “claims to truth that the claimants regard as verified despite the marginalization of those claims by the institutions that conventionally distinguish between knowledge and error — universities, communities of scientific researchers, and the like.” People feel cut off from the traditional locus of knowledge, so they create their own alternative epistemology, which restores their sense of authority and control.

Conspiracy theories are also rooted in a basic desire for narrative structure. Faced with a bewildering deluge of competing and fragmentary narratives, conspiracy theories cobble together half-truths and outright lies into a story that is more coherent and exciting than reality. The conspiracy theories that attempt to explain coronavirus provide a good example of this process. The first stirrings of the virus began in the winter of 2019, then rapidly accelerated without warning and altered the global landscape seemingly overnight. Our healthcare system and government failed to respond with any measure of success, and hundreds of thousands of Americans died over the span of a few months. The reality of the situation flies in the face of narrative structure — the familiar rhythm of rising action-climax-falling action, the cast of identifiable good guys and bad guys, the ultimate moral victory that redeems needless suffering by giving it purpose. In the absence of narrative structure, theorists suggest that Bill Gates planned the virus decades ago, citing his charity work as an elaborate cover-up for nefarious misdeeds. On this view, the system itself isn’t broken or left unequipped to handle the pandemic by austerity; rather, the catastrophe was the work of a single bad actor.

Terrible events are no longer random, but imbued with moral and narrative significance. Barkun argues that this is a comfort, but also a factor that further drives conspiracy theories:

the conspiracy theorist’s view is both frightening and reassuring. It is frightening because it magnifies the power of evil, leading in some cases to an outright dualism in which light and darkness struggle for cosmic supremacy. At the same time, however, it is reassuring, for it promises a world that is meaningful rather than arbitrary. Not only are events nonrandom, but the clear identification of evil gives the conspiracist a definable enemy against which to struggle, endowing life with purpose.

Groups of outsiders (wealthy Jewish people, the “liberal elite,” the immigrant) are Othered within the discourse of theorists, rendered as villains capable of superhuman feats. The QAnon theory in particular feels more like the Marvel cinematic universe than a coherent ideology, with its bloated cast of heroes teaming up for an Avengers-style takedown of the bad guys. Some of our best impulses — our love of storytelling, a desire to see through the lies of the powerful — are twisted and made ugly in the world of online conspiracy forums.

The prominence of conspiracy theories in political discourse must be addressed. Over 70 self-professed Q supporters have run for Congress as Republicans in the past year, and as Kaitlyn Tiffany points out in an article for The Atlantic, the QAnon movement is becoming gradually more mainstream, borrowing aesthetics from the lifestyle movement and makeup tutorials to make itself more palatable. “Its supporters are so enthusiastic, and so active online, that their participation levels resemble stan Twitter more than they do any typical political movement. QAnon has its own merch, its own microcelebrities, and a spirit of digital evangelism that requires constant posting.” Perhaps the most frightening part of this problem is the impossibility of fully addressing it, because conspiracy theorists are notoriously difficult to hold a good-faith dialogue with. Sartre’s description of anti-Semites, written in the 1940s, is relevant here (not coincidentally, the majority of contemporary conspiracy theories are deeply anti-Semitic). He wrote that anti-Semites (and today, conspiracy theorists)

know that their statements are empty and contestable; but it amuses them to make such statements: it is their adversary whose duty it is to choose his words seriously because he believes in words. They have a right to play. They even like to play with speech because by putting forth ridiculous reasons, they discredit the seriousness of their interlocutor; they are enchanted with their unfairness because for them it is not a question of persuading by good arguing but of intimidating or disorienting.

This quote raises the frightening possibility that not all conspiracy theorists truly believe what they say, that their disinterest in evidence is less an intellectual blind spot than a source of amusement. Sartre helps us see why conspiracy theorists often operate on a completely different wavelength, one that seems to preclude logic, rationality, and even the good-faith exchange of ideas between equals.

The fragmentation of postmodern culture has created an epistemic conundrum: on what basis do we understand reality? As the operations of governments become increasingly inscrutable to those without education, as the concept of truth itself seems under attack, how do we make sense of the forces that determine the contours of our lives? Furthermore, as Wood points out, mistrust in the government isn’t always baseless, so how do we determine which threats are real and which are imagined?

There aren’t simple answers to these questions. The only thing we can do is address the needs that inspire people to seek out conspiracy theories in the first place. People have always had an impulse to attack their anxieties in the form of a constructed Other, to close themselves off, to distrust difference, to force the world to conform to a single master narrative; it is therefore tempting to say that insidious conspiracy theories will never be eradicated entirely. Maybe the solution is to encourage the pursuit of self-knowledge, an honest accounting of our own biases and desires, before we pursue an understanding of forces beyond our control.

Hydroxychloroquine and the Ethical Pitfalls of Private Science

A box of hydroxychloroquine sulphate tablets held by a hand with coronavirus written in background

Last week, news broke that a significant study into the effects of hydroxychloroquine for treating COVID-19 relied on data that has now been called into question. The effects of this study, and of other studies that relied on data from the same source, were profound, leading to changes in planned studies and in the COVID-19 treatments being prescribed to patients. The fact that this data came from an unaudited source highlights the ethical concerns that stem from an increasing corporate role in science.

In late May, a study published in the elite medical journal The Lancet suggested that COVID-19 patients taking chloroquine or hydroxychloroquine were more likely to die. The study included over 96,000 patients, relying on electronic health data from the company Surgisphere, run by Dr. Sapan Desai, who was also included as a co-author of the article. It found that at the 671 hospitals where COVID-19 patients had been prescribed hydroxychloroquine, the risk of death was over twice as great as for patients who were not prescribed the drug. An additional study using data from Surgisphere investigated the use of blood pressure medication and was published in The New England Journal of Medicine. A third paper using Surgisphere data, available as a preprint, suggested that ivermectin significantly reduced mortality in COVID-19 patients. All three papers have been retracted.

The retractions occurred after discrepancies were noticed in the data. The reported doses of hydroxychloroquine for American patients were higher than FDA guidelines, and the number of Australian deaths was higher than official statistics. There was also a discrepancy between the small number of hospitals included and the vast number of patient records. Following this, independent auditors were asked to review the data provided by Surgisphere; however, the company refused to provide the data, citing confidentiality agreements with the hospitals. Yet investigations found no US hospitals that admitted to participating with Surgisphere.
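The checks that exposed the problem are conceptually simple: compare a dataset’s claims against independent external references. A minimal sketch of that kind of audit in Python, with wholly hypothetical field names and reference numbers, might look like this:

# Assumed reference values, for illustration only.
MAX_GUIDELINE_DOSE_MG = 800               # hypothetical dosing guideline
OFFICIAL_DEATHS = {"Australia": 100}      # hypothetical official tally

records = [
    {"country": "US", "daily_dose_mg": 1200},  # exceeds the guideline
    {"country": "US", "daily_dose_mg": 600},
]
reported_deaths = {"Australia": 130}           # exceeds the official tally

def audit(records, reported_deaths):
    """Flag values that cannot be reconciled with external references."""
    flags = []
    for r in records:
        if r["daily_dose_mg"] > MAX_GUIDELINE_DOSE_MG:
            flags.append(f"{r['country']}: dose {r['daily_dose_mg']} mg "
                         f"exceeds guideline of {MAX_GUIDELINE_DOSE_MG} mg")
    for country, deaths in reported_deaths.items():
        if deaths > OFFICIAL_DEATHS.get(country, float("inf")):
            flags.append(f"{country}: {deaths} reported deaths exceed "
                         f"the official count of {OFFICIAL_DEATHS[country]}")
    return flags

for flag in audit(records, reported_deaths):
    print(flag)

Checks like these only work, of course, if auditors can see the underlying data, which is precisely what Surgisphere refused to allow.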

Surgisphere itself is also a suspect source. The company was founded in 2007 but has little online presence. Its website does not list partner hospitals or identify a scientific advisory board, and the company claims to have just 11 employees. Its enormous database doesn’t seem to have been used by peer-reviewed studies until May. Desai himself also has a colorful history, including a record of three outstanding medical malpractice suits against him.

The studies had significant impact worldwide. Following the report that hydroxychloroquine increased mortality rates in patients, the WHO announced a “temporary” pause in its studies of hydroxychloroquine (it has since resumed its efforts). The studies also played a role in the national conversation about the drug in the United States following President Trump’s announcement that he had been taking it to combat the virus. The preprint on ivermectin was never officially published, but it did lead to changes in treatment protocols in South America. In Bolivia, a local government planned to hand out 350,000 doses of the drug after receiving authorization from the Bolivian Ministry of Health. The drug was also cited as a potential treatment in Chile and Peru.

This episode highlights several general moral issues. Retraction scandals at a time when the public is looking to, and relying on, medical science are dangerous. The situation is intensified by the fact that these controversies are tied to the political debate over hydroxychloroquine, as they may undermine science along partisan lines. Polls show that Democrats are far more likely than Republicans to have a great deal of confidence in scientists to act in the best interests of the public, yet such scandals further undermine public trust and make science seem more partisan.

The matter also raises ethical issues within the sciences. According to Ivan Oransky of Retraction Watch, the case represents larger systemic issues within the sciences; even leading journals rely too heavily on an honor system. For example, the pandemic has raised warning signs about the use of preprints, which have shifted from being a means of getting feedback while studies are finalized to a way of sharing “breaking data” as fast as possible, despite the lack of peer review.

The Surgisphere episode highlights the ethical pitfalls of science relying on private sector companies for research. Since the twentieth century, the private sector has been an increasing source of scientific funding. In the United States, private funding accounted for 65% of research and development spending in 2013. There are good reasons for private sector investment and corporate-university partnerships. The public sector has shown less willingness to supply the needed funding. As Ashutosh Jogalekar points out in an article for Scientific American, investments by private interests have allowed many projects to be funded which might not be funded otherwise. He notes, “For these billionaires a few millions of dollars is not too much, but for a single scientific project hinging on the vicissitudes of government funding it can be a true lifeline.” It has also been noted that private funding can ensure that cost-effective replication studies are possible, which is especially important since efforts to reproduce published results have succeeded in only 40% of experiments in peer-reviewed journals.

On the other hand, according to Sheldon Krimsky, the author of Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research?, numerous problems can occur when scientists partner with private corporations. Krimsky finds that publication practices have been influenced by commercial interests: the commercialization of science has led to a decline in the notion that scientists should work in the public interest, and sharing data becomes more problematic given the use of paywalls and intellectual property protection. This makes it more difficult to verify the data.

There are many ways corporations can complicate data-sharing. By choosing not to release unflattering findings or claiming data as exclusive intellectual property, companies can make it difficult for others to use research (consider Diamond v. Chakrabarty, which set the precedent for allowing genetically modified organisms to be patented). And, of course, the Surgisphere episode is an example of university-level researchers working in collaboration with a private company where the company retains sole control of the data. Such cases allow for fraud and suffer from a lack of oversight.

One proposed solution is to move towards “open science,” making publications, data, and other information open and accessible to everyone. Such a move would allow for both increased transparency and accountability as well as more rigorous peer review. Under such a system, falsified data would be more difficult to submit and easier to detect.

While many of these issues have been brewing for years, it is not every day that a single published study can have the kind of global impact that came with investigations into the effectiveness of hydroxychloroquine, even as other independent studies have also suggested the drug is ineffective. The ethical fallout from this scandal is thus far more obvious given public interest in the disease. Indeed, there have already been calls to stop private speculation into COVID-19 research; part of this call includes the position that all intellectual property should be made available for free to the international scientific community for fighting the pandemic. The question now is what specific reforms should be implemented to prevent scandals like this from happening again.


Religious Liberty and Science Education

photograph of empty science classroom

In November, the Ohio House of Representatives passed “The Ohio Student Religious Liberty Act of 2019.” The law quickly garnered media attention because it seems to allow students to get answers wrong without penalty if they get those answers wrong because of their religious beliefs. The language of the new law is the following:

Sec. 3320.03. No school district board of education, governing authority of a community school […], or board of trustees of a college-preparatory boarding school […] shall prohibit a student from engaging in religious expression in the completion of homework, artwork, or other written or oral assignments. Assignment grades and scores shall be calculated using ordinary academic standards of substance and relevance, including any legitimate pedagogical concerns, and shall not penalize or reward a student based on the religious content of a student’s work.

Sponsors of the bill claim that students will be required to learn the material they are being taught, and to answer questions in the way that the curriculum supports regardless of whether they agree with it. Opponents of the law disagree. The language of the legislation prohibits teachers from penalizing the work of a student when that work is expressive of religious belief. This seems to entail that a teacher cannot give a student a bad grade if that student gets an answer wrong for religious reasons. In any event, the vagueness of the law may affect the actions of teachers. They might be reluctant to grade assignments correctly if they think doing so may put them at odds with the law.

Ohio is not the only state in which bills like this are being considered, though most have failed to pass for one reason or another. Some states, such as Arizona, Florida, Maine, and Virginia, have attempted to pass “controversial issues” bills. The bills take various forms. Arizona Bill 202, for example, attempted to prohibit teachers from advocating any position on issues that are mentioned in the platform of any major political party (a similar bill was proposed in Maine). This has implications for teaching evolution and anthropogenic climate change in science classes. Other controversial-issues bills prohibit schools from punishing teachers who teach evolution or climate change as if they were scientifically controversial.

Much of the recent action is motivated by attitudes about Next Generation Science Standards, a science education program developed by 26 states in conjunction with the National Science Teachers Association, the American Association for the Advancement of Science, and the National Research Council. The program aims to teach science in active ways that emphasize the important role that scientific knowledge plays in innovation, the development of new technologies, and in responsible stewardship of the natural environment. NGSS has encountered some resistance in state legislatures because the curriculum includes education on the topics of evolution and anthropogenic climate change.

Advocates of these laws make a number of different arguments. First, all things being equal, there is value in freedom of conscience. We should set up our public spaces in such a way that respects the fact that people can believe what they want to believe. The U.S. Constitution was intentionally written in a way that provides protections for citizens to form beliefs independently of the will of governments. In response, an opponent of this legislation might say that imposing a set of standards for curriculum based on the best available evidence is not the same thing as forcing citizens to endorse a particular set of beliefs. A student can learn about evolution or anthropogenic climate change, all the while disagreeing with what they are learning.

A second, related argument might be that school curriculum and grading policies should respect the role that religion plays in people’s lives. For many, religion provides life with meaning, peace, and hope. Given the importance of these values, our public institutions shouldn’t be taking steps that might undermine religion.

A third argument concerns parental rights to raise children in the way that they see fit. This concern is content-neutral. It might be a principle that everyone should respect. Parents have significant interests in the way that their children turn out, and as a result they have interests in avoiding what they might view as indoctrination of their children by the government. Attendance at school is mandatory for children. If the government is going to force them to attend, they shouldn’t be forced to “learn” things that their parents might not want them to hear.

A fourth argument has to do with the value of free speech and the expression of alternative positions. It is always valuable to hear opposing positions, even those that are in opposition to received scientific knowledge, so that science doesn’t just become another form of dogma. In response, opponents would likely argue that we get closer to the truth when we assess the validity of opposing viewpoints, but not all opposing viewpoints are created equal. Students only have so much time dedicated to learning science in school, so if opposing positions are considered in the classroom, perhaps it is best if they are positions advocated by scientists. Moreover, if a particular view reflects only the opinion of a small segment of the scientific community, perhaps it is a waste of valuable time to discuss those positions at all.

Opponents of this kind of legislation would insist that those in charge of the education of our children must value best epistemic practices. Some belief-forming practices contribute to the formation of true beliefs more reliably than others. The scientific method and the peer review process are examples of these kinds of reliable practices. It is irresponsible to treat positions that are not supported by evidence as if they are equally deserving of acceptance as beliefs that are supported by evidence. Legislation of this type treats tribalism and various forms of pernicious cognitive bias as adequate grounds for belief.

Furthermore, opponents argue, the passage of these bills is nothing more than political grandstanding—attempts to solve non-existent problems. The United States Constitution already protects the religious liberty of students. Additional legislation is not necessary.

Education, in part, is the creation of responsible, productive, autonomous citizens. What’s more, the issues at stake are crucially important. Denying the existence of anthropogenic climate change has powerful, and even deadly, consequences for millions of currently living beings, as well as for future generations. Our best hope is to create citizens who are well-informed on this issue and who are therefore in a good position to mitigate the effects and to construct meaningful climate policy in the future. This will be impossible if future generations are essentially climate illiterate.

The Ethics of Scientific Advice: Lessons from “Chernobyl”

photograph of Fireman's Monument at Chernobyl

The recently released HBO miniseries Chernobyl highlights several important moral issues that are worth discussing. For example, what should we think about nuclear power in the age of climate change? What can disasters tell us about government accountability and the dangers of keeping unwelcome news from the public? This article will focus on the ethical issues concerning scientists’ potential to influence government policy. How should scientists advise governments, and who holds them accountable for their advice?

In the second episode, the Soviet Union begins dumping thousands of tons of sand and boron onto the burning nuclear plant at the suggestion of physicist Valery Legasov. After consulting fellow scientist Ulana Khomyuk (a fictional character who represents the many other scientists involved), Legasov tells Soviet leader Gorbachev that in order to prevent a potential disaster, drainage pools will need to be emptied from within the plant in an almost certain suicide mission. “We’re asking for your permission to kill three men,” Legasov reports to the Soviet government. It’s hard to imagine a more direct example of a scientist advising a decision with moral implications.

Policy makers often lack the expertise to make informed decisions, and this provides an opportunity for scientists to influence policy. But should scientists weigh ethical or policy considerations when offering advice?

On one side of this debate are those who argue that scientists’ primary responsibility is to ensure the integrity of science. This means that scientists should maintain objectivity and should not allow their personal moral or religious convictions to influence their conclusions. It also means that the public should see science as an objective and non-political affair. In essence, science must be value-free.

This value-free side of the debate is reflected in the mini-series’ first episode. It ends with physicist Legasov getting a phone call from Soviet minister Boris Shcherbina telling him that he will be on the commission investigating the accident. When Legasov begins to suggest an evacuation, Shcherbina tells him, “You’re on this committee to answer direct questions about the function of an RBMK reactor…nothing else. Certainly not policy.”

Those who argue for value-free science often argue that scientists have no business trying to influence policy. In democratic nations this is seen as particularly important since policy makers are accountable to voters while scientists are not. If scientists are using ethical judgments to suggest courses of action, then what mechanism will ensure that those value judgments reflect the public’s values?

In order to maintain the value-free status of science, philosophers such as Ronald N. Giere argue that there is an important distinction between judging the truth of scientific hypotheses and judging the practical uses of science. A scientist can evaluate the evidence for a theory or hypothesis, but they shouldn’t evaluate whether one should rely on that theory or hypothesis to make a policy decision. For example, a scientist might tell the government how much radiation is being released and how far it will spread, but they should not advise something like an evacuation. Once the government is informed of the relevant details, the decision of how to respond should be left entirely to elected officials.

Opponents of this view, however, argue that scientists do have a moral responsibility when offering advice to policy makers and believe that scientists shouldering this responsibility is desirable. Philosopher Heather Douglas argues that given that scientists can be wrong, and given that acting on incorrect information can lead to morally important consequences, scientists do have a moral duty concerning the advice they offer to policy makers. Scientists are the only ones who can fully appreciate the potential implications of their work. 

In the mini-series we see several examples where only the scientists fully appreciate the risks and dangers from radiation, and are the strongest advocates of evacuation. In reality, Legasov and a number of other scientists offered advice on how to proceed with cleaning up the disaster. According to Adam Higginbotham’s Midnight in Chernobyl: The Untold Story of the World’s Greatest Nuclear Disaster, the politicians were ignorant of nuclear physics, and the scientists and technicians were too paralyzed by indecision to commit to a solution.

In the real-life disaster, the scientists involved were frequently unsure about what was actually happening. They had to estimate how fast various parts of the core might burn and whether different radioactive elements would be released into the air. Reactor specialist Konstantin Fedulenko was worried that the boron drops were having limited effect and that each drop was hurling radioactive particles into the atmosphere. Legasov disagreed and told him that it was too late to change course. Fedulenko believed it was best to let the graphite fire burn itself out, but Legasov retorted, “People won’t understand if we do nothing…We have to be seen to be doing something.” This suggests that the scientists were not simply offering technical advice but were making judgments based on additional value and policy considerations. 

Again, according to Douglas, given the possibility for error and the potential moral consequences at play, scientists should consider these consequences to determine how much evidence is enough to say that a hypothesis is true or to advise a particular course of action. 

In the mini-series, the government relies on monitors showing a low level of radiation to initially conclude that the situation is not bad enough to warrant an evacuation. However, it is pointed out that the radiation monitors being used likely had a limited maximum range, and so the radiation could be much higher than the monitors would indicate. Given that they may be wrong about the actual amount of radiation and the threat to public health, a morally responsible scientist might conclude that evacuation should be suggested to policy makers.
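The underlying point is about censored measurements: a saturating instrument reports a lower bound, not a value. A minimal sketch in Python (the 3.6 roentgen-per-hour ceiling is the figure dramatized in the series; everything else is an illustrative assumption):

MONITOR_MAX = 3.6  # assumed instrument ceiling, in roentgen per hour

def read_monitor(true_level: float) -> float:
    """A saturating sensor can never report above its ceiling."""
    return min(true_level, MONITOR_MAX)

for true_level in [1.0, 3.6, 15000.0]:
    reading = read_monitor(true_level)
    note = ("  <- at ceiling: the true value could be anything above this"
            if reading >= MONITOR_MAX else "")
    print(f"true={true_level:>9} reported={reading}{note}")

A reading pinned at the maximum is consistent with a situation thousands of times worse, which is why treating it as a measurement rather than a floor invites catastrophic underreaction.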

While some claim that scientists shouldn’t include these considerations, others argue that they should. Certainly, the issue isn’t limited to nuclear disasters either. Cases ranging from climate change to food safety, chemical and drug trials, economic policies, and even the development of weapons, all present a wide array of potential moral consequences that might be considered when offering scientific advice. 

It’s difficult to say a scientist shouldn’t make morally relevant consequences plain to policy makers. It often appears beneficial, and it sometimes seems unavoidable. But this liberty requires scientists to practice judgment in determining what a morally relevant consequence is and is not. Further, if scientists rely on value judgments when advising government policy, how are scientists to be held accountable by the public? Given these benefits and concerns, whether we want scientists to make such judgments and to what extent their advice should reflect those judgments presents an important ethical dilemma for the public at large. Resolving this dilemma will at least require that we be more aware of how experts provide policy advice.

Getting Personal About Personal Genetic Information

Photograph of two boxes by the brand 23AndMe

Learning about the ins and outs of what makes you, you has become a trend in recent years, due primarily to the popularization of genetic testing companies such as 23andMe, AncestryDNA, and GEDmatch. All three companies may have stickier corporate policies than you might expect from a harmless saliva collection kit. In fact, in recent months, story after story has surfaced regarding the largely nonexistent privacy protections on personal genetic information. At the end of April 2018, authorities were able to identify and eventually prosecute the ‘Golden State Killer’ suspect using genetic information acquired through a genealogy site called GEDmatch.

This site, as explained by The Atlantic, is a website where individuals can upload their genetic information in the hopes of finding unknown relatives through DNA commonalities. However, authorities used the site to create a fake profile and upload DNA found at a crime scene, which was soon matched to a distant relative of the man eventually identified as the killer. As you can imagine, this created widespread privacy concern, not just for GEDmatch users but for consumers of other genetic testing databases, and provoked questions about whether serving the greater common good can justify breaching individual privacy. It was revealed through the Freedom of Information Act that the Federal Trade Commission is investigating DNA testing companies like 23andMe and Ancestry.com over their policies for handling personal information and genetic data and how they share that information with third parties.

Not only has private genetic information been exploited to solve multiple murder cases, but in 2017 NBC warned consumers of the potential risks of giving companies access to their complete genetic codes. As Peter Pitts, who is part of a medical advocacy group, stated, genetic code “is the most valuable thing you own.” Although the majority of legitimate companies assure customers that they do not share this information with researchers or third parties, media outlets including NBC are encouraging people to read the fine print of the broad contracts that must be signed before personal samples are submitted for analysis. In fact, even though many of these companies market themselves as purely targeting genealogy, there is still critical information about your health embedded in your genetic code, which in the wrong hands could be devastating to personal privacy.

Even more terrifying is the concealed nature of genetic information. Unlike your credit card statement, where you can eventually spot purchases that cannot be attributed to your own spending, you may never find out that a third party has your personal genetic information. Beyond having something interesting to discuss over the Thanksgiving table, many individuals use DNA testing in order to contribute to future medical advances. However, as Marcy Darnovsky suggests in The New York Times, “there are more efficient ways of contributing to medical advances than paying to hand over your genetic health information to companies like 23andMe.” In late 2015, 23andMe announced two deals with some of the largest pharmaceutical and biotech corporations in the industry in order to find treatments for diseases hidden in our DNA. Concerns arise after reading through 23andMe’s consent document, which acknowledges that once you send off your genetic information there are no guarantees of anonymity. In fact, breaches in confidentiality could affect more than just you — they could impact your family members as well, since you share a similar genetic code. Darnovsky explains that “a 2008 law prohibits health insurance companies and employers from discrimination based on genetic information, but the law does not cover disability, life, or long-term care insurance.”

Another noteworthy concern is that the general public may not be able to decipher wordy scientific information. How will people deal with potentially devastating news about their own or their children’s future health, in terms of genetic risk for certain diseases or their carrier status? A quick look at the 23andMe website shows that anyone can get health information regarding genetic probability for certain illnesses. 23andMe states: “Genetic Health Risk reports – learn how your genetics can influence your risk for certain diseases.” Even though the company does mention that testing positive for a certain gene does not necessarily mean one will get the disease, a naïve or uninformed individual could take this information to mean that they are certain to get the illness. In this new era of simplifying genetic information so the general public can “learn more about themselves,” it is imperative that we not only advertise the companies that make this possible, but also make clear the risks associated with such lenient confidentiality contracts. A breach of your genetic information means someone in a pharmaceutical company laboratory not only knows what color your eyes are, but exactly which diseases you have a probability of getting. Careful evaluation is therefore critical in determining whether learning more about oneself through genetic testing is worth the risk, given not only many companies’ negligence of personal privacy but also their vague privacy guarantees, which could easily be exploited by third parties.

The takeaway for any layperson unfamiliar with the ins and outs of genetic information, and specifically with how to interpret it, is to be especially cautious about these genealogy tests. Consumers should take care to read the fine print describing a company’s privacy policies, and should recognize these genetic testing companies as businesses that will protect their own interests, whether or not those interests are favorable to their consumers.


Is There a Problem With Scientific Discoveries Made by Harassers?

A scientist taking notes next to a rack of test tubes.

The question about bias in science is in the news again.

It arose before, in the summer, when the press got hold of an inflammatory internal memo that Google employees had been circulating around their company. The memo’s author, James Damore, now formerly of Google, argued that Google’s proposed solutions to eradicating the gender gap in software engineering are flawed. They’re flawed, Damore thought, because they assume that the preponderance of men in “tech and leadership positions” is a result only of social and institutional biases, and they ignore evidence from evolutionary psychology suggesting that biologically inscribed differences in “personality,” “interests,” and “preferences” explain why women tend not to hold such positions.

Continue reading “Is There a Problem With Scientific Discoveries Made by Harassers?”

Baby Powder, Consumer Labeling and Scientific Uncertainty

A photo of spilled baby powder.

Overturning the August 21, 2017 verdict that Johnson & Johnson must pay $417 million in compensatory and punitive damages to cancer sufferer Eva Echeverria, a Los Angeles Superior Court judge last week granted a new trial to the pharmaceutical giant, essentially concluding, contra the jury, that Echeverria didn’t adequately demonstrate Johnson & Johnson’s negligence.

Continue reading “Baby Powder, Consumer Labeling and Scientific Uncertainty”