
Implicit Bias and the Efficacy of Training


On September 12th, California’s state legislature passed a series of measures designed to reduce unconscious biases among medical practitioners and other public servants; under the new laws, doctors, nurses, lawyers, and court workers will be expected to undergo implicit bias training as a regular continuing education requirement. A number of advocacy groups argue that these unconscious biases contribute strongly to wage gaps, differential education outcomes, disparities in criminal justice, and unequal healthcare results – such as the fact that pregnant black women are three to four times more likely to die from complications during labor and delivery than are pregnant white women. Bias training is supposed to be a tool for chipping away at the generations of crystallized racism encasing our society.

The only problem is that implicit bias training probably doesn’t work – at least not in the way that people want it to. 

At this point, the data seem clear about two things: 

    1. Unconscious biases are pervasive elements of how we perceive our social environments, and
    2. Unconscious biases are exceedingly difficult to permanently change.

Since Amos Tversky and Daniel Kahneman first drew attention to the phenomenon of cognitive biases in the early 1970s, researchers have explored the varieties of mental shortcuts on which we daily rely; tricks like ‘confirmation bias,’ ‘the halo effect,’ ‘the availability heuristic,’ ‘anchoring,’ and more have been studied by everyone from psychologists and philosophers trying to understand the mind to marketers trying to convince customers to purchase products.

One of the more surprising things about implicit biases is how they can sometimes conflict with your explicit beliefs or attitudes. You might, for example, explicitly believe that racism or misogyny is wrong while nevertheless harboring an implicit bias against minority groups or genders that could lead you to naturally react in harmful ways (either behaviorally or even just by jumping to an unfounded conclusion). You can explore this sort of thing yourself: implicit association tests (IATs) purport to be able to peel back your natural assumptions to reveal some of the underlying mental shortcuts that operate behind the scenes of your normal thought processes. In general, implicit bias training aims to highlight these cognitive biases by making the implicit processes explicit, with the hope that this will allow people to make conscious choices they actually endorse thereafter.

However, a study published this month in The Journal of Personality and Social Psychology indicates that the demonstrable effect of a variety of implicit bias training modules was, at best, a short-term affair that did not contribute to lasting changes in either explicit measures or behavior. By analyzing evidence from nearly 500 separate studies, researchers found that, although implicit bias training seminars, workshops, classes, and other short-form lessons could provoke short-term shifts in mood or emotion, there was next to no evidence that these shifts ultimately translate into different patterns of actual behavior.

This fits a general pattern of findings casting doubt on the efficacy of intensive bias training; in fact, some have argued that, by focusing on implicit problems rather than the manifest explicit issues, such training simply distracts from the systemic issues underlying the real problem – and some evidence suggests that mandatory training (as opposed to voluntary exercises) might even make the targeted biases stronger. None of this should be surprising: the notion that biased attitudes built up over decades of a person’s life could somehow be broken apart by a single day’s training is, at best, naive.

If there is one consistent beneficiary of implicit bias training, it’s the companies mandating it. Consider what happened after a video of two black customers being racially profiled at a Starbucks in Philadelphia went viral: the coffee company closed its stores nationwide for several hours so that its workforce could undergo bias training. By appearing decisive, Starbucks was able to address (and largely sidestep) an intensely damaging PR incident at the cost of a few hours of profit. That the training was unlikely to change the racist environment that precipitated the video was beside the point. As Brian Nosek, one of the psychologists who helped develop the IAT, put it, “I have been studying this since 1996, and I still have implicit bias.” Nonetheless, Starbucks apologized and the news cycle moved on.

So, it remains to be seen what the future holds for the state of California. Certainly, the move toward action on the problem of implicit bias is a step in the right direction. But that sort of training by itself, without a systematic effort to address the institutional problems that promote oppressive environments (intentionally or otherwise), will ultimately be powerless.

Should You Have the Right to Be Forgotten?

In 2000, nearly 415 million people used the Internet. By July 1, 2016, that number had grown to an estimated 3.425 billion – about 46% of the world’s population. Moreover, there are now roughly 1.04 billion websites on the world wide web. Maybe one of those websites contains something you would rather keep out of public view, perhaps some evidence of a youthful indiscretion or an embarrassing social media post. Not only do you have to worry about friends and family finding out, but now nearly half of the world’s population has near-instant access to it, if they know how to find it. Wouldn’t it be great if you could just get Google to take those links down?

This question came up in a 2014 court case in the European Union. A man petitioned for the right to request that Google remove from its search results a link to an announcement of the forced sale of one of his properties, arising from old social security debts. Believing that, since the sale had concluded years before, the information was no longer relevant, he wanted the link taken down. Google refused. Eventually, the court sided with the petitioner, ruling that search engines must consider requests from individuals to remove links to pages that result from a search on their name. The decision recognized for the first time the “right to be forgotten.”

This right, legally speaking, now exists in Europe. Morally speaking, however, the debate is far from over. Many worry that the right to be forgotten threatens a dearly cherished right to free speech. I, however, think some accommodation of this right is justified by appeal to the protection of individual autonomy.

First, what are rights good for? Human rights matter because their enforcement helps protect the free exercise of agency—something that everyone values if they value anything at all. Alan Gewirth points out that the aim of all human rights is “that each person have rational autonomy in the sense of being a self-controlling, self-developing agent who can relate to other persons on a basis of mutual respect and cooperation.” Now, virtually every life goal we have requires the cooperation of others. We cannot build a successful career, start a family, or be good citizens without other people’s help. Since an exercise of agency that has no chance of success is, in effect, worthless, the effective enforcement of human rights requires that our opportunities to cooperate with others not be severely constrained.

Whether people want to cooperate depends on what they think of us. Do they think of us as trustworthy, for example? Here is where “the right to be forgotten” comes in. This right promotes personal control over access to personal information that may unfairly influence another person’s estimation of our worthiness for engaging in cooperative activities—say, in being hired for a job or qualifying for a mortgage.

No doubt, you might think, we have a responsibility to ignore irrelevant information about someone’s past when evaluating their worthiness for cooperation. “Forgive and forget” is, after all, a well-worn cliché. But do we need legal interventions? I think so. First, information on the internet is often decontextualized. We find disparate links reporting personal information in a piecemeal way. Rarely do we find sources that link these pieces of information together into a whole picture. Second, people do not generally behave as skeptical consumers of information. Consider the anchoring effect, a widely shared human tendency to attribute more relevance to the first piece of information we encounter than we objectively should. Combine these considerations with the fact that the internet has exponentially increased our access to personal information about others, and you have reason to suspect that we can no longer rely upon the moral integrity of others alone to disregard irrelevant personal information. We need legal protections.

This argument is not intended to be a conversation stopper, but rather an invitation to explore the moral and political questions that the implementation of such a right would raise. What standards should be used to determine if a request should be honored? Should search engines include explicit notices in their search results that a link has been removed, or should it appear as if the link never existed in the first place? Recognizing the right to be forgotten does not entail the rejection of the right to free speech, but it does entail that these rights need to be balanced in a thoughtful and context-sensitive way.