
Racist, Sexist Robots: Prejudice in AI

By Meredith McFadden
5 Apr 2019

The stereotype of robots and artificial intelligence in science fiction is largely of a hyper-rational being, unafflicted by the emotions and social infirmities, like biases and prejudices, that impair us weak humans. However, there is reason to revise this picture. The more progress we make with AI, the more a particular problem comes to the fore: the algorithms keep reflecting parts of our worst selves back to us.

In 2017, research showed compelling evidence that AI picks up deeply ingrained racial and gender prejudices. Current machine learning techniques rely on algorithms learning from human interactions and human-produced data in order to better predict correct responses over time. Because the standard of correctness depends on humans, the algorithms cannot detect when bias informs a response and when the human is engaging in a non-prejudicial way. Thus, the best working AI algorithms pick up the racist and sexist underpinnings of our society. Some examples: the words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to math and engineering professions, and European names were associated with pleasantness and excellence.
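To make the mechanism concrete, here is a minimal sketch of the kind of measurement behind that research, with all vectors invented for illustration. The 2017 study computed a statistic of roughly this shape (differences in cosine similarity between groups of words) over real embeddings learned from web text.

```python
import numpy as np

# Toy four-dimensional "embeddings" invented purely for illustration;
# the real study used vectors pretrained on large web-text corpora.
vectors = {
    "woman":    np.array([0.9, 0.1, 0.3, 0.0]),
    "man":      np.array([0.1, 0.9, 0.3, 0.0]),
    "poetry":   np.array([0.8, 0.2, 0.1, 0.1]),
    "engineer": np.array([0.2, 0.8, 0.1, 0.1]),
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point together."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, attrs_a, attrs_b):
    """Mean similarity of `word` to one attribute set minus the other.

    A positive score means `word` sits closer to attrs_a in the
    embedding space; bias tests aggregate scores of this shape.
    """
    sim_a = np.mean([cosine(vectors[word], vectors[a]) for a in attrs_a])
    sim_b = np.mean([cosine(vectors[word], vectors[b]) for b in attrs_b])
    return sim_a - sim_b

# If the training text uses "woman" near arts words more often than near
# engineering words, that pattern is frozen into the geometry.
print(association("woman", ["poetry"], ["engineer"]))  # positive
print(association("man", ["poetry"], ["engineer"]))    # negative
```

Nothing in this computation consults anyone’s intent; the score simply reports which associations the training text made frequent.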

In order to prevent discrimination in housing, credit, and employment, Facebook has recently been forced to agree to an overhaul of its ad-targeting algorithms. The functions that determined how to target audiences for ads in these areas turned out to be racially discriminatory – not by design, since the designers of the algorithms certainly didn’t encode racial prejudices, but because of the way the algorithms were implemented. The associations learned by the ad-targeting algorithms led to disparities in the advertising of major life resources. It is not enough to program a “neutral” machine learning algorithm (i.e., one that doesn’t begin with biases); as Facebook learned, the AI must have anti-discrimination parameters built in as well. Characterizing just what this amounts to will be an ongoing conversation. For now, the ad-targeting algorithms cannot take age, zip code, gender, or other legally protected categories into consideration.
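A hypothetical sketch of why input-level neutrality falls short: the targeting rule below never looks at the protected category, only at a correlated, non-protected feature (a stand-in for zip code), yet its outcomes still split along group lines. All data and names here are invented; an anti-discrimination parameter would have to test outcomes, not just inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented audience of 1,000 people. `group` is a protected category;
# `zip_segment` is a non-protected feature that matches it 80% of the
# time, making it a proxy.
n = 1000
group = rng.integers(0, 2, n)
zip_segment = np.where(rng.random(n) < 0.8, group, 1 - group)

# A "neutral" targeting rule: it never reads `group`, only the proxy.
shown_ad = zip_segment == 1

def demographic_parity_gap(decisions, protected):
    """Difference in positive-decision rates between the two groups.

    Zero means both groups see the ad equally often; a large gap is
    disparate impact even though `protected` was never an input.
    """
    return decisions[protected == 1].mean() - decisions[protected == 0].mean()

print(f"parity gap: {demographic_parity_gap(shown_ad, group):+.2f}")
# Roughly +0.60 here: the proxy largely reconstructs the protected category.
```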

The issue facing AI is similar to the “wrong kind of reasons” problem in philosophy of action. The AI can’t tell a systemic human bias from a reasoned consensus: both make us converge on an answer, and convergence is precisely what the algorithm is trained to track. It is difficult to say what, in principle, the difference between a systemic bias and a reasoned consensus is. It is difficult, in other words, to give the machine learning instrument parameters that tell when the “right kind of reason” supports a response and when the “wrong kind of reason” does.

In philosophy of action, the difficulty of drawing this distinction is illustrated by cases like the following: you are offered $50,000 to (sincerely) believe that grass is red. You have a reason to believe, but intuitively it is the wrong kind of reason. Similarly, we could imagine a case where you will be punished unless you (sincerely) desire to eat glass. The offer of money doesn’t show that “grass is red” is true, and the threat doesn’t show that eating glass is choice-worthy, yet each somehow promotes the belief or desire. For the AI, a racist or sexist bias leads to a reliable response in the way that the offer and the threat promote a behavior: it is disconnected from a “good” response, but it is the answer to go with.

For International Women’s Day, Jeanette Winterson suggested that artificial intelligence may have a significantly detrimental effect on women. Women make up only 18% of computer science graduates and are thus largely left out of designing and directing this new horizon of human development. That exclusion can exacerbate the prejudices inherent in the design of algorithms that will only become more critical to more arenas of life.

Meredith is an Assistant Professor at the University of Wisconsin, Whitewater. She earned her PhD at the University of California, Riverside, with a research focus in Philosophy of Action and Practical Reasoning and continues to explore the relationship between reason and value. Her current research consists of investigating modes of agential endorsement: how an agent's understanding of what is good, what is reasonable, what she desires, and who she is, informs what she does. Meredith is also committed to public philosophy and applied ethics; in particular, she is invested in illuminating debates in biomedical ethics, ethics of technology, and philosophy of law. Her website can be found at: https://mermcfadden.wixsite.com/philosopher.