
Separating Character from Policy at the Ballot Box


The Democratic primary and 2020 presidential election are just around the corner. The Democratic nominee’s best chance of winning likely involves trying to get votes from Trump supporters in swing states. In an effort to do this, the nominee will almost certainly attack Trump’s personal moral character. As polls suggest, many of his supporters won’t care. They’ll draw a sharp line between the person and their policies. I am going to argue that they’re basically right to do this. Trump should be voted out of office because he’s been a terrible president — not because he’s been a terrible person.

Endless ire is directed at Trump for being a morally terrible person in his private life. He deserves it. The full list of Trump’s personal moral flaws is far too long to review here, though many of the most egregious are well known. He’s repeatedly cheated his employees, his business partners, and the students of his fake university; he’s also cheated on his wives. On countless occasions, he has demonstrated himself to be an unrepentant racist, ableist, homophobe, Islamophobe, transphobe, and misogynist.

Many people, including those in the media, treat the fact that Trump is a terrible person as a decisive reason not to vote for him. The implication is that Trump’s personal moral failings make it wrong for people to support him politically. This is a mistake: sometimes, we should support candidates who say and do morally terrible things in their personal life.

When, exactly? Whenever the person who has done morally terrible things would do more good in office than any other candidate. To be clear, I agree that Trump has done morally terrible things in his personal life and I agree that people shouldn’t support him politically, but I deny that we shouldn’t support him politically because he’s done morally terrible things in his personal life.

Rather, our support for political candidates should be almost exclusively determined by how good it would be for the world if this candidate were elected — relative to our alternatives. This position may seem odd, but it’s one that I think many will find plausible upon reflection. To see why, consider an analogy. Suppose that there are ten people trapped in various places around town in a snowstorm. You have the keys to the only snowplow, which you can give to one of two people. The first is a moral saint whose moderate snowplow driving skills would result in just three people being rescued. The second is a moral reprobate whose superior snowplow driving skills would result in all ten people being rescued. Suppose, furthermore, that whoever ends up driving the plow will become (locally) famous and receive numerous accolades for their rescue mission. However, neither will use their newfound place in the spotlight to do anything else as important as saving lives.

Who should get the keys? It seems clear that you should give the keys to the moral reprobate who is going to save all ten lives over the moral saint who is only going to save three. This is so even though you’ll be giving power to a person who has done morally terrible things in their personal life. This is regrettable, but ensuring that more good people are saved is simply more important than ensuring that those doing the saving are themselves good people.

Now, the president can shape domestic and foreign policy in ways that affect the lives of billions of people, including future generations. This means that the reason to prioritize the value of a president’s effects on the world over their personal moral character is vastly greater in the real world than in my snowplow case.

Perhaps you’re worried that my analogy is too simple. After all, some of the particular ways in which Trump was a morally terrible private citizen provided good evidence that the policies he would enact would likewise be morally terrible, and indeed they were. If this is right, then Trump’s personal moral failings are at least indirectly relevant to whether we should support him politically. This much seems right to me. But it does not vindicate the ever-so-common assumption that a candidate’s personal moral failings themselves determine whether we should support them politically.

In fact, if I am right, many candidates’ personal moral failings should play almost no role in our political deliberation. This is because a candidate’s platform and political affiliation simply tell us more about how they would act in office than their personal failings do. Indeed, personal moral failings often tell us very little about what candidates would do in office. For instance, Trump’s infidelity told us nothing about how he would try to change the tax code or our healthcare system, or much of anything, really. On the other hand, Trump’s history of xenophobic comments was good evidence that he would support xenophobic policies. So a candidate’s personal moral failings can be quite relevant to the question of whether they deserve our political support, but only in cases where those failings provide good evidence of political moral failings. This consideration won’t apply in cases where a candidate’s personal moral transgressions are unrelated to policy issues (e.g., infidelity) or where they have genuinely disavowed past moral transgressions (e.g., opposition to gay marriage).

My view may seem to be on shaky ground when applied to Trump: he’s such a uniquely morally terrible person. But I’m not so sure it is. To see why, simply imagine that we’re faced with the choice of electing one of two candidates. One acts just like Trump does in his personal life, but would use his political power to enact whatever you take to be the best possible policies. Perhaps this includes mitigating the effects of climate change, providing universal healthcare, ending factory farming, and so on. The second is a moral saint in their personal life, but would do exactly what Trump has done in office. Whom should we elect? I think the answer is clear.

Racist, Sexist Robots: Prejudice in AI


The stereotype of robots and artificial intelligence in science fiction is largely of a hyper-rational being, unafflicted by the emotions and social infirmities – like biases and prejudices – that impair us weak humans. However, there is reason to revise this picture. The more progress we make with AI, the more a particular problem comes to the fore: the algorithms keep reflecting parts of our worst selves back to us.

In 2017, research produced compelling evidence that AI picks up deeply ingrained racial and gender-based prejudices. Current machine learning techniques rely on algorithms interacting with human-generated data and behavior in order to better predict correct responses over time. Because they depend on humans for their standard of correctness, the algorithms cannot detect when a bias informs a “correct” response and when the humans are engaging in a non-prejudicial way. Thus, the best working AI algorithms pick up the racist and sexist underpinnings of our society. Some examples: the words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to math and engineering professions; European names were more closely associated with pleasantness and excellence.
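The mechanism can be illustrated with a toy sketch. The vectors below are made up for illustration (real systems learn hundreds of dimensions from billions of words), but the geometry works the same way: words that occur in similar contexts end up close together, and “closeness” is measured by cosine similarity, so stereotyped associations in the training text become stereotyped distances in the learned space.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors (1 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Made-up three-dimensional "embeddings", chosen to mimic the reported pattern.
vectors = {
    "woman": [0.9, 0.1, 0.3],
    "man":   [0.1, 0.9, 0.3],
    "home":  [0.8, 0.2, 0.4],
    "math":  [0.2, 0.8, 0.4],
}

def home_vs_math(word):
    """How much closer a word sits to 'home' than to 'math' in the learned space."""
    return cosine(vectors[word], vectors["home"]) - cosine(vectors[word], vectors["math"])

# In vectors learned from biased text, "woman" leans toward "home"
# and "man" toward "math" -- the stereotype lives in the geometry.
print(home_vs_math("woman") > home_vs_math("man"))
```

Note that nothing in the code mentions gender: the bias lives entirely in the learned positions of the vectors, which is why it cannot be removed just by writing a “neutral” algorithm.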

In order to prevent discrimination in housing, credit, and employment, Facebook has recently been forced to agree to an overhaul of its ad-targeting algorithms. The functions that determined how to target audiences for ads relating to these areas turned out to be racially discriminatory, not by design – the designers of the algorithms certainly didn’t encode racial prejudices – but because of the way they are implemented. The associations learned by the ad-targeting algorithms led to disparities in the advertising of major life resources. It is not enough to program a “neutral” machine learning algorithm (i.e., one that doesn’t begin with biases). As Facebook learned, the AI must have anti-discrimination parameters built in as well. Characterizing just what this amounts to will be an ongoing conversation. For now, the ad-targeting algorithms cannot take age, zip code, gender, or other legally protected categories into consideration.

The issue facing AI is similar to the “wrong kind of reasons” problem in the philosophy of action. The AI can’t tell a systemic human bias from a reasoned consensus: both make us converge on an answer, and convergence is what leads the algorithm to select a response. It is difficult to say what, in principle, the difference between a systemic bias and a reasoned consensus is. It is difficult, in other words, to give the machine learning instrument parameters for telling when there is the “right kind of reason” supporting a response and when there is the “wrong kind of reason” supporting it.

In the philosophy of action, the difficulty of drawing this distinction is illustrated by a case where, for instance, you are offered $50,000 to (sincerely) believe that grass is red. You have a reason to believe, but intuitively this is the wrong kind of reason. Similarly, we could imagine a case where you will be punished unless you (sincerely) desire to eat glass. The offer of money doesn’t show that “grass is red” is true; similarly, the threat doesn’t show that eating glass is choice-worthy. But each somehow promotes the belief or desire. For the AI, a racist or sexist bias leads to a reliable response in the way that the offer and the threat promote a behavior – it is disconnected from a “good” response, but it’s the answer to go with.

For International Women’s Day, Jeanette Winterson suggested that artificial intelligence may have a significantly detrimental effect on women. Women make up only 18% of computer science graduates and are thus largely left out of the design and direction of this new horizon of human development. This exclusion can exacerbate the prejudices inherent in the design of these algorithms, which will only become more critical to more arenas of life.

So I am a racist. What do I do now?

This post originally appeared on October 27, 2015.

Like most human beings, I grew up imbibing racist stereotypes. Since I am Italian, those stereotypes were to some extent different from the kind of stereotypes I would have acquired had I grown up in the United States. For instance, I thought all people “of color” were exotic and more beautiful than “Whites”. This positive, and yet still damaging, stereotype included Black women and men, and Asian men, who in the American dating market are known to be greatly disadvantaged.

My personal attitude was to some extent reflective of Italian culture. The fascination with women of color, for instance, is fairly widespread among Italian men, as you would expect given Italy’s colonial past and its relatively racially homogenous present.

When I started visiting the US academically more than ten years ago, I grew accustomed to a much more sophisticated discussion about race, and went through an awkward and often painful process of realization of how implicitly racist I was. I learned that asking “Where are you really from?” of a Seattle native of Korean descent was racist, or at the very least racially insensitive. I realized the tricky undertones of many expressions that I deemed simply descriptive, such as “Black music”. And I found out, much to my surprise, that even my aesthetic appreciation for non-Caucasian people was highly suspect.

I also discovered that Black women are supposed to be bossy, angry, and dependent on welfare, and that Black men are supposed to be criminals and absent fathers; that East-Asian men are supposed to be unattractive and effeminate, and all Asian women submissive; that Asians in general are good at science… Some of these stereotypes were somewhat in line with my own culture’s, if not necessarily my own, but some were a complete surprise, and that surprise, that sense of “I would never think that” gave me an unwarranted sense of reassurance. When taking the IAT, I even compared positively to White Americans with regard to implicit bias toward Native Americans. So I thought: now that I know all this stuff about race, and given that I am a committed anti-racist, I’ll get rid of all the bad stuff, and I’ll stop being racist!

But, in fact, it didn’t go quite like that… When walking in segregated New Haven, seeing hooded Black men walking behind me made me nervous. I was very aware and ashamed of my own nervousness, but I was nervous nonetheless. Later on, when living in the United Kingdom, I found myself mistaking Black men for store employees. These are only two of the most unnerving instances of my implicit racism surfacing to my uncomfortable consciousness.

And it doesn’t even stop at race: I have become aware of many other forms of discrimination over the years, and that has greatly increased my capacity for catching myself being implicitly homophobic or transphobic, fattist, ableist, and so forth. But, in fact, it seems to have only increased my awareness, not my ability to be less biased.

Philosopher Robin Zheng, whose research is on moral responsibility and implicit bias, has reassured me that I am not alone. Empirical research confirms that fighting implicit bias requires a lot more than just informing people about the reality of discrimination.

This research wouldn’t be surprising to those familiar with more general work on implicit reasoning. For those who are not, I find useful an ancient metaphor from the Buddhist tradition popularized by Jonathan Haidt in his acclaimed pop-psychology book The Happiness Hypothesis. The metaphor describes the human mind as composed of an elephant and its rider. According to Haidt, the elephant roughly corresponds to what has been called System I in dual-processing accounts of reasoning: a system that is old in evolutionary terms, and shared with other animals. This system comprises a set of autonomous subsystems that include both innate input modules and domain-specific knowledge acquired by a domain-general learning mechanism. System I is fast, automatic, and operates below the level of consciousness. The rider roughly corresponds to System II: a system that is evolutionarily recent and distinctively human. System II permits abstract reasoning and hypothetical thinking, and is slower, controlled, and conscious. “The rider evolved to serve the elephant,” says Haidt, and while it may sometimes override it, trick it into obedience, “it cannot order the elephant around against its will” (The Happiness Hypothesis, p. 17).

This tension between the rider and the elephant has many different manifestations, but one that is particularly relevant to the discussion of implicit biases is the case of mental intrusions. If we are explicitly asked not to think about a white bear, all we can think of is, you guessed it, a white bear. This ironic process of mental control is the consequence of automatic and controlled processes firing at each other: the request not to think a certain thought activates System II, which attempts to suppress the thought. System I then activates automatic monitoring of one’s progress, which in this case means continuously checking whether one is thinking about a white bear. That move turns out to be obviously counterproductive, since it reintroduces the very thought one is supposed to ban. But “because controlled processes tire quickly, eventually the inexhaustible automatic processes run unopposed, conjuring up herds of white bears” (The Happiness Hypothesis, p. 20). Dan Wegner, who first studied ironic processes in a lab setting, has shown that they also affect people who try to repress unendorsed stereotypes.

While there is interesting research addressing more productive and effective ways of fighting implicit bias and stereotyping, I want to conclude with a remark about the implications of this empirical literature for microaggressions, a topic that has gained much attention recently.

I largely disagree with Haidt’s criticisms of trigger and content warnings in academic settings, for reasons well-articulated by Regina Rini and Kate Manne. But I do share his attention to underlying psychological mechanisms, and I worry that they are sometimes neglected in the political commentary.

Committed anti-racists are unlikely to engage in overtly prejudiced behavior. However, they may still find themselves inadvertently engaging in microaggressions such as those I described at the beginning of the post: inappropriate jokes or questions, or bona fide mistakes stemming from deeply ingrained stereotypes. The elephant acts against the rider’s wishes, or even awareness: when something that has been internalized as a threat (such as a hooded Black man) appears in view, the elephant doesn’t hesitate; it kicks the rider in the shins, making the rider jump. The rider will take a second or two to realize that there is in fact no threat, and that will be too late: the jump was visible, the offense taken, the harm done.

Not fully understanding how powerful these unconscious mechanisms are affects not only our moral assessment of the perpetrators (which can also be self-assessment). It also produces condemnatory reactions that, while appropriate in theory, are not necessarily fruitful in practice, such as the relatively widespread paralyzing White guilt of well-intentioned liberals, who go around admitting their White privilege without knowing exactly what to do about it. Realizing that some of the mechanisms motivating our behavior are outside of our direct control allows us to focus on indirect ways to modify our behavior, and to shift from a sterile admission of White privilege to a more proactive commitment to changing the institutional injustice that gives rise to it. You can’t order the elephant around at will, but you can change the environment it is raised in.