
Should AI Reflect Us as We Are or as We Wish to Be?

closeup image of camera lens

Our understanding of AI has come a very long way in a short amount of time. But one issue we have yet to crack is the prevalence of bias. This seems especially troubling since AI now helps determine everything from whether you go to jail, to whether you get a job, to whether you receive healthcare. Efforts have been made to make algorithms less biased – like including greater diversity in training data – but issues persist. Recently, Google had to suspend image generation on its Gemini AI platform because of the pictures it was producing. Users reported that when they asked for pictures of Nazi soldiers in 1943, they received images of multi-ethnic people in Nazi uniforms. Another user requested a picture of a medieval British king and received equally counterfactual content. Clearly, our desire to combat social bias can conflict with our desire for accuracy. How should problems like this be addressed?

There are good reasons for wanting to prevent AI from producing content that reflects socially harmful bias. We don’t want it simply to reinforce past prejudice. We don’t want only images of men as doctors and lawyers and of women as secretaries and nurses. If biases like these were systematic across AI, they could perpetuate social stereotypes. Presumably, we might instead want a request for images of a CEO at work to return a significant portion of women (regardless of past statistics).

A similar concern arises when we consider generative AI’s handling of race. To generate an image, an algorithm requires large amounts of training data to pull from. If there are biases in the training data, this can lead to biased results as well. If the training data contains mostly images of people with white skin and few images of people with black or brown skin, the algorithm will be less likely to depict black- or brown-skinned people and may struggle to reproduce different ethnic facial features. Research on facial recognition algorithms, for example, has demonstrated how difficult it can be to discern different skin tones without a diverse training dataset.

Correcting for these problems requires that developers be mindful of the kinds of assumptions they make when designing an algorithm and curating training data. As Timnit Gebru – who famously left Google over a dispute about ethical AI – has pointed out, “Ethical AI is not an abstract concept but is one that is in dire need of a holistic approach. It starts from who is at the table, who is creating the technology, and who is framing the goals and values of AI.” Without a serious commitment to inclusion, it will be impossible to catch bias before it gets reproduced again and again. It’s a system of garbage in, garbage out.

While biased AI can have significant real-life impacts on people – such as the woman who lost her refugee status after a facial recognition algorithm failed to properly identify her, or the use of predictive policing and recidivism algorithms that tend to target Black people – there’s also the risk that, in attempting to cleanse real-life biases from AI, we distort reality. The curation of training data is a delicate balance. Attempts to purge bias from AI can go too far, and the results may increasingly reflect the world as we ideally imagine it rather than as it actually is.

The Google Gemini controversy demonstrates this clearly: in attempting to ensure that its algorithm features diverse people, Google produced results that are not always true to life. The example of women CEOs makes the problem clearer. If someone performs a Google image search for CEOs, it might return mostly images of men, and we might object that this is biased. Surely, if a young person were to look up images of CEOs, we would want them to find examples other than men. Yet, in reality, women account for only about ten percent of Fortune 500 CEOs. If the public instead gets the impression that women make up a far larger share of CEOs than they actually do, people may not recognize the real-life bias that exists. By curating an idealized AI version of our world, we cover up problems, become less aware of real-life bias, and are less prepared to resolve it.

Consider an example like predictive policing, where algorithms are often trained on crime data collected through biased policing. While we can attempt to correct the data, we should also be reminded of our responsibility to correct those practices in the first place. That an algorithm fails to produce an image of a female CEO, or that it predicts crime in poor neighborhoods, is not the algorithm’s fault; it simply reflects what it sees. Correcting for bias in data may eventually go a long way towards correcting bias in society, but it can also create problems by distorting our understanding of society. There is moral risk in deciding the degree to which we want AI to reflect our own human ugliness back at us and the degree to which we want it to reflect something better.

Academic Activism, Objectivity, and Public Outreach

photograph of teacher presenting to packed classroom

In a previous column, I argued that academics should not — with significant qualifications — be political activists. In his thoughtful and admirably objective reply, Tim Sommers makes two principal arguments highlighting a genuine weak point in my original treatment of this issue, and thus helpfully pushes me to shore up and defend that particular aspect of it. Ultimately, though, neither argument is fully persuasive.

First, Sommers contends that the conception of objectivity that underlies my argument against academic activism is “unhelpful,” since it conflates objectivity with “having no views at all or concealing your views.” But Sommers rightly points out that the “undecided and the waffling” are not necessarily more objective than “the firmly committed.” If this conception of objectivity — call it objectivity as disinterestedness or ambivalence — formed the basis of my case against academic activism, then Sommers’ argument would constitute a serious challenge.

Fortunately, my argument does not rely on a notion of objectivity that identifies it with either disinterestedness or ambivalence. My idea of objectivity is, I hope, uncontroversial: to be objective is to be capable of properly weighing evidence and arguments. My empirical claim is that being passionately committed to a political goal tends to make it more difficult to be objective in this sense because it increases our susceptibility to various well-documented cognitive biases, such as motivated reasoning and confirmation bias. In my view, then, the relation between objectivity and a certain kind of disinterestedness is not definitional, but empirical. That is, “objective” does not mean “disinterested”; rather, there is a contingent psychological link between being objective and being disinterested in a particular way.

Thus, it is far from the case that one cannot possibly be objective about some issue while also being passionately committed to a political goal related to that issue. Nor does objectivity require either suspension of judgment or ambivalence: one can easily be firmly convinced of the correctness of one position in a certain debate, yet not passionately committed to enacting a political goal that flows from that position. In fact, I would go further and say that passionate interest in some issue is often necessary to motivate a person to devote significant time and energy to understanding it. Objectivity, then, does not require or even favor disinterestedness in all respects. However, it seems to be the case that being passionately committed not just to understanding an issue, but to the attainment of a political goal related to the issue, tends to degrade one’s ability to properly weigh evidence and arguments concerning it.

Because my conception of objectivity does not identify it with disinterestedness, and certainly not with having no views or only weakly-held views, confining one’s pedagogy to the “realm of the reasonable” — that is, only teaching “positions and reasons generally recognized by professionals in our fields” — does not represent a departure from objectivity. Nor does good pedagogy require “disguising your own views” to be consistent with this conception. Rather, objectivity requires manifesting the capacity to properly weigh evidence and arguments — and in particular, to take seriously proper evidence and plausible arguments that cut against one’s own political commitments. Being a passionately committed political activist not only makes doing this more difficult; it also makes one appear less able to do it. But in teaching, both objectivity and the appearance of objectivity matter.

There is one more argument against objectivity that Sommers does not make, but which is now so commonplace in some academic quarters that addressing it at this juncture would be worthwhile. It is frequently pointed out that perfect objectivity is unattainable. This is certainly true if by “objectivity” we mean either disinterestedness or ambivalence, or the ability to properly weigh evidence and arguments. But the familiar inference from this true premise to the conclusion that objectivity is not a worthwhile ideal has never been clear to me. Unattainability is arguably inherent in the nature of any ideal — that is, in part, what makes it an ideal. Now, an argument from perfect objectivity’s unattainability might get off the ground if we add either of two claims: that it is impossible to be more or less objective, or that the costs of trying to be more objective outweigh the benefits. But it is possible to be more or less objective — to get closer to or farther away from the ideal of perfect objectivity. And while it is certainly possible that ethical or epistemic imperatives appropriate to non-ideal conditions conflict with our ideals, this does not seem to be the case with respect to objectivity in the context of academic research and teaching.

Next, Sommers argues that the line between public outreach and activism is “meaningless,” or alternatively that drawing this distinction is merely a way of categorizing the same underlying activity according to one’s affinity for the political goals the activity serves. This objection has bite because I had insisted that public outreach allows academics whose activism substantially relates to their research and teaching to share their expertise with the general public while avoiding the pitfalls of activism. If there is no meaningful distinction between public outreach and activism, or it is only a covert way of denigrating activism of which one disapproves, then this argument is in trouble.

This is a more difficult objection to answer, since I myself conceded that the line between public outreach and activism is a blurry one. Moreover, the distinction must ultimately be found in the quality and intensity of the subjective attitudes of a person, with their outward activities — for example, picketing, boycotting, canvassing, writing opinion pieces, giving legislative testimony — only a rough proxy for those attitudes. Thus, it is certainly possible that someone deeply and continuously involved in activities characteristic of political activism has only moderate levels of commitment to the political goals their activism serves. But this will be an unusual case. For this reason, the activities that tend to indicate passionate commitment to a political goal are fairly grouped under the heading of “political activism”; the activities that tend to indicate a desire to improve the quality of public debate are likewise fairly grouped under the heading of “public outreach.” These categories are not mutually exclusive; and ultimately, the distinction turns, at least in part, on what the academic wants to do with their public-facing activity and the strength of their desire.

I must insist, however, that the distinction is not necessarily a disguised way of denigrating political activity with a particular ideological complexion. In my case, just the opposite is true: I tend to worry more about left-wing academic activism despite my own leftist sympathies, for the simple reason that a substantial majority of academics are left-leaning. Of course, all arguments may be wielded in bad faith. But this possibility does not warrant dismissing the argument out of hand.

Sommers’ reply to my column exemplifies the sort of engagement with opposing viewpoints that the cultivation of objectivity makes possible. I fear, however, that his advocacy of academic activism would, if successful, make such engagements rarer.

Interrogating the Sunk Cost Fallacy

photograph of cruise ship sinking

It’s Saturday afternoon. You are lazing on the couch, thinking about your evening. You had planned to attend a concert, but you’re feeling tired. You realize that you would, on the whole, have a much better evening if you stayed home. So, you decide to skip the concert.

I think we can all agree that this would be reasonable. But let’s add a detail. You already purchased a concert ticket for $400, which you can’t recoup. Should you go to the concert, after all?

Regardless of what you should do, many of us would go to the concert, in full knowledge that staying in would be time better spent, because we have already paid for the ticket.

This is an example of what’s often called the ‘sunk cost fallacy.’

Here’s another example. Your family has booked an expensive vacation to Yosemite with the intention of having a fun, relaxing vacation. Unfortunately, though, Yosemite is on fire. You know that if you take the vacation as planned, you will be forced to stay indoors on account of the smoke and will experience virtually no fun or relaxation. Yet the money you spent on booking can’t be recouped. The only other option is a staycation, which would be low-grade fun and relaxing.

A family that decides to go on vacation simply because they have already invested resources into that vacation honors sunk costs.

Psychologists tell us that humans regularly honor sunk costs. That this tendency is irrational is often considered axiomatic and taught as such. For example, the textbook Psychology in Economics and Business proclaims:

Economic theory implies that historical costs are irrelevant for present decision making and only [present and future] costs and benefits should be taken into account. In everyday life, however, this implication of economic theory is frequently neglected thus forming another instance of irrational behavior.

The underlying idea is something like the following. The rational actor aims to promote the best possible outcome. This requires him to attend to the potential consequences of his actions. But sunk costs are by definition unrecoverable. So, the rational actor shouldn’t take them into account in deciding what to do. Someone who foregoes better outcomes simply because he has incurred costs that are irretrievably lost irrationally leaves goods on the table for nothing in return. He compounds, rather than recoups, his losses.

This seems reasonable enough. But things are not quite as simple as they seem.

First, the claim that “historical costs are irrelevant for present decision making” can easily be misunderstood.

Sometimes actors should attend to sunk costs because those costs constitute evidence bearing on the likelihood of future success.

A firm that has invested extensive resources into a fruitless project would be foolish to ignore those investments when deliberating about whether to continue dedicating resources to the project, since the sunk costs may constitute good evidence that future investments will also be fruitless.

Moreover, sometimes what looks from the outside like honoring sunk costs is really something else. Suppose you know that if you choose not to go to the concert, you will inevitably feel bad about wasting money. In this case, it might be perfectly rational for you to attend the concert in order to avoid this feeling. Or suppose you know that skipping the vacation will cause family conflict to bubble up down the line. Again, it might be all-things-considered best to go to Yosemite despite the smoke. True, the underlying dispositions may be irrational. But individuals can’t always control their emotions, and families can’t always control their dynamics. A rational actor must act within the constraints that apply to him. To those who are unaware of those constraints, a rational action can look irrational.

All this can perhaps be acknowledged by the defender of sunk cost orthodoxy. However, several philosophers have argued that this orthodoxy is mistaken.

One reason for being suspicious is that the tendency to honor sunk costs can be leveraged in useful ways. The philosopher Robert Nozick has argued that it can be utilized to counteract the tendency to act against one’s considered judgment about what one should do. Suppose that on Friday you think that you should go to the concert on Saturday because it will be educational, but you anticipate that once Saturday afternoon rolls around you will have trouble mustering the motivation to get off the couch. If you know that you have a tendency to honor sunk costs, then you can, on Friday, increase the likelihood that on Saturday you will do what you think you should do by purchasing the ticket in advance. Similarly, the tendency to honor sunk costs can be utilized to convince others that you will do something. You might be able to convince an incredulous friend that you will attend the concert by showing them that you have already purchased a ticket; if your friend knows that you tend to honor sunk costs, this will give them good reason to believe that you will attend the concert.

There’s more. The doctrine that present actions can’t touch past events, which is implicit in the sunk cost orthodoxy, is not entirely true.

While it’s true that present actions can’t have a causal impact on past events, present actions can change the meaning of past events in ways that matter to us. And as Thomas Kelly has argued, this can make it reasonable to attend to sunk costs.

Tolstoy’s Anna Karenina provides an illustrative example. Anna leaves her husband and beloved child to start a new life with her sybaritic lover, Vronsky. The new relationship fails terribly. In a famous discussion, British philosopher Bernard Williams points out that this failure seems to retroactively tarnish Anna’s decision by showing it to have been unjustified. In contrast, had her relationship flourished, this would have vindicated the cost she incurred. The intuition that success or failure can retroactively vindicate or tarnish a costly choice is not uncommon in human life. It’s not hard to imagine that Anna would have preferred that the cost she paid to start her new life with Vronsky not turn out to have been fruitless. And if she were to have such a preference, this would give her reason to take the fact that she paid this cost into consideration when deliberating about, say, whether to try to salvage her relationship with Vronsky. Thus, sometimes it seems to make rational sense to honor sunk costs.

Then there is the issue of moral sunk costs. Suppose you are a general in charge of deciding whether to authorize a mission to rescue two hostages. You know that the rescue mission will very probably succeed but also cost the life of one soldier. Let’s assume that you are morally justified in sacrificing up to one life to save the two hostages. You authorize the mission. Unluckily, the mission is unsuccessful, and a soldier dies. Then another opportunity arises. You can authorize a second rescue mission, which you know is guaranteed to succeed but will cost the life of another soldier. The question is this:

Is it morally acceptable to authorize this second mission, given that one soldier has already died and we stipulated that you are only justified in sacrificing up to one life to save the hostages?

In other words, should we honor moral sunk costs?

Ethicists disagree on this issue. Some think that assessments of moral costs and benefits should be prospective, ignoring sunk costs. On this view, you should authorize the second mission. Others argue that there’s an overall limit to the moral costs that can be justifiably incurred to achieve a worthwhile objective, meaning that moral sunk costs should be taken into account. On this view, you shouldn’t authorize the mission. Still others adopt an intermediate approach. The point, for our purposes, is that the answer to this question is non-trivial. To treat one answer as axiomatic is a mistake.

All this is to say that authoritative pronouncements decrying the tendency to honor sunk costs as clearly irrational are misleading at best. It’s not at all obvious that “[a] rational decision maker is interested only in the future consequences of current investments,” as the esteemed economist Daniel Kahneman put it. There are cases, like the original concert example, perhaps, where this is true. And it may always be irrational to choose an outcome merely because one has incurred sunk costs. But in many cases, it seems to make perfect sense to account for sunk costs in deliberation. We just need to be reflective about how and why we do so.

The Scourge of Self-Confidence

photograph of boy cliff-jumping into sea

Our culture is in love with self-confidence — defined by Merriam-Webster as trust “in oneself and in one’s powers and abilities.” A Google search of the term yields top results with titles such as “Practical Ways to Improve Your Confidence (and Why You Should)” (The New York Times), “What is Self-Confidence? + 9 Ways to Increase It” (positivepsychology.com), and “How to Be More Confident” (verywellmind.com). Apparently, self-confidence is an especially valued trait in a romantic partner: a Google search for “self-confidence attractive” comes back with titles like “Why Confidence Is So Attractive” (medium.com), “4 Reasons Self-Confidence is Crazy Sexy” (meetmindful.com), and “6 Reasons Why Confidence Is The Most Attractive Quality A Person Can Possess” (elitedaily.com).

I will argue that self-confidence is vastly, perhaps even criminally, overrated. But first, a concession: clearly, some degree of self-confidence is required to think or act effectively. If a person has no faith in her ability to make judgments, she won’t make many of them. And without judgments, thinking and reasoning is hard to imagine, since judgments are the materials of thought. Similarly, if a person has no faith in her ability to take decisions, she won’t take many of them. And since decisions are necessary for much intentional action, such a person will often be paralyzed into inaction.

Nevertheless, the value that we place on self-confidence is entirely inappropriate. The first thing to note is that behavioral psychologists have gathered a mountain of evidence showing that people are significantly overconfident about their ability to make correct judgments or take good decisions. Representative of the scholarly consensus around this finding is a statement in a frequently-cited 2004 article published in the Journal of Research in Personality: “It has been consistently observed that people are generally overconfident when assessing their performance.” Or take this statement, from a 2006 article in the Journal of Marketing Research: “The phenomenon of overconfidence is one of the more robust findings in the decision and judgment literature.”

Furthermore, overconfidence is not a harmless trait: it has real-world effects, many of them decidedly negative. For example, a 2013 study found “strong statistical support” for the presence of overconfidence bias among investors in developed and emerging stock markets, which “contribut[ed] to the exceptional financial instability that erupted in 2008.” A 2015 paper suggested that overconfidence is a “substantively and statistically important predictor” of “ideological extremeness” and “partisan identification.” And in Overconfidence and War: The Havoc and Glory of Positive Illusions, published at the start of the second Iraq War, the Oxford political scientist Dominic Johnson argued that political leaders’ overconfidence in their own virtue and ability to predict and control the future significantly contributed to the disasters of World War I and the Vietnam War. And of course, the sages of both Athens and Jerusalem have long warned us about the dangers of pride.

To be sure, there is a difference between self-confidence and overconfidence. Drawing on the classical Aristotelian model of virtue, we might conceive of “self-confidence” as a sort of “golden mean” between the extremes of overconfidence and underconfidence. According to this model, self-confidence is warranted trust in one’s own powers and abilities, while overconfidence is an unwarranted excess of such trust. So why should the well-documented and baneful ubiquity of overconfidence make us think we overvalue self-confidence?

The answer is that valuing self-confidence to the extent that we do encourages overconfidence. The enormous cultural pressure to be and act more self-confident in order to achieve at work, attract a mate, or make friends is bound to lead to genuine overestimations of ability and to more instances of people acting more self-confident than they really are. Both outcomes risk bringing forth the rotten fruits of overconfidence.

At least in part because we value self-confidence so much, we have condemned ourselves to suffer the consequences of pervasive overconfidence. As I’ve already suggested, my proposed solution to this problem is not a Nietzschean “transvaluation” of self-confidence, a negative inversion of our current attitude. Instead, it’s a more classical call for moderation: our attitude towards self-confidence should still be one of approval, but approval tempered by an appreciation of the danger of encouraging overconfidence.

That being said, we know that we tend to err on the side of overconfidence, not underconfidence. Given this tendency, and assuming, as Aristotle claimed, that virtue is a mean “relative to us” — meaning that it varies according to a particular individual’s circumstances and dispositions — it follows that we probably ought to value what looks a lot like underconfidence to us. In this way, we can hope to encourage people to develop a proper degree of self-confidence — but no more than that.

Nuclear War and Scope Neglect

photograph of 'Fallout Shelter' sign in the dark

“Are We Facing Nuclear War?” — The New York Times, 3/11/22

“Pope evokes spectre of nuclear war wiping out humanity” — Reuters, 3/17/22

“The fear of nuclear annihilation raises its head once more” — The Independent, 3/18/22

“The threat of nuclear war hangs over the Russia-Ukraine crisis” — NPR, 3/18/22

“Vladimir Putin ‘asks Kremlin staff to perform doomsday nuclear attack drill’” — The Mirror, 3/19/22

“Demand for iodine tablets surge amid fears of nuclear war” — The Telegraph, 3/20/22

“Thinking through the unthinkable” — Vox, 3/20/22

The prospect of nuclear war is suddenly back, leading many of us to ask some profound and troubling questions. Just how terrible would a nuclear war be? How much should I fear the risk? To what extent, if any, should I take preparatory action, such as stockpiling food or moving away from urban areas?

These questions are all, fundamentally, questions of scale and proportion. We want our judgments and actions to fit with the reality of the situation — we don’t want to needlessly over-react, but we also don’t want to under-react and suffer an avoidable catastrophe. The problem is that getting our responses in proportion can prove very difficult. And this difficulty has profound moral implications.

Everyone seems to agree that a nuclear war would be a significant moral catastrophe, resulting in the loss of many innocent lives. But just how bad of a catastrophe would it be? “In risk terms, the distinction between a ‘small’ and a ‘large’ nuclear war is important,” explains Seth Baum, a researcher at a U.S.-based think tank, the Global Catastrophic Risk Institute. “Civilization as a whole can readily withstand a war with a single nuclear weapon or a small number of nuclear weapons, just as it did in WW2. At a larger number, civilization’s ability to withstand the effects would be tested. If global civilization fails, then […] the long-term viability of humanity is at stake.”

Let’s think about this large range of possible outcomes in more detail. Writing at the height of the Cold War, the philosopher Derek Parfit compared the value of:

    1. Peace.
    2. A nuclear war that kills 99% of the world’s existing population.
    3. A nuclear war that kills 100%.

Everyone seems to agree that 2 is worse than 1 and that 3 is worse than 2. “But,” asks Parfit, “which is the greater of these two differences? Most people believe that the greater difference is between 1 and 2. I believe that the difference between 2 and 3 is very much greater.”

Parfit was, it turns out, correct about what most people think. A recent study posing Parfit’s question (lowering the lethality of option 2 to 80% to remove confounders) found that most people thought there is a greater moral difference between 1 and 2 than between 2 and 3. Given the world population is roughly 8 billion, the difference between 1 and 2 is an overwhelming 6.4 billion more lives lost. The difference between 2 and 3 is “only” 1.6 billion more lives lost.

Parfit’s reason for thinking that the difference between 2 and 3 was a greater moral difference was because 3 would result in the total extinction of humanity, while 2 would not. Even after a devastating nuclear war such as that in 2, it is likely that humanity would eventually recover, and we would lead valuable lives once again, potentially for millions or billions of years. All that future potential would be lost with the last 20% (or in Parfit’s original case, the last 1%) of humanity.

If you agree with Parfit’s argument (the study found that most people do, once they are reminded of the long-term consequences of total extinction), you probably want an explanation of why most people initially judge otherwise. Perhaps most people are being irrational or insufficiently imaginative. Perhaps our moral judgments and behavior are systematically faulty. Perhaps humans are victims of a shared psychological bias of some kind. Psychologists have repeatedly found that people aren’t very good at scaling their judgments and responses up and down to fit the size of a problem. They name this cognitive bias “scope neglect.”

The evidence for scope neglect is strong. Another psychological study asked respondents how much they would be willing to donate to prevent migrating birds from drowning in oil ponds — ponds that could, with enough money, be covered by safety nets. Respondents were told that either 2,000, 20,000, or 200,000 birds were affected each year. The results? Respondents were willing to spend $80, $78, and $88 respectively. The scale of the response had no clear connection with the scale of the issue.

Scope neglect can explain many of the most common faults in our moral reasoning. Consider the quote, often attributed to Josef Stalin, “If only one man dies of hunger, that is a tragedy. If millions die, that’s only statistics.” Psychologist Paul Slovic called this tendency to fail to conceptualize the scope of harms suffered by large numbers of people mass numbing. Mass numbing is a form of scope neglect that helps explain ordinary people standing by passively in the face of mass atrocities, such as the Holocaust. The scale of suffering, distributed so widely, is very difficult for us to understand. And this lack of understanding makes it difficult to respond appropriately.

But there is some good news. Knowing that we suffer from scope neglect allows us to “hack” ourselves into making appropriate moral responses. We can exploit our tendency for scope neglect to our moral advantage.

If you have seen Steven Spielberg’s Schindler’s List, then you will remember a particular figure: the girl in the red coat. The rest of the film is in black and white, and the suffering borders continually on the overwhelming. The only color in the film is the red coat of a young Jewish girl. It is in seeing this particular girl, visually plucked out from the crowd by her red coat, that Schindler confronts the horror of the unfolding Holocaust. And it is this girl whom Schindler later spots in a pile of dead bodies.

The girl in the red coat is, of course, just one of the thousands of innocents who die in the film, and one of the millions who died in the historical events the film portrays. The scale and diffusion of the horror put the audience members at risk of mass numbing, losing the capacity to have genuine and appropriately strong moral responses. But using that dab of color is enough for Spielberg to make her an identifiable victim. It is much easier to understand the moral calamity that she is a victim of, and then to scale that response up. The girl in the red coat acts as a moral window, allowing us to glimpse the larger tragedy of which she is a part. Spielberg uses our cognitive bias for scope neglect to help us reach a deeper moral insight, a fuller appreciation of the vast scale of suffering.

Charities also exploit our tendency for scope neglect. The donation-raising advertisements they show on TV tend to focus on one or two individuals. In a sense, this extreme focus makes no sense. If we were perfectly rational and wanted to do the most moral good we could, we would presumably be more interested in how many people our donation could help. But charities know that our moral intuitions do not respond to charts and figures. “The reported numbers of deaths represent dry statistics, ‘human beings with the tears dried off,’ that fail to spark emotion or feeling and thus fail to motivate action,” writes Slovic.

When we endeavor to think about morally profound topics, from the possibility of nuclear war to the Holocaust, we often assume that eliminating psychological bias is the key to good moral judgment. It is certainly true that our biases, such as scope neglect, typically lead us to poor moral conclusions. But our biases can also be a source for good. By becoming more aware of them and how they work, we can use our psychological biases to gain greater moral insight and to motivate better moral actions.

Losing Ourselves in Others

illustration of Marley's ghost in A Christmas Carol

The end of the year is a time when people often come together in love and gratitude. Regardless of religion, many gather to share food and drink or perhaps just to enjoy one another’s company. It’s a time to celebrate the fact that, though life is hard and dangerous, we made it through one more year with the help of kindness and support from one another.

Of course, this is why the end of the year can also be really hard. Many people didn’t survive the pandemic and have left enormous voids in their wake. Even for families and friend groups who were lucky enough to avoid death, many relationships didn’t survive.

Deep differences of opinion about the pandemic, race, and government have created chasms of frustration, distrust, and misunderstanding. If such rifts have opened even between those who cared deeply for one another, they are even less likely to be resolvable among casual acquaintances and members of our communities we only come to know as a result of our attempts to create social policy. This time of year can amplify our already significant sense of grief, loss, and loneliness — the comfort of community is gone. We feel what is missing acutely. How ought we to deal with these differences? Can we deal with them without incurring significant changes to our identities?

Moral philosophy throughout the course of human history has consistently advised us to love our neighbors. Utilitarianism tells us to treat both the suffering and the happiness of others impartially — to recognize that each sentient being’s suffering and happiness deserves to be taken seriously. Deontology advises us to recognize the inherent worth and dignity of other people. Care ethics teaches us that our moral obligations to others are grounded in care and in the care relationships into which we enter with them. Enlightenment moral philosophers like Adam Smith have argued that our moral judgments are grounded in sympathy and empathy toward others. We are capable of imaginatively projecting ourselves into the lives and experiences of other beings, and that provides the grounding for our sense of concern for them.

Moral philosophers have made fellow-feeling a key component in their discussions of how to live our moral lives, yet we struggle (and have always struggled) to actually empathize with fellow creatures. At least one challenge is that there can be no imaginative projection into someone else’s experiences and worldview if doing so is in conflict with everything a person cares about and with the most fundamental things with which they identify.

“Ought implies can” is a contentious but common expression in moral philosophy. It suggests that any binding moral obligation must be achievable; if we ought to do something, then we realistically can do the thing in question. If you tell me that I ought to have done more to end world hunger, for instance, that implies that it was possible for me to have done more to end world hunger (or, at least, that you believe that it was possible for me to have done so).

But there are different senses of “can.” One sense is that I “can” do something only if it is logically possible. Or, perhaps, I “can” do something only if it is metaphysically possible. Or, in many of the instances that I have in mind here, a person “can” do something only if it is psychologically possible. It may be the case that empathizing with one’s neighbor, even in light of all of the advice offered by wise people, may be psychologically impossible to do, or close to it. The explanation for this has to do with the ways in which we construct and maintain our identities over time.

Fundamental commitments make us who we are and make life worth living (when it is). In fact, the fragility of those commitments, and thus the fragility of our very identities, causes some philosophers to argue that immortality is undesirable. In Bernard Williams’ now-famous paper “The Makropulos Case: Reflections on the Tedium of Immortality,” he describes a scene from The Makropulos Affair, an opera by Czech composer Leoš Janáček. The main character, Elina, is given the opportunity to live forever — she just needs to keep taking a potion to extend her life. After many, many years of living, she decides to stop taking the potion, even though she knows that if she does so she will cease to exist. Williams argues that anyone who takes such a potion — anyone who chooses to extend their life indefinitely — would either inevitably become bored or would change so much that they lose their identity — they would, though they continue to live, cease to be who they once were.

One of the linchpins of Williams’ view is that, if a person puts themselves in countless different circumstances, they will take on desires, preferences, and characteristics that are so unlike the “self” that started out on the path that they would become someone they no longer recognize. One doesn’t need to be offered a vial of magical elixir to take on the potential for radical change — one has simply to take a chance on opening oneself up to new ideas and possibilities. To do so, however, is to risk becoming unmoored from one’s own identity — to become someone that an earlier version of you wouldn’t recognize. While it may frustrate us when our friends and loved ones are not willing to entertain the evidence that we think should change their minds, perhaps this shouldn’t come as a surprise — we sometimes see change as an existential threat.

Consider the case of a person who takes being patriotic as a fundamental part of their identity. They view people who go into professions that they deem as protective of the country — police officers and military members — to be heroes. If they belong to a family which has long held the same values, they may have been habituated to have these beliefs from an early age. Many of their family members may be members of such professions. If this person were asked to entertain the idea that racism is endemic in the police force, even in the face of significant evidence, they may be unwilling and actually incapable of doing so. Merely considering such evidence might be thought of, consciously or not, as a threat to their very identity.

The challenge that we face here is more significant than might be suggested by the word “bias.” Many of these beliefs are reflective of people’s categorical commitments and they’d rather die than give them up.

None of this is to say that significant changes to fundamental beliefs are impossible — such occurrences are often what philosophers call transformative experiences. That language is telling. When we are able to entertain new beliefs and attitudes, we express a willingness to become new people. This is a rare enough experience to count as a major plot point in a person’s life.

This leaves us with room for hope, but not, perhaps, for optimism. Events of recent years have laid bare the fundamental, identity-marking commitments of friends, family, and members of our community. Reconciling these disparate commitments, beliefs, and worldviews will require nothing less than transformation.

Ethics and Job Apps: Why Use Lotteries?

photograph of lottery balls coming out of machine

This semester I’ve been 1) applying for jobs, and 2) running a job search to select a team of undergraduate researchers. This has resulted in a curious experience. As an employer, I’ve been tempted to use various techniques in running my job search that, as an applicant, I’ve found myself lamenting. Similarly, as an applicant, I’ve made changes to my application materials designed to frustrate those very purposes I have as an employer.

The source of the experience is that the incentives of search committees and the incentives of job applicants don’t align. As an employer, my goal is to select the best candidate for the job. As an applicant, my goal is to get a job, whether I’m the best candidate or not.

As an employer, I want to minimize the amount of work it takes for me to find a dedicated employee. Thus, as an employer, I’m inclined to add ‘hoops’ to the application process: by requiring applicants to jump through those hoops, I make sure I only look through the applications of those who are really interested in the job. But as an applicant, my goal is to minimize the amount of time I spend on each application. Thus, I am frustrated with job applications that require me to develop customized materials.

In this post, I want to do three things. First, I want to describe one central problem I see with application systems — what I will refer to as the ‘treadmill problem.’ Second, I want to propose a solution to this problem — namely the use of lotteries to select candidates. Third, I want to address an objection employers might have to lotteries — namely that it lowers the average quality of an employer’s hires.

Part I—The Treadmill Problem

As a job applicant, I care about the quality of my application materials. But I don’t care about the quality intrinsically. Rather, I care about the quality in relation to the quality of other applications. Application quality is a good, but it is a positional good. What matters is how strong my applications are in comparison to everyone else.

Take as an analogy the value of height while watching a sports game. If I want to see what is going on, it’s not important just to be tall, rather it’s important to be taller than others. If everyone is sitting down, I can see better if I stand up. But if everyone stands up, I can’t see any better than when I started. Now I’ll need to stand on my tiptoes. And if everyone else does the same, then I’m again right back where I started.

Except, I’m not quite back where I started. Originally everyone was sitting comfortably. Now everyone is craning uncomfortably on their tiptoes, but no one can see any better than when we began.

Job applications work in a similar way. Employers, ideally, hire whichever candidate’s application is best. Suppose every applicant just spends a single hour pulling together application materials. The result is that no application is very good, but some are better than others. In general, the better candidates will have somewhat better applications, but the correlation will be imperfect (since the skills of being good at philosophy only imperfectly correlate with the skills of being good at writing application materials).

Now, as an applicant, I realize that I could put in a few hours polishing my application materials — nudging out ahead of other candidates. Thus, I have a reason to spend time polishing.

But everyone else realizes the same thing. So, everyone spends a few hours polishing their materials. And so now the result is that every application is a bit better, but still with some clearly better than others. Once again, in general, the better candidates will have somewhat better applications, but the correlation will remain imperfect.

Of course, everyone spending a few extra hours on applications is not so bad. Except that the same incentive structure iterates. Everyone has reason to spend ten hours polishing, then fifteen. Everyone has reason to ask friends to look over their materials, then to hire a job application consultant. Every applicant is stuck in an arms race with every other, but this arms race does not create any new jobs. So, in the end, no one is better off than if everyone could have just agreed to an armistice at the beginning.

Job applicants are left on a treadmill; everyone must keep running faster and faster just to stay in place. If you ever stop running, you will slide off the back of the machine. So, you must keep running faster and faster, but like the Red Queen in Lewis Carroll’s Through the Looking-Glass, you never actually get anywhere.

Of course, not all arms races are bad. A similar arms race exists for academic journal publications. Some top journals have a limited number of article slots. If one article gets published, another article does not. Thus, every author is in an arms race with every other. Each person is trying to make sure their work is better than everyone else’s.

But in the case of research, there is a positive benefit to the arms race. The quality of philosophical research goes up. That is because while the quality of my research is a positional good as far as my ability to get published, it is a non-positional good in its contribution to philosophy. If every philosophy article is better, then the philosophical community is, as a whole, better off. But the same is not true of job application materials. No large positive externality is created by everyone competing to polish their cover letters.

There may be some positive externalities to the arms race. Graduate students might do better research in order to get better publications. Graduate students might volunteer more of their time in professional service in order to bolster their CV.

But even if parts of the arms race have positive externalities, many other parts do not. And there is a high opportunity cost to the time wasted in the arms race. This is a cost paid by applicants, who have less time with friends and family. And a cost paid by the profession, as people spend less time teaching, writing, and helping the community in ways that don’t contribute to one’s CV.

This problem is not unique to philosophy. Similar problems have been identified in other sorts of applications. One example is grant writing in the sciences. Right now, top scientists must spend a huge amount of their time optimizing grant proposals. One study found that researchers collectively spent a total of 550 working years on grant proposals for Australia’s National Health and Medical Research Council’s 2012 funding round.

This might have a small benefit in leading researchers to come up with better projects. But most of the time spent in the arms race is expended just so everyone can stay in place. Indeed, there are some reasons to think the arms race actually leads people to develop worse projects, because scientists optimize for grant approval and not scientific output.

Another example is college admissions. Right now, high school students spend huge amounts of time and money preparing for standardized tests like the SAT. But everyone ends up putting in the time just to stay in place. (Except, of course, for those who lack the resources required to put in the time; they just get left behind entirely.)

Part II—The Lottery Solution

Because I was on this treadmill as a job applicant, I didn’t want to force other people onto a treadmill of their own. So, when running my own job search, I decided to adapt a solution to the treadmill problem that has been suggested for both grant funding and college admissions. I ran a lottery. I had each applying student complete a short assignment, and then ‘graded’ the assignments on a pass/fail system. I then chose my assistants at random from all those who had demonstrated they would be a good fit. I judged who was a good fit. I didn’t try to judge, of those who were good fits, who fit best.

This allowed students to step off the treadmill. Students didn’t need to write the ‘best’ application. They just needed an application that showed they would be a good fit for the project.
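To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of pass/fail-then-lottery selection just described. The applicant names, the screening judgments, and the number of slots are hypothetical placeholders, not anything from my actual search.

```python
import random

# Hypothetical pass/fail screening results: True means the reviewer judged
# the applicant a good fit for the project, False means not a good fit.
screened_applicants = {
    "Applicant A": True,
    "Applicant B": False,
    "Applicant C": True,
    "Applicant D": True,
    "Applicant E": True,
}

NUM_POSITIONS = 2  # hypothetical number of research-assistant slots

# Step 1: keep only the applicants who passed the good-fit screen.
good_fits = [name for name, passed in screened_applicants.items() if passed]

# Step 2: draw the hires at random from the good-fit pool, with no further ranking.
hires = random.sample(good_fits, k=min(NUM_POSITIONS, len(good_fits)))

print("Good-fit pool:", good_fits)
print("Selected by lottery:", hires)
```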

It seems to me that it would be best if philosophy departments similarly made hiring decisions based on a lottery. Hiring committees would go through and assess which candidates they think are a good fit. Then, they would use a lottery system to decide who is selected for the job.

The details would need to be worked out carefully and identifying the best system would probably require a fair amount of experimentation. For example, it is not clear to me the best way to incorporate interviews into the lottery process.

One possibility would be to interview everyone you think is likely a good fit. This, I expect, would prove logistically overwhelming. A second possibility, and I think the one I favor, would be to use a lottery to select the shortlist of candidates, rather than to select the final candidate. The search committee would go through the applications and identify everyone who looks like a good fit. They would then use a lottery to narrow down to a shortlist of three to five candidates who come out for an interview. While the shortlisted candidates would be placed on the treadmill, a far smaller number of people would be subject to the wasted effort. A third possibility would use the lottery to select a single final candidate, and then use an in-person interview merely to confirm that the selected candidate really is a good fit. There is a lot of evidence that hiring committees systematically overestimate the evidential value of interviews, and that this creates tons of statistical noise in hiring decisions (see chapters 11 and 24 in Daniel Kahneman’s book Noise).

Assuming the obstacles could be overcome, however, lotteries would have an important benefit in going some way towards breaking the treadmill.

There are a range of other benefits as well.

  • Lotteries would decrease the influence of bias on hiring decisions. Implicit bias tends to make a difference in close decisions. Thus, bias is more likely to flip the ranking of a first and second choice than to keep someone off the shortlist in the first place.
  • Lotteries would decrease the influence of networking, and so go some way towards democratizing hiring. At most, an in-network connection will get someone into the lottery, but it won’t increase their chance of winning it.
  • It would create a more transparent way to integrate hiring preferences. A department might prefer to hire someone who can teach bioethics, or might prefer to hire a female philosopher, but not want to restrict the search to people who meet such criteria. One way to integrate such preferences more rigorously would be to explicitly weight candidates in the lottery by such criteria (see the sketch just after this list).
  • Lotteries could decrease interdepartmental hiring drama. It is often difficult to get everyone to agree on a best candidate. It is generally not too difficult to get everyone to agree on a set of candidates all who are considered a good fit.
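A weighted lottery of the sort just mentioned is easy to implement. In the sketch below, again with hypothetical candidates, a larger weight stands in for a departmental preference (say, the ability to teach bioethics); preferred candidates become more likely to be drawn, but no good-fit candidate is excluded outright.

```python
import random

# Hypothetical good-fit pool with lottery weights. A weight of 2 encodes a
# departmental preference; everyone else gets a baseline weight of 1.
candidates = ["Candidate 1", "Candidate 2", "Candidate 3", "Candidate 4"]
weights = [2, 1, 1, 2]

# Draw one winner; candidates with weight 2 are twice as likely to be selected.
winner = random.choices(candidates, weights=weights, k=1)[0]
print("Lottery winner:", winner)
```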

Part III—The Accuracy Drawback

While these advantages accrue to applicants and to the philosophical community, employers might not like a lottery system. The problem for employers is that a lottery will decrease the average quality of hires.

A lottery system means you should expect to hire the average candidate among those who meet the ‘good fit’ criteria. Thus, as long as trying to pick the best candidate tends to yield someone at least above average, the expected quality of the hire goes down with a lottery.

However, while there is something to this point, it is weaker than most people think. That is because humans tend to systematically overestimate the reliability of their own judgment. When you look at the empirical literature, a pattern emerges. Human judgment has a fair degree of reliability, but most of that reliability comes from identifying the ‘bad fits.’

Consider science grants. Multiple studies have compared the scores that grant proposals receive to the eventual impact of research (as measured by future citations). What is found is that scores do correlate with research impact, but almost all of that effect is explained by the worst performing grants getting low scores. If you restrict your assessment to the good proposals, researchers are terrible at judging which of the good proposals are actually best. Similarly, while there is general agreement about which proposals are good and which bad, evaluators rarely agree about which proposals are best.

A similar sort of pattern emerges for college admission counselors. Admissions officers can predict who is likely to do poorly in school, but can’t reliably predict which of the good students will do best.

Humans are fairly good at judging which candidates would make a good fit. We are bad at judging which good fit candidates would actually be best. Thus, most of the benefit of human judgment comes at the level of identifying the set of candidates who would make a good fit, not at the level of deciding between those candidates. This, in turn, suggests that the cost to employers of instituting a lottery system is much smaller than we generally appreciate.

Of course, I doubt I’ll convince people to immediately use lotteries for major, important decisions. Thus, for now I’ll suggest trying a lottery system for smaller, less consequential decisions. If you are a graduate department, select half your incoming class the traditional way, and half by a lottery of those who seem like a good fit. Don’t tell faculty which are which, and I expect that several years later it will be clear that the lottery system works just as well. Or, like me, if you are hiring some undergraduate researchers, try the lottery system. Make small experiments, and let’s see if we can’t buck the status quo.

On Objectivity in Journalism

blurred image of crowd and streetlights


Over the past few years, a number of left-leaning journalists have publicly questioned the notion of objectivity as an ideal for journalists and journalistic practice. The discussions that ensued have generated a lot of heat, but for the most part not too much light. That’s why I was delighted by the latest episode of Noah Feldman’s podcast, Deep Background, which featured a lengthy interview with journalist Nikole Hannah-Jones, who is perhaps best known as the creator of The New York Times’s The 1619 Project. In that interview, Hannah-Jones and Feldman develop a nuanced account of the place of objectivity in journalism. I will discuss this account in due course. Before I do, I would like to unpack the multiple meanings of “objectivity” as it is used to describe journalists and their art.

The word “objectivity” is normally applied to two things: persons and facts (or truths). An objective person is one who has three attributes: neutrality, even-handedness, and disinterestedness. A neutral person has no prior or preconceived views about a particular subject; an even-handed person is disposed to give due weight to both sides in a factual dispute; and a disinterested person has no strong interests in one side or the other being the correct one. Thus, objectivity as an attribute of persons involves (the lack of) both beliefs and desires. It is in the name of promoting the appearance of this kind of objectivity that some journalists think it is improper for them to engage in political activity, or even to vote.

When applied to facts or truths, as in the oft-repeated phrase “objective truth,” the word is generally taken to mean something about either empirical verifiability or “mind-independence.” Take empirical verifiability first. In this sense, “objective” truths are truths that can be directly verified by the senses, and so are part of a public world which we share with other sentient creatures. In this sense, “objective” truths contrast with both truths about our mental states, such as that I like the taste of chocolate ice cream, and “metaphysical” truths, such as that God is all-powerful. Mind-independence is a slippery concept, but the basic idea is that mind-independent truths are truths which don’t depend on anyone’s beliefs about what is true. That it is raining in Durham, North Carolina, would be true even if everyone believed it false. In this sense, “objective” truths contrast with conventional truths, such as truths about grammar rules, since such rules depend for their very existence on the attitudes, and in particular the beliefs, of writers and speakers. In this sense, however, “objective” truths include both metaphysical truths and truths about mental states. To see the latter point, consider that the fact that I like chocolate ice cream would be true even if no one, myself included, believed it to be true. Thus, truths about personal taste can count as subjective in one sense, but objective in another.

With some exceptions I will discuss shortly, criticisms of objectivity rarely cast doubt on the existence of objective truths. Instead, they target the ideal of the journalist as a neutral, even-handed, and disinterested observer. The criticisms are two-fold: first, that adopting the objective stance is impossible, since all journalists use their prior beliefs and interests to inform their decisions about what facts to include or highlight in a story, and if they have the discretion, even what stories to write. Second, since a perfectly objective stance is impossible, trying to adopt the stance constitutes a form of deception that causes people to invest journalists with a kind of epistemic authority they don’t and couldn’t possess. Better to be honest about the subjective (basically, the psychological) factors that play a role in journalistic practice than to deceive one’s readers.

In the interview with Feldman, Hannah-Jones echoed these criticisms of objectivity. She then distinguished between two activities every journalist engages in: fact-finding and interpretation. In the fact-finding phase, she said, journalists can and must practice “objectivity of method.” What she apparently means to pick out with this phrase are methods by which journalists can hope to access objective truth. Such methods might include interviewing multiple witnesses to an event or searching for documentary evidence or some other reliable corroboration of testimony; they might also include the institutional arrangements that newsrooms adopt — for example, using independent fact checkers. However, she and Feldman seemed to agree that interpretation — variously glossed as working out what facts “mean” or which are “important” — is a subjective process, inevitably informed by the journalist’s prior beliefs and desires.

Here are two observations about Hannah-Jones’s account. First, the methods used to access objective truth in the fact-finding stage tend to force journalists to at least act as if they are objective persons. For example, interviewing multiple witnesses and weighing the plausibility of all the testimony is the kind of thing an even-handed observer would do. Looking for corroborating evidence even when one wants a witness’s testimony to be true emulates disinterestedness. This doesn’t mean that one has to be objective in order to practice journalism well, but it does suggest a role for objectivity as a regulative ideal: when we want to know how to proceed in fact-finding, we ask how an objective person would proceed. And to the extent that we can emulate the objective person, to that extent is the epistemic authority of the journalist earned.

Second, it seems to me that “interpretation” involves trying to access objective truth, or doing something much like it. Feldman and Hannah-Jones used two examples to illustrate the kinds of truths that the process of interpretation is aimed at accessing: truths about people’s motives, or why they acted (as opposed to truths about their actions themselves, which are within the domain of fact-finding), and causal truths, like that such-and-such an event or process was the key factor in bringing about some state of affairs. But such truths are objective in at least one sense. Moreover, even truths about motives, while subjective in not belonging to the public world of the senses, can be indirectly verified using empirical methods very similar to those used to access directly empirically verifiable truths. These are methods lawyers use every day to prove or disprove that a defendant satisfied the mens rea element of a crime. Since interpretation involves accessing objective truths or using empirical methods to access subjective ones, and since the methods of accessing objective truths involve emulating an objective person, interpretation at least partly involves striving to be objective.

This can’t be all it involves, however: what’s important is not equivalent to what’s causally efficacious. Here is where Feldman and Hannah-Jones are undoubtedly correct that a journalist’s attitudes, and in particular her values, will inevitably shape how she interprets the facts. For example, a commitment to moral equality may cause a journalist to train her focus on the experience of marginalized groups, with that value informing what the journalist takes to be important. A merely objective person would have no idea of what facts are important in this moral sense.

Thus, a journalist must and should approach her practice with a complicated set of attitudes: striving to be objective (to be like an objective person) about the facts, while at the same time inevitably making choices about which facts are important based at least in part on her values. This is part of what makes journalism a difficult thing to do well.

In-Groups, Out-Groups, and Why I Care about the Olympics

photograph of fans in crowded stadium holding one big American flag

We all, to some extent, walk around with an image of ourselves in our own heads. We have, let’s say, a self-conception. You see yourself as a certain kind of person, and I see myself as a certain kind of person.

I bring this up because my own self-conception gets punctured a little every time the Olympics roll around. I think of myself as a fairly rational, high-brow, cosmopolitan sort of person. I see myself as the sort of person who lives according to sensible motives; I don’t succumb to biased tribal loyalties.

In line with this self-conception, I don’t care about sporting events. What does it matter to me if my university wins or loses? I’m not on either team, and I don’t gain anything if FSU wins a football game. So yes, I am indeed one of those obnoxious and self-righteous people who a) does not care about sports and b) has to fight feelings of smug superiority over sports fans who indulge their tendencies to tribalism.

This is not to say I don’t have my loyalties: I’m reliably on team dog rather than cat, and I track election forecasts with an obsessive fervor equal to any sports fanatic. But I tell myself that, in both cases, my loyalty is rational. 

“I’m on team dog because there are good reasons why dogs make better pets.”

“I track elections because something important is at stake, unlike in a sports game.”

These are the sorts of lies I tell myself in order to maintain my self-conception as a rational, unbiased sort of person. By the end of this post, I hope to convince you that these are, in fact, lies.

The Olympic Chink

The first bit of evidence that I’m not as unbiased as I’d like to think comes from my interest in the Olympics. I genuinely care about how the U.S. does in the Olympics. For example, I was disappointed when, for the first time in fifty years, the U.S. failed to medal on day one.

Nor do I have any clever story for why this bias is rational. While I think there is a strong case to be made for a certain kind of moral patriotism, my desire to see the U.S. win the most Olympic medals is not a patriotism of that sort. Nor do I think that the U.S. winning the most medals will have important implications for geopolitics; it is not as though, for example, the U.S. winning more medals than China will help demonstrate the value of extensive civil liberties.

Instead, I want the U.S. to win because it is my team. I fully recognize that if I lived in Portugal, I’d be rooting for Portugal.

But why do I care if my team wins? After all, everything I said earlier about sports is also true of the Olympics. Nothing in my life will be improved if the U.S. wins more medals.

Turning to Psychology

To answer this question, we need to turn to psychology. It turns out that we humans are hardwired to care about our in-groups. Perhaps the most famous studies demonstrating the effects of in-group bias come from the social psychologist Henri Tajfel.

In one study, Tajfel brought together a group of fourteen- and fifteen-year-old boys. Tajfel wanted to know what it would take to get people invested in ‘their team.’ It turns out, it barely requires anything at all.

Tajfel first had the boys estimate how many dots were flashed on a screen, ostensibly for an experiment on visual perception. Afterwards, the boys were told that they were starting a second experiment and that, to make it easier to code the results, the experimenters were dividing them into subgroups based on whether they tended to overestimate or underestimate the number of flashed dots (in actual fact the subgroups were random). The boys were then given the chance to distribute rewards anonymously to other participants.

What Tajfel found was that the mere fact of being categorized into a group of ‘overestimators’ or ‘underestimators’ was enough to produce strong in-group bias. When distributing the reward between two members of the same group, the boys tended to distribute the reward fairly. However, when distributing between one member of the in-group and one member of the out-group, the boys would strongly favor members in their same group. This was true even though there was no chance for reciprocation, and despite participants knowing that the group membership was based on something as arbitrary as “overestimating the number of dots flashed on a screen.”

Subsequent results were even more disturbing. Tajfel found that not only did the boys prioritize their arbitrary in-group, but they actually gave smaller rewards to people in their own group if it meant creating a bigger difference between the in-group and out-group. In other words, it was more important to treat the in-group and the out-group differently than it was to give the biggest reward to members of the in-group.

Of course, this is just one set of studies. You might think that these particular results have less to do with human nature and more to do with the fact that lots of teenage boys are jerks. But psychologists have found tons of other evidence for strong in-group biases. Our natural in-group bias seems to explain phenomena as disparate as racism, mother love, sports fandom, and political polarization.

Sometimes this in-group bias is valuable. It is good if parents take special care of their children. Parental love provides an extremely efficient system to ensure that most children get plenty of individualized attention and care. Similarly, patriotism is an important political virtue: it motivates us to sacrifice to improve our nation and community.

Sometimes this in-group bias is largely benign. There is nothing pernicious in wanting your sports team to win, and taking sides provides a source of enjoyment for many.

But sometimes this in-group bias is toxic and damaging. A nationalistic fervor that insists your own country is best, as opposed to just your own special responsibility, often leads people to whitewash reality. In-group bias leads to racism and political violence. Even in-group sports fandom sometimes results in deadly riots.

A Dangerous Hypocrisy

If this is right, then it is unsurprising that I root for the U.S. during the Olympic games. What is perhaps much more surprising is that I don’t care about the results of other sporting games. Why is it then, if in-group bias is as deep as the psychologists say it is, that I don’t care about the performance of FSU’s football team?

Is my self-conception right? Am I just that much more rational and enlightened? Have I managed to, at least for the most part, transcend my own tribalism?

The psychology suggests probably not. But if I didn’t transcend tribalism, what explains why I don’t care about the performance of my tribe’s football team?

Jonathan Haidt, while reflecting on his own in-group biases, gives us a hint:

“In the terrible days after the terrorist attacks of September 11, 2001, I felt an urge so primitive I was embarrassed to admit it to my friends: I wanted to put an American flag decal on my car. . . . But I was a professor, and professors don’t do such things. Flag waving and nationalism are for conservatives. Professors are liberal globetrotting universalists, reflexively wary of saying that their nation is better than other nations. When you see an American flag on a car in a UVA staff parking lot, you can bet that the car belongs to a secretary or a blue-collar worker.”

Haidt felt torn over whether to put up an American flag decal. This was not because he had almost transcended his tribal loyalty to the U.S. Rather, he was being pulled between two different tribal loyalties: his loyalty to the U.S. pulled him one way, his loyalty to liberal academia pulled the other. Haidt’s own reluctance to act tribally by putting up an American flag decal can itself be explained by another tribal loyalty.

I expect something similar is going on in my own case. It’s not that I lack in-group bias. It’s that my real in-group is ‘fellow philosophers’ or ‘liberal academics’ or even ‘other nerds,’ none of whom get deeply invested in FSU football. While I conceive of myself as “rational,” “high-brow,” and “cosmopolitan,” the reality is that I am largely conforming to the values of my core tribal community (the liberal academy). It’s not that I’ve transcended tribalism; it’s that I have a patriotic allegiance to a group that insists we’re above it. I have an in-group bias to appear unbiased; an irrational impulse to present myself as rational; a tribal loyalty to a community united around a cosmopolitan ideal.

But this means my conception of myself as rational and unbiased is a lie. I have failed to eliminate my in-group bias after all.

An Alternative Vision of Moral Education

But we seem to face a problem. On the one hand, my in-group bias seems to be so deep that even my principled insistence on rationality turns out to be motivated by a concern for my in-group. But on the other hand, we know that in-group bias often leads to injustice and the neglect of other people.

So what is the solution? How can we avoid injustice if concern for our in-group is so deeply rooted in human psychology?

We solve this problem, not by trying to eliminate our in-group bias, but rather by bringing more people into our in-group. This has been the strategy taken by all the greatest moral teachers throughout history.

Consider perhaps the most famous bit of moral instruction in all of human history, the parable of the Good Samaritan. In this parable, Jesus is attempting to convince the listening Jews that they should care for Samaritans (a political out-group) in the same way they care for Jews (the political in-group). But he does not do so by saying that we should not have a special concern for our in-group. Rather, he uses our concern for the in-group (Jesus uses the word ‘neighbor’) and simply tries to bring others into the category. He tells a story which encourages those listening not to abandon their special reasons to care for their neighbor (their in-group), but to redefine the category of ‘neighbor’ to include Samaritans as well.

This suggests something profound about moral education. To develop in justice, we don’t eliminate our special concern for the in-group. Instead we expand the in-group so that our special concern extends to others. This is why language like ‘brotherhood of man’ or ‘fellow children of God’ has proven so powerful throughout history. Rather than trying to eliminate our special concern for family, it instead tries to get us to extend that very same special concern to all human beings.

This is why Immanuel Kant’s language of the ‘Kingdom of Ends’ is so powerful. Rather than trying to eliminate a special concern for our society, instead we recognize a deeper society in which all humans are members.

The constant demand of moral improvement is not to lose our special concern for those near us, but to continually draw other people into that same circle of concern.

Automation in the Courtroom: On Algorithms Predicting Crime

photograph of the defense's materials on table in a courtroom

From facial recognition software to the controversial robotic “police dogs,” artificial intelligence is becoming an increasingly prominent aspect of the legal system. AI even allocates police resources to different neighborhoods, determining how many officers are needed in certain areas based on crime statistics. But can algorithms determine the likelihood that someone will commit a crime, and if they can, is it ethical to use this technology to sentence individuals to prison?

Algorithms that attempt to predict recidivism (the likelihood that a criminal will commit future offenses) sift through data to produce a recidivism score, which ostensibly indicates the risk a person poses to their community. As Karen Hao explains for the MIT Technology Review,

The logic for using such algorithmic tools is that if you can accurately predict criminal behavior, you can allocate resources accordingly, whether for rehabilitation or for prison sentences. In theory, it also reduces any bias influencing the process, because judges are making decisions on the basis of data-driven recommendations and not their gut.

Human error and racial bias contribute to over-incarceration, so researchers are hoping that color-blind computers can make better choices for us.
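
To make the idea of a data-driven recommendation concrete, here is a deliberately simplified sketch of what a risk score could look like in principle. It is a hypothetical illustration only: the features, weights, and logistic form are all made up, and, as discussed below, no one outside the company knows how COMPAS actually computes its scores.

```python
# Hypothetical, deliberately simplified risk score (illustration only).
# COMPAS's real inputs and weights are proprietary; nothing here reflects them.
import math

def toy_recidivism_score(age, prior_offenses, employed):
    """Return a 1-10 'risk' bucket from a made-up logistic model."""
    # Invented weights for invented features:
    z = 1.5 - 0.04 * age + 0.35 * prior_offenses - 0.8 * int(employed)
    probability = 1 / (1 + math.exp(-z))              # squash into (0, 1)
    return max(1, min(10, round(probability * 10)))   # bucket into ten risk levels

print(toy_recidivism_score(age=22, prior_offenses=3, employed=False))  # a higher-risk bucket
print(toy_recidivism_score(age=45, prior_offenses=0, employed=True))   # a lower-risk bucket
```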

But in her book When Machines Can Be Judge, Jury, and Executioner: Justice in the Age of Artificial Intelligence, former judge Katherine B. Forrest explains that Black offenders are far more likely to be labeled high-risk by algorithms than their white counterparts, a fact which further speaks to the well-documented racial bias of algorithms. As Hao reminds us,

populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores. As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle.

Because this technology is so new and lucrative, companies are extremely protective of their algorithms. The COMPAS system (Correctional Offender Management Profiling for Alternative Sanctions), created by Northpointe Inc., is the most widely used recidivism predictor in the legal system, yet no one knows what data set it draws from or how its algorithm generates a final score. We can assume the system looks at factors like age and previous offenses, but beyond that, the entire process is shrouded in mystery. Studies also suggest that recidivism algorithms are alarmingly inaccurate; Forrest notes that systems like COMPAS are incorrect around 30 to 40 percent of the time. Roughly speaking, that means that of every ten people COMPAS scores, three or four are misclassified. Even with such a high chance of error, recidivism scores are difficult to challenge in court. In a lucid editorial for the American Bar Association, Judge Noel L. Hillman explains that,

A predictive recidivism score may emerge oracle-like from an often-proprietary black box. Many, if not most, defendants, particularly those represented by public defenders and counsel appointed under the Criminal Justice Act because of indigency, will lack the resources, time, and technical knowledge to understand, probe, and challenge the AI process.

Judges may assume a score generated by AI is infallible, and change their ruling accordingly.

In his article, Hillman makes a reference to Loomis v. Wisconsin, a landmark case for recidivism algorithms. Eric Loomis had been arrested for driving a car that had been involved in a drive-by shooting. During sentencing, the judge tacked an additional six years onto his sentence due to his high COMPAS score. Loomis attempted to challenge the validity of the score, but in 2016 the Wisconsin courts ultimately upheld Northpointe’s right to protect trade secrets and not reveal how the number had been reached. Though COMPAS scores aren’t currently admissible in court as evidence against a defendant, the judge in the Loomis case did take the score into account during sentencing, which sets a dangerous precedent.

Even if we could predict a person’s future behavior with complete accuracy, replacing a judge with a computer would make an already dehumanizing process dystopian. Hillman argues that,

When done correctly, the sentencing process is more art than science. Sentencing requires the application of soft skills and intuitive insights that are not easily defined or even described. Sentencing judges are informed by experience and the adversarial process. Judges also are commanded to adjust sentences to avoid unwarranted sentencing disparity on a micro or case-specific basis that may differ from national trends.

In other words, attention to nuance is lost completely when defendants become data sets. The solution to racial bias isn’t to bring in artificial intelligence, but to strengthen our own empathy and sense of shared humanity, which will always produce more equitable rulings than AI can.

More Than Words: Hate Crime Laws and the Atlanta Attack

photograph of "Stop Asian Hate' sign being held

There’s an important conversation happening about how we should understand Robert Aaron Long’s murder of eight individuals, including six Asian women (Daoyou Feng, Hyun Jung Grant, Suncha Kim, Soon Chung Park, Xiaojie Tan, Yong Ae Yue) last week. Were Long’s actions thoughtless or deliberate? Is the attack a random outburst at an unrelated target, or “a new chapter in an old story”? Is the attack better explained as a byproduct of anti-Asian American sentiment left to fester, or merely the result of a young, white man having “a really bad day”? Behind these competing versions lies a crucial distinction: in judging the act, should we take on the point of view of the attacker or his victims?

In the wake of the tragedy, President Biden urged lawmakers to endorse the COVID-19 Hate Crimes Act aimed at addressing the rise in violence directed at Asian Americans. The bill intends to improve hate crime reporting, expand resources for victims, and encourage prosecution of bias-based violence. As Biden has emphasized, “every person in our nation deserves to live their lives with safety, dignity, and respect.” By publicly condemning the Atlanta attack as a hate crime, the president hopes to address the climate of fear, distrust, and unrest that’s set in.

Unfortunately, hate crime legislation has proven more powerful as a public statement than a prosecutorial tool. The enhanced punishments attached to criminal offenses motivated by the offender’s biases against things like race, religion, and gender are rarely sought. Part of the problem stems from the legal difficulty in demonstrating motive. This requires going beyond mere intent — assessing the degree to which one meant to cause harm — and instead considering the reasons why the person acted as they did. We’re encouraged to judge the degree to which prejudice might have precipitated violence. Establishing motive, then, requires us to speculate as to the inner workings of another’s mind. Without a confession, we’re left to try to string bits of information together into a compelling narrative of hate. It’s a flimsy thing to withstand scrutiny beyond a reasonable doubt.

This trouble with motive is currently on clear display: Long has insisted that race and gender had nothing to do with the attack, and the police seem willing to take him at his word. On Thursday, FBI director Christopher Wray deferred to the assessment by local police saying that “it does not appear that the motive was racially motivated.” Instead, Long’s actions have been explained as the consequence of sex addiction in conflict with religious conviction; Long’s goal has been described as the elimination of temptation.

How this explanation insulates Long’s actions from claims of bias-inspired violence is not clear. As Grace Pai of Asian Americans Advancing Justice suggested, “To think that someone targeted three Asian-owned businesses that were staffed by Asian American women […] and didn’t have race or gender in mind is just absurd.” The theory fails to appreciate the way Long’s narrative fetishizes Asian American women and reduces them to sexual objects. Rather than avoiding the appearance of bias, the current story seems to possess all the hallmarks. Sure, it might prove a bit more difficult to establish in a court of law, but as Senator Raphael Warnock argued, “we all know hate when we see it.”

So what makes politicians run toward, and law enforcement run from, the hate crime designation? In addition to the difficulty in prosecution, hate crime laws have a shaky record as a deterrent, made worse by the fact that they are rarely reported, investigated, or prosecuted. Despite all but three states now having hate crime laws on the books, rates of bias-inspired violence and harassment over the past several years have remained relatively high. (Many attribute this trend to the xenophobic and racist rhetoric that came out of the previous White House administration.)

But perhaps the value of hate crime legislation can’t be adequately captured by focusing on deterrence. Maybe it’s about communication. Perhaps the power of these laws is about coming together as a community to say that we condemn violence aimed at difference in a show of solidarity. We want it known that these particular individuals — these particular acts — don’t speak for us. Words matter, as the controversy regarding the sheriff’s office’s explanation of the attacker’s state of mind makes clear. Making the public statement, then, is a crucial step even if political and legal factors mean the formal charge is not pursued. It’s a performance directed at all of us, not at the perpetrator. The goal is restoration and reconciliation. Failing to call out bias-inspired violence when we see it provides cover and allows hate to take root and grow unchecked.

Still, the importance of signalling this moral commitment doesn’t necessarily settle the legal question of whether hate crime legislation can (and should) play the role we’ve written for it. Hate crime laws are built on our belief that bias-inspired violence inflicts greater societal harm. These crimes inflict distinct emotional harms on their victims, and send a specific message to particular members of the community. Enhanced legal consequences are justified, then, on the basis of this difference in severity and scope. Punishment must fit the crime.

Some critics, however, worry that hate crime laws reduce individuality to membership of a protected group. In a way, it’s guilty of a harm similar to that perpetrated by the attacker: it renders victims anonymous. It robs a person of her uniqueness, strips her of her boundless self, and collapses her to a single, representative label. Because of this, hate crime laws seem at once both necessary for securing justice for the victim — they directly address the underlying explanation of the violence — and diametrically opposed to that goal — the individual victim comes to be defined first and foremost by her group identity.

The resolution to these competing viewpoints is not obvious. On the one hand, our intuitions suggest that people’s intentions impact the moral situation. Specifically targeting individuals on the basis of their gender or ethnicity is clearly a different category of moral wrong. But the consequences that come from the legal application of those moral convictions have serious repercussions. Ultimately, the lasting debate surrounding hate crime legislation speaks to the slipperiness in pinning down what precisely justice demands.

In the Limelight: Ethics for Journalists as Public Figures

photograph of news camera recording press conference

Journalistic ethics are the evolving standards that dictate the responsibilities reporters have to the public. As members of the press, news writers play an important role in the accessibility of information, and unethical journalistic practices can have a detrimental impact on how well-informed the public is. Developing technology is a major factor in changes to journalism and the way journalists navigate ethical dilemmas. Both the field of journalism and its ethics have been revolutionized by the internet.

The increased access to social media and other public platforms of self-expression has expanded the role of journalists as public figures. The majority of journalistic ethical concerns focus on journalists’ actions in the scope of their work. As the idea of privacy changes, more people feel comfortable sharing their lives online, and journalists’ actions outside of their work come under greater scrutiny. Increasingly, questions of ethics in journalism include journalists’ non-professional lives. What responsibilities do journalists have as public-facing individuals?

As a student of journalism, I am all too aware that there is no common consensus on the issue. At the publication I write for, staff members are restricted from participating in protests for the duration of their employment. In a seminar class, a professional journalist discussed workplace moratoriums they’d encountered on publicly stating political leanings and one memorable debate about whether or not it was ethical for journalists to vote — especially in primaries, on the off-chance that their vote or party affiliation could become public. Each of these scenarios stems from a common fear that a journalist will become untrustworthy to their readership due to their actions outside of their work. With less than half the American public professing trust in the media, according to Gallup polls, journalists are facing intense pressure to prove themselves worthy of trust.

Journalists have a duty to be as unbiased as possible in their reporting — this is a well-established standard of journalism, promoted by groups like the Society for Professional Journalists (SPJ). How exactly they accomplish that is changing in the face of new technologies like social media. Should journalists avoid publicizing their personal actions and opinions and opt out of any personal social media? Or should they restrict those actions and opinions entirely to avoid any risk of them becoming public? Where do we draw the lines?

The underlying assumption here is that combating biased reporting comes down to the personal responsibility of journalists to either minimize their own biases or conceal them. At least a part of this assumption is flawed. People are inherently biased; a person cannot be completely impartial. Anyone who attempts to pretend otherwise actually runs a greater risk of being swayed by these biases because they become blind to them. The ethics code of the SPJ advises journalists to “avoid conflicts of interest, real or perceived. Disclose unavoidable conflicts.” Although this was initially written to be applied to journalists’ professional lives, I believe that that short second sentence is a piece of the solution. “Disclose unavoidable conflicts.” More effective than hiding biases is being clear about them. Journalists should be open about any connections or political leanings that intersect with their field. Doing so provides the public with all the information and the opportunity to judge the issues for themselves.

I don’t mean to say that journalists should be required to make parts of their private lives public if they don’t intersect with their work. However, they should not be asked to hide them either. Although most arguments don’t explicitly suggest journalists hide their biases, they either suggest journalists avoid public action that could reveal a bias or avoid any connection that could result in a bias — an entirely unrealistic and harmful expectation. Expecting journalists to either pretend to be bias-free or to isolate themselves from the issues they cover as much as possible results in either dishonesty or “parachute journalism” — journalism in which reporters are thrust into situations they do not understand and don’t have the background to report on accurately. Fostering trust with readers and deserving that trust should not be accomplished by trying to turn people into something they simply cannot be, but by being honest about any potential biases and working to ensure the information is as accurate as possible regardless.

The divide between a so-called “public” or “professional” life and a “private” life is not always as clear as we might like, however. Whether they like it or not, journalists are at least semi-public figures, and many use social media to raise awareness for their work and the topics they cover, while also using social media in more traditional, personal ways. In these situations, it can become more difficult to draw a line between sharing personal thoughts and speaking as a professional.

In early 2020, New York Times columnist Ben Smith wrote a piece criticizing New Yorker writer Ronan Farrow for his journalism, including, in some cases, the accuracy or editorializing of tweets Farrow had posted. Despite my impression that Smith’s column was itself inaccurate, poorly researched, and hypocritical, it raised important questions about the role of Twitter and other social media in reporting. A phrase I saw numerous times afterwards was “tweets are not journalism” — a criticism of the choice to place the same importance on and apply the same journalistic standards to Farrow’s Twitter account as his published work.

Social media makes it incredibly easy to share information, opinions, and ideas. It is far faster than traditional methods of publishing. It can be, and has been, a powerful tool for journalists to make corrections and updates in a timely manner and to make those corrections more likely to be seen by people who have already read a story and might not check it again. If a journalist intends them to be, tweets can, in fact, be journalism.

Which brings us back to the issue of separating public from private. Labeling advocacy, commentary, and advertisement (and keeping them separated) is an essential part of ethical journalism. But which parts of these standards should be extrapolated to social media, and how? Many individuals will use separate accounts to make this distinction. Having a work account and a personal account, the latter typically with stricter privacy settings, is not uncommon. It does, however, prevent many of the algorithmic tricks people may use to make their work accessible, and accessibility is an important part of journalism. Separating personal and public accounts effectively divides an individual’s audience and prevents journalists from forming more personal connections to their audience in order to publicize their work. It also forgoes the engagement benefits of the more frequent posting that comes from using a single account. By being asked to abstain from a large part of what is now ordinary communication with the public, journalists are being asked to hinder their effectiveness.

Tagging systems within social media currently provide the best method for journalists to mark and categorize these differences, but there’s no “standard practice” among journalists on social media to help readers navigate these issues. And so long as debates about journalistic ethics outside of work focus on trying to prevent journalists from developing biases at all, no such standard will emerge. Adapting to social media means shifting away from the idea that personal bias can be prevented by isolating individuals from controversial issues, and toward helping readers and journalists understand, acknowledge, and deconstruct biases in media for themselves by promoting transparency and conversation.

Stereotyping and Statistical Generalization

photograph of three different multi-colored pie charts

Let’s look at three different stories and use them to investigate statistical generalizations.

Story 1

This semester I’m teaching a Reasoning and Critical Thinking course. During the first class, I ran through various questions designed to show that human thinking is subject to predictable and systematic errors. Everything was going swimmingly. Most students committed the conjunction fallacy, ignored regression towards the mean, and failed the Wason selection task.

I then came to one of my favorite examples from Kahneman and Tversky: base rate neglect. I told the students that “Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail,” and then asked how much more likely it is that Steve is a librarian than a farmer. Most students thought it was moderately more likely that Steve was a librarian.

Delighted with this result, I explained the mistake. While Steve is more representative of a librarian, you need to factor in base rates before concluding he is actually more likely to be one. In the U.S. there are about two million farmers and fewer than one hundred and fifty thousand librarians. Additionally, while 70% of farmers are male, only about 20% of librarians are. So for every one librarian named Steve you should assume there are at least forty-five farmers so named.

This culminated in my exciting reveal: even if you think that librarians are twenty times more likely than farmers to fit the personality sketch, you should still think Steve is more than twice as likely to be a farmer.
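
Spelled out as a quick odds calculation, the arithmetic looks like this (the population figures are the ones quoted above; the twenty-to-one likelihood ratio is the hypothetical concession in the reveal):

```python
# Back-of-the-envelope version of the base-rate argument above.
farmers, librarians = 2_000_000, 150_000
male_farmers = farmers * 0.70        # ~1,400,000
male_librarians = librarians * 0.20  # ~30,000

prior_odds = male_farmers / male_librarians     # ~47 male farmers per male librarian
likelihood_ratio = 20                           # suppose the sketch fits librarians 20x better
posterior_odds = prior_odds / likelihood_ratio  # odds of farmer vs. librarian, given the sketch

print(round(prior_odds), round(posterior_odds, 2))  # ~47 and ~2.33: still over twice as likely a farmer
```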

This is counter-intuitive, and I expected pushback. But then a student asked a question I had not anticipated. The student didn’t challenge the claim’s statistical legitimacy; he challenged its moral legitimacy. Wasn’t this a troubling generalization from gender stereotypes? And isn’t reasoning from stereotypes wrong?

It was a good question, and in the moment I gave an only so-so reply. I acknowledged that judging based on stereotypes is wrong, and then I…

(1) distinguished stereotypes proper from empirically informed statistical generalizations (explaining the psychological literature suggesting stereotypes are not statistical generalizations, but unquantified generics that the human brain attributes to intrinsic essences);

(2) explained how the most pernicious stereotypes are statistically misleading (e.g., we accept generic generalizations at low statistical frequencies about stuff we fear), and so would likely be weakened by explicit reasoning from rigorous base-rates rather than intuitive resemblances;

(3) and pointed out that racial disparities present in statistical generalizations act as important clarion calls for political reform.

I doubt my response satisfied every student — nor should it have. What I said was too simple. Acting on dubious stereotypes is often wrong, but acting on rigorous statistical generalizations can also be unjust. Consider a story recounted in Bryan Stevenson’s Just Mercy:

Story 2

“Once I was preparing to do a hearing in a trial court in the Midwest and was sitting at counsel table in an empty courtroom before the hearing. I was wearing a dark suit, white shirt, and tie. The judge and the prosecutor entered through a door in the back of the courtroom laughing about something.

When the judge saw me sitting at the defense table, he said to me harshly, ‘Hey, you shouldn’t be in here without counsel. Go back outside and wait in the hallway until your lawyer arrives.’

I stood up and smiled broadly. I said, ‘Oh, I’m sorry, Your Honor, we haven’t met. My name is Bryan Stevenson, I am the lawyer on the case set for hearing this morning.’

The judge laughed at his mistake, and the prosecutor joined in. I forced myself to laugh because I didn’t want my young client, a white child who had been prosecuted as an adult, to be disadvantaged by a conflict I had created with the judge before the hearing.”

This judge did something wrong. Because Bryan Stevenson is black, the judge assumed he was the defendant, not the defense. Now, I expect the judge acted on an implicit racist stereotype, but suppose the judge had instead reasoned from true statistical background data. It is conceivable that more of the Black people who enter that judge’s courtroom — even those dressed in suit and tie — are defendants than defense attorneys. Would shifting from stereotypes to statistics make the judge’s behavior ok?

No. The harm done had nothing to do with the outburst’s mental origins, whether it originated in statistics or stereotypes. Stevenson explains that what is destructive is the “accumulated insults and indignations caused by racial presumptions,” the burden of “constantly being suspected, accused, watched, doubted, distrusted, presumed guilty, and even feared.” This harm is present whether the judge acted on ill-formed stereotypes or statistically accurate knowledge of base-rates.

So, my own inference about Steve is not justified merely because it was grounded in a true statistical generalization. Still, I think I was right and the judge was wrong. Here is one difference between my inference and the judge’s. I didn’t act as though I knew Steve was a farmer — I just concluded it was more likely he was. The judge didn’t act the way he would if he thought it was merely likely Stevenson was the defendant. The judge acted as though he knew Stevenson was the defendant. But the statistical generalizations we are considering cannot secure such knowledge.

The knowledge that someone is a defendant justifies different behavior than the thought that someone is likely a defendant. The latter might justify politely asking Stevenson if he is the defense attorney. But it couldn’t justify the judge’s actual behavior, behavior unjustifiable unless the judge knows Stevenson is not an attorney (and dubious even then). A curious fact about ethics is that certain actions (like making an assertion or punishing a criminal) require, not just high subjective credence, but knowledge. And since mere statistical information cannot secure knowledge, statistical generalizations are unsuitable justifications for some actions.

Statistical disparities can justify some differential treatment. For instance, seeing that so few of the Black people in his courtroom are attorneys could justify the judge in funding mock trial programs only at majority Black public schools. Indeed, it might even justify the judge, in these situations, only asking Black people if they are new defense attorneys (and just assuming white people are). But it cannot justify behavior, like harsh chastisement, that requires knowledge the person did something wrong.

I didn’t do anything that required knowledge that Steve was a farmer. So does this mean I’m in the clear? Maybe. But let’s consider one final story from the recent news:

Story 3

Due to COVID-19, the UK canceled A-level exams — a primary determinant of UK college admissions. (If you’re unfamiliar with the A-levels, they are sort of like really difficult subject-specific SAT exams.) The UK replaced the exams with a statistical generalization: it subjected the grades that teachers and schools submitted to a statistical normalization based on the historical performance of the student’s school. Why did Ofqual (the Office of Qualifications and Examinations Regulation) feel the need to normalize the results? Well, for one thing, the predicted grades that teachers submitted were 12% higher than last year’s scores (unsurprising without any external test to check teacher optimism).

The normalization, then, adjusted many scores downward. If Ofqual predicted, based on historical data, that at least one student in a class would have failed the exam, then the lowest-scoring student’s grade was adjusted down to a failing grade (irrespective of how well the teacher predicted the student would have done).
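
Here is a rough sketch of how a rule like that behaves. It is my own toy reconstruction of the rule as just described, not Ofqual’s actual standardization model, which was considerably more elaborate:

```python
# Toy reconstruction of the downgrade rule described above (illustration only).
FAIL = 0  # hypothetical numeric code for a failing grade; 5 stands in for an A

def normalize_class(teacher_grades, historical_grades):
    """If the school's history includes a fail, pull the lowest-ranked student's
    predicted grade down to a fail, whatever the teacher predicted."""
    adjusted = sorted(teacher_grades, reverse=True)
    if FAIL in historical_grades:
        adjusted[-1] = FAIL
    return adjusted

# A class of four students all predicted top grades, at a school whose past
# cohorts always included at least one fail:
print(normalize_class([5, 5, 5, 5], [5, 4, 3, 0]))  # -> [5, 5, 5, 0]
```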

Unsurprisingly, this sparked outrage, and the UK walked back the policy. Students felt the system was unfair since they had no opportunity to prove they would have bucked the trend. Additionally, since wealthier schools tended to perform better on the A-levels in previous years, the downgrading hurt students in poorer schools at a higher rate.

Now, this feels unfair. (And since justifiability to the people matters for government policy, I think the government made the right choice in walking back the policy.) But was it actually unfair? And if so, why?

It’s not an issue of stereotypes — the changes weren’t based on hasty stereotypes, but rather on a reasonable statistical generalization. It’s not an issue of compounding algorithmic bias (of the sort described in O’Neil’s book), since the algorithm didn’t produce results any more unequal than actual test results. Nor was the statistical generalization used in a way that requires knowledge. College admissions don’t assume we know one student is better than another. Rather, they use lots of data to make informed guesses about which students will be the best fit. The algorithm might sometimes misclassify, but so could any standardized test.

So what feels unfair? My hunch is the algorithm left no space for the exceptional. Suppose four friends who attended a historically poor-performing school spent the last two years frantically studying together in a way no previous group had. Had they sat the test, all could have secured top grades — a first for the school. Unfortunately, they couldn’t all sit the test, and because their grades are normalized against previous years, the algorithm eliminates the possibility of exceptional performance. (To be fair to the UK, they said students could sit the exams in the fall if they felt they could outperform their predicted score.)

But what is unfair about eliminating the possibility of exceptional success? My further hunch is that seeing someone as having the possibility of exceptional success is part of what it is to see them as an individual (perhaps for Kantian reasons of seeing someone as a free first cause of their own actions). Sure, we can accept that most people will be like most people. We can even be ok with wealthier schools, in the aggregate, consistently doing better on standardized tests. But we aren’t ok with removing the possibility for any individual to be an exception to the trend.

When my students resisted my claim that Steve was likely a farmer, they did not resist the generalization itself. They agreed most farmers are men and most librarians are women. But they were uncomfortable moving from that general ratio to a probabilistic judgment about the particular person, Steve. They seemed to worry that applying the generalization to Steve precluded seeing Steve as an exception.

While I think the students were wrong to think the worry applied in this case — factoring in base-rates doesn’t prevent the exceptional from proving their uniqueness — they might be right that there is a tension between seeing someone within a statistical generalization and seeing someone as an individual. It’s a possibility I should have recognized, and a further way acting on even good statistical generalizations might sometimes be wrong.

Principles, Pragmatics, and Pragmatic Principles

close-up photograph of horse with blinders

In this post, I want to talk about a certain sort of neglected moral hypocrisy that I have noticed is common in my own moral thinking and that, I expect, is common in most of yours. And to illustrate this hypocrisy, I want to look carefully at the hypocritical application of democratic principles, and conclude by discussing President Trump’s recent tweet about delaying the election.

First, however, I want to introduce a distinction between two types of hypocrisy: overt and subtle hypocrisy. Overt hypocrisy occurs when you, in full awareness of the double standard, differentially apply a principle to relevantly similar cases. It is easy to find examples. One is Mitch McConnell’s claim that he would confirm a new Supreme Court Justice right before an election after blocking the confirmation of President Obama’s nominee Merrick Garland because of how close the nation was to a presidential election. It is clear that Senator McConnell knows he is applying the democratic principle inconsistently; he just does not think politics is about principles, but about promoting his political agenda.

Subtle hypocrisy, in contrast, occurs when you inconsistently apply your principles but do not realize you are applying them inconsistently. Its name aside, a lot of subtle hypocrisy, while hard to recognize in the moment, is pretty clear upon reflection. We tend to notice our principles are at play only in some contexts and not others. We are more likely to notice curtailments of free speech when they happen to people who say things similar to what we say ourselves. We are much more likely to notice when we are harmed by inequitable treatment than when we are benefited by it.

We are especially likely to hypocritically apply our principles when we begin to consider purported reasons given for various policies. If the Supreme Court issues a decision I agree with, chances are good that I won’t actually go and check the majority reasoning to see if I think it’s sound. Rather, I’m content with the win and trust the court’s decision. In contrast, if the Court issues a decision I find dubious, I often do look up the reasoning and, if I think it is inadequate, will openly criticize the decision.

Why is this sort of hypocrisy so common? Because violations of our principles don’t always jump out at us. Often you won’t notice a principle is at stake unless you carefully deliberate about the question. Yet, we don’t just preemptively deliberate about every action in light of every principle we hold. Rather, something needs to incline us to deliberate. Something needs to prompt us to begin to morally reflect on an action, and, according to an awful lot of psychological research, it is our biases and unreflective intuitions that prompt our tendency to reason (see Part I of Jonathan Haidt’s The Righteous Mind). Because we are more likely to go looking for ethical problems in the behavior of our political enemies, we are much more likely to notice when actions we instinctively oppose violate our principles, and unlikely to notice the same when considering actions we instinctively support.

I can, of course, provide an example of personal hypocrisy in my application of democratic principles against disenfranchisement. When conservative policy makers started trying to pass voter ID laws I was suspicious, I did my research, and I condemned these laws as democratically discriminatory. In contrast, when more liberal states gestured at moving towards mail-only voting to deal with COVID I just assumed it was fine. I never did any research, and it was just by luck that a podcast informed me that mail-only voting can differentially disenfranchise both liberal voting blocs like Black Americans and conservative voting blocs like older rural voters. Thus, but for luck and given my own political proclivities, my commitment to democratic principles would have been applied hypocritically to condemn only policies floated by conservative lawmakers.

This subtle hypocrisy is extraordinarily troubling because, while we can recognize it once it is pointed out, it is incredibly difficult to notice in the moment. This is one of the reasons it is important to hear from ideologically diverse perspectives, and to engage in regular and brutal self-criticism.

But while subtle hypocrisy is difficult to see, I think there is another sort of hypocrisy which is even more difficult to notice. To see it, it will be useful if we take a brief digression and try to figure out what exactly is undemocratic about President Trump’s proposal to delay the election. I, like many of you, find it outrageous that President Donald Trump would even suggest delaying the election due to the COVID crisis. Partly this is because I believe President Trump is acting in bad faith, tweeting not because he wants to delay the election but because he wants to preemptively delegitimize it, or perhaps because he wants to distract the media from otherwise damning stories about COVID-19 and the economy.

But a larger part of me thinks it would be outrageous even if President Trump were acting in good faith, and that is because delaying an election is in tension with core democratic principles. Now, you might think delaying the election is undemocratic because regular elections are the means by which a government is held democratically accountable to its citizens (this is the sort of argument I hear most people making). Thus, if the current government is empowered to delay an election, it might enable the government to, at least for a time, escape democratic accountability. Of course, this is not a real worry in the U.S. context. Even were the U.S. Congress to delay the election, it would not change when President Trump is removed from office. His term ends January 20th whether or not a new President has been elected. If no one has been elected, then either the Speaker of the House or the President pro tempore of the Senate takes over (and I am eagerly awaiting whatever new TV show in the spring decides to run with that counterfactual).

But there is a different principled democratic concern at stake. Suppose a political party, while in control of Congress, would delay an election whenever polls looked particularly unpromising. This would be troublingly undemocratic because while Congress would have to hold the election at some point before January 3rd, they could also wait till the moment that the party currently in power seems to have the largest comparative advantage. But just as gerrymandering is undemocratic because it allows those currently in power to employ their political power to secure an advantage in the upcoming elections, so too is this strategy of delaying elections for partisan reasons.

But what if Congress really were acting in good faith? Would that mean it could be democratic to delay the election? Perhaps. If you were confident you were acting on entirely non-partisan reasons, then delaying in such contexts would be just as likely to harm your chances as to help them. And indeed, I could imagine disasters so serious as to justify delaying an election.

However, I think in general there are pragmatic reasons to stick to the democratic principles even when we are acting on entirely non-partisan reasons. First, it can be difficult to verify that reasons are entirely non-partisan. It can be hard to know the intentions of senators, and sometimes it can even be hard to know our own intentions.

Second, and I think more profoundly, there is a concern that we will tend to inequitably notice non-partisan reasons. Take the Brexit referendum. When I first saw some of the chaos that happened following the Brexit vote, I began to seriously consider if the UK should just hold a second referendum. After all, I thought, and still think, there were clear principled democratic issues with the election (for example, there seemed to be a systematic spread of misinformation).

The problem, of course, is that had the Brexit vote gone the other way, I almost certainly would never have looked into the election, and so never noticed anything democratically troubling about the result. My partisan thoughts about Brexit influenced which non-partisan reasons for redoing the election I ended up noticing. To call for redoing an election is surely at least as undemocratic as calling for delaying an election (indeed, I expect it is quite a bit more undemocratic, since it actually gives one side two chances at winning), and yet I almost instantly condemned the call to delay an election while it took me ages to see the democratic issues with redoing the Brexit vote.

Here, it is not that I was hypocritically applying a democratic principle. Rather, I was missing a democratic principle I should have already had given my tendency to hypocrisy. Because partisan preferences influence what non-partisan reasons I notice, I should have adopted a pragmatic principle against calling for reelections following results with which I disagreed. Not because reelections are themselves undemocratic (just as delaying an election might not itself be undemocratic), but because as a human reasoner, I cannot always trust my own even non-partisan reasoning and so should sometimes blinker it with pragmatic principles.

Prejudice in the NFL?

painting of lamar jackson in NFL game

The NFL is over for the next six months. The Super Bowl has been won, all the player and coach accolades have been handed out, and teams are busy looking to build on the 2020-2021 season in free agency and the upcoming draft. But in today’s media environment, the NFL can’t be just about football. Over the past few seasons, the NFL has endured a series of serious media crises – player safety, television ratings, and scandalous players (mostly Antonio Brown). But an issue that continues to linger concerns diversity and the impact of racial issues on the game. This is no surprise to anyone, as the diversity issues were the subject of host Steve Harvey’s monologue at this year’s NFL 100 Awards ceremony. Indeed, the small pool of minorities who sit in front offices and on coaching staffs, as well as recent decisions regarding players of color, raise the question of who’s to blame for the NFL’s diversity issues and who’s responsible for finding solutions to them.

70% of NFL players are black – the linemen, the running backs, the defense, the receiving corps. But if you look at one position in particular, it’s not reflective of the majority demographic – the quarterback. Per The New York Times, 12 black quarterbacks started during the NFL 2019-2020 season, one QB short of tying the record for the most black quarterbacks to start in a single season. There’s even been a bit of controversy regarding black quarterbacks in the last few seasons, the most recent involving the NFL’s 2019 MVP, Lamar Jackson. The Ravens quarterback was unanimously voted the league’s most valuable player, but his talents weren’t always recognized. Many sports analysts, league owners, and coaches considered Jackson a running back disguised as a quarterback. Some even suggested that he move to the wide receiver position. On one hand, comments about Jackson’s game could be purely based on what he demonstrated at the combine. But on the other hand, a black man being judged predominantly by white males hints at something deeper. Maybe it wasn’t just Jackson’s performance at the combine; it was that he didn’t fit the traditional image of an NFL quarterback – Joe Montana, Dan Marino, or Tom Brady (whom Jackson happened to beat last season). By the same token, however, Super Bowl champ Patrick Mahomes and Texans QB Deshaun Watson are also reshaping the traditional image of a quarterback through their style of play.

Lamar Jackson isn’t the only black quarterback who has received pushback for what he does on the field. There’s Colin Kaepernick, the former San Francisco 49ers QB who exited the league after kneeling on the sidelines during the national anthem in protest of police brutality against African Americans. Team GMs, owners, and even the President of the United States condemned Kaepernick for his actions. Now, are the comments from NFL GMs and owners indicative of prejudice? Like Lamar Jackson’s, Kaepernick’s critics were mostly white men. The fact that they objected to him speaking out against police brutality, no matter how controversial the topic might be for the league, is questionable. But at the same time, once Kaepernick left the league and couldn’t sign with a team, the main reason he couldn’t get a job was that he was considered a PR nightmare. Regardless of whether teams agreed with Kaep’s kneeling, no team wanted the news stories that would come from signing him. If so, then the charge of prejudice falls on the fans who condemned Kaepernick for kneeling. To complicate matters even further, Dak Prescott, QB of the Dallas Cowboys and a black man himself, said that Kaepernick’s protests had no place in the league. Either way, some of the sentiment surrounding Jackson and Kaepernick might go beyond what they do on the field.

Jackson and Kaep are only the most recent cases, though. Since black men were first allowed to play quarterback in the league, they have often been considered not smart enough to run offenses or read defenses. Marlin Briscoe, the first black quarterback to start in the league, threw 14 touchdowns during his rookie season with the Denver Broncos. John Elway, a legendary Broncos QB, threw only half as many touchdowns as Briscoe during his rookie season. Despite that performance, Briscoe never played quarterback again. Warren Moon, the only black quarterback in the NFL Hall of Fame, was MVP of the 1977 Rose Bowl and still wasn’t invited to the NFL Combine. He didn’t play in the NFL for six seasons after he left college. Like Jackson, Moon was also told to switch to running back or wide receiver.

The same negative sentiment didn’t only apply to players, either. Although 70% of the players in the NFL are black, only 9% of the managers in the league’s front offices are, and 0% are CEOs or team presidents. There is only one black general manager, and of the league’s 32 teams, only three have black head coaches. Back in 2003, the league introduced the Rooney Rule, a policy aimed at addressing the lack of diversity at the head coaching level. Per the Rooney Rule, teams are required to interview at least one minority candidate for head-coaching positions and front office jobs. But per a study by the Global Sport and Education Lab at Arizona State University, the Rooney Rule didn’t improve minorities’ chances of being hired. According to The Atlantic, in the past three years 19 head coaching positions have opened up, and only 2 were filled by black coaches. Some black coaches are rarely given a real chance to make an impact on a team, either. Former Detroit Lions coach Jim Caldwell was fired after back-to-back 9-7 records in the 2016 and 2017 seasons. Bob Quinn, the Lions’ GM, said that Caldwell wasn’t meeting expectations. But Quinn then went on to hire former New England Patriots defensive coordinator Matt Patricia, who went 9-22 in his first two seasons as head coach. Last season, the Lions’ record was 3-12-1.

It could be argued that rather than prejudice, the NFL’s diversity issues are purely “best man for the job” decisions. Teams look for the best quarterbacks who fit their offense and can lead a team. Team owners and GMs bring in coaches who can draw up plays suited to their team’s culture. But at the same time, race is the driving force behind many if not all of the United States’ issues. Politics, advertising, music, fashion, literature, and every other medium that can be thought of is influenced by race in some form or fashion. Is it so far-fetched to think that sports are no different? Perhaps some personnel decisions are purely based on skill and compatibility. But the league has been around for decades, and maybe some of the racist sentiment of the past century has seeped into the present.

Johnson’s Mumbling and Top-Down Effects on Perception

photograph of Boris Johnson scratching head

On December 6th, in the midst of his reelection campaign, UK Prime Minister Boris Johnson spoke about regulating immigration to a crowd outside a factory in central England, saying “I’m in favour of having people of talent come to this country, but I think we should have it democratically controlled.” When Channel Four, one of the largest broadcasters in the UK, uploaded video of the event online, their subtitles mistakenly read “I’m in favor of having people of color come to this country,” making it seem as though Johnson was, in this speech, indicating a desire to control immigration on racial grounds. After an uproar from Johnson’s Conservative party, Channel Four deleted the video and issued an apology.

However, despite Tory accusations of slander and media partisanship, at least two facts make it likely that this was, indeed, an honest mistake on the part of a nameless subtitler within Channel Four’s organization:

  1. Poorly-timed background noise and Johnson’s characteristic mumbling make the audio of the speech less-than-perfectly clear at the precise moment in question, and
  2. Johnson has repeatedly voiced racist, sexist, and homophobic attitudes in both official and unofficial speeches, as well as in his writings (again, repeatedly) and his formal policy proposals.

Given the reality of (2), someone familiar with Johnson may well be more inclined to interpret him as uttering something explicitly racist (as opposed to the still-problematic dog whistle “people of talent”), particularly in the presence of the ambiguities (1) describes. Importantly, it may not actually be a matter of judgment (where the subtitler would have to consciously choose between two possible words) – it may genuinely seem to someone hearing Johnson’s speech that he spoke the word “color” rather than “talent.”

Indeed, this has been widely reported to be the case in the days following Johnson’s campaign rally, with debates raging online over the various ways people report hearing Johnson’s words.

For philosophers of perception, this could be an example of a so-called “top-down” effect on the phenomenology of perceptual experience, a.k.a. “what it seems like to perceive something.” In most cases, the process of perception converts basic sensory data about your environment into information usable by your cognitive systems; in general, this is thought to occur via a “bottom-up” process whereby sense organs detect basic properties of your environment (like shapes, colors, lighting conditions, and the like) and then your mind collects and processes this information into complex mental representations of the world around you. Put differently, you don’t technically sense a “dog” – you sense a collection of color patches, smells, noises, and other low-level properties which your perceptual systems quickly aggregate into the concept “dog” or the thought “there is a dog in front of me” – this lightning-fast process is what we call “perception.”

A “top-down” effect – also sometimes called the “cognitive penetration of perception” – is when one or more of your high-level mental states (like a concept, thought, belief, desire, or fear) works backwards on that normally-bottom-up process to influence the operation of your low-level perceptual systems. Though controversial, purported examples of this phenomenon abound, such as how patients suffering from severe depression will frequently report that their world is “drained of color” or how devoted fans of opposing sports teams will both genuinely believe that their preferred player won out in an unclear contest. Sometimes, evidence for top-down effects comes from controlled studies, such as a 2006 experiment by Proffitt which found that test subjects wearing heavy backpacks routinely reported hills to be steeper than did unencumbered subjects. But we need not be so academic to find examples of top-down effects on perception: consider the central portion of the “B-13” diagram.

When you focus on the object in the center, you can probably shift your perception of what it is (either “the letter B” or “the number 13”) at will, depending on whether you concentrate on the horizontal or the vertical lines around it. Because letters and numbers are high-level concepts, defenders of cognitive penetrability can take this as evidence that your concepts are influencing your perception (instead of just the other way around).

So, when it comes to Johnson’s “talent/color” word choice, much like the Yanny/Laurel debate of 2018 or the infamous white/gold (or blue/black?) Dress of 2015, different audience members may – quite genuinely – perceive the mumbled word in wholly different ways. Obviously, this raises a host of additional questions about the epistemological and ethical consequences of cognitive penetrability (many researchers, for example, are concerned to explore perceptions influenced by implicit biases concerning racism, sexism, and the like), but it does make Channel Four’s mistaken subtitling much easier to understand without needing to invoke any nefarious agenda on the part of sneaky anti-Johnson reporters.

Put more simply: even though Johnson didn’t explicitly assert a racist agenda in Derbyshire, it is wholly unsurprising that people have genuinely perceived him to have done so, given the many other times he has done precisely that.

The Jezebel Stereotype and Hip-Hop

photograph of Lil' Kim on stage

Back in the day, black people were depicted in media through a series of racist caricatures that endured for the majority of the 20th century. These caricatures became popularized in films, television, cartoons, etc. There was the classic sambo – the simple-minded black man often portrayed as lazy and incoherent. Then there was the mammy – the heavyset black woman maid who possessed a head-scratching loyalty to her white masters. The picaninny depicted black children as buffoons and savages. The sapphire caricature was your standard angry black woman, a trope that is still often portrayed in media today. But perhaps one of the most enduring caricatures is that of the jezebel. This caricature depicted black women as having an insatiable need for sex, so much so that they were portrayed as predators. One of the ways that this stereotype has endured over time is through hip-hop. It could be argued that some black women in the rap game today reflect some of the attributes of the jezebel due to the promiscuity in their music. So, are black women in rap facilitating the jezebel stereotype and, in turn, adversely affecting the depiction of black women in general?

Before we get any further, it should be noted that rap music has never been kind to women, especially black women (see “Hip-Hop Misogyny’s Effects on Women of Color”). You wouldn’t have to look far to confirm this. After all, Dr. Dre’s iconic album The Chronic has a song called “Bitches Ain’t Shit” with Uncle Snoop Dogg singing the hook. It’s become a staple in rap music to disregard women in some form or fashion. But perhaps a line from Kanye West’s verse on The Game’s song “Wouldn’t Get Far” best embodies the treatment of women, and black women in particular, in the rap genre. West raps, “Pop quiz how many topless, black foxes did I have under my belt like boxers?” In the music video, a bunch of black women in bikinis dance around West while he raps. Black women in rap are presented as objects of sexual desire – they’re arm candy. It’s the updated version of the jezebel. Before, as a racist caricature, the jezebel stereotype was used by slave masters to justify sex with female slaves. But even prior to that, Europeans traveled to Africa and saw the women there wearing little to no clothing and practicing polygamy. To Europeans, this signaled an inherently promiscuous nature rather than a social tradition. To them, it meant sexual desire.

Now, there’s a narrative of black women rappers in hip-hop who are embracing their sexualization in media. Junior M.A.F.I.A. rapper and the Notorious B.I.G.’s femme fatale Lil’ Kim started this trend, spitting verses that your parents definitely would not have let you listen to as a kid. For example, on her song “Kitty Box,” Kim raps,

“Picture Lil’ Kim masturbatin in a drop

Picture Lil’ Kim tan and topless on a yacht

Picture Lil’ Kim suckin on you like some candy

Picture Lil’ Kim in your shirt and no panties.”

Fast forward from Lil’ Kim, and there’s Nicki Minaj with her song “Anaconda,” where the music video features her and several other black women twerking. But even past Nicki Minaj, there’s new rapper Megan Thee Stallion, who, although she has developed an original sound, seems to have traces of Kim and Minaj in her music. On her song “Big Ole Freak,” Megan raps,

“Pop it, pop it, daydreaming ‘bout how I rock it.

He hit my phone with a horse so I know that mean come over and ride it.”

Posing a compelling contrast to “Big Ole Freak” is another MC, Doja Cat. In the music video for her song “Juicy,” Doja dances to lyrics that sound like a mash-up of Megan Thee Stallion and Nicki Minaj, rapping,

“He like the Doja and the Cat,

yeah, He like it thick he like it fat,

Like to keep him wanting more.”

Though Doja’s music carries traces of the jezebel stereotype’s sexual desire, there’s a positive aspect to it as well. For all of the sexual innuendos in “Juicy,” at its core the song is about body positivity. While rapping about that “natural beauty,” Doja features women of all shapes and sizes in her music video and is unapologetic about her figure – it’s as if her message is more about empowerment than it is about sex. Megan Thee Stallion also incorporates empowerment for women in her raps with the term she coined, “Hot Girl Summer,” which, to Megan, means women being unapologetic about their sexuality and simply enjoying life. At the same time, women in rap have always put forth some positive sentiment in their music. Among the pioneering rap artists for women were MCs like Queen Latifah, Lauryn Hill, and MC Lyte. For example, in her song “U.N.I.T.Y.,” Queen Latifah begins her verse by rapping,

“Every time I hear a brother call a girl a bitch or a ho,

Tryna make a sister feel low, You know all of that gots to go.”

So, are the rappers of today merely facilitating the jezebel stereotype and the sexualization of black women? True, the messages in their music are reminiscent of some aspects of the jezebel trope, but there’s an aspect of positivity that challenges this reductionist view. It could also be that rappers like Doja Cat and Megan Thee Stallion are just smart entrepreneurs who understand that sex sells and are simply capitalizing on an opportunity. But these rappers might also be changing the sexualization of black women by taking over the narrative for themselves.

But what does this mean for the rest of us? How does this help the black women who have to endure that stereotype every day? They don’t have a platform like Doja Cat and Megan Thee Stallion do to start trends and see their impact. But maybe that’s where trends like “Hot Girl Summer” come in handy. While the music and image of rap artists like Doja and Megan seem negative to some, they are a form of empowerment for black women. Perhaps listening to “Juicy” lets some black women feel proud of their bodies, and trends like “Hot Girl Summer” let them feel unapologetic about them. At the same time, it’s important to understand that as time passes, stereotypes – how we define people – change meaning or lose meaning completely. But with that said, it’s still important not to forget the history of where those ideas came from.

Forget PINs, Forget Passwords

photograph of two army personnel using biometric scanner

If industry forecasts hold, by 2022 passwords and PINs will be a thing of the past. Replacing these prevailing safety measures is behavioral biometrics – a new and promising generation of digital security. By monitoring and recording patterns of human activity such as finger pressure, the angle at which you hold your device, hand-eye coordination, and other hand movements, this technology creates a digital profile of you to prevent imposters from accessing your secure information. Behavioral biometrics does not focus on the outcome of your digital activity but rather on the manner in which you enter data or conduct a specific activity, which is then compared to your profile on record to verify your identity. At present the technology is used largely by banks, and research firms predict that by 2023 there will be 2.6 billion biometric payment users.
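To make that verification step concrete, here is a minimal sketch of how a stored behavioral profile might be compared against a fresh login attempt. Everything in it – the feature names, the sample numbers, and the acceptance threshold – is a hypothetical illustration rather than a description of any real biometric product, which would rely on far richer statistical models.

```python
import numpy as np

# Hypothetical behavioral features captured during a session:
# average finger pressure, time between keystrokes (ms),
# device tilt angle (degrees), and swipe speed.

def build_profile(samples: np.ndarray) -> dict:
    """Summarize enrollment samples as a per-feature mean and spread."""
    return {"mean": samples.mean(axis=0), "std": samples.std(axis=0) + 1e-6}

def verify(profile: dict, attempt: np.ndarray, threshold: float = 3.0) -> bool:
    """Accept the attempt if, on average, it sits within `threshold`
    standard deviations of the stored profile (a simple z-score check)."""
    z = np.abs(attempt - profile["mean"]) / profile["std"]
    return bool(z.mean() < threshold)

# Enrollment: several sessions of the legitimate user's behavior.
enrollment = np.array([
    [0.62, 180.0, 12.0, 1.4],
    [0.60, 175.0, 11.5, 1.5],
    [0.64, 185.0, 12.5, 1.3],
])
profile = build_profile(enrollment)

genuine = np.array([0.61, 178.0, 12.2, 1.45])   # matches the stored pattern
imposter = np.array([0.30, 90.0, 25.0, 3.0])    # very different behavior
print(verify(profile, genuine))   # True
print(verify(profile, imposter))  # False
```

The point of the sketch is simply that the system never checks what you typed, only how you typed it – the new attempt is scored against the manner of past behavior.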

Biometric systems necessitate and operate on a direct and thorough relationship between a user and technology. Consequently, privacy is one of the main concerns raised by critics of biometric systems. Because these systems function as digitized reserves of detailed personal information, the possibility of unauthorized parties using them to access stored data is a legitimate fear for many. Depending on how extensive the use of biometric technology becomes, an individual’s biometric profile could be stolen and used against them to gain access to all aspects of their life. Adding to this worry is the potential misuse of an individual’s personal information by biometric facilities. Any inessential use of private information without the individual’s knowledge is intuitively unethical and considered an invasion of privacy, yet the US currently has no law in place requiring apps that record and use biometric data to disclose this form of data collection. If behavioral biometrics is already being used to covertly record and compile user activity, who’s to say how extensive and intrusive unregulated biometric technology will become over time?

Another issue with biometric applications is the possibility of bias against minorities, given the prominent research suggesting that certain races are more reliably recognized by face recognition software than others. A series of extensive independent assessments of face recognition systems conducted by the National Institute of Standards and Technology in 2000, 2002, and 2006 showed that males and older people are more accurately identified than females and younger people. If algorithms are designed without accounting for the possibility of such unintended biases, the resulting systems risk being unethical.

By the same token, people with disabilities may face obstacles when enrolling in biometric databases if they lack the physical characteristics used to register in the system. An ethical biometric system must cater to the needs of all people and allow differently abled and marginalized people a fair opportunity to enroll in biometric databases. Similarly, a lack of standardization among biometric systems that can accommodate geographic differences could compromise the efficiency of biometric applications. Because of this, users could face discrimination and unnecessary obstacles in the authentication process.

Behavioral biometrics is gaining traction as the optimal form of cybersecurity for preventing fraud via identity theft and automated threats, yet the social cost of incorporating technology as invasive and meticulous as this has not been fully explored. The social and ethical consequences the use of behavioral biometrics may have on individuals and society at large deserve significant consideration. It is therefore imperative that developers and users of biometric systems keep in mind the socio-cultural and legal contexts of this type of technology and weigh the benefits of depending on behavioral biometrics for securing personal information against its costs. Failure to do so can not only hinder the success of behavioral biometrics, but can also leave us unequipped to tackle its possible repercussions.

Life on Mars? Cognitive Biases and the Ethics of Belief

NASA satellite image of Mars surface

In 1877, philosopher and mathematician W.K. Clifford published his now-famous essay “The Ethics of Belief,” in which he argued that it is ethically wrong to believe things without sufficient evidence. The paper is noteworthy for its focus on the ethics involved in epistemic questions. An example of the ethics involved in belief became prominent this week as William Romoser, an entomologist from Ohio University, claimed to have found photographic evidence of insect- and reptile-like creatures on the surface of Mars. The response of others was to question whether Romoser had good evidence for his belief. However, the ethics of belief formation is more complicated than Clifford’s account might suggest.

Using photographs sent back by NASA’s Mars rover, Romoser observed insect- and reptile-like forms on the Martian surface. This led him to conclude, “There has been and still is life on Mars. There is apparent diversity among the Martial insect-life fauna which display many features similar to Terran insects.” Much of this conclusion is based on careful observation of the photographs, which contain images of objects, some of which appear to have a head, a thorax, and legs. Romoser claims that he used several criteria in his study, noting the differences between an object and its surroundings, clarity of form, body symmetry, segmentation of body parts, skeletal remains, and comparison of forms in close proximity to each other.

It is difficult to imagine just how significant the discovery of life on other planets would be to our species. Even so, several scientists have spoken out against Romoser’s findings. NASA denies that the photos constitute evidence of alien life, noting that the majority of the scientific community agrees that Mars is not suitable for liquid water or complex life. Following the backlash against Romoser’s findings, the press release from Ohio University was taken down. This result is hardly surprising; the evidence for Romoser’s claim simply is not definitive and does not fit with the other evidence we have about what the surface of Mars is like.

However, several scientists have offered an explanation for the photos. What Romoser saw can be explained by pareidolia, the tendency to perceive a specific, meaningful image in ambiguous visual patterns. Examples include the tendency of many to see objects in clouds, a man in the moon, and even a face on Mars (as captured by the Viking 1 Orbiter in 1976). Because of this tendency, false positive findings become more likely. If someone’s brain is trained to observe beetles and their characteristics, they may well identify visual blobs as beetles and conclude that there are beetles where there are none.

The fact that we are predisposed to cognitive biases means that it is not simply a matter of having evidence for a belief. Romoser believed he had evidence. But various cognitive biases can lead us to conclude that we have evidence when we don’t, or to dismiss evidence when it conflicts with our preferred conclusions. For instance, in her book Social Empiricism, Miriam Solomon discusses several such biases that can affect our decision making. One may, for example, be egocentrically biased toward using one’s own observations and data over those of others.

One may also be biased towards a conclusion that is similar to a conclusion from another domain. In an example provided by Solomon, Alfred Wegener once postulated that continents move through the ocean like icebergs drift through water, based on the fact that icebergs and continents are both large solid masses. Perhaps in just the same way, Romoser inferred from visual similarities between insect legs and shapes in a Martian image not only that there were insects on Mars, but that these creatures’ anatomical parts functioned like those of similar creatures found on Earth, despite the vastly different Martian environment.

There are several other forms of such cognitive biases. There is the traditional confirmation bias, where one focuses on evidence that confirms one’s existing beliefs and ignores evidence that does not. There is the anchoring bias, where one relies too heavily on the first information one hears. There is also the self-serving bias, where one blames external forces when bad things happen but takes credit when good things happen. All of these biases distort our ability to process information.

Not only can such biases affect whether we pay attention to certain evidence or ignore other evidence, but they can even affect what we take to be evidence. For instance, the self-serving bias may lead one to think that one is responsible for a success when in reality one’s role was a coincidence. In this case, one’s actions become evidence for a belief when they would not be taken as evidence otherwise. This complicates the notion that it is unethical to believe something without evidence, because our cognitive biases affect what we count as evidence in the first place.

The ethics of coming to a belief based on evidence can be even more complex. When we deliberate over using information as evidence for something else, or over whether we have enough evidence to warrant a conclusion, we are also susceptible to what psychologist Henry Montgomery calls dominance structuring. This is a tendency to try to create a hierarchy of possible decisions with one dominating the others. It allows us to gain confidence and to become more resolute in our decision making. Through this process we are susceptible to trading off the importance of different pieces of information that we use to help make decisions. This can be done in such a way that once we have found a promising option, we emphasize its strengths and de-emphasize its weaknesses. If this is done without proper critical examination, we can become more and more confident in a decision without legitimate warrant.

In other words, it is possible that even as we become conscious of our biases, we can still decide to use information in improper ways. It is possible that, even in cases like Romoser’s, the decision to settle on a certain conclusion and to publish such findings is the result of such dominance structuring. Sure, we have no good reason to think that the Martian environment could support such life, but those images are so striking; perhaps previous findings were flawed? How can one reject what one sees with one’s own eyes? The photographic evidence must take precedence.

Cognitive biases and dominance structuring are not restricted to science. They impact all forms of reasoning and decision making, and so if it is the case that we have an ethical duty to make sure that we have evidence for our beliefs, then we also have an ethical duty to guard against these tendencies. The importance of such ethical duties is only more apparent in the age of fake news and other efforts to deliberately deceive others on massive scales. Perhaps as a public we should more often ask ourselves questions like “Am I morally obliged to have evidence for my beliefs, and have I done enough to check my own biases to ensure that the evidence is good evidence?”

The Ethics of Homeschooling

photograph of young girl doing school work in room

The National Home Education Research Institute has labelled homeschooling one of the fastest-growing forms of education in the US, with an estimated two to eight percent rise in the population of homeschooled children each year over recent years. Although home-based learning is an old practice, it is now being adopted by a diverse range of Americans. The homeschooling trend extends to countries around the globe, including Brazil, the Philippines, Mexico, France, and Australia.

One of the most commonly cited motivations for homeschooling children is parents’ concern for their child’s safety. Homeschooling provides children with a safe learning environment, shielding them from exposure to possible harms such as physical and psychological abuse, bullying from peers, gun violence, and racism. Exposure to such harms can lead to poor academic performance and long-term self-esteem issues. Recent research suggests that homeschooled students often perform better on tests than other students. Homeschooling can also provide an opportunity for an enhanced parent-child bond, and it is especially convenient for parents of special-needs children requiring attentive care.

Homeschooling was legalized throughout the US in 1993, but the laws governing it vary from state to state. States with the strictest homeschool laws (Massachusetts, New York, Pennsylvania, Rhode Island, and Vermont) mandate annual standardized testing and an annual instruction plan. But oversight in the least restrictive states (Texas, Oklahoma, Indiana, and Iowa) borders on negligence. Iowa, in particular, has no regulations at all and considers notifying the district of homeschooling merely optional.

Even though homeschooling is legal and gaining traction in the US today, it is not immune to skeptics who view it as an inadequate and flawed form of education. The prevailing critique of homeschooling concerns the lack of social interaction between homeschooled children and their peers, an important aspect of a child’s socialization into society. Because most of homeschooled children’s social interactions are limited to adults and family members, they may later struggle to deal with individuals of different backgrounds, belief systems, and opinions. Homeschooling advocates counter this critique by contending that the environment at home is superior to the environment children are exposed to at school, but this raises the question: at what cost?

Another aspect of homeschooling that is a point of contention is the lack of qualification of parents who choose to homeschool their children. While teachers have experience teaching students over the course of years and therefore develop action plans that work best with students, the same cannot be said for most parents who are not teachers by profession. Therefore, while homeschooling parents may have the best intentions for their children, they may be ill-equipped to provide the standard of education offered in public or private schools. Furthermore, the learning facilities offered by parents at home may not be on par with the learning facilities available in schools.

An additional issue that must be taken into consideration is that homeschooled children in states with lax regulations are at increased risk of physical abuse that goes unreported and undetected, as a result of the children being sequestered in their homes. Approximately 95% of child abuse cases are communicated to authorities by public school teachers or officials. By isolating the homeschooled child, unregulated homeschooling allows abusive guardians to keep their abuse unnoticed. Isolating children at home also poses a public health risk: schools require students to be immunized, but this is legally required of homeschooled children in only a few states. Not only are unimmunized children vulnerable to a multitude of diseases, but they also put other children and adults alike at risk of contracting illnesses.

Parental bias is an added complication that homeschooled children must deal with. Parental bias refers to the dogma a homeschooled child may be exposed to on account of being raised solely on their parents’ belief systems. For example, most homeschooled children come from pious, fundamentalist Protestant families. Elaborating on the possible repercussions of unregulated homeschooling, Robin L. West, Professor of Law and Philosophy at Georgetown University Law Center, writes in her article The Harms of Homeschooling, “[…] in much of the country, if you want to keep your kids home from school, or just never send them in the first place, you can. If you want to teach them from nothing but the Bible, you can.” Parental bias can therefore leave an individual with a skewed understanding of the world and can also cause problems outside the home, when they encounter ideologies at odds with their own. If a homeschooled individual was raised in an environment with a homogeneous view of political, social, or cultural issues, and that is the only outlook the child is exposed to, adjusting to an outside world with a plethora of opinions and values could cause internal conflict.

Given that our early experiences in life shape who we become as adults, attending a regular school instead of being homeschooled can better prepare a child to handle the “real world.” Furthermore, with the rising demand for homeschooling, it becomes essential to ask whether a child is better off learning about the “real world” while sheltered by their guardians. If homeschooling is indeed the superior option, perhaps constructing a standard curriculum for homeschooling could address the concerns raised by critics of home-based learning.

Faulty Forensics: Justice, Knowledge, and Bias

image of police tape with police lights in background

In June, Netflix began releasing a series called “Exhibit A,” which debunks one form of forensic science per episode. Dubious forensic techniques have been exposed for decades, yet they have still helped incarcerate countless people. There are a number of reasons this should trouble all of us and motivate real change. One issue that highlights the severity of continuing to rely on debunked forensic techniques is what psychologists call the “CSI effect” – jurors place outsized credence in evidence based on forensic methods. Thus, in a trial scenario, it is not just that some evidence is less reliable than it seems; it is precisely this sort of evidence that jurors tend to cling to in making their decisions.

It is well-documented that, even in circumstances where we believe ourselves to be working with logical facts, we can be swayed by socialized prejudices and biases about historically disenfranchised, stigmatized, and marginalized groups. This is obviously unfortunate because it can perpetuate the unjust circumstances and treatment of such groups. A great deal of the policies in a criminal justice system are put in place in order to create a more objective and just system than would be attained were the suspicions and individual reasoning of particular people with a great deal of power given full rein over crime and punishment. Practices in trials, standards for evidence, protections of citizens’ rights, and other features of the criminal justice system are in place to correct for the ways that injustices are socialized into individual reasoning, and many districts have attempted improvements to combat implicit biases in individual policing as well.

Because humans are socialized with reasoning heuristics influenced by stigma and prejudice, people in the criminal justice system rely on the science of forensics to be more objective than hunches, suspicions, and our sometimes unreliable reasoning. These tools are one method of separating the functioning of our justice system from the injustice of our society. However, doubt has been cast on a number of common forensic methods and the reliability of these tools.

Ten years ago, a report by the National Academy of Sciences stated, “[w]ith the exception of nuclear DNA analysis, . . . no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source.” Blood spatter analysis, bite mark analysis, fingerprint analysis, and, perhaps most famously unreliable, lie detector tests have all had doubt cast on them by scientists. The continued use of these methods in the court of law stacks the deck against defendants. Practitioners of these forensic methods “often believed their methods were reliable and their conclusions were accurate with little or no scientific foundation for their beliefs. As a consequence, judges and jurors were misled about the efficacy of forensic evidence, which too often resulted in wrongful convictions.”

Years ago, a study found that drug-sniffing dogs react to cues reflecting the beliefs of their handlers. In the last two years there have been some efforts to develop training to minimize this bias. This is crucial for the system, for drug-sniffing dogs are meant to be an objective way of detecting substances for further investigation, and, in most states, an alert from such a dog authorizes police to further investigate citizens. If the canines are influenced by their perception of what their handlers think, then they are not a distinct source of information regarding whether potential illegal activity is taking place. If this is the case, the dogs’ actions should not be providing legal permission to search citizens beyond the officer’s suspicion: if the suspicion alone does not warrant a search, then the dog’s behavior does not warrant a search.

The problem with these methods isn’t that they aren’t completely objective or reliable; it is that they are currently playing a role in our criminal justice system that outstrips how objective or reliable they, in fact, are. When they play such a role in a system that so significantly alters lives, and does so at a disproportionate rate for groups that are already marginalized, it is crucial to critically examine them as tools for legitimate investigation and trial.

Racist, Sexist Robots: Prejudice in AI

Black and white photograph of two robots with computer displays

The stereotype of robots and artificial intelligence in science fiction is largely one of a hyper-rational being, unafflicted by the emotions and social infirmities – like biases and prejudices – that impair us weak humans. However, there is reason to revise this picture. The more progress we make with AI, the more a particular problem comes to the fore: the algorithms keep reflecting parts of our worst selves back to us.

In 2017, research provided compelling evidence that AI picks up deeply ingrained race- and gender-based prejudices. Current machine learning techniques rely on algorithms interacting with people, and with text produced by people, in order to better predict correct responses over time. Because the algorithms depend on humans for their standards of correctness, they cannot detect whether a “correct” response is informed by bias or reflects non-prejudicial judgment. Thus, the best-working AI algorithms pick up the racist and sexist underpinnings of our society. Some examples: the words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to math and engineering professions. European names were associated with pleasantness and excellence.
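Research of this kind typically measures bias by comparing distances between word vectors. The following is a toy sketch of that measurement, using made-up four-dimensional vectors purely for illustration; real embeddings such as word2vec or GloVe have hundreds of dimensions and are learned from enormous text corpora.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors (1 = same direction)."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association_gap(word, group_a, group_b):
    """Mean similarity to group A minus mean similarity to group B.
    A positive score means the word leans toward group A."""
    return (np.mean([cosine(word, a) for a in group_a])
            - np.mean([cosine(word, b) for b in group_b]))

# Hypothetical toy embeddings, not real trained vectors.
emb = {
    "woman":    np.array([0.9, 0.1, 0.3, 0.2]),
    "man":      np.array([0.1, 0.9, 0.3, 0.2]),
    "nurse":    np.array([0.8, 0.2, 0.4, 0.1]),
    "engineer": np.array([0.2, 0.8, 0.4, 0.1]),
}

# If "nurse" sits closer to "woman" and "engineer" closer to "man",
# the embedding has absorbed an occupational stereotype from its training text.
print(association_gap(emb["nurse"], [emb["woman"]], [emb["man"]]))     # positive
print(association_gap(emb["engineer"], [emb["woman"]], [emb["man"]]))  # negative
```

Nothing in the arithmetic distinguishes a stereotype from a genuine statistical regularity – the score only reports which words co-occur, which is exactly the problem the researchers identified.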

In order to prevent discrimination in housing, credit, and employment, Facebook has recently been forced to agree to an overhaul of its ad-targeting algorithms. The functions that determined how to target audiences for ads relating to these areas turned out to be racially discriminatory – not by design (the designers of the algorithms certainly didn’t encode racial prejudices) but because of the way they were implemented. The associations learned by the ad-targeting algorithms led to disparities in the advertising of major life resources. It is not enough to program a “neutral” machine learning algorithm (i.e., one that doesn’t begin with biases). As Facebook learned, the AI must have anti-discrimination parameters built in as well. Characterizing just what this amounts to will be an ongoing conversation. For now, the ad-targeting algorithms for these categories cannot take age, zip code, gender, or other legally protected categories into consideration.
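As a concrete illustration of what one such “anti-discrimination parameter” can look like in practice, here is a minimal sketch that strips legally protected attributes from a user record before an ad-targeting model ever sees it. The field names are hypothetical, and – as the Facebook case shows – dropping these columns alone does not eliminate proxy effects, since interests or location can still correlate with protected traits.

```python
from typing import Dict, List

# Hypothetical list of attributes the targeting model is barred from using.
PROTECTED: List[str] = ["age", "gender", "zip_code", "race", "religion"]

def scrub_features(user_record: Dict[str, object]) -> Dict[str, object]:
    """Return a copy of the record with protected attributes removed."""
    return {k: v for k, v in user_record.items() if k not in PROTECTED}

record = {
    "age": 63,
    "gender": "F",
    "zip_code": "46135",
    "interests": ["gardening", "finance"],
    "device": "mobile",
}

print(scrub_features(record))
# {'interests': ['gardening', 'finance'], 'device': 'mobile'}
```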

The issue facing AI is similar to the “wrong kind of reasons” problem in philosophy of action. The AI can’t tell a systemic human bias from a reasoned consensus: both make us converge on an answer, and both lead the algorithm to select whatever we converge on. It is difficult to say what, in principle, the difference between a systemic bias and a reasoned consensus is. It is difficult, in other words, to give the machine learning instrument parameters for telling when the “right kind of reason” supports a response and when the “wrong kind of reason” does.

In philosophy of action, the difficulty of drawing this distinction is illustrated by a case where, for instance, you are offered $50,000 to (sincerely) believe that grass is red. You have a reason to believe, but intuitively it is the wrong kind of reason. Similarly, we could imagine a case where you will be punished unless you (sincerely) desire to eat glass. The offer of money doesn’t show that “grass is red” is true, and the threat doesn’t show that eating glass is choice-worthy. But each somehow promotes the belief or desire. For the AI, a racist or sexist bias leads to a reliable response in the way that the offer and the threat promote a behavior – it is disconnected from a “good” response, but it’s the answer to go with.

For International Women’s Day, Jeanette Winterson suggested that artificial intelligence may have a significantly detrimental effect on women. Women make up 18% of computer science graduates and thus are left out of the design and direction of this new horizon of human development. This exclusion can exacerbate the prejudices that can be inherent in the design of these crucial algorithms, which will only become more critical to more arenas of life.