
Affirmative Action for Whom?

cutout of white man on corporate ladder elevated above peers

Wednesday, Laura Siscoe challenged affirmative action advocates to reflect on their apparent tunnel vision: if what we seek are diverse campuses and workplaces – environments that attract and support students and colleagues who possess a diverse set of skills and approach problems from unique vantage points – then why confine our focus simply to race and gender categories? Surely realizing the intellectual diversity we claim to crave would require looking at characteristics that aren’t simply skin-deep – factors like socio-economic background or country of origin, just to name a few. If diversity in thought really is our goal, it seems there are better ways of getting there.

Laura is no doubt right that much more could be done to diversify campuses and workplaces. But, at minimum, it seems prudent to protect the gains that historically marginalized groups have secured. Time and time again, formal legal equality – that each enjoys identical treatment under the law – has failed to secure equality of opportunity – that each enjoys a level playing field on which to compete. And when policies like race-conscious admissions go away, we revert to the status quo all too quickly. (The NFL’s lack of diversity at the head coach position and the impotence of the Rooney Rule offer a compelling example.)

Critics of affirmative action, however, are quick to characterize such policies as special treatment for the undeserving. But it’s important to separate the myth from the reality. As Jerusalem Demsas writes in The Atlantic, “No one deserves to go to Harvard.” There is no obvious answer to who the best 1200 applicants are in any given year. At some level, there is no meaningful distinction between the different portraits of accomplishment and promise that candidates present – their “combined qualifications.” No magic formula can separate the wheat from the unworthy; there is no chaff. There are grades; there are scores; there are awards; there are trophies; there are essays; there are statements; there are kind words and character references. But there is no mechanical process for impartially weighing these various pieces of evidence and disinterestedly ranking applicants’ relative merit. Nor is there an algorithm that can predict all that a seventeen-year-old will become. (This is perhaps why we should consider employing a lottery system: the infinitesimal differences between candidates coupled with the boundless opportunities for bias suggest it is the height of hubris to insist that the final decision remain with us.)

Contrary to critics, then, affirmative action is not a program for elevating the unqualified – a practice geared to inevitably deliver, in Ilya Shapiro’s unfortunate choice of words, a “lesser black woman.” Ultimately, affirmative action is a policy designed to address disparate impact – the statistical underrepresentation of the historically marginalized in positions of privilege and power. It’s aimed at addressing both real and apparent racial exclusion on the campus and in the workplace.

Those skewed results, however, need not be the product of a deliberate intention to discriminate – a conscious, malicious desire to keep others down. “Institutional networks,” Tom Beauchamp reminds us, “can unintentionally hold back or exclude persons. Hiring by personal friendships and word of mouth are common instances, as are seniority systems.” We gravitate to the familiar, and that inclination produces a familiar result. Affirmative action, then, intervenes to attempt to break that pattern, by – in Charles J. Ogletree Jr.’s words – “affirmatively including the formerly excluded.”

But just how far should these considerations extend? Some, for instance, complain of the inordinate attention paid to something as limited as college admissions. Francisco Toro, writing in Persuasion, has argued that we should stop wringing our hands over which segments of the 1% gain entry. There are far greater inequalities to concern ourselves with than the uber-privileged makeup of next year’s incoming Harvard class. We should be worried about the social mobility of all and not just the lucky few. Let affirmative action in admissions go.

But one of the Court’s fears, from Justice Jackson to Justice O’Connor, concerns colleges’ ability to play kingmaker – to decide who inherits power and all the opportunities and advantages that come with it. They have also worried about where that power goes – that is, which communities benefit when different candidates are crowned. This is most easily witnessed in the life-and-death field of medicine, where there is, according to Georgetown University School of Medicine,

an incredibly well documented body of literature that shows that the best, and indeed perhaps the only way, to give outstanding care to our marginalized communities is to have physicians that look like them, and come from their backgrounds and understand exactly what is going on with them.

Similarly, the Association of American Medical Colleges emphasizes that “diversity literally saves lives by ensuring that the Nation’s increasingly diverse population will be served by healthcare professionals competent to meet its needs.” Minority representation matters, and not simply for the individual applicants themselves. Even the selection process in something as seemingly narrow as college admissions promises larger repercussions downstream.

Given the importance of representation, the gatekeeping function of colleges and employers, and the way discrimination works, some form of intervention seems necessary. And we don’t seem to have a comparable remedy on hand. “Affirmative action is not a perfect social tool,” Beauchamp admits, “but is the best tool yet created as a way of preventing a recurrence of the far worse imperfections of our past policies of segregation and exclusion.” That tool could no doubt stand to be sharpened: gender is a woefully crude measure of disadvantage and race is a poor proxy for deprivation. Still, the tool’s imprecision needn’t mean abandoning the task.

There’s reason why integration remains an indispensable, if demanding, goal. As Elizabeth Anderson claims, “Americans live in a profoundly segregated society, a condition inconsistent with a fully democratic society and with equal opportunity. To achieve the latter goals, we need to desegregate — to integrate, that is — to live together as one body of equal citizens.” We must ensure that everyone can see themselves reflected in our shared social world.

In the end, affirmative action is simply one means by which to accelerate desegregation – to encourage diversification in the positions of power that were formerly restricted. And it was never designed to last forever, as Wes Siscoe recently explored. Affirmative action is merely a stopgap measure – a bridge to carry us where we want to be: a colorblind world where superficial differences no longer act as impediments to advancement. Unfortunately, the equality of opportunity we seek is not yet a reality for all – we have not arrived.

Diversity of What?

photograph of the legs of people waiting for a job interview

Affirmative action privileges individuals who belong to particular social groups in processes of hiring and institutional admission. The practice still enjoys considerable endorsement from those in higher education, despite widespread public disagreement over the issue. The recent U.S. Supreme Court ruling thrust the issue into the limelight, fueling further debate. While there are a variety of moral arguments that can be employed in support of affirmative action, one of the most prominent is that affirmative action policies are morally permissible because they promote more diverse colleges and workplaces.

However, it is clear that not all forms of diversity should be included in the scope of affirmative action policies. We would balk at a college seeking a diverse range of shoe sizes amongst applicants. Similarly, we would scratch our heads at a company choosing employees based on the diversity of their culinary preferences. So which kinds of diversity should affirmative action policies target? In order to answer this question, we must first consider which kinds of diversity colleges and workplaces have reason to promote.

There are different kinds of reasons for action. For the sake of this discussion, let’s consider two distinct sorts of reasons: moral and prudential. Moral reasons are those which apply to individuals or groups regardless of the particular goals of that individual or group. Prudential reasons, on the other hand, apply only when an individual or group has particular goals.

I have a moral reason, for example, to be honest when filing my taxes. However, I might also have a prudential reason to lie, insofar as it would be good for my business’s bottom line to avoid paying a lot of taxes. But moral and prudential reasons need not point us only in conflicting directions. Oftentimes we have both moral and prudential reason to perform a particular action. For instance, I have a moral reason to keep my promises to my friends, and, since I want those friends to remain in my life, I also have a prudential reason to honor them.

The moral/prudential reasons distinction is helpful in determining which kinds of diversity colleges and workplaces have reason to promote. Let’s start with the category of moral reasons. Some claim that our societal institutions bear a moral responsibility to privilege certain groups in admissions and employment. Typically, this argument is applied to racial minorities who have been subjected to historical injustices. If such a moral responsibility really does exist, it provides societal institutions with a moral reason to engage in affirmative action along racial lines.

The challenge for the proponent of this style of argument is both to defend why such a moral responsibility applies to all societal institutions (as opposed to merely some) and to explain why this moral responsibility trumps all other competing responsibilities and reasons that such institutions might have. Put differently, even if institutions have a moral reason to favor racial minorities in admissions and employment, a further argument must be given to show that this moral reason isn’t outweighed by stronger, countervailing reasons against affirmative action.

Now we can turn to the category of prudential reasons. Given the goals of colleges and businesses, what kinds of diversity might they have reason to promote? In the United States, affirmative action tends to be race- and gender-based. But if we consider the underlying goals of universities and employers, it’s not immediately clear why these are the types of diversity they have most reason to promote. Of course, there are important differences in the foundational goals of businesses and institutions of higher education. Colleges and universities are presumably most concerned with the effective education of students (as well as staying financially viable), while businesses and corporations tend to aim at profit maximization.

The spirit of open-minded inquiry that characterizes institutions of higher education seems to provide reason to promote diversity in thought. If the ideal college classroom is a place where ideas are challenged and paradigms are questioned, intellectual diversity can aid in achieving this goal. However, it is not immediately obvious that racial or gender diversity promotes this end, particularly since the majority of individuals advantaged by affirmative action are from similar socio-economic backgrounds. In order to defend affirmative action along the lines of race or gender, a case would have to be made that selecting for these categories is a highly effective way of selecting for intellectual diversity.

A similar point holds true in regards to affirmative action policies put in place by employers. Given the fundamental goal of profit maximization that businesses and corporations possess, these institutions have prudential reason to choose individuals who best help achieve this end. There does exist compelling empirical evidence that more diverse groups tend to outperform less diverse groups when it comes to problem-solving, creativity, and other performance-based metrics. However, these studies tend to demonstrate the upsides of a team possessing diverse skills, rather than diverse racial or gender identities.

Thus, it appears businesses and corporations have prudential reason to create teams with diverse skills, but more argument must be given in order to make the case that selecting for racial or gender diversity is an effective way of achieving this goal. Insofar as proponents of affirmative action seek to defend the practice on the grounds that it promotes diversity, it is imperative we get clear on which kinds of diversity our societal institutions have the most reason to promote.

What Should Disabled Representation Look Like?

photograph of steps leading to office building

Over the course of the last two years, the COVID-19 pandemic has infected millions, with long-haul symptoms of COVID permanently impacting the health of up to 23 million Americans. These long-haul symptoms are expected to have significant impacts on public health as a whole as more and more citizens become disabled. This will likely have significant impacts on the workforce — after all, it is much more difficult to engage in employment when workplace communities tend to be relatively inaccessible.

In light of this problem, we should ask ourselves the following question:

Should we prioritize disabled representation and accommodation in the corporate and political workforce, or should we focus on making local communities more accessible for disabled residents?

The answer to this question will determine how we systematically go about supporting those with disabilities, as well as how, and to what degree, disabled people are integrated into abled societies.

The burdens of ableism — the intentional or unintentional discrimination or lack of accommodation of people with non-normative bodies — often fall on individuals with conditions that prevent them from reaching preconceived notions of normalcy, intelligence, and productivity. For example, those with long COVID might find themselves unable to work and with little access to financial and social support.

Accessibility, conversely, reverses these burdens, both physical and mental, to the benefit of the disabled individual rather than a corporation or political organization.

Adding more disabled people to a work team to meet diversity and inclusion standards is not the same as accessibility, especially if nothing about the work environment is adjusted for that employee.

On average, disabled individuals earn roughly two-thirds the pay of their able-bodied counterparts in nearly every profession, assuming they can do their job at all under their working conditions. Pushing for better pay would be a good step towards combating ableism, but, unfortunately, the federal minimum wage has not increased since 2009. On top of this, the average annual cost of healthcare for a person with a disability is significantly higher ($13,492) than that for a person without ($2,835). Higher wages alone are not enough to overcome this gap.

It is our societal norm to push the economic burden of disability onto the disabled, all while reinventing the accessibility wheel, often just to make able-bodied citizens feel they have done a good thing. In turn, we see inventions such as $33,000 stair-climbing wheelchairs being pushed — inventions that are rarely affordable for the working disabled citizen, let alone for someone who cannot work — in instances where we could simply have built a ramp.

In order for tangible, sustainable progress to be made and for the requirements of justice to be met, we must begin with consistent, local changes to accessibility.

It can be powerful to see such representation in political and business environments, and it’s vital to provide disabled individuals with resources for healthcare, housing, and other basic needs. But change is difficult at the large, systemic level. People often fall through the cracks of bureaucratic guidelines. Given this, small-scale local changes to accessibility might be a better target for achieving change for the disabled community on a national scale.

Of course, whatever changes are made should be done in conversation with disabled members of the community, who will best understand their own experiences and needs. People with disabilities need to be included in the conversation, not made out as some kind of problem for abled people to solve.

This solution morally aligns with Rawls’ theory of justice as fairness, which emphasizes justice for all members of society, regardless of gender, race, ability level, or any other significant difference. It explains this through two separate principles. The first focuses on everyone having “the same indefeasible claim to a fully adequate scheme of equal basic liberties.” This principle takes precedence over the second principle, which states that “social and economic inequalities… are to be attached to offices and positions open to all… to the greatest benefit of the least-advantaged.”

By Rawls’ standards, because of the order of precedence, we should prioritize ensuring disabled citizens’ basic liberties before securing their opportunities for positions of economic and social power.

But wouldn’t access to these positions of power provide a more practical path for guaranteeing basic liberties for all disabled members of society? Shouldn’t the knowledge and representation that disabled individuals bring lead us towards making better policy decisions? According to Enzo Rossi and Olúfémi O. Táíwò in their article on woke capitalism, the main problem with an emphasis on diverse representation is that, while diversification of the upper class is likely under capitalism, the majority of oppressive systems for lower classes are likely to stay the same. In instances like this, where the system has been built against the wishes of such a large minority of people for so long, it may be easier to effect change by working from the bottom up, bringing neighbors together to make their communities more accessible for the people who live there.

Oftentimes, disabled people simply want to indulge in the same small-scale pleasures that their nondisabled counterparts do. When I talk to other disabled individuals about their desires, many turn out to be things their able-bodied counterparts take for granted every day: cooking in their own apartment, navigating public spaces with ease, or even just being able to go to the bank or grocery store. These things become unaffordable luxuries for disabled people in inaccessible areas.

In my own experience with certain disabilities, particularly in my worst flare-ups that necessitated the use of a wheelchair, I just wanted to be able to do very simple things again. Getting to class comfortably, keeping up with peers, or getting to places independently became very hard to achieve, or simply impossible.

Financial independence and some kind of say in societal decisions would certainly have been meaningful and significant, but I really just needed the basics before I could worry about career advancement or systemic change.

Accessibility at this simple scale improves independence for disabled people, and for nondisabled people as well. Any change for disabled people at a local scale would also benefit the larger community. Building better ramps, sidewalks, and doors for people with mobility limitations within homes, educational environments, and recreational areas not only eases the burden of disability, but it also improves quality of life for children, the temporarily disabled, and the elderly in the same community.

Obviously, there is something important to be said about securing basic needs — especially housing, healthcare, food, and clean drinking water — but these, too, would be best handled by consulting local disabled community members to meet their specific requirements.

From here, we could focus on making further investments in walkable community areas and providing adequate physical and social support like housing, basic income, and recreation. We can also make proper changes to our current social support systems, which tend to be dated and ineffective.

The more disabled people’s quality of life improves, the more likely they are to feel supported enough to make large-scale change. What matters at the end of the day is that disabled people are represented in real-life contexts, not just in positions of power.

Representation isn’t just being featured in TV shows or making it into the C-suite; it’s being able to order a coffee at Starbucks, get inside a leasing office to pay rent, or swim at the local pool.

This is not the be-all and end-all solution to ableism, nor is it guaranteed to fix larger structural and political issues around disability, like stigma and economic mobility. But, by focusing on ableism at a local scale in a non-business-oriented fashion, we can improve the quality of life of our neighbors, whether they are experiencing long COVID or living with another disability. Once we have secured basic liberties for disabled folks, then we can worry about corporate pay and representation.

Brian Flores, Equal Opportunity, and Affirmative Action

photograph of NFL emblem on football

We need to talk about Brian Flores’s lawsuit – the ex-Miami Dolphins head coach alleging racial discrimination and, once again, highlighting the lack of diversity in owners’ boxes and front offices around the league. But this isn’t a story about the NFL. It isn’t even about sports. Instead, this is a story about affirmative action; it’s a story about the relationship between equality of opportunity and equality of outcome, between fairness and equity.

The NFL has a problem (okay, the NFL has a few problems). One of the most obvious ways to see this is in representation. African Americans make up 70% of the NFL’s player base, but there is only one Black head coach working in the league today. (And there are even fewer Black owners.) While any result that fails to produce absolute statistical proportionality need not suggest nefarious intent, the degree to which these figures diverge warrants at least a raised eyebrow. It’s difficult to explain why so few Black players make the transition from the field to the front office. You’d think that at least some of the skills that made for a stand-out player might also translate into proficiency with X’s and O’s. More generally, you’d expect that the same interest and commitment that leads so many African Americans to play the game at a professional level would produce a corresponding number of others deeply invested in coaching or managing.

Enter: the Rooney Rule. In an attempt to shake up this monochrome landscape, league officials implemented a policy requiring teams to interview at least one (now two) minority candidates for any head coaching vacancy (now coordinator positions as well). The hope was that by guaranteeing that a more diverse pool of finalists gets the opportunity to make their pitch, diversity in the coaching ranks would soon follow. It was assumed that all these candidates needed was to be given the chance to change hearts and minds in person. At long last, progress might finally be made in erasing the vast differences in the way white and non-white coaches have historically been evaluated.

The details of Flores’s lawsuit confirm that no such revolution has come to pass. Owners and general managers treat the Rooney Rule as a mere formality – another hoop that must be jumped through, another box that must be ticked. The organizations identified in Flores’s suit scheduled a sit-down as formally required, but apparently couldn’t bring themselves to take him or the interview seriously. The results of their deliberations had been decided long before Flores walked into the room. These executives were playacting, but couldn’t even be bothered to try to disguise it. That said, Flores’s allegations aren’t about a failure of etiquette or good manners; they concern a league that still refuses to acknowledge even the appearance of racial bias, let alone the existence of an actual, pervasive problem. It seems the Rooney Rule may have been doomed from the start; as it turns out, the problem runs much deeper than simply putting a face with a name.

So who – if anyone – might be to blame for the NFL’s present predicament?

A not insignificant number of folks will answer: no one. Brian Flores isn’t owed a head coaching gig. These organizations are free to hire whomever they so choose. Head coaches represent a significant investment of time and resources, and it would be absurd for anyone to dictate to NFL teams who is and is not the most qualified person for the job. Jim Trotter, for example, recently recounted his interchange with an owner who suggested that anybody griping about the lack of representation in the NFL “should go buy their own team and hire who they want to hire.”

Others, meanwhile, will be inclined to point to race-conscious policies (like the Rooney Rule) as the guilty party. To these voices, it seems completely wrong-headed to pick some number out of thin air and then complain when we fail to reach that arbitrary diversity benchmark. Looking at race is precisely what got us into this mess, so surely it’s absolute folly to think that intentionally putting our thumb on the scale could get us out of it.

What’s worse, mandating that teams do their due diligence – and, more specifically, demanding that due diligence take the particular form of race-conscious interviewing practices – reduces people of color to tokens. It’s no wonder Flores reports feeling embarrassed – these folks will say – the Rooney Rule set him up, time and time again, to be treated as nothing more than a courtesy invite. Flores was required to go on performing while everyone else in the room was in on the joke. And we should expect none of that behavior to change, they’ll say, if we continue to force hiring committees to go through the motions when they’ve already made up their minds.

This, critics will tell you, is precisely the trouble with initiatives so enamored with equality of outcome – or equity – where an attempt is made to jerry-rig some result built to suit our preferred optics (say, having management roles more accurately reflect teams’ composition). We shouldn’t focus all our attention on meticulously shaping some preferred result; we can’t elevate some and demote others all according to irrelevant and impersonal considerations based in appearances. Any such effort refuses to appreciate the role of individual choice – of freedom, responsibility, and merit. (Owners don’t want to be told how they have to go about picking a winner; they know exactly what winners look like.) As long as we can maintain the right conditions – a level playing field of equal opportunity where everyone receives a fair shake – then we have no cause to wring our hands over the (mis)perception of unequal outcomes. There’s no need to invoke the dreaded language of “quotas,” and no cause to infringe on decision-makers’ freedom to choose. You simply call the game, deal the cards, and let the chips fall where they may.

Brian Flores’s lawsuit, however, insists that the deck is stacked against him and others like him. Indeed, Flores’s claim is that equality of opportunity does not exist. He’s alleging that he’s been passed over in the coaching carousel specifically because he is Black. Flores supports these claims with his personal experience of sham interviews and by pointing to a double standard evidenced by the accelerated rise of white coaches in comparison to their more accomplished Black counterparts. In essence, Flores argues these experiences and findings (as well as the individual accounts of some 40 other Black coaches, coordinators, and managers) all indicate racial discrimination is an all-too-real force in the NFL. Without an intentional and forceful intervention, business as usual will continue.

Given this fresh round of accusations, the NFL can’t continue to take a hands-off approach to the problem of representation; it clearly isn’t going to work itself out. Even the meager measures the league put in place to support equal opportunity are not being followed. The Rooney Rule has no teeth and seems to have resulted in no tangible gains. In the end, the policy relies on honorable intentions, personal commitments, and good-faith efforts. As Stephen Holder of The Athletic writes,

We just have to come to terms with an undeniable and inconvenient truth: You can encourage and even incentivize people to do the right thing. But what you cannot do is make them want to do the right thing.

The only way things change is if the people in power take the policy seriously, and it’s not clear that the appropriate carrots or sticks exist for encouraging teams to comply with the letter of the law – let alone embrace its spirit. Achieving the desired result demands an alternative approach. At some point outcomes have to matter.

So, where does that leave us? What have we learned? Where do we go from here? It’s difficult to know how to go about balancing two competing convictions: 1) focusing solely on equality of outcomes disrespects individuality, and 2) relying solely on equality of opportunity assumes an unbiased system. Or, perhaps more pointedly: 1) it’s wrong to reduce people solely to their various group identities, but 2) it’s also wrong to fail to appreciate the way people, organizations, and institutions reduce people solely to their various group identities.

There is no obvious way to bridge the chasm between these two commitments. But maybe we could start by acknowledging that it isn’t hopelessly reductive to think that it might be best if, for instance, the next Supreme Court Justice wasn’t another white man; to think that no single group identity is so inherently qualified as to explain an absolute stranglehold on the positions of power and privilege; to think that for only the eighth time in 230+ years it might be best to break with tradition. Because, if the Rooney Rule has taught us anything, it’s that if you don’t ever actually commit to change, it doesn’t ever actually happen.

Ethics and Job Apps: Goodhart’s Law and the Temptation Towards Dishonesty

photograph of candidates waiting for a job interview

In the first post in this series, I discussed a moral issue I ran into as someone running a job search. In this post, I want to explore a moral issue that arose when applying to jobs, namely that the application process encourages a subtle sort of dishonesty.

My goal, when working on job applications, is to get the job. But to get the job, I need to appear to be the best candidate. And here is where the problem arises. I don’t need to be the best candidate; I just need to appear to be the best candidate. And there are things that I can do that help me appear to be the best candidate, whether I’m actually the best candidate or not.

To understand this issue, it will be useful to first look at Goodhart’s law, and then see how it applies to the application process.

Goodhart’s Law

My favorite formulation of Goodhart’s law comes from Marilyn Strathern:

When a measure becomes a target, it ceases to be a good measure. 

To understand what this means, we need to understand what a measure is. Here, we can think about a ‘measure’ as something you use as a proxy to assess how well a process is going. For example, if I go to the doctor they cannot directly test my health. Rather, they can test a bunch of things that act as measures of my health. They can check my weight, my temperature, my blood pressure, my reflexes, etc. If I have a fever, then that is good evidence I’m sick. If I don’t have a fever, that is good evidence I’m healthy. My temperature, then, is a measure of my health.

My temperature is not the same thing as my health. But it is a way to test whether or not I’m healthy.

So what Goodhart’s law says is that when the measure (in this case temperature) becomes a target, it ceases to be a good measure. What would it mean for it to become a target? Well, my temperature would be a target if I started to take steps to make sure my temperature remains normal.

Suppose that I don’t want to have a fever, since I don’t want to appear sick, and so, whenever I start to feel sick I take some acetaminophen to stop a fever. Now my temperature has become a target. So what Goodhart’s law says is that now that I’m taking steps to keep my temperature low, my temperature is no longer a good measure of whether I’m sick.

This is similar to the worry that people have about standardized tests. In a world where no one knew about standardized tests, standardized tests would actually be a pretty good measure of how much kids are learning. Students who are better at school will, generally, do better on standardized tests.

But, of course, that is not what happens. Instead, teachers begin to ‘teach to the test.’ If I spend hours and hours teaching my students tricks to pass standardized tests, then of course my students will do better on the test. But that does not mean they have actually learned more useful academic skills.

If teachers are trying to give students the best education possible, then standardized tests are a good measure of that education. But if teachers are instead trying to teach their kids to do well on standardized tests, then standardized tests are no longer a good measure of academic ability.

When standardized tests become a target (i.e., when we teach to the test) then they cease to be a good measure (i.e., a good way to tell how much teachers are teaching).

We can put the point more generally. There are various things we use as ‘proxies’ to assess a process (e.g., temperature to assess if someone is sick). We use these proxies, even though they are not perfect (e.g., you can be sick and not have a fever, or have a fever and not be sick), because they are generally reliable. But because the proxies are not perfect, there are often steps you can take to change the proxy without changing the underlying thing that you are trying to measure (e.g., you can lower people’s temperature directly, without actually curing the disease). And so the stronger the incentive people have to manipulate the proxy, the more likely they are to take steps that change the proxy without changing what the proxy was measuring (e.g., if you had to make it to a meeting where they were doing temperature checks to eliminate sick people, you’d be strongly tempted to take medicine to lower your temperature even if you really are sick). And because people are taking steps to directly change the proxy, the proxy is no longer a good way to test what you are trying to measure (e.g., you won’t be able to screen out sick people from the meeting by taking their temperature).
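To make the dynamic vivid, here is a minimal simulation of the temperature example — a hypothetical Python sketch of my own, not anything from the original argument, with all numbers invented: a fever screen classifies people well so long as no one is gaming it, but once sick people medicate to beat the threshold, the same measurement tells you far less.

```python
# Hypothetical sketch of Goodhart's law: a proxy (temperature) tracks an
# underlying state (sickness) until people start targeting the proxy itself.
import random

random.seed(0)

def temperature(sick, targeted):
    """Simulated thermometer reading in °F."""
    reading = 101.5 if sick else 98.6      # fevers normally track sickness...
    if sick and targeted:
        reading = 98.6                     # ...unless the sick medicate to beat the screen
    return reading + random.gauss(0, 0.4)  # plus a little measurement noise

def screening_accuracy(targeted, trials=10_000, cutoff=100.0):
    """Fraction of people the fever screen classifies correctly."""
    correct = 0
    for _ in range(trials):
        sick = random.random() < 0.3       # assume 30% of people are sick
        flagged = temperature(sick, targeted) >= cutoff
        correct += (flagged == sick)
    return correct / trials

print("accuracy when no one targets the measure:", screening_accuracy(targeted=False))
print("accuracy once the measure is targeted:   ", screening_accuracy(targeted=True))
```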

The thing is, Goodhart’s law explains a common moral temptation we all face: the temptation to prioritize appearances.

Take, as an example, an issue that comes up in bioethics. Hospitals have a huge financial incentive to do well on various metrics and ratings. One measure is what percentage of patients die in the hospital. In general, the more a hospital contributes to public health, the lower the percentage of patients who will die there. And indeed, there are all sorts of ways a hospital might improve its care, which would also mean more people survive (it might adopt better cleaning norms, it might increase the number of doctors on shift, it might speed up the triage process in the emergency room, etc.). But there are also ways that a hospital could improve its numbers that would not involve improving care. For example, hospitals might refuse to admit really sick patients (who are more likely to die). Here the hospital would increase the percentage of its patients who survive, but would do so by actually giving worse overall care. The problem is, this actually seems to happen.

Student Evaluations?

So how does Goodhart’s law apply to my job applications?

Well, there is an ever-present temptation to do things that make me appear to be a better job candidate, irrespective of whether I am the better candidate.

The most well-known example of this is student course evaluations. One of the ways that academic search committees assess how good a teacher I will be is by looking at my student evaluations. At the end of each semester, students rate how good the class was and how good I was as a teacher.

Now, there are two ways to improve my student evaluations. First, I can actually improve my teaching. I can make my class better, so that students get more out of it. Or… I can make students like my class more in ways that have nothing to do with how much they actually learn. For example, students who do well in a class tend to rate it more highly. So by lowering my grading standards, I can improve my student evaluations.

Similarly, there are various teaching techniques (such as interleaving) which studies show are more effective at teaching. But studies also show that students rate them as less effective. Why? Because the techniques force the students to put in a lot of effort. Because the techniques make learning difficult, the students ‘feel like’ they are not learning as much. (Here is a nice video introducing this problem.)

One particularly disturbing study drives this point home. At the U.S. Air Force Academy, students are required to take Calculus I and Calculus II. They are required to take Calculus II even if they do very poorly in the first class (you can’t get out of it by becoming a humanities major). The cool thing about this data is that all students take the same exams which are independently graded (so there is no chance of lenient professors artificially boosting grades).

So what did the researchers find when they compared student evaluations and student performance? Well, if you just look at Calculus I, the results are what you’d naturally expect. Some professors were rated highly by students, and students in those classes outperformed the students of other teachers on the final exam. It seems, then, that the top-rated teachers did the best job teaching students.

However, you get a very different result if you then look at Calculus II. There, what the researchers found is that the students who did the best in Calculus I (and who had the top-rated teachers) did the worst in Calculus II.

The researchers conclude that “our results show that student evaluations reward professors who increase achievement in the contemporaneous course being taught, not those who increase deep learning.” Popular teachers are those who ‘teach to the test,’ who give students tricks to help them answer the immediate questions they will face. Teachers who actually force students to do the hard work of understanding the material receive worse evaluations because students find the teaching more difficult and less intuitive. And because difficult, unintuitive learning is what is actually required to learn material deeply, there is an inverse correlation between student ratings and student learning.

Student evaluations are intended to be a measure of teaching competence. However, because I know they are used as a measure of teaching competence, there is constant temptation to treat them as a target – to do things that I know will improve my evaluations, but not actually improve my teaching.

Generalizing the Problem

Student evaluations are one example of this, but they are not the only one. There are tons of ways that measures become targets for job applicants. Take, for example, my cover letter.

For each job I apply for, I write a customized cover letter in which I explain why I’d be a good fit for the job. This cover letter is supposed to be a measure of ‘fit’. The search committee looks at the letter to see if my priorities line up with the priorities of the job.

The problem, however, is that I change around parts of my cover letter to fit what I think the search committee is looking for. My interests are wide-ranging, and my interests are likely to remain wide-ranging. But in my cover letters I don’t emphasize all of these things to the extent of my actual interest. In jobs in normative ethics, I focus my cover letter on my work in normative ethics. For teaching jobs, I focus on my teaching. In other words, I write my cover letter to try and make it look like my interest concentrations match up with what the search committee is looking for.

My cover letters become a target. But because they become a target, they cease to be a good measure.

Another example is anything people do just so that they can reference it in their applications. If a school wants a teacher who cares about diversity, then they may want to hire someone involved in their local Minorities and Philosophy chapter. But, of course, they don’t want to hire someone involved in that chapter just so that they appear to care about diversity.

Similarly, if a school wants to hire someone interested in the ethics of technology, they don’t want to hire someone who wrote a paper on AI ethics just so that they can appear competitive for technology ethics jobs.

Anytime someone does something just for appearances, they are targeting the measure. And by targeting the measure, they damage the measure itself.

Is This a Moral Problem?

As a job applicant, I face a strong temptation to ‘target the measure.’ I am tempted to improve how good an applicant I appear to be, and not how good an applicant I am.

When I give in to that temptation, am I doing something morally wrong? I think so: pursuing appearances is a form of dishonesty. It’s not exactly a lie. I might really think I’m the best applicant for the job. In fact, I might be the best applicant for the job. So it’s not that by pursuing appearances I’m trying to give the other person a false belief. But even when I’m trying to get them to believe something true, I’m presenting the wrong evidence for that true conclusion.

Let’s consider three versions of our temperature example.

Case 1: Suppose as a kid I wanted to stay home from school. Thus, I ‘fake’ a fever by sticking the thermometer in 100ºF water before showing it to my mom. My mom is using ‘what the thermometer says’ as a measure of whether I’m sick. I take advantage of that by targeting the measure, and thus create a misleading result.

Clearly what I did was dishonest. I was creating misleading evidence to get my mom to falsely believe I was sick. But the dishonesty does not depend on the fact that I’m healthy.

Case 2: Suppose I really think I’m sick (and that I really am sick), but I know my mom won’t believe me unless I have a fever. And again I stick the thermometer in 100ºF water before showing it to my mom.

Here I’m trying to get my mom to believe something true, namely that I’m sick (just as in the application where I’m trying to get the search committee to reach the true conclusion that I’m the best person for the job). But still it’s dishonest. One way to see this is that the evidence (what the thermometer says) only leads to the conclusion through a false belief (namely that I have a fever). But the dishonesty does not depend on that false belief.

Case 3: Suppose I know both that I am sick and that my mom won’t believe me unless I have a fever. I don’t want to trick her with the false thermometer result, and so instead I take a pill that will raise my temperature by a few degrees, thereby giving myself a fever.

Here my mom will look at the evidence (what the thermometer says), conclude I have a fever (which is true), and so conclude I am sick (which is also true). But still what I did was dishonest. It was not dishonest because it brought about a false belief, but because it targeted the measure. It’s dishonest because I’m giving ‘bad evidence’ for my true conclusion. I’m getting my mom to believe something true, but doing so by manipulation. I’m weaponizing her own ‘epistemic processes’ against her.

Now, this third case seems structurally similar to all the various steps people take to improve their ‘appearance’ as a job applicant. Those steps all ‘target the measure’ in a way that damages the sort of evidential support the measure is supposed to provide.

It seems clear that honesty requires that I not take steps to target my student evaluations directly. Similarly, it would be dishonest to put extra effort into classes that will be observed by my letter writers. I recognize that it is morally important to avoid this sort of manipulation. If I’m going to give end-of-semester extra credit, for instance, I will wait until after evaluations are done just to make sure I’m not tempted to give that extra credit as a way to boost my evaluations.

But those are the (comparatively) easy temptations to avoid. It’s easy to not do something just to make yourself appear better. What is much harder is being equally willing to do something even knowing it will make me appear worse. For example, there are times when I’ve avoided giving certain really difficult assignments or covering certain controversial topics that I think probably would have been educationally best, because I thought there was a chance they might negatively affect my teaching evaluations. It’s much easier to avoid doing something for a bad reason than it is to avoid refraining from something for a bad reason.

Conclusion

Once you start noticing this temptation to ‘play to appearances’ you start to notice it everywhere. In this way it’s like the vice of vainglory.

In fact, you start to notice that it might be at play in the very post you write about the problem. If a potential employer is reading this piece, I expect it reflects well on me. I think it gives the (I hope, true) impression that I try to be unusually scrupulous about my application materials. That is not necessarily dishonest, but it is dishonest if I would not have written this piece except to give that impression. So is that the real reason I wrote it?

Honestly, I’m not sure. I don’t think so, but self-knowledge is hard for us ordinary non-saintly people. (Though I’ll leave that topic for a future post.)

Ethics and Job Apps: Why Use Lotteries?

photograph of lottery balls coming out of machine

This semester I’ve been 1) applying for jobs, and 2) running a job search to select a team of undergraduate researchers. This has resulted in a curious experience. As an employer, I’ve been tempted to use various techniques in running my job search that, as an applicant, I’ve found myself lamenting. Similarly, as an applicant, I’ve made changes to my application materials designed to frustrate those very purposes I have as an employer.

The source of the experience is that the incentives of search committees and the incentives of job applicants don’t align. As an employer, my goal is to select the best candidate for the job; as an applicant, my goal is to get a job, whether I’m the best candidate or not.

As an employer, I want to minimize the amount of work it takes for me to find a dedicated employee. Thus, as an employer, I’m inclined to add ‘hoops’ to the application process: by requiring applicants to jump through those hoops, I make sure I only look through the applications of those who are really interested in the job. But as an applicant, my goal is to minimize the amount of time I spend on each application. Thus, I am frustrated with job applications that require me to develop customized materials.

In this post, I want to do three things. First, I want to describe one central problem I see with application systems — what I will refer to as the ‘treadmill problem.’ Second, I want to propose a solution to this problem — namely the use of lotteries to select candidates. Third, I want to address an objection employers might have to lotteries — namely that it lowers the average quality of an employer’s hires.

Part I—The Treadmill Problem

As a job applicant, I care about the quality of my application materials. But I don’t care about the quality intrinsically. Rather, I care about the quality in relation to the quality of other applications. Application quality is a good, but it is a positional good. What matters is how strong my applications are in comparison to everyone else.

Take as an analogy the value of height while watching a sports game. If I want to see what is going on, it’s not important just to be tall, rather it’s important to be taller than others. If everyone is sitting down, I can see better if I stand up. But if everyone stands up, I can’t see any better than when I started. Now I’ll need to stand on my tiptoes. And if everyone else does the same, then I’m again right back where I started.

Except, I’m not quite back where I started. Originally everyone was sitting comfortably. Now everyone is craning uncomfortably on their tiptoes, but no one can see any better than when we began.

Job applications work in a similar way. Employers, ideally, hire whichever candidate’s application is best. Suppose every applicant just spends a single hour pulling together application materials. The result is that no application is very good, but some are better than others. In general, the better candidates will have somewhat better applications, but the correlation will be imperfect (since the skills of being good at philosophy only imperfectly correlate with the skills of being good at writing application materials).

Now, as an applicant, I realize that I could put in a few hours polishing my application materials — nudging out ahead of other candidates. Thus, I have a reason to spend time polishing.

But everyone else realizes the same thing. So, everyone spends a few hours polishing their materials. And so now the result is that every application is a bit better, but still with some clearly better than others. Once again, in general, the better candidates will have somewhat better applications, but the correlation will remain imperfect.

Of course, everyone spending a few extra hours on applications is not so bad. Except that the same incentive structure iterates. Everyone has reason to spend ten hours polishing, now fifteen hours polishing. Everyone has reason to ask friends to look over their materials, now everyone has reason to hire a job application consultant. Every applicant is stuck in an arms race with every other, but this arms race does not create any new jobs. So, in the end, no one is better off than if everyone could have just agreed to an armistice at the beginning.

Job applicants are left on a treadmill: everyone must keep running faster and faster just to stay in place. If you ever stop running, you will slide off the back of the machine. So, you must keep running faster and faster, but like the Red Queen in Lewis Carroll’s Through the Looking Glass, you never actually get anywhere.

Of course, not all arms races are bad. A similar arms race exists for academic journal publications. Some top journals have a limited number of article slots. If one article gets published, another article does not. Thus, every author is in an arms race with every other. Each person is trying to make sure their work is better than everyone else’s.

But in the case of research, there is a positive benefit to the arms race. The quality of philosophical research goes up. That is because while the quality of my research is a positional good as far as my ability to get published, it is a non-positional good in its contribution to philosophy. If every philosophy article is better, then the philosophical community is, as a whole, better off. But the same is not true of job application materials. No large positive externality is created by everyone competing to polish their cover letters.

There may be some positive externalities to the arms race. Graduate students might do better research in order to get better publications. Graduate students might volunteer more of their time in professional service in order to bolster their CV.

But even if parts of the arms race have positive externalities, many other parts do not. And there is a high opportunity cost to the time wasted in the arms race. This is a cost paid by applicants, who have less time with friends and family. And a cost paid by the profession, as people spend less time teaching, writing, and helping the community in ways that don’t contribute to one’s CV.

This problem is not unique to philosophy. Similar problems have been identified in other sorts of applications. One example is grant writing in the sciences. Right now, top scientists must spend a huge amount of their time optimizing grant proposals. One study found that researchers collectively spent a total of 550 working years on grant proposals for Australia’s National Health and Medical Research Council’s 2012 funding round.

This might have a small benefit in leading research to come up with better projects. But most of the time spent in the arms race is expended just so everyone can stay in place. Indeed, there are some reasons to think the arms race actually leads people to develop worse projects, because scientists optimize for grant approval and not scientific output.

Another example is college admissions. Right now, high school students spend huge amounts of time and money preparing for standardized tests like the SAT. But everyone ends up putting in the time just to stay in place. (Except, of course, for those who lack the resources required to put in the time; they just get left behind entirely.)

Part II—The Lottery Solution

Because I was on this treadmill as a job applicant, I didn’t want to force other people onto a treadmill of their own. So, when running my own job search, I decided to modify a solution to the treadmill problem that has been suggested for both grant funding and college admissions. I ran a lottery. I had each applying student complete a short assignment, and then ‘graded’ the assignments on a pass/fail system. I then chose my assistants at random from all those who had demonstrated they would be a good fit. I judged who was a good fit. I didn’t try to judge, of those who were good fits, who fit best.

This allowed students to step off the treadmill. Students didn’t need to write the ‘best’ application. They just needed an application that showed they would be a good fit for the project.

It seems to me that it would be best if philosophy departments similarly made hiring decisions based on a lottery. Hiring committees would go through and assess which candidates they think are a good fit. Then, they would use a lottery system to decide who is selected for the job.

The details would need to be worked out carefully and identifying the best system would probably require a fair amount of experimentation. For example, it is not clear to me the best way to incorporate interviews into the lottery process.

One possibility would be to interview everyone you think is likely a good fit. This, I expect, would prove logistically overwhelming. A second possibility, and I think the one I favor, would be to use a lottery to select the shortlist of candidates, rather than to select the final candidate. The search committee would go through the applications and identify everyone who looks like a good fit. They would then use a lottery to narrow down to a shortlist of three to five candidates who come out for an interview. While the shortlisted candidates would be placed on the treadmill, a far smaller number of people are subject to the wasted effort. A third possibility would use the lottery to select a single final candidate, and then use an in-person interview merely to confirm the selected candidate really is a good fit. There is a lot of evidence that hiring committees systematically overestimate the evidential weight of interviews, and that this creates tons of statistical noise in hiring decisions (see chapters 11 and 24 in Daniel Kahneman’s book Noise).
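To make the second possibility concrete, here is a short, hypothetical Python sketch — the function, names, and numbers are mine, invented purely for illustration, not a description of any actual search: applications are screened pass/fail for fit, and the shortlist is then drawn at random from everyone who passes.

```python
# Hypothetical sketch of lottery-based shortlisting: screen pass/fail for fit,
# then draw the shortlist at random from those who pass.
import random

def lottery_shortlist(applicants, is_good_fit, shortlist_size=4, seed=None):
    """Return a random shortlist drawn from the applicants judged a good fit."""
    pool = [a for a in applicants if is_good_fit(a)]   # the only evaluative step
    rng = random.Random(seed)
    if len(pool) <= shortlist_size:
        return pool
    return rng.sample(pool, shortlist_size)

# Toy usage with made-up applicants; "fit" stands in for the committee's judgment.
applicants = [{"name": f"Candidate {i}", "fit": i % 3 != 0} for i in range(20)]
shortlist = lottery_shortlist(applicants, is_good_fit=lambda a: a["fit"], seed=42)
print([a["name"] for a in shortlist])
```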

Assuming the obstacles could be overcome, however, lotteries would have an important benefit in going some way towards breaking the treadmill.

There are several other benefits as well.

  • Lotteries would decrease the influence of bias on hiring decisions. Implicit bias tends to make a difference in close decisions. Thus, bias is more likely to flip a first and second choice than it is to keep someone off the shortlist in the first place.
  • Lotteries would decrease the influence of networking, and so go some way towards democratizing hiring. At most, an in-network connection will get someone into the lottery; it won’t increase their chance of winning it.
  • Lotteries would create a more transparent way to integrate hiring preferences. A department might prefer to hire someone who can teach bioethics, or might prefer to hire a female philosopher, but not want to restrict the search to people who meet such criteria. One way to integrate such preferences more rigorously would be to weight candidates in the lottery explicitly by those criteria (see the sketch after this list).
  • Lotteries could decrease intra-departmental hiring drama. It is often difficult to get everyone to agree on a single best candidate. It is generally not too difficult to get everyone to agree on a set of candidates, all of whom are considered a good fit.
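Here is a rough sketch, again in Python, of how a weighted lottery might draw a shortlist from a pool the committee has already judged to be good fits. The candidates, the bioethics flag, and the weight of 2.0 are hypothetical; the point is only that a preference can raise a candidate’s odds without becoming a strict filter.

    import random

    # Illustrative pool: everyone here has already been judged a good fit.
    good_fits = [
        {"name": "Candidate 1", "teaches_bioethics": True},
        {"name": "Candidate 2", "teaches_bioethics": False},
        {"name": "Candidate 3", "teaches_bioethics": True},
        {"name": "Candidate 4", "teaches_bioethics": False},
        {"name": "Candidate 5", "teaches_bioethics": False},
    ]

    def lottery_weight(candidate):
        # Preferred (but not required) criteria get extra weight.
        return 2.0 if candidate["teaches_bioethics"] else 1.0

    # Draw a shortlist one candidate at a time so no one is picked twice.
    shortlist = []
    pool = list(good_fits)
    for _ in range(3):  # shortlist of three for the interview stage
        weights = [lottery_weight(c) for c in pool]
        pick = random.choices(pool, weights=weights, k=1)[0]
        shortlist.append(pick["name"])
        pool.remove(pick)

    print(shortlist)

Used this way, the lottery implements the second option described above: chance determines who reaches the interview stage, while the committee’s judgment still determines who is eligible to be drawn at all.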

Part III—The Accuracy Drawback

While these advantages accrue to applicants and to the philosophical community, employers might not like a lottery system. The problem for employers is that a lottery will decrease the average quality of hires.

A lottery system means you should expect to hire the average candidate among those who meet the ‘good fit’ criteria. Thus, as long as trying to pick the best candidate results in a hire who is at least above average, the expected quality of the hire goes down with a lottery.

However, while there is something to this point, it is weaker than most people think. That is because humans tend to systematically overestimate the reliability of their own judgment. When you look at the empirical literature, a pattern emerges: human judgment has a fair degree of reliability, but most of that reliability comes from identifying the ‘bad fits.’

Consider science grants. Multiple studies have compared the scores that grant proposals receive to the eventual impact of the funded research (as measured by future citations). Scores do correlate with research impact, but almost all of that effect is explained by the worst-performing proposals receiving low scores. If you restrict attention to the good proposals, reviewers are terrible at judging which of them are actually best. Similarly, while there is general agreement about which proposals are good and which are bad, evaluators rarely agree about which are best.

A similar pattern emerges in college admissions. Admissions officers can predict who is likely to do poorly in school, but they can’t reliably predict which of the good students will do best.

Humans are fairly good at judging which candidates would make a good fit. We are bad at judging which of the good-fit candidates would actually be best. Thus, most of the benefit of human judgment comes at the level of identifying the set of candidates who would make a good fit, not at the level of deciding between those candidates. This, in turn, suggests that the cost to employers of instituting a lottery system is much smaller than we generally appreciate.

Of course, I doubt I’ll convince people to immediately use lotteries for major, consequential decisions. So, for now, I’ll suggest trying a lottery system on smaller, less consequential ones. If you are a graduate department, select half your incoming class the traditional way and half by a lottery among those who seem like a good fit. Don’t tell faculty which students are which, and I expect that several years later it will be clear that the lottery system works just as well. Or, if, like me, you are hiring undergraduate researchers, try the lottery system. Make small experiments, and let’s see if we can’t buck the status quo.

U-Haul’s Anti-Smoking Workplace Wellness

photograph of overcrowded UHaul rental lot

U-Haul International recently announced that, beginning next month, the company will not hire anyone who uses nicotine products (including smoking cessation products like nicotine gum or patches). The new rule will take effect in the 21 states that do not have smoker protection laws. The terms of employment will require new hires to submit to nicotine screenings, placing limits on employees’ lawful, off-duty conduct.

The truck and trailer rental company has defended the new policy as nothing more than a wellness initiative. U-Haul executive Jessica Lopez has described the new policy as “a responsible step in fostering a culture of wellness at U-Haul, with the goal of helping our Team Members on their health journey.” But as the LA Times points out, “Simply barring people from working at the company doesn’t actually improve anyone’s health.”

U-Haul, however, is not alone, and employer bans on smoking are not new. Alaska Airlines has had a similar policy since 1985, and many hospitals have had nicotine-free hiring policies for over a decade. But there are important distinctions between these past policies and U-Haul’s new one. Alaska Airlines’ ban was, at least in part, justified by the risk and difficulty of smoking on planes and in the areas surrounding airports; that particular work environment simply isn’t conducive to smoking. Meanwhile, hospitals’ change in hiring practices was meant to support the healthy image they were trying to promote and to demonstrate their commitment to patient health.

Interestingly (and importantly), U-Haul has not defended its new policy as a measure to improve customer experience or employees’ job performance. The (expressed) motivation has centered on corporate paternalism – U-Haul’s policy intends to protect its (prospective) employees’ best interests against those employees’ expressed preferences – and this has significant implications. This isn’t like screening for illicit drugs or forbidding drinking on the job. As Professor Harris Freeman notes, it “makes sense to make sure people are not intoxicated while working … there can be problems with safety, problems with productivity.” But in prohibiting nicotine use, U-Haul “seems like they’re making a decision that doesn’t directly affect someone’s work performance.” Unlike Alaska Airlines or Cleveland Clinic,

“This is employers exercising a wide latitude of discretion and control over workers’ lives that have nothing to do with their own business interests. Absent some kind of rationale by the employer that certain kind of drug use impacts job performance, the idea of telling people that they can’t take a job because they use nicotine is unduly intrusive into the personal affairs of workers.”

Similarly, the ACLU has argued that hiring policies like these amount to “lifestyle discrimination” and represent an invasion of privacy whereby “employers are using the power of the paycheck to tell their employees what they can and cannot do in the privacy of their own homes.” This worry is further compounded by the fact that,

“Virtually every lifestyle choice we make has some health-related consequence. Where do we draw the line as to what an employer can regulate? Should an employer be able to forbid an employee from going skiing? or riding a bicycle? or sunbathing on a Saturday afternoon? All of these activities entail a health risk. The real issue here is the right of individuals to lead the lives they choose. It is very important that we preserve the distinction between company time and the sanctity of an employee’s private life. Employers should not be permitted to regulate our lives 24 hours a day, seven days a week.”

Nicotine-free hiring policies, or practices that levy surcharges on employees who smoke, tend to rely heavily on the notion of individual responsibility: employees should be held accountable for the financial burden that their personal choices and behaviors place on their employers and fellow employees. But these convictions seem to ignore the fact that smoking is highly addictive and that 88% of smokers formed the habit before they were 18. Given this, the question of accountability cannot be settled so cleanly.

Apart from concerns about privacy or questions of individual responsibility, employment bans on smokers present a problem for equality of opportunity. According to the CDC, about 14 percent of adults in the U.S. smoke cigarettes. But smokers are not evenly distributed across socioeconomic and racial groups. For instance, half of unemployed people smoke; 42% of American Indian or Alaska Native adults smoke; 32% of adults with less than a high school education smoke; and 36% of Americans living below the federal poverty line are smokers. It’s not hard to see that nicotine-free hiring practices disproportionately burden vulnerable populations who are already greatly disadvantaged. U-Haul’s low-wage, physical-labor jobs, from maintenance workers to truck drivers to janitors, are closed off to those who may need them most (on grounds that have nothing to do with a candidate’s ability to perform job-related tasks).

This is no small thing; the Phoenix-based moving-equipment and storage-unit company employs roughly 4,000 people in Arizona and 30,000 across the U.S. and Canada. Lopez has claimed that “Taking care of our team members is the primary focus and goal” and that decreasing healthcare costs is merely “a bonus,” but it’s hard to separate the two. A recent Ohio State University study estimated the cost that employees who smoke pose to employers: added insurance costs, together with the productivity lost to smoke breaks and increased sick time, amounted to nearly $6,000 per smoking employee annually. Clearly, employee health, insurance costs, and worker output are all linked, and all contribute directly to a company’s profitability. The question is who should have to pay the cost of the most preventable cause of cancer and lung disease: employers or employees?

It may be that the real villain here is employer-sponsored insurance. If healthcare were decoupled from employment, companies like U-Haul might be less invested in meddling with their employees’ off-duty choices; they would have much less skin in the game if their employees’ behaviors weren’t so intimately tied to the company’s bottom line. Unless healthcare in the U.S. changes, we may be destined to constantly police the line separating our private lives from our day jobs.

Should EpiPens be as Expensive as iPhones?

The EpiPen price controversy has been in the news for over a month now. For those not aware of what I am referring to, here is a short recap. In 2007, a single EpiPen, a device for injecting a drug that reverses severe allergic reactions, cost about $47, according to an August 25, 2016 NPR article. By this summer, the price of a single EpiPen had risen to $284. What’s more, EpiPens are no longer available as single pens but only as double packs, so the price to fill an EpiPen prescription now tops $600.
