Why Care About Economic Inequality?

photograph of skyscrapers behind a favela

Imagine a society where everyone is economically flourishing. Beyond the basic goods required for subsistence, individuals have access to a host of luxury goods and services as well. Opportunities currently only afforded to the upper middle class or wealthy, such as world travel, home ownership, and access to cutting-edge medical care, are accessible to all within this imaginary society. Despite this, there remains radical economic inequality. The top one percent of society’s earners have exponentially more financial resources than the rest of the population. However, the only material differences that result from this inequality are the capacity to own a multitude of vacation homes instead of one, the ability to travel to space instead of merely around the globe, and the ability to pass down greater amounts of wealth to one’s descendants.

For the purposes of this article, the salient point is not whether such a society is an actual economic possibility, but whether the kind of economic inequality present in this society constitutes an injustice. What gives many reason to suspect something has gone morally awry in societies with massive economic disparity is the suffering of the lower classes, including the inability of many to meet basic needs. It’s at least plausible that a society where everyone can fulfill both their basic needs and many of their wants can tolerate significant disparity between the economically worst off and the best off without any injustice arising.

Thus, let’s grant for the sake of argument that economic inequality, even radical inequality, is not intrinsically unjust. This simply means that a disproportionate economic distribution amongst individuals can be just. Do we still have reason to care about large-scale inequality, even in cases where the overall economic distribution is just? This article outlines a few reasons why we might still have strong reason to avoid severe economic inequality.

One such reason concerns the psychological impacts of economic inequality. Studies have shown that as the disparity between the top 1% and the rest of the population grows, there is also an increase in negative emotional experiences across the population. Additionally, the average person’s self-reported life satisfaction tends to decline in response to growing inequality. There are, of course, immense complications when trying to draw causal inferences at this scale. For instance, one outstanding theoretical question is whether it’s actual economic inequality that generates negative psychological consequences, or rather the mere perception of inequality that does this. Despite these kinds of ambiguities in the data, there remains significant evidence suggesting severe economic inequality comes with certain psychological and emotional costs that are worth further evaluation.

Another non-justice-based reason to care about economic inequality concerns its link to social trust. One way of glossing the notion of social trust is to say it involves the propensity of individuals to assume basically good intentions in other people, groups, and societal institutions. Social trust is an invaluable resource for political communities, as it engenders a number of benefits, including greater governmental efficiency and improvements to public health. The very notion of democratic liberalism is at least partially built on the possibility of social trust amongst very diverse groups of people.

There is some evidence pointing to economic inequality as a detriment to social trust. At least within the context of the United States, there is particularly strong evidence that economic inequality causes those with less education and/or those who fall within the bottom third of earners to experience less social trust. Others cast doubt on this conclusion, arguing for a merely correlative relationship between rising inequality and declining social trust. For instance, perhaps it’s not inequality itself causing the decline in social trust, but it’s the fact that stark economic inequality frequently coexists with higher levels of governmental/legal corruption. This corruption causes a drop in social trust, which we then misattribute to inequality — or so the thought goes. As with the discussion of the psychological ramifications of inequality, drawing definitive causal connections between these kinds of metrics is complex, but there is at least some compelling evidence that rampant economic inequality and social trust are at odds.

Another reason one might object to extreme levels of economic inequality is due to its potential impacts on the very long-term future. This line of objection is a bit more philosophically dense than the previous two we’ve examined, but the basic thought goes something like this: The negative impacts of economic inequality might compound with time, resulting in an increase in certain existential risks. Such risks are those severe enough to threaten the continuation of humanity, including environmental disasters, global warfare, and the misapplication of AI.

To paint a clearer picture of how inequality might increase certain existential risks, we can consider a concrete example. There is some evidence suggesting that increases in inequality cause decreases in the quality of public institutions (e.g., institutions of higher education, governmental agencies, etc.). Such institutions are plausibly highly important in preventing certain existential risks, and thus we have strong reason to care about their quality. This gives us an indirect reason to prevent radical economic inequality. If the trend towards increasing inequality continues in particular societies, it is reasonable to think the damage done to institutions will only continue to accumulate, putting us at greater existential risk.

Whether or not radical economic inequality is intrinsically unjust, it seems we have significant reasons to care about it. Importantly, there may still be countervailing reasons to tolerate stark economic disparity (considerations of economic freedom, overall economic growth, national productivity, etc.). Thus, it is imperative that policy makers weigh the full array of pros and cons when it comes to permitting widespread economic inequality.

Capitalist Humanitarianism with Lucia Hulsether

Ethnographer and historian of religion Lucia Hulsether is on the show today to talk about the strange phenomenon she calls “capitalist humanitarianism.” She studies the ways that corporations attempt to distance themselves from the harms of capitalism by doing things like selling environmentally friendly goods or promoting socially responsible investing.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Lucia Hulsether, Capitalist Humanitarianism

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Single Still” by Blue Dot Sessions

“Capering” by Blue Dot Sessions

Should Work Pay?

Color photograph of Haines Hall at UCLA, a large red brick building with lots of Romanesque arches

“The Department of Chemistry and Biochemistry at UCLA seeks applications for an assistant adjunct professor,” begins a recent job listing, “on a without salary basis. Applicants must understand there will be no compensation for this position.”

The listing has provoked significant backlash. Many academics have condemned the job as exploitative. They have also noted the hypocrisy of UCLA’s stated support of “equality” while expecting a highly qualified candidate with a Ph.D. to work for free. For context, UCLA pays a salary of $4 million to its head men’s basketball coach, Mick Cronin.

UCLA has now responded to the growing criticism, pointing out:

These positions are considered when an individual can realize other benefits from the appointment that advance their scholarship, such as the ability to apply for or maintain grants, mentor students and participate in research that can benefit society. These arrangements are common in academia.

It is certainly true that such arrangements are fairly common in academia. But are they ethical?

The university’s ethical argument is that the unpaid worker receives significant compensation other than pay. For example, having worked at a prestigious university might advance one’s career in the longer term – adding to one’s “career capital.” The implication is that these benefits are significant enough that the unpaid job is not exploitative.

Similar arguments are given by organizations that offer unpaid internships. The training, mentoring, and contacts an intern receives can be extremely valuable to those starting a new career. Some unpaid internships at prestigious companies or international organizations are generally regarded as so valuable for one’s career that they are extremely competitive, sometimes receiving hundreds of applications for each position.

Employers point out that without unpaid internships, there would be fewer internships overall. Companies and organizations simply do not have the money to pay for all these positions. They argue that the right comparison is not between unpaid and paid internships, but between unpaid internships and nothing. This might explain why so many well-known “progressive” organizations offer unpaid positions despite publicly disavowing the practice. For example, the U.N. has famously competitive unpaid internships, as does the U.K.’s Labour Party, a left-wing political party whose manifesto promises to ban unpaid internships, and whose senior members have compared the practice to “modern slavery.” Not long ago, the hashtag #PayUpChuka trended when Chuka Umunna, a Labour Member of Parliament, was found to have hired unpaid interns for year-long periods.

Besides the sheer usefulness of these jobs, there is also a libertarian ethical case for unpaid positions. If the workers are applying for these jobs, they are doing so because they are choosing to. They must think the benefits they receive are worth it. How could it be ethical to ban or prevent workers from taking jobs they want to take? “It shouldn’t even need saying,” writes Madeline Grant, “but no one is forced to do an unpaid internship. If you don’t like them, don’t take one—get a paid job, pull pints, study, go freelance—just don’t allow your personal preferences to interfere with the freedoms of others.”

On the other side of the debate, the opponents of unpaid jobs argue that the practice is inherently exploitative. The first Roman fire brigade was created by Marcus Licinius Crassus:

Crassus created his own brigade of 500 firefighters who rushed to burning buildings at the first cry for help. Upon arriving at the fire, the firefighters did nothing while Crassus bargained over the price of their services with the property owner. If Crassus could not negotiate a satisfactory price, the firefighters simply let the structure burn to the ground.

Any sensible homeowner would accept almost any offer from Crassus, so long as it was less than the value of the property. The homeowner would choose to pay those prices for Crassus’ services. But that doesn’t make it ethical. It was an exploitative practice – the context of the choice matters. Likewise, employers may find workers willing to work without compensation. But that willingness to work without compensation could be a sign of the worker’s desperation, rather than his capacity for autonomous choice. If you need to have a prestigious university like UCLA on your C.V. to have an academic career, and if you can’t get a paid position, then you are forced to take an unpaid adjunct professorship.

Critics of unpaid jobs also point out that such practices deepen economic and social inequality. “While internships are highly valued in the job market,” notes Rakshitha Arni Ravishankar, “research also shows that 43% of internships at for-profit companies are unpaid. As a result, only young people from the most privileged backgrounds end up being eligible for such roles. For those from marginalized communities, this deepens the generational wealth gap and actively obstructs their path to equal opportunity.” Not everyone can afford to work without pay. If these unpaid positions are advantageous, as their defenders claim, then those advantages will tend to go toward those who are already well-off, worsening inequality.

There are also forms of unpaid work which are almost universally seen as ethical: volunteering, for instance. Very few object to someone with some spare time and the willingness to help a charity, contribute to Wikipedia, or clean up a local park. The reason for this is that volunteering generally lacks many of the ethical complications associated with other unpaid jobs and internships. There are exceptions; some volunteer for a line on their C.V. But volunteering tends to be done for altruistic reasons rather than for goods like career capital and social connections. This means that there is less risk of exploitation of the volunteers. Since volunteers do not have to worry about getting a good reference at the end of their volunteering experience, they are also freer to quit if work conditions are unacceptable to them.

On the ethical scale, somewhere between unpaid internships and volunteering are “hidden” forms of unpaid work that tend to be overlooked by economists, politicians, and society more generally. Most cooking, cleaning, shopping, washing, childcare, and caring for the sick and disabled represents unpaid labor.

Few consider these forms of unpaid work as directly unethical to perform or request family members to help carry out. But it is troubling that those who spend their time doing unpaid care work for the sick and disabled are put at a financial disadvantage compared to their peers who choose to take paid forms of work instead. An obvious solution is a “carer’s allowance,” a government payment, paid for by general taxation, to those who spend time each week taking care of others. A very meager version of this allowance (roughly $100/week) already exists in the U.K.

These “hidden” forms of unpaid work also have worrying implications for gender equality, as they are disproportionately performed by women. Despite having near-equal representation in the workforce in many Western countries, women perform the majority of unpaid labor, a phenomenon referred to as the “double burden.” For example, an average English female 15-year-old is expected, over the course of her life, to spend more than two years longer performing unpaid caring work than the average male 15-year-old. This statistic is not an outlier. The Human Development Report, studying 63 countries, found that 31% of women’s time is spent doing unpaid work, as compared to 10% for men. A U.N. report finds that, in developed countries, women spend 3 hours 30 minutes a day on unpaid work and 4 hours 39 minutes on paid work. In comparison, men spend only 1 hour 54 minutes on unpaid work and 5 hours 42 minutes on paid work. Finding a way to make currently unpaid work pay, such as a carer’s allowance, could also be part of the solution to this inequality problem.

Is unpaid work ethical? Yes, no, and maybe. Unpaid work covers a wide range of the ethical spectrum. At one extreme, there are clear cases of unpaid work which are morally unproblematic, such as altruistically volunteering for a charity or cooking yourself a meal. At the other extreme, there are cases where unpaid work is clearly unethical exploitation: cases of work that ought to be paid but where employers take advantage of their workers’ weak bargaining positions to deny them the financial compensation to which they are morally entitled. And many cases of unpaid work fall somewhere between these two extremes. In thinking about these cases, we have no alternative but to look in close detail at the specifics: at the power dynamics between employers and employees, the range and acceptability of the options available to workers, and the implications for equality.

Thinking about Trust with C. Thi Nguyen

Many of us rely heavily on our smartphones and computers. But does it make sense to say we “trust” them? On today’s episode of Examining Ethics, the philosopher C. Thi Nguyen explores the relationship of trust we form with the technology we use. Not only can we trust non-human objects like smartphones; we tend to trust those objects in an unquestioning way, without thinking about it all that much. While this unquestioning trust makes our everyday lives easier, we don’t recognize just how vulnerable we’re making ourselves to large and increasingly powerful corporations.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“The Big Ten” by Blue Dot Sessions

“Lemon and Melon” by Blue Dot Sessions

Ethics and Job Apps: Why Use Lotteries?

photograph of lottery balls coming out of machine

This semester I’ve been 1) applying for jobs, and 2) running a job search to select a team of undergraduate researchers. This has resulted in a curious experience. As an employer, I’ve been tempted to use various techniques in running my job search that, as an applicant, I’ve found myself lamenting. Similarly, as an applicant, I’ve made changes to my application materials designed to frustrate those very purposes I have as an employer.

The source of the experience is that the incentives of search committees and the incentives of job applicants don’t align. As an employer, my goal is to select the best candidate for the job; as an applicant, my goal is to get a job, whether I’m the best candidate or not.

As an employer, I want to minimize the amount of work it takes for me to find a dedicated employee. Thus, as an employer, I’m inclined to add ‘hoops’ to the application process: by requiring applicants to jump through those hoops, I make sure I only look through the applications of those who are really interested in the job. But as an applicant, my goal is to minimize the amount of time I spend on each application. Thus, I am frustrated by job applications that require me to develop customized materials.

In this post, I want to do three things. First, I want to describe one central problem I see with application systems — what I will refer to as the ‘treadmill problem.’ Second, I want to propose a solution to this problem — namely the use of lotteries to select candidates. Third, I want to address an objection employers might have to lotteries — namely that it lowers the average quality of an employer’s hires.

Part I—The Treadmill Problem

As a job applicant, I care about the quality of my application materials. But I don’t care about the quality intrinsically. Rather, I care about the quality in relation to the quality of other applications. Application quality is a good, but it is a positional good. What matters is how strong my applications are in comparison to everyone else.

Take as an analogy the value of height while watching a sports game. If I want to see what is going on, it’s not important just to be tall; rather, it’s important to be taller than others. If everyone is sitting down, I can see better if I stand up. But if everyone stands up, I can’t see any better than when I started. Now I’ll need to stand on my tiptoes. And if everyone else does the same, then I’m again right back where I started.

Except, I’m not quite back where I started. Originally everyone was sitting comfortably. Now everyone is craning uncomfortably on their tiptoes, but no one can see any better than when we began.

Job applications work in a similar way. Employers, ideally, hire whichever candidate’s application is best. Suppose every applicant spends just a single hour pulling together application materials. The result is that no application is very good, but some are better than others. In general, the better candidates will have somewhat better applications, but the correlation will be imperfect (since the skills of being good at philosophy only imperfectly correlate with the skills of writing good application materials).

Now, as an applicant, I realize that I could put in a few hours polishing my application materials — nudging out ahead of other candidates. Thus, I have a reason to spend time polishing.

But everyone else realizes the same thing. So, everyone spends a few hours polishing their materials. And so now the result is that every application is a bit better, but still with some clearly better than others. Once again, in general, the better candidates will have somewhat better applications, but the correlation will remain imperfect.

Of course, everyone spending a few extra hours on applications is not so bad. Except that the same incentive structure iterates. Everyone has reason to spend ten hours polishing, then fifteen. Everyone has reason to ask friends to look over their materials, then to hire a job application consultant. Every applicant is stuck in an arms race with every other, but this arms race does not create any new jobs. So, in the end, no one is better off than if everyone had agreed to an armistice at the beginning.

Job applicants are left on a treadmill: everyone must keep running faster and faster just to stay in place. If you ever stop running, you will slide off the back of the machine. So you must keep running faster and faster, but like the Red Queen in Lewis Carroll’s Through the Looking-Glass, you never actually get anywhere.

Of course, not all arms races are bad. A similar arms race exists for academic journal publications. Some top journals have a limited number of article slots. If one article gets published, another article does not. Thus, every author is in an arms race with every other. Each person is trying to make sure their work is better than everyone else’s.

But in the case of research, there is a positive benefit to the arms race. The quality of philosophical research goes up. That is because while the quality of my research is a positional good as far as my ability to get published, it is a non-positional good in its contribution to philosophy. If every philosophy article is better, then the philosophical community is, as a whole, better off. But the same is not true of job application materials. No large positive externality is created by everyone competing to polish their cover letters.

There may be some positive externalities to the arms race. Graduate students might do better research in order to get better publications. Graduate students might volunteer more of their time in professional service in order to bolster their CV.

But even if parts of the arms race have positive externalities, many other parts do not. And there is a high opportunity cost to the time wasted in the arms race. This is a cost paid by applicants, who have less time with friends and family. And a cost paid by the profession, as people spend less time teaching, writing, and helping the community in ways that don’t contribute to one’s CV.

This problem is not unique to philosophy. Similar problems have been identified in other sorts of applications. One example is grant writing in the sciences. Right now, top scientists must spend a huge amount of their time optimizing grant proposals. One study found that researchers collectively spent a total of 550 working years on grant proposals for Australia’s National Health and Medical Research Council’s 2012 funding round.

This might have a small benefit in leading researchers to come up with better projects. But most of the time spent in the arms race is expended just so everyone can stay in place. Indeed, there are some reasons to think the arms race actually leads people to develop worse projects, because scientists optimize for grant approval rather than scientific output.

Another example is college admissions. Right now, high school students spend huge amounts of time and money preparing for standardized tests like the SAT. But everyone ends up putting in the time just to stay in place. (Except, of course, for those who lack the resources required to put in the time; they just get left behind entirely.)

Part II—The Lottery Solution

Because I was on this treadmill as a job applicant, I didn’t want to force other people onto a treadmill of their own. So, when running my own job search, I decided to adapt a solution to the treadmill problem that has been suggested for both grant funding and college admissions. I ran a lottery. I had each applying student complete a short assignment, and then ‘graded’ the assignments on a pass/fail basis. I then chose my assistants at random from all those who had demonstrated they would be a good fit. I judged who was a good fit; I didn’t try to judge, of those who were good fits, who fit best.

This allowed students to step off the treadmill. Students didn’t need to write the ‘best’ application. They just needed an application that showed they would be a good fit for the project.
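The pass/fail selection procedure described above can be sketched in a few lines of Python. This is only an illustration: the applicant names and the pass/fail rule are hypothetical, and a real search would grade richer materials than a membership check.

```python
import random

def lottery_select(applications, passes_bar, n_hires, seed=None):
    """Grade applications pass/fail, then pick hires uniformly at
    random from everyone who clears the bar; passers are never ranked."""
    rng = random.Random(seed)
    good_fits = [app for app in applications if passes_bar(app)]
    if len(good_fits) <= n_hires:
        return good_fits  # everyone who qualifies is selected
    return rng.sample(good_fits, n_hires)

# Hypothetical example: five of six applicants pass; draw three at random.
applicants = ["Ana", "Ben", "Chen", "Dana", "Eli", "Fay"]
passed = {"Ana", "Chen", "Dana", "Eli", "Fay"}
hires = lottery_select(applicants, lambda a: a in passed, 3, seed=42)
```

The key design choice is that `passes_bar` is the only judgment exercised; once an applicant clears it, extra polish confers no advantage, which is what lets applicants step off the treadmill.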

It seems to me that it would be best if philosophy departments similarly made hiring decisions based on a lottery. Hiring committees would go through and assess which candidates they think are a good fit. Then, they would use a lottery system to decide who is selected for the job.

The details would need to be worked out carefully and identifying the best system would probably require a fair amount of experimentation. For example, it is not clear to me the best way to incorporate interviews into the lottery process.

One possibility would be to interview everyone who seems likely to be a good fit. This, I expect, would prove logistically overwhelming. A second possibility, and the one I favor, would be to use a lottery to select the shortlist of candidates, rather than to select the final candidate. The search committee would go through the applications and identify everyone who looks like a good fit. They would then use a lottery to narrow down to a shortlist of three to five candidates who come out for an interview. While the shortlisted candidates would still be placed on the treadmill, a far smaller number of people would be subject to the wasted effort. A third possibility would use the lottery to select a single final candidate, and then use an in-person interview merely to confirm that the selected candidate really is a good fit. There is a lot of evidence that hiring committees systematically overweight the evidential value of interviews, and that this creates a great deal of statistical noise in hiring decisions (see chapters 11 and 24 of Daniel Kahneman’s book Noise).

Assuming the obstacles could be overcome, however, lotteries would have an important benefit in going some way towards breaking the treadmill.

There are a range of other benefits as well.

  • Lotteries would decrease the influence of bias on hiring decisions. Implicit bias tends to make a difference in close decisions. Thus, bias is more likely to flip a first and second choice than to keep someone off the shortlist in the first place.
  • Lotteries would decrease the influence of networking, and so go some way towards democratizing hiring. At most, an in-network connection will get someone into the lottery; it won’t increase their chance of winning it.
  • Lotteries would create a more transparent way to integrate hiring preferences. A department might prefer to hire someone who can teach bioethics, or might prefer to hire a female philosopher, but not want to restrict the search to people who meet such criteria. One way to integrate such preferences more rigorously would be to explicitly weight candidates in the lottery by those criteria.
  • Lotteries could decrease intradepartmental hiring drama. It is often difficult to get everyone to agree on a single best candidate. It is generally not too difficult to get everyone to agree on a set of candidates, all of whom are considered a good fit.
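The idea of explicitly weighting candidates in the lottery, mentioned in the list above, might be sketched as follows. The candidate names and weights here are hypothetical, chosen only to show the mechanism.

```python
import random

def weighted_lottery(candidates, weights, seed=None):
    """Draw one good-fit candidate, with departmental preferences
    expressed as explicit lottery weights rather than hard cutoffs."""
    rng = random.Random(seed)
    return rng.choices(candidates, weights=weights, k=1)[0]

# Hypothetical example: all three candidates are judged a good fit,
# but the one who can teach bioethics gets double weight.
candidates = ["candidate_A", "candidate_B", "candidate_C"]
weights = [2.0, 1.0, 1.0]  # candidate_A can teach bioethics
pick = weighted_lottery(candidates, weights, seed=7)
```

Because the weights are written down in advance, a preference that might otherwise operate invisibly in committee deliberation becomes an explicit, auditable parameter.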

Part III—The Accuracy Drawback

While these advantages accrue to applicants and the philosophical community, employers might not like a lottery system. The problem for employers is that a lottery will decrease the average quality of hires.

Under a lottery system, you should expect to hire the average candidate among those who meet the ‘good fit’ criteria. Thus, as long as trying to pick the best candidate results in a candidate who is at least above average, the average quality of hires goes down with a lottery.

However, while there is something to this point, it is weaker than most people think. That is because humans tend to systematically overestimate the reliability of their judgment. When you look at the empirical literature, a pattern emerges: human judgment has a fair degree of reliability, but most of that reliability comes from identifying the ‘bad fits.’

Consider science grants. Multiple studies have compared the scores that grant proposals receive to the eventual impact of the research (as measured by future citations). Scores do correlate with research impact, but almost all of that effect is explained by the worst-performing grants getting low scores. If you restrict attention to the good proposals, evaluators are terrible at judging which of them are actually best. Similarly, while there is general agreement about which proposals are good and which are bad, evaluators rarely agree about which proposals are best.

A similar sort of pattern emerges for college admission counselors. Admissions officers can predict who is likely to do poorly in school, but can’t reliably predict which of the good students will do best.

Humans are fairly good at judging which candidates would make a good fit. We are bad at judging which good fit candidates would actually be best. Thus, most of the benefit of human judgment comes at the level of identifying the set of candidates who would make a good fit, not at the level of deciding between those candidates. This, in turn, suggests that the cost to employers of instituting a lottery system is much smaller than we generally appreciate.
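The claim that noisy judgment shrinks the lottery’s cost can be illustrated with a toy simulation. This is a sketch under assumed parameters, not a model of any real hiring process: candidate quality and evaluator error are both drawn from normal distributions, and “good fit” just means a positive judged score.

```python
import random

def simulate(n_candidates=20, judge_noise=1.0, trials=10000, seed=0):
    """Compare the mean true quality of a hire under two rules:
    (1) hire the candidate with the highest (noisy) judged score;
    (2) hire uniformly at random from those judged a 'good fit'."""
    rng = random.Random(seed)
    pick_best_total = 0.0
    lottery_total = 0.0
    for _ in range(trials):
        true_quality = [rng.gauss(0, 1) for _ in range(n_candidates)]
        judged = [q + rng.gauss(0, judge_noise) for q in true_quality]
        # Rule 1: hire whoever the evaluator scores highest.
        top = max(range(n_candidates), key=lambda i: judged[i])
        pick_best_total += true_quality[top]
        # Rule 2: screen out the judged 'bad fits', then draw at random.
        good_fits = [q for q, j in zip(true_quality, judged) if j > 0]
        lottery_total += rng.choice(good_fits) if good_fits else 0.0
    return pick_best_total / trials, lottery_total / trials

best, lottery = simulate()                        # modest judge noise
best_hi, lottery_hi = simulate(judge_noise=3.0)   # heavy judge noise
```

Under these assumptions, picking the top-scored candidate does beat the lottery on average, but the gap narrows as the evaluator’s noise grows, which mirrors the empirical pattern described above: screening out bad fits does most of the work, and fine-grained ranking adds little.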

Of course, I doubt I’ll convince people to immediately use lotteries for major decisions. So, for now, I’ll suggest trying a lottery system for smaller, less consequential decisions. If you are a graduate department, select half your incoming class the traditional way, and half by a lottery of those who seem like a good fit. Don’t tell faculty which are which, and I expect that several years later it will be clear the lottery system works just as well. Or, like me, if you are hiring some undergraduate researchers, try the lottery system. Make small experiments, and let’s see if we can’t buck the status quo.

“Cruel Optimism,” Minimum Wage, and the Good Life

photograph looking up at Statue of Liberty

In early May, executives from the fast casual restaurant Chipotle Mexican Grill announced that the company would be raising its average hourly wage to $15 by the end of June. A few weeks later, Chipotle also announced that its menu prices would be increasing by about four percent to help offset those higher wages (as well as the increasing costs of ingredients). This means that instead of paying, say, $8.00 for a burrito, hungry customers will now instead be expected to pay $8.32 for the same amount of food.

While you might think that such a negligible increase would hardly be worth arguing about, opponents of a minimum wage hike jumped on this story as an example of the supposed economic threat posed by changing federal labor policies. During recent debates in Congress, for example, those resistant to the American Rescue Plan’s original provision to raise the federal minimum wage frequently argued that doing so could disadvantage consumers by causing prices to rise. Furthermore, Chipotle’s news fueled additional complaints about the potential consequences of the Economic Impact Payments authorized in light of the coronavirus pandemic: allegedly, Chipotle must raise its wages so as to entice “lazy” workers away from $300/week unemployment checks.

Nevertheless, despite the cost of burritos rising by a quarter or two, the majority of folks in the United States (just over six out of ten) support raising the federal minimum wage to $15 per hour. As many as 80% think the wage is too low in general, with more than half of surveyed Republicans (the political party most frequently in opposition to raising the minimum wage) agreeing. Multiple states have already implemented higher local minimum wages.

Why, then, do politicians, pundits, and other people continue to spread the rhetoric that minimum wage increases are unpopular and financially risky for average burrito-eaters?

Here’s where I think a little philosophy might help. Often, we are attracted to things (like burritos) because we recognize that they can satisfy a desire for something we presently lack (such as sustenance); by attaining the object of our desire, we can likewise satisfy our needs. Lauren Berlant, the philosopher and cultural critic who recently died of cancer on June 28th, calls this kind of attraction “optimism” because it is typically what drives us to move through the world beyond our own personal spaces in the hopes that our desires will be fulfilled. But, importantly, optimistic experiences in this sense are not always positive or uplifting. Berlant’s work focuses on cases where the things we desire actively harm us but that we nevertheless continue to pursue; calling these phenomena cases of “cruel optimism,” they explain how “optimism is cruel when the object/scene that ignites a sense of possibility actually makes it impossible to attain the expansive transformation for which a person or a people risks striving.” Furthermore, cruel optimism can come about when an attraction gives us one kind of pleasure at the expense of other, more holistic (and fundamental) forms of flourishing.

A key example Berlant gives of “cruel optimism” is the fallacy of the “good life” as something that can be achieved if only one works hard enough; as they explain, “people are trained to think that what they’re doing ought to matter, that they ought to matter, and that if they show up to life in a certain way, they’ll be appreciated for the ways they show up in life, that life will have loyalty to them.” Berlant argues that, as a simple matter of fact, this characterization of “the good life” fails to represent the real world; despite what the American Dream might offer, promises of “upward mobility” or hopes to “lift oneself up by one’s own bootstraps” through hard work and faithfulness have routinely failed to manifest (and are becoming ever more rare).

Nevertheless, emotional (or otherwise affective) appeals to stories about the “good life” can offer a kind of optimistic hope for individuals facing a bleak reality — because this hope is ultimately unattainable, it’s a cruel optimism.

Importantly, Berlant’s schema describes a paradigmatically natural process — there need not be any individual puppetmaster pulling the strings (secretly or blatantly) to motivate people’s commitment to a given case of cruel optimism. However, such a cultural foundation is ripe for abuse by unvirtuous agents or movements interested in selfishly profiting off the unrealistic hopes of others.

We might think of propaganda, then, as a sort of speech act designed to sustain a narrative of cruel optimism. According to Jason Stanley, a key kind of damaging propaganda is “a contribution to public discourse that is presented as an embodiment of certain ideals, yet is of a kind that tends to erode those very ideals.” When a social group’s ideals are eroded into hollowness — when stories about “the good life” perpetuate a functionally unattainable hope — then the propagandistic narratives facilitating this erosion (and, by extension, the vehicles of propaganda spreading these narratives) are morally responsible.

The case of Chipotle arises at the center of several overlapping objects of desire: for some, the neoliberal hope of economic self-sufficiency is threatened by governmental regulations on market prices of commodities like wage labor, as well as by federal mechanisms supporting the unemployed — with the minimum wage and pandemic relief measures both (at least seemingly) relating to this story, it is unsurprising that those optimistic about the promise of neoliberalism interpreted Chipotle as a bellwether for greater problems. Furthermore, consumer price increases, however slight, threaten to damage hopes of achieving one’s own prosperity and wealth. The fact that these hopes are ultimately rather unlikely means that they are cases of cruel optimism; the fact that politicians and news outlets are nevertheless perpetuating them (or at least framing the information in a manner that elides broader conversations about wealth inequity and fair pay) means that those stories could count as cases of propaganda.

And, notably, this is especially true when news outlets are simply repeating information from company press releases, rather than inquiring further about their broader context: for example, rather than raising consumer prices, Chipotle could have instead saved hundreds of millions of dollars in recent months by foregoing executive bonuses and stock buybacks. (It is also worth noting that the states that elected to prematurely freeze pandemic-related unemployment funding, ostensibly to provoke workers to re-enter the labor market, have not seen the hoped-for increase in workforce participation — that is to say, present data suggests that something other than $300/week unemployment checks has contributed to unemployment rates.)

So, in short, because plenty of consumers are bound to cruel optimisms about “the good life,” plenty of executives and other elites can leverage this hope for their own selfish ends. The recent outcry over a burrito restaurant is just one example of how these strings get pulled.

Commodifying Activism

"Nike" by Miguel Vaca licensed under CC BY 2.0 (Via Flickr).

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


Recently, Nike aired an advertisement that sparked a lot of cultural and political buzz. The ad featured professional football player Colin Kaepernick, a man who has become a household name in political discourse through his protest against police brutality, delivering a simple message: “Believe in something, even if it means sacrificing everything.”

Since the airing of this ad, there has been considerable backlash, with Twitter hashtags like #justburnit and #BoycottNike becoming increasingly popular. Despite this response to Nike’s use of such a controversial figure, the value of Nike’s stock has only risen, and sales have increased. Nike’s promotion has helped spread awareness of and support for Kaepernick, but what right do companies with a history like Nike’s have to be champions of social justice?

Nike has a notorious history of utilizing sweatshops and child labor; what’s more, it just signed a new contract with the same league that has collectively barred Kaepernick from playing. This amalgam of good and bad in Nike’s support for social justice raises the question: is it ethical for companies to commodify social and political activism? And what are the effects on our societal norms? In what follows, I will explore how similar ad campaigns have informed their respective social justice movements and whether there is an ethical way to market these movements within a consumerist economy.

Activism within consumerism can play many valuable roles: the increased awareness that marketing campaigns offer represents one of the most powerful ways for a social justice movement to take flight. One prominent example is the #LikeAGirl movement in 2015. In a commercial popularized after its airing during Super Bowl XLIX, participants were asked to perform actions “like a girl.” As Alana Vagionos of the Huffington Post describes the result, “Instead of simply doing these actions, each person weakly reenacted them, by accidentally dropping the ball or slapping instead of punching,” making it clear that in American culture femininity is often synonymous with weakness. As Vagionos notes, the phrase “like a girl” is similar to calling something “gay” — both are used in a derogatory manner. But when little girls were asked to complete the same actions “like a girl,” they did so with vigor, strength, and confidence.

Efforts like this, while ultimately designed to generate profit, can be very productive in shifting public opinion on social issues. According to a case study by D&AD, almost 100 million people viewed the commercial on YouTube alone, and prior to watching the clip, just 19% of 16-24-year-olds had a positive association with the phrase “like a girl.” After watching, however, 76% said they no longer saw the phrase negatively. So, from the standpoint of publicity and raising awareness of the larger issues at play, this type of activism seems fruitful.

However, there are many who object to commodifying activism. While there is potential for positive change, there is also the possibility of further reinforcing inequality and exacerbating damaging societal norms. Using movements like #BlackLivesMatter to promote a new product line or a special offer dilutes the meaning and value of these symbols and covers over systemic power inequalities.

One campaign that demonstrates many of the faults in retail activism is the “NIKE(RED)” campaign put on during the 2010 World Cup. This movement sought to increase awareness and funding for programs that combat the AIDS epidemic through a new line of merchandise emphasizing the color red. But Spring-Serenity Duvall and Matthew C. Guschwan believe that this “retail activism” reinforces colonial norms, asserting in Communication, Culture and Critique that this campaign simplifies an extremely complex global health predicament.

They claim that it further reinforces the way that Western consumers view the people in need of aid. It exacerbates the perceived divide between the aid recipients and the consumers and does nothing to increase solidarity between them. Ultimately, the “NIKE(RED)” movement,

“perpetuat[es] images of hierarchies that privilege Western consumers and marginaliz[es] African peoples whom the organization seeks to aid […] The ‘us’ and ‘them’ dichotomy positions Western consumers as a powerful force and Third World peoples as passively in need of aid. So, a major contradiction within (RED) is that while consumer-based campaigns use rhetorics of unity, they ultimately rely on the individual, private, and personal expenditure of money that does not promote substantial social solidarity.”

Additionally, the simplified view these campaigns perpetuate merely pacifies consumer bases rather than helping to resolve the issue. It breeds ignorance about the power structures in play and obscures the fact that these powerful brands are often contributors to the problem. Guil Louis of the Lawrentian observes that “it seems as if social consciousness has become something that not just these celebrities can commodify, but so too can their sponsors.” The truth of the matter is that when it comes to retail activism, there is always an ulterior motive: the profit-making potential of the issue raised in the advertisement. When meaningful change is a positive externality instead of the primary goal, Louis says that it will “pacify us and make it even more difficult to identify oppressive structures or conditions.”

It is clear that there are both benefits and detriments to this approach to activism, but it is important to be aware of the effects this commercialization has on the movements themselves. Ultimately, this approach, while beneficial in some ways, is not enough if it is the only approach. There are many meaningful and effective ways to effect positive social change, and awareness — especially when diluted by a profit motive — can only go so far without action.

“Nudges” and the Environmental Influences on our Morals

A photo of a telephone booth

Richard Thaler, a behavioral economist, won the Nobel Prize in economics this year. He co-authored the book Nudge in 2008. The theory behind “nudges” (a term he coined) changed how economics views the agents it studies. Instead of picturing humans as rational preference-satisfiers, Thaler suggests that we are susceptible to all sorts of irrational pressures, and rarely do we behave in ways that can be modeled on principles of rationality and individual preference. The “nudge” is one tool he uses to examine how we deviate from the rationalistic model of classical economics.

A “nudge” is a non-rational psychological factor that makes it more likely that someone behaves in one way rather than another. Thaler himself hit upon the idea when he became frustrated at the speed with which he and his roommates were going through their stash of cashews. They took familiar enough means to remedy the problem – they kept the cashews out of the living room so that they’d have to trek to the kitchen to refill, or started to hide them. Dieters everywhere have come up with similar solutions to reduce their intake of a favorite food – putting chips into a portioned bowl instead of eating from the bag, for instance. Steps like these provide you with “nudges” to eat less: they make it more likely that you will exhibit self-control.

We can see intuitively why this might be. By putting food into portions, you must exert effort to refill, and keeping the food at a distance ups the effort level further. Along with this extra effort comes a break in behavior: now you can’t more or less “mindlessly” continue to consume. You face a choice when your bowl is empty: get up and refill? Do you really want or need more? So there is a better chance you will stop. You’re helping yourself out with these nudges.

Along these lines, we are more likely to opt for healthy food if it is at eye level when we are hungry – otherwise we’ll go for our junk food of choice. Grocery stores can thus influence the healthy choices of their customers simply by arranging food to make healthful options easier to reach.

Thaler noted how irrational we are as agents: we make decisions mostly based on convenience and speed, and fall victim to irrational decision structures like the sunk-cost fallacy.

Perhaps the most famously effective nudges have been outside the realm of food: an image of a fly etched into the bottom of a urinal raises the likelihood that users hit their target. There are also policies spreading worldwide that have citizens opt out of organ donation programs if they prefer not to be donors, rather than opt in if they would like to be donors. This change increased participation significantly, despite the fact that, according to classical economics, the method of selection should be a neutral factor: the same number of people who want to be donors should end up as donors either way.

In Chicago in 2016, the ideas24 anti-gun program used the underlying psychology of Thaler’s views to develop an approach to working with incarcerated teenagers. Noting that people behave in a scripted way when under stress, the program encouraged teenagers to identify triggers and rewrite the scripts, with “group lessons around decision making” that take Thaler’s views to heart by focusing on things that would nudge them away from violence: “In one lesson, inmates list the people who may be affected by something they’ve done.” Nudge theories are on the horizon for decreasing gun crime in New York City as well as other U.S. cities. Daniel Webster, director of the Johns Hopkins Center for Gun Policy Research, hopes that nudging people to believe that carrying a gun isn’t normal – “that their behavior is ‘out there’” – will effect change.

The key commitment behind implementing nudges to effect behavioral change is that individuals do not decide what to do by determining what they think is best and behaving accordingly. The nudge is put in place based on an external determination about what is best to do – external, that is, to the agent’s decision-making process at the time. This has led to some controversy in politics: to what extent should the government actively try to shape the behavior of its citizens?

Recall the ill-fated soda tax proposal in New York City. Though cities such as Philadelphia and Berkeley tax sugary drinks, intending to dissuade people from purchasing excessive amounts of these detrimental beverages, when such a bill was proposed in New York City it was met with public outrage.

We can ask questions beyond the legitimacy of such government policies. If our behavior can be so easily influenced by factors outside of our control, or disconnected from our preferences and commitments about what to do, then how will that affect our notions about how responsible we are for our actions?

The impact of nudges on our behavior fits with a family of psychological studies in the second half of the twentieth century that showed the significant impact of apparently irrelevant features of the environment on our moral behavior.

Intuitively, it is important to develop and cultivate a good moral personality or character. We care about having good character traits ourselves and look for good traits in one another. Traits such as honesty, compassion, bravery, humility, etc. are desirable and typically relevant for assessing the behavior we come across in the world. If we take someone to be honest, we think they will be more likely to tell the truth. This is why we value having honest friends – we take them to be more reliable in this regard. If someone we take to be honest lies on one occasion, the fact that we take them to have this trait of honesty typically allows us to chalk the lie up to a rare or one-time event. We can lean on the reliability of the trait and maintain the relationship.

The research on nudges may break the connection between character traits and behavior: when a nudge produces a behavior, something other than the character trait caused it. This is in line with other psychological studies showing trait-irrelevant factors to be better predictors of behavior than the traits our moral practices tend to favor.

Consider a sampling of the well-known studies. The “dime experiment,” conducted in 1972 by Isen and Levin, tested the likelihood that subjects would help someone who had dropped a large stack of papers on the street (the paper-dropper being a confederate of the experimenters). The subjects had just used a payphone, and those who found a dime left in the phone were significantly more likely to help than those who did not. In what we could call the “smell experiments,” people were more fair and generous when in a room that smelled clean. Finally, in the famous Milgram experiments, the proximity of a perceived authority figure significantly affected subjects’ willingness to cause pain to a stranger. These effects did not track the moral makeup of the agents in question, yet they predicted moral behavior.

The effectiveness of nudges, and the success of the manipulation of subjects in the experiments sampled above, suggests a worry for our practices of moral evaluation. If someone acts in a certain way because of a smell, or because they found a dime, or because of a nudge, to what extent was the action their own? Would we blame or praise them to the same extent if they had acted without the external factor?

The more we learn about human agency, the more we must accept that we are influenced by a host of external factors. Thaler’s work to a large degree suggested harnessing this feature for good – using the influence-ability of our agency to direct it towards higher-order goals. This may be the direction our blame and praise practices head towards as well: how well we are managing our agency, rather than individual actions. On such a picture, we are creatures to manage and direct, responsible for driving ourselves well.

Gender Segregation: Empowering or Exclusive?

A black-and-white photo of a movie theatre during a film.

With over $400 million in North American profits, Wonder Woman has set the record for the biggest U.S. opening for a film with a female director. Even before setting this record, the 2017 comic book adaptation was heralded by many as a feminist film, including by actress and former Wonder Woman Lynda Carter. Despite its success, the film was not without criticism, with some women claiming that they did not find it empowering, and even that it ignores non-white women. Perhaps the biggest controversy surrounding the film involved a Texas movie theatre that offered “women-only” screenings back in June. This decision was met with a wave of retaliation, accusations of discrimination, and even a lawsuit. Is it sexist to provide a women-only screening of the film? Is it fair to call the movie theatre’s actions feminist? And most importantly, how does this reaction reflect American society’s tolerance, or lack thereof, for gender segregation?

The Dangers of Ethical Fading in the Workplace

Suppose your boss asks you to fudge certain numbers in a business report during the same week the company is conducting layoffs. Is this an ethical dilemma, a financial dilemma, or, seeing as it will affect your family, a social dilemma? Likely, all three are true, and more layers exist beneath the surface. Are you in debt from taking a luxurious vacation? Do you have children in college? Are you hoping to get a promotion soon? Research shows that navigating these many layers makes it increasingly difficult to see the ethical dilemma. This is “ethical fading,” the process by which individuals become unable to see the ethical dimensions of a situation due to overriding factors.

Ann Tenbrunsel first described ethical fading in 2004 as “the process by which the moral colors of an ethical decision fade into bleached hues that are void of moral implications.” Because moral decisions are made in the same parts of the brain that process emotions, they are made almost automatically and instinctively, and are therefore prone to self-deception. Self-deception appears in the workplace when employees see an ethical dilemma first as a financial or personal dilemma. Framing a dilemma, such as polishing numbers in a report, as a choice that could affect personal financial stability allows an individual to make unethical decisions while still thinking of themselves as an ethical person. In fact, ethical fading eliminates the awareness that one is making an unethical decision in the first place.

This phenomenon can manifest in a variety of ways, making ethical fading a difficult problem to tackle. Sometimes, an individual replaces the idea of an ethical dilemma with a financial or personal dilemma. Sometimes an individual is under so much pressure that an ethical dilemma passes through them unseen. In other cases, individuals are exposed to ethical dilemmas so often that they become jaded.

Tenbrunsel argues that corporate ethics training is useless if ethical fading is occurring: no amount of training can teach an individual to navigate an ethical dilemma they don’t see in the first place. One recent case study of ethical fading involves college administration. In 2009, the University of Illinois was found to have a hidden admissions process that pushed through applicants with significant ties to politicians, donors, and university officials. Because the ethical dilemma was lost in the culture and organizational structure of the university’s administration, this case has been deemed an example of ethical fading. Michael N. Bastedo, director of the University of Michigan’s Center for the Study of Higher and Postsecondary Education, stated that a growing number of college administrations are “starting to see ethical problems as system problems.”

As in other examples of ethical fading, budget cuts were pressuring the administration to reach out to donors, and the ethical problem of giving preferential treatment to certain applicants was forgotten. Following Tenbrunsel’s argument, this problem would not be remedied by ethics training unless the hidden application system were fixed as well. Since those inside the administration didn’t see the hidden application system as an ethical problem in the first place, ethics training wouldn’t prompt employees to come forward and fix it.

Something similar has occurred in the military. In 2015, a study by Army War College professors Leonard Wong and Stephen J. Gerras found that lying is rampant in the military, likely caused by the immense physical and emotional strain that soldiers experience. Ethical fading in this case means that Army officers have become “ethically numb” to the consequences of lying. When the professors pressed their participants on how they manage to juggle their many duties, the officers reported classic sugar-coating phrases often heard in the business sector: in order to satisfy their many duties and requirements, Army officers routinely resort to deception in the form of “hand-waving, fudging, massaging, and checking the box.” This case reveals that financial strain is not the only cause of ethical fading – physical and emotional strain contribute as well – and that sectors besides business are prone to it.

Tenbrunsel’s account of self-deception poses yet another obstacle for business ethics. If unethical behavior is caused not by a lack of information and training but by the human tendency toward self-deception, no amount of ethics seminars will discourage it. As a start, ethics training should include information on how to spot ethical fading, how to overcome prejudices, and how to handle emotional strain in the workplace. At the same time, the concept of ethical fading helps capture the fact that unethical behavior is not limited to unethical people. Tenbrunsel points out that everyone practices self-deception at some point, and this recognition may be the start of properly addressing unethical behavior in the workplace. Treating unethical behavior as a human tendency will hopefully begin to fill the gaps in current ethics training programs. If not, ethical dilemmas will continue to be sugar-coated and slip through the cracks.
