
Is College Worth It?

photograph of college commencement

It’s not a new question, but it’s been receiving renewed attention after a recent analysis circulated online. According to a study from the Foundation for Research on Equal Opportunity (FREOPP), a number of popular bachelor’s and master’s degrees offered at schools in the U.S. have a low or negative return on investment (ROI) and so “leave students worse off.”

The calculation is a simple one: a college degree is an investment, as it costs money and time. People with college degrees have, in the past, typically made that money back long-term, since careers that require college degrees tended to pay higher salaries than those that didn’t. But with rising costs of college tuition and many well-paying careers no longer requiring college degrees, these days one may be better off, at least in terms of long-term earnings, to skip college altogether, rather than go to college to study certain subjects.
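The logic of that calculation can be sketched in a few lines of code. This is a toy illustration with invented figures, not FREOPP’s actual methodology or data: it simply compares the lifetime earnings premium of a degree against its cost, including the earnings forgone while studying.

```python
def degree_roi(tuition_cost, years_in_school, degree_salary,
               no_degree_salary, working_years=40):
    """Net lifetime earnings gain from a degree vs. skipping college.

    All inputs are hypothetical illustration values, not FREOPP data.
    """
    # Total investment: direct tuition cost plus wages forgone while in school
    investment = tuition_cost + no_degree_salary * years_in_school
    # Cumulative salary premium the degree yields over a working career
    earnings_gain = (degree_salary - no_degree_salary) * working_years
    return earnings_gain - investment

# A degree with a $10k annual salary premium, $120k tuition, 4 years of study
print(degree_roi(120_000, 4, 60_000, 50_000))  # positive: the degree pays off
# The same degree with no salary premium leaves the student worse off
print(degree_roi(120_000, 4, 50_000, 50_000))  # negative ROI
```

The point of the sketch is that ROI flips from positive to negative purely on the relationship between the salary premium and total cost, which is why rising tuition and well-paying degree-free careers can push a given degree’s ROI below zero.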

Some of the degrees identified in the study were perhaps surprising – many MBA programs, for example, provide an overall poor ROI according to the analysis. Others were less surprising, as they fit into the stereotype of degrees that aren’t “worth it”: degrees in fine arts, humanities, and education, for example, were identified as having low or negative ROIs.

Although it’s been reported on by numerous media outlets, FREOPP’s study has not gone unchallenged. However, even if we take the results at face value, what should we do with them? The authors argue that prospective students have a right to know the ROI of a program they’re interested in pursuing, and that information about a program’s ROI should even inform policy in the form of scholarships and bursaries.

I think we should do something different with the study: we should ignore it. Far from being useful information, focusing too much on ROI can have negative consequences.

There is an obvious concern with talking about which degrees are worth pursuing purely in financial terms: there are clearly other, non-financial benefits that come along with earning a college degree. This is perhaps especially the case for careers that may have comparatively lower earning potential but are seen as more rewarding by students who have certain interests.

This does not go unnoticed by the authors of the FREOPP report, who note that the “joy factor” needs to be considered when choosing a degree to pursue, and that degrees with low ROIs can nevertheless produce significant social benefits. At the same time, the report claims that it would be “irresponsible for defenders of negative-ROI programs to use ‘social benefits’ as a catchall excuse for poor performance,” while also claiming that “programs which generate large social benefits also come with significant private rewards.” The argument, then, is that if pursuing a low-ROI degree really does produce significant social benefits, the investment will pay off, since producing social benefits in turn produces (presumably monetary) rewards.

Whether this is true depends on how we define a “social benefit.” There are clearly cases where social benefits are rewarded – the report’s example is that of someone trained as a biologist (another field identified as having an overall low ROI) contributing to the development of a life-saving vaccine. Conspicuously missing from this discussion, however, are less tangible social benefits that are more likely to be produced in the more stereotypically “underperforming” degrees, such as those that come about from contributions to the arts. While some of these contributions may also be accompanied by “significant private rewards” this is certainly not always the case.

Rather than acting as an “excuse,” then, a more inclusive and less obtuse interpretation of “social benefits” suggests that those benefits may well compensate on their own for the lower ROI of degrees with so-called “poor performance.” Indeed, a fundamental problem with assigning any type of value to something like a college degree is that one’s preconceptions about what should constitute that value will taint any such calculation.

Solely calculating benefits in terms of the long-term financial wellbeing of individuals also ignores the value that lies in a society that encourages a variety of pursuits. Will you make more money learning how to program computers than learning how to paint? Probably. But is a society consisting exclusively of computer programmers one we should pursue? Probably not.

Information about the ROI of college degrees is also not useful for policy recommendations; indeed, it will likely cause more harm than good.

The FREOPP report notes that “[a]round 29 percent of federal Pell Grant and student loan dollars over the last five years were used at programs that leave students with a negative ROI,” and that such results “point to a role for federal policymakers in improving the ROI of higher education.” The thought here is that other stakeholders – the government, perhaps, or taxpayers, depending on the type of subsidy provided – ought to know about the ROI of programs they are helping students attend so that they can determine if their investment is really worth it.

But the implications of this kind of recommendation are potentially chilling. It is not difficult to envision a policy where, for example, Pell Grants are only provided to students who enroll in a degree that has been declared “worth it.” Since such grants are given to low-income students, it would essentially gatekeep entire swaths of academic pursuit to only allow the participation of the already well-off.

Setting aside recommendations at the policy level, isn’t information about ROI still useful to individuals trying to decide what they want to study in school? The report cites another report claiming that the primary motivation of most college students is to get a good job that will pay them well. While there are certainly conversations to be had about what college really is “for,” and whether the primary concern of students pursuing a higher education should be training for the workforce, it is undoubtedly the case that students are concerned with this. Surely, then, knowing the ROI of a college degree will help them make that decision.

Will it, though? From the FREOPP report, engineering and computer science are listed as the “best financial bets,” while the fine arts are the worst. Is this surprising information? Today’s high school students are likely all too familiar with what is sometimes seen as a myopic focus on STEM careers and the monetary rewards associated with in-demand careers in tech. It is unlikely that many are shocked to learn that artists make less money.

There are, however, two potential takeaways from the information in the FREOPP report. One, which the report itself gestures at, is that tuition fees for some programs and schools are too high. Since the cost of tuition is potentially a significant factor in determining ROI, lower tuition fees would result in higher ROIs.

A second takeaway is that if ROI is a significant concern, then this is simply an indication that workers need to be paid more. It has been well-documented that, despite increasing productivity over decades, wages have not kept up. Combined with increased tuition fees, this means that regardless of what one chooses to study in college, one’s ROI will inevitably continue to decline.


The SAT and the Limitations of Discrimination

In 2020, at the height of America’s pandemic-fueled racial reckoning, numerous colleges and universities dropped standardized tests as an admission requirement. No mere PR move, such action was supported by influential anti-racist activists such as Ibram X. Kendi, who declared, “Standardized tests have become the most effective weapon ever devised to objectively degrade Black and Brown minds and legally exclude their bodies from prestigious schools.” Racial gaps in SAT scores persist to the present. Yet, in the past several weeks multiple prominent universities, including Brown, Dartmouth, Yale, and UT Austin, have reinstated standardized testing as an admission requirement. Their reasoning — combating inequality.

The schools argue that careful use of standardized testing, in concert with other factors, can help to identify promising applicants who would otherwise be overlooked. Recent research has also affirmed that standardized test scores are predictive of performance, especially at highly selective universities. Moreover, standardized tests seem to be less biased than other more impressionistic aspects of the college admissions process like letters of recommendation and essays.

But all this does not necessarily vindicate the SAT. It can still be biased, even if less biased. And one can still find standardized testing too narrow an evaluative tool, even if acknowledging that more holistic methods or lottery-based approaches to admission have their own problems. However, the saga also reveals the very different ways we choose to measure and explain “inequality” in the first place.

One approach is to focus on discrimination. If one is committed to the belief that racial disparities are generally caused by discrimination, then the racial gap in test scores becomes evidence of that discrimination, and the tests emerge as the problem. Standardized testing reflects societal biases.

But racial inequality in America isn’t merely a matter of differential treatment; it is also a product of differential resources. Home ownership rates, family income, wealth, school funding, and exposure to environmental toxins all vary by race. If we believe these structural features impact standardized testing (and we should), our perception shifts from focusing exclusively on discrimination to a wider view of how resource inequality also shapes the picture. What follows from this shift in focus?

First, it requires us to admit that the racial and socioeconomic achievement gap as measured by standardized tests at least partly reflects a real gap in the abilities those tests measure. This certainly does not imply that these gaps are innate, nor that discrimination is not real, nor that standardized tests are the best measure of societal value. The concern is that by the time someone is taking the SAT at 16, harms from poverty, deprivation, and inequality have already accrued. Some of these harms, such as a lack of access to nutritional food or a lack of knowledge about test taking, can be addressed fairly easily. Other harms, such as exposure to allergens or environmental toxins like lead from substandard housing, may cause lifelong negative effects.

It might be objected that while the gap in abilities measured by standardized tests is real, the abilities themselves are rather artificial — that these tests measure test taking and nothing more. Historically, the SAT stood for Scholastic Aptitude Test, with the implication that it measured something like innate potential. In the 90s, it was rebranded to replace Aptitude with Assessment (it is now simply the SAT). The question of what precisely standardized tests are measuring is complicated and controversial. However, the fear from a resource inequality perspective is that if differences are truly deep and structural with far-reaching implications, then we should expect to find these differences emerge across many kinds of evaluation. This is a statistical claim about the overall effect of inequality. It does not imply that childhood environment is destiny or that there cannot also be benefits, to mentality, insight, or what have you, from a less privileged upbringing.

Second, resource inequality highlights a tension between two different missions of education. On the one hand, higher education, especially elite education, is a means of meritocratic selection, picking out those currently succeeding in K-12 American educational institutions and providing them additional opportunities and resources. On the other hand, education is a means of social uplift, by which people can allegedly transcend difficult circumstances and build a better life for themselves. But what if meritocratic means of selection themselves reflect and reinforce difficult circumstances? In fact, if resource inequality is causing a real effect, then we should expect a standardized test – even one with no discrimination whatsoever – to perfectly recapitulate an unequal society. If education is to be ameliorative of inequality, then institutions of higher education must accept different ability (at least at the time of evaluation) even on a fair test. Although, as previously discussed in The Prindle Post, this does not mean that these students are unqualified.

Finally, moving beyond discrimination to unequal resources challenges our understanding of societal change. If we believe the racial achievement gap to reflect discriminatory testing practices, then the natural solution is to change (or eliminate) the test. Better yet is to eliminate the prejudices behind the discrimination through educating ourselves and each other. But what if the racial achievement gap reflects instead the distribution of resources across society? What if people’s starting place is the most significant factor in determining SAT performance? The solution becomes far more ponderous. It may be rebutted that resource inequalities are still ultimately the result of discrimination, merely past discrimination, but this misses the point. For regardless of how we characterize the ultimate historical causes, correcting present discrimination will not automatically address the enduring impacts of the past. Of course, discrimination and material resources interact in complex ways: a lack of resources can lead to differential treatment, and differential treatment to a lack of resources. A natural hypothesis is that challenges for minorities which are redistributed by birth every generation (e.g., women and LGBTQ+ individuals) – and therefore don’t accumulate material disadvantage the way racial minorities can – may be better addressed by tackling discrimination and ideology, whereas resource inequality may require more redistributive solutions. As for the SAT, even if judicious use is an improvement to college admissions without standardized testing, we should not expect it to overcome the limitations of an unequal society.

The Ethics of Conscription

photograph of military boots and fatigues standing in line

As the conflict wrought by the occupation of Ukraine enters its third year, the nation struggles to find warm bodies for the front, and its leaders consider an expansion of the draft. Russia, too, suffers war fatigue as its conscription-fueled invasion trudges on. Surrounding nations, eyeing the conflict and their own military limitations, mull expanding mandatory military service. Seeking to deter Russian hostilities, Latvia reintroduced conscription as of January 1st. Serbia, historically close to Russia but studiously non-committal on the issue of Ukraine, reopened discussions of conscription this January as a way to ensure military preparedness. Even Germany — long gun-shy about all things military — has been reconsidering mandatory service, which formally ended in 2011.

Russia and Ukraine are focused on the draft to sustain a war effort. Latvia, Serbia, and Germany are considering a general requirement of military service, in peacetime as well as during war. One situation is certainly more urgent than the other, but both assert the government’s right to send its citizens (without their consent) to fight and die. How might we justify such incredible power?

The most straightforward justification is that it is simply part of the deal. The “state,” the political institution which reigns sovereign over its people and territories, provides certain privileges and protections. In return, it can impose obligations on its people: taxation, jury duty, mandatory military service, what have you. Under this analysis, the legitimacy of conscription stems from the general political legitimacy of the state and its coercive powers.

A potent concern is consent. How can we justify the state’s power of conscription if people did not explicitly consent to it? This concern echoes across all the state’s coercive powers, but it is especially acute for military service, where so much can be on the line. The most historically influential response by philosophers is essentially hypothetical consent. The idea is that, understanding the situation, a reasonable person would agree to be governed by the state and hence consents in theory. This is hypothetical consent to be governed, not necessarily to conscription specifically. But if we agree that a reasonable person would consent to be governed, consent to abide by decisions made through the political process, and consent to the protection provided by the state, then conscription is not far away. However, hypothetical consent clearly has its limitations: imagine the absurdity of hypothetical consent as a defense in cases involving sexual harassment. Moreover, consent typically implies respect for the individuality of personal decisions (regardless of whether others judge them unreasonable).

One might also, while not objecting to the coercive powers of the state generally, take issue with conscription specifically. If government is understood as existing partly to protect certain rights, life among them, then conscription would seem antithetical to the very nature of government. One may respond, however, that the government needs to infringe the rights of some to protect the rights of many. Governments can also provide more flexibility: many European countries with mandatory service (such as Austria) offer a choice between military and civil service.

If conscription can be justified as something citizens owe to the state, it follows that the state must hold up its end of the bargain. A state that serves its people is best positioned to ask for service in return. A corrupt or tyrannical state, an unjust war — all these might undermine the legitimacy of conscription. Perhaps unsurprisingly, countries have often adopted a carrot-and-stick approach to compulsory military service. Revolutionary France, the birthplace of modern conscription, ensured that military service provided a path of advancement for those serving. In the United States, the GI Bill, initiated at the end of World War II, provides extensive educational support for veterans.

Along these lines we may also worry about a mismatch between who benefits from the state and who pays the price of conscription. During the Vietnam war, the poor and minorities were far less able to avoid the draft than those with more resources. This unfairness is immortalized in the art and music of the time, such as Creedence Clearwater Revival’s “Fortunate Son” or Freda Payne’s “Bring the Boys Home,” which was written in response to the disproportionate deaths of Black Americans.

Alternatively, we may justify conscription (and indeed, the state generally) on the basis of utility — that it provides the most good to the most people. Clearly, mandatory military conscription, especially in times of war, comes with risks. But it can also come with benefits, e.g., enabling a nation to fight off an invader that could otherwise lead to far larger casualties. Arguing for conscription on the basis of benefits, or even necessity, is clearest in a moment of humanitarian crisis. More generally, the challenge is not whether conscription can come with benefits, but whether it is legitimately the best option for the people.

Can changes be made to increase voluntary recruitment? Can technology be used instead of soldiers? Can new alliances be made? In short, is mandatory military service truly the least injurious option? Using benefits to the people as our metric also places war itself in the crosshairs. Some wars, such as those repelling invasion, are of uncontroversial public benefit. Other wars — Vietnam, again, is a notable example — seem to be in service of the government but not necessarily its people.

Perhaps we shouldn’t expect conscription to have a clear moral justification at all. The historical roots of conscription lie not in ethical analysis, but in military expediency. By the early 1800s, when European governments had achieved a level of control and centralization sufficient to carry out conscription, it simply became a fact of war. This is not to say ethical reflection on the matter is not valuable, nor that it can never be justified, nor that there are not better and worse ways to implement conscription. But is a general moral justification what we should expect? Or is it more likely that conscription is often just a government tactic in need of a moral fig leaf?

The Case for Allowing Advocacy of Violence on Campus

photograph of University of Pennsylvania courtyard

Last week M. Elizabeth Magill, the University of Pennsylvania’s president, was forced to resign after she gave testimony before Congress concerning her university’s response to pro-Palestinian demonstrations on its campus. The controversy over her testimony has focused upon the following exchange with Republican Representative Elise Stefanik:

Stefanik: “Does calling for the genocide of Jews violate Penn’s rules or code of conduct, yes or no?”

Magill: “If the speech turns into conduct, it can be harassment.”

Stefanik: “Calling for the genocide of Jews, does that constitute bullying or harassment?”

Magill: “If it is directed and severe, pervasive, it is harassment.”

Stefanik: “So the answer is yes.”

Magill: “It is a context-dependent decision, congresswoman.”

Stefanik: “That’s your testimony today? Calling for the genocide of Jews is depending upon the context?”

After news broke that Magill had resigned, Stefanik, referring to Magill’s co-testifiers from Harvard and MIT, said in a statement: “One down. Two to go.”

As others have pointed out, what is astonishing about this episode is that Magill’s response, which (bizarrely) even some prominent law professors have criticized, was a straightforward recital of First Amendment law as applied to campus speech. The First Amendment protects from censorship advocacy of violence that falls short of verbal harassment or incitement — the latter defined as conduct intended and objectively likely to cause imminent violence. In line with this principle, Magill’s sensible position is that there are likely some situations where even advocacy of genocide does not rise to the level of harassment or incitement. But critics of Magill’s position would have us believe that the scope of permissible speech — that is, speech not subject to institutional sanction — on our elite campuses should not be as broad as it is in any public park, any periodical, or any public library in America. In this column, I will try to provide a rationale for Magill’s position.

The first thing to observe is that free speech is not only a legal, but also an ethical issue that extends far beyond the purview of First Amendment law. That’s because free speech concerns arise in a variety of contexts, from the family to the workplace — indeed, wherever one person or group has the power to sanction others for their speech. It is not my position that in all of these contexts, the scope of permissible speech should be the same. The value of free speech must be weighed against other values, and in different contexts, the results of that weighing exercise may vary. My claim is that in academic institutions, the value of free speech is unusually weighty, and this justifies maintaining a very strong presumption, in this particular context, in favor of not sanctioning speech. So, while the First Amendment is only directly implicated where the government seeks to use the coercive power of the state to censor or otherwise restrict speech, the First Amendment may serve as a useful model for how private universities like the University of Pennsylvania should handle speech.

Academic institutions are where knowledge is generated and transmitted. To do this well requires an open exchange of ideas in which participants can rigorously test arguments and evidence. Any institutional limits upon this exchange inevitably hinder this testing process because they entail that certain ideas are simply beyond the exchange’s scope. While some limits are nevertheless justifiable for the sake of encouraging maximum participation and preventing violence or other serious harm to persons, academic institutions should not draw the line at mere advocacy of violence or crime for a couple of reasons.

First, it would deprive faculty and students of the opportunity to openly and freely examine ideas that might, like it or not, have great currency in the wider society. This is particularly lamentable given that a college campus is a relatively safe and civil environment, one much more conducive to productive conversation about difficult topics than others in which students will find themselves after graduation. It is also, at least ideally, an environment relatively free from the kind of political pressures that could make open and free conversation difficult for faculty. For this reason, if a point of view that advocates violence or crime is without merit, the best arguments against it may be generated at a university. If it has merit — I do not presume a priori that any position advocating any kind of violence or crime is without merit — it is likewise at a university that the best arguments for the position may be uncovered.

In other words, it makes no difference that pro-violence ideas may be intellectually indefensible, or that some might wish them consigned to the dustbin of history. Academic institutions perform a public service simply by publicly demonstrating that fact. Moreover, Hannah Arendt said that in every generation, civilization is invaded by barbarians — we call them children. Her point was that no generation springs into existence armed with the truths established by its predecessors; each must relearn the hard-won lessons of the past, reflecting upon and deciding for itself what is good and bad, true and untrue. To shut down discussion of ideas we have deemed to be without merit is to tell the next generation of students that we have made up their minds for them. There could be nothing less consistent with the spirit of liberal education, with what Immanuel Kant called Enlightenment, than that.

It may be objected that advocacy of violence per se, in any context, frightens or even traumatizes would-be targets of violence, whether student, faculty, or staff, and this justifies censoring it. But my position is not that advocacy of violence is permissible at any time and place, or in any manner. There are better and worse ways for an institution to handle speech that is capable of harm. My point is simply that the solution cannot be to simply restrict any discussion of ideas supportive of violence, no matter how it is conducted. I have previously made the point that we — that is, free speech proponents, including the liberal Supreme Court of the 1960s that was responsible for so many seminal free speech decisions — do not support free speech because we think speech is harmless. By arguing for the central importance of free speech as a value, we implicitly recognize speech’s power to do evil as well as good. Our position must be that we support free speech despite the harm speech can cause, although we can and should take steps to minimize that harm.

This discussion has, so far, been somewhat abstract. Let me close by considering a concrete hypothetical that illustrates the gulf between my view and Stefanik’s. Suppose that a substantial portion of Americans come to support the involuntary, physical removal of Jews from Palestine, effectively an “ethnic cleansing.” Pundits and politicians start advocating for this position openly. On my view, one role of universities in that scenario would be to serve as a forum for discussion of this idea. Proponents of that view should be invited on campus and debated. Students and faculty, including those sympathetic to the idea, should discuss it at length. The hope would be that by exposing it to the kind of scrutiny that universities can uniquely provide, the idea would be discredited all the more swiftly and comprehensively. There is no guarantee that this would happen, of course. On the other hand, those who hold to the view that advocacy of violence has no place on campuses must insist that, in this world, universities and colleges should shun proponents of the view, insulating their students from exposure to the treacherous currents of thought coursing through the wider society. This, I submit, would be a mistake.

Should the U.S. Continue Aid to Ukraine?

photograph of Ukrainian flag on military uniform


On Wednesday, September 7th, U.S. Secretary of State Antony Blinken announced a new aid package to Ukraine worth over $1 billion. The announcement came during what may be a critical juncture for the war. Ukraine’s counter-offensive has been slower than initially hoped, leading U.S. officials to question Ukrainian military strategy. However, progress has been made in recent weeks – the Ukrainian military has broken through the first line of Russian defenses in the south and liberated several settlements. Further, there is some reason to believe future gains may come at an accelerated rate, as intelligence officials believe the Russian military concentrated its defenses at the first line.

Regardless, continued U.S. aid to Ukraine is no longer an ironclad guarantee. Although a majority of U.S. citizens still approve of aid to Ukraine, poll numbers have shown changing attitudes in recent months. About half of Republican respondents feel that the U.S. is doing too much to help Ukraine, and that they would prefer ending the war as soon as possible, even if Ukraine concedes lost territory to Russia. Further, despite a majority of Democrats and independents favoring aid to Ukraine even in a prolonged conflict, support for that position has declined somewhat. During the Republican presidential debate in August, two candidates, Vivek Ramaswamy and Ron DeSantis, stated they would end U.S. aid to Ukraine (in DeSantis’s case, this was qualified with the statement that he would stop aid unless European nations “pull their weight”). Donald Trump has suggested that all aid to Ukraine should pause until U.S. agencies turn over alleged evidence that incriminates President Joseph Biden.

Given the amount of aid the U.S. has sent to Ukraine – about $76 billion at the time of this article’s writing (although Congress has approved up to $113 billion) – it is worth pausing to weigh the moral arguments for and against continuing to provide aid.

Before beginning that discussion, I want to note two things.

First, while aid to Ukraine is normally reported in dollar amounts, this is misleading. The U.S. has not sent $76 billion in cash to Kyiv. While some money has gone to financing, significant portions of the aid take the form of supplies from U.S. stockpiles, training for Ukrainian soldiers, and intelligence collaboration. The value of the aid is estimated at $76 billion, but this does not mean the U.S. has spent $76 billion. Less than half of the aid has been cash, and some portion of this figure includes loans.

Second, there are arguments about aid this article will not consider. Namely, these concern the strategic or political value of aiding Ukraine. One might argue that a repulsion of the invasion would humiliate and weaken Putin’s regime, thereby advancing U.S. interests. Alternatively, one could argue that if the war effort fails while the U.S. sends aid, it could damage the U.S.’s standing internationally; there would be doubts that cooperation with the U.S. is sufficient to ensure security. While these considerations matter and should enter our decision making, they are too complex to discuss in sufficient detail here.

What arguments might someone make against continuing aid to Ukraine? The most common arguments in public discourse stem from what the U.S. government ought to prioritize. For instance, during the Republican primary debate, Ramaswamy commented that the U.S. would be better off sending troops to the border with Mexico. Trump has similarly questioned how the U.S. can send aid to Ukraine but cannot prevent school shootings.

The idea here appears to be something like this. Governments have obligations which should shape their decisions. Specifically, governments have greater duties to resolve domestic issues and help their citizens before considering foreign affairs. Thus, the claim here seems to be that the U.S. should simply spend the resources it is currently allocating towards Ukraine in ways that more tangibly benefit citizens of the U.S.

There are a few reasons to be skeptical of this argument. First, without a specific policy alternative, it is not clear what those who make this argument are suggesting. For any particular program, it is always theoretically possible that a government could do something more efficient or more beneficial for its citizens. But this claim remains merely theoretical without a particular proposal.

Second, this argument may pose what philosophers call a false dichotomy. This fallacy occurs when an argument limits the number of options available, so that one choice seems less desirable. False dichotomies leave listeners with an “either this or that” choice when the options are not mutually exclusive. Consider Ramaswamy’s proposal in particular. It is unclear why the U.S. could not both provide military aid to Ukraine and deploy soldiers to protect its borders.

Third, not all aid sent to Ukraine could be redirected to benefit U.S. citizens. For instance, it is not clear how anti-tank missiles, mine-clearing equipment, or artillery could be used to solve domestic issues in the U.S.

More compelling, however, are the arguments that may appeal to the long-term consequences of prolonged war in Ukraine. Some may point to more speculative consequences. Perhaps a long war in Ukraine will result in a more hostile relationship between Western nations and Russia. This is especially true given recent discussion of Ukraine joining NATO and Russian officials’ attitudes towards the alliance. Further, a prolonged conflict may create more tense relationships between the U.S. and China, and could provide a diplomatic advantage to the latter. So, some might argue that it could be in the interests of long-term peace to bring an end to the war in Ukraine; the more strained these relations become, the less probable cooperation between major powers becomes.

Less speculative is the simple fact that, the longer the war drags on, the more people will die. The more battles fought, the more casualties. Additionally, given that the Ukrainian military is now using munitions like cluster bombs and the Russian military has blanketed portions of Ukraine with land mines, it is certain that the increased casualties will include civilians. Given that there is moral reason to avoid deaths, we may have moral reason to bring an end to the war in Ukraine to reduce the number of lives lost – the sooner it ends, by whatever means, the fewer people will die.

However, proponents of aid to Ukraine also appeal to the long-term consequences of current events. In particular, some argue that failing to support Ukraine’s war effort will enable future aggression, specifically aggression by Moscow. The idea is something like this. The costlier the war is for Russia, the less likely its leaders will be to pursue war in the future. Further, the more support that nations like the U.S. are willing to provide to the victims of aggression, the less likely, presumably, future aggressive acts become. Although a prolonged war in Ukraine will lead to a greater loss of life now, one might argue that in the end it will prevent even larger losses in the future by changing the cost-benefit analysis of future would-be aggressors.

Perhaps the most compelling argument for continuing aid to Ukraine comes from just war theory – the application of moral theory to warfare. Just war theorists often distinguish between jus ad bellum – the justification of going to war – and jus in bello – the morality of the conduct of combatants once war has broken out. Typically, just war theorists agree that wars of aggression are not justified unless they are to prevent a future, more severe act of aggression. Defensive warfare, in particular defensive warfare against an unjust aggressor, is justified.

To put the matter simply, Ukraine has been unjustly invaded by the Russian military. As a result, the efforts to defend their nation and retake captured territory are morally justified. So long as we have moral reason to aid those who are responding to unjust aggression, it seems we have moral reason to aid Ukraine. For many, this is enough to justify the expenditures required to continue military aid.

Of course, one might question how far this obligation gets us. It is not clear how much we are required to aid others engaged in a just pursuit. Resources are finite and we cannot contribute to every cause. This point will become more pressing as the monetary figure associated with aid to Ukraine rises, and as our public discourse questions the other potential efforts towards which that aid could have been directed.

As noted earlier, however, there are some reasons to question arguments of this sort when they are light on specifics. It is one thing to reassess the situation as circumstances have changed and find that your moral obligations now seem to pull you in a different direction. It is another entirely to abandon a democratic nation to conquest simply over sophistry. The severe consequences of our choices on this matter should prompt us to think carefully before committing ourselves to a particular plan of action.

The Case For and Against Nuclear Disarmament

photograph of bomb shelter sign in Ukraine

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


When the Cold War ended thirty years ago, many hoped that the chances of nuclear war would decline, and even that nuclear weapons might be on the road to ultimate extinction. For a time, it seemed those hopes might be fulfilled. The Bulletin of the Atomic Scientists’ famous Doomsday Clock stood at six minutes to midnight – that is, global catastrophe – in 1988. The Clock was rolled back to fourteen minutes to midnight in 1995 as Russia and the United States agreed to unprecedented reductions in their strategic nuclear arsenals.

Sadly, though, these optimistic predictions have faded in recent years. Russian nuclear saber-rattling over Ukraine, the impending expiration of the one remaining nuclear weapons treaty between Russia and the United States, signs of nuclear proliferation in the Middle East, and the unprecedented challenge of managing a three-sided geopolitical competition between nuclear-armed Russia, China, and the United States have brought concerns about nuclear war back to the forefront of policymakers’ agendas. Some prominent American commentators are now calling for a big build-up of our nuclear arsenal. Today, the Clock stands at ninety seconds to midnight, closer to catastrophe than it has ever been – largely, the Bulletin claims, because of the mounting dangers of the war in Ukraine. Christopher Nolan’s film Oppenheimer has even reignited debate about the United States’ use of nuclear weapons against Japan during World War II – so far, the only instance of their use in anger. Thus, now seems like a propitious moment to go back to first principles: that is, to reconsider what ultimately should be done about nuclear weapons.

At the risk of oversimplifying, the basic question is whether or not to adopt disarmament as the ultimate goal. “Disarmament” means both dismantling all nuclear warheads and delivery systems, as well as eliminating stockpiles of weapons-grade fissile materials that could be used to quickly assemble a weapon. The arguments against disarmament come in two flavors: first, that nuclear weapons are effective deterrents to nuclear, chemical, biological, and conventional forms of aggression; and second, that nuclear disarmament is an unrealistic goal.

The historical case for the value of nuclear weapons as deterrents to conventional military aggression is weak. In 1950, the United States enjoyed a near-monopoly on nuclear weapons, the Soviet Union having tested its first atomic bomb only a year earlier. This did not deter North Korea from invading South Korea with the Soviet Union’s support, and it did not deter China from entering the war when U.S., South Korean, and allied forces advanced almost to the border between North Korea and China in the fall of that year. Nor did the United States’ nuclear arsenal deter North Vietnam from invading and ultimately conquering South Vietnam, a country to which the U.S. had made clear security guarantees, in the early 1970s.

The reason that the U.S.’s nuclear “umbrella” was unable to dissuade Soviet-supported regimes from engaging in aggressive conventional military action against U.S. allies during the Cold War is not difficult to understand. Ultimately, the United States’ interest in avoiding a nuclear exchange with the Soviet Union, which it might have precipitated by using nuclear weapons against one of the Soviet Union’s allies, trumped its interest in protecting its own allies from conventional aggression. Knowing this, North Korea, North Vietnam and other Soviet-backed states were confident that the United States would not actually use its nuclear arsenal against them. Today, Russia or China or some other revisionist power may reasonably believe that the United States would, for precisely the same reason, never actually use its nuclear weapons against them if they threatened the sovereignty of countries like Taiwan, South Korea, or Poland with conventional military force – even one which enjoys a treaty-based U.S. security guarantee.

Nuclear weapons have historically also failed to deter states from directly aggressing against states that possessed their own nuclear arsenals. It is certainly true that the Cold War never went hot in a conventional sense, and that might be chalked up to the superpowers’ nuclear arsenals. Still, the existence of a small Israeli nuclear arsenal was widely known from the late 1960s, though not officially acknowledged; this did not deter a coalition of Arab states from invading Israel during the 1973 Yom Kippur War. A few years before, the Soviet Union and China, which both possessed publicly-acknowledged nuclear arsenals, engaged in a series of intense military clashes on their border. And in 1999, Pakistani forces occupied strategic positions on Indian territory in the Kashmir region, leading to a conventional military conflict between the two nuclear-armed states. Again, aggressor states are not necessarily deterred by their victims’ nuclear arsenals because using those arsenals carries steep costs, both in terms of possible nuclear counterstrikes by the aggressor or its allies and in terms of international reputation. This makes it unlikely that nuclear-armed states will use their arsenals against any but the most grave existential threats, whatever their official policy.

The case for nuclear weapons as deterrents against the use of other weapons of mass destruction – chemical, biological, or nuclear weapons – seems to rest on firmer historical ground. In the nearly eighty-year history of nuclear weapons, there has never been a single nuclear exchange, nor a chemical or biological attack by one state against a nuclear-armed state. The principle of mutually assured destruction, or MAD, as it is popularly known, seems to have played a role here. According to this theory, two nuclear-armed states are unlikely to attack each other with nuclear weapons because there is no entirely adequate defense against a nuclear counterstrike. Because a state contemplating a first strike could expect to suffer cataclysmic losses from such a counterstrike, it will be effectively deterred.

In the 1950s, prior to the advent of ballistic missiles, the greatest nuclear threat to both superpowers was their adversary’s thousands-strong fleet of strategic bombers. Although the country that struck first could expect to destroy some of these bombers – both the United States and the Soviet Union built thousands of fighter interceptors to shoot them down – it was well-understood that at least some would manage to hit their targets. And even a handful of thermonuclear-armed bombers could cause millions of casualties. The lack of an adequate defense to nuclear counterstrike became only more apparent once the superpowers diversified their weapons delivery systems, developing the so-called “nuclear triad” of submarines, bombers, and missiles. Even today, anti-missile defense systems are notoriously unreliable, and submarines difficult to detect and destroy.

On the other hand, there is ample evidence that the United States and Soviet Union came perilously close to nuclear war at various points, notwithstanding the elegant logic of MAD. President John F. Kennedy estimated that the chances of a nuclear exchange during the Cuban Missile Crisis were one in three; his national security advisor, McGeorge Bundy, put the odds at one in one hundred. Either way, these are surely terrifying figures given the potentially catastrophic, even civilization-ending impact of full-blown nuclear war not just on those countries, but the entire planet.

In another famous incident in 1983, a lieutenant colonel in the Soviet Air Force named Stanislav Petrov likely single-handedly averted nuclear war when his nuclear early warning system mistakenly reported an intercontinental ballistic missile launch from the United States. Petrov chose to wait for corroborating evidence before relaying the warning up the chain of command, a decision credited with preventing a retaliatory nuclear strike at a time of heightened tension between the superpowers. The superpowers’ hair-trigger deployment of their nuclear arsenals meant that misunderstandings and the fog of (Cold) war could cause even rational actors to choose a fundamentally irrational course, and there was little time to deliberate or think twice about whether to launch. A world of MAD is not a safe world.

Moreover, the argument that nuclear weapons deter nuclear war is not by itself sufficient to justify their existence unless nuclear war would be more likely in a disarmed world or a world that attempted disarmament than in a world of nuclear deterrence. This point brings me to the arguments against nuclear disarmament based on the practical infeasibility of that goal.

In A Skeptic’s Case for Nuclear Disarmament, Michael O’Hanlon, a senior fellow at the Brookings Institution, argues that the process of disarmament raises two dangers: the danger of incentivizing proliferation and the danger of cheating. Because any serious move toward disarmament would have to be led by the United States – which possesses the world’s second-largest arsenal – its allies, like Japan, South Korea, or Poland, might feel so apprehensive about losing America’s nuclear umbrella in light of mounting geopolitical tensions and rivalries that they would decide to acquire their own nuclear deterrent in response. For this reason, O’Hanlon recommends deferring nuclear disarmament until after major geopolitical tensions between Russia, China, and the United States have been resolved. It could be added that nuclear disarmament, which would require extensive cooperation between these great powers, would itself probably be more feasible if they were to resolve their disputes.

One reply to this argument is that it threatens to defer disarmament into the indefinite future – in practical terms, it implies no change to the intolerable status quo. There is no guarantee that even if the current disputes between the great powers were resolved, some new ones would not arise. As we have seen, there are also reasons to doubt whether America’s nuclear arsenal really is an effective deterrent. Moreover, that these disputes increase the likelihood of nuclear war is one of the best reasons for pursuing disarmament. And historically, it is not unheard of for nuclear-powered rivals to work together to reduce their nuclear arsenals, or even to talk seriously about disarmament.

O’Hanlon also argues that because of the extreme difficulty of verifying compliance with a disarmament agreement, particularly with respect to stockpiles of fissile materials, there is a serious danger that some rogue state will secretly build a nuclear weapon and use it for the purpose of nuclear blackmail. For this reason, he recommends that any disarmament treaty include a reconstitution provision pursuant to which any party could temporarily withdraw from the treaty and reconstitute its arsenal if it can show to an impartial body that it faces a serious nuclear, chemical, biological, or even conventional threat.

Such a reconstitution provision might, however, introduce further instability into the disarmament regime. Once a treaty party withdraws, its geopolitical rivals would certainly be strongly motivated to withdraw as well; indeed, one party’s withdrawal could be a sufficient reason for its rivals’ withdrawal. In effect, this would unravel the disarmament regime and take the world back to square one. Moreover, even if O’Hanlon is correct that no conventional deterrent could adequately prevent nuclear blackmail or conventional aggression by a rogue state, arguably a world characterized by a higher risk of conventional aggression and nuclear blackmail is still preferable to a world characterized by a non-trivial risk of a nuclear exchange.

Of course, there is much more to be said about the arguments for and against disarmament; in the foregoing I have only managed to scratch the surface. Some useful further resources include O’Hanlon’s book, Raimo Väyrynen and David Cortright’s Towards Nuclear Zero, George Perkovich and James M. Acton’s Abolishing Nuclear Weapons, and McGeorge Bundy’s Danger and Survival: Choices About the Bomb in the First Fifty Years. Whichever way you ultimately come down on this issue, with the nuclear order straining under new challenges, it behooves all of us to reflect seriously upon the desirability and feasibility of a renewed push for nuclear disarmament.

Taking Offense with Emily McTernan

Imagine sitting in a staff meeting where one of your co-workers makes a joke about people with disabilities. You’re offended, so you roll your eyes and cross your arms in front of your chest for the rest of the meeting. You might worry that your reaction was pretty insignificant, and didn’t really do any good. My guest, philosopher Emily McTernan, argues that taking offense and showing disapproval, even in small ways, can actually be a force for social good.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Emily McTernan, On Taking Offence
  2. Amy Olberding, The Wrong of Rudeness: Learning Modern Civility from Ancient Chinese Philosophy
  3. Sarah Buss, “Appearing Respectful: The Moral Significance of Manners”
  4. Cheshire Calhoun, “The Virtue of Civility”
  5. Joel Feinberg, Offense to Others

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Funk and Flash” by Blue Dot Sessions

“Rambling” by Blue Dot Sessions

A Right to a Home?

photograph of homeless tents in downtown Los Angeles

Over half a million people are homeless in America. In New York City, shelters overflow as endemic homelessness combines with migrants seeking refuge. Struggling with rising crime and drug use, famously progressive Portland now takes more direct action against its homeless population, such as clearing out camps. California, where almost one third of America’s homeless population lives, is fighting a losing battle against surging housing prices and the societal sequelae of COVID.

Homeless Americans are a diverse population. Some live on the streets; others sleep in shelters, in their cars, or on friends’ couches. Many homeless people work, but find their incomes inadequate to pay for housing. The causes of homelessness are likewise diverse, including domestic abuse, disability, mental illness, inadequate pay, and housing prices, but there is little evidence to support the occasionally heard allegation that homelessness is a choice. Unsurprisingly, most homeless people want adequate housing.

But just what is owed to the homeless of America? Is homelessness merely unfortunate, or does it represent a deeper moral failing of the government? Could there even be a right to a home which is currently unfulfilled for so many Americans?

One answer is that nothing is owed to the hundreds of thousands that are currently unhoused. Or at least, nothing special. Nonetheless, such a belief would not preclude the government from helping homeless individuals. For starters, the government may act to ensure that basic rights (e.g., due process of law, expression, security, the right to seek emergency health care) are not unfairly denied to homeless individuals. A government might act further out of compassion, as homelessness often goes hand in hand with poverty, vulnerability, and harm. More calculatingly, a government could be moved by purely practical concerns. Cities can struggle with the impacts of large indigent populations. A government may also want to address homelessness to increase potential productivity, or even the aesthetics of a community.

This may seem callous, but even an “owe nothing” account can take us fairly far. Increasingly popular homeless bills of rights, such as the one recently introduced in Michigan, proceed along such lines. The Michigan bill aims to secure rights such as “equal treatment by all state and municipal employees” and “freedom from discrimination in employment.” The core idea is that a particular class of people should not be unfairly discriminated against – that they are owed the same rights as everyone else – and such legislation therefore echoes previous legislation which enshrined women’s rights, LGBTQ+ rights, and racial minority rights.

What such legislation does not do is contend that homeless individuals are owed resources. It is certainly intended to have an alleviatory effect on homelessness, but it does not obligate the government to do anything other than prevent discrimination. A much stronger claim would be that homeless individuals are entitled to shelter, or perhaps even homes.

Such rights are challenging. More than merely a good thing to do, a right to a home would place a positive obligation on the government to provide shelter. We rarely think this way. Cars, computers, and smartphones are also important to the way people live and work, yet few feel that the government owes us these.

Why should the government provide houses? And even if we accept that such a right exists, what exactly is required to fulfill it? Does the government merely need to provide some shelter? Does it have to be nice?

Despite these hurdles, the idea of housing as a right has a long history. The Universal Declaration of Human Rights, for example, identifies access to adequate housing as a fundamental right.

One way to approach a right to a home is via a famous thought experiment from political philosophy – John Rawls’s veil of ignorance. Imagine a discussion among people trying to design the ideal society. However, there is a major caveat: these individuals do not know who they will be in this society – what characteristics they might possess and what kind of social position they might come to hold. They are behind the veil of ignorance. Consequently, designing a deeply unequal society where many will experience a poor quality of life is a risky proposition. Few would endorse such a society knowing they stand a good chance of receiving the short end of the stick. Instead, these idealized actors may wish to ensure that no matter what kind of life they come to have, they are guaranteed access to certain basic goods, including shelter. Such a thought experiment gives us insight into how we might reason our way to a just society, rather than simply taking what history has given us.

Alternatively, a right to a home might be secured by the pursuit of freedom and equality of opportunity. As the legal theorist Jeremy Waldron has pointed out, homelessness greatly restricts freedom. For starters, everything that is banned in public is, for the homeless, forbidden. Moreover, the material fact of being homeless (and not simply discrimination against homeless individuals) restricts one’s ability to acquire and keep property, raise a family, stay healthy, and seek medical care. Anything that depends on being specific places at specific times, entering private property, or having stable access to a cell phone or computer (keeping cell phones charged is particularly difficult), can be challenged by homelessness and its attendant hardships. If we care about ensuring that all people are in a position to pursue opportunities and better their lives, then a home may not be a goal but rather a prerequisite, and thus something that the government should provide as part of a minimum standard of living.

This moves us away from the idea that a house is simply a resource or good – something to be bought and sold on the market – and towards the idea that the significance of the house is the capabilities it enables.

The right to housing, then, is not simply an entitlement to a structure with certain amenities, but rather a way to satisfy basic needs like privacy and safety.

If one or both of these arguments is compelling, a final concern still needs to be discussed, namely, money. A frequent challenge to rights such as healthcare and housing is that they are bottomless money pits. There is nuance here, with some arguing that addressing homelessness can actually save money long term. Regardless, there is at least the risk that implementing a right to housing could be expensive. But this is not a substantive objection to the existence of a right. If a society accepts that there is a fundamental right to housing, then the cost is a secondary consideration. By the same token, we do not nullify the right to free speech because we do not always like its consequences. Under both the above accounts, some redistribution of money is justified to secure a more foundational form of fairness.

However, an implementation of a right to a home would have to take seriously that homelessness is caused by more than a lack of houses. Major underlying causes such as soaring housing prices along with addiction, disability, mental illness, and inequality continue to drive people towards homelessness. Additionally, there is an inherent tension between property owners – who want to keep the value of a commodity high – and property seekers; NIMBY sentiment encourages us to simply move the problem elsewhere. If the government does finally commit to housing as a right, that entitlement will have to be secured in the face of these enduring challenges. This may demand more serious societal modifications than simply investing in new construction.

Capitalist Humanitarianism with Lucia Hulsether

Ethnographer and historian of religion Lucia Hulsether is on the show today to talk about the strange phenomenon she calls “capitalist humanitarianism.” She studies the ways that corporations attempt to distance themselves from the harms of capitalism by doing things like selling environmentally-friendly goods or promoting socially-responsible investing.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Lucia Hulsether, Capitalist Humanitarianism

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Single Still” by Blue Dot Sessions

“Capering” by Blue Dot Sessions

Supervised Injection Facilities and the Morality of Harm Reduction

photograph of discarded syringe on asphalt

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


People often support policies that lessen the harms others experience. For instance, proponents of abortion rights often argue that banning abortion does not eliminate abortions, it only makes them unsafe. Some high school sex education programs provide condoms to students to curb the spread of sexually transmitted diseases. Although alcohol is traditionally banned in homeless shelters, some have shifted to a “wet” model, allowing residents to use alcohol and in some cases even prescribing it. The rationale is that it is easier to work toward sobriety in a managed environment and when one has shelter at night.

More recently, some have considered the role harm reduction may play in addressing the U.S. opioid epidemic. According to the Centers for Disease Control, 93,655 Americans died of drug overdoses in 2020, a 30% increase from 2019, and a further 107,622 died of overdose in 2021. One of the leading contributors to this spike in deaths is the increased presence of fentanyl. Because of its potency, lower cost, and addictive potential, fentanyl is often mixed with other powdered drugs or sold in their place. As a result, people who unknowingly consume fentanyl may accidentally overdose, not realizing the strength of the drug they are consuming.

In response, policy makers have been taking measures to reduce the risk of harm fentanyl poses. For instance, although once labeled as “drug paraphernalia,” fentanyl test strips have been decriminalized by lawmakers across the U.S., in the hope of helping drug users avoid fentanyl. Some have called for further steps, including the creation of Supervised Injection Facilities (SIFs). At these facilities, individuals are permitted to bring in and consume drugs. They are then provided with the means to use these drugs as safely as possible; they receive clean needles, alcohol pads to sterilize injection sites, and medical staff remain on standby to monitor for potential signs of overdose. Additionally, staff can help secure access to resources such as addiction counseling and treatment. The idea is to reduce overall harm by ensuring that those who would otherwise use drugs in public are instead in a private, controlled space with access to resources which can help secure their long-term health. OnPointNYC, the organization running the SIFs, reports that it has intervened in 848 overdoses on site, with zero deaths in 68,264 uses.

SIFs, however, are not popular in the U.S. Although other locales have considered opening SIFs, New York City contains the only two officially operating in the U.S. – one in East Harlem and one in Washington Heights. However, Representative Nicole Malliotakis of New York’s 11th District has called on the Justice Department to shut down “heroin shooting galleries that only encourage drug use and deteriorate our quality of life.” Pennsylvania’s state senate recently passed a bill banning SIFs by a 41-9 margin. Senator Christine Tartaglione, a Democrat from Philadelphia, stated that her “constituents do not want safe injection sites in the neighborhood” and claimed that these sites “enable addiction… [and] we should be in the business of giving these folks treatments.”

These and other potential objections warrant further examination. For the purposes of this discussion, I want to consider arguments against harm reduction in the context of SIFs. In doing so, these reflections may yield some insight about harm reduction arguments in other contexts.

One might object to SIFs because they appear to publicly endorse illegal behavior. Yet we may have reason to find this objection uncompelling – the law and morality often diverge. To oppose SIFs because the drugs consumed there are illicit is merely to pass the buck. Why should we regard the use of particular drugs as morally objectionable? Why prefer a policy of abstention to one of moderation? Our focus is better placed on arguments that target SIFs themselves.

The claims by public figures quoted earlier suggest that SIFs fail to prevent harm and instead increase it. There seem to be two purported reasons for this. First, that SIFs enable or even promote drug addiction. Second, that SIFs lead to a deterioration of the surrounding area, encouraging drug users to occupy it, which leads to drug dealing, public drug use, and further threats to the local community.

The available data, however, does not support these arguments. Researchers have found that SIFs lead to lower rates of overdose and decreases in infectious disease rates among drug users. So, SIFs appear to lessen harm to addicts, at least in the short term. Further, SIFs do not seem to impact local crime rates, and, at worst, have no impact on public drug use and needle litter (though there is some evidence that they reduce both).

There is an intuitive argument that these facilities will deteriorate neighborhoods by drawing in drug dealers – the supply may seek out the demand. However, support for this claim is primarily anecdotal. Further, while narcotics arrests have increased in New York neighborhoods with SIFs, these areas now have additional police presence outside of SIFs. It’s at least plausible that an increased police presence is the cause of additional arrests.

Further, there seems to be little, if any, data on the long-term effects of SIFs for overcoming addiction. Perhaps more clarity on long-term consequences of SIFs will come as their impacts are further researched. But currently there seems to be little evidence suggesting they are harmful. They seem to benefit addicts, at least in the short term, and there does not appear to be conclusive evidence that they harm the surrounding community.

But perhaps considering only the consequences misses the point. As I have argued elsewhere, sometimes the consequences of a policy do not seem to matter in the face of other moral objections. Consider, for instance, someone arguing that making cannibalism illegal just produces additional harms – it pushes the market for human meat underground, making regulation and oversight impossible, harming both the producers and consumers of human meat. Thus, this person concludes that legalizing cannibalism and regulating human meat consumption would make things safer.

These points, however, fail to resonate as objections to prohibiting cannibalism. This is because harm is just one factor (if even a factor) behind cannibalism’s illegality. Part of the reason why we have laws is to express our attitudes towards a behavior. In this case, eating human flesh simply seems deeply morally wrong to us.

Following this logic, the opponent of SIFs could argue that there is something morally objectionable in drug use, even if SIFs do reduce harm in the long run. That explanation could come in various forms. For instance, in the Groundwork of the Metaphysics of Morals, Immanuel Kant argues that someone who refuses to develop her talents acts immorally by disrespecting her own humanity – she has a potential that she is ignoring in favor of seeking pleasure. Alternatively, one might ground an objection to drug use in the virtues. Given the long-term risks associated with drug use, one who regularly uses may fail to demonstrate the virtue of prudence. Thus, one might argue that, if drug use is morally wrong, then facilitating it via SIFs would make one complicit in wrongdoing.

Even if one can give a compelling argument that drug use is in some way immoral (although this may be difficult given the disease model of addiction) there are hurdles this explanation must overcome. Namely, it is unclear whether these concerns are the proper basis of legislation. The government has, at best, a limited prerogative to promote virtue, at least in a society with robust individual rights to self-determination. Further, given the sheer scale of deaths from drug overdoses in the United States, it seems more plausible that reducing harms by participating in or facilitating wrongdoing is a lesser evil than continuing with a status quo that results in tens of thousands of deaths a year. And even still, it is not clear that facilitating a wrong behavior for the sake of minimizing harm is itself wrong.

Opponents of SIFs seem to have two rhetorical options available to them. They may argue that SIFs do not, in fact, reduce harm. But this claim has a tenuous relationship to current data. Alternatively, they may argue that even if they do reduce harms, SIFs are ultimately unjustifiable for moral reasons. There is more flexibility in developing arguments of this nature, but serious theoretical difficulties remain even if one can give a plausible argument for drug use’s immorality. Perhaps this is why opponents of SIFs couch their arguments in terms of the consequences of SIFs, even when they lack the data to support these claims.

Ultimately, if OnPoint’s figures are accurate, SIFs show great promise at limiting deaths from overdose. Even if this is their only benefit, this alone should make us pause before rejecting them. While they may only address the symptoms of the opioid crisis in the U.S., we have compelling moral reason to minimize harms while solving the underlying problems behind addiction.

Is NIMBYism Immoral?

photograph of high cedar fencing on neighborhood homes

Why can’t we make significant strides in combating homelessness? Why does the construction of adequate housing in high-demand regions persistently falter? Why are we unable to execute the extensive setup of wind farms and solar plants? Why does the emergence of next-generation nuclear power plants seem a distant dream? Among the complex array of answers that emerge, one frequent, simple response often floats to the top: “NIMBY-ism.”

The acronym “NIMBY,” which stands for “Not In My Back Yard,” is a phrase emblematic of certain residents who vehemently oppose development projects in their local areas. Their opposition, interestingly, is not necessarily premised on any deep-seated issues with the project itself. Rather, it is the development’s proximity to their home that evokes their protest. The term NIMBY has an unsurprisingly pejorative tone. It conjures an image of an individual prioritizing personal comforts over the common good. NIMBY tends to paint a picture of selfishness — an individual who comprehends the potential advantages of a project for the broader community and could even endorse it enthusiastically, provided it happened elsewhere. Picture a resident who resists a development project for fear it may reduce the exclusivity of their neighborhood, cause a slight dip in their property value, or result in the tiniest disruption to their everyday routine.

This portrayal often transforms NIMBYism into a moral failing — an ethically suspect character-type indicative of a lack of empathy and commitment to collective responsibility. Indeed, many philosophers suggest that the cornerstone of morality is impartiality — an unbiased concern for the rights and well-being of all individuals. On this view, a moral violation occurs when a person fails to act impartially, demonstrating inequitable concern for others.

Can anything be said in defense of NIMBY sentiment? Is it possible that some NIMBYs could be misunderstood “NIABYs,” defenders of the principle: “Not in Anyone’s Backyard”? There can be instances where opposition to development springs from genuine impartial concerns about preserving local community values, upholding neighborhood aesthetics, or ensuring environmental and cultural preservation. The impartial NIABY opposes development in any area where these values are at stake, not merely in their own. This perspective, in contrast to NIMBYism, doesn’t seem selfish and doesn’t appear to violate the impartiality central to morality.

But what about the true NIMBYs? Aren’t they necessarily morally deficient? Well, the moral demand for strict impartiality isn’t always clear-cut. We wouldn’t demand a parent care equally about the well-being of a stranger’s child as they do their own. Likewise, we wouldn’t expect someone to invest the same effort for anybody as they would for a dear friend. Thus, some level of partiality — varying degrees of care contingent on the significance and the special nature of relationships — might not just be morally permissible but could even be an aspect of having good moral character, of being connected to others in the right kind of way.

Viewed through a generous lens, NIMBYism could be seen in a similar light. Just as it seems socially acceptable for most of us to contribute to a friend’s healthcare costs (despite the fact that our dollar could have more impact donating to highly effective charities), perhaps it is also acceptable to care particularly about the welfare of one’s own community and its residents. After all, people share deep and meaningful connections with their communities, akin to their ties with friends and family members.

This defense of NIMBYism, however, has its limits. Even if morality can accommodate a degree of partiality, there comes a point when the needs of the wider community must be taken into account. NIMBYs still might be taking their partiality too far, just as a parent might inappropriately overprioritize the well-being of their own child above the well-being of others.

If morality does allow for some degree of partiality, if it makes space for special concern for specific relationships, then perhaps the issue with NIMBYism lies elsewhere. Perhaps NIMBYism’s ultimate problem lies more in the realm of justice. Certain people and communities are strategically positioned to leverage existing zoning and development laws to block local development. Areas populated by educated, wealthy, and time-rich residents have an apparent advantage here, thereby nudging undesirable development towards areas with fewer resources to resist effectively. This inevitably creates disparities in the distribution of developmental benefits and burdens.

So, if this perspective on NIMBYism holds water, then perhaps the typical moral condemnation of NIMBYs is misguided. But what’s the appropriate alternative? One solution could be a reform of development and zoning laws to ensure a level playing field amongst communities. If it’s morally permissible for all of us to harbor special care for our own communities, then it becomes crucial to have a political system that equally enables all of us to express that special care.

Social Equality with Jessica Flanigan

Social or relational egalitarians believe that humans should treat one another as equals. They’ll often point to democracy as the most realistic means of achieving their political goals in an egalitarian way. And this makes sense in theory. Everyone gets a vote, everyone gets an equal say. My guest today argues that democracy might not actually be the most equitable way of making decisions in a society. Jessica Flanigan is a philosopher at the Jepson School of Leadership Studies at the University of Richmond, and she says that egalitarians might want to rethink their commitment to democracy.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Jessica Flanigan, “Social Equality and the Stateless Society”

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Gin Boheme” by Blue Dot Sessions

“Borough” by Blue Dot Sessions

Moral Education in an Age of Ideological Polarization: Teaching Virtue in the Classroom

photograph of apple on top of school books stacked on desk

The Program for Character and Leadership at Wake Forest University was recently awarded $30.7 million by Lilly Endowment Inc. to create a national higher education network focused on virtue formation. Approximately $7 million will go towards further strengthening the program at Wake Forest, while $23 million will be earmarked for funding initiatives on character at other colleges and universities.

While this project is a big win for Lilly, which supports “the causes of community development, education and religion,” it also raises pressing questions about the role of the moral virtues within higher education. In the wake of the Unite the Right Rally in Charlottesville, Virginia, professor Chad Wellmon wrote in The Chronicle of Higher Education that the University of Virginia could not unambiguously condemn the demonstrations. This is because universities, Wellmon wrote, “cannot impart comprehensive visions of the good,” making them “institutionally incapable of moral clarity.” On Wellmon’s view, universities should focus solely on the life of the mind, leaving profound moral questions to churches, political affiliations, and other civic organizations.

Supporting this vision of the university, many conservatives have complained that higher education is insufficiently neutral when it comes to moral and political values. In rejecting courses on Black history deemed to lean too far left, Florida Governor Ron DeSantis claimed that citizens “want education, not indoctrination.”

If higher education ought to remain neutral and eschew a deep moral vision, however, then how is it possible for universities to stay true to their mission while, like Wake Forest, simultaneously engaging in character education?

One thing that can be said is that institutions of higher education already do engage in virtue education. Due to their commitment to help their students think well, colleges and universities encourage their students to be curious, open-minded, and intellectually humble. As even Wellmon acknowledges, forming the life of the mind requires robust intellectual virtues, including “an openness to debate, a commitment to critical inquiry, attention to detail, and a respect for argument.”

Along with these intellectual virtues, higher education also supports a number of civic virtues as well. Because colleges and universities are tasked with preparing students to be responsible citizens, they often aim at promoting civility, tolerance, and civic engagement. These virtues equip graduates to contribute within liberal democracies, coupling their intellectual development with civic preparation.

The obvious objection to these examples is that the virtues in question are not moral virtues. Intellectual and civic virtues may be well within the purview of higher education, but should professors really take it upon themselves to teach compassion, courage, generosity, integrity, and self-control?

While these might seem strange in the context of the modern university, it is interesting to note that higher education does emphasize at least one moral virtue – the virtue of honesty. Regardless of the institution, academic honesty policies are ubiquitous, forbidding cheating, plagiarism, and other forms of academic dishonesty. We have, then, at least one obvious example of a moral virtue being promoted at the university level. If the moral virtues generally seem so out of place at colleges and universities, then why does honesty get a pass?

The intellectual virtues find their place within the academic world because of the ways they promote the mission of higher education. The flourishing life of the mind requires the intellectual virtues, and so there are no complaints when professors help students form their intellectual characters.

But honesty also plays an important role in thinking well. If, every time a student encounters an intellectual challenge, they turn to cheating or plagiarism, they are missing out on an opportunity to do the difficult work of developing the intellectual virtues. Academic dishonesty short-circuits their ability to grow in the life of the mind, making it important for instructors to not only encourage the intellectual virtues, but to guide students towards honesty as well.

From this we can see that, while universities do not typically engage in moral education, this is not because they must always remain neutral on moral issues. Instead, universities simply do not see the other moral virtues as necessary for their mission.

But such an omission is not always well-motivated, as there are many moral virtues that are integral to the goals that universities have for their students. Consider, for example, the goal of helping students prepare for careers post-graduation. While employers might be looking for candidates that are open-minded and intellectually curious, they likely also hope to hire professionals with honesty, integrity, and self-control. Employers want doctors who are compassionate, professors who are humble, and lawyers who are just.

If college presidents, deans, and provosts see it as part of their mission to prepare students for the working world, then there is a place for character formation on campus. While some may contend that job training is not the most important mission of the university, it is nevertheless a significant one, making the task of developing morally virtuous teachers, nurses, and engineers a central mission of higher education.

This emphasis on moral virtue, of course, still allows universities to leave space for students to develop their own visions of what a good and meaningful life might look like. Emphasizing the moral virtues does not require compromising the ideological neutrality necessary for a diverse and challenging university experience. Instead, emphasizing character can only deepen and strengthen what higher education has to offer, teaching students to not only be good thinkers, but to be good people as well.

Phenomenology of Black Spirit with Biko Mandela Gray and Ryan J. Johnson

We’re reframing the philosophical canon today with Biko Mandela Gray and Ryan J. Johnson. Their new book, Phenomenology of Black Spirit, puts major Black thinkers in conversation with the work of the philosopher Hegel.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Biko Mandela Gray
  2. Ryan J. Johnson
  3. Phenomenology of Black Spirit
  4. Ethics and phenomenology
  5. Ronald Judy, Sentient Flesh: Thinking in Disorder, Poiēsis in Black
  6. More on Ella Baker

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Single Still” by Blue Dot Sessions

“Cran Ras” by Blue Dot Sessions

Ethics in Focus with Elena Ruíz and Nora Berenstain

This episode is part of our Ethics in Focus series where we present full-length interviews with expert guests. This series features conversations about ethics for folks already familiar with the field of ethics. Today, I’m talking to the philosophers Elena Ruíz and Nora Berenstain about the criminalization of pregnancy in North America. We’re discussing their 2018 article “Gender Based Administrative Violence as Colonial Strategy.”

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Elena Ruíz and Nora Berenstain, “Gender-Based Administrative Violence as Colonial Strategy”
  2. Leanne Simpson, “Land as pedagogy: Nishnaabeg intelligence and rebellious transformation”
  3. John Locke, “Second Treatise of Government”
  4. Shannon Speed, Incarcerated Stories
  5. Cate Young, “This Is What I Mean When I Say ‘White Feminism’”
  6. Katherine Stewart, The Power Worshippers
  7. Anthea Butler
  8. Horatio Robinson Storer
  9. Kimberlé Crenshaw
  10. Angela Y. Davis, Women, Race and Class
  11. Reproductive justice

Resources provided by Nora Berenstain and Elena Ruíz

  1. Freefrom National Abortion Access Fund for Survivors
  2. Indigenous Women Rising abortion fund
  3. Women’s Reproductive Rights Assistance Project
  4. Mariposa Fund
  5. Surkuna (Ecuador)
  6. Fondo Maria (Mexico)
  7. Agrupación Ciudadana por la Despenalización del Aborto (El Salvador)
  8. Lilith fund
  9. Abortions without borders
  10. Plan C
  11. Sister Song
  12. Women with a Vision
  13. National Network of Abortion Funds
  14. Third Wave Fund
  15. Survived and Punished

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Single Still” by Blue Dot Sessions

“Cran Ras” by Blue Dot Sessions

Rules with Lorraine Daston

Lorraine Daston is Director emerita of the Max Planck Institute for the History of Science, and a visiting professor in the Committee on Social Thought at the University of Chicago. On today’s episode of Examining Ethics, we’re discussing her new book Rules: A Short History of What We Live By. She explains that regulations that seem to have little to do with morality – like spelling rules – are often tied to deep-seated values in a society.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Lorraine Daston, Rules: A Short History of What We Live By
  2. Thomas Kuhn, The Structure of Scientific Revolutions
  3. Ludwig Wittgenstein on rule following
  4. Immanuel Kant’s categorical imperatives
  5. 1908 National Spelling Bee

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Gin Boheme” by Blue Dot Sessions

“Songe d’Automne” by Latché Swing from the Free Music Archive. CC BY-NC-SA 2.0 FR

Debating the Death Penalty: Judicial Override of Life Sentences

photograph of gavel and judge's seat in courtroom

In 1986, 18-year-old Ronda Morrison was shot in the back multiple times while working her job at Jackson Cleaners in Monroeville, Alabama. Under pressure from police, Ralph Myers, who was facing charges for a different crime, implicated Walter McMillian in the murder of Ms. Morrison. McMillian, however, insisted that he was hosting a fish fry at his home at the time of the crime, and his account was supported by many witnesses who were present at the event. All of these witnesses were Black. Ultimately, McMillian was tried and convicted of aggravated murder by a jury comprising eleven white jurors and one Black juror. The jury recommended life in prison, but the state of Alabama at the time allowed judges to override the sentencing recommendations of juries. The judge in McMillian’s case ignored the jury’s recommendation and sentenced him to death. Despite the outcome of the trial, McMillian was factually innocent of the murder. (And Myers later recanted his account of the events.) As a result of the appeals process, after spending six years on death row, McMillian was exonerated and released.

In 1972, Furman v. Georgia effectively abolished the death penalty across the country. One dominant rationale for the decision was that there was strong evidence that the death penalty was not imposed in a consistent way – the manner by which it was meted out in practice provided evidence of strong racial bias. The court ruled that states must ensure that sentencing not be discriminatory or capricious.

In response to the Furman decision, four states passed legislation allowing for judicial override of jury sentencing recommendations: Alabama, Delaware, Florida, and Indiana.

The initial rationale for passing these laws was to reduce the number of cases in which the death penalty was imposed. The idea was that judges could overturn jury recommendations of a sentence of death and instead impose a sentence of life in prison.

However, the legislation also gave judges the power to go in the other direction — to overturn a jury’s sentence of life in prison and instead impose death.

The primary concern with this kind of legislation is that it violates the defendant’s Sixth Amendment right to a trial by jury. In recent years, all of these states have, in principle, abolished the practice of judicial override of this type. In practice, however, Alabama still executes individuals who were sentenced to life by juries but death by the judge, even though it abolished judicial override in 2017.

This issue made news again earlier this month as the execution date of Kenneth Eugene Smith approached. Smith was convicted of a 1988 murder for hire; a preacher had paid him $1,000 to kill his wife, Elizabeth Sennett. Smith stabbed her eight times in the neck and chest. The jury in his second trial voted 11-1 to impose a life sentence, and the judge took advantage of his ability to override this decision and impose the death sentence instead.

The right of a person to be tried and sentenced by a jury of their peers is a cornerstone of democracy. We do not want punishment to be exacted at the hands and in the interests of tyrants.

We value a process of rational deliberation and discourse that allows a group of people who share similar cultural and moral values to evaluate evidence and to engage in discourse to come to agreement on what conclusions the evidence supports. This process, we think, generates the best conclusions we could hope to reach. In theory, the deliberative procedure ensures fairness.

Unfortunately, the decisions a jury reaches do not always live up to the standards of procedural fairness. Individuals are prone to bias and that bias does not always, or even often, disappear when you get more people together. In fact, problems of bias can often intensify under these circumstances. A juror who might otherwise be leaning toward acquittal or toward a lighter sentence might be hopelessly influenced by peer pressure during deliberations.

It’s also true that there are no standards when it comes to the required intelligence levels and educational backgrounds of jurors. So, the same concerns some have about voters may also apply to jurors — sometimes groups of people who don’t know much about the things they’re being asked to decide make very bad decisions. This is a heightened challenge when cases turn on highly technical evidence or on the finer points of the law.

It might be tempting, then, to think that the most serious and impactful decisions should be left to people who know the system best. Certainly, judges know the law; they’ve heard evidence of all types and presumably have refined methods for processing and interpreting it. They may not be subject to the same kinds of bias that one might expect to see in a group of jurors. If they see a person who might be sentenced to death as a result of racial bias, they can stop it before it happens. On this view, judges are like Plato’s philosopher kings, adept at reason and in a position to serve as a shield against the tyranny of the many, in this case, the jury. Of course, this is hopelessly idealized as well.

All human beings act in biased ways, and judges are no exception. Far from shielding us from tyranny, when judges make decisions unilaterally and in conflict with the decisions of the jury, they may simply be acting as tyrants.

Judges also often have political aspirations and are subject to elections. This means that they have good reason to desire that their decisions in any particular case are politically popular. This seems to have played a role in the sentencing of both McMillian and Smith. It was common knowledge in McMillian’s community that he had an affair with a white woman in an area and at a time during which people had deeply bigoted attitudes toward interracial relationships. For this reason, a death sentence for McMillian may well have been popular with local voters. In the Smith case, the sentence was imposed during a second trial granted after an appeal of the results of the first. In the first trial, Smith was sentenced to death by the jury, and many members of the community were distressed that the sentence might change — they viewed a life sentence as a miscarriage of justice. After all, Smith was willing to take someone’s life for the paltry sum of $1,000.

To meet their burden of proof, the prosecution must present evidence that convinces the jury beyond a reasonable doubt that the defendant is guilty.

We’d all like to think that jurors always take that standard seriously, but human beings are fallible. One of the reasons why a jury might opt for a life sentence instead of death is lingering doubt about the guilt of the defendant.

If it turns out that the jury got it wrong, a life sentence allows for a much greater possibility that the truth will come to light, and the innocent person will be exonerated. Death preempts that possibility permanently.

In another twist in this case with serious moral implications, on November 17th, the state of Alabama attempted to execute Smith. They tried, unsuccessfully, to find a vein and establish a line to administer drugs that would kill Smith. They prodded him with needles for an hour before finally giving up and calling off the execution for the night. This was the third time that this problem has occurred during an execution in the state, raising concerns about the competency of the people charged with killing human beings in the name of the state.

Critically, this case motivates reflection on one of the most important questions our country faces: should we abolish the death penalty outright? Death is the most extreme and irreversible punishment a society can impose. Ought we be imposing a sentence this severe when judges and juries can come to such dramatically different conclusions about whether it is appropriate in any given case? If we think that there are fundamental flaws with both jury and judicial sentencing, should we be willing to accept death as an outcome of an inescapably flawed system? If, on top of all of this, the ability to impose the death penalty humanely in practice is so often called into question by botched case after botched case, isn’t the death penalty obviously cruel and unusual?

Victims’ Rights with Lenore Anderson

Practical or applied ethics involves a lot of discussion about harm. And when we’re examining harm as it relates to crime, we tend to focus on victims. However, Lenore Anderson, president of the Alliance for Safety and Justice, argues that we need to take care that our discussion of harm isn’t centered on just one group of people. She’s here to discuss her new book In Their Names: The Untold Story of Victims’ Rights, Mass Incarceration, and the Future of Public Safety.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Lenore Anderson, In Their Names: The Untold Story of Victims’ Rights, Mass Incarceration, and the Future of Public Safety
  2. National Academies of Science, “The Growth of Incarceration in the United States”
  3. Victims of crime in New Orleans jailed so they could provide testimony in court

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Single Still” by Blue Dot Sessions

“Songe d’Automne” by Latché Swing from the Free Music Archive. CC BY-NC-SA 2.0 FR

Obedience with Pauline Shanks Kaurin

There’s perhaps no better example of an obedient person than a soldier. And yet, soldiers often thoughtfully disobey direct orders, and in some cases are legally obligated to do so. Pauline Shanks Kaurin, a philosopher and professor of military ethics at the U.S. Naval War College, joins us to explore the ethics of obedience. She’s discussing her book On Obedience: Contrasting Philosophies for the Military, Citizenry, and Community.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Pauline Shanks Kaurin, On Obedience: Contrasting Philosophies for the Military, Citizenry, and Community
  2. Mỹ Lai massacre
    1. Hugh Thompson
  3. Alasdair MacIntyre, After Virtue
  4. Martin Luther King, Jr., “Letter from a Birmingham Jail”
  5. Thomas Aquinas on unjust laws
  6. USS Theodore Roosevelt and COVID-19

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Gin Boheme” by Blue Dot Sessions

“Calgary Sweeps” by Blue Dot Sessions