Affirmative Action for Whom?

cutout of white man on corporate ladder elevated above peers

On Wednesday, Laura Siscoe challenged affirmative action advocates to reflect on their apparent tunnel vision: if what we seek are diverse campuses and workplaces – environments that attract and support students and colleagues who possess a diverse set of skills and approach problems from unique vantage points – then why confine our focus to race and gender categories? Surely realizing the intellectual diversity we claim to crave would require looking at characteristics that aren't simply skin-deep – factors like socio-economic background or country of origin, to name just a few. If diversity in thought really is our goal, it seems there are better ways of getting there.

Laura is no doubt right that much more could be done to diversify campuses and workplaces. But, at minimum, it seems prudent to protect the gains that historically marginalized groups have secured. Time and time again, formal legal equality – that each enjoys identical treatment under the law – has failed to secure equality of opportunity – that each enjoys a level playing field on which to compete. And when policies like race-conscious admissions go away, we revert to the status quo all too quickly. (The NFL's lack of diversity at the head coach position and the impotence of the Rooney Rule offer a compelling example.)

Critics of affirmative action, however, are quick to characterize such policies as special treatment for the undeserving. But it's important to separate the myth from the reality. As Jerusalem Demsas writes in The Atlantic, "No one deserves to go to Harvard." There is no obvious answer to who the best 1,200 applicants are in any given year. At some level, there is no meaningful distinction between the different portraits of accomplishment and promise that candidates present – their "combined qualifications." No magic formula can separate the wheat from the unworthy; there is no chaff. There are grades; there are scores; there are awards; there are trophies; there are essays; there are statements; there are kind words and character references. But there is no mechanical process for impartially weighing these various pieces of evidence and disinterestedly ranking applicants' relative merit. Nor is there an algorithm that can predict all that a seventeen-year-old will become. (This is perhaps why we should consider employing a lottery system: the infinitesimal differences between candidates coupled with the boundless opportunities for bias suggest it is the height of hubris to insist that the final decision remain with us.)

Contrary to critics, then, affirmative action is not a program for elevating the unqualified – a practice geared to inevitably deliver, in Ilya Shapiro’s unfortunate choice of words, a “lesser black woman.” Ultimately, affirmative action is a policy designed to address disparate impact – the statistical underrepresentation of the historically marginalized in positions of privilege and power. It’s aimed at addressing both real and apparent racial exclusion on the campus and in the workplace.

Those skewed results, however, need not be the product of a deliberate intention to discriminate – a conscious, malicious desire to keep others down. “Institutional networks,” Tom Beauchamp reminds us, “can unintentionally hold back or exclude persons. Hiring by personal friendships and word of mouth are common instances, as are seniority systems.” We gravitate to the familiar, and that inclination produces a familiar result. Affirmative action, then, intervenes to attempt to break that pattern, by – in Charles J. Ogletree Jr.’s words – “affirmatively including the formerly excluded.”

But just how far should these considerations extend? Some, for instance, complain of the inordinate attention paid to something as limited as college admissions. Francisco Toro, writing in Persuasion, has argued that we should stop wringing our hands over which segments of the 1% gain entry. There are far greater inequalities to concern ourselves with than the uber-privileged makeup of next year's incoming Harvard class. We should be worried about the social mobility of all and not just the lucky few. Let affirmative action in admissions go.

But one of the Court’s fears, from Justice Jackson to Justice O’Connor, concerns colleges’ ability to play kingmaker – to decide who inherits power and all the opportunities and advantages that come with it. They have also worried about where that power goes – that is, which communities benefit when different candidates are crowned. This is most easily witnessed in the life-and-death field of medicine, where there is, according to Georgetown University School of Medicine,

an incredibly well documented body of literature that shows that the best, and indeed perhaps the only way, to give outstanding care to our marginalized communities is to have physicians that look like them, and come from their backgrounds and understand exactly what is going on with them.

Similarly, the Association of American Medical Colleges emphasizes that “diversity literally saves lives by ensuring that the Nation’s increasingly diverse population will be served by healthcare professionals competent to meet its needs.” Minority representation matters, and not simply for the individual applicants themselves. Even the selection process in something as seemingly narrow as college admissions promises larger repercussions downstream.

Given the importance of representation, the gatekeeping function of colleges and employers, and the way discrimination works, some form of intervention seems necessary. And we don’t seem to have a comparable remedy on hand. “Affirmative action is not a perfect social tool,” Beauchamp admits, “but is the best tool yet created as a way of preventing a recurrence of the far worse imperfections of our past policies of segregation and exclusion.” That tool could no doubt stand to be sharpened: gender is a woefully crude measure of disadvantage and race is a poor proxy for deprivation. Still, the tool’s imprecision needn’t mean abandoning the task.

There’s reason why integration remains an indispensable, if demanding, goal. As Elizabeth Anderson claims, “Americans live in a profoundly segregated society, a condition inconsistent with a fully democratic society and with equal opportunity. To achieve the latter goals, we need to desegregate — to integrate, that is — to live together as one body of equal citizens.” We must ensure that everyone can see themselves reflected in our shared social world.

In the end, affirmative action is simply one means by which to accelerate desegregation – to encourage diversification in the positions of power that were formerly restricted. And it was never designed to last forever, as Wes Siscoe recently explored. Affirmative action is merely a stopgap measure – a bridge to carry us where we want to be: a colorblind world where superficial differences no longer act as impediments to advancement. Unfortunately, the equality of opportunity we seek is not yet a reality for all – we have not arrived.

Diversity of What?

photograph of the legs of people waiting for a job interview

Affirmative action privileges individuals who belong to particular social groups in processes of hiring and institutional admission. The practice still enjoys considerable endorsement from those in higher education, despite widespread public disagreement over the issue. The recent U.S. Supreme Court ruling thrust the issue into the limelight, fueling further debate. While there are a variety of moral arguments that can be employed in support of affirmative action, one of the most prominent is that affirmative action policies are morally permissible because they promote more diverse colleges and workplaces.

However, it is clear that not all forms of diversity should be included in the scope of affirmative action policies. We would balk at a college seeking a diverse range of shoe sizes amongst applicants. Similarly, we would scratch our heads at a company choosing employees based on the diversity of their culinary preferences. So which kinds of diversity should affirmative action policies target? In order to answer this question, we must first consider which kinds of diversity colleges and workplaces have reason to promote.

There are different kinds of reasons for action. For the sake of this discussion, let’s consider two distinct sorts of reasons: moral and prudential. Moral reasons are those which apply to individuals or groups regardless of the particular goals of that individual or group. Prudential reasons, on the other hand, apply only when an individual or group has particular goals.

I have a moral reason, for example, to be honest when filing my taxes. However, I might also have a prudential reason to lie, insofar as it would be good for my business's bottom line to avoid paying a lot of taxes. But moral and prudential reasons need not point us only in conflicting directions. Oftentimes we have both moral and prudential reason to perform a particular action. For instance, I have a moral reason to keep my promises to my friends as well as a prudential one: if I want my friends to remain in my life, I had better honor the promises I make to them.

The moral/prudential reasons distinction is helpful in determining which kinds of diversity colleges and workplaces have reason to promote. Let’s start with the category of moral reasons. Some claim that our societal institutions bear a moral responsibility to privilege certain groups in admissions and employment. Typically, this argument is applied to racial minorities who have been subjected to historical injustices. If such a moral responsibility really does exist, it provides societal institutions with a moral reason to engage in affirmative action along racial lines.

The challenge for the proponent of this style of argument is both to defend why such a moral responsibility applies to all societal institutions (as opposed to merely some) and to explain why this moral responsibility trumps all other competing responsibilities and reasons that such institutions might have. Put differently, even if institutions have a moral reason to favor racial minorities in admissions and employment, a further argument must be given to show that this moral reason isn't outweighed by stronger, countervailing reasons against affirmative action.

Now we can turn to the category of prudential reasons. Given the goals of colleges and businesses, what kinds of diversity might they have reason to promote? In the United States, affirmative action tends to be race- and gender-based. But if we consider the underlying goals of universities and employers, it’s not immediately clear why these are the types of diversity they have most reason to promote. Of course, there are important differences in the foundational goals of businesses and institutions of higher education. Colleges and universities are presumably most concerned with the effective education of students (as well as staying financially viable), while businesses and corporations tend to aim at profit maximization.

The spirit of open-minded inquiry that characterizes institutions of higher education seems to provide reason to promote diversity in thought. If the ideal college classroom is a place where ideas are challenged and paradigms are questioned, intellectual diversity can aid in achieving this goal. However, it is not immediately obvious that racial or gender diversity promotes this end, particularly since the majority of individuals advantaged by affirmative action are from similar socio-economic backgrounds. In order to defend affirmative action along the lines of race or gender, a case would have to be made that selecting for these categories is a highly effective way of selecting for intellectual diversity.

A similar point holds true with regard to affirmative action policies put in place by employers. Given the fundamental goal of profit maximization that businesses and corporations possess, these institutions have prudential reason to choose individuals who best help achieve this end. There does exist compelling empirical evidence that more diverse groups tend to outperform less diverse groups when it comes to problem-solving, creativity, and other performance-based metrics. However, these studies tend to demonstrate the upsides of a team possessing diverse skills, rather than diverse racial or gender identities.

Thus, it appears businesses and corporations have prudential reason to create teams with diverse skills, but more argument must be given in order to make the case that selecting for racial or gender diversity is an effective way of achieving this goal. Insofar as proponents of affirmative action seek to defend the practice on the grounds that it promotes diversity, it is imperative we get clear on which kinds of diversity our societal institutions have the most reason to promote.

The Empathetic Democracy: Countering Polarization with Considerate Civic Discourse

photograph of two closed doors labeled "Democrat" and "Republican"

Political polarization is at an all-time high, making partisan politics more bitter and divisive than in even the recent past. One proposal for mitigating polarization's rise is a focus on empathy, as empathizing with others can reduce feelings of contempt and encourage us to see things from another point of view. At the same time, though, empathy comes with its own risks, calling into question whether it is the right response to the growing political divide.

Political polarization involves more than just political disagreement. It also involves an emotional dimension. Polarized citizens often hold a number of negative attitudes towards their political opponents, actively disliking and distrusting them. According to Stanford Political Science Professor Shanto Iyengar, “Democrats and Republicans both say the other party’s members are hypocritical, selfish, and close-minded.” So not only do polarized voters disagree with one another, but they also have deep feelings of antipathy and contempt towards their political opponents.

One proposal aimed at countering this growing divide suggests that we should focus on building empathy. Not only can empathy help us find common ground with those with whom we disagree, but it can also help us overcome feelings of contempt towards our political opponents. Empathy involves seeing things from another person's point of view, adopting both their mindset and motivations. And taking on this perspective may enable us to experience more positive emotions towards our political opponents, realizing that we might well hold the same views if we were in their shoes.

Some have even argued that the development of empathy is what makes democratic discourse valuable. By attempting to understand those we disagree with, we have the opportunity to grow in empathy, seeing issues from a perspective that we do not normally inhabit. On this way of thinking, civic discourse is just as valuable for forming how we think of others as it is for forming our political beliefs.

And empathy might provide other benefits as well. It can help to instill civic and intellectual virtues like toleration and open-mindedness, and might even take us from merely tolerating the views of others to a place of mutual appreciation and respect. Furthermore, it can help us harness the benefits of cognitive diversity. Only when we are able to listen to those on the other side of the aisle are we able to work together to find solutions that are better for everyone.

Empathy, however, doesn’t come without its risks. Paradoxically, a growth in empathy can sometimes be connected with a rise in outgroup antipathy. Research has shown that those who are naturally more empathetic might identify more strongly with their family, political party, or nationality, making them less considerate of those outside their group. This selective empathy can then worsen, rather than alleviate, political polarization. As someone empathizes more deeply with those who hold similar political views, it can become more difficult to identify with those on the other side of the aisle.

At the same time though, while it might be more natural to empathize with those that we are already close to, empathy is not simply an automatic reaction like fear or attraction. Rather, we can choose when we are empathetic, making the conscious decision to put in the work to understand those who are different from us. And those who adopt a growth mindset when it comes to empathy are more likely to make an effort even when empathy doesn’t come quite so naturally. So even though it might be easier to empathize with those who vote for the same candidate, that does not prevent us from empathizing with those who cast a different ballot.

Nevertheless, even if we make a conscious effort to empathize with our political opponents, there are still other pitfalls. What if, for example, someone finds the views of their political opponents downright crazy or absurd? If a progressive Democrat attempts to empathize with a Republican and convinced QAnon believer, and cannot understand why the Republican believes the things that they do, then this could exacerbate polarization. The Democrat may be less likely to consider other conservative views in the future, concluding that the perspective of the other side is simply beyond the pale.

Furthermore, too much of an emphasis on empathy runs the risk of failing to acknowledge existing injustices. When coalition-building is the primary goal, harms that have not yet been corrected may fall by the wayside, or even worse, get swept under the rug. Simply empathizing with others is not enough to repair historically broken relationships, and empathy might only serve as a first step in reaching deeper, more meaningful forms of reconciliation.

For these reasons, empathy alone is not enough to solve political polarization. But even though empathy is not a complete solution, it can still play an important role. Lessening feelings of contempt and antipathy between political opponents can be the first step to more robust engagement, starting a conversation that can lead to rapprochement even for those who are currently bitter political rivals.

How to End ChatGPT Cheating Immediately and Forever

photograph of laptop and calculator with hands holding pen

How do we stop students from turning in essays written with the help of ChatGPT or the like? We cannot.

How do we stop students from cheating by using ChatGPT or the like to write their papers? We stop treating it as cheating.

It’s not magic. If we encourage students to use ChatGPT to create their papers, it won’t be cheating for them to turn in a paper created that way. In the long run, this may be the only solution to the great ChatGPT essay crisis.

Most teachers who rely on student essays as part of the learning process are in panic mode right now about the widespread availability of ChatGPT and other "Large Language Model" AIs. As you have probably heard by now, these LLMs can write passable (or better) essays – especially the standard short, five-page essay used in many classes.

In my experience, and based on what I have heard from other teachers, the essays currently written by LLMs tend to require some revisions to be passable. LLMs also have some blind spots that make unedited LLM papers suspect. For example, they shamelessly fabricate sources, often using real names and real journals but citing articles that do not exist. With a little fixing up, however, LLM papers will usually do, especially if the student is happy with a B. And the quality of LLM-generated essays is only going to get better.

There are many proposals out there about how to fight back. Here are two of my own. Multiple-choice questions are, according to social science research, just as valid and reliable as short-essay questions. As far as I can tell, LLMs are terrible at answering multiple-choice questions. And if you ask an LLM a question you want to use and it gets the answer right, you can either reword the question until the AI fails – or drop it. Another approach, which I have used in my applied ethics classes, is to replace the term paper with in-class debates. For all I know, some students are still using LLMs to write speeches, but it doesn't really matter. In a debate, the student has to actively defend their ideas and explain why the other side is incorrect. What I care about is whether they really "get" the arguments or not. I think it is working beautifully so far.

Still, students have to learn to write papers. Period. So, what are we to do? Whenever there's a panic over technological change, I always remember that Socrates and Plato were against the new technology of their time, too. They were against writing. For one thing, Socrates said (according to Plato) that writing would destroy people's memories if they could just write things down. Of course, we only know about this because Plato wrote down everything Socrates said.

Prefer more recent examples? Digital pocket calculators were the scourge of grade school math teachers everywhere when I was a kid. By the time I got to high school, you were required to bring your calculator to every math class. At one university I was at, students were allowed to use their laptops in class only with special permission. Now, at my current school, all students are required to have a laptop and are usually encouraged to use it in class.

Essay writing will survive the rise of LLMs somehow. But how?

People are going to use whatever useful technology is available. So, as I said, we may as well encourage students to use LLMs to write, and think, better. But is it true that it is no longer cheating if we simply cease to regard it as cheating?

It's cheating to turn in a paper that you claim is your own work if it isn't. It's not cheating when you have permission to work with someone, or with an LLM, on it.

There are at least two important objections to this view, and I will end by describing, though not necessarily settling, them.

One objection is that LLMs are trained by basically consuming most of the internet – along with input from human interlocutors. In other words, the massive amounts of data processed by any LLM are all the work of other people. There are serious concerns about whether this will stifle creativity in the long run. But our question is this: if you turn in a paper created with an LLM, isn't the LLM's contribution still plagiarism, since it's mostly regurgitating stuff it stole from others, without regard to copyright or intellectual property rules?

I lack the expertise to settle this. But I do think that the way LLMs learn to write is not very different from the way I learned to write. I read stuff from other people and borrowed their style, their thoughts, and occasionally even their words. Even now, when I think I am being creative, I worry that I have just not read the earlier version of every single sentence I write – which is out there somewhere. I can only say that if LLMs are eventually regarded as inappropriately using material from other people, then I take back my proposal.

But here’s a more tractable objection, one I think I can answer. How should teachers respond to the fact that using an LLM will make it quicker and easier for students to do essays? Especially as the technology improves, it will be easier and easier to feed in a prompt or two and get a passable essay by doing next to nothing.

If we are going to allow the use of LLMs by students, there is one essential change we need to make in our approach to evaluating and grading student essays. We need to raise our grading standards. Raise them dramatically. (I am not the first person to suggest this.) If any student can get a passable essay by doing next to nothing, then with a little work – trying different prompts, editing and rewriting, etc. – they should be able to produce work on a whole new level.

Higher standards are not meant to be punitive. In fact, we may be entering a new era of quality writing. Just as a pocket calculator lets you do routine calculations in a way that leaves you freer to do higher math, or being able to write down a grocery list frees up your memory for more important things (like remembering your passwords!), so using an LLM to create and then refine an essay leaves you more time than ever to work on the argument and the quality of the writing. Some people worry that this leads student essayists to think less, but I would argue that, like so many technologies, by taking away some of the "grunt" work, it actually gives students more time to think. However, just as you can't grade a student who does their times tables on the calculator on their phone the same as one who does them from memory, an essay produced by a student using an LLM should be held to a much higher standard.

With new technologies, you never know exactly what will be lost and what will be gained. But if a technology is bound to change the world, it’s probably better to work with it, rather than against it.

The Cure for Imposter Syndrome Isn’t Confidence

photograph of dog wearing costume disguise

Imposter syndrome is the gnawing sense that one isn’t good enough to be in the social position one is in. How did I get this far? I’m not even that smart. When will everyone find out that I don’t know what I’m doing? Common in graduate school and high-pressure careers, these thoughts and feelings cause numerous harms. Below, I’ll outline some of the harms of imposter syndrome and their sources, focusing on academic settings. Then I’ll turn to some thoughts about what we can do to mitigate them. While imposter syndrome can be described as a lack of confidence, I’ll suggest that the best approach on an individual level has little to do with trying to be confident.

Let’s start by identifying some of the harms caused by imposter syndrome. First, imposter syndrome causes epistemic problems — problems in how we form beliefs. These issues occur at the level of the individual and the community. The person who has imposter syndrome does not believe in their own ability to succeed in the high-pressure environment they’re in.

At the individual level, these beliefs distort our understanding of our own (and others’) work. When another person’s praise is filtered through my own insecurities, my doubts prevent me from receiving their testimony about my work accurately.

Imposter syndrome also inhibits community-building in departments, workplaces, and fields. People are much less likely to share with each other and learn from each other if each is convinced that closer scrutiny will reveal that they don’t belong. This lack of community-building is its own harm, and it also contributes to epistemic problems in the community. Simply put, the community misses out on good work from those who are too insecure to engage with others fully, reducing overall learning and progress. Finally, imposter syndrome is unpleasant and distracting. It is exhausting to think that one doesn’t really measure up to one’s position.

There are structural contributions to imposter syndrome that make it difficult to eradicate on an individual basis. Feelings of inadequacy often arise in conditions of scarcity. When there are not enough full-time jobs for all the graduates in one’s field of study, the sense that one must be at the top of the class in order to achieve one’s goals is understandable. These structural considerations intersect with considerations of justice and public identity. Did I receive this opportunity because I’m a woman? Would I have gotten in if I didn’t look good for department diversity? Do I even belong here? What is often cast as an individual psychological issue is exacerbated by larger-scale issues one cannot directly control.

A harsh climate can also amplify imposter syndrome, such as a department that rewards taking cheap shots at others, or one that encourages hierarchical thinking and discourages collaboration. Departments — and the individuals they comprise — have a duty to support an environment that is welcoming rather than isolating.

This duty is not only a moral duty of care; given the epistemic problems caused by imposter syndrome, it is also an extension of an academic department's commitment to intellectual flourishing.

On an individual level, one might think that working on one’s confidence is the way out of imposter syndrome. Consider the grad student who feels they never quite deserve the praise their work receives. If they could just have greater confidence in their own abilities, they would be able to accept others’ praise as evidence of their abilities, right?

This answer is, perhaps, half right. Feeling confident in your abilities (to the extent that one can affect these feelings directly) would help reduce imposter syndrome. But confidence is not unshakeable, and it’s difficult to gain. Most of us are familiar with the trope of the person who exudes confidence to cover over deep insecurities. It doesn’t work. Burying one’s insecurities doesn’t get rid of them, because darkness is their natural habitat. The person with imposter syndrome is not well-suited to perceive their abilities accurately, and forced confidence is unlikely to succeed.

The best way out may be a counterintuitive one. I suggest that the antidote to imposter syndrome lies in cultivating the virtue of humility. Following Aristotle, we can conceive of a virtue as a character trait (or tendency to act) that lies between two extremes, both of which are vices. Humility is proper regard for oneself, avoiding both the excess of bravado (regarding oneself too highly) and the deficiency of self-doubt (regarding oneself too lowly).

Proper regard for yourself depends on a proper understanding of who you are. In a climate that pits colleagues against one another, it is easy to see yourself as an individual who needs to prove their worth, rather than as a member of a community who already belongs because of your common purpose. Likewise, it is easy to find your identity and self-worth in your professional successes, which can breed a deep insecurity and sense of precarity. Viewing yourself as a whole person (a trustworthy friend, a beloved sibling, an adventurous cook, a curious listener…) eases the anxiety surrounding your professional success.

So how does one cultivate humility? As ever, the best advice may be to practice. Surround yourself with people you can learn from, which is not difficult in most academic departments. Ask them about their work. Practice taking joy in their successes. Ask questions without regard for how they reflect on your intellect. And wrest your sense of worth from your academic accomplishments. Consider the possibility that professional accolades needn't be a load-bearing part of your sense of self.

Has the humble person given up the possibility of confidence? Not necessarily. Confidence is not the opposite of humility, but one of its natural results. As C.S. Lewis says in his essay “The Weight of Glory,” “Perfect humility dispenses with modesty.” When your work is satisfactory, humility allows you to be satisfied with it — both in the sense of giving you permission to feel satisfied and in the sense of clearing the way of self-doubt so as to make satisfaction in your own work possible. We cannot all be the smartest person in every room, but perhaps with practice and a shift in focus we can be content with that.

Credit Cards and Virtue Ethics

photograph of hand holding credit card over swipe machine

The majority of American adults use credit cards. The majority of this majority are also in credit card debt. The high interest rates and fees associated with this debt have led many in the personal finance industry to warn of the risks of putting charges on the plastic. However, Americans seem reluctant to heed the warnings, with national credit card debt recently surpassing the one trillion dollar mark for the first time in history. Given current economic realities, experts claim there is little reason to think this trend toward ever-increasing consumer debt will change anytime soon.

Given how many Americans find themselves stuck with credit card debt, it is worth considering whether or not the benefits of credit outweigh the downsides for the average consumer. Put differently: are credit cards actually good for people?

There are a multitude of ways to approach answering this question, but I propose we consider credit card usage through a virtue ethicist’s lens. Virtue ethics is one of the most historically influential approaches to ethical theorizing, and it focuses on the importance of cultivating the right habits in one’s daily life. Virtue ethicists stress that moral development is something that occurs across a lifetime, and that the morally ideal agent is one who continually steeps themselves in the right kinds of practices and cultivates the right kinds of habits.

A key feature of the virtue ethics framework is that it typically avoids positing universal rules for determining moral behavior. Instead, the approach encourages moral reflection on the part of the moral agent; it is up to the individual and their broader community to discern which actions encourage virtue and which encourage vice. While it is safe to assume that certain habits – such as violent, greedy, or dishonest ones – are indicative of vice regardless of cultural context or time period, there are a number of behaviors which fall into more of a gray area. Media consumption tendencies or wine-drinking predilections, for instance, need not signal virtue or vice. Depending on one’s motives and personal situation, such habits can either aid one’s moral development or harm it.

There are multiple features of credit card usage that make the topic morally complex. One critique is that the middle and upper classes enjoy access to credit cards with the best rewards programs, while those in lower economic classes are effectively shut out of this system. Those who are not as financially well-off might still qualify for credit cards, but the options available to them come with minimal (if any) rewards incentives.

The majority of the funding for high-end credit card rewards programs comes from processing fees, which are the fees credit card companies charge businesses to allow their customers to pay with credit. Inevitably, businesses attempt to pass these processing fees on to consumers, as they do not want the fees they owe the credit card companies to chip away at their bottom line. The way this works out in practice is that businesses simply bake the processing fees into the cost of their products. For instance, while a pizza company might determine it should charge $3 per slice to turn a profit, it bumps its prices to $3.10 per slice to pass the transaction fee costs on to customers. This might be a tolerable result for middle-class and wealthy customers who have access to the rewards programs funded by those higher costs, but those in less financially fortunate positions are simply stuck with higher bills. This economic reality gives rise to the criticism that the widespread usage of credit results in a tax on the poor.

Another morally salient feature of credit cards is the ease with which they allow you to rack up significant consumer debt. As opposed to being forced to make all of your purchases with cash or the money currently in your checking account, credit cards allow you to kick the financial can down the road. For the consumers who can afford to pay off their credit card bill each month, this feature of credit might not be particularly morally relevant. However, for those stuck in the cycle of overspending, the flexibility offered by credit cards can fuel this potential vice.

Additionally, studies show that the average individual tends to spend more when shopping with credit cards. This is not necessarily a morally significant feature of credit card usage, but it could be relevant for some in determining the role of credit in a maximally virtuous life. In recent years, many people have started turning to philosophies such as minimalism to help declutter and simplify their lives. This movement is marked by a rejection of materialism as a road to personal fulfillment, often encouraging people to buy less. Insofar as one adopts this philosophy in their own life, this might provide a practical reason to dump credit cards.

The judgment of whether or not credit cards are conducive to virtuous or vicious financial habits is likely highly dependent on the individual in question. If upon careful reflection one does not feel their usage of credit contributes to any type of communal economic injustice, nor that it encourages reckless spending in their personal life, perhaps credit cards are compatible with living a maximally virtuous life. On the other hand, if that same reflection leads one to believe their reliance on credit promotes negative consequences both on the individual and societal level, then the pursuit of virtue for that person might involve shredding their cards. Ultimately, the virtue ethics framework is a helpful one for discerning the role credit cards should play in one’s financial life.

Does the Public Get a Say on Interest Rates?

close-up photograph of Canadian bank notes covering politician's face

Canada is in the midst of a housing crisis – the average cost of a house has risen to over $700,000 while the cost of renting has also skyrocketed. The country is also facing inflation, which hit a high of eight percent at one point, and food prices that keep increasing. After years of very low interest rates, and in response to rising inflation, the Bank of Canada has raised interest rates eight times since April of 2022. While inflation has fallen, it still persists, and the Bank has not reached its target of two percent. With the cost-of-living problem and the worry that Canada might enter (or is already in) a recession, some politicians have called on the Bank not to raise interest rates further. Nevertheless, economists have expressed worry about political influence on the central bank. Is it appropriate for politicians to attempt to influence monetary policy in this way?

Something of a controversy emerged at the end of August when British Columbia Premier David Eby issued a plea to the Bank of Canada to pause any potential rate hikes. With the rate now at five percent and inflation still present, there was concern that further hikes were on the horizon. In a letter to Bank of Canada Governor Tiff Macklem, Eby urged the Bank to consider the "human impact" of increasing rates again and claimed that unnecessary increases would pose a danger to both homeowners and renters. The Government of Canada and the Bank of Canada have had an agreement since 1991 that the Bank would commit itself to an inflation target of two percent, and Macklem has been firm in insisting on hitting that target: no more and no less.

In response to this unusual public plea from a politician to the central bank, some economists are expressing frustration. UBC Okanagan associate professor Ross Hickey has called Eby's move a "reckless act," arguing that the appeal jeopardizes the impartiality, independence, and non-partisanship of the Bank:

We don’t want our central bank to respond to politicians at all, it’s independent. It’s akin to the Supreme Court of Canada, we don’t want the Supreme Court of Canada to be responding to what politicians say in letters … we want the Bank of Canada to follow its mandate to pursue keeping inflation at a target of two percent per year.

Hickey is adamant that asking a justice to change their decisions to suit an appeal on various people’s behalf would be wrong. As Hickey describes the move, “I understand you’re independent, but I still want you to do something for me, that’s gobbledygook.”

The situation became more nuanced when the Conservative Premier of Ontario, Doug Ford, issued a letter of his own five days later, similarly calling on the Bank to halt hikes that were making it difficult for people to make ends meet. Having entered a black-out period prior to rate change announcements, the Bank did not respond to either letter. Nevertheless, when the announcement did finally come, the Bank held interest rates steady. There is no evidence to suggest that these appeals had any effect on the Bank's decisions, and the federal government has placed almost the entire onus for dealing with inflation on the Bank, not wanting to get involved in the issue. But is Hickey right that it's wrong for politicians like Ford and Eby to make such an appeal?

Typically, a lot of importance is assigned to central bank independence and to maintaining inflation targets. These targets reassure people and businesses that they can make long-term financial plans. Central bank independence from political leadership aims to ensure stability by preventing political interference that favors short-term considerations. If the independence of the central bank is undermined, it could erode confidence, creating financial instability. Given this, Hickey may be right that public pleas from politicians are a bad idea.

On the other hand, so much of this argument hinges on how we understand the concepts “independence” and “risk.” First, let’s respond to Hickey’s analogy about the justice system. There are, in fact, ways in which you might indicate to an independent judge what you would like them to do and still have the court retain its independence. They are called courtrooms. Nothing about appealing to a person or making one’s preferences known inherently subverts independence. In fact, governments are often granted “intervener” status in court. If I ask a judge to not convict someone of a crime before they make their judgment, it doesn’t stop the judge from coming to their own decision. So long as I cannot override the judge or imply that I will fire them if they do not decide what I want, their independence need not be threatened. Independence does not imply that you cannot appeal to people as they make their choice, it just implies that at the end of the day, the choice is theirs to make. The same is true of the central bank and the case of these Premiers. There is no way for either Premier to exert any direct influence.

Second, Hickey's point that we don't want central banks to respond to politicians at all is self-defeating. The two-percent target only started as part of an agreement in 1991, having shifted from an initial five-percent aim, and was only standardized as a long-term goal after 1998. That target has been renewed several times, as recently as 2021, when the government gave the Bank additional leeway to consider employment as it decides how to meet its goal. Central banks, then, already do respond to the public and its elected representatives; their mandate is itself the product of political negotiation.

But while the Bank can set a target, we can still have public discussions about how best to achieve that aim. Central banks are not above reproach, and it's undemocratic to suggest that economic policy is not a public issue. Some economists, for example, have criticized the federal government for leaning so heavily on the central bank and interest rates to solve the problem. Indeed, this is the point that Eby's letter was attempting to make. The Bank's actual mandate requires a target of 1-3% inflation over "the medium term." There is no hard-and-fast rule for how fast the target must be met. Given this, it's not even obvious that Eby and Ford were asking the Bank to act against its mandate. As there are different ways to measure inflation and different assumptions involved in making inflation projections, political debate seems necessary. It should not be the case that the central bank's methodology or approach for fulfilling its mandate is beyond the public's purview.

There are, for example, reasons to question the assumptions that underpinned inflation targets in the 1990s and whether this strategy should be used to fight inflation today. Unlike in the 1990s, today's inflation is not the result of decades of rising wages. Instead, it is the product of global politics – such as the war in Ukraine – and, more significantly, supply chain issues caused by the COVID-19 pandemic. These new factors may mean we need to approach the present situation differently. Surely there is some way to adopt temporary changes to monetary policy without the sky falling. Some, for instance, have floated the idea of temporarily adopting a three-percent target. Economists, meanwhile, balk and continue to decry "political interference."

Still, there are reasons for thinking that economic policy requires political oversight. Ultimately, comments like Hickey’s and others’ exemplify a technocratic mindset that undercuts democratic discussion by relying on the assumptions of experts that remain closed off from public scrutiny.

Should the U.S. Continue Aid to Ukraine?

photograph of Ukrainian flag on military uniform

On Wednesday, September 7th, U.S. Secretary of State Antony Blinken announced a new aid package to Ukraine worth over $1 billion. The announcement came during what may be a critical juncture for the war. Ukraine's counter-offensive has been slower than initially hoped, leading U.S. officials to question Ukrainian military strategy. However, progress has been made in recent weeks – the Ukrainian military has broken through the first line of Russian defenses in the south and liberated settlements. Further, there is some reason to believe future gains may come at an accelerated rate, as intelligence officials believe the Russian military concentrated its defenses at the first line.

Regardless, continued U.S. aid to Ukraine is no longer an ironclad guarantee. Although a majority of U.S. citizens still approve of aid to Ukraine, poll numbers have shown changing attitudes in recent months. About half of Republican respondents polled feel that the U.S. is doing too much to help Ukraine and would prefer ending the war as soon as possible, even if Ukraine concedes lost territory to Russia. Further, despite a majority of Democrats and independents favoring aid to Ukraine even in a prolonged conflict, support for that position has declined somewhat. During the Republican presidential debate in August, two candidates, Vivek Ramaswamy and Ron DeSantis, stated they would end U.S. aid to Ukraine (in DeSantis's case, this was qualified with the statement that he would stop aid unless European nations "pull their weight"). Donald Trump has suggested that all aid to Ukraine should pause until U.S. agencies turn over alleged evidence that incriminates President Joseph Biden.

Given the amount of aid the U.S. has sent to Ukraine – about $76 billion at the time of this article’s writing (although Congress has approved up to $113 billion) – it is worth pausing to weigh the moral arguments for and against continuing to provide aid.

Before beginning that discussion, I want to note two things.

First, while aid to Ukraine is normally reported in dollar amounts, this is misleading. The U.S. has not sent $76 billion in cash to Kyiv. While some money has gone to financing, significant portions of the aid consist of supplies from U.S. stockpiles, training for Ukrainian soldiers, and intelligence collaboration. The value of the aid is estimated at $76 billion, but this does not mean the U.S. has spent $76 billion. Less than half of the aid has been cash, and some portion of that figure includes loans.

Second, there are arguments about aid this article will not consider. Namely, these concern the strategic or political value of aiding Ukraine. One might argue that a repulsion of the invasion would humiliate and weaken Putin's regime, thereby advancing U.S. interests. Alternatively, one could argue that if the war effort fails while the U.S. sends aid, it could damage the U.S.'s standing internationally; there would be doubts that cooperation with the U.S. is sufficient to ensure security. While these considerations matter and should enter our decision making, they are too complex to discuss in sufficient detail here.

What arguments might someone make against continuing aid to Ukraine? The most common arguments in public discourse stem from what the U.S. government ought to prioritize. For instance, during the Republican primary debate, Ramaswamy commented that the U.S. would be better off sending troops to the border with Mexico. Trump has similarly questioned how the U.S. can send aid to Ukraine but cannot prevent school shootings.

The idea here appears to be something like this. Governments have obligations which should shape their decisions. Specifically, governments have greater duties to resolve domestic issues and help their citizens before considering foreign affairs. Thus, the claim here seems to be that the U.S. should simply spend the resources it is currently allocating towards Ukraine in ways that more tangibly benefit citizens of the U.S.

There are a few reasons to be skeptical of this argument. First, without a specific policy alternative, it is not clear what those who make this argument are suggesting. For any particular program, it is always theoretically possible that a government could do something more efficient or more beneficial for its citizens. But this claim is merely theoretical without a particular proposal.

Second, this argument may pose what philosophers call a false dichotomy. This fallacy occurs when an argument limits the number of options available, so that one choice seems less desirable. False dichotomies leave listeners with an “either this or that” choice when the options are not mutually exclusive. Consider Ramaswamy’s proposal in particular. It is unclear why the U.S. could not both provide military aid to Ukraine and deploy soldiers to protect its borders.

Third, much of the aid sent to Ukraine could not be redirected to benefit U.S. citizens anyway. For instance, it is not clear how anti-tank missiles, mine-clearing equipment, or artillery could be used to solve domestic issues in the U.S.

More compelling, however, are arguments that appeal to the long-term consequences of a prolonged war in Ukraine. Some may point to more speculative consequences. Perhaps a long war in Ukraine will result in a more hostile relationship between Western nations and Russia. This is especially true given recent discussion of Ukraine joining NATO and Russian officials' attitudes towards the alliance. Further, a prolonged conflict may create more tense relationships between the U.S. and China, and could provide a diplomatic advantage to the latter. So, some might argue that it could be in the interests of long-term peace to bring an end to the war in Ukraine; the more strained these relations become, the less probable cooperation between major powers becomes.

Less speculative is the simple fact that, the longer the war drags on, the more people will die. The more battles fought, the more casualties. Additionally, given that the Ukrainian military is now using munitions like cluster bombs and the Russian military has blanketed portions of Ukraine with land mines, it is certain that the increased casualties will include civilians. Given that there is moral reason to avoid deaths, we may have moral reason to bring an end to the war in Ukraine to reduce the number of lives lost – the sooner it ends, by whatever means, the fewer people will die.

However, proponents of aid to Ukraine also appeal to the long-term consequences of current events. In particular, some argue that failing to support Ukraine's war effort will enable future aggression, specifically aggression by Moscow. The idea is something like this. The costlier the war is for Russia, the less likely its leaders will be to pursue war in the future. Further, the more support that nations like the U.S. are willing to provide to the victims of aggression, the less likely future acts of aggression presumably become. Although a prolonged war in Ukraine will lead to a greater loss of life now, one might argue that in the end it will prevent even larger losses in the future by changing the cost-benefit analysis of future would-be aggressors.

Perhaps the most compelling argument for continuing aid to Ukraine comes from just war theory – the application of moral theory to warfare. Just war theorists often distinguish between jus ad bellum – the justification of going to war – and jus in bello – the morality of the conduct of combatants once war has broken out. Typically, just war theorists agree that wars of aggression are not justified unless they are to prevent a future, more severe act of aggression. Defensive warfare, in particular defensive warfare against an unjust aggressor, is justified.

To put the matter simply, Ukraine has been unjustly invaded by the Russian military. As a result, the efforts to defend their nation and retake captured territory are morally justified. So long as we have moral reason to aid those who are responding to unjust aggression, it seems we have moral reason to aid Ukraine. For many, this is enough to justify the expenditures required to continue military aid.

Of course, one might question how far this obligation gets us. It is not clear how much we are required to aid others who have a just pursuit. Resources are finite and we cannot contribute to every cause. This point will become more pressing as the monetary figure associated with aid to Ukraine rises and as our public discourse questions the other potential efforts towards which that aid could have been directed.

As noted earlier, however, there are some reasons to question arguments of this sort when they are light on specifics. It is one thing to reassess the situation as circumstances change and find that your moral obligations now seem to pull you in a different direction. It is another thing entirely to abandon a democratic nation to conquest on the strength of mere sophistry. The severe consequences of our choices on this matter should prompt us to think carefully before committing ourselves to a particular plan of action.

Workplace Autocracy

image of businessman breaking rocks in a quarry

The popular fast-food chain Chipotle has agreed to pay over $300,000 for alleged violations of Washington, D.C. child labor laws. Similar violations of child labor laws have occurred at several Sonic locations in South Carolina – the same state where, just a few months earlier, a labor contractor was fined over $500,000 for the exploitation of migrant farm workers. In fields like finance and tech, where child labor is rare, there are other concerning employer practices, such as extensive digital surveillance of employee computers.

These stories drive home that, by and large, two groups of people decide what happens in the American workplace: the first, management; the second, government. Americans love democracy, but we leave this particular leadership preference at home when we go to work.

Workplaces are often explicitly hierarchical, and workers do not get to choose their bosses. Beyond pay, employers may regulate when we eat, what we wear, what hours we get to spend with family, and whether we have access to certain contraceptives. Worker choice usually boils down to a binary decision: do I take the job, or do I leave it? And concerns of money, family, and health insurance often put a thumb on the scale of this ostensibly personal decision.

A good boss may listen to their employees, or allow them significant independence, but this discretion is ultimately something loaned out, never truly resting with the employees. In some cases, either through their own position or through joining a collective organization like a union, a worker may have more leverage in negotiations with their employer, and thus more say in the workplace. Robust workers' rights and worker protection laws can further strengthen employee agency and negotiating position. All of these, however, preserve an autocratic understanding of the workplace – the power of management is limited only by countermanding power from the government, organized labor, or individual workers.

The philosopher Elizabeth Anderson has referred to the incredible power that employers have over the lives of their employees as "private government." And yet oddly, Anderson argues, one rarely hears the same concerns voiced about the governing power of management as about public government. We fret over government-imposed regulation on companies, but rarely over company-imposed regulations on employees. We want our political leadership accountable to those they rule over, but not company leadership. We worry about government surveillance, but not internal workplace surveillance.

What might justify this seemingly incongruous state of affairs?

One answer is to appeal to human nature. We often hear that humans tend to be selfish and hierarchical, and so we shouldn’t be surprised that our companies end up this way. But this response faces problems on several fronts. First, at best it would explain the autocratic workplace; it would not provide a moral justification for it. Philosophers generally reject the idea that something is good simply because it is natural. Second, biologists dispute this simplistic account of human nature: humans are highly responsive to social context, and our underlying tendencies seem to be just as cooperative as they are competitive. Finally, it fails to explain why we are concerned by public government overreach but not private government overreach.

Alternatively, we may argue that because the firm is ultimately owned by someone, it is only appropriate that that person have control over their employees, so long as those employees freely agreed to work there. However, with pay and healthcare on the line, and given the generally skewed power between employers and employees, only the rarest of employees get to truly negotiate terms with their employer. Many undesirable features of a workplace, such as digital surveillance, mandatory arbitration, and noncompete clauses (see Prindle Post coverage), are so widespread as to be inescapable in certain industries. Consequently, it is specious to argue that employers are entitled to treat employees as they see fit simply because those employees agreed to work there – although this general argument would hold up better under stronger workers’ rights.

The final, and perhaps the most obvious, answer is simple efficiency. A more autocratic workplace may just work much better from an economic standpoint.

There are two things to note about this line of defense. First, it needs limits in order to preserve human rights and dignity. The American government undoubtedly finds the prohibition against unreasonable searches and seizures inefficient at times, but that does not justify scrapping it. By the same token, a company would presumably not be justified in whipping its employees, no matter how much doing so increased productivity. Limits need to be set on the power companies have over their workers – as the government indeed does, to some extent, already. This does not, however, necessarily speak against workplace autocracy tout court. Second, the efficiency benefit of workplace autocracy depends on what it is being compared to. Convening an assembly of thousands of workers to make every decision is perhaps insufficiently “agile” for the modern economy, but there are many examples of businesses owned and run cooperatively by their employees. Even something as simple as allowing employees to vote out management could be worth considering. Workplaces may even benefit from more empowered and invested employees.

For Anderson herself, the current power of private government is best explained not by reasons but by history. The rise of an obsession with efficiency, a burgeoning faith in the liberatory power of free markets, and the collapse of organized labor in America have all conspired to let managerial power in the workplace grow unquestioned. Agree or disagree with her history, we can follow her in thinking that, like public government, private government is sufficiently impactful and complex to merit robust discussion of its justifications, dangers, preferred implementation, and needed checks and balances.

The Case For and Against Nuclear Disarmament

photograph of bomb shelter sign in Ukraine

When the Cold War ended thirty years ago, many hoped that the chances of nuclear war would decline, and even that nuclear weapons might be on the road to ultimate extinction. For a time, it seemed those hopes might be fulfilled. The Bulletin of the Atomic Scientists’ famous Doomsday Clock stood at six minutes to midnight – that is, global catastrophe – in 1988. The Clock was rolled back to fourteen minutes to midnight in 1995 as Russia and the United States agreed to unprecedented reductions in their strategic nuclear arsenals.

Sadly, though, these optimistic predictions have faded in recent years. Russian nuclear saber-rattling over Ukraine, the impending expiration of the one remaining nuclear arms control treaty between Russia and the United States, signs of nuclear proliferation in the Middle East, and the unprecedented challenge of managing a three-sided geopolitical competition between nuclear-armed Russia, China, and the United States have brought concerns about nuclear war back to the forefront of policymakers’ agendas. Some prominent American commentators are now calling for a major build-up of the U.S. nuclear arsenal. Today, the Clock stands at ninety seconds to midnight, closer to catastrophe than it has ever been – largely, the Bulletin claims, because of the mounting dangers of the war in Ukraine. Christopher Nolan’s film Oppenheimer has even reignited debate about the United States’ use of nuclear weapons against Japan during World War II – so far, the only instance of their use in anger. Thus, now seems like a propitious moment to go back to first principles: that is, to reconsider what ultimately should be done about nuclear weapons.

At the risk of oversimplifying, the basic question is whether or not to adopt disarmament as the ultimate goal. “Disarmament” here means both dismantling all nuclear warheads and delivery systems and eliminating stockpiles of weapons-grade fissile materials that could be used to quickly assemble a weapon. The arguments against disarmament come in two flavors: first, that nuclear weapons are effective deterrents against nuclear, chemical, biological, and conventional forms of aggression; and second, that nuclear disarmament is an unrealistic goal.

The historical case for the value of nuclear weapons as deterrents to conventional military aggression is weak. In 1950, the United States enjoyed a near-monopoly on nuclear weapons, the Soviet Union having tested its first atomic bomb only a year earlier. This did not deter North Korea from invading South Korea with the Soviet Union’s support, and it did not deter China from entering the war when U.S., South Korean, and allied forces advanced almost to the border between North Korea and China in the fall of that year. Nor did the United States’ nuclear arsenal deter North Vietnam from invading South Vietnam – a country to which the U.S. had made clear security guarantees – in the early 1970s and ultimately conquering it.

The reason that the U.S.’s nuclear “umbrella” failed to dissuade Soviet-supported regimes from engaging in aggressive conventional military action against U.S. allies during the Cold War is not difficult to understand. Ultimately, the United States’ interest in avoiding a nuclear exchange with the Soviet Union – which it might have precipitated by using nuclear weapons against one of the Soviet Union’s allies – trumped its interest in protecting its own allies from conventional aggression. Knowing this, North Korea, North Vietnam, and other Soviet-backed states were confident that the United States would not actually use its nuclear arsenal against them. Today, Russia, China, or some other revisionist power may reasonably believe that the United States would, for precisely the same reason, never actually use its nuclear weapons against them if they threatened the sovereignty of countries like Taiwan, South Korea, or Poland with conventional military force – even countries that enjoy a treaty-based U.S. security guarantee.

Nuclear weapons have also historically failed to deter states from directly aggressing against states that possess their own nuclear arsenals. It is certainly true that the Cold War never went hot in a conventional sense, and that might be chalked up to the superpowers’ nuclear arsenals. Still, the existence of a small Israeli nuclear arsenal had been widely known, though never officially acknowledged, since the late 1960s; this did not deter a coalition of Arab states from invading Israel during the 1973 Yom Kippur War. A few years earlier, the Soviet Union and China, which both possessed publicly acknowledged nuclear arsenals, engaged in a series of intense military clashes along their border. And in 1999, Pakistani forces occupied strategic positions on Indian territory in the Kashmir region, leading to a conventional military conflict between the two nuclear-armed states. Again, the reason aggressor states are not necessarily deterred by their victims’ nuclear arsenals is that using those arsenals would be costly for the victim, both in terms of possible nuclear counterstrikes by the aggressor or its allies and in terms of international reputation. This makes it unlikely that nuclear-armed states will use their arsenals against any but the gravest existential threats, whatever their official policy.

The case for nuclear weapons as deterrents against the use of other weapons of mass destruction – chemical, biological, or nuclear – seems to rest on firmer historical ground. In the nearly eighty-year history of nuclear weapons, there has never been a nuclear exchange, nor a chemical or biological attack by one state against a nuclear-armed state. The principle of mutually assured destruction, or MAD, as it is popularly known, seems to have played a role here. According to this theory, two nuclear-armed states are unlikely to attack each other with nuclear weapons because there is no entirely adequate defense against a nuclear counterstrike. Because a state contemplating a first strike could expect to suffer cataclysmic losses from such a counterstrike, it will be effectively deterred.

In the 1950s, prior to the advent of ballistic missiles, the greatest nuclear threat to each superpower was its adversary’s thousands-strong fleet of strategic bombers. Although the country that struck first could expect to destroy some of these bombers – both the United States and the Soviet Union built thousands of fighter interceptors to shoot them down – it was well understood that at least some would manage to hit their targets. And even a handful of thermonuclear-armed bombers could cause millions of casualties. The lack of an adequate defense against a nuclear counterstrike became only more apparent once the superpowers diversified their delivery systems, developing the so-called “nuclear triad” of submarines, bombers, and land-based missiles. Even today, anti-missile defense systems are notoriously unreliable, and submarines remain difficult to detect and destroy.

On the other hand, there is ample evidence that the United States and Soviet Union came perilously close to nuclear war at various points, notwithstanding the elegant logic of MAD. President John F. Kennedy estimated that the chances of a nuclear exchange during the Cuban Missile Crisis were one in three; his national security advisor, McGeorge Bundy, put the odds at one in one hundred. Either way, these are terrifying figures, given the potentially catastrophic, even civilization-ending, impact of full-blown nuclear war not just on those countries but on the entire planet.

In another famous incident in 1983, a lieutenant colonel in the Soviet Air Force named Stanislav Petrov likely single-handedly averted nuclear war when his nuclear early warning system mistakenly reported an intercontinental ballistic missile launch from the United States. Petrov chose to wait for corroborating evidence before relaying the warning up the chain of command, a decision credited with preventing a retaliatory nuclear strike at a time of heightened tension between the superpowers. The superpowers’ hair-trigger deployment of their nuclear arsenals meant that misunderstandings and the fog of (Cold) war could cause even rational actors to choose a fundamentally irrational course, and there was little time to deliberate or think twice about whether to launch. A world of MAD is not a safe world.

Moreover, the argument that nuclear weapons deter nuclear war is not by itself sufficient to justify their existence unless nuclear war would be more likely in a disarmed world or a world that attempted disarmament than in a world of nuclear deterrence. This point brings me to the arguments against nuclear disarmament based on the practical infeasibility of that goal.

In A Skeptic’s Case for Nuclear Disarmament, Michael O’Hanlon, a senior fellow at the Brookings Institution, argues that the process of disarmament raises two dangers: the danger of incentivizing proliferation and the danger of cheating. Because any serious move toward disarmament would have to be led by the United States – which possesses the world’s second-largest arsenal – its allies, like Japan, South Korea, or Poland, might feel so apprehensive about losing America’s nuclear umbrella, in light of mounting geopolitical tensions and rivalries, that they would decide to acquire their own nuclear deterrents in response. For this reason, O’Hanlon recommends deferring nuclear disarmament until after major geopolitical tensions between Russia, China, and the United States have been resolved. It could be added that nuclear disarmament, which would require extensive cooperation between these great powers, would itself probably be more feasible if they were to resolve their disputes.

One reply to this argument is that it threatens to defer disarmament into the indefinite future – in practical terms, it implies no change to the intolerable status quo. There is no guarantee that, even if the current disputes between the great powers were resolved, new ones would not arise. As we have seen, there are also reasons to doubt whether America’s nuclear arsenal really is an effective deterrent. Moreover, the fact that these disputes increase the likelihood of nuclear war is one of the best reasons for pursuing disarmament. And historically, it is not unheard of for nuclear-armed rivals to work together to reduce their arsenals, or even to talk seriously about disarmament.

O’Hanlon also argues that because of the extreme difficulty of verifying compliance with a disarmament agreement, particularly with respect to stockpiles of fissile materials, there is a serious danger that some rogue state will secretly build a nuclear weapon and use it for the purpose of nuclear blackmail. For this reason, he recommends that any disarmament treaty include a reconstitution provision pursuant to which any party could temporarily withdraw from the treaty and reconstitute its arsenal if it can show to an impartial body that it faces a serious nuclear, chemical, biological, or even conventional threat.

Such a reconstitution provision might, however, introduce further instability into the disarmament regime. Once a treaty party withdraws, its geopolitical rivals would certainly be strongly motivated to withdraw as well; indeed, one party’s withdrawal could be a sufficient reason for its rivals’ withdrawal. In effect, this would unravel the disarmament regime and take the world back to square one. Moreover, even if O’Hanlon is correct that no conventional deterrent could adequately prevent nuclear blackmail or conventional aggression by a rogue state, arguably a world characterized by a higher risk of conventional aggression and nuclear blackmail is still preferable to a world characterized by a non-trivial risk of a nuclear exchange.

Of course, there is much more to be said about the arguments for and against disarmament; in the foregoing I have only managed to scratch the surface. Some useful further resources include O’Hanlon’s book, Raimo Väyrynen and David Cortright’s Towards Nuclear Zero, George Perkovich and James M. Acton’s Abolishing Nuclear Weapons, and McGeorge Bundy’s Danger and Survival: Choices About the Bomb in the First Fifty Years. Whichever way you ultimately come down on this issue, with the nuclear order straining under new challenges, it behooves all of us to reflect seriously upon the desirability and feasibility of a renewed push for nuclear disarmament.

On the Possibility of Presidential Self-Pardoning

image of Alexander Hamilton aside crown on ten dollar bill

It will have taken 247 years to come up, but next year’s President of the United States may present a unique political problem: there’s a chance that the person elected president will have been convicted by then of one of the ninety-one crimes with which they have been charged. But the Constitution, in enumerating the powers of the presidency, states that “The President shall have Power to grant Reprieves and Pardons for Offenses against the United States, except in Cases of Impeachment” (Article II, Section 2, Clause 1). This raises the question: does the president have the power to pardon himself?

One natural way to answer that question is to stick closely to the constitutional text – the approach known as textualism. Textualism is a version of what philosophers call “positivism.” Positivists believe that the law is just a set of rules and that judges should keep as close to the letter of those rules as possible. Only where the law “runs out” – where there’s a need for a legal finding, but not enough textual evidence to support any particular one – should we settle for the subjective opinion of the judge.

The main textualist argument supporting the president pardoning himself is that nothing says he can’t. Absent any direct textual evidence against the self-pardon, there’s no basis for preventing it. But even if we stick to the text and confine ourselves to the strict meaning of the word “pardon,” can a person really pardon themself? Normally, you ask, or even beg, another for pardon. Just as you can’t “bequeath” to yourself, or “endow” yourself, or “bestow” something upon yourself, in ordinary English, “pardoning” is not something you can do to yourself.

This interpretation, then, seems to frustrate linguistic convention, but maybe there’s an argument to be made regarding original intention. Perhaps we need to go beyond the text and ask about the history and intended purpose of the presidential pardon. This is “originalism.”

Here’s some history. The Founders were concerned with limiting the president’s power and ensuring those powers couldn’t be abused in self-serving ways. There wasn’t an explicit debate about self-pardons during the Constitutional Convention, but there was a debate about whether there should be an explicit treason exemption. Edmund Randolph said pardon authority in such cases “was too great a trust” since the president “may himself be guilty,” and George Mason said that the president might “frequently pardon crimes advised by himself.” Alexander Hamilton responded that the exception for impeachment was already sufficient to prevent the abuse of the power by self-pardon, because any crime committed by the president would result in an impeachment, which a self-pardon could not overturn. In other words, no one favored self-pardoning, but Hamilton thought the impeachment clause would be enough to prevent it.

Part of our legal trouble, however, is that we simply have too much history. You can always find some historical evidence to support whatever position you like. James Madison alone left us over six hundred pages of notes from the Constitutional Convention. More recently, originalist Samuel Alito cited a thirteenth-century English common-law treatise to support his opinion in Dobbs v. Jackson. Imagine the number of pages of primary texts available on any legal question since the thirteenth century.

This originalist approach is inherently conservative, since it promotes the idea that we should interpret the Constitution exactly the way people would have 250 years ago. (In the Colonies at that time, adultery was punished by a fine and a public whipping, while sodomy was punishable by public execution.) Originalists would argue that we are still bound by that original understanding and, perhaps, that there are more specific principles of historical interpretation we should use.

But here’s another theory. One could argue that, while we must start from the text and acknowledge that the historical context is not irrelevant, our aim is to build a unified theory of “pardon power.” What is a pardon’s purpose? How does a pardon achieve its goal? Pardon power was not created to allow presidents to evade responsibility for wrongdoing, and so self-pardons that serve no other purpose should not be permitted.

What if the president is the victim of a weaponized justice system? Even so, another feature of law that should influence our theory of pardon power is that no one should be the judge of their own case. We shouldn’t allow the president to have the last word on whether or not they have committed, or should be excused from, crimes for which they have been duly convicted.

This is called the liberal, or constructive, theory of law, though it’s closer to natural law theory than to “liberalism” as that term is used on Fox or CNN. The idea is that the law and the Constitution rely on notions like equality, liberty, and rights, and that there is no letter of the law, nor historical substitute, that spells out their precise content.

Consider the Ninth Amendment. Madison, often called the father of the Constitution, initially opposed adding a Bill of Rights and, when he relented, insisted that this provision be included: “The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.” Liberals would argue that since unenumerated rights are, you know, not enumerated, there is no straightforward way to interpret what those rights might be without developing an accompanying theory of what rights people should have and what entitlements they deserve. For context, the unenumerated right most debated these days is the right to make private family and sexual choices without government interference. Is this an essential part of liberty properly understood, or are liberals using the Constitution to impose their own contentious view on others? (Ironically, the view liberals are trying to impose is the view that others should not impose their particular views on others.)

Originalists and textualists, however, claim that the liberal approach is too indeterminate and that we can’t fairly interpret the Constitution by simply using our private understandings of pardons and their purpose. But it looks, at least based on this brief survey, as if presidential self-pardons are inconsistent with most theories of law – except perhaps the strictest kind of textualism. Still, let’s hope we don’t have to find out what the courts think of the constitutionality of a presidential self-pardon.