
Should We Measure How Ethical We Are?

image of three-star rating

We like to rate each other. We rate restaurants on Yelp, drivers on Lyft, and movies on Rotten Tomatoes. And these ratings can help us make decisions. Google Reviews can guide me away from the worst coffee shops, and IMDB can tell me that the fourth Jaws movie was a flop (okay, maybe I could have figured that out on my own).

With all of this rating going on, wouldn’t it be helpful if we rated how ethical other people are? Not only do we already personally keep track of who treats us well and who doesn’t, but this would help us make better decisions. Knowing the moral scruples of others could help us make friends, choose who to date, and avoid getting ripped off.

But even though lots of ratings are useful, I don’t think that giving each other a moral score is a good idea. In fact, I think it might make us even more unethical.

There are many different kinds of ratings systems. Yelp, Lyft, and IMDB – all of these are crowd-sourced and numerical. Grades are also ratings of a sort, scoring how well a student did in a particular class. But grades are not crowd-sourced, and instead depend solely on the decision of the instructor. Other ratings systems make use of expert opinion, like judges for figure skating or the critics on Rotten Tomatoes, while yet others, like the MCAT or the LSAT, hire employees to produce exams and scoring rubrics.

One rating question that has been gaining traction recently is how to measure virtue. Psychologists interested in moral character have been working on how to score virtues like honesty and humility, using strategies ranging from self-report surveys to ratings supplied by close associates. And it would undoubtedly be helpful to know how ethical people are. Not only would this information help us to decide who to trust, it could also assist us in designing interventions to become even better. But is developing such a measure possible?

With all rating systems comes the worry of “value capture.” Value capture redirects our attention from what we originally cared about to the rating itself. Maybe I enrolled in a class because I was interested in the material, but now all I care about is getting a good grade. Or maybe I got on Instagram to share pictures with my friends, but now I’m just in it for the likes. In these cases, chasing the ratings co-opts and corrupts my original goals and desires.

But just as ratings systems come in many varieties, the severity of the issues created by value capture can differ as well. In some cases, a rating system can very well encourage the kinds of behavior we want to see. Restaurants pursuing higher Yelp reviews will likely be more attentive to their customers, and Airbnb ratings can drive down prices while increasing the quality of a weekend vacation.

But the problem of value capture makes other kinds of ratings systems almost useless. Imagine, for example, a system that rated hotels but only allowed one rating from the owner of each hotel. Because the owners have a strong financial incentive to rate their hotels highly, there would be little to learn from these scores.

And not only are some ratings systems borderline meaningless, but others can make things worse than if no rating system had been introduced in the first place. Social media “likes,” while encouraging further engagement with the platform, may be helping drive a youth mental health crisis.

So what about measuring virtue? As in our other examples, introducing a rating system can create a value capture problem. Instead of aiming at actually being a good person, we may start to aim instead at just getting a good score.

But it may be even worse. When it comes to ethics, we are already primed for a value capture issue. If we know others are watching, we are less likely to cheat, lie, and steal. This reveals that often we don’t desire to be ethical, but merely to appear like we are ethical. And this makes sense. Appearing to be a good person allows us to get all the benefits of having a great reputation with none of the costs of doing the right thing when no one is looking.

Because we want to appear virtuous, however, this makes the value capture potential of virtue scores even more potent. But how seriously should we take this concern? Does this value capture problem generally improve our lives, like the ratings for Airbnb stays, or does it make things worse, as with likes on social media?

An ethical rating system has the potential to not only sidetrack our values, but also to make us less ethical. An important part of being a good person comes down to our motivations. If someone regularly does the right thing because they care about others, then this can gradually mold them into a better person. On the other hand, if someone regularly does what they think other people want to see, this can actually make them worse, undermining their integrity and making them more deceptive.

In this way, virtue scores could actually make us worse by corrupting our motivations. Those who start out wanting to appear virtuous will only become more duplicitous, never doing the right thing for its own sake. And those who start out wanting to do the right thing will slide, slowly but surely, into doing it because they want to achieve a high score.

Because the concerns of value capture are even more acute in the ethical domain, we should think carefully about how we rate the virtues. It might be possible to measure how ethical we are, but by introducing such a measure, we might also just make things worse.

Should Corporations Be Democratic?

photograph of chairs in a boardroom

It can be easy to forget what a corporation essentially is. We may all have preconceived notions about what a corporation should do, but, ultimately (and historically), a corporation is simply a group of people engaged in some kind of enterprise that is legally recognized as a distinct entity. It is common enough to think that a corporation should primarily focus on maximizing returns for shareholders, but obviously the business world is more complicated than this narrow focus on earnings suggests. We saw this last week with Elon Musk highlighting that he is willing to prioritize political principles over profits. A similar question was raised regarding the future of OpenAI when its board of directors was forced to rehire its CEO after employees threatened to leave. This tension between profits and principles raises an interesting question: Should corporations be democratic – should employees have a direct say over how a corporation is run? Would this result in more ethical corporations?

There is no fixed notion of what a corporation is for or how it is supposed to be run. Typically, a corporation is governed by a board of directors who are elected by shareholders and who appoint a CEO to run the company on their behalf. This structure gives executives and managers a clear fiduciary responsibility to act in the interests of shareholders. It can also create tension within a corporation between management and labor: whereas the owners and managers seek to maximize profit, employees seek higher wages, greater benefits, and job security. But would it be better if the gap between workers and owners were erased, and employees had direct democratic influence on the company?

Industrial democracy is an arrangement in which employees can make decisions, share responsibility, and exercise some authority in the workplace. Giving employees a democratic say can be done in many ways. A company could include employees in a trust that controls the corporation’s shares and thus the board of directors. Another way employees can exercise democratic control over a corporation is by establishing a cooperative in which every employee owns a share. Finally, a third form of industrial democracy takes the form of “codetermination,” as practiced in Germany, where any corporation with more than 1,000 employees must give its workers representation on its board of directors.

Defenders of industrial democracy argue that the practice would ensure employees have a more meaningful influence on their workplace, giving them greater say over how corporations are run and what their wages will be. They further argue that this can lead to higher wages, less turnover, greater income equality, and greater productivity. For example, Mondragon – one of the largest worker cooperatives, headquartered in Spain – allows its employees to vote on important corporate policies. They have, for instance, enacted regulations requiring that executive wages be capped at a certain ratio relative to those of the lowest-paid workers.

Some have argued that efforts to democratize the workplace help to offset the lost influence and bargaining power that has come with the demise of unions. It’s possible that some forms of industrial democracy could serve as an alternative to unions: rather than creating a countervailing power to management, individual employees would enjoy a seat at the table. In this way, employees might become more invested in the company, and disputes between employees and management might be prevented before they arise.

Supporters of industrial democracy argue that reforming the corporate governance system is necessary because the standard model contributes to economic underperformance, poor social outcomes, and political instability. Giving employees more say can make them more enthusiastic about the company. It may also be especially desirable in a case like OpenAI’s where there is concern about how the AI industry is regulating itself. Greater employee say could even encourage greater accountability.

On the other hand, critics worry that increasing democracy in the workplace, particularly if required by law, represents an unwarranted interference with business practice. Obviously, people on different sides of the political spectrum will differ when it comes to regulations mandating such practices, but there may be a more fundamental ethical argument against granting employees representation on a board of directors: they don’t deserve it. The argument that investors should have representation follows from the idea that they are choosing to risk their own funds to finance the company. Given their investment, the board and corporate executives have the right to dictate the company’s direction.

In the abstract, critics of industrial democracy argue that it will ultimately undermine the business because employees will be incentivized to act in their own self-interest rather than in the best interests of the company. In practice, however, perhaps one reason for not adopting industrial democracy is that it doesn’t necessarily have a significant impact on business practices in the places where it is already in use.

A 2021 study from MIT found that “the evidence indicates that … codetermination is neither a panacea for all problems faced by 21st century workers, nor a destructive institution that appears obviously inferior to shareholder control. Rather, it is a moderate institution with nonexistent or small positive effects.” When employees have a stake in the company, they tend to behave more like shareholders. If their bottom lines are affected, there is no reason to think that an employee-controlled corporation will be inclined to act any more ethically, or any less cutthroat, than a conventional one.

In other words, the greatest argument against increasing workers’ say in the workplace may be that it simply won’t have the positive effects on corporate governance that might be hoped for. Even direct representation on the board does not equate to any kind of direct control over the corporation’s practices, particularly if worker representatives are in the minority. Union supporters have long argued that rebuilding union strength is a superior alternative to increasing employee representation.

It is worth pointing out, however, that even if workers don’t invest money directly in a business, they do invest their time and risk their livelihood. Perhaps this fact alone means employees deserve greater voice in the workplace. While representation on the board of directors won’t necessarily lead to dramatic change, numerous studies have shown that it can supplement union and other activities in increasing employee bargaining power. Thus, while it may not be a full solution to improving corporate governance, it might be a start.

The Case for Allowing Advocacy of Violence on Campus

photograph of University of Pennsylvania courtyard

Last week M. Elizabeth Magill, the University of Pennsylvania’s president, was forced to resign after she gave testimony before Congress concerning her university’s response to pro-Palestinian demonstrations on its campus. The controversy over her testimony has focused upon the following exchange with Republican Representative Elise Stefanik:

Stefanik: “Does calling for the genocide of Jews violate Penn’s rules or code of conduct, yes or no?”

Magill: “If the speech turns into conduct, it can be harassment.”

Stefanik: “Calling for the genocide of Jews, does that constitute bullying or harassment?”

Magill: “If it is directed and severe, pervasive, it is harassment.”

Stefanik: “So the answer is yes.”

Magill: “It is a context-dependent decision, congresswoman.”

Stefanik: “That’s your testimony today? Calling for the genocide of Jews is depending upon the context?”

After news broke that Magill had resigned, Stefanik, referring to Magill’s co-testifiers from Harvard and MIT, said in a statement: “One down. Two to go.”

As others have pointed out, what is astonishing about this episode is that Magill’s response, which (bizarrely) even some prominent law professors have criticized, was a straightforward recital of First Amendment law as applied to campus speech. The First Amendment protects from censorship advocacy of violence that falls short of verbal harassment or incitement — the latter defined as conduct intended and objectively likely to cause imminent violence. In line with this principle, Magill’s sensible position is that there are likely some situations where even advocacy of genocide does not rise to the level of harassment or incitement. But critics of Magill’s position would have us believe that the scope of permissible speech — that is, speech not subject to institutional sanction — on our elite campuses should not be as broad as it is in any public park, any periodical, or any public library in America. In this column, I will try to provide a rationale for Magill’s position.

The first thing to observe is that free speech is not only a legal, but also an ethical issue that extends far beyond the purview of First Amendment law. That’s because free speech concerns arise in a variety of contexts, from the family to the workplace — indeed, wherever one person or group has the power to sanction others for their speech. It is not my position that in all of these contexts, the scope of permissible speech should be the same. The value of free speech must be weighed against other values, and in different contexts, the results of that weighing exercise may vary. My claim is that in academic institutions, the value of free speech is unusually weighty, and this justifies maintaining a very strong presumption, in this particular context, in favor of not sanctioning speech. So, while the First Amendment is only directly implicated where the government seeks to use the coercive power of the state to censor or otherwise restrict speech, the First Amendment may serve as a useful model for how private universities like the University of Pennsylvania should handle speech.

Academic institutions are where knowledge is generated and transmitted. To do this well requires an open exchange of ideas in which participants can rigorously test arguments and evidence. Any institutional limits upon this exchange inevitably hinder this testing process because they entail that certain ideas are simply beyond the exchange’s scope. While some limits are nevertheless justifiable for the sake of encouraging maximum participation and preventing violence or other serious harm to persons, academic institutions should not draw the line at mere advocacy of violence or crime for a couple of reasons.

First, it would deprive faculty and students of the opportunity to openly and freely examine ideas that might, like it or not, have great currency in the wider society. This is particularly lamentable given that a college campus is a relatively safe and civil environment, one much more conducive to productive conversation about difficult topics than the environments in which students will find themselves after graduation. It is also, at least ideally, an environment relatively free from the kind of political pressures that could make open and free conversation difficult for faculty. For this reason, if a point of view that advocates violence or crime is without merit, the best arguments against it may be generated at a university. If it has merit — I do not presume a priori that any position advocating any kind of violence or crime is without merit — it is likewise at a university that the best arguments for the position may be uncovered.

In other words, it makes no difference that pro-violence ideas may be intellectually indefensible, or that some might wish them consigned to the dustbin of history. Academic institutions perform a public service simply by publicly demonstrating that fact. Moreover, Hannah Arendt said that in every generation, civilization is invaded by barbarians — we call them children. Her point was that no generation springs into existence armed with the truths established by its predecessors; each must relearn the hard-won lessons of the past, reflecting upon and deciding for itself what is good and bad, true and untrue. To shut down discussion of ideas we have deemed to be without merit is to tell the next generation of students that we have made up their minds for them. There could be nothing less consistent with the spirit of liberal education, with what Immanuel Kant called Enlightenment, than that.

It may be objected that advocacy of violence per se, in any context, frightens or even traumatizes would-be targets of violence, whether student, faculty, or staff, and this justifies censoring it. But my position is not that advocacy of violence is permissible at any time and place, or in any manner. There are better and worse ways for an institution to handle speech that is capable of harm. My point is simply that the solution cannot be to simply restrict any discussion of ideas supportive of violence, no matter how it is conducted. I have previously made the point that we — that is, free speech proponents, including the liberal Supreme Court of the 1960s that was responsible for so many seminal free speech decisions — do not support free speech because we think speech is harmless. By arguing for the central importance of free speech as a value, we implicitly recognize speech’s power to do evil as well as good. Our position must be that we support free speech despite the harm speech can cause, although we can and should take steps to minimize that harm.

This discussion has, so far, been somewhat abstract. Let me close by considering a concrete hypothetical that illustrates the gulf between my view and Stefanik’s. Suppose that a substantial portion of Americans come to support the involuntary, physical removal of Jews from Palestine, effectively an “ethnic cleansing.” Pundits and politicians start advocating for this position openly. On my view, one role of universities in that scenario would be to serve as a forum for discussion of this idea. Proponents of that view should be invited on campus and debated. Students and faculty, including those sympathetic to the idea, should discuss it at length. The hope would be that by exposing it to the kind of scrutiny that universities can uniquely provide, the idea would be discredited all the more swiftly and comprehensively. There is no guarantee that this would happen, of course. On the other hand, those who hold to the view that advocacy of violence has no place on campuses must insist that, in this world, universities and colleges should shun proponents of the view, insulating their students from exposure to the treacherous currents of thought coursing through the wider society. This, I submit, would be a mistake.

Anti-Natalism, Harm, and Objectionable Paternalism

My colleagues here at The Prindle Post have, over the last several weeks, been engaged in a thoughtful discussion surrounding anti-natalism, the view on which it is (at least in part) unethical to have children. Laura Siscoe raised an interesting objection to the anti-natalist view: insofar as the vast majority of people prefer existence to non-existence, and we have no reason to think that future people will be any different, anti-natalism is likely to be objectionably paternalistic. Benjamin Rossi, in his response to that article, argued against Siscoe’s position by raising a counterexample: if anti-natalism is objectionably paternalistic, then so are all forms of birth control, and insofar as Rossi takes this conclusion to be unintuitive, he thereby rejects Siscoe’s argument. In explaining why he believes Siscoe’s argument gives rise to these unintuitive conclusions, Rossi points the blame at Siscoe’s idea of what makes paternalism objectionable — a key idea in the broader debate.

Here, I don’t intend to resolve the debate, or come down authoritatively on one side of it — rather, I (cheekily) intend to complicate it further. I’d like to focus on the concepts which underlie the debate, namely those of paternalism and objectionable paternalism, and ask if we’ve truly captured the heart of the question at hand.

First, let’s be clear as to what has already been said. Siscoe defines paternalism as “interference in the freedom of another for the sake of promoting their perceived good,” and identifies paternalism as objectionable, in the case of people who will exist in the future, when it “contradicts the strongly held desires of future people.” Rossi assents to Siscoe’s definition of paternalism, but disagrees with what constitutes objectionable paternalism: he argues that paternalism is objectionable “if it interferes with a person’s exercise of their ability to act as they want, where that person is entitled to such exercise under the particular circumstances of their choice.” It is the notion of entitlement which is key in Rossi’s argument. He concludes his essay with: “If this account is correct, then to make good on the claim that choices to refrain from reproduction … are objectionable, Siscoe must establish that future people have a right to exist, and not just that they very likely would want to exist.”

I think that all three of these definitions have a grain of truth, but are flawed in important ways.

First, let’s take the definition of paternalism which Siscoe and Rossi employ: “interference in the freedom of another for the sake of promoting their perceived good.” This is a good approximation of paternalism, but it misses a vital aspect of how paternalism works in practice: paternalism is rarely about promoting someone’s good, and is much more frequently about preventing someone from coming to harm. Now, this may seem like an overly fine distinction, but consider the examples of paternalism which Siscoe and Rossi discuss: Siscoe references seatbelt and anti-drug laws, and Rossi discusses the example of a parent stopping a child from playing hopscotch along the edge of a roof. These are not examples of paternalism for the sake of increasing the subject’s well-being; they are instead fundamentally about preventing the subject’s well-being from becoming worse. Or, in more common terms: paternalism is about making sure you don’t hurt yourself, rather than making sure you maximize your potential.

Why is this distinction important? First, it shows where our concern really lies when it comes to paternalism. Imagine a child who, with exceptionally hard work and dedication, would go on to graduate from the best law school in the country and become an exceptional civil rights lawyer. Very few people believe in paternalistic policies that would force this child to do that hard work or have that dedication; however, we generally do believe in paternalistic policies that will make sure this child at least goes to school until they turn 18. This is because we fundamentally care about avoiding bad outcomes rather than continually forcing a person’s well-being upward; and this, importantly, re-orients our discussion from benefiting the subject of paternalistic intervention to avoiding harm for that subject. This re-framing reveals the second important point of the distinction: for paternalism to make sense, we need to understand what harm is, and this is notoriously difficult. It is especially complicated in the setting of future peoples, since whether we can even make sense of harm to future persons is an entirely open question, one that casts doubt on the metaphysical foundation which anti-natalists build upon.

But the waters only get muddier when we turn to the idea of objectionable paternalism. I agree with Rossi’s assessment of Siscoe’s definition: objectionable paternalism cannot merely be that which “contradicts the strongly held desires” of the subject, since seemingly justifiable examples of paternalism (e.g., a parent pulling a child out of oncoming traffic) can involve overriding strongly held desires on the part of the subject. However, Rossi’s suggested reformulation — that paternalism is only objectionable “if it interferes with a person’s exercise of their ability to act as they want, where that person is entitled to such exercise under the particular circumstances of their choice” — is also deeply incomplete. The problem here, in my appraisal, is the emphasis on entitlement. If you care about the moral value of autonomy, then this should trouble you: on Rossi’s view, you are only free from paternalistic interference in cases where you have already demonstrated your entitlement to act in the first place. This is very different from our usual understanding of autonomy-based ethics, where the burden falls on the paternalist to demonstrate the ethical grounding of their intervention.

The difference between these views can be seen clearly in a simple example. Imagine a case where a patient refuses life-saving dialysis, but her healthcare team seeks an injunction to paternalistically force her to receive it. In assessing whether or not this paternalism is objectionable, Rossi would have us ask: is the patient entitled to refuse dialysis? I would hold, however, that a different question is more important: do the physicians have a right to dialyze her against her will? These are two very different questions, and Rossi’s definition seems to put the onus on the subject to demonstrate their freedom from paternalistic intervention, rather than on the paternalist to demonstrate the ethical grounds of their intervention.

These considerations, unfortunately, obfuscate the debate between Siscoe and Rossi, but they demonstrate an important pattern in ethical debates: the words we use matter, and, more often than not, the definitions which we attach to the ideas in our debates are the true heart of the disagreement, and the true question at hand.

Who Should Get an A?

painting of crowded schoolhouse

The New York Times reports that for every 10 grades awarded to undergraduates at Yale during the last academic year, 8 were either an A or an A minus, corresponding to an increase in average GPA of nearly 0.3 points since the turn of the century, up to 3.7 from 3.42. This comes after similar patterns were uncovered at Harvard in early October, and after a series of university professors were fired over their poor grade distributions, including one at Spelman College last month and a high-profile case last year at New York University.

There are many ways to understand these popular controversies: perhaps the problem is grade inflation, or perhaps students are struggling following the pandemic. Such theories are important to discuss, and significant attention has been devoted to them since the pandemic. However, there is a further observation we might make here, one raising questions with implications spanning pre-K through graduate school: disagreements over low test scores and increasingly high grades are often disagreements over the very purpose of education, and the role it plays in our larger society. The question at the heart of the matter is deceptively simple: who should get an A?

When asked this question, two categories of answers may come to mind. The first, and perhaps most common, is: the students who understand the material exceptionally well. The entire idea of grading on a curve is based on this premise: for any given class, a group of students will understand the material exceptionally well, a group will understand it exceptionally poorly, and most will fall somewhere in the middle. Under this scheme, grading — and, by extension, education — functions to stratify students: it supposedly identifies the best and most deserving individuals. And, by assumption, someone must always be on the opposite end of the spectrum — for someone to be the best, someone else must always be the worst. This idea, for better or worse, has had an incredibly deep impact on how we, as a society, understand both grades and education more broadly. When grades function to stratify, good grades become the instrument of meritocratic advancement up the socioeconomic hierarchy.

The logic here will be familiar to any high school student, who will have heard it echoed for years. To get a good job, you need good grades in college; to get into college, you need good grades in high school; and to get the best grades in high school, you need to do after-school tutoring in elementary school, learn to read as early as possible, and so on. When good grades are a primary vehicle for socioeconomic security, education becomes a bloodsport for which training must begin as early as possible. On this view, the awarding of A’s or A-‘s to 80% of students – as Harvard, Yale, and others have done – is an unacceptable obfuscation of who has won; grades no longer function to establish the differentiation which our broader economy relies upon.

But mixed in our social consciousness is another concept of grading, built on a different idea of education. Perhaps the student who should get an A is the student who satisfied, to the fullest extent, the expectations of the course. The key difference between this notion and that described above is that, here, everyone can get an A so long as all students satisfy those expectations. Imagine, for example, you’re teaching a class on accounting, designed to introduce students to basic concepts in Microsoft Excel and prepare them for higher-level coursework which will require a basic set of skills and a common vocabulary. If this is the goal of the course, then there is no reason that every student shouldn’t get an A: if the goal is for students to develop certain skills, then it only matters that the goal is met, and the degree to which those goals are surpassed is superfluous to the purpose of the course. With realistic goals, proper teaching, and appropriate effort, every student will develop those skills, and the course will have fulfilled its educational mission. Under this scheme, grading functions to indicate competency, and education functions to cultivate it; education is not about sorting students, but rather, uplifting them as a group.

This may seem to be a radical idea of education’s purpose, but I’d argue it is more common than one might think. The idea of educational standards, at both the federal and state level, is built on this conception of education: that a graduate of high school, for example, should have certain competencies. It is also why grading entirely on a curve is uncommon — if the best student gets a 98% and the worst gets a 95%, it hardly seems appropriate to award an A to the former and an F to the latter — and, further, why educators are often blamed for their students’ poor grades: we expect professors to teach all students a set of material, not merely to succeed in stratifying their students into the best and worst.

Across education, we can see these two ideas of the educational mission — education as stratifying and education as uplifting — coming into conflict with one another. Perhaps they even co-exist within most grading systems, where a C is intended to indicate competency and an A indicates exceptional understanding. But even though we may be intuitively familiar with both, I think there’s reason to take the conflict between them seriously: these two concepts of education are not merely different but fundamentally at odds with one another. If stratifying students requires always failing some, then education cannot simultaneously function to uplift all students; and if uplifting all students requires providing second and third chances, then grades and education cannot play their fundamental role in our society’s larger economic system. This is exactly what happened in medical education when Step 1 of the United States Medical Licensing Exam transitioned from a scored system to simple pass/fail: when this change was finalized, residency program directors lost their primary metric for deciding which medical students to interview.

But we can also understand this conflict at a different level. Take the perspective of a professor. Very few educators want to be the gatekeepers of socioeconomic privilege, and most find the idea of failing students unpleasant, especially when those students make a genuine effort: most professors want to teach, to uplift their students, and to share their passion for the subject they have devoted their lives to studying. Now take the perspective of a student. In a stratifying educational system, students are actively punished for helping their classmates and tacitly encouraged to undermine other students to improve their standing in the grading hierarchy; in an uplifting system, no such incentives exist, and collaboration is encouraged.

Grading controversies are, fundamentally, a debate between these two radically different ideas about education and the social role it should serve. Should education uplift all, or determine who can go on? Should education be rigorous and challenging, or designed to accommodate the flourishing of students? These are not easy questions, but they are questions which we will continue to face until the contradiction inherent to modern education is resolved.

Smoking and Limitations on Liberty

close-up photograph of defiant smoker in sunglasses

At the end of last month, the recently elected coalition government in New Zealand decided to scrap a world-leading policy implementing an effective ban on smoking nationwide. The legislation – passed in 2022 and set to come into force in July 2024 – would have raised the smoking age annually, so that someone who was 14 years old at the time of the policy’s implementation would never be able to legally purchase a cigarette. The pioneering approach subsequently inspired the proposal of similar legislation in the U.K. amongst other jurisdictions.

The chief reason for the axing of this policy was financial. Tobacco sales generate revenue, and the incoming government of New Zealand needs this revenue in order to fund its many promised tax cuts. However, other concerns played a role, including the familiar specter of the nation becoming a “nanny state” that dictates how people should live their lives. But are these concerns sufficient to justify the overturning of a policy that would have reduced mortality rates by 22% for women and 9% for men – saving approximately 5,000 New Zealand lives per year?

At its core, this policy – like others that limit our ability to imbibe potentially harmful substances – becomes a debate about whether we should take a paternalistic or libertarian view of the role of government. Paternalists see the government in a parental light, and – as such – believe that the government is justified in restricting the liberty of its citizens where doing so is in the citizens’ best interests. Libertarians, on the other hand, see freedom as being of paramount importance, and endorse the government restricting personal freedoms in only very limited scenarios. What kind of cases might qualify? One approach the libertarian might take is to apply something like John Stuart Mill’s Harm Principle, which holds that our freedoms should only be limited where our actions will cause harm to others. Could, then, a libertarian justify an effective ban on smoking? Perhaps. The harms of secondhand smoke (i.e., the inhaling of cigarette smoke by those who do not choose to smoke) are well-known. In the U.S. alone, secondhand smoke causes nearly 34,000 premature deaths every year. This is precisely the kind of harm that might justify a limitation of our personal freedom under a libertarian approach.

But suppose that an individual manages to smoke in a manner that creates no harm whatsoever for anyone else. This isolated smoker consumes tobacco exclusively in a private, sealed environment so that the only harm caused is harm to themself. Might the state nevertheless be justified in restricting the liberty of this individual? Here, the libertarian will most likely say “no.” The paternalist, on the other hand, might endorse a liberty-restricting policy. But on what basis?

There are myriad ways in which the paternalist might justify the infringement of an individual’s liberty, even where no harm is done to others. One method comes via an application of utilitarianism (also popularized by John Stuart Mill). At its core, utilitarianism claims that the right thing to do is that which maximizes welfare – i.e., how well people’s lives go. How are we to measure this? One way (and the way which Mill himself adopts) is hedonistically. This approach involves tallying up the total pleasures and pains brought about by different options, and choosing that which maximizes pleasure (or, at the very least, minimizes pain).

What would this hedonistic utilitarian make of the isolated smoker case above? Well, chief among their considerations would be the pleasures (presumably) gained from the smoker’s enjoyment of their cigarettes. But these pleasures would then need to be weighed against the pains caused by this same activity: specifically, the detrimental effects that smoking has on one’s health. Now, some of those pains might not be immediate – and some might never occur. In this case, the calculation of pains might need to take into account the risk of those harms eventuating – discounting them according to how unlikely they are to occur. Ultimately, the question posed by the hedonistic utilitarian will be: do the pleasures of smoking outweigh the actual (and potential) harms? Where they do not, the state might find moral justification in preventing that individual from smoking, since smoking will not be the action that maximizes their welfare.

But utilitarianism isn’t the only moral theory we might apply. Immanuel Kant’s approach is decidedly different, and focuses on a respect for human dignity. His Humanity Formulation of the Categorical Imperative states that an action is right if and only if it treats persons as ends in themselves and not as a mere means to an end. Might the Kantian object to restricting the liberty of the isolated smoker? It would certainly seem that the state is using the individual as a means to an end – that being the end of promoting health. But are they using the individual as a mere means? Arguably not. If I befriend a classmate for the sole purpose of having them help me write an assignment, I am using them as a mere means. If, however, I pay a mechanic to work on my car, I am not using them as a “mere” means, since my treatment of the mechanic happens to facilitate their end of gainful employment.

The same might be true in the case of liberty-limiting legislation and smoking. While the state is using the individual as a means, they might be doing so in a way that promotes the ends of the individuals. What are those ends? We can take our pick from the many things that the smoker values in life: waking up each morning to enjoy the sunrise, engaging in physical exercise, watching their grandchildren graduate. All of these ends are threatened by their smoking, so that preventing this individual from smoking might in fact respect those ends.

Whether or not the state is right to limit its citizens’ ability to engage in harmful behavior is a conversation both complex and nuanced. It’s unfortunate, then, that in the case of New Zealand this decision seems to have been made largely on the basis of financial considerations and political pragmatism. Instead, careful attention should be paid to how we see the state: whether its role is paternalistic, and – if so – what kinds of moral principles might justify its intervention in our lives.

If Anti-Natalism Is Objectionably Paternalistic, Then So Is Family Planning

photograph of child and parent shadow on asphalt

In her recent column, Laura Siscoe argues that reproductive choices motivated by anti-natalism are objectionably paternalistic because they “seek to decide what’s best for future people (i.e., their non-existence)” and “contradict the strongly held desires of future people.” Although I think her argument is mistaken, it raises some important issues regarding our duties to future generations that are well worth exploring.

To illustrate how her argument goes awry, consider a devoutly Catholic couple who successfully use the rhythm method because they want to delay having children until they feel confident that they can provide a sufficiently stable environment for their offspring. It seems to follow from Siscoe’s account that this practice is objectionably paternalistic because it entails that some future person or people who might have come into existence had the couple not intentionally employed a form of “natural family planning” will not in fact exist. We can safely assume that this would contradict their strongly held desires, so their practice is not just paternalistic, but objectionably paternalistic.

The point of this example is that if the anti-natalist choice to refrain from having children full stop is objectionably paternalistic, then so is any choice to refrain from having children under some particular set of circumstances, when that choice is motivated by the desire to do what is best for one’s future children. Perhaps it does not follow from a choice’s being objectionably paternalistic that it is, all-things-considered, morally wrong. But Siscoe seems committed to the view that the Catholic couple should at least include in their moral calculus the interests of the potential future people whose existence is precluded by their use of the rhythm method. Moreover, in this calculus, such interests weigh heavily against practicing this or any other form of birth control. This is surely an odd result, given that even an organization as avowedly “pro-life” as the Catholic Church sanctions, and even encourages, some forms of family planning.

If we try to trace the counterintuitive implications of Siscoe’s argument back to one of its premises, however, a problem confronts us. On the one hand, these implications seem to flow from the claim that possible future people have interests that are entitled to moral consideration. Once we grant this premise, and we also acknowledge the seemingly undeniable fact that our actions affect those interests, we seem to be committed to extending moral consideration to the interests of possible future persons who are affected by any choice to refrain from reproduction. On the other hand, the claim that we have some responsibility to act with an eye toward future generations is commonplace both within and outside of moral philosophy, despite some well-known puzzles associated with it. Must we, along with Siscoe, simply bite the bullet and concede that any choice to refrain from reproduction for the sake of the unborn is objectionably paternalistic?

Perhaps we can avoid this result if we examine the notion of paternalism in greater depth. Siscoe’s gloss on “paternalism” is “interference in the freedom of another for the sake of promoting their perceived good.” Rightly, I think, she does not build into the notion of “paternalism” that it is morally objectionable. After all, there are strong arguments in favor of some degree of interference in the freedom of others for their own sake under certain circumstances — paradigmatically, parents’ interference with their children’s freedom.

So, in addition to a definition of “paternalism,” we need an account of what makes paternalism objectionable. Siscoe seems to imply that paternalism is objectionable when it “contradicts the strongly held desires” of others. But this can’t be the whole story: a small child may strongly desire to play hopscotch along the edge of a tall building’s roof, but a parent’s decision to prevent the child from doing so, while undeniably paternalistic, is not morally objectionable.

I suggest, then, that paternalism is objectionable if it interferes with a person’s exercise of their ability to act as they want, where that person is entitled to such exercise under the particular circumstances of their choice. This account would explain why the kind of paternalism that gave the notion its name — the paternalism of parents with respect to their children — may not be objectionable. There are many contexts where there are strong arguments that children should not be able to act as they want — arguments that in effect show that they have no right to act as they want in those contexts.

If this account is correct, then to make good on the claim that choices to refrain from reproduction — whether motivated by a commitment to anti-natalism or concerns that are less absolute in their implications — are objectionable, Siscoe must establish that future people have a right to exist, and not just that they very likely would want to exist. Without a legitimate claim on us of this kind, we are not bound to respect their interest in existing, and the argument against anti-natalism from paternalism falls apart.

Is Anti-Natalism Objectionably Paternalistic?

black and white photograph of parent and child holding hands while walking through tunnel

There is something about envisioning a future without children that is intuitively objectionable to many. This sentiment is portrayed in the film Children of Men, which depicts a childless world as bleak and devoid of hope. Despite this intuitive pull, the position known as anti-natalism enjoys a certain degree of popularity in both philosophical and public discourse. The basic premise behind the anti-natalist movement is that life is sufficiently bad in some way that we have a general moral duty not to bring new human life into the world. There are various grounds anti-natalists appeal to for this conclusion, including the impacts of climate change on future generations, the inevitability of life bringing about suffering, and a general pessimism about the moral trajectory of humanity.

I propose here a possible objection to anti-natalism, namely, that it is objectionably paternalistic. The moral concept of paternalism consists in the notion of interference in the freedom of another for the sake of promoting their perceived good. Commonplace examples of public paternalism include seatbelt laws and anti-drug legislation. There are, of course, also familial examples such as imposing bedtimes on children or forcing them to eat a healthy diet.

It is generally accepted that we should exercise at least a certain amount of moral and political caution when endorsing strongly paternalistic policies. There is some degree of good in human autonomy and in honoring people’s preferences, even when we believe those preferences to be ill-advised. Caution seems particularly advisable when the freedom being infringed upon by the paternalist policy carries great weight. For instance, China’s infamous one-child policy tends to strike people as more ethically objectionable than a policy limiting certain kinds of hard drug use. The reason for this is (at least partially) that the right to have children seems much more central to human expression and vital to the preservation of one’s autonomy than does the right to use severely dangerous drugs.

The topic of paternalism interfaces with debates over anti-natalism in two ways. First, anti-natalism deals with the procreative choices of individuals: some strong versions of anti-natalism seek to impose a vision of what’s best on prospective parents, whose opinions might sharply diverge from those of the anti-natalist. Second, anti-natalist stances are paternalistic in that they seek to decide what’s best for future people (i.e., their non-existence). Of course, some degree of paternalism is involved in both the choice to have and the choice not to have children, as it is parents who must determine on behalf of their children whether the life they aim to create is worth living. So, in contrast with pro-natalist positions, what makes anti-natalism potentially objectionably paternalistic?

When surveying the preferences of most people — including many of those who face tremendous suffering — the verdict seems to be that most do not wish for non-existence. Given that most (though certainly not all) people would choose their existence over non-existence if confronted with the choice, what degree of weight should this fact carry for anti-natalists? Since people’s expressed preferences seem to tilt clearly in one direction, paired with the significance of the issue at hand (i.e., existence over non-existence), it seems we might have reason to be morally cautious of anti-natalist sentiments.

One way of objecting to this conclusion is to point out that moral concerns about paternalism typically apply to people who are already living. It is less common to think about paternalism as it relates to future or potentially future people. After all, we don’t actually have access to the preferences of future people; we are merely extrapolating their preferences from those of the living. A limitation of this approach is that we could be discounting certain factors that might make this prediction inaccurate. For instance, perhaps the condition of the world gets so bad as to cause the majority of future people to opt for non-existence.

This is certainly not a possibility that we can rule out. However, we have reason to be dubious of this outcome. If anything, there are many signs that human suffering is (on the whole) measurably less than what it once was. People are being lifted out of severe poverty at increasing rates, many preventable diseases have been nearly eradicated, and the rights of certain marginalized populations are now legally enshrined. Absent an argument that we can predict with a very high level of confidence that future people’s lives will be dramatically worse than people’s lives now, it is reasonable to assume future people will continue to prefer existence over non-existence.

If we grant this empirical point, the paternalist concern starts to emerge. Anti-natalism runs the risk of being objectionably paternalistic insofar as it contradicts the strongly held desires of future people. Making the judgment of which lives are worth living places one in the morally precarious position of having to potentially undermine the preferences of those whose lives actually hang in the balance. Thus, while there is unavoidable moral risk involved in procreative decisions, it is particularly incumbent on anti-natalists to consider the weight that the expressed preferences of living people should carry when it comes to procreative choice.

No Fit Place: On Books Bound in Human Skin

closeup image of leather texture

Many museums around the world hold and display human remains. Be they in the form of the ancient Egyptian mummies housed within the British Museum, the skeleton of the notorious murderer William Burke at Edinburgh’s Anatomical Museum, or the University of Oxford’s Amazonian “shrunken heads,” the relationship between the deceased and museums is intimate. It should surprise no one, then, that museums are the site of significant ethical debate. After all, where we find death, we often find controversy.

For example, and possibly a topic with which many will be familiar, there is considerable debate surrounding how museums obtain their exhibits and whether this has any bearing on how, or even if, they should be displayed. If a body falls into a museum’s stewardship via less-than-official means – like graverobbing – should this affect whether a museum should exhibit the body? We might think the answer is yes, as such an action would be patently unethical (and likely illegal). But is this true across the board? Does time play a factor here? After all, many of the bodily remains of ancient peoples were taken from their burial places many decades, even centuries, ago. Should this matter when compared to the educational and social value such remains might produce?

My point so far is not to pick sides in such debates (although I do have a side). Instead, I want to highlight that when it comes to how museums treat remains, there is a long pedigree of philosophical and political debate, as well as legislation. This is not the case, however, when it comes to another venue in which the public can engage in academic and educational pursuits – libraries.

Now, this might not strike you as particularly relevant; after all, libraries don’t hold human remains, they hold books. But this is, strictly speaking, not true, as some libraries house books bound in human skin.

This practice, called anthropodermic bibliopegy, became fashionable in the 19th century. While it conjures up images of occult rituals and human sacrifice – à la the Necronomicon – in reality, it has closer ties to far more legitimate institutions, such as medicine and criminal punishment.

One of the most famous examples of this comes from Edinburgh in the form of a name already mentioned in this piece – William Burke. Over ten months in 1828, William Burke and his accomplice, William Hare, terrorized the streets of Edinburgh, murdering at least sixteen people. This was not simply mindless violence, however, as Burke and Hare sold their victims’ bodies for dissection in anatomy lectures. The pair were eventually caught, and, while it is unclear what happened to Hare after he turned King’s evidence, Burke was sentenced to death by hanging and, with a sense of poetic justice, his corpse was publicly dissected and his skeleton placed on display.

What is interesting here, however, is that a section of Burke’s skin was removed and used to cover a small notebook, which is now housed in Edinburgh’s Surgeons’ Hall Museums. In a macabre touch, the front of the book reads, in faded gold text: BURKE’S SKIN POCKET BOOK.

This is not the only example of such a morbid tome, however. While rare, more examples exist at several institutions, including Harvard’s Houghton Library, Philadelphia’s Mütter Museum, and Brown University’s John Hay Library, to name a few. And, while the reasons for such unusual bookbinding vary, from punishment to memorialization to collection, it is essential to remember that each book is bound in the remains of a person who, regardless of how they lived their life, was once a living, breathing individual. Thus, how we treat these items matters. These are not just books, nor are they simply bodies; they exist somewhere in between. And while these items are not exclusively found in libraries, given that they are books and that it can be hard to tell what exactly they are made from (differentiating between human and pig skin is difficult without testing), libraries must acknowledge the ethically tricky situation they find themselves in: not just as curators of books, but as potential guardians of human remains.

So, what should a library do in such a scenario? How should they respond if, after testing, an item in their collection is found to have as its binding human skin?

One option is to destroy it, as such an item might be deemed too offensive to be allowed to continue existing. This might be because it draws up unpleasant connotations linked to how it came into being or from whom the remains come (the skin is from a murderer or was obtained through bodily desecration). It may also be offensive not because of who it is made from or how it came to be, but simply because of what it is. As such, it might be that the best way forward is to destroy the piece respectfully, thus preventing anyone in the future from obtaining it or causing any further offence. Yet, while this may be the simplest option, it is far from uncontroversial. The item holds its own story, and many may learn much from it – not just in terms of how it was created, but also in what it represents and the narrative of how it came to be in the form it is. Thus, to destroy it is to abandon the knowledge and legacy that the item has accrued. The issue is further complicated if the remains used in the item were offered willingly. Is it acceptable to destroy such an item if doing so contradicts the wishes of the person from whom the book was made?

An alternative, then, might be to continue holding onto the book but to keep it away from the public – to house it in a secure room in the depths of a library and, while not forgetting about it, to let it slip from public and professional consciousness. While potentially avoiding some issues related to the offence it causes, this does little to address one’s responsibilities towards the remains, and may very well compound any such dereliction of duties. After all, if one is aware that an item in their collection contains parts of a human body, keeping it locked away in storage might be seen as ignoring the issue. Few would find this a satisfactory solution if the item in question were a severed human head. Should the fact that the remains no longer resemble a body part, and are now part of a book, really make this option any more palatable?

So, if destroying the book and hiding it away are not options (or at least not problem-free options), then what is left? Well, the final course of action considered here is to openly acknowledge what these items are and how they came to be, and to make them available for the public to come and learn about. After all, if museums can use remains for educational purposes, then why can’t libraries? Burke’s remains, for example, were put on display so that future people might reflect on his actions. It’s an easy argument to make that his skin, whether wrapped around a book or not, should serve the same purpose.

Yet, this contravenes what libraries are typically for. While they are places of learning, and their roles are ever-changing and expanding, asking them to be home to bodily remains, and to take on all the additional responsibilities that come with such a role, might be a step too far.

Ultimately, then, whether libraries should retain their morbid, part-human, part-bibliographic items is far from a simple question, as it draws in concerns about the library’s role in society, how we treat the deceased, and whether the form bodily remains take alters our responsibilities to them. And, while this is not a pressing question given how few such books exist, we are talking about our duties to the deceased. Thus, both sensitivity and decisiveness are required.