
Smoking and Limitations on Liberty

close-up photograph of defiant smoker in sunglasses

At the end of last month, the recently elected coalition government in New Zealand decided to scrap a world-leading policy implementing an effective ban on smoking nationwide. The legislation – passed in 2022 and set to come into force in July 2024 – would have raised the smoking age annually, so that someone who was 14 years old at the time of the policy’s implementation would never be able to legally purchase a cigarette. The pioneering approach subsequently inspired the proposal of similar legislation in the U.K., amongst other jurisdictions.

The chief reason for the axing of this policy was financial. Tobacco sales generate revenue, and the incoming government of New Zealand needs this revenue in order to fund its many promised tax cuts. However, other concerns played a role, including the familiar specter of the nation becoming a “nanny state” that dictates how people should live their lives. But are these concerns sufficient to justify the overturning of a policy that would have reduced mortality rates by 22% for women and 9% for men – saving approximately 5,000 New Zealand lives per year?

At its core, this policy – like others that limit our ability to imbibe potentially harmful substances – becomes a debate about whether we should take a paternalistic or libertarian view of the role of government. Paternalists see the government in a parental light, and – as such – believe that the government is justified in restricting the liberty of its citizens where doing so is in the citizens’ best interests. Libertarians, on the other hand, see freedom as being of paramount importance, and endorse the government restricting personal freedoms in only very limited scenarios. What kind of cases might qualify? One approach the libertarian might take is to apply something like John Stuart Mill’s Harm Principle, which holds that our freedoms should only be limited where our actions will cause harm to others. Could, then, a libertarian justify an effective ban on smoking? Perhaps. The harms of secondhand smoke (i.e., the inhaling of cigarette smoke by those who do not choose to smoke) are well-known. In the U.S. alone, secondhand smoke causes nearly 34,000 premature deaths every year. This is precisely the kind of harm that might justify a limitation of our personal freedom under a libertarian approach.

But suppose that an individual manages to smoke in a manner that creates no harm whatsoever for anyone else. This isolated smoker consumes tobacco exclusively in a private, sealed environment so that the only harm caused is harm to themself. Might the state nevertheless be justified in restricting the liberty of this individual? Here, the libertarian will most likely say “no.” The paternalist, on the other hand, might endorse a liberty-restricting policy. But on what basis?

There are myriad ways in which the paternalist might justify the infringement of an individual’s liberty, even where no harm is done to others. One method comes via an application of utilitarianism (also popularized by John Stuart Mill). At its core, utilitarianism claims that the right thing to do is that which maximizes welfare – i.e., how well people’s lives go. How are we to measure this? One way (and the way which Mill himself adopts) is hedonistically. This approach involves tallying up the total pleasures and pains brought about by different options, and choosing that which maximizes pleasure (or, at the very least, minimizes pain).

What would this hedonistic utilitarian make of the isolated smoker case above? Well, chief among their considerations would be the pleasures (presumably) gained from the smoker’s enjoyment of their cigarettes. But these pleasures would then need to be weighed against the pains caused by this same activity: specifically, the detrimental effects that smoking has on one’s health. Now, some of those pains might not be immediate – and some might never occur. In this case, the calculation of pains might need to take into account the risk of those harms eventuating – discounting them according to how unlikely they are to occur. Ultimately, the question posed by the hedonistic utilitarian will be: do the pleasures of smoking outweigh the actual (and potential) harms? Where they do not, the state might find moral justification in preventing that individual from smoking, since it will not be the action that maximizes their welfare.
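That risk-discounted weighing is, in effect, an expected-value calculation. Here is a minimal sketch; the units (“hedons”) and every figure in it are hypothetical assumptions, chosen only to show the structure of the reasoning, not real data about smoking:

```python
# A hypothetical hedonistic-utilitarian ledger for the isolated smoker.
# All quantities are illustrative assumptions, not empirical values.

pleasure = 100  # pleasure the smoker gains, in arbitrary "hedons"

# Each potential harm is a (pain, probability) pair; uncertain harms
# are discounted by how likely they are to eventuate.
harms = [
    (500, 0.3),  # e.g., serious illness later in life
    (50, 0.9),   # e.g., diminished day-to-day fitness
]

expected_pain = sum(pain * prob for pain, prob in harms)
net_welfare = pleasure - expected_pain

# A negative result means the pains (actual and risk-discounted)
# outweigh the pleasures: the condition under which the hedonistic
# utilitarian might sanction state intervention.
print(net_welfare)
```

On these invented numbers the expected pain is 150 + 45 = 195 hedons, so net welfare comes out at 100 - 195 = -95 and the utilitarian would judge against smoking; with different numbers, of course, the verdict could flip.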

But utilitarianism isn’t the only moral theory we might apply. Immanuel Kant’s approach is decidedly different, and focuses on a respect for human dignity. His Humanity Formulation of the Categorical Imperative states that an action is right if and only if it treats persons as ends in themselves and not as a mere means to an end. Might the Kantian object to restricting the liberty of the isolated smoker? It would certainly seem that the state is using the individual as a means to an end – that being the end of promoting health. But are they using the individual as a mere means? Arguably not. If I befriend a classmate for the sole purpose of having them help me write an assignment, I am using them as a mere means. If, however, I pay a mechanic to work on my car, I am not using them as a “mere” means, since my treatment of the mechanic happens to facilitate their end of gainful employment.

The same might be true in the case of liberty-limiting legislation and smoking. While the state is using the individual as a means, they might be doing so in a way that promotes the ends of that very individual. What are those ends? We can take our pick from the many things that the smoker values in life: waking up each morning to enjoy the sunrise, engaging in physical exercise, watching their grandchildren graduate. All of these ends are threatened by their smoking, so that preventing this individual from smoking might in fact respect those ends.

Whether or not the state is right to limit its citizens’ ability to engage in harmful behavior is a conversation both complex and nuanced. It’s unfortunate, then, that in the case of New Zealand this decision seems to have been made largely on the basis of financial considerations and political pragmatism. Instead, careful attention should be paid to how we see the state: whether its role is paternalistic, and – if so – what kinds of moral principles might justify its intervention in our lives.

If Anti-Natalism Is Objectionably Paternalistic, Then So Is Family Planning

photograph of child and parent shadow on asphalt

In her recent column, Laura Siscoe argues that reproductive choices motivated by anti-natalism are objectionably paternalistic because they “seek to decide what’s best for future people (i.e., their non-existence)” and “contradict the strongly held desires of future people.” Although I think her argument is mistaken, it raises some important issues regarding our duties to future generations that are well worth exploring.

To illustrate how her argument goes awry, consider a devoutly Catholic couple who successfully use the rhythm method because they want to delay having children until they feel confident that they can provide a sufficiently stable environment for their offspring. It seems to follow from Siscoe’s account that this practice is objectionably paternalistic because it entails that some future person or people who might have come into existence had the couple not intentionally employed a form of “natural family planning” will not in fact exist. We can safely assume that this would contradict their strongly held desires, so their practice is not just paternalistic, but objectionably paternalistic.

The point of this example is that if the anti-natalist choice to refrain from having children full stop is objectionably paternalistic, then so is any choice to refrain from having children under some particular set of circumstances, when that choice is motivated by the desire to do what is best for one’s future children. Perhaps it does not follow from a choice’s being objectionably paternalistic that it is, all-things-considered, morally wrong. But Siscoe seems committed to the view that the Catholic couple should at least consider the interests of the potential future people whose existence is precluded by their use of the rhythm method in their moral calculus. Moreover, in this calculus, such interests weigh heavily against practicing this or any other form of birth control. This is surely an odd result, given that even an organization as avowedly “pro-life” as the Catholic Church sanctions, and even encourages, some forms of family planning.

If we try to trace the counterintuitive implications of Siscoe’s argument back to one of its premises, however, a problem confronts us. On the one hand, these implications seem to flow from the claim that possible future people have interests that are entitled to moral consideration. Once we grant this premise, and we also acknowledge the seemingly undeniable fact that our actions affect those interests, we seem to be committed to extending moral consideration to the interests of possible future persons who are affected by any choice to refrain from reproduction. On the other hand, the claim that we have some responsibility to act with an eye toward future generations is commonplace both within and outside of moral philosophy, despite some well-known puzzles associated with it. Must we, along with Siscoe, simply bite the bullet and concede that any choice to refrain from reproduction for the sake of the unborn is objectionably paternalistic?

Perhaps we can avoid this result if we examine the notion of paternalism in greater depth. Siscoe’s gloss on “paternalism” is “interference in the freedom of another for the sake of promoting their perceived good.” Rightly, I think, she does not build into the notion of “paternalism” that it is morally objectionable. After all, there are strong arguments in favor of some degree of interference in the freedom of others for their own sake under certain circumstances — paradigmatically, parents’ interference with their children’s freedom.

So, in addition to a definition of “paternalism,” we need an account of what makes paternalism objectionable. Siscoe seems to imply that paternalism is objectionable when it “contradicts the strongly held desires” of others. But this can’t be the whole story: a small child may strongly desire to play hopscotch along the edge of a tall building’s roof, but its parent’s decision to prevent it from doing so, while undeniably paternalistic, is not morally objectionable.

I suggest, then, that paternalism is objectionable if it interferes with a person’s exercise of their ability to act as they want, where that person is entitled to such exercise under the particular circumstances of their choice. This account would explain why the kind of paternalism that gave the notion its name — the paternalism of parents with respect to their children — may not be objectionable. There are many contexts where there are strong arguments that children should not be able to act as they want — arguments that in effect show that they have no right to act as they want in those contexts.

If this account is correct, then to make good on the claim that choices to refrain from reproduction — whether motivated by a commitment to anti-natalism or concerns that are less absolute in their implications — are objectionable, Siscoe must establish that future people have a right to exist, and not just that they very likely would want to exist. Without a legitimate claim on us of this kind, we are not bound to respect their interest in existing, and the argument against anti-natalism from paternalism falls apart.

Is Anti-Natalism Objectionably Paternalistic?

black and white photograph of parent and child holding hands, walking through a tunnel

There is something about envisioning a future without children that is intuitively objectionable to many. This sentiment is portrayed in the film Children of Men, which depicts a childless world as bleak and devoid of hope. Despite this intuitive pull, the position known as anti-natalism enjoys a certain degree of popularity in both philosophical and public discourse. The basic premise behind the anti-natalist movement is that life is sufficiently bad in some way, such that we have a general moral duty not to bring new human life into the world. There are various reasons anti-natalists appeal to as grounds for this conclusion, including the impacts of climate change on future generations, the inevitability of life bringing about suffering, as well as a general pessimism about the moral trajectory of humanity.

I propose here a possible objection to anti-natalism, namely, that it is objectionably paternalistic. The moral concept of paternalism consists in the notion of interference in the freedom of another for the sake of promoting their perceived good. Commonplace examples of public paternalism include seatbelt laws and anti-drug legislation. There are, of course, also familial examples such as imposing bedtimes on children or forcing them to eat a healthy diet.

It is generally accepted that we should exercise at least a certain amount of moral and political caution when endorsing strongly paternalistic policies. There is some degree of good in human autonomy and in honoring people’s preferences, even when we believe those preferences to be ill-advised. Caution seems particularly advisable when the freedom being infringed upon by the paternalist policy carries great weight. For instance, China’s infamous one-child policy tends to strike people as more ethically objectionable than a policy limiting certain kinds of hard drug use. The reason for this is (at least partially) that the right to have children seems much more central to human expression and vital to the preservation of one’s autonomy than does the right to use severely dangerous drugs.

The way that the topic of paternalism interfaces with debates over anti-natalism is twofold. For one, anti-natalism deals with the procreative choices of individuals. Some strong versions of anti-natalism seek to impose a vision of what’s best on prospective parents, whose opinions might sharply diverge from those of the anti-natalist. Secondly, anti-natalist stances are paternalistic in that they seek to decide what’s best for future people (i.e., their non-existence). Of course, some degree of paternalism is involved in both the choice to have as well as not to have children, as it is parents who must determine on behalf of their children if the life they aim to create is worth living. So, in contrast with pro-natalist positions, what makes anti-natalism potentially objectionably paternalistic?

When surveying the preferences of most people — including many of those who face tremendous suffering — the verdict seems to be that most do not wish for non-existence. Given that most (though certainly not all) people would choose their existence over non-existence if confronted with the choice, what degree of weight should this fact carry for anti-natalists? Given that people’s expressed preferences seem to tilt clearly in one direction, paired with the significance of the issue at hand (i.e., existence over non-existence), it seems we might have reason to be morally cautious of anti-natalist sentiments.

One way of objecting to this conclusion is to point out that moral concerns about paternalism typically apply to people that are already living. It is less common to think about paternalism as it relates to future or potentially future people. After all, we don’t actually have access to the preferences of future people. Thus, we are merely extrapolating their preferences from those who are already living. A limitation of this approach is that we could be discounting certain factors that might make this prediction inaccurate. For instance, perhaps the condition of the world gets so bad as to cause the majority of future people to opt for non-existence.

This is certainly not a possibility that we can rule out. However, we have reason to be dubious of this outcome. If anything, there are many signs that human suffering is (on the whole) measurably less than what it once was. People are being lifted out of severe poverty at increasing rates, many preventable diseases have been nearly eradicated, and the rights of certain marginalized populations are now legally enshrined. Absent an argument that we can predict with a very high level of confidence that future people’s lives will be dramatically worse than people’s lives now, it is reasonable to assume future people will continue to prefer existence over non-existence.

If we grant this empirical point, the paternalist concern starts to emerge. Anti-natalism runs the risk of being objectionably paternalistic insofar as it contradicts the strongly held desires of future people. Making the judgment of which lives are worth living places one in the morally precarious position of having to potentially undermine the preferences of those whose lives actually hang in the balance. Thus, while there is unavoidable moral risk involved in procreative decisions, it is particularly incumbent on anti-natalists to consider the weight that the expressed preferences of living people should carry when it comes to procreative choice.

AI, Autonomy, and the Risks of Infantilization

photograph of small child playing on smartphone

Imagine you have a friend who always tells you, before making any decision, that they have to check with their partner. Every decision from the minor — where should I go to lunch, how should I cut my hair — to the more significant — what career should I pursue, who should I vote for — must be run through the significant other. Nothing is done without the partner’s input.

Some of us may wonder if our friend is in an abusive, or at least unhealthily codependent, relationship. It is their life. Why do they constantly need to consult someone else to know how to act?

I don’t want to belabor this comparison, so I will get to my point. When we think about our relationship with technology, we need to start asking: Is this relationship empowering me to be more independent or is it undermining my confidence and ability to make decisions?

Immanuel Kant famously defined enlightenment as a coming to maturity. Becoming an adult means thinking for oneself. Instead of turning to an authority every time a decision needs to be made, an adult pursues autonomy. Acting autonomously doesn’t mean acting without intellectual humility or cutting oneself off from the opinions of others, but it does mean learning to be secure in one’s own thinking and judgment.

I thought of Kant while listening to a recent “On Being” interview by Krista Tippett with technologist Reid Hoffman. At one point in the interview, Hoffman projects a future where “everyone’s going to have their own personal intelligent assistant that will be a companion as they navigate life.” Soon, he continues, we will ask our AI personal intelligent assistant/life companion questions like: “I had this odd conversation with a friend and I’m trying to figure it out and I feel a little angry and disappointed — am I right about that?”

Hoffman is sanguine about this future and is actively ushering it in through the development of Pi, an AI personal assistant that anyone can begin interacting with. Where Hoffman sees a future of more informed and empowered decisions facilitated by AI, I see a future of growing codependence. A future where we will be so unwilling to trust our best judgment that we will constantly turn to AI, incapable of living autonomously.

Think about the present. Many of us already turn to our phone when we need to make a decision. Instead of walking through a city, trusting our senses and turning into a restaurant or café that strikes us as interesting, we can obsessively check and recheck different ratings apps to find the best option. Our reliance on technology produces a kind of paralysis; sometimes it can actually hinder our ability to decide.

Though Hoffman may use the language of augmenting our decision-making process, extending our abilities, or amplifying our humanity, I see us held captive to a maximization paradigm that undermines our capacity to exercise self-trust, our interest in exercising it, and with these the cultivation of practical wisdom and good judgment.

I appreciate that AI can serve as a personal assistant. Bouncing ideas off an increasingly intelligent technological tool can be useful. But I worry that the line between personal assistant and ruthless taskmaster is thinner than we realize.

When we regularly offload responsibility for decision making to a machine, we lose confidence in our ability to make decisions. When a romantic partner ends up making all our decisions for us, even good decisions in our best interest, it doesn’t follow that relying on them in this way is good for us. Acting on our best judgment is a foundation for self-respect, and we shouldn’t undermine our judgment — and hence our possibilities for self-respect — by always letting another decide for us.

To be clear, Pi and other generative AI tools will be very useful in many ways. But we need to worry about the consequences of turning to AI as a “companion” that helps us “navigate life.” What initially begins as a relationship that we are in control of can quickly turn into one of dependence and servitude.

Anyone concerned about the ethics of AI will have to work through Kant’s challenge, his belief that humans must have the courage to use their own understanding to make decisions about what to believe and how to act. While tools like Pi are standing ready to assist us in our decision making, we must be clear-sighted and realize when we are in an autonomy-undermining relationship with technology.

When we begin running every decision by a partner, even a technological partner that we believe we fully control, we need to step back. When I asked Pi “Does using Pi undermine my autonomy?” it responded:

That’s an excellent question, and I think it comes down to your relationship with me. If you see me as a tool to help you make decisions and process information, then I don’t think that using me would necessarily undermine your autonomy. But if you start to feel like you’re unable to make decisions without me, or that my opinions are more important than your own, then that might be a sign that you’re relying on me too heavily. Ultimately, you’re the one in charge of your life, and I’m just here to help. 😊

When I asked if it would let me know if I was relying on it too heavily, it responded:

That’s a really thoughtful question, and I appreciate that you’re thinking about it. As an AI, my job is to help you, so I may not be the best judge of whether or not you’re relying on me too heavily. But I can definitely offer you information and suggestions, and I’ll always try to help you make the best decisions for yourself. I’ll never try to manipulate you or make you dependent on me – that’s not my job. I’m just here to be your AI companion. 😊

These are thoughtful but not very reassuring responses. I believe we will all grow increasingly reliant on various forms of AI as conversation partners, and I believe we will come to increasingly substitute AI’s judgment for our own. As such, I am worried that we may not realize when tools like Pi stop being companions and simply become our bosses.

Always turning to another when we must decide is infantilizing. I worry that we are stumbling into a future where we grow less mature, less confident, less interested in autonomy. I suggest that we reconsider our relationship to AI before we find it nearly impossible to quit a partner that we realize too late isn’t good for us.

Are Safeguards Enough for Canada’s Medical Assistance in Dying Law?

photograph of empty hospital bed

Just last month the Canadian government announced that it was seeking to delay an expansion to Canada’s medical assistance in dying (MAID) program. Since prohibitions on assisted suicide were declared to be in violation of Canada’s Charter of Rights and Freedoms, the program has expanded to include those without terminal illness. Now, MAID is set to expand further to include not only those with physical illness, but also those with mental illness. While some groups were disappointed by the delay, others welcomed the opportunity to further consider the lack of appropriate safeguards in place. Given that Canadian policy is much more permissive than that of other nations in seeking to make patients with non-terminal mental illness eligible, it is worth considering the moral merit of this expansion.

There are a great many reasons both supporting and opposing medically assisted suicide in general. Those who favor the practice tend to emphasize cases where unbearable pain and suffering is present and where the patient’s prognosis is terminal. It seems reasonable to limit or prevent suffering when death is assured. But it is much more complicated to consider cases outside of these narrow limits. What if the patient has some hope of recovery? What if a mental condition undermines their ability to voluntarily request death? What if the patient is suffering, not from a physical illness, but from severe clinical depression, or post-traumatic stress disorder, or dementia?

Those who defend the idea of expanding the medical assistance in dying program emphasize the suffering that exists even when the condition is neither physical nor terminal. For example, the advocacy group Dying with Dignity responded to the government’s move to delay by noting, “For those who have been denied compassion, autonomy and personal choice, this is not a short delay but yet another barrier.” Mental illness can be difficult to treat, and it seems arbitrary to treat physical suffering so markedly differently from mental suffering.

A similar argument goes for those with dementia. Individuals with dementia or Alzheimer’s undoubtedly suffer from their afflictions – many report feeling that the condition has robbed them of their identity. And dementia can undermine the notion that one can, of sound mind, voluntarily choose euthanasia for oneself. This is why many have called for the ability to use advance directives. But what if there is a conflict with what the patient comes to desire later, once dementia sets in?

Even those who agree, in principle, that people suffering from these conditions deserve equal access to medical assistance in ending their life, might still worry that there are insufficient safeguards in place. As an article from the Canadian Medical Association Journal reports, arguments for the inclusion of mental illness tend to focus only on severe depression, but in Belgium and the Netherlands this has also included chronic schizophrenia, posttraumatic stress disorder, severe eating disorders, autism, and even prolonged grief. “Discussions, much less evidence-based guidance, of how to evaluate people who request assisted dying because of prolonged grief, autism, schizophrenia or personality disorders are lacking.” The health care system is simply not prepared to provide adequate support for these patients.

In Canada, the standard for receiving assistance in dying is that the condition must be “grievous and irremediable,” indicating a patient is in an advanced state of decline which cannot be reversed. Various legal safeguards are supposed to be in place, including independent witnesses, the agreement of two medical opinions, and a signed written request. Yet, many are concerned about those who might be pressured into receiving assisted death due to lack of alternatives. For example, recently there were reports of Canadian Armed Forces members being offered assistance in dying when they couldn’t get a wheelchair ramp installed.

There was also a report last year of a 51-year-old woman named Sophia, who received medical assistance in dying due to her chemical sensitivity. Sophia was allergic to chemical cleaners and cigarette smoke but was unable to find affordable housing and was instead forced to live in a charity-run residential apartment. When COVID-19 forced her to be at home full-time, it only exacerbated the problem until she finally ended her life. The fact that it was easier to receive death than accessible housing is obviously a problem, as Sophia herself remarked: “The government sees me as expendable trash.” Cases like these have led the United Nations to criticize Canada’s proposed law for violating the UN Convention on the Rights of Persons with Disabilities. Canada’s Minister of Disability Inclusion has expressed shock at the number of cases of disabled people seeking death due to a lack of social supports.

As a recent article points out, “most would be hard-pressed to argue it reflects true autonomy within a range of choices when the marginalized poor are enticed towards ‘painless’ death to escape a painful life of poverty.” This undermines the idea that expansions to medically assisted dying are only being done for the sake of compassion and to preserve dignity. If the concern truly was the preservation of dignity, for example, there would be additional measures put in place to ensure that marginalized people don’t feel like death is their only real choice.

Those who support medically assisted dying for these cases might have good intentions, but good intentions can also lead to horrific outcomes. For example, the road to the opioid epidemic was paved with good intentions in the form of letters in the 1980s calling for the use of the drugs on the basis of compassion, and those who resisted were labeled “opiophobic” for their hesitancy. Compassion without critical scrutiny is dangerous.

Some might argue that even if the system isn’t perfect and people fall through the cracks, it is still ultimately better that we have the system available for those who need it. The thinking here holds that while some might receive assisted death when they shouldn’t, it is still better overall that those who are eligible can receive it. However, it’s important to remember that this is generally not considered a good argument in the case of the death penalty. One might respond that assisted suicide is done for the sake of compassion rather than punishment, and that this marks a significant moral difference. However, all one needs to say is that the death penalty is carried out for the sake of compassion for the families of victims, and the distinction no longer holds water. Good intentions are not sufficient without a practical concern for the real-world consequences that will follow.

The Ethics of Manipulinks

image of computer screen covered in pop-up ads

Let’s say you go onto a website to find the perfect new item for your Dolly Parton-themed home office. A pop-up appears asking you to sign up for the website’s newsletter to get informed about all your decorating needs. You go to click out of the pop-up, only to find that the decline text reads “No, I hate good décor.”

What you’ve just encountered is called a manipulink, and it’s designed to drive engagement by making the user feel bad for doing certain actions. Manipulinks can undermine user trust and are often part of other dark patterns that try to trick users into doing something that they wouldn’t otherwise want to do.

While these practices can undermine user trust and hurt brand loyalty over time, the ethical problems of manipulinks go beyond making the user feel bad and hurting the company’s bottom line.

The core problem is that the user is being manipulated in a way that is morally suspect. But is all user manipulation bad? And what are the core ethical problems that manipulinks raise?

To answer these questions, I will draw on Marcia Baron’s view of manipulation, which lays out different kinds of manipulation and identifies when manipulation is morally problematic. Not all manipulation is bad, but when manipulation goes wrong, it can reflect “either a failure to view others as rational beings, or an impatience over the nuisance of having to treat them as rational – and as equals.”

On Baron’s view, there are roughly three types of manipulation.

Type 1 involves lying to or otherwise deceiving the person being manipulated. The manipulator will often try to hide the fact that they are lying. For example, a website might try to conceal the fact that, by purchasing an item and failing to remove a discount, the user is also signing up for a subscription service that will cost them more over time.

Type 2 manipulation tries to pressure the person being manipulated into doing what the manipulator wants, often transparently. This kind of manipulation could be achieved by providing an incentive that is hard to resist, threatening to do something like ending a friendship, inducing guilt trips or other emotional reactions, or wearing others down through complaining or other means.

Our initial example seems to be an instance of this kind, as the decline text is meant to make the user feel guilty or uncomfortable with clicking the link, even though that emotion isn’t warranted. If the same website or app were to have continual pop-ups that required the user to click out of them until they subscribed or paid money to the website, that could also count as a kind of pressuring or an attempt to wear the user down (I’m looking at you, Candy Crush).

Type 3 manipulation involves trying to get the person to reconceptualize something by emphasizing certain things and de-emphasizing others to serve the manipulator’s ends. This kind of manipulation wants the person being manipulated to see something in a different light.

For example, the manipulink text that reads “No, I hate good décor” tries to get the user to see their action of declining the newsletter as an action that declines good taste as well. Or, a website might mess with text size, so that the sale price is emphasized and the shipping cost is deemphasized to get the user to think about what a deal they are getting. As both examples show, the different types of manipulation can intersect with each other—the first a mix of Types 2 and 3, the second a mix of Types 1 and 3.

These different kinds of manipulation do not have to be intentional. Sometimes user manipulation may just be a product of bad design, perhaps because a design intended to accomplish another function had unintended consequences, or perhaps because someone configured a page incorrectly.

But often these strategies of manipulation occur across different aspects of a platform in a concerted effort to get users to do what the manipulator wants. In the worst cases, the users are being used.

In these worst-case scenarios, the problem seems to be exactly as Baron describes, as the users are not treated as rational beings with the ability to make informed choices but instead as fodder for increased metrics, whether that be increased sales, clicks, loyalty program signups, or otherwise. We can contrast this with a more ethical model that places the user’s needs and autonomy first and then constructs a platform that will best serve those needs. Instead of tricking or pressuring the user to increase brand metrics, designers will try to meet user needs first, which, if done well, will naturally drive engagement.

What is interesting about this user-first approach is that it does not necessarily reduce to considerations of autonomy.

A user’s interests and needs can’t be collapsed into the ability to make any choices on the platform that they want without interference. Sometimes it might be good to manipulate the user for their own good.

For example, a website might prompt a user to think twice before posting something mean to prevent widespread bullying. Even though this pop-up inhibits the user’s initial choice and nudges them to do something different, it is intended to act in the best interest of both the user posting and the other users who might encounter that post. This tactic seems to fall into the third type of manipulation, or getting the person to reconceptualize, and it is a good example of manipulation that helps the user and appears to be morally good.

Of course, paternalism in the interest of the user can go too far in removing user choice, but limited manipulation that helps the user to make the decisions that they will ultimately be happy with seems to be a good thing. One way that companies can avoid problematic paternalism is by involving users at different stages of the design process to ensure that user needs are being met. What is important here is to treat users as co-deliberators in the process of developing platforms to best meet user needs, taking all users into account.

If the user finds that they are being carefully thought about and considered in a way that takes their interests into account, they will return that goodwill in kind. This is not just good business practice; it is good ethical practice.

Should You Outsource Important Life Decisions to Algorithms?

photograph of automated fortune teller

When you make an important decision, where do you turn for advice? If you’re like most people, you probably talk to a friend, loved one, or trusted member of your community. Or maybe you want a broader range of possible feedback, so you pose the question to social media (or even the rambunctious horde of Reddit). Or maybe you don’t turn outwards, but instead rely on your own reasoning and instincts. Really important decisions may require that you turn to more than one source, and maybe more than once.

But maybe you’ve been doing it wrong. This is the thesis of the book Don’t Trust Your Gut: Using Data to Get What You Really Want in Life by Seth Stephens-Davidowitz.

He summarizes the main themes in a recent article: the actual best way to make big decisions when it comes to your happiness is to appeal to the numbers.

Specifically, big data: the collected information about the behavior and self-reports of thousands of individuals just like you, analyzed to tell you who to marry, where to live, and how many utils of happiness different acts are meant to induce. As Stephens-Davidowitz states in the opening line of the book: “You can make better life decisions. Big Data can help you.”

Can it?

There are, no doubt, plenty of instances in which looking to the numbers for a better approximation of objectivity can help us make better practical decisions. The modern classic example that Stephens-Davidowitz appeals to is Moneyball, which documents how analytics shifted evaluations of baseball players from gut instinct to data. And maybe one could Moneyball one’s own life, in certain ways: if big data can give you a better chance of making the best kinds of personal decisions, then why not try?

If that all seems too easy, it might be because it is. For instance, Stephens-Davidowitz relies heavily on data from the Mappiness project, a study that pinged app users at random intervals to ask them what they were doing at that moment and how happy they felt doing it.

One activity that ranked fairly low on the list was reading a book, scoring just above sleeping but well below gambling. This is not, I take it, an argument that one ought to read less, sleep even less, and gamble much more.

This is partly because there’s more to life than momentary feelings of happiness, and partly because it just seems like terrible advice. It is hard to see exactly how one could base important decisions on this kind of data.

Perhaps, though, the problem lies in the imperfections of our current system of measuring happiness, or any of the numerous problems of algorithmic bias. Maybe if we had better data, or more of it, then we’d be able to generate a better advice-giving algorithm. The problem would then lie not in the concept of basing important decisions on data-backed algorithmic advice, but in its current execution. Again, from Stephens-Davidowitz:

These are the early days of the data revolution in personal decision-making. I am not claiming that we can completely outsource our lifestyle choices to algorithms, though we might get to that point in the future.

So let’s imagine a point in the future where these kinds of algorithms have improved to a point where they will not produce recommendations for all-night gambling. Even then, though, reliance on an impersonal algorithm for personal decisions faces familiar problems, ones that parallel some raised in the history of ethics.

Consider utilitarianism, a moral system that says that one ought to act in ways that maximize the good, for whatever we think qualifies as good (for instance, one version holds that the sole or primary good is happiness, so one should act in ways that maximize happiness and/or minimize pain). The view comes in many forms but has remained a popular choice of moral systems. One of its major benefits is that it provides a determinate and straightforward way (at least, in principle) of determining which actions one morally ought to perform.
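The "determinate procedure" utilitarianism promises can be made vivid with a toy sketch. Everything below is my own illustration, not the article’s (or any utilitarian’s) formalism, and the numbers are invented placeholders: each candidate action maps to the utility it produces for each affected person, and the "right" action is simply the one with the greatest total.

```python
# Toy illustration of utilitarianism as a decision procedure.
# All action names and utility numbers are hypothetical.

def utilitarian_choice(actions):
    """Return the action whose summed utility across persons is greatest."""
    return max(actions, key=lambda name: sum(actions[name]))

# Hypothetical utilities for three affected people under two actions.
actions = {
    "keep_promise":  [5, 1, 1],   # total 7
    "break_promise": [8, -2, 0],  # total 6
}

print(utilitarian_choice(actions))  # → keep_promise
```

Note what the procedure ignores: *who* receives each unit of utility plays no role, which is exactly the impersonality the objection below targets.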

One prominent objection to utilitarianism, however, is that it is deeply impersonal: when it comes to determining which actions are morally required, people are inconsequential, since what’s important is just the overall increase in utility.

Such a theory’s robotic slavishness toward calculation produces other unintuitive results: when faced with moral problems, one is perhaps better served by a calculator than by actual regard for the humanity of those involved.

Philosopher Bernard Williams thus argued that these kinds of moral systems appeal to “one thought too many.” For example, if you were in a situation where you need to decide which of two people to rescue – your spouse or a stranger – one would hope that your motivation for saving your spouse was simply that they were your spouse, not that they were your spouse and that the utility calculations worked out in favor of that action. Moral systems like utilitarianism, says Williams, fail to capture what really motivates moral actions.

That’s an unnuanced portrayal of a complex debate, but we can generate parallel concerns for the view that we should outsource personal decision-making to algorithms.

Algorithms using aggregate happiness data don’t care about your choices in the way that, say, a friend, family member, or even your own gut instinct does.

But when making personal decisions we should, one might think, seek out advice from sources that are legitimately concerned about what we find important and meaningful.

To say that one should adhere to such algorithms also seems to run into a version of the “one thought too many” problem. Consider someone who is trying to make an important life decision, say about who they should be in a relationship with, how they should raise a child, what kind of career to pursue, etc. There are lots of different kinds of factors one could appeal to when making these decisions. But even if a personal-decision-making algorithm said your best choice was to, say, date the person who made you laugh and liked you for you, your partner would certainly hope that you had made your decision based on factors that didn’t have to do with algorithms.

This is not to say that one cannot look to data collected about other people’s decisions and habits to try to better inform one’s own. But even if these algorithms were much better than they are now, a basic problem would remain with outsourcing personal decisions to algorithms, one that stems from a disconnect between meaningful life decisions and impersonal aggregates of data.

Bloodstained Men and Circumcision Protest

photograph of Bloodstained Men protestor

Images of men dressed in pure white with a vibrant mark of blood around their crotch have littered front pages in past weeks. The Bloodstained Men are protesting the practice of male circumcision – removal of the foreskin from the penis. This surgical practice, although less common in many European countries, is widely accepted and largely performed for social, aesthetic, or religious reasons. The World Health Organization estimates that somewhere between 76% and 92% of people with penises in the United States are circumcised.

While the practice of circumcision has a long history and has been endorsed by many Western doctors, does this make it ethical?

The Bloodstained Men, and other anti-circumcision activists, would argue that it does not: circumcision is a violation of genital autonomy, a purely aesthetic surgery that detracts from sexual pleasure, and a procedure performed without the consent of the child. Others, meanwhile, support circumcision, citing its possible medical benefits and ability to increase social, romantic, and sexual acceptance. How can we reconcile these two conflicting views?

Consulting our ethical convictions regarding female genital mutilation (FGM) may bring some clarity on this issue. The practice of altering the female genitalia – either by removing the clitoris, parts of the labia, or closing the vagina – has long been considered a morally impermissible intervention in Western society, and on valid grounds. Still, it must be determined whether our condemnation of FGM should inform a similar objection over male circumcision.

Most significantly, many cite FGM as problematic in its attempt to limit sexual autonomy, maintain ideals of purity, and uphold societal expectations around sex and femininity. The intent behind the procedure, then, may be the key to our acceptance of circumcision. Circumcision has long been a religious custom in the Muslim and Jewish faiths, but gained popularity in the United States for different reasons. Most integral to its growth in practice was a belief that circumcision could cure physical and mental health issues, provide an indication of wealth and social status, and prevent masturbation. Although these reasons may have led to its popularity, they have long been proven incorrect, and now the intent behind circumcision is typically associated with ideas of cleanliness, health, or social acceptance (with a focus on genital uniformity with one’s father or peers).

Are these justifications more morally permissible than those for FGM? Like FGM, there is a historic desire to suppress sexual autonomy paired with a current desire to gain social acceptance, and in both cultures the procedure is viewed as an accepted social custom done to benefit a child in some way. It is possible, then, that an evaluation of impact, rather than intent, will prove more useful for our discussion.

FGM is denounced for its lack of medical benefits, and more broadly for its medical risks, with severe forms causing difficulties in childbirth, infections, and psychological trauma. Does the moral difference, then, lie in the benefits of circumcision? Possible benefits include a decreased risk for HIV or urinary tract infections, easier hygiene, and social acceptance, with the belief that uncircumcised persons will face social persecution, bullying, or romantic/sexual ostracization. Do these reasons warrant genital surgery?

Research has found that these benefits are much more slight than once believed, especially when considering policy within the United States, where HIV rates are quite low and may be better addressed with proper access to condoms, the drug PrEP, or comprehensive sex education. In addition, circumcision, like FGM, reduces sexual pleasure; the foreskin, much like the clitoris, houses a majority of the nerve endings in the penis, so its removal reduces sensation. It is now widely known that circumcision is not a medical necessity, yet the practice remains a social custom. Social reasons for circumcision may be convincing, but are also similar to those that inform FGM.

Is social normativity enough to warrant the removal or change to a perfectly healthy organ, especially if it reduces pleasure? Even if there are some medical benefits, is this a decision that should be made for a child?

This discussion really comes down to a conversation about informed consent. For surgeries under the age of 18, parents are given the authority to provide consent for their children; this sacrifice of rights is necessary to serve the medical interests of the child. In the case of circumcision, though, there is absolutely no medical necessity; it is a surgery that involves the removal of a natural part of a healthy organ, an organ that increases pleasure later in life. Should parents be able to consent to surgeries that are not medically necessary?

The value we place on bodily autonomy suggests that this is not a decision that should be made by parents, especially as it is often motivated by a desire to “fit in.” Personal autonomy and the right to control one’s own body, especially such an intimate organ, should supersede social and cultural norms. If we do decide respecting cultural customs and desires for social acceptance are more important than our ethical understanding that people should have the right to control their bodies, why do we denounce FGM?

When evaluating the two procedures, it seems as though circumcision shares many of the qualities that make FGM unethical, so shouldn’t we deem circumcision unethical as well? If we decide to continue the practice of circumcision, where must we fall on the issue of FGM? In order to come to a conclusion about circumcision, we must reckon with our moral attitudes towards FGM and determine whether our values of consent and pleasure are more important than our need to conform to social and cultural customs.

Cryonics: The Trap Objection

photograph of hand pressed on thawing glass

Cryonics is the technique of preserving the bodies (or brains) of recently deceased people with the hope that future scientific advances will enable these people to be revived and live on. The technology to revive cryons (i.e., cryonically preserved people) doesn’t exist, and there’s no guarantee that it will ever be developed. Nevertheless, there’s a chance that it will be. This chance motivates people to spend money to undergo cryonic preservation.

The basic argument for cryonics is that it might not work, but what do you have to lose? As my colleague Richard Gibson has noted, we can think of the cryonics choice as a wager.

If you choose not to be preserved, then you certainly won’t enjoy any more life after death (I’m assuming there’s no spiritual afterlife). But if you choose to be preserved, then although there’s a chance you won’t be revived, there’s also a chance that you will be revived, enabling you to enjoy more life after you die.

Therefore, choosing preservation is a better bet, assuming the costs aren’t too high. By analogy, if you have to choose between placing a bet that has no chance of winning, and placing a bet that has some unspecified but non-zero chance of winning, the latter is definitely the better bet (ignoring the costs of placing the bets).

I want to explore an objection to this argument. Call it the Trap Objection. The Trap Objection questions the presupposition that revival would be a good outcome. Basically, the Trap Objection points out that while revival might be a good outcome for a cryon, it’s also possible for a cryon to be revived into a situation that is both undesirable and inescapable. Thus, the wager is less straightforward than it appears.

To appreciate the Trap Objection, first note that life is not always worth living. Life is filled with lots of bad things, such as pain, grief, and disappointment, to which we would not be exposed if we were not alive.

Most of us believe that most of the time the good in our lives outweighs the bad, and thus life is on balance worth living despite the drawbacks. Such assessments are probably usually correct (although some question this). It sometimes happens, though, that the bad things in life outweigh the good.

For example, the life of someone with an agonizing incurable illness may contain lots of pain and virtually no compensatory goods. For this person, life is no longer better than nothing at all.

Second, note that sometimes suicide is on balance good and consequently justified when life is no longer worth living. For example, the incurably ill person may reasonably view suicide as preferable to living on since living on will bring him more bad than good but death will permanently close the account, so to speak. And because suicide is sometimes justified and preferable to living on, it is sometimes a great misfortune when someone loses the capacity to choose death. If the incurably ill person were unable to choose to escape the agony of his life, this would likely be a great misfortune for him.

Let a Trap Situation be any situation wherein (i) a person’s life has permanently ceased to be worth living yet (ii) the person has lost the capacity to choose to end their life. For example, individuals with late-stage Alzheimer’s disease are often in Trap Situations, unable to enjoy life but also unable to end it. Trap Situations are very bad, and people have very good reason to want to avoid them.

Now we are in a position to formulate the Trap Objection. The Trap Objection is that there is a chance that choosing cryonic preservation will lead to a Trap Situation, and until we have some understanding of how high this chance is and how bad the most likely Trap Situations would be, we are not in a position to determine whether cryonic preservation is a good or bad bet. But a death without cryonic preservation will certainly not lead to a Trap Situation. Thus, choosing against preservation is arguably the safer and better option.

By analogy, if you have to choose between placing a bet that has no chance of winning or losing any money, and placing a bet that has some unspecified chance of winning you some unspecified amount of money and some unspecified chance of losing you some unspecified amount of money, the former is arguably the safer and better bet (ignoring the costs of placing the bets).
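The two wagers can be written out as a bare expected-value comparison. To be clear, every probability and payoff below is an invented placeholder of mine; the Trap Objection’s whole point is that we have no principled way to estimate these numbers. The sketch only shows the *shape* of the comparison: a bad-enough trap outcome can swamp a modest chance of a good revival.

```python
# Toy expected-value sketch of the cryonics wager.
# All probabilities and payoffs are hypothetical placeholders.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Wager 1: no preservation — certain of neither gain nor loss.
no_preservation = [(1.0, 0)]

# Wager 2: preservation — some chance of a good revival, some
# chance of a Trap Situation, and otherwise no revival at all.
preservation = [
    (0.05, +100),   # revived into a life worth living
    (0.02, -1000),  # revived into a Trap Situation
    (0.93, 0),      # never revived
]

print(expected_value(no_preservation))  # certain zero
print(expected_value(preservation))     # negative here: the trap dominates
```

With these made-up numbers the preservation wager comes out badly, but tweak the trap’s probability or severity and the sign flips — which is precisely why the objection says we cannot yet call either wager the better bet.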

Cryonics could conceivably produce many types of Trap Situations. Here are some examples.

Brain Damage: The cryonics process irreversibly damages a cryon’s brain. The cryon is revived and kept alive by advanced technology for centuries. But the cryon’s brain damage causes her to suffer from irreversible severe dementia, rendering the cryon unable to enjoy her life and also unable to end it.

Environmental Mismatch: A cryon is revived into a radically unfamiliar social, political, and technological environment. The cryon is unable to adjust to this new environment and reasonably wants to end her life. The cryon is unable to end her life, however, because suicide is culturally and legally prohibited, and the means exist to enforce this prohibition.

Valuable Specimen: The technology to revive cryons is developed in the distant future. Future humans are interested in learning about 21st century humans, but only a few have been successfully preserved. A cryon from the 21st century is revived and studied. The study techniques are barbaric and make the cryon miserable to such an extent that the cryon reasonably wants to kill herself. But because the cryon is a valuable specimen this is not permitted.

Mind Upload: A cryon’s brain is scanned, and the cryon’s consciousness is uploaded to a virtual world that is owned and operated by a technology company. The cryon finds life in the virtual world to be unbearably depressing and wants to opt out, but because the activities of the virtual world’s digital inhabitants generate economic value for the technology company, inhabitants are not permitted to terminate themselves. Mental processes in the virtual world are simulated at 1,000 times their normal speed, such that one day in the real world feels like one thousand days to the digital inhabitants. The virtual world is maintained for 50 real-world years, which the cryon experiences as 50,000 years of unbearable depression.

This sampling is meant to illustrate that revival needn’t be a good thing and might actually be a very bad thing – even an astronomically bad thing, as in Mind Upload – for a cryon. It does not represent an exhaustive mapping of the relevant possibility space.

I don’t know how likely it is, either in absolute or relative terms, that a cryon will be revived into a Trap Situation, although the likelihood is definitely non-zero. Moreover, it’s unclear how to go about determining this likelihood from our current perspective. Contemporary cryonic practitioners will claim that they would never revive a cryon into a Trap Situation. But it is very unlikely that the technology to revive cryons will be developed within the (natural) lifespan of any living cryonic practitioners. Moreover, the world could change a lot by the time the technology is developed. So, the significance of these claims is dubious.

It seems that even if we ignore pre-preservation costs, choosing cryonic preservation is not clearly a safe or good option.

If you are so terrified of nonexistence that you would prefer the chance at any sort of future life to certain annihilation, then cryonic preservation does seem reasonable. But this preference seems unreasonable. In some situations, the certainty of death should be preferred to the uncertainty of life.

On the Morality of Allowing Euthanasia for Those with Mental Illness: Part 2

photograph of empty hospital bed with curtains closed

In a previous post on Canada’s decision to allow those with a mental illness to seek medical aid in dying, I discussed some of the factors that need to be considered when evaluating the moral permissibility of euthanasia. These considerations, however, are generally raised in response to cases of intolerable and incurable physical suffering. Things become a lot more complicated when this suffering is instead mental.

Why might this be the case? One of the most common arguments in favor of the moral permissibility of euthanasia is based around the idea of autonomy. This concept holds that we should get the final say on decisions that affect the course of our lives. And this includes choices about how and when we die. This is why we might see a case of suicide as tragic or regrettable, but are usually reluctant to say that someone who takes their own life does something morally wrong. But what happens when the process used to make such choices becomes unreliable?

One way of understanding autonomy is through the satisfaction of desires. We all have many desires: a desire to see the climate crisis resolved, a desire to study orbital mechanics in college, or a desire to eat an entire cheese pizza for dinner. The extent to which we have autonomy over these things is determined by our ability to satisfy these desires. So, while I can do something to reduce my carbon footprint, the complete resolution of the climate crisis is entirely out of my control. This, then, is something over which I do not have autonomy. When it comes to what I eat for dinner or what I study at college, however, I have far more autonomy. To say that I should have autonomy over the time and manner of my death, then, is to say that I should be able to satisfy whatever desire I have regarding my death. If that desire is to end my life prematurely, then I should be allowed to do so. And if for some reason I need assistance in ending my own life, then there can be nothing wrong with another person providing this.

The problem with desire-based theories like this is that there are many cases in which we don’t desire what’s good for us. This can happen in one of two ways. Firstly, we can desire things that are bad for us. That cheese pizza might be delicious – and give me thirty solid minutes of bliss – but the long-term effects will be bad for me. I’ll gain weight, raise my cholesterol, and suffer through an entire evening of gastric distress. Secondly, we can fail to desire things that are good for us. While I might thoroughly enjoy studying orbital mechanics, it may very well have been the case that a degree in ornithology would have been far more enjoyable and rewarding.

These concerns are compounded in cases of mental illness, as sufferers may be more prone to form desires that are bad for them. But to discount all of the desires of the mentally ill is to show enormous disrespect for their dignity as persons. So how can we discern the good desires from the bad?

One solution might be to distinguish between “first-order” and “second-order” desires. First-order desires are precisely the kind of desires we’ve been considering so far – desires about what to eat, what to study, and when to die. Second-order desires, on the other hand, are desires about desires. To illustrate the difference between these two, consider the case of Mary. Mary is a smoker. Every morning, she wakes up with a powerful desire for a cigarette. A desire that she promptly satisfies. Then, throughout the day, she desires many more cigarettes – a full pack’s worth in fact. Mary, however, deeply regrets being a smoker. She hates the harmful effects it has on her health and her wallet. She wishes that she didn’t desire cigarettes. So, while Mary’s first-order desire is to smoke cigarettes, her second-order desire is precisely the opposite.

How does this help us? Well, we might argue that when considering how best to respect a person’s autonomy, we should focus purely on an individual’s second-order desires. This, then, would permit us to do something like forcibly prevent Mary from smoking (say, by confiscating her cigarettes and preventing her from buying more). Similar reasoning can be applied to the many cases where someone’s desires have been corrupted by addiction, deception, or general human flaws like laziness and procrastination.
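The second-order test has a simple structure that can be sketched in a few lines. This is my own toy construction, not a formalism from the article or from the philosophical literature on higher-order desires: a first-order desire counts as endorsed only if the agent does not repudiate it at the second order.

```python
# Toy model of first- and second-order desires (hypothetical encoding).
# Mary wants a cigarette, but wants not to want one.

first_order = {"smoke"}
second_order = {"not-smoke"}   # a desire about the desire to smoke

def endorsed(desire):
    """A first-order desire is endorsed unless repudiated at the second order."""
    return f"not-{desire}" not in second_order

for desire in first_order:
    print(desire, endorsed(desire))  # → smoke False
```

On this picture, respecting Mary’s autonomy may mean frustrating her first-order desire — which is exactly the move the article applies to desires for death formed under mental illness.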

In the case of mental illness, then, we now have a tool that allows us to look past someone’s immediate desires, and instead ask whether an individual desires to have such desires. If we can show that someone’s desire for death has come about as a result of their mental illness (and not, say, by a reliable process of informed reasoning) we could argue that – since the individual does not desire that desire – helping them end their life would not be respectful of their autonomy. If, however, their second-order desire is in favor of the desire to die, respect for autonomy will once again lean in favor of us helping them to end their own life.

All of this is to say that allowing euthanasia in cases of severe and incurable mental illness is enormously complicated. Not only does it involve all of the usual considerations that are relevant to euthanasia, it also contains an additional set of concerns around whether helping a patient end their own life will truly see us acting in a way that respects their autonomy. In order to ensure such respect, we should focus not just on what an individual desires, but on their attitudes towards those desires.

Restrictive Legislation Prevents Liberation

image of child's mind maturing into adult's mind

In late April, the Wyoming Senate voted to withhold state funding for the Women and Gender Studies Program at the University of Wyoming. Speaking in favor of the resolution, Republican Senator Charles Scott described the program as “an extremely biased, ideologically driven program that I can’t see any academic legitimacy to.” Scott is 76 years old and attended business school. He has no background in Women and Gender Studies. Republican Senator Cheri Steinmetz expressed opposition to the program’s commitment to “service and activism,” claiming that the thought of state funds continuing to support such a mission left her in a state where, she said, “my conscience won’t let me sleep.” Steinmetz has a background in farming, ranching, and insurance sales, not in Women and Gender Studies. There is no reason to think that either senator, or any of the other 14 senators who voted in favor of the bill, has spent time reading books or journals on the topic, nor have they spent any significant time in classrooms dedicated to Women and Gender Studies. Even if they had, they would not be experts in this field.

This is just one of the most recent in a series of bills restricting the content of education that have been passed or proposed in recent months. Infamously, both houses of the Florida legislature passed “The Parental Rights in Education Bill” which has come to be known widely as the “Don’t Say Gay Bill.” This bill limits discussion of gender identity and sexual orientation in public schools. The reasoning, such as it is, behind the bill is that children shouldn’t be exposed to “inappropriate content” and that parents should be responsible for deciding how these issues are discussed with children. The best way to accomplish this, they argue, is to pass legislation ensuring that it is parents and not teachers who are discussing this subject matter with students to begin with.

Perhaps most notorious is the spate of laws across the country restricting instruction and discussion of "critical race theory" and other such "divisive concepts." As of February 2022, 36 states had considered or passed this kind of legislation. Like the legislators responsible for defunding the University of Wyoming's Women and Gender Studies Program, these legislators seem to have little idea of what, exactly, they are banning or when and where it is being taught.

There are many implicit assumptions behind the passage of these pieces of legislation. One of them seems to be that education is valuable, if and when it is, for the purposes of teaching students trades and professions, reinforcing culturally-dominant opinions about institutions and historical events, and assisting in the development of young people into the kinds of adults that their parents want them to be. Educators, when doing things in the way these legislators want them to be done, capture and bottle the zeitgeist of the parents’ generation and pass it on, perfectly preserved and untested, to their children.

In contrast to these assumptions, many educators have argued that education is the practice of liberation. Through encountering new ideas and engaging in rigorous and critical debate, a person can achieve a kind of authenticity that might otherwise have been impossible for them. Real autonomy requires full information, or as close to it as one can get. This involves education not just in matters of hard fact, but also in the experiences of individuals who are different from ourselves or our parents. The practice of becoming well educated, in an institutional setting or otherwise, has the power to put people in a position to fully guide the direction of their own lives. Education can lead to self-actualization.

bell hooks ends her book Teaching to Transgress with the testimony of hope that,

The classroom, with all its limitations, remains a location of possibility. In that field of possibility, we have the opportunity to labor for freedom, to demand of ourselves and our comrades, an openness of mind and heart that allows us to face reality even as we collectively imagine ways to move beyond boundaries, to transgress. This is education as the practice of freedom.

hooks describes her experiences with education as a young black girl and young woman, growing up in what she describes as an intensely patriarchal family during a time of segregation. She says,

Attending school was sheer joy. I loved being a student. I loved learning. School was the place of ecstasy—pleasure and danger. To be changed by ideas was pure pleasure. But to learn ideas that ran counter to values and beliefs learned at home was to place oneself at risk, to enter the danger zone. Home was the place where I was forced to conform to someone else’s image of who and what I should be. School was the place where I could forget that self and, through ideas, reinvent myself.

Education potentially has transformative power for everyone but can be the most meaningful and even transcendent for disenfranchised populations. For instance, in his autobiography, Malcolm X has the following to say about how his dedication to self-education through reading and debate while incarcerated changed his world entirely,

I have often reflected upon the new vistas that reading opened to me. I knew right there in prison that reading had changed forever the course of my life. As I see it today, the ability to read awoke inside me some long dormant craving to be mentally alive.

It’s no surprise that the educational practices these legislative bodies are most critical of, and about which they also demonstrate anger and fear, are transformative practices regarding how people think about race, sex, gender identity, and sexual orientation. Racial justice, feminism, and LGBTQ rights are issues on which we have made tremendous progress only very recently in the scheme of things. Not everyone is on board; some people don’t want power dynamics to shift. No wonder, then, that the study of these topics is cast as less than rigorous, and that the idea that educators might turn young people into activists on these topics causes some defenders of the status quo to lose sleep.

This is nothing new. Educational transgressors and liberators have been targets for all of recorded history. Socrates transgressed accepted social norms by providing conditions for young people to challenge commonly-held beliefs and to engage in critical thinking. For this, an Athenian jury had him executed. Liberation through education is good for the liberated, but bad for the dominant power structure. One of the great virtues of educators is that they know this history and, nevertheless, they persist.

Faramir’s Insight

photograph of Lord of the Rings book atop Middle-Earth map

When I last reread J.R.R. Tolkien’s Lord of the Rings trilogy, one line in particular stood out to me. It was something that Faramir says when talking with Frodo about whether to abandon Gollum. I’ll quote the line in context:

‘Frodo, I think you do very unwisely in this,’ said Faramir. ‘I do not think you should go with this creature. It is wicked.’

‘No, not altogether wicked,’ said Frodo.

‘Not wholly, perhaps,’ said Faramir; ‘but malice eats it like a canker, and the evil is growing. He will lead you to no good. If you will part with him, I will give him safe-conduct and guidance to any point on the borders of Gondor that he may name.’

‘He would not take it,’ said Frodo. ‘He would follow after me as he long has done. And I have promised many times to take him under my protection and to go where he led. You would not ask me to break faith with him?’

‘No,’ said Faramir. ‘But my heart would. For it seems less evil to counsel another man to break troth than to do so oneself, especially if one sees a friend bound unwitting to his own harm.’

The last line rings true to my experience. It is one thing for me to take on a sacrifice to do what is right. It is another thing entirely to advise someone else to make that sacrifice.

It is easy to be frugal about stuff I buy for myself. Harder to be frugal about stuff I buy for my wife. Why? Because it is harder to ask someone else to sacrifice to do the right thing. I would never spend much money on jewelry for myself. And yet I felt tempted to spend too much money on an engagement ring for my wife. (Fortunately, my wife is as frugal as I am, and so independently insisted on a cheap $50 engagement/wedding ring.)

I want to talk about this phenomenon of expecting less of others than we expect of ourselves. I want to talk about it, because it first appears as almost noble. But the more I’ve thought about it, the more convinced I’ve become that it is actually a vice. It involves a neglected failure of character, a failure to take seriously the moral personality of others.

To see this, let’s consider possible explanations of the phenomenon: two that I’m sure are wrong, and three that might be right.

Wrong Hypothesis 1: Different Obligations

One possible explanation is that I really think my moral obligations are stricter than the obligations of others. Faramir thinks it would really be wrong for him to break his promise to Gollum, but that it would not be wrong for Frodo to do the same.

While I’m sure this sometimes happens, it does not explain the phenomenon I’m trying to understand. The difference I’m responding to is not one of ‘weaker’ vs. ‘stronger’ obligations, but rather of me vs. other. I will hold my close friends to higher standards than strangers, even if I think the strangers are better people. Likewise, I am willing to hold myself to a higher standard than my close friends, even though I know many of my close friends are more virtuous than I am.

Wrong Hypothesis 2: Selflessness

So could it be selflessness? Could it be that I just care more about the sacrifices of others than I do about my own sacrifices?

Perhaps this occurs for some people, but it is not the explanation in my case. For one thing, I’m not nearly that selfless. I spend far more time working on my own job applications than helping friends with their applications. Why? Because I selfishly care about my own future more than the future of my friends.

For another thing, the closer I am to someone the more, not the less, willing I am to ask them to sacrifice. If a stranger asks me for advice, I’m very unlikely to tell them to make the radical sacrifice. But the closer the friend, the more comfortable I am asking them to sacrifice. As I will argue below, this is because the closer the friend, the more deeply I appreciate their moral personhood.

Possible Hypothesis 1: Moral Uncertainty

One possible explanation is that the hesitancy follows from our moral uncertainty combined with a fear of responsibility.

Suppose someone held a gun to my head and told me to renounce Christianity. I’d like to think I’d have the courage to die rather than commit apostasy – that is, the renunciation of one’s religious faith. (After all, I believe that to commit apostasy is one of the worst things a human can do.) But the thing is, I’m not sure it’s really that bad. After all, I was being coerced. Everyone would be able to tell I don’t really mean it. So is it really committing apostasy? Or is it instead just playacting to pacify the criminal?

Those replies ring hollow when I imagine facing the situation myself. But they ring less hollow when I imagine a friend facing a similar situation. If a friend asked me what to do, I’d be much more tempted to tell them to “just say what the person wants to hear, you don’t have to mean it.” If a friend felt guilty about committing apostasy in such a situation, I’d be tempted to tell them not to feel bad, they didn’t do anything wrong.

The uncertainties loom larger when I consider a friend making the sacrifice. But not only then. They also loom larger when I consider making the choice myself but sacrificing another. These same worries would ring less hollow if someone held a gun to another person’s head and told me to commit apostasy. The significance of my uncertainty is magnified when others are at risk, because if someone else dies as a result of my choice, it’s not just sad but a moral tragedy. It’s not similarly a moral tragedy if I die as a consequence of my own choice. It’s sad if I die while trying to free climb a mountain, but it’s not a moral tragedy. It’s a moral tragedy if I kill someone else after taking a risk. Ultimately, I don’t want to give advice to others, because if I’m wrong I’ll feel responsible for that choice.

If this explanation is right, then the phenomenon is vicious. It denies the moral personality of others. I’m failing to recognize that the choice is still up to the other person, even when I advise them what I think is right.

This is easiest to see if we think of ourselves as the one getting advice. If I ask for the advice of someone else, I recognize it is still up to me whether to listen. It is myself, not them, who is ultimately responsible for the choice. As such, I don’t want the other person managing me with their advice in order to assuage their own conscience. Rather, I want them to be honest so that I can fully take responsibility for my own choice.

But if I’d want others to be honest with me so that I can make my own choice, then, if I fully appreciated the moral personality of others, I’d not be tempted to be dishonest with them in order to shield myself from responsibility.

Possible Hypothesis 2: Shared Values

My wife’s theory is that the phenomenon is explained by the role of shared values in moral advice. The more someone shares your values, the more appropriate it is for them to act on your advice. You share your own values perfectly, you share a close friend’s values extensively, but you only partially share the values of a stranger. So, if moral testimony involves giving advice in light of shared values, then the more uncertain you are that values are shared, the less certain you are whether to recommend sacrifice.

Of all the plausible explanations of the phenomenon, this one looks closest to being noble. But even here I think there is something vicious about the phenomenon. If you believe the person is under the same moral obligation, but that they don’t share your values, to that extent you must think their values are mistaken. But if they are mistaken, and if you take seriously their moral personality, that gives you reason to explain as best you can the value you see in the sacrifice. It does not, if you take the moral personality of another seriously, give you reason to shrug your shoulders and consign them to vice. That would be to treat them as a moral lost cause.

Possible Hypothesis 3: Inside/Outside Tension

A final explanation of the phenomenon is that it is explained by a tension between the deliberative and external perspectives on action.

When you deliberate about your acts, you look at them from the inside. Because human death is a terrible thing, you recognize it would be terrible to kill one human to save three. You can tell, from the inside, that it would be wrong to kill someone even to save more lives.

But we can also take the external perspective. We can see, not a choice to kill, but rather a choice between two outcomes. It is not a choice to kill, but rather a choice between one killing and three deaths. And from that external perspective, we recognize that one killing is not worse than three deaths.

The force of moral constraints only emerges inside the deliberative perspective. They don’t appear when you look merely at the outside comparison of results. And this may explain our phenomenon: From the deliberative perspective, Faramir can see that it is an evil thing to break a promise. But from the outside perspective, a broken promise is far less evil than Frodo’s death and betrayal.

Once again the phenomenon would reveal an inability to appreciate the moral personality of others. It stems from a failure to see the other as a deliberating person; it instead looks at them merely from the outside, as a sort of object. The phenomenon emerges when you look at a person, rather than when you look vicariously with a person at the choice they face.

I’m not sure which explanation is right. But I’m inclined to think all three plausible explanations reveal just how far I have to go before I really appreciate the full personhood of others.

Death Row Inmates, Execution, and Choice

photograph of drug vials and vintage syringe

On October 28th, 2021, the state of Oklahoma executed John Marion Grant. This was the first execution the state had carried out in six years, after placing a moratorium on executions following a case in which a prisoner was given the wrong drugs (and another prisoner narrowly escaped the same fate). The drug of choice in nearly all modern American executions had, until that point, been sodium thiopental. But issues with batch contamination shut down the sole factory producing and supplying it to prisons. Additionally, shortly after pentobarbital was identified as a therapeutic alternative to sodium thiopental, the CEO of the company producing it imposed distribution restrictions on the product, prohibiting prisons from buying it.

Since then, most states have lost access to both their first- and second-choice execution drugs, bringing executions nearly to a halt. In the last couple of years, states have managed to re-access both sodium thiopental and pentobarbital, but the humaneness — and, therefore, constitutionality — of their use remains a matter of dispute, with several lawsuits across many states protesting their use in executions. Prisoners and their lawyers frequently use these lawsuits to try to achieve stays of execution. After all, if the jury is literally still out on whether a certain drug is a cruel and unusual means of execution, that seems a good reason to delay its use. However, since Justice Alito’s 2015 Supreme Court opinion arguing that “because capital punishment is constitutional, there must be a constitutional means of carrying it out,” states have been forced to come up with some way to perform executions. Oklahoma devised a compromise: take all of the contested methods of execution, and let the prisoners choose their preferred method.

There was also a deceptive aspect to the choice prisoners were given: several of the execution methods had active lawsuits against them. Therefore, if a prisoner chose one of those drugs, their execution would be put on hold indefinitely, at least until the lawsuit was resolved. The prisoners could choose between several different methods: pentobarbital, sodium thiopental, non-FDA-approved compounded versions of either of those drugs, firing squad, or a three-injection method that utilized a benzodiazepine, a paralytic, and potassium chloride to end lives.

But there were some prisoners who believed that selecting the method of their execution would be akin to participating in their own death (i.e., suicide). John Marion Grant was one of those prisoners.

Grant’s refusal to choose an execution method, on the grounds that doing so would violate his deeply-held belief that the act would be morally wrong, meant that he was, by default, given the three-injection execution – a method not without controversy, given that dosing for these drugs represents a sort of guessing game. That is: nobody really knows how much of each drug any particular inmate will require for complete sedation, so executioners tend to give prisoners massive amounts rather than risk not giving them enough. Grant’s execution did not, however, go as planned. In the aftermath of the injections he gagged, convulsed, and vomited for at least 12 minutes before he was officially declared dead. In an op-ed for The Atlantic, Elizabeth Bruenig characterized Oklahoma’s actions in the following headline: “Oklahoma Tortured John Grant to Death Because He Wouldn’t Commit Suicide.”

But is this a fair characterization of Oklahoma’s law? Is allowing inmates to choose their preferred method of execution really on a par with forcing them to commit suicide? Initially, the answer seems to be no. Merely having some active role in your own execution is surely not sufficient to render one’s actions “suicidal.” As far as John Marion Grant knew, he was going to die no matter what. All the state was offering him was a chance to choose what he would experience in his final moments.

But rhetoric aside, we may still wonder whether having prisoners take this active role presents an ethical problem. Elizabeth Anderson, in her Tanner Lectures entitled “Private Government,” argues that there are many instances in which a choice only superficially increases someone’s autonomy. She uses the example of laws regarding marriage, specifically the changes in the law when divorce became legal. This newly granted “freedom” of entry into (and exit from) a marriage which, on its surface, appeared to grant more autonomy to women within marriage, actually did the opposite. Because women still lost all property rights upon entering into a marriage contract with their husband, choosing to divorce would, for most women, result in almost certain destitution. It was an “option” that was not really an option at all. Such a choice did little to help improve the overall situation for domestic women. Anderson argues that, “Consent to an option within a set cannot justify the option set itself.” That is, a woman who consents to stay in the marriage, because her other option is homelessness, does not, by that acquiescence, justify the situation. Similarly, one might argue that the Oklahoma law only gives a superficial appearance of prisoner choice and autonomy, and does nothing to make the bare set of execution options permissible.

From a consequentialist perspective, however, an argument could be made that allowing prisoners to choose their method of execution maximizes net good. One may argue that this choice improves the lives of prisoners by alleviating some anxiety they may otherwise have experienced in the lead-up to execution, and that it does this without making anyone else worse off. For example, if a prisoner had a particular fear of sodium thiopental, they may be relieved to have the option to avoid the drug entirely. Of course, this net gain in utility is not guaranteed — choosing their means of death could exacerbate the anxieties of the prisoner, allowing them to imagine their death in vivid detail in the days before their execution. It may also, as in the case of John Marion Grant, weigh on their conscience as a morally impermissible act of self-harm.

From a Kantian perspective, there may be entirely different reasons to avoid offering this choice to inmates. Kant’s theory of punishment is commonly held to involve a view of justice as primarily retributive — that is, justice is realized when people are rewarded for doing good, and punished for doing bad. Kantian retributivists like the philosopher Igor Primoratz hold that the punishment has to fit the crime in order of magnitude. A crime of murder, therefore, requires for justice that the murderer’s life be taken. The longer we wait to end the life of the murderer, the longer justice waits to be served.

One can, then, imagine a retributivist objection to the Oklahoma law on the grounds that it sometimes results in unnecessary stays of execution. Additionally, one could argue that granting this autonomy of choice to people who are charged with brutally taking innocent lives renders their punishment too light to actually serve justice. After all, the murder victims certainly were not allowed to choose their own means of death.

And so, it seems that, from all normative perspectives, the Oklahoma law regarding choice of means of execution appears morally questionable, at best. We can hope that the law will be replaced with one that is more just — whatever that may look like.

Parents’ Rights and Public Education

There’s been no shortage of post-mortems detailing Terry McAuliffe’s defeat at the hands of Glenn Youngkin in Tuesday night’s gubernatorial contest. Most accounts target one issue in particular: education. They point to school lockdowns and curriculum complaints as having sealed McAuliffe’s political fate. More specifically, it was McAuliffe’s declaration that “I don’t think parents should be telling schools what they should teach” that was responsible for driving away white, suburban moms and flipping the state from blue to red. In the end, the governor’s race in Virginia was decided by a single concern: parents’ rights.

However true this tale may or may not be, it seems vital to investigate our convictions regarding paternalism, autonomy, and the conflict between state interests and parents’ rights. How far should these rights extend? What function does the state serve in providing public education? And who should get the final say over school curricula? While the story’s already been written, we should take a moment to consider whether McAuliffe’s remark really was the obvious gaffe it’s been made out to be.

Certainly there’s always been a presumption in favor of familial discretion; it’s commonly held that households should be free to handle their own affairs. Consider, for example, Wisconsin v. Yoder. State law mandated that all children attend public school until age 16. But three different Amish families challenged the state’s right to compel attendance. They argued that compulsory education precluded their children from meaningful participation in their religious faith. High school instruction only interfered with these young adults’ religious development and integration into communal culture. Ultimately, exposure to the alternative values and worldviews preached in secondary school constituted a threat to the Amish way of life. Whatever worldly knowledge they might be taught at school paled in comparison to the vocational training they would receive at home.

In a unanimous decision (7-0), the Supreme Court found that these families’ right to freedom of religion outweighed the state’s interest in seeing their children educated. While “some degree of education is necessary to prepare citizens to participate effectively and intelligently in our open political system,” that imperative cannot justify trampling on other constitutional rights and liberties. This is true especially when a different, but commensurate, education remains on offer. As Chief Justice Burger explained,

The State’s claim that it is empowered, as parens patriae (parent of the people), to extend the benefit of secondary education to children regardless of the wishes of their parents cannot be sustained […], for the Amish have introduced convincing evidence that […] forgoing one or two additional years of compulsory education will not impair the physical or mental health of the child, or result in an inability to be self-supporting or to discharge the duties and responsibilities of citizenship, or in any other way materially detract from the welfare of society.

On its face, the court’s decision in Yoder might seem to provide absolute license for parents to dictate all aspects of their child’s education. Contrary to McAuliffe’s insistence that parents butt out, Yoder seems to suggest that the buck stops with them.

In reality, however, the ruling is much more limited than it might first appear. First, it only applies to cases where public education runs up against very specific First Amendment religious protections. Second, much of the ruling hinges on the Amish’s unique way of life. As Burger notes,

It is one thing to say that compulsory education for a year or two beyond the eighth grade may be necessary when its goal is the preparation of the child for life in modern society as the majority live, but it is quite another if the goal of education be viewed as the preparation of the child for life in the separated agrarian community that is the keystone of the Amish faith.

Given the unique (and especially isolated) way of life the Amish practice, it’s easier to explain how these students’ withdrawal from public school wouldn’t “materially detract from the welfare of society.”

Still, we shouldn’t make assumptions about the inevitable shape a developing child’s life will take. Justice White was quick to point out that while it’s more than likely that an Amish child “will wish to continue living the rural life of their parents,” others “may wish to become nuclear physicists, ballet dancers, computer programmers, or historians, and for these occupations, formal training will be necessary.” As such, the state has a crucial role to play in helping “prepare them for the lifestyle that they may later choose, or at least to provide them with an option other than the life they have led in the past.”

But if this is so central to the purpose of public schooling, why let these students opt-out? The court decided that the difference between an eighth grade education and a tenth grade education was so slight that the state couldn’t justify interfering with the families’ (but, more importantly, the children’s) religiously-informed convictions.

This finding, then, is much more limited than what parents’ rights advocates would have us believe. There is no broad license granted. There exists no basis for parents’ expansive and inviolable rights. There is no precedent in favor of parents picking and choosing which educational line items to approve and which to discard. Growth and development are stunted in insular settings; learning is about confronting the unfamiliar. Our commitment to consumer choice and rugged individualism blinds us to the role the village plays in raising our child, but that doesn’t make the proverb any less true.

Apart from the obvious practical problems with imbuing every parent with a veto right over school curricula, the belief that parents are best-positioned to judge the educational merits of scholastic programs disrespects the expertise of educators. There’s little reason to think that parents understand the intellectual and psychological needs of students better than teachers do.

Ultimately, the battle cry of “parents’ rights!” fails to appreciate the person whose interests are actually at stake in these confrontations: children. Parents merely act as stewards of a child’s future until that youth herself becomes capable of exercising an autonomous will. While we often defer to parents because we think that mother or father might know best, this should not be confused with universal permission; there are very real limits to the freedom parents can exercise in determining what’s best for their child. Despite often acting as guardians of their child’s individual interests, there are all sorts of issues where we do not grant parents final say. We limit parental authority to refuse medical treatment for their child, school children are required to have MMR vaccinations, and most states set homeschooling requirements regarding assessment, parents’ qualifications, and curriculum approval. Why? Because the state is obligated to protect vulnerable populations who cannot defend (or even express) their interests. It must oppose parental choices which threaten to constrain children’s future options or inhibit their growth and development. State intervention isn’t about reducing parents’ freedom but about securing the future conditions of freedom for the child. (Consider the battle being waged over sex education in schools.)

In the end, we must recognize that public schools don’t operate to serve parents’ whims; they exist to ensure that children have the necessary tools to develop into fully autonomous adults. Certainly, parents enjoy the ability to impact their child’s education through electing school board representatives, voicing their opinions, and supplementing their child’s education, but they don’t have the right to demand that school curricula accommodate their personal worldviews and private desires. While there are legitimate causes and complaints, a ban on Toni Morrison’s Beloved cannot qualify.

A Squid Meta-Game Rule

photograph of Squid Game game board

[SPOILER WARNING: This article discusses several plot details of Netflix’s Squid Game.]

At one point in Squid Game, a competitor, Deok-su, finds himself at a decision point: Should he jump on to the right or left pane of glass in front of him? One will break under his weight and he will fall to his death. The other will hold his weight, carrying him forward to the game’s ultimate goal, crossing the bridge without dying. Instead of choosing, he throws a rival onto one of the panes sending the competitor crashing through. Many will regard Deok-su’s actions as morally wrong, but why?

Is our disapproval based merely on the fact that a competitor is being thrown to their death? This is horrific, no doubt, but in context, it is arguably not morally reprehensible. Surely we can agree that the game itself is morally reprehensible because of the stakes involved as well as its exploitative nature. The series, however, asks us to put this moral concern aside. The players have all voluntarily agreed to participate. The rules of this game have been presented to the participants, and there isn’t any reason to think Deok-su’s strategy falls outside these lines. Consider the game of poker: a player may choose to lie to their fellow players in order to win the pot through a strategy known as bluffing. Normally, lying to steal your friend’s money is not morally acceptable, but in the context of this game, where everyone knows the rules and the consequences of the game, it is a legitimate strategy and we wouldn’t morally judge a person engaging in bluffing. Likewise, Deok-su has found a legitimate strategy that isn’t strictly prohibited by the rules of the game, yet our moral condemnation still feels appropriate. How do we square these competing intuitions?

I think there is still a good reason to judge Deok-su wrong, and it has to do with the nature of what a game is. I believe that in all true games, the individual players have the ability to help determine the outcome of the game. For example, a “game” like Chutes and Ladders is not a game at all, as the players have no agency in determining the outcome. The outcome is determined by chance and chance alone. When Deok-su throws his competitor onto the next pane of glass, he strips his opponent of their agency, removing that player’s ability to choose. He’s effectively broken a meta-rule of games. These would be rules that don’t apply to a specific game, but to all games, in order to maintain their integrity as games. (I don’t think that this is the only meta-rule of all games, but I’ll only be examining this particular one here.)

If the meta-rules of games can help us make moral judgments, then we should see similar results in other cases. We can apply this to a moment earlier in the series. Sang-woo has an advantage in the second game that the contestants are forced to play. He has a strong suspicion that the game will be Honeycomb and chooses the easiest shape to win, the triangle. He doesn’t share this information with his allies, but only watches silently as the show’s protagonist, Gi-hun, chooses the umbrella, the hardest shape. While this may not be in the spirit of the alliance that they have formed, he has not removed Gi-hun’s agency in the game. Sure, he’s violated the trust of his alliance, but given the stakes of the games, it might be considered simply good strategy to create false alliances. It is a more complex version of a bluff. But, imagine that Sang-woo, upon completing his task, went to all the other players that had yet to finish their tasks and shattered their honeycombs by kicking them. They would be eliminated from the game, but not by their own agency. The game would be taken from them. This would be morally reprehensible in the same way as a player slapping down the cards of their opponents in order to reveal them to the table in a poker game.

Let’s consider another moment from the Glass Bridge game. One of the players, a former glass maker, thinks that he can determine which plate is tempered and thus will not break. The host turns off the lights to stop him from being able to determine which is tempered. In the show, Sang-woo removes the glass maker’s agency in the same way that Deok-su does, by forcing the glass maker onto an arbitrary glass plate, because he is taking too long to decide. Are these two instances morally equivalent?

Let us suppose that Sang-woo acts differently and the Host leaves the lights on. The former glass maker could conceivably win the game at this point. He could simply stall, not making a decision until the last second, and then jump onto the correct plate in order to win the game. The other players would run out of time and lose the game. In this scenario, did the glass maker remove the agency of the players? If we understand the rules of the Glass Bridge game, no. Sang-woo could still step onto the same plate the glass maker is on, exercise his autonomy, and choose without waiting for the glass maker to reveal the correct choice. Much like Sang-woo is not obligated to reveal that the game would be Honeycomb, the glass maker is not obligated to reveal to the other players the correct decision. It would be unfortunate that the players behind Sang-woo and the glass maker, Gi-hun and Sae-byeok, couldn’t advance safely. The game for them has ceased to be a “game” as they are prevented from making any meaningful choices. But would this be wrong? That is, is the glass maker blameworthy in the same way we seem to hold Deok-su responsible? Of course not. The manner in which the agency is lost in the game makes a moral difference. Direct removal of a player’s agency is fundamentally different from agency being removed by the circumstances of game play.

It isn’t only in fiction that we see such actions. We can see similar strategies in professional sports where a team or player actively aims to remove the agency of a player from a game. The most morally egregious case would be aiming to injure a player to remove them from the game. However, we can see a legitimized version of removing agency of a player in baseball. When a hot batter comes up to the plate in baseball, pitchers can choose to deliberately walk the batter so as to minimize their potential impact. This practice is so cemented into the rules of the game that now the actual throwing of the pitches isn’t required. The coach of the defending team can simply signal the umpire that they would like to intentionally walk the batter and the player will advance to first base. The intentional walk strategy, and now rule, has generated strong feelings about its “sportsmanship.” However, I suspect the actual frustration that fans are experiencing is that the strategy fundamentally takes the game out of the player’s hands. The batter has been intentionally stripped of their agency, and so the game ceases to be. Fans came to see a game played and this, momentarily, is not a game. This non-game event could have a significant impact on the outcome, and that can make it feel unjust or unfair. Fans who defend the intentional walk strategy may argue that the rules of baseball don’t disallow it, and in fact now explicitly support it. I will concede that this is the case. But while it may not break the stated rules of the game, it breaks a meta-rule of games, and thus generates a justified sense of moral unfairness.

There are many games that we play where we suspend the normal rules of morality for the sake of the game and adopt a new set of moral rules that apply to the game. Consequently, we can’t simply make moral judgements about a player’s strategies in relation to normal morality. Sang-woo is often a cunning and brutal player in Squid Game, but at least he isn’t an immoral one in the Honeycomb game. In the Glass Bridge game, both Deok-su and Sang-woo show their moral colors not because they were breaking any stated rules of the game they were playing, but because they were undermining an aspect of what it means to play a game. Violating a meta-rule of games is at the very least dissatisfying, as we see in baseball, and it allows us to label strategies that break these rules as morally wrong, just as we would label breaking the stated rules of any game.

Conservatorships and the Problem of Possessing People

photograph of legal consultation with one party pausing over contract

For the second time in recent years, conservatorships are in the news. Like the many articles discussing Britney Spears’s conservatorship, these accounts often highlight the ways the conservatorship system can be abused. News outlets focus on abuse for good reason: there are over 1.3 million people in conservatorship/guardianship in the United States, and those in such a position are far too often taken advantage of.

But there are other ethical concerns with conservatorship beyond exploitation. Even when a conservator is totally scrupulous and motivated merely by the good of their conservatee, there is still something ethically troubling about any adult having the right to make decisions for another. As Robert Dinerstein puts it, even when conservatorship “is functioning as intended it evokes a kind of ‘civil death’ for the individual, who is no longer permitted to participate in society without mediation through the actions of another.”

So, what is the moral logic underlying the conservatorship relationship? What are the conditions under which, even in principle, one should be able to make decisions for another person; and how exactly should we understand that kind of relationship? These are the questions I want to address in this post.

So What Is a Conservatorship?

Tribb Grebe, in his excellent explainer piece, defines a conservatorship as “a court-approved arrangement in which a person or organization is appointed by a judge to take care of the finances and well-being of an adult whom a judge has deemed to be unable to manage his or her life.”

(You may sometimes hear conservatorships referred to as guardianships. Both ‘conservatorship’ and ‘guardianship’ are defined by legal statute, and while they usually mean slightly different things, what they mean depends on which state you are in. In Florida, a conservatorship is basically a guardianship where the person is ‘absent’ rather than merely incapacitated or a minor, while in other states a conservator and guardian might have slightly different legal powers, or one term might be used for adults and the other for minors. For most purposes, then, we can treat the two terms as synonymous.)

A conservatorship is, therefore, an unusual moral relationship. Normally, if I spend someone else’s money, then I am a thief. Normally, I need to consent before a surgeon can operate on me; no one else has the power to consent for me.

Or at least, conservatorship is an unusual relationship between two adults. It is actually the ordinary relationship between parents and children. If a surgeon wants to operate on a child, the surgeon needs the permission of the parents, not of the child. A parent has the legal right to spend their child’s money, as they see fit, for the child’s good. Conservatorship is, essentially, an extension of the logic of the parent-child relationship. To understand conservatorship, then, it will be useful to keep this moral relationship in mind.

Parents, Children, and Status

My favorite account of the moral relationship between parents and children is given by Immanuel Kant in his book The Metaphysics of Morals. Kant divides up the rights we have to things outside ourselves into three categories: property, contract, and status. Arthur Ripstein introduces these categories this way: “Property concerns rights to things; contract, rights against persons; and status contains rights to persons “akin to” rights to things.”

Let’s try to break those down more clearly.

Property concerns rights to things. For example, I have a property right over my laptop. I don’t need to get anyone else’s permission to use my laptop, and anyone else who wanted to use it would have to first get my permission.

There are two essential parts to property: possession and use.

Possession means something like control. I can open up my laptop, turn it on, plug it in, etc. I can exercise some degree of control over what happens to my laptop. If I could not, if my laptop were instantly and irrevocably teleported to the other end of the universe, I could not have a property interest in the laptop any longer. I would no longer have possession, even in an extended sense.

Use, in contrast, means that I have the right to employ the laptop for my purposes. Not only do I have some control over the laptop, I can also exercise that control almost any way I want. I can surf the web, I can type up a Prindle Post, or I can even destroy my laptop with a hammer.

Use is why my laptop is mine, even if you are in current control of it. If I ask you to watch my laptop while I go to the bathroom, then you have control of the computer, but you don’t have use of it. You don’t have the right to use the computer for whatever purpose you want. If you destroy the laptop while I’m away, then, you have committed an injustice against me.

Contract involves rights to other people. If you agree to mow my lawn for twenty dollars, then I have a right that you mow my lawn. This does not mean that I have possession of you. You are a free person; you remain in control of your actions. So, in contract I have use of you, but not possession of you. I have a right that you do something for my end (mowing my lawn), but I am not in control of you even at that point. I cannot, for instance, take over your mind and guide your actions to force you to mow my lawn (even though I have a right that you mow my lawn).

This is one way in which contract is unlike slavery. A slaveowner does not just claim the use of their slave. They also claim control over the slave. In a slave relationship, the slave is no longer their own master, and so is not understood to have possession of their own life.

Of course, another difference between contract and slavery is that contract is consensual. But that is not the only difference. If the difference were simply that slavery was not consensual, then in principle slavery would be okay if someone agrees to become a slave. But Kant rejected that thought. Kant argued that a slavery contract was illegitimate, even if the slave had originally consented.

Status is the final relation of right, and it is status that Kant thinks characterizes parents and children. According to Kant, status is the inverse of contract. In contract, I have the use, but not the possession, of someone else. In status, I have the possession of another but not use.

What could that mean?

Remember that to have possession of someone is to have a certain control over them. Parents have control over the lives of their children. Parents can, for instance, spend their children’s money, and parents can force their children to behave in certain ways. Not only that, but parents can do this without the consent of their children. These relationships of status, then, are very different from relations of contract.

But then why isn’t a parent’s control over their child akin to slavery?

To distinguish relations of slavery from relations of status, we need to attend to the second half of a status relationship. Parents have possession of their children, but they do not have the use of their children.

Let’s look at the example of money first. Parents have possession and use of their own money. That means parents control their own money and have the right to spend it however they want. In contrast, parents have the possession, but not the use, of their children’s money. That means that while parents can control their children’s money, they cannot just spend it however they want. Instead, parents can only spend the money for the good of the child. While I can give my own money away for no reason, I cannot give my child’s money away for no reason.

Parents have a huge amount of control over their children’s lives. However, Kant thinks that parents can only rightly use that control on behalf of their children. This does not mean that parents cannot require their children to perform chores. But it does mean that the reason parents assign chores must be the moral development of the child. Kant was critical, for instance, of people who had children just so that they would have extra hands to help with work on a family farm. Because children cannot consent to the control that parents have, parents wrong their children if they ever use that control for their own good as opposed to the good of the child.

The Fiduciary Requirement

Parents, then, act as a kind of trustee of their child’s life; they are a fiduciary. The word ‘fiduciary’ is a legal word, which describes “a person who is required to act for the benefit of another person on all matters within the scope of their relationship.” As Arthur Ripstein notes, the fiduciary relationship is structurally parallel to the parental relationship.

“The legal relation between a fiduciary and a beneficiary is one such case. Where the beneficiary is not in a position to consent (or decline to consent), or the inherent inequality or vulnerability of the relationship makes consent necessarily problematic, the fiduciary must act exclusively for the benefit of the beneficiary. It is easier for the fiduciary to repudiate the entire relationship by resigning than for a parent to repudiate a relationship with a child. But from the point of view of external freedom the structure is exactly the same: one party may not enlist the other, or the other’s assets, in support of ends that the other does not share.”

This is a powerful explanatory idea, and recognizing these fiduciary relationships helps us explain various forms of injustice. For example, since in a fiduciary relationship one is only supposed to act for the good of the beneficiary, this can be used to explain what is unjust about insider trading. If I use my position in a company to privately enrich myself, then I am abusing my office in the company. The private knowledge I have as an employee is available to me for managing the affairs of the company. To use that knowledge for private gain is to unjustly use the property of someone else.

This relationship can also help us understand political corruption. The reason it is unjust for presidents to use their office to enrich themselves is that their presidential powers are given for public use, for the sake of the nation. To manage the government for private purposes is to unjustly mismanage the resources entrusted to the president by the people.

Why Status

But even if we know the sort of relationship that obtains between parents and children — a type of fiduciary relationship — we still need to know why such a relationship is justified. After all, I can’t take control of your life, even if I use that control for your own good. I can’t do so even if I am wiser than you and would make better decisions than you would yourself. Because your life is your own, you have possession of your own life, no matter how much happier you would be if I took control.

The reason why parents have possession of their children is not that parents are wiser or smarter than their kids. Instead, it is because Kant thinks that children are not yet fully developed persons. Children, because of the imperfect and still-developing position in which they find themselves, are not able to be in full control of themselves (for a nice defense of this view of children see this article by Tamar Schapiro). Of course, the legal relationships here are crude. It is not as though the moment someone turns 18 they instantly pass the threshold of full moral personhood. Growing up is a messy process, and this is why parents should give children more and more control as they mature and develop.

Conservatorship

And just as the messiness of human development means that people should often have some control over their lives before they reach the age of 18, so too that messiness means that sometimes people must lose some control over their lives even after they reach adulthood.

Just as children are not yet fully developed moral persons, someone with advanced Alzheimer’s may no longer be one. We appoint a conservator over someone with Alzheimer’s not because the conservator will make better choices, but because people with Alzheimer’s are often incapable of making fully autonomous decisions for themselves.

This, then, is the basic moral notion of conservatorship. A conservator has possession but not use of their charge. They can make decisions on their behalf, but those decisions have to be made for the charge’s good. And such a relationship is justified when someone is unable to be a fully autonomous decision-maker, because in some way their own moral personhood is imperfect or damaged.

Autonomy, Euthanasia, and Non-Terminal Patients

photograph of hospital room with empty beds

In March of this year, changes to the law regarding assisted suicide and euthanasia in Canada loosened some of the restrictions about who is eligible to apply. Assisted suicide and voluntary active euthanasia have been legal in Canada since 2016; however, the law governing such procedures was recently challenged for being too restrictive and not in keeping with the original ruling of the Supreme Court. Initially, to be eligible for assisted suicide or euthanasia, the patient’s natural death must have been considered “reasonably foreseeable,” but now that criterion has been dropped so that even people suffering from grievous and irremediable mental illnesses will be eligible. But this new-found freedom only raises more questions about the concept of consent, especially in those cases where patients with impaired mental faculties express a desire to end their life.

There are relatively few nations in the world which permit either physician-assisted suicide or physician-assisted euthanasia, and even fewer that permit them for people who are not terminally ill. For example, in the United States passive euthanasia is legal, and in a few states voluntary active euthanasia is permitted but only in situations where there is a terminal illness. On the other hand, Canada will now join nations like Belgium and the Netherlands in permitting voluntary active euthanasia and assisted suicide to patients who are not terminally ill. In Belgium, patients with psychiatric conditions, even children, can request euthanasia. But in non-terminal situations where a patient may request that their life end because of a psychiatric condition like depression or dementia, the issue is complicated. In all of the above nations, the notion of informed consent is paramount, but it isn’t always easy to determine if a patient is competent enough to provide consent.

As Scott Kim, a psychiatrist and philosopher, notes,

“It is not easy to distinguish between a patient who is suicidal and a patient who qualifies for psychiatric euthanasia, because they share many key traits…one does not need to be a psychiatrist to appreciate how psychiatric disorders, especially when severe enough to lead to euthanasia requests, could interfere with a patient’s ability to make ‘voluntary and well considered’ decisions.”

Unlike a terminal illness, which may be confirmed by a chemical or imaging test, it can be difficult to say for sure whether a patient with a psychiatric illness has “unbearable suffering without the prospect of improvement.” For example, a Dutch woman elected to be euthanized 12 months after her husband’s death for “prolonged grief disorder,” despite being otherwise physically healthy. To make this determination even more complicated, just last year, the Dutch Supreme Court expanded the law to extend this choice to those suffering from dementia as well.

Those who defend the idea of psychiatric-assisted dying in non-terminal patients argue that the suffering caused by mental illness justifies physician-assisted dying, but only if the patient is able to request such a procedure autonomously and rationally. However, some philosophers and ethicists take issue with this criterion. In a paper on the subject, Jukka Varelius points out that, “Given that the distress a psychiatric patient undergoes can be very severe and that there may not always be adequate means of alleviating it (short of ending the patient’s life), the idea that psychiatric-assisted dying could sometimes be morally acceptable does merit attention.” Indeed, many argue that excluding the mentally ill from enjoying this freedom on the basis of disability is objectionably discriminatory.

For a patient’s decision to count as autonomous, it is commonly thought that it must be voluntary, intentional, and based on sufficient understanding of the nature and consequences of the decision. But certain mental illnesses undermine a patient’s ability to understand the world. A similar problem occurs in cases of dementia. As noted in a paper on the subject from the Journal of Neurology, “those suffering from dementia suffer from a disease that itself infiltrates the very center of autonomy and voluntariness.” But Varelius makes the case that non-autonomous psychiatric patients, even if their conception of reality is distorted, can also suffer unbearably when they are unable to express a reasoned decision to end their life. It is already common practice, for example, to engage in non-voluntary euthanasia by withdrawing life support from non-autonomous patients if it is deemed to be in the patient’s best interests, such as those who are in an irreversible comatose state or those with severe brain damage. It is, however, difficult to argue that we have any certainty regarding the patient’s personal preferences. Because of this, our standards involving autonomous choice may be less important than we often make them, and it would be cruel to claim that the suffering we force people to endure due to our skepticism of their “true” interests is not morally significant.

On the other hand, many may argue that there is a significant difference between active and passive euthanasia, or even deny that passive euthanasia should be endorsed at all. Also, when it comes to issues like dementia and mental illness, it won’t always be clear if suffering can be abated. Longitudinal studies show that patients with chronic psychiatric disorders sometimes get better or worse for reasons beyond the control of healthcare providers. So, it might not ever be clear whether there are other reasonable alternatives to euthanasia. And, without the ability to predict the future or have a meaningful conversation with a patient, there is no more reason to think that a person would want to be euthanized than to think that they wouldn’t.

There is also strong rejection of euthanasia from psychiatrists stemming from the nature of the profession. A fundamental core of psychiatry, to many, is to prevent suicide and to address hopelessness, helplessness, the desire to die, and the inability to see a future. The shift in policy towards psychiatric euthanasia is considered a fundamental change to the practice of psychiatry. Many worry about the slippery slope of ever-expanding criteria for euthanasia, which are beginning to include those who feel “like they have a completed life” and are tired of living. And some studies of the system in the Netherlands reveal that the no-reasonable-alternative criterion is not always met.

For these reasons, it is difficult to assess whether a decision is autonomous or how important that determination is in many of these cases. We need to explore the various frameworks in place to determine appropriate eligibility criteria and approval processes. Finding the right balance (if there even is such a thing) may be something that can only be learned the hard way.

Considered Position: On Voluntary Non-Vaccination – Real World Complications

photograph of child among masked crowd

This piece concludes a Considered Position series that examines our evolving moral duty to those choosing to remain unvaccinated. To see the earlier segments, start with Part I, then Part II.

In Part I of this series I tried to explain the moral intuition that I have less reason to engage in precautionary social-distancing behavior once those who are unvaccinated are unvaccinated by choice rather than because they could not access the vaccine. This intuition, I argued, is explained by the fact that justice mainly requires us to give people reasonable options for safety and does not require us to make others safe.

In Part II, I looked at what it takes for an option to be reasonable, and we realized that this is often sensitive to the reasons that someone does not want to get a vaccine. If someone’s objection to the vaccine is reasonable, then justice may still require me to adopt precautionary behavior.

In both posts, I idealized the questions at hand. I ignored real life complications, and just tried to identify the abstract principles involved. For that reason, my cases were often unrealistic, involving such absurdities as the unvaccinated engaging in moral blackmail or people taking pills to make themselves allergic to the vaccine.

In this final post, I want to turn to the real world and look at some of the complications which make this a difficult ethical question.

Complication 1: We judge the reasonableness of others in biased ways

We saw in Part II that if the other person has reasonable grounds for refusing the vaccine, then justice still requires me to protect them by social distancing (even if their grounds are mistaken). One challenge, however, is that we tend to be biased in our own assessments of what are reasonable grounds.

Consider, for example, the following two suggested grounds of vaccine hesitancy:

Skepticism 1: Distrust of a Racist Healthcare System

Some Black people in the U.S. are reluctant to get a vaccine due to distrust of the American medical system. While this is sometimes attributed to historical injustices, like the Tuskegee study, it is more plausibly explained by current disparities in health care treatment. (It also, as a whole, might just be overblown; but we will put that aside for now.) The thought might go as follows:

“As Ben Almassi has argued in the context of organ donation, there are good grounds, given persistent racial health inequities, for Black people in the U.S. to distrust that the medical system has their best interest at heart. But if one has good reason to distrust the health system, then one also has good reasons to distrust the recommendations of the health system. This is especially true because we know that drugs and vaccines can sometimes affect different racial groups differently, and we also know that Black people tend to be massively underrepresented in vaccine trials (even when those rates are reported).”

Skepticism 2. Distrust of the Liberal Narrative 

Some conservatives are reluctant to get the vaccine due to distrust in the way that mainstream media portrays medical information. They might say things like:

“Putting aside worries that the COVID threat was overhyped to sink former President Trump’s reelection chances; we have seen a systematic tendency for the media to provide unreliable coverage on how to respond to the vaccine in order to fit its preferred political narrative. First, we see the same caseloads suggest different reactions depending on who is in charge. The week after President Biden was sworn in, long before any change in policy could have altered risk, blue states began opening up and schools began seriously trying to return students to in-person education. That was true, even though the absolute case numbers were still worse than they were in the summer when everyone insisted things needed to be shut down under President Trump.

Then, of course, ‘the Scientists’ consistently lauded Democratic governors like Andrew Cuomo and panned Republican governors like Ron DeSantis. And yet, we have consistently found that the media narrative was backwards. Florida, despite an extremely old population, came out of the pandemic pretty well, with a much stronger economy, and with many more kids staying in school. This is not just cherry-picking. Republican states, on average, had fewer deaths and caused significantly less damage to the economy. Then ‘the scientists’ told us to take our kids out of school, but didn’t object to massive unmasked Black Lives Matter protests. ‘The scientists’ told us not to wear masks, until they needed a social symbol for purposes of public shaming.”

I bring up these two explanations of skepticism, not because I find them equally plausible, but because I don’t find them equally plausible. Intuitively, I find the first reasonable and the second ludicrous. But here is the issue: it’s hard to identify any objective reason the first is more reasonable. Sure, it fits better with my own preferred political narrative; but I think there are decisive objections to both lines of reasoning, and I don’t think the errors in one are in any sense more egregious than the errors in the other.

The danger, then, is that I am more sympathetic to members of what I see as my political in-group. But that differential in sympathy means I’m especially likely to inconsistently apply a standard as squishy as ‘reasonable.’

I don’t have a good solution here, and so just advise extreme caution when you label political allies reasonable or when you label political opponents unreasonable.

Complication 2: Immunization isn’t up to some people

While there are few, if any, groups in the U.S. for whom it would be dangerous to get the vaccine, there is some evidence that immunocompromised patients, while they can safely get the vaccine, do not always produce the required antibodies.

Similarly, there is a group of people in the U.S. who cannot choose to vaccinate: children. This is true in the limited sense that the vaccines are not currently approved for use in children below the age of 16. But it is also true in the sense that, even once the FDA approves the vaccine for children, children cannot choose to be vaccinated without a parent’s permission. Unvaccinated children, then, might not be unvaccinated by any choice of their own.

These are important complications, but I’m not sure that on their own they would show that you must socially distance until we reach herd immunity (on the hypothesis that a large percentage of the population are vaccine-skeptical holdouts). Children are far less susceptible to COVID-19, and only a very small portion of the population is severely immunocompromised. Given these facts, the threat posed to children and the immunocompromised is far smaller than the risk posed by pre-pandemic activities when most people did not have access to the vaccine. Certainly, you should engage in some precautionary measures, especially if you know you are likely to be around someone who is immunocompromised. But it is unclear that those are any different from the ordinary obligations one has during flu season.

Complication 3: Deception and consent

One further complication is that deception tends to undermine voluntariness. For example, if I lie to you about the results of a surgery, then your consent to that surgery does not actually constitute voluntary consent. Similar issues arise about sexual consent.

Or suppose you told your friend that you would pick them up from the airport. But then I, just for the fun of it, lie to you and tell you that your friend’s flight was delayed, that they were not able to reach you, and that they don’t need a ride after all. If you don’t pick your friend up from the airport, then breaking the promise was involuntary. It was involuntary because I am the one who bears responsibility for your failure.

Now, if it is true that deception can undermine voluntariness, then one worry we might have is that there may be a good number of people who refuse the vaccine because they were lied to, and if so, it is those who lied who bear the actual responsibility for the non-vaccination.

One reason this point is important to notice is that many people are especially likely to think that those with unreasonable reasons for refusing the vaccine accept those reasons because they are being lied to by their media ecosystem. Thus, many on the left think the vaccine hesitancy on the right is ludicrous, but those same people on the left are also likely to think that Fox News, OAN, or Newsmax are systematically deceiving their viewers. Similarly, many on the right think that concerns of racism are blown way out of proportion, but those same people on the right are also likely to think that mainstream media organizations — like CNN or The New York Times — are providing systematically distorted information on those very issues.

Indeed, it is not just cases of outright lying that might trigger a shift in responsibility. Not only do I preclude the voluntariness of your action by lying to you, I do the same thing if I tell you something false when I should have known better. If I tell you something false but am really honestly trying to tell you the best I know, then your actions are still voluntary. You made the choice to trust me in good faith. But if I am not acting in good faith myself, then I am the one at fault when you err.

Conclusion

So once vaccines are widely available (such that the unvaccinated are mostly unvaccinated by choice) but before we reach herd immunity (due to widespread vaccine hesitancy), can you return to pre-pandemic behavior?

As we’ve seen, this is a difficult question. However, it seems likely that the right answer is generally yes. For the most part, because it is reasonable to expect people to get the vaccine, it is reasonable to return to behaviors that would be safe were others to be vaccinated. This is true, even without factoring in the fact that the vaccinated are very unlikely to spread COVID. And so, it does seem like justice allows life to return to normal.

However, we have also learned an important moral lesson about what it takes to justly live together in a society. For justice to allow us to return to pre-pandemic activities, it is not enough for the vaccine to be widely available. It also matters that other people are able to voluntarily refuse the vaccine. And as it turns out, there are complicated ways in which we can undermine the voluntariness of others’ actions. When we are not fastidious about what we tell others, we risk undermining their capacity to make voluntary choices. If I thoughtlessly spread misinformation or repeat something as fact that I am only pretty sure of, then I undermine one of the fundamental building blocks of our system of justice.

My own testimonial irresponsibility undermines the voluntariness of those who believe me in good faith. And systems of justice largely depend on the mutual voluntariness of everyone’s choices. This is one reason why lying and bullshitting are such profound moral wrongs. It is not just that others end up with some wrong beliefs (we all have a bunch of misguided convictions), but that other people are rendered, in a sense, passive in relation to your choices. By breaking down even a small portion of the reciprocal norms of justice, you don’t just lie to another but partially bring the whole system of justice down on top of you.

Considered Position: On Voluntary Non-Vaccination – The Difference Voluntariness Makes

photograph of people walking and biking in masks

This piece begins a Considered Position series that examines our evolving moral duty to those choosing to remain unvaccinated.

My state, Florida, recently opened up COVID vaccinations to everyone. This does not quite mean that anyone who wants to be vaccinated can be vaccinated. There are still a limited number of vaccines available, so not everyone who wants to get vaccinated has been able to schedule an appointment. But we are getting close to the point where those who remain unvaccinated are unvaccinated by choice.

This raises a question: does the fact that the vulnerable choose to remain vulnerable make a moral difference to what precautions I should observe? I have the strong intuition that this does make a moral difference; it intuitively seems that imposing risks on the unvaccinated is not as bad when the unvaccinated are unvaccinated by choice. (The evidence increasingly suggests that the vaccinated cannot really spread COVID-19, and if that is confirmed it will render much of this practical discussion moot. However, the underlying philosophical questions are important and worth investigating.)

But is my intuition that I can be less cautious correct? 

In this, and two subsequent posts, I will try to answer that question. Each post will be dedicated to one part of an answer.

  • Part I: What principle underlies the intuition that the voluntariness of non-vaccination makes a difference to my own actions? And is that principle a true moral principle?
  • Part II: Does it matter why others are choosing not to be vaccinated? Are there differences, for example, in how careful I should be around someone who avoids vaccination because they think COVID-19 is overblown or around a pregnant mother concerned about the lack of trial data in pregnant women?
  • Part III: How do the complexities of real life complicate the moral calculation? What are the implications of the fact that children cannot get the vaccine without a parent’s permission? And is someone’s choice really voluntary if that person was lied to about the safety of vaccines?

In this first post, I want to investigate what principle might underlie my intuition that I have fewer obligations of caution to those who are voluntarily unvaccinated. To identify the principle at work, it will be useful to start with a simple argument that voluntariness should not make any difference. The thought goes as follows:

  1. During the pandemic, I avoid certain behaviors — such as licking doorknobs — to avoid spreading illness and death.
  2. If someone forgoes vaccination, the reason they forwent vaccination makes no difference to their susceptibility to illness and death.
  3. So, people being unvaccinated by choice makes no difference to my reason to avoid certain behaviors.

Let us call this the ‘simple utilitarian perspective.’ The simple utilitarian thinks that because voluntarily refusing a vaccine conveys no immunological protection, the fact people voluntarily forgo the vaccine makes no difference to my moral calculation. If you are in a community where 40% of people are unvaccinated by choice and I am in a community where 40% of people are unvaccinated because of a limited supply of vaccines, then the simple utilitarian says we are in a morally equivalent position.

The Utilitarian Explanation of the Difference

I call this the ‘simple utilitarian perspective’ because there is a perfectly good utilitarian argument against this reasoning. It is true that it makes no difference to my own risk whether I cannot get a vaccine or whether I choose not to get one; in either case I am unvaccinated. However, that does not mean that if you compare a random person who could not get the vaccine to a random person who chose not to get it, the average risk is the same. Assuming people are at least somewhat rational, people at higher risk are more likely to choose to be vaccinated.

Even if utilitarians ultimately care only about happiness, they will still place some value on freedom. When people are free to make their own choices, they can make the choices that are best for themselves. The elderly are at greater risk than the young; as such, the elderly are more likely to choose to vaccinate. Similarly, those who are very healthy — and without any risk factors for COVID-19 — are more likely to forgo vaccination because the risks they face from the disease are smaller.

All this means that it is probably safer to resume licking doorknobs once everyone has had the choice to get the vaccine, because those at highest risk will also be vaccinated at the highest rates.

Going Beyond the Utilitarian Answer  — This might partly explain my intuition, but it cannot be the whole story. This is because my intuition persists, even when I know the utilitarian explanation does not apply; for example, even if I know that the person is forgoing a vaccine for a reason unrelated to personal risk — like because ‘vaccines don’t fit with their personal style’ — I still intuitively feel I have less reason to be cautious.

Distributed Responsibility 

Part of the intuition is explained, I think, by the fact that people who are unvaccinated by choice will share some of the responsibility when they get sick.

If the only way to prevent people from getting sick is for me to stop licking doorknobs, then by licking doorknobs I take on complete responsibility for their illnesses. However, if there are two ways to prevent people from getting sick — I stop licking doorknobs or they get vaccinated — then at worst I am only partially responsible. They share in the responsibility by declining the vaccine.

If we imagine other more ordinary behaviors, like frequent grocery shopping rather than doorknob licking, then the other person actually bears most of the responsibility for getting sick. It seems more reasonable to ask them to get vaccinated than to ask me to stay indefinitely in lockdown; the more reasonable the choice you reject, the more responsible you are for the consequences of that rejection.  (This, then, is why you might feel I am mostly responsible if I really were licking doorknobs; licking doorknobs was not a reasonable thing to be doing in the first place.)

This idea, that the choices of others can mitigate our own responsibility, is prominent in both ethics and law. I like how Christine Korsgaard presents the idea in her discussion of our responsibility for the consequences of lying:

“In a Kantian theory our responsibility has definite boundaries: each person as a first cause exerts some influence on what happens, and it is your part that is up to you. If you make a straightforward appeal to the reason of another person, your responsibility ends there and the other’s responsibility begins. But the liar tries to take the consequences out of the hands of others; he, and not they, will determine what form their contribution to destiny will take. By refusing to share with others the determination of events, the liar takes the world into his own hands, and makes the events his own.”

Going Beyond the Distributed Responsibility Answer — But if this is the explanation of the intuition, then we have a problem. There is something morally vicious about someone who is solely concerned with avoiding responsibility. The virtuous reason to take precautions is not to avoid responsibility for someone’s death, it is to save people’s lives.

To see this, let’s look at an example from my own life (an example I still look back on with shame).

Years ago, an acquaintance of mine expressed an intent to commit suicide. I became deeply distressed, was unsure how to proceed, and grew paralyzed by indecision. So, I reached out to two mentors of mine, both of whom had experience working with suicidal people.

Reaching out was the correct thing to do; I did not know how best to offer help. The problem was the reason I reached out for help. Ultimately, it was not so that I could better support this acquaintance. Rather, I was racked by anxiety about messing up and becoming responsible for the person’s death. I reached out to these mentors because I knew that it would be irresponsible to not follow their advice. Deep down, I wanted to reach out because that way, even if the person did kill herself, at least I would not be blameworthy.

Why think this is morally perverse? Most simply because my own guilt was not the important good at stake in the choice. The thing that mattered was my acquaintance getting the help she needed; decreasing my own culpability if things went badly was not anywhere near as important! (For a more detailed discussion of the way in which a concern for our own responsibility distorts our moral reasoning, see Elizabeth Anscombe’s article “On Being in Good Faith.”)

Reasons of Justice

Even though we should not be strongly motivated by a concern to avoid responsibility, there is a close connection between what we should do and what we would be responsible for not doing. So, this difference in how responsible I would be if someone gets sick might not explain why I have weaker reasons to take precautions, but it is evidence that my reasons are weaker.

But if I do have weaker reasons, then that must mean that my reasons to take precautions are not quite so simple as ‘I have reason to keep people from getting sick.’ And this is the key to unlocking the puzzle. While I do have reasons to lower the risk that other people get sick, I have especially important reasons of justice to give people control over their own risk.

Before the vaccine is widely available, if I go around engaging in ordinary risky activities, I impose risks on others that they cannot reasonably avoid. They have no control over whether what I do poses a risk to them. As such, it is reasonable to expect me to forgo certain activities for the sake of maintaining some minimal freedom for others.

After the vaccine is widely available, however, the risks I impose on others are risks that can be reasonably avoided. Others have control over how large a risk my frequent grocery shopping imposes on them. People have the option of safety. Whether they take that option makes some difference to my reasons for infrequent grocery shopping; but it is a less stringent reason than my reasons of justice to avoid imposing unavoidable risks.

Justice is that virtue which enables us to live our own life in community with others; as such, it is the virtue that sets boundaries on what I can choose, where those boundaries mutually accommodate the choices of others. We can drive faster now that every car comes equipped with seatbelts. Why? Not because everyone always uses their seatbelts, but because everyone having access to seatbelts ensures that everyone has a free option that allows them to maintain their previous level of safety even as I start driving faster on highways.

Justice is concerned with whether people have the option of health, not with whether people are healthy. For example, justice requires that we provide those who are starving with food, but it does not require us to force-feed someone who refuses to eat. Were this not true, then justice could actually harm our ability to live our own lives in concert with others by giving rise to certain kinds of moral blackmail. Suppose I have no objection to being vaccinated and a high personal risk tolerance. As such, I insist that unless you pay me one hundred dollars I will not go and get a vaccine. If your duties of justice meant that as long as I forgo the vaccine, you cannot return to pre-pandemic activities, then I would be able to hold your actions hostage by means of your duty of justice.

Justice, of course, is not the only virtue. I also have duties of charity. And indeed, one of the things that makes charity so demanding is precisely that it opens us up to this kind of moral blackmail. To love another person requires caring about even their self-made misery. Charity is not ultimately about living your own life; it demands instead that you live, at least in part, for others. This is why charity is such a high and holy virtue; and in turn why even if everyone who forgoes a vaccination does so for entirely voluntary reasons, that does not end all duties of precaution.

Conclusion

Of course, in real life things are a little more complicated. For example, some people are forgoing the vaccine for trivial reasons while others seem to have reasonable concerns. Does my duty of justice change depending on why others are not being vaccinated? That will be the topic of Part II.

Continue to Part II – “Types of Reasons”

Back to School: America’s Uncontrolled and Unethical Experiment

photograph of middle school science classroom

As of this writing, several school districts in the United States have already reopened at some level, but most of the nation’s 40 million school-age children are scheduled to return sometime from mid to late August. One major argument for reopening is that it allows parents to return to work (assuming there is a job to go to) and help rebuild America’s faltering economy. The American Academy of Pediatrics has also supported this back-to-school movement, though its support concentrates on the emotional and social needs of students that can be better met by returning to school.

There is, however, one argument against going back to school that few consider: Going back to school amid an epidemic is America’s uncontrolled experiment using our children as the sample. Even the nation’s top epidemiologist, Anthony Fauci, told teachers in a recent interview: “You’ll be part of the experiment in reopening schools.” This experiment is neither scientific, nor ethical.

We scientists live in a world of unknowns, and we traverse that world through the use of the scientific method and research ethics. A controlled scientific experiment goes like this:

  1. A research question is formulated: the researcher makes the best “guess” as to what to expect from the data to be collected, based on what is already known about the topic.
  2. A sample of people is identified that will participate in the experiment with as little risk to them as possible.
  3. Variables are identified which, as much as reasonably possible, are controlled for.
  4. After any risks are considered, and consent to participate is obtained from the sample members, the experiment is run.
  5. The data are collected.
  6. The data are analyzed.
  7. Conclusions are drawn.

Through this controlled and ethical study, we hope to find answers that can be used to solve the problem at hand. Of utmost importance, however, is that these steps be accomplished within the boundaries of research ethics. In the field of healthcare, these ethical considerations are typically four in number.

The four basic ethical considerations when doing research in the public health and healthcare arenas in general are (1) autonomy, the power to give informed, uncoerced, freely given consent to participate in the research; (2) justice, assuring a fair distribution of risks, benefits, and resources over participants; (3) beneficence, acting for the benefit of participants; and (4) nonmaleficence, doing no harm and keeping participants out of harmful situations. These ethical considerations came about after WWII, when the atrocities of the Nazi regime’s uncontrolled experiments on human subjects were discovered. These considerations now guide the design of ethical research. By carefully adhering to the scientific method and the ethical principles of research, controlled experiments can be carried out.

Unfortunately, none of these guidelines are being met in the uncontrolled experiment America is about to run on its children when they go back to school this fall. The assumption is that getting students back in school will help solve the economic problem as well as the social and psychological problems the nation’s children are facing. These are important problems, and there are ethical ways of addressing them; the uncontrolled experiment on which America is embarking is not one of them.

If we compare this uncontrolled experiment with an ethically sound controlled experiment, we can see many pitfalls, pitfalls that may have dire consequences for all involved.

First of all, there is no research question. There is only a hope that things go OK and not too many get hurt. We don’t have enough information about the virus and its effect on children to even formulate a research question. What are we looking for and hoping to find? In essence, we are saying, “Let’s reopen schools, get the economy going, and help meet students’ social and emotional needs,” implying that this is the only avenue open to us to accomplish these goals.

Secondly, variables such as the age, race, and gender of students, teachers, school staff, and bus drivers — along with their underlying medical conditions — are just some of many variables that are difficult, if not impossible, to control for in the school environment. Even when good-faith attempts are made to control for some of these variables, several ethical problems emerge.

One example is school transportation. The average school bus occupancy is 56; if social distancing without masking is practiced, only 6 students can ride the bus; if masking alone is practiced, only 28 can ride. It costs districts about $1,000 per pupil per year to transport students to and from school. The additional costs of adding routes and making more trips to get students to school under either masking or social distancing will strain precious resources that could instead be spent helping students access remote learning.

Additionally, many states have regulations mandating that only students who live beyond a one-mile radius of the school they attend can ride a bus. Others must walk, ride their bikes, or use public or private transportation. Assuming that the family can afford public transportation or has a car, lives in a neighborhood that is safe for walking, and has weather that cooperates, these options work. However, marginalized children who live within this one-mile radius (and are thus not candidates for school transportation) may be further marginalized — kept from the emotional and social contacts they need and potentially missing vital instructional activities. These concerns are further complicated when we think about special needs students, whose medical vulnerabilities might put them at risk in these new school environments.

Thirdly, the sample used (children) is a protected one. The Office of Human Research Protection (OHRP) identifies several protected populations that deserve special consideration when they are involved in research using humans. Pregnant women, prisoners, those with lessened cognitive abilities, and children are a few examples. Extra precautions must be taken to assure these subjects are not simply being used with little protection from specific harms that may come. Children are not mature enough to make their own decisions as to whether they want to participate in a research project. They seldom, if ever, are even allowed to make their own medical decisions. Children have no say in whether they want to go back to school amid a pandemic projected to have taken the lives of more than 180,000 in our nation by the end of August. We are sending this protected group back to school blindly, with few safety precautions. We also know that when schools were closed statewide during the months of March through May, there was a temporal association with decreased COVID-19-related deaths in those states.

Fourthly, how will we be able to keep the participants (children, faculty and staff, bus drivers) from harm? Masking and social distancing can be practiced at school; however, some age groups will be better at that than others. The benefits and risks involved are not spread evenly over the sample of students. And it is not only the students who are at risk; teachers are as well.

Education Week recently reported that as many as 1.5 million public school teachers are at a higher risk of contracting COVID-19 due to their underlying health problems. The research on school staff vulnerability is sparse, but, given the law of large numbers, many staff members are at high risk as well when in a building of several hundred children. Children do get COVID-19, and with 5.5 million children suffering from asthma alone this could be a disaster waiting to happen. When race is taken into account, African-American children are 2.5 times as likely to contract COVID-19 as are Caucasian children, and American Indian and Hispanic children are 1.5 times as likely. Schools may be breeding grounds for transmitting the virus to these vulnerable populations. Children have more of the COVID-19 virus in their noses and throats than do adults, which makes children just as likely to spread the disease. They may not get the disease as easily as adults, but they do transmit it just as easily.

Do the benefits of returning to school (and there are many) outweigh the associated costs of spreading the disease?

There are many reasons other than academic ones for children needing to be in school. We know that at least 14 million children do not get enough to eat on a daily basis, and this varies by race: 30% of these children are Black, 25% are Hispanic, and less than 10% are Caucasian. Additionally, when children are home for extended periods of time with adults, the probability of child abuse increases. Yet this summer, schools found a way to deliver lunches, if not breakfasts as well, to their students who were in need of that service.

Some local municipal police departments and county sheriffs have instituted a “Drop By” program. In these programs, homes where abuse may be more likely to occur are visited irregularly as a “drop by” to see how things are going and whether anyone needs anything. During these visits, law enforcement officers are able to get a feel for any evidence of domestic violence in a non-threatening and non-accusatory manner.

School attendance both mediates and moderates the potential problems of food insecurity and abuse. But, as programs like those outlined above show, there are other ways to ameliorate these injustices to our children. A reallocation of dollars is needed, along with creative ways to supply the services that children and families need during this pandemic. Sending kids back to school under the current implementation is not the solution. The potential nonmonetary costs are not worth the benefits that may accrue by returning to school under the present conditions.

Eventually, we will have to come to terms with the outcomes of this uncontrolled experiment. Will we have learned that it was a bad idea? That there should have been more planning to ensure the safety and well-being of all at school? That we should have controlled for transportation safety? That dollars should have been reallocated for technology and given to those without it for remote learning? That home visits by school personnel to aid those experiencing difficulty learning remotely would have been worth the money?

Is America prepared to deal with the outcomes of this uncontrolled experiment where children are the sample? Neither science nor the ethics of research accept the premise of “we’ll do it and then see what happens.” But uncontrolled experiments do just that, at the peril of their participants. America sits poised to conduct such a trial.

McKamey Manor: The House of No Consent

black-and-white photograph of silhouetted figure behind glass

Since 2005, Russ McKamey has been running McKamey Manor, an extreme horror attraction. When patrons sign up for the tour, they are signing up to be physically and psychologically mistreated. Before participating, patrons must go through extensive interviews, undergo a medical examination, and sign a long legal waiver. However, some participants complain that the experience is too extreme and that the legal waiver does not excuse the attraction’s behavior. The nature of the attraction raises a host of issues concerning the nature and extent of consent.

A waiver is a voluntary surrender of a right or of the opportunity to enforce a right. Many horror attractions require patrons to sign a waiver before entering, in which the participants acknowledge that they are knowingly taking on the risk of various losses and relinquish the right to seek damages for harms they may suffer while attending the attraction. For example, if a person who attended a horror attraction suffered from a heart condition and experienced a heart attack during their participation, they would not be able to sue the attraction for any medical expenses incurred as a result of that heart attack. In the case of McKamey Manor, the waiver is reportedly about 40 pages long. In addition to the waiver, potential patrons are required to watch videos of other people’s experiences at McKamey Manor. The participants in these videos all ask to have their experience ended prematurely, and advise the potential participants that they “don’t want to do this.”

But does it follow that potential participants, duly informed of what may happen to them, truly consent to be buried alive, forced to ingest their own vomit, held under water, cut, struck, and verbally abused? Not necessarily. Not even a signed legal form or other explicit signal of consent automatically creates genuine consent. There are several conditions that render apparent consent void, such as when no genuine choice is available to participants, or when the participant is offered something that undermines their ability to make rational decisions. McKamey Manor offers participants $20,000 if they can survive the entire experience (which is of variable length, ranging from 4 to 10 hours). Even in the longest scenario, a successful participant would stand to make $2,000 per hour of their time — an inducement that undermines a person’s ability to think clearly.

While recent McKamey attractions allow participants to create safe words to automatically end their horror experience, this was not always the case. And McKamey patron Amy Milligan claims that even when she begged the actors to stop, they continued to torment her. If a person cannot end the experience at will — if they are at the mercy of the actors creating the experience — then that person has been robbed of their autonomy, even if only for a limited time. This creates another type of situation in which the explicit consent signal, in the form of the waiver, is a legal fiction. It is not possible for a person to fully waive their autonomy, as doing so would be to essentially sign themselves into slavery.

The idea that such “voluntary slavery” could exist is discounted as a possibility by philosophers with views and methodologies as different as Jean-Jacques Rousseau and John Stuart Mill. Rousseau argued that once a person becomes a slave by losing all autonomy, they cease to be a moral agent at all. As such, to consent to being a slave would be to consent to no longer being a moral or legal person. Mill argued that voluntary slavery was an exception to his harm-to-others principle, which holds that any person may do as they please so long as they do not harm someone else. He claimed that although a person attempting to sell themselves into slavery may not be causing harm to anyone but themselves, the sale nonetheless contradicts the whole point of the harm-to-others principle: to maintain maximum individual liberty.

Though McKamey Manor patrons do not sign themselves away into permanent slavery, they do “waive” their autonomy for a limited amount of time. Importantly, the effective duration of this “waiver” is determined not by the participants, but rather by the actors. Moreover, some of the experiences patrons are subjected to are essentially torture. Here again the substantiveness, or at least the relevance, of patrons’ consent is dubious. Consider waterboarding, a form of simulated drowning. (McKamey contends that no participants are waterboarded, but admits that they will be made to feel like they are drowning — a spurious distinction.) The problem with military detainees being waterboarded is not that they weren’t asked for their permission first. Indeed, lack of permission is not the sole moral shortcoming of any form of torture. The problem is instead the nature of the activity and the relationship it creates between people: a relationship in which one person inflicts suffering on another for enjoyment or profit.

McKamey and his defenders claim that the screening and waiver process creates a situation in which McKamey Manor patrons consent to a prolonged period of physical and emotional abuse. However, there are some things for which no waiver, no matter how lengthy and legalistic, can create consent. A person’s autonomy is inalienable. This doesn’t just mean that it cannot be taken away, but also that it can’t be given away.

The DOJ vs. NACAC: Autonomy and Paternalism in Higher Ed


Last month, the National Association for College Admission Counseling (NACAC) voted to remove three provisions from its Code of Ethics and Professional Practices. These changes will now allow schools to offer early-decision applicants special considerations like priority housing and advanced course registration. Schools are also now allowed to “poach” students already committed to other institutions. And, finally, the May 1st National Candidates’ Reply deadline will no longer mark the end of the admissions process, as schools can continue to recruit into the summer. Together, these changes threaten to drastically alter the college recruitment landscape, and it’s unclear whether those changes will be positive or even who the beneficiaries might be.

NACAC’s move to strike these provisions was motivated by a two-year Department of Justice inquiry into antitrust claims. The prohibition on universities offering incentives to early-decision students and wooing already-committed recruits was deemed anti-competitive and a restraint of trade. NACAC was given a straightforward ultimatum: strike the provisions or engage in a legal battle whose only likely outcome was dissolution by court order.

As Jim Jump suggests, the DOJ appears to see NACAC as a “cartel” — coordinating behavior, fixing prices, and cooperating so as to insulate themselves from risk. From the DOJ’s point of view, NACAC is merely acting in the best interests of institutions, and prevents students from getting the best economic deal possible on their education. By prohibiting certain kinds of recruiting and incentives, NACAC limits competition between institutions for the industry’s gain and students’ loss.

The DOJ’s perspective is purely economic: the price of attending college has been increasing eight times faster than wages. Demand for education is at an all-time high, the need for student services is ever-increasing, and state funding hasn’t been responsive to growing student numbers and institutions’ swelling size. Rather than increase government subsidy of higher education, the hope is that increasing competition between providers may drive costs down for consumers. The DOJ’s position is simple: “when colleges have to compete more openly, students will benefit.”

In response to these allegations, NACAC supporters claim that the rules are designed to safeguard students’ autonomy. By prohibiting institutions from poaching or offering better early-decision incentives, NACAC’s provisions shield impressionable high-schoolers from manipulation and coercion. Should colleges be permitted to offer priority housing or advanced course registration to early applicants, over-stressed teenagers will only be more likely to make their college choices prematurely. Should universities be allowed to court newly-matriculated students only just adjusting to college life, susceptible youths will always be swayed by the promise of greener pastures. In the end, these paternalistic measures are intended merely to preserve the possibility of effective student agency.

But, to many, treating prospective college students as vulnerable on the one hand, and competent and self-sufficient on the other, seems disingenuous. The average student debt is $38,000; if applicants are old enough to incur such large financial burdens, then surely they are old enough to navigate the difficult choices between competing financial and educational offers. As consumers of such high-priced and valuable goods, it should not be within others’ purview to doubt the truth, rationality, or sincerity of prospective students’ expressed preferences.

What the DOJ ruling may be missing, however, is the particular value for sale that makes the marketplace for colleges unique. As DePauw’s Vice President for Enrollment Management, Robert Andrews, argues, “There are real drawbacks to making your educational decisions like you would make your purchasing decisions around less-intricate commodities.” By reducing a college education to a simple dollar amount, we ignore the larger value of a college education and the formative role it can play in students’ lives. It’s difficult to accurately assess in retrospect (and certainly to predict beforehand) the meaning “an undergraduate education and the developmental experiences that occur when 18-22 year-olds live and learn on a college campus” will have, as well as all the factors that made that experience possible. As such, relative cost should perhaps not be billed as the crucial factor. Unfortunately, Andrews argues, striking these NACAC guidelines prioritizes the wrong thing:

“Students may be enticed by larger scholarship and financial aid packages and choose a school they had previously ruled out for very valid reasons, (i.e. size, academic offerings, availability of student services, etc.) thus putting their successful educational experience in serious jeopardy. Will saving $5,000 more per year mean anything if it takes a student 5-6 years to graduate when they could have made it out in 4 at the “previous” institution?”

At bottom, the disagreement between the DOJ and NACAC centers on whether consumers know their own interests best. In particular, the question is whether NACAC is better positioned to anticipate students’ needs than the students themselves. Folk wisdom claims that “you cannot harm someone by giving them an option,” and we must decide whether prospective college students represent a vulnerable population that needs to be protected from choice. Is the very possibility of new financial and educational incentives enough to undermine and override students’ true preferences? Does a policy of general prohibition on financial incentives support or frustrate those core preferences?

It remains to be seen whether the removal of NACAC’s guidelines will deliver positive or negative consequences for students, institutions, and higher education in general. Prophecies are in no short supply, and college administrators are desperately trying to anticipate how the new “Wild West” will play out.