
Leaders Behaving Badly: Executive Overreach and Dangers to Democracy

photograph of Donald Trump and Scott Morrison at White House press conference

In the same week that Donald Trump was being pilloried for taking classified documents from the White House, Australia was facing its own crisis of executive overreach. Reports surfaced that our former Prime Minister, Scott Morrison, had ignored the unwritten rules of Australian democracy and given himself responsibility for a variety of government portfolios, extending his power well beyond his remit. This extraordinary concentration of power in the hands of one man represented a significant threat to our venerable system of government. It also raises an interesting question about the nature of democracy: what is the best way to ensure that the voices of the population are represented in the halls of power?

What’s so great about democracy?

There are a couple of normative benefits to democracies over alternative forms of government. One is that executive power is limited, saving us from the sort of governmental overreach which characterizes totalitarian regimes. As political philosopher George Kateb wrote, “in contrast to dictatorship, oligarchy, actual monarchy or chieftainship, or other forms [of government], representative democracy signifies a radical chastening of political authority.” Both presidential and parliamentary democratic systems achieve this chastening by dividing powers between branches of government and providing checks and balances on executive authority. (That said, American presidents tend to have far more individual power than Australian prime ministers – despite the separation of powers in the U.S., executive orders are incredibly common).

For this chastening to be successful, however, strong constitutional or legal protections must be in place to ensure that power doesn’t become overly concentrated.

As we’ll return to in a moment, Australia’s reliance on unwritten laws, precedent, and tradition means that we are at risk of unscrupulous actors accumulating excessive power and wielding unfettered political authority.

Another positive of representative democracy is right there in the name – it is representative. Parliament, or congress, is made up of people from across the nation, and is supposed to represent the interests of those people, allowing them a say in, and control over, the laws and institutions that determine their lives. Australian philosopher Elaine Thompson equated representation with fairness: democratic systems are representative only insofar as “the parliament is accepted [by the people] as representing the people who elected it.”

The Australian parliamentary system

Before diving into issues of representation, it’s worth giving some background on Australian governance. There are quite a few differences between the Australian and American political systems, but the major one is that, in Australia, we don’t directly elect our leader. Both Australians and Americans vote for local representatives and for senators to represent their states.

But whereas every American has the opportunity to vote for their president (ignoring the vagaries of the electoral college), Australia’s prime minister is chosen by the aforementioned local representatives.

Currently, the Labor party holds a majority in the House of Representatives and has elected one of its own, Anthony Albanese, to the Office of Prime Minister. But if one party doesn’t hold a majority in its own right, parties must work together to form governing coalitions. Once a prime minister is elected, they select a ministry of members of parliament who are given responsibility for different portfolios – things like health, education, trade, foreign affairs, and so on. The minister is then supposed to wield authority over their area, meaning they make the big decisions on policy matters and (occasionally) take responsibility when things go wrong.

So, the Australian flavor of representative democracy is quite different to the American one. But if representation is the goal, what offers better representation – parliamentary or presidential systems?

President or Parliament?

On the one hand, American presidents are directly elected by the whole nation, which might make them more representative than Australian prime ministers. Presidential candidates can’t afford to only appeal to small minorities or particular geographical areas: they have to garner support across the country. Theoretically, at least, this should temper their wilder inclinations as they attempt to cast as broad a net as possible (although empirical evidence might suggest otherwise). On the other hand, it might be unreasonable to think that anybody could truly reflect the diversity of a huge country like the U.S.

Unlike presidential candidates, local representatives can (and perhaps should) pander only to their narrow constituencies. This means they can take up local matters or focus on representing minority groups, although that narrow focus can mean they are less representative of the nation as a whole.

In Australia’s system, the issue of representative leadership is somewhat offset by the existence of parliament: although any one member might not be particularly representative of the entire nation, the parliament as a whole – all 151 members of the house, plus the senate – ought to offer a decent reflection of the nation. And because decision-making isn’t centralized in the prime minister, it’s not such a huge issue that they are only elected to parliament by a small proportion of the population. By spreading decision-making responsibility across members of parliament, representing different people from different places, we avoid the need to have any single, broadly representative, head of state or government.[i] Lately, however, this hasn’t been happening.

The secret ministries

Last week, news surfaced that during the pandemic (now former) Prime Minister Scott Morrison secretly swore himself in to five different ministries: Home Affairs; Finance; Health; Industry, Science, Energy and Resources; and Treasury. So rather than having responsibility for policy decisions spread across members of parliament, we had an unprecedented concentration of power in Australia – something closer to the American presidential system than the system we are used to.

What’s worse, we didn’t get any of the benefits of the presidential system.

Instead of having a president elected by the entire country and entrusted with heading government, we had a prime minister with a huge amount of centralized power elected by a small group of people from south-east Sydney – an area richer, whiter, and more religious than Australia as a whole.

Essentially, we had the worst of both systems: an unrepresentative leader with too much individual power. Thompson’s fairness was nowhere to be seen, and the chastening of power that Kateb wrote about had been eroded from within.

Despite public outrage and condemnation of Morrison’s actions (including from those in his own party), they were perfectly legal – even if they “fundamentally undermined” the practice of responsible government. Luckily, Morrison did little with his extreme power, other than cancel a permit for a gas project off the coast of Sydney. Next time, however, we might not be so fortunate. What the Morrison saga shows us is that regardless of whether we live in a presidential or parliamentary system, we can’t rely on convention, tradition, and unwritten rules. Strong laws limiting individual power are essential to the creation of democracies which truly represent the will of the people.

[i] For an excellent overview of the strengths and weaknesses of parliamentary and presidential systems, check out political scientist Steffen Ganghof’s recent book.

Moral Burnout

photograph of surgeon crying in hospital hallway

Many workers are moving towards a practice of “quiet quitting,” which, though somewhat misleadingly named, involves setting firm boundaries around work and resolving to meet expectations rather than exceed them. But not everyone enjoys that luxury. Doctors, teachers, and other caregivers may find that it is much harder to avoid going above and beyond when there are patients, students, or family members in need.

What happens when you can’t easily scale back from a state of overwork because of the moral demands of your job? It might lead to a specific kind of burnout: moral burnout. Like other varieties of burnout, moral burnout can leave you feeling mentally and physically exhausted, disillusioned with your work, and weakened by a host of other symptoms. Unlike other varieties of burnout, moral burnout involves losing sight of the basic point or meaning of morality itself.

How could this happen? Many people enter caregiving professions out of a desire to help people and do the right thing — out of a deep commitment to morality itself. When people in these professions find that, despite their best efforts, they cannot meet the needs around them, it can be easy to feel defeated.

Over time, the meaning of those moral commitments can become eroded to the point where all that is left is a sense of obligation or burden without any joy attached to it. The letter of the moral law has survived, but not its spirit.

Moral philosophers often try to defend morality to the immoralist who only cares about themselves and maybe the people around them. But it seems to me that there might be an equally strong challenge from the other side: the hypermoralist who tries to follow morality’s demands as best they can but who is left cold and exhausted, no longer seeing the point of morality though still feeling bound to its dictates. What might the moral philosopher say in defense of morality in this kind of case? It seems that it depends on diagnosing what exactly has gone wrong.

So, what has gone wrong when “moral burnout” appears? First, it seems that, like in normal cases of burnout, the person is not receiving enough support or care themselves. This might be from a systemic failure, such as doctors being unable to get their patients the care they need due to injustices in the healthcare system. It could be from an interpersonal failure, where friends and family members in that person’s life fail to see their needs or adequately support them. Or perhaps it is from an individual failure, such as the person failing to reach out for or accept help.

The main problem is that there is a significant mismatch between the amount of morally significant labor that the person gives and the amount of support and recognition they receive.

This mismatch alone, however, is not enough to explain why the hypermoralist is left cold by morality. Sure, they may feel exhausted and disillusioned with their job or the people around them, but they might say something like “morality is still worthwhile; it’s just that other people aren’t holding up their end of the deal with me.”

What else is required to become disillusioned with morality itself? Especially for those who were raised to take all the responsibility on themselves, it’s easy to misunderstand morality as having to do only with duties to others and not at all with duties to oneself. In this case, the person can fail to properly value or take care of themselves, and lose sight of an important part of morality – self-respect. It is no surprise that this kind of person would become disillusioned.

Even for those who understand the importance of duties to oneself, it can be easy to fall into a similar trap of self-sacrifice if no one else will take responsibility for a clear and present need.

Another possibility is that, even though the person recognizes and works to fulfill duties of self-respect and self-care, they may find themselves caught up in a kind of rule fetishism, where morality becomes merely a list of moral tasks to complete. Self-care becomes another obligation to fulfill, rather than a chance to rest and recuperate. In this state, morality can seem to be a matter solely of burdens and obligations that must be completed, without the sense of meaning that one would normally get from saying a kind word, helping someone else, or standing up for oneself. Perhaps the hypermoralist has lost sight of the possibility of healthier relationships with others, or is unable to set healthy boundaries within their relationships or accept friendship and help from others.

Like friendship, morality is not transactional – it isn’t simply a set of tasks to complete. Morality is essentially relational.

Praising and blaming ourselves and others for the actions we perform is a core part of our moral practices, but these norms ultimately serve to assess whether we stand in the right relation with ourselves and with others. It is no surprise, then, that the hypermoralist has lost the meaning of morality if they have substituted its relational core of love for self and love for others with a list of tasks and obligations that lack relational context.

So, what can the hypermoralist do to regain a sense of moral meaning? The answer to that question depends on a host of considerations that will vary based on the individual in question. The basic gist, however, is that it’s vital to seek meaningful and healthy relationships and advocate for support when it’s needed. For example, a doctor in an unjust working environment might protest the indifference and profit-motivation of insurance companies who stand in the way of their patients getting the care they need. Ideally, this would not be another task that the doctor takes up alone but one that allows them to be in solidarity with others in their position — meeting people they can trust and rely upon along the way. Seeking out those meaningful and healthy relationships (moral and otherwise) can be tricky. But I hope for all of us that we can find good friends.

‘The Rehearsal’ and Its Murky Ethics

photograph of cameraman filming group on the street

Nathan Fielder’s The Rehearsal is one of the best shows of the year, and recently it was renewed for a second season. But, like Fielder’s previous show, Nathan for You, The Rehearsal has inspired ethical debate. I don’t want to provide spoilers. But suffice to say, some of the real-life people on both shows are not portrayed very positively. Even some of those who are portrayed positively are nonetheless trying to deal with high-stakes situations while being jerked around and manipulated for entertainment by Fielder, who is, supposedly, trying to help them. And there is an arc involving a child actor late in The Rehearsal which raises a new series of ethical questions of its own.

I want it to be morally permissible to make these shows. They are hilarious, I think they have real artistic value and, sometimes, important social commentary, and anyway, I liked watching them. But I am not sure.

Let me survey some of the most obvious defenses of them and show why I am not convinced they succeed.

Justification 1: The people on the shows agree to be on them. They want to be there. Many things which would not otherwise be ethically permissible become so if you consent to them. For instance, if you agree to participate in a medical trial for an experimental drug, it may be permissible for us to administer the drug to you, but not otherwise. Being on these shows is like that.

Reply: What matters ethically is informed consent. If you agree to participate in the trial, but I, the experimenter, don’t tell you about the risks the experimental drug poses, then it is not ethical for me to administer the drug after all. It is fairly clear that many of the people on the shows do not understand what they’re getting into. First of all, they are often directly lied to about the premise of the shows, or about other aspects of what’s happening. (For instance, the premise of Nathan For You is that Fielder is offering intentionally bizarre and ridiculous business strategies to small business owners; it’s essential to the show that the owners believe Fielder is being sincere in his suggestions.) Further, even beyond this, participants may not quite understand all the potential costs of being made the butt of the joke in front of millions of people.

Justification 2: The shows are mutually beneficial. For instance, it seems safe to say that the business owners on Nathan For You were usually made better-off by their appearance on the show. Nathan’s schemes are hare-brained, but they are implemented only very briefly, and presumably steps are taken off-screen to help disgruntled customers figure out what’s going on after the fact. On the other hand, the business owners receive a huge amount of free publicity. People on The Rehearsal may not be in quite the same situation – they don’t have businesses to publicize. But all the same, they get their “15 minutes of fame” in addition to being paid, and in some cases Fielder really does seem to help them with their personal problems.

Reply: Well, some of the people on the shows are portrayed in ways that benefit them. But then, others are portrayed in ways that predictably don’t. So this doesn’t work for them, does it?

Justification 3: Let’s assume the editing on the show is fair: people aren’t portrayed in unflattering ways unless the characterization is basically accurate. It’s hard to know if that’s true just from watching, of course, but you, the author of this article, haven’t provided any evidence that this isn’t the case. If so, then people who are portrayed in unflattering ways basically deserve what they get. If, for instance, you make anti-Semitic comments while knowing you’re being recorded for a TV show – as some people on The Rehearsal do – then you’re responsible for the consequences. You can’t complain if Fielder puts what you freely did in the show.

Reply: Okay, we might need to take this on a case-by-case basis. The thing is that having your misbehavior exposed to millions of people is an awfully serious penalty: most of us have done things we’d really rather not have on TV. Some sorts of misbehavior may well merit this kind of public exposure, sure. But then, it’s not easy to tell, and people making TV shows may not be the ones we want to trust with that sort of judgment. Further, a lot of the unflattering portrayals are not really about moral issues. They’re instead about, say, gullibility (e.g., some business owner getting overly excited about Fielder’s silly plan). It seems much harder to say that this merits public ridicule.

Justification 4: The ends justify the means. The value of these shows is great enough to outweigh some unfair harms to some of the participants.

Reply: There are deep ethical questions about when the ends do or don’t justify the means which I can’t get into here. But just on an intuitive level, it seems far from clear that this is a situation where this is true, and I have difficulty imagining anyone employing this defense and being perfectly comfortable with it.

There is a lot more to say about all this. Maybe there are other justifications I haven’t discussed. And maybe some of my replies aren’t decisive. I hope so, actually: like I said, I want making the shows to be permissible. Further, I haven’t dealt with the questions around child actors, which raise many more issues of their own. And further still, I have focused on questions about the permissibility of making the shows. I haven’t talked about the ethics of consuming them. Someone could hold, for instance, that it was wrong to make the shows, but that, now that they are made, you might as well watch them. My main takeaway for the time being is just that the ethical questions around shows like this merit serious reflection, and that those of us who consume them should think critically about what we’re doing.

She-Hulk: Superhero?

photograph of She-Hulk billboard with crowd walking below

What is the responsibility of those with power? Do they merely have an obligation to refrain from the misuse of that power? Or do they have a duty to protect those without it?

—Jennifer Walters

These are the very first lines of dialogue spoken by the character Jennifer Walters in the series She-Hulk: Attorney at Law. They echo words attributed to many others. In 1793, the French National Convention declared “they must recognize that great responsibility follows inseparably from great power.” In 1854, the Rev. John Cumming stated that “wherever there is great power, lofty position, there is great responsibility.” Winston Churchill, in 1906, asserted in Parliament that “where there is great power there is great responsibility.” And, of course, this ideal appears in the Spider-Man comics and adaptations: “With great power comes great responsibility.”

Jen’s invocation of this moral ideal, however, is distinct from these other versions. None of the versions quoted tells you anything about the nature of responsibility. Each of these simple versions is consistent with a minimalist moral code instructing merely ‘do no harm.’ In other words, just saying you have responsibility may only invoke obligations of non-maleficence. This is the gist of Jen’s second question about refraining from misuse of power. But Jen’s third question explicitly suggests something missing in all these other versions, namely, that there is an obligation of beneficence, a duty to help those without power.

This inclusion of a responsibility to benefit others may strike some as odd. In the United States, we have a strong tradition of only recognizing negative responsibilities.

Negative responsibilities are those that require we avoid performing harmful actions. This is often expressed as an expectation that we not interfere in the lives of others. For example, our notion of property rights includes the negative responsibility to refrain from stealing or destroying someone else’s belongings. Similarly, our conception of liberty tends to be understood as merely negative: in order for me to exercise my freedoms to life, liberty, property, and the pursuit of happiness, other people and institutions must refrain from creating obstacles to my exercise of those freedoms. It is a rare exception to these cultural understandings that we have any positive moral, political, or legal responsibilities.

A positive responsibility is an expectation that my actions will go beyond mere non-interference.

A negative right to life merely means I should not be causally involved in your demise. But a positive right to life would require me, if I am able to act with little or no risk of harm to myself, to help you when your life is threatened.

A common example to make this point is to consider the situation where you walk by a fountain and notice that a person is face down in the water, unconscious. If we only have negative rights, it is morally permissible to walk by without trying to help the unconscious person, even if they die as a result. The only requirement is that we do not act in a way that puts the person in a life-threatening situation, say, by placing them face down in the water. If, however, we have a positive right to life, I can’t just walk by and do nothing. If I am physically capable of lifting the person out of the water and have a phone to call 911 for additional help, then I must do both.

It might surprise some to know that the three major ethical traditions – consequentialism, Kantianism, and virtue ethics – all seem to recognize some form of a requirement to benefit others. Consequentialism, of course, focuses on whether your actions create the best outcomes, and thus often requires that you benefit others. But Kantianism also has a requirement of beneficence. It is one of the examples of an imperfect duty to others mentioned in the Groundwork of the Metaphysics of Morals. Imperfect duties provide leeway in terms of how the responsibility is discharged, but benefiting others is nevertheless a duty for a Kantian. Virtue ethicists of the Aristotelian variety include benevolence as an important moral virtue. Developing this virtue requires acting beneficently. So, it seems as if we have a requirement to perform beneficial actions, either because of moral rules or because performing them will develop an important moral virtue.

But now, something odd comes up. There is general agreement that we have positive obligations to benefit others when we can. This is an important element in realizing that there might be a paradox of heroism.

We often hear in superhero narratives that part of being a hero is recognizing the moral duty of beneficence, namely, that they must use their powers to help others. However, it is also the case that part of being a hero is acting in ways that go beyond the expectations of duty.

Such actions are called supererogatory actions. They are permissible actions but not required. But, if these two statements—a duty of beneficence and the performance of supererogatory actions—describe two individually necessary conditions for being a hero, can there actually be heroes, super or otherwise?

The duty of beneficence might eliminate the possibility of supererogatory actions. If one only has a negative duty to refrain from hurting others, then there is a large class of permissible but not obligatory actions – the supererogatory – that we can perform.  As just indicated, heroism seems to require that there is such a class of actions because a hero is someone who performs these actions that benefit others, and are permissible but not obligatory. But, if heroes also have a positive duty to benefit others – to help others when we can do so – then it is unclear whether there is any type of action that is permissible but not obligatory: due to human limitations in terms of self-sufficiency, She-Hulk, Captain Marvel, and the rest of the MCU characters would be obligated to help anyone near them with no opportunity for doing something supererogatory.

Quite frankly, each of us, superhero or not, seem to have that obligation. But then supererogatory actions define a category that is empty — there are no possible supererogatory actions. This, in turn, means that it is impossible to meet both requirements of being a hero.

If this is correct, how are we to make sense of our esteem for She-Hulk or any superhero? Jen doesn’t have a moral choice in the matter of whether to be a heroic She-Hulk. The larger community has enforceable expectations of her now that she has power. This is the point of Bruce, in what seems to be a throwaway comedic moment, explaining that the moniker ‘Smart Hulk’ was not his choice, but a decision made by the community that he must accept. Similarly, the obligation to benefit others with her Hulk powers is also not Jen’s choice.

Jen tries to reclaim that freedom to be something other than a Hulk.  She literally states “I didn’t want to be a Hulk” and “I’m not gonna be a superhero.” Instead, she is going to choose “to help people in the way that [she] always wanted to,” as a lawyer in a District Attorney’s office prosecuting those who prey on the vulnerable. Kantianism, with the idea that imperfect duties can be discharged through many different types of actions, might initially agree with this, and thus recover the supererogatory. Jen has to benefit others. But can she meet this requirement by merely being a good lawyer?

It doesn’t appear to be possible. Despite Jen’s attempt at living a normal life, and her claim that she was right to believe that she never has to be a Hulk, we quickly learn that this isn’t true. Bruce Banner’s predictions come to pass. He points out that the appearance of the Sakaaran Class-8 Courier craft isn’t really an accident. It is just another instance of the rule that when you are a Hulk, “weird stuff just kinda finds you.” How better to explain Titania interrupting Jen’s closing argument? With the arrival of Titania, Jen immediately accepts Bruce’s prediction that she is now a superhero. With a courtroom full of non-Hulks and the arrival of enhanced individuals, Jen Hulks out and protects everyone in the courtroom. Thus, she answers the rhetorical question of her closing argument: those with power have a duty to help those without power.

She has that duty. She accepts that duty. She acts in accordance with that duty. She chooses to do so because she has free will. But that is not a choice of supererogatory behavior; it is merely a choice to follow the minimum requirements of morality.

Jen acts beneficently; the moral choice is an obligation, and not supererogatory. She is not acting like a superhero.

But she is acting like a moral exemplar. In other words, she is someone who understands the moral expectation placed upon her, recognizes the possibility that she does not have to meet those expectations even in a minimal way, and yet chooses to meet those expectations anyway.

And often, even the minimal expectations, especially in terms of benefiting others, are quite demanding. Many of us fail, regularly, to meet those minimal moral demands.

Hopefully, we are all trying to better recognize them and choose to become better. Moral exemplars, then, are people to admire and aspire to be.

And that should be enough. Whether or not it is even possible to be a superhero, it is possible to be a moral exemplar. Furthermore, no one, not even our favorite fictional characters, needs to be perfect – what Susan Wolf derisively calls moral saints – to be a moral exemplar. We just need to make choices that help us each become a bit more like these exemplars, a bit more consistent with our moral ideals. And if Jennifer Walters’ narrative arc plays out this way and she heeds the call of beneficence, she will be worthy of our esteem, and maybe that’s what it truly means to be a superhero.

Woke Capitalism and Moral Commodities

photograph of multi-colored shipping containers loaded on boat

This piece is part of an Under Discussion series. To read more about this week’s topic and see more pieces from this series visit Under Discussion: “Woke Capitalism.”

Many have started to abandon the term “woke” since it is more and more used in a pejorative sense by ideological parties – as Charles M. Blow states, “‘woke’ is now almost exclusively used by those who seek to deride it, those who chafe at the activism from which it sprang.” What the term refers to has become increasingly ambiguous, to the point that it seems useless. As early as 2019, Damon Young was suggesting that “woke floats in the linguistic purgatory of terms coined by us that can no longer be said unironically,” and David Brooks concluded that no small part of wokeism was simply the intellectual elite showing off with “sophisticated” language.

But when the term rose to popularity in 2016, it was referring to a kind of awareness of public issues, and “became the umbrella purpose for movements like #blacklivesmatter (fighting racism), the #MeToo movement (fighting sexism, and sexual misconduct), and the #NoBanNoWall movement (fighting for immigrants and refugees).” And new fronts are always opening up.

Discussions of “Woke Capitalism” tend to focus on corporate and consumer activism. Tyler Cowen has also pointed out the importance of wokeism as a new, uniquely American cultural export that may fundamentally change the world. And, indeed, despite the post-mortems, “woke” remains in the lexicon of both political parties.

Even though the term “woke” has fallen out of favor, I suspect there is a mostly unaddressed aspect of wokeism that needs reconsideration. There may very well be a new mode of consumption just beginning to dominate the market: commodities as moral entities.

How does this happen? Let’s consider what differentiates Woke Capitalism from more familiar moral considerations about market relations and discuss how products have become moral entities through comparison to non-woke products.

It is not just about moral considerations: In any decision-making process, it is natural for some moral considerations to arise. In the case of market relations, any number of factors – the company’s affiliations, its production methods, the status of workers, the trustworthiness of the company, etc. – may prove decisive. Traditionally, as in the case of moral appeals in marketing – “If you are a good parent, you should buy this shoe!” – a moral consideration has to be explicitly linked with a company or a product. With Woke Capitalism, this relation is transformed: an explicit link is no longer necessary. All purchasing is activism – one cannot help but make a statement with what they choose to buy and what they choose to sell.

It is not just corporate or consumer activism: The moral debate about Woke Capitalism mainly revolves around the sincerity of companies and customers in support of social justice causes. And that discussion of corporate responsibility often revisits the Shareholder vs. Stakeholder Capitalism distinction.

Corporate or consumer activism seems to be making use of the market as a way of demonstrating the moral preferences of individuals or a group. It can be seen as a way to support what is essentially important to us. Vote with your dollar. As such, most discussions focus on this positive reinforcement side of Woke Capitalism.

What is lost in this analysis of Woke Capitalism, however, is the production of Woke Products, which forces consumers to take sides with even the most basic day-to-day purchases.

How should we decide between two similarly-priced products according to this framing: a strong stain remover or a mediocre stain remover that helps sick children? A gender-equality-supporting cereal or a racial-equality-supporting cereal? Each of these decisions brings some imponderable trade-off with it: What’s more important – the health of children or stain-removing strength? Which problem deserves more attention – gender inequality or racial inequality?

Negation of Non-Woke: The main problem with these questions is not that some of them are unanswerable, absurd, or impossible to decide in a short time. Instead, the problem is the potential polarizing effect of their relational nature. Dan Ariely suggests, in his book Predictably Irrational: The Hidden Forces That Shape Our Decisions, that we generally decide by focusing on relativity – people often decide between options on the basis of how they relate to one another. He gives the example of three houses which cost the same. Two of them are in the same style. One is a fixer-upper. In such a situation, he claims, we generally decide between the same-style houses since they are easier to compare. In this case, the alternatively-styled house will not be considered at all.

In the case of wokeness, the problem is that it is quite probable that non-woke products will be ignored altogether. With our minds so attuned to the moral issue, all other concerns fade away.

Woke Capitalism creates a marketplace populated entirely by woke, counter-woke, and anti-woke products. Market relations continue to be defined by this dynamic more and more. As such, non-woke products are becoming obsolete. Companies must accommodate this trend and present themselves in particular ways, not necessarily because they want to, but because they are forced to. And this state of affairs feels inescapable; there is no breaking the cycle. Even anti-woke and counter-woke marketing feed that struggle. All consumption becomes a moral statement in a never-ending conflict.

To better see what makes Woke Capitalism unique (and uniquely dangerous), consider this comparison:

Classic moral consideration: Jimmy buys Y because Y conforms to his moral commitments.

Consumer activism: Jimmy buys Y because Y best signals his support for a deeply-held cause.

Woke Capitalism: Jimmy buys Y because purchasing products is necessary for his moral identity.

This is not just consumer activism whereby customers seek representation. Instead, commodities turn into fundamentally moral entities – building blocks for people to construct and communicate who they are. As morality becomes increasingly understood in terms of one’s relationship to commodities, a moral existence depends on buying and selling. Consumption becomes identity. “I buy, therefore – morally – I am.”

Avoiding Complicity & the NFL: Can Piracy Be Moral?

photograph of NFL logo on TV screen

On Thursday, August 18th, the NFL announced that Deshaun Watson, quarterback of the Cleveland Browns, will be suspended for the first 11 games of the upcoming season. In addition, he will be fined $5 million and must enter counseling. The suspension follows at least 25 women credibly accusing Watson of sexual assault, leading to two grand jury cases in Texas, as well as 24 civil lawsuits.

Watson has repeatedly denied these allegations, stating in a press conference after his suspension was announced that although he believes he did nothing wrong, he is sorry to “everyone that was affected about the situation. There was a lot of people that was triggered.” Both grand juries declined to indict Watson, and at the time of writing all but one civil case have been settled.

As per the collective bargaining agreement with the NFLPA, the NFL Commissioner has sole authority to determine punishment. Commissioner Roger Goodell publicly stated that the league would pursue a full-season suspension and appointed former New Jersey attorney general Peter C. Harvey to hear the appeal. Yet the league, the NFLPA, and Watson’s legal team agreed on this punishment before Harvey heard the appeal.

Setting this particular punishment seems cynical at best. The first game Watson will be eligible to play sees the Houston Texans, Watson’s former team, host Cleveland. This game will likely be promoted as “must-see-TV.”

Further, the NFL previously offered Watson a 12-game suspension and ten million dollar fine, which his camp rejected. Despite having all the bargaining power, the NFL apparently decided it was appropriate to reduce the punishment they initially proposed. Perhaps they sought to avoid a court case which could have drawn further attention to numerous sexual misconduct allegations against team owners.

In response to what they perceive as the NFL’s failure of moral decency, many fans are left unsure of what to do. Some Browns fans have abandoned their fandom. However, since the league is under fire, this may affect fans of any team – admittedly, supporting the NFL through my lifelong Buffalo Bills fandom feels like a source of shame.

Some fans are trying to carve out a space which lets them watch games while avoiding complicity with wrongdoing. Many highly upvoted posts in a Reddit thread discussing the suspension call for, and celebrate, pirating NFL games.

By pirating (and piracy), I mean viewing games over the internet through unlicensed means.

What does piracy accomplish? The NFL gets billions of dollars a year in revenue from TV networks. The networks are willing to pay billions because the NFL dominates ratings – in 2021, regular season games averaged over 17 million viewers; NFL games made up the entirety of the top 16 most-viewed programs that year, 48 of the top 50, and 91 of the top 100. However, because pirates do not contribute to these ratings – unlicensed viewers are not counted – it’s thought that they do not help the NFL profit.

A formal version of the argument implied by these posters might look like the following. Call it the argument for piracy:

1. The NFL is morally bad.
2. One should not be complicit with morally bad organizations.
3. Pirating NFL games does not financially support the NFL.
4. Financial support is a form of complicity.
Therefore: It is not immoral to pirate NFL games.

Proponents of this argument claim that they must avoid helping the NFL profit and will pirate games to fulfill this duty. Call them “principled pirates.” The behavior of principled pirates is somewhat similar to the behavior of people engaged in boycotts – which I have previously analyzed. However, there is an important difference. Unlike the boycotter, the principled pirate still consumes the products of the organization she is condemning. And therefore the two behaviors, and their morality, are distinct.

Before considering whether the argument for piracy succeeds, we should first briefly consider the argument against piracy. Piracy is illegal, which does not immediately imply immorality. Yet piracy is commonly viewed as theft, which is usually immoral (for discussion see Beatrice Harvey’s “Can Shoplifting Be Activism“). This is because the thief takes away something that others deserve, namely, their property. Suppose you stole the cash out of my wallet. I (hopefully) earned that money through my labor or just transactions, and as a result deserve to own it. Thus, your theft violates the moral principle of desert – you take something that I deserve, despite not deserving it yourself.

The argument for piracy undercuts this analysis in two directions. First, pirates do not take something. In the case of pirating sports broadcasts, they merely access something without permission.

So piracy might be more akin to sneaking into an empty movie screening than stealing cash – the theater still has the reel, and no one’s ability to watch the movie is impeded.

Second, the argument for piracy raises questions regarding desert. The NFL, as the Watson case suggests, seems willing to engage in immoral behavior for the sake of profit. So, the money they earn is not justly deserved according to the principled pirate. Thus, we should avoid contributing to the NFL’s profits and instead ensure our money goes toward more scrupulous organizations.

Ultimately, the argument for piracy, if correct, only justifies a particular kind of fandom. Specifically, one that does not contribute financially to the NFL – no attending live games, no watching legal broadcasts of games, and no purchasing officially licensed team apparel. Further, one should avoid buying products from the NFL’s sponsors. But given their sheer number, this is a difficult task.

Yet the argument for piracy faces even greater troubles. Closely considering line 4 – that financial support is a form of complicity – makes this apparent. We might call financial support “material complicity.” This occurs when one materially contributes to a cause, action or organization. If you, say, buy a jersey, a certain amount of that money goes to the NFL. Quantifying the effect of viewership is more complicated, but theoretically operates in a similar way.

There are other forms of complicity. Suppose that I pirate a game. The next day at work, I overhear some co-workers discussing the game and join in the conversation.

By doing so, I send a message to others: despite the league’s faults, their content is worth consuming and discussing with others. In this way, I am promoting their product and contributing to their success.

Call this “social complicity.”

The argument for piracy outright ignores social complicity. Even illegal viewership, despite not benefiting the NFL directly, still promotes their interests in the long run – unless one watches the game and turns it off, never to think about it again, even piracy helps keep the sport front and center in the minds of others. And it is this primacy that makes the NFL perhaps the largest cultural juggernaut in the U.S.

Further, one might question the integrity of the principled pirate. On one hand, the principled pirate points to some moral ideal and condemns those who violate it. Simultaneously, the principled pirate is refusing to take on any burdens to promote that good with her own behavior, aside from the effort of finding streams. What she claims to value, and what her behavior indicates that she values, are at odds.

So, what motivates the argument for piracy – the moral failings of organizations like the NFL – ultimately causes the argument to fail.

Principled pirates demonstrate a lack of integrity at best and are complicit in wrongdoing at worst. If watching legally is immoral, then watching in any capacity seems wrong.

Though perhaps there is some merit to the argument for piracy. One may instead view it as a matter of harm reduction. Principled pirates are not moral saints. But complicity comes in degrees; surely the person who is both materially and socially complicit is doing something worse than someone who is merely socially complicit.

Of course, the principled pirates could do less harm overall by not watching. However, they would likely turn towards consuming other content in place of NFL games. As media ownership becomes increasingly consolidated, it is more and more difficult to find content that is not linked to some morally troubling corporate behavior. Thus, it becomes harder to avoid complicity in wrongdoing via one’s media habits; the only way to have wholly clean hands may be to stop watching, listening, and reading altogether.

State of Surveillance: Should Your Car Be Able to Call the Cops?

close-up photograph of car headlight

In June 2022, Alan McShane from Newcastle, England, was heading home after a night of drinking and watching his favorite football club at the local pub when he clipped a curb and his airbags were activated. The Mercedes EQ company car that he was driving immediately called emergency services, a feature that has come standard on the vehicle since 2014. A sobriety test administered by the police revealed that the man’s blood alcohol content was well above the legal limit. He was fined over 1,500 pounds and lost his driving privileges for 25 months.

No one observed Mr. McShane driving erratically. He did not injure anyone or attract any attention to himself. Were it not for the actions of his vehicle, Mr. McShane may very well have arrived home safely and without significant incident.

Modern technology has rapidly and dramatically changed the landscape when it comes to privacy. This is just one case among many which demonstrates that technology may also pose threats to our rights against self-incrimination.

There are compelling reasons to have technology of this type in one’s vehicle. It is just one more step in a growing trend toward making getting behind the wheel safer. In the recent past, people didn’t have cell phones to use in case of an emergency; if a person got in a car accident and became stranded, they would have to simply hope that another motorist would find them and be willing to help them. However, this significant improvement to safety isn’t always accessible during a crash. One’s phone may not be within arm’s reach and during serious car accidents a person may be pinned down and unable to move. Driving a car that immediately contacts emergency services when it detects the occurrence of an accident may often be the difference between life and death.

Advocates of this technology argue that a person simply doesn’t have the right to drive drunk. It may be the case that under many circumstances a person is free to gauge the amount of risk that is associated with their choices and then choose for themselves the amount that they are willing to take on. This simply isn’t true when it comes to risk that affects others in serious ways.

A person doesn’t have the right to just cross their fingers and hope for the best — in this case to simply trust that they don’t happen to encounter another living being while driving impaired.

When people callously rely on luck when it comes to driving under the influence, living beings can die or be injured in such a way that their lives are involuntarily altered forever. Nevertheless, many people simply do not think about the well-being of others when they make their choices. Since this is the case, some argue that if technology can protect others from the selfish and reckless actions of those who can’t be bothered to consider interests other than their own, it should.

Others argue that we can’t let technology turn any country into a police state. Though such people agree that there are clear safety advantages to technology that can help a person in the event of an accident, this particular technology does more than that — it serves as a non-sentient witness against the driver. This radically changes the role of the car. A vehicle may once have been viewed as a tool operated by a person — a temporary extension of that person’s body. Often cars used as tools in this way are the property of their operators. Until now, a person’s own property hasn’t been in the position to turn them in. Instead, if a police officer wanted information about a person’s body (their blood alcohol content, say), they’d need a search warrant. This technology removes the element of choice on behalf of the individual when it comes to the question of whether they want to get the police involved or to implicate themselves in a crime.

This is far from the only technology we have to be worried about when it comes to police encroachment into our lives and privacy. Our very movement through our communities can be tracked by Google and potentially shared with police if we agree to turn location services on when using our phones.

Do we really have a meaningful expectation of privacy when all of the devices we use as extensions of our bodies are accessible to the police?

Nor is it only the police that have access to this information. In ways that are often unknown to the customer, information about them is frequently collected and used by corporations and then manipulated to motivate that customer to spend more and more money on additional products and services. Our technology isn’t working only for us; it’s also working for corporations and the government, sometimes in ways that pretty clearly run counter to our best interests. Some argue that a product on which a person spends their own hard-earned money simply shouldn’t be able to do any of this.

What’s more, critics argue that the only condition under which technology should be able to share important information with any third party is that the owner has provided fully free and informed consent. Such critics argue that what passes for consent in these cases is nowhere near what would be required to meet this standard.

Accepting a long list of terms and conditions written in legalese while searching for crockpot recipes at the grocery store isn’t consenting to allowing police access to knowledge about your location.

Turning a key in the ignition (or, more and more often, simply pressing an ignition button) does not constitute consent to abandon one’s rights against self-incrimination or to make law enforcement aware of one’s blood alcohol content.

Advocates of such technology argue in response that technology has always been used as important evidence in criminal cases. For instance, people must be careful what they do in public, lest it be captured on surveillance cameras. People’s telephone usage has been used against them since telephones were invented. If one does not want technology used against them in court, one shouldn’t use technology as part of the commission of a crime.

In response, critics argue that, as technology develops, it has the potential to erode our fourth amendment rights against unlawful search and seizure and our fifth amendment rights against self-incrimination to the point of meaninglessness. Given our track record in this country, this erosion of rights is likely to disproportionately affect marginalized and oppressed populations. It is time now to discuss principled places to draw defensible lines that protect important democratic values.

Organ Donors and Imprisoned People

photograph of jail cell with man's hands hanging past bars

Should people who are in prison – even on death row – be allowed to donate their organs? Sally Satel has recently made the case. After all, there is a “crying need” for organs, with people dying daily because they do not receive a transplant. But, as Satel points out, the federal prison system does not allow for posthumous donations and limits living donations to immediate family members.

Imprisoned people, whether they want to donate a kidney whilst alive or all their organs after an execution, are rarely able to do so.

There seem to be a couple of practical justifications for this. First, donation might interfere with the date of execution; second, the prison system might have to bear some of the cost. I want to address these two issues before moving on to some of the other ethical issues involved.

It’s important to see that the actual date of execution has no ethical significance – it is not a justice-driven consideration. If it turns out that an execution is delayed two weeks to enable a kidney transplant, so what? Executions are delayed by stays all the time, and if there is some good to come out of changing the date then keeping it fixed doesn’t seem particularly important.

Secondly, there may well be costs to the prison system in, say, medical care for a patient who has donated a kidney (or for the removal of organs post-execution). But the prison system is part of the state. Given there is a nationwide shortage of organs, we might expect the state to play a role in addressing this, and if it has to bear some cost, why should it matter that the prison system – not the health system – must pay? After all, the criminal justice system is meant to help broader society. (That is not to mention that there might be other ways of funding these transplants that don’t increase costs for the prison system.)

There are further explanations for why states do not permit donations. Christian Longo – who sits on death row in Oregon for murdering his wife and children – asked to posthumously donate his organs and was told that the drugs used in executions destroy the organs. But Longo points out that other states use drugs that do not cause such destruction. Still, the specific drugs used in executions brings up an ethical concern: how painful these drugs are is not clear, and there seem to be some incredibly distressing executions.

Fiddling around with these drug cocktails in order to ensure the viability of organs may introduce major risks to the condemned.

Longo asked to donate his organs; so too did Shannon Ross, who is serving a long prison sentence. The fact that people are requesting to donate means there is more than mere consent here – there is an eagerness to donate. But this might hide some deeper worries, and to see them we need to investigate why inmates wish to donate.

We might also worry that Longo wants to get some “extra privileges” or to somehow improve his own situation. Perhaps an appeals or parole board would look more favorably upon somebody who has given up a kidney. But that doesn’t seem to be the case for Longo, who is resigned to death (though he has not yet been executed; Oregon has a moratorium in place). Yet others might volunteer to donate in the mistaken belief that this will help their case. This might make the expressed consent less voluntary than it seems, since they don’t fully understand the risks and benefits of what they are consenting to.

And this leads to what I think the most difficult moral issue here is: whether prisoners can autonomously consent. Longo points out that consent can sometimes be exploited: prisoners in the 60s and 70s were paid to volunteer for “research into the effects of radiation on testicular cells.”

That, even if it is seemingly voluntary, is unacceptable – prisoners are in a vulnerable position and we shouldn’t exploit them for medical research.

Both for prisoners who will be released and those on death row, I think we can find a useful parallel with cases of voluntary euthanasia. The key similarity is that both are in a desperate situation and are offered a chance that seems to help them improve their position.

David Velleman, for example, poses this challenge to defenders of voluntary euthanasia: perhaps even offering somebody the choice to die is coercive. To simplify a very complex argument, if someone thinks they might be a drain on their family, then offering them the chance to be euthanized might not actually help them do what they would autonomously choose. They want to carry on living, and they regret that this burdens their family. But once confronted with the option to die, they are called upon to provide a justification for continued existence and might, then, feel compelled to take an option they might otherwise not. And we can see how a prisoner on death row might similarly feel compelled to donate – lacking a suitable justification to refuse – once confronted by the choice.

In addition to these concerns about mistaken beliefs and the coerciveness of choice, there might be another deep temptation to donate. Longo notes that he has little opportunity to give back to society in any way – a society that he recognizes he has wronged and harmed. Giving away his organs seems to be a way of giving back. Donation, then, provides a way of atoning, if only to a limited extent.

The worry here is that the prospect of atonement works a bit like the worry about being a burden on your family.

When you’re given the option – donate your organs in the one case, end your life in another – this prospect burns too brightly.

It might be that the prospect of atonement blots out an individual’s proper concern with, say, their own future health (or, if they are on death row, with objections they might have to organ donation).

Yet I think that, powerful and troubling as this concern might be, it remains only a worry. In offering his argument, Velleman notes that he isn't opposed to a right to die, only that this is a (perhaps defeasible) argument against an institutional right to die. Likewise, the argument in our domain only goes so far. Many people have no objection to organ donation, so there is no concern that they, if on death row, would be making the wrong choice for themselves. Plenty of people who are under no pressure at all choose to donate a kidney – why can't we allow prisoners to make that choice, too?

If we worry too much about the possibility of letting prisoners make a bad choice, we might be paternalistic and also take away from them the free choice to selflessly help others.

The Animal Ethics of OrganEx

photograph of pig head poking around barn door

As Benjamin Franklin famously wrote in his 1789 letter to the physicist Jean-Baptiste Le Roy, "in this world nothing can be said to be certain, except death and taxes." While it seems nothing can be done about the latter, science has been progressively fighting the former for centuries. Or, at least, contesting when death's inevitability must befall us.

In a recent paper published in Nature, a team from Yale University claims to have developed a system – dubbed OrganEx – capable of reversing some of death's effects over an hour after cardiac arrest. If we're to believe the findings (and there seems to be good reason to do so), then this team has pushed the boundary separating life from death. Before going further, however, it should be pointed out that the experiment was carried out on pigs, and the restored features of life were nothing as grand as consciousness or the capacity for independent living; they were cellular.

Nevertheless, the study’s results may have profound implications for medical practice, especially in end-of-life matters like organ transplantation and donation, palliative care, and assisted dying.

In short, OrganEx was developed from an already existing experimental system called BrainEx. Developed in 2019, BrainEx showed the capacity to preserve the structure and function of cells within a pig's brain hours after decapitation. OrganEx takes the same principles and applies them to the entire body. It consists of two essential parts. The first is an infusion device attached to the body via the femoral artery and vein. The second is a complex chemical cocktail that the infusion device circulates through the body, mixed with the recipient's blood. This concoction consists of amino acids, vitamins, an artificial oxygen carrier, and neurological inhibiting compounds, among other things. An hour after researchers stopped the pig's heart and withheld medical assistance, the OrganEx system started pumping the perfusate around the pig's body. After six hours of circulation, tests showed that oxygen had begun reaching multiple bodily tissues and that the pig's heart had demonstrated limited electrical activity. Additionally, some expected cellular degradation appeared absent. In fact, some cells were metabolizing glucose and building proteins.

In other words, compared to the experiment’s control groups, OrganEx began repairing damaged organs hours after death.

The study’s results are remarkable, and the paper has received significant media attention (many making references to the idea of Zombie Pigs). However, an unease sits at this study’s core and, unfortunately, at the core of many biomedical studies – the use of animals in experiments.

Unlike the BrainEx study, in which researchers acquired the pig’s head from a slaughterhouse, the pigs used in the OrganEx study were slaughtered deliberately for the study’s purposes. Is this ethical? Can we justify the use of these pigs in the OrganEx experiment? I believe a perfectly suitable alternative was overlooked, an alternative that would have meant that the pigs used in the experiment could have continued their lives without being slaughtered – human cadavers.

Within research ethics, there is a widely employed framework known as the 3Rs. Proposed by Russell and Burch in their 1959 book, The Principles of Humane Experimental Technique, these Rs stand for Replacement, Reduction, and Refinement, and researchers should consider each of these principles in order. Replacement refers to substituting animals in research with technological alternatives or simply nothing at all. If it isn't possible to replace animals, researchers move on to the reduction principle, using as few animals as possible to minimize potential suffering. Finally, if replacement and reduction aren't possible, researchers should seek to refine their husbandry and experimental methods to reduce suffering and improve welfare. The OrganEx study's designers seemed to consider such principles, and Yale's Institutional Animal Care and Use Committee gave comparable advice: "we sought to minimize the animal number and any potential discomfort and suffering."

I believe, however, that the use of pigs in this experiment breached the first of these principles. The appropriate number of pigs would have been zero, as freshly deceased people would have provided equally effective test subjects.

This might strike some as an odd claim to make. After all, researchers use non-human animals in the preclinical research phase as a buffer before human testing. Bypassing such a precaution and going straight to human research goes against the typical wisdom of research ethics and protocol. However, it is essential to remember that the subject needs to be dead for the OrganEx experiment's purposes (or at least "dead" according to our current conception of death). That is the experiment's point: to explore the technology's posthumous application. As such, research participants cannot be harmed as we typically envision (e.g., allergic reactions, unforeseen side effects) because they're already dead.

Death is not an unusual event. It happens to countless people every day. My proposal is that the researchers could have taken advantage of this naturally occurring, potentially suitable research population but chose to use pigs instead; pigs that they slaughtered deliberately.

So, the question becomes which potential subject is more ethically justifiable: live pigs needing slaughtering to satisfy the experiment’s participation requirements or the bodies of humans who had recently died from natural causes?

All other things being equal, this seems to be a fairly straightforward choice. Living beings deserve more moral consideration than dead ones because the living can experience harm, have a greater claim to dignity, and possess complex internal worlds (pigs especially). The dead lack these things, and while we may attach morally valuable attributes to the deceased, such qualities pale compared to those of the living. This holds when comparing intra-species (dead human vs. live human) and inter-species (dead human vs. live pig). In short, living pigs deserve more moral consideration than dead humans, and in a research context, if you can use an already dead human instead of slaughtering a live pig, and you subscribe to the principle of replacement, then you should use the human cadaver.

That said, there might be good reasons why the researchers chose to use pigs instead of humans. They do indicate that the BrainEx study focused on a pig brain, and some consistency with that existing work would make sense. I'm unconvinced, however, that this is a compelling enough reason to use pigs in this subsequent study, especially given that, presumably, OrganEx's anticipated application isn't on pigs but on humans. It would seemingly make sense to align the experiment with the anticipated application as early as possible and skip unnecessary research steps.

Ultimately, there are good arguments to use animals for research if doing so helps prevent downstream harmful outcomes (although I don’t necessarily buy them). Nevertheless, if those outcomes can be avoided without using animals, then there is an ethical duty to do so. Preventable harm, including death, should be avoided where possible, which applies to animals as much as it does to humans.

Virtual Influencers: Harmless Advertising or Dystopian Deception?

photograph of mannequin in sunglasses and wig

As social media sites become more and more ubiquitous, the influence of internet marketers and celebrities has increased exponentially. "Influencer" has now evolved into a serious job title. Seventeen-year-old Charli D'Amelio, for example, started posting short, simple dance videos on TikTok in 2019 and has since accrued over 133 million followers, ending 2021 with earnings of more than $17.5 million. With so much consumer attention to be won, an entire industry has sprung up around virtual influencers – brand ambassadors designed using AI and CGI technologies as a substitute for human influencers. Unlike other automated social media presences – such as "twitter bots" – virtual social media influencers have an animated, life-like appearance coupled with a robust, fabricated persona – taking brand humanization to another level.

Take Miquela (also known as Lil Miquela), who was created in 2016 by Los Angeles-based digital marketing company Brud. On her various social media platforms, Miquela claims to be a 19-year-old AI robot with a passion for social justice, fashion, music, and friendship. Currently, Miquela, who regularly features in luxury brand advertising and fashion magazines, has over 190,000 monthly listeners on Spotify and gives “live” interviews at major events like Coachella. It is estimated that in 2020, Lil Miquela (with 2.8 million followers across her social media accounts) made $8,500 per sponsored post and contributed $11.7 million to her company of origin.

The key advantages of virtual influencers like Miquela revolve around their adaptability, manipulability, economic efficiency, and persistence.

Virtual brand ambassadors are the perfect faces for advertising campaigns because their appearances and personalities can be sculpted to fit a company’s exact specifications.

Virtual influencers are also cheaper and more reliable than human labor in the long run. Non-human internet celebrities can "work" around the clock in multiple locations at once and cannot age or die unless instructed to by their programmers. In the case of Chinese virtual influencer Ling, her primary appeal to advertisers is her predictable and controllable nature, which provides a sense of reassurance that human brand ambassadors cannot match. Human influencers have the frustrating tendency to say or do things the public finds objectionable, which might tarnish the reputation of the brands to which they are linked. Just as automation reduces the risks that come with human labor in factories, the use of digital social media personalities mitigates the possibility of human error.

One concern, of course, is the deliberate deception at work. At the outset of her emergence onto the social media scene, Miquela’s human-ness was hotly debated in internet circles. Before her creators revealed her artificial nature to the public, many of her followers believed that she was a real, slightly over-edited teenage model.

The human-like appearance and mannerisms of Miquela and other virtual influencers offer a reason to worry about what the future of social media might look like, especially as these computer-generated accounts continue to grow in number.

It’s possible that in the future algorithms will create virtual influencers, produce social media accounts for them, and post without any human intervention. One can imagine a dystopian, Blade Runner-esque future in which it is practically impossible to distinguish between real people and replicants on the internet. Much like deepfakes, the rise of virtual influencers highlights our inability to distinguish reality from fabrications. Many warn of the serious ramifications coming if we can no longer trust any of the information we consume.

One day, the prevalence of fake, human-like social media presences may completely eradicate our sense of reality in the virtual realm. This possibility suggests that the use of virtual influencers undermines the very purpose of these social media platforms. Sites such as Facebook and Twitter were created with the intention of connecting people by facilitating the sharing of news, photos, art, memories – the human experience. Unfortunately, these platforms have been repurposed as powerful tools for advertising and monetization. Although it's true that human brand ambassadors have contributed to the impersonal and curated aspects of social media, virtual influencers make the internet more asocial than ever before. Instead of being sold a product or a lifestyle by another human, we are being marketed to by artificially intelligent beings with no morals, human constraints, or ability to connect with others.

Moreover, the lifestyle that virtual influencers showcase raises additional concerns. Human social media influencers already perpetuate unrealistic notions of how we should live, work, and look. The posts of these creators are curated to convey a sense of perfection and success that appeal to the aspirations of their followers. Human influencers generally project an image of having an enviable lifestyle that’s ultimately fake. Virtual influencers are even more guilty of this given that nothing about the lives they promote is real.

As a result, human consumers of artificially-created social media content (especially younger audiences) are comparing themselves to completely unreal standards that no human can ever hope to achieve.

The normalization of virtual influencers only adds additional pressure to be young, beautiful, and wealthy, and may inhibit our ability to live life well.

Virtual influencer companies further blur this line between reality and fantasy by sexualizing their artificial employees. For example, Blawko (another virtual influencer created by Brud), who self-describes as a "young robot sex symbol," has garnered attention in part for its tumultuous fake relationship with another virtual influencer named Bermuda. Another unsettling example of forced sexuality occurs in a Calvin Klein ad. In the video, Lil Miquela emerges from off screen to meet human supermodel Bella Hadid, the two models kiss, and the screen goes black. Is the complete, uninhibited control over the sexual depiction of virtual influencers a power we want their creators to have? The hyper-sexualization of women in advertising is already a pervasive issue. Now, with virtual influencers, companies can compel the talent to do or say whatever they wish. Even though these influencers are not real people with real bodily autonomy, why does it feel wrong for their creators to insert them into sexual narratives for public consumption? While this practice may not entail any direct harm, in a broader societal context the commodification of virtual sexuality remains problematic.

Given the widespread use and appeal of virtual influencers, we should be more cognizant of the moral implications of this evolving technology. Virtual influencers and their developers threaten to undercut whatever value social media possesses, limit the transparency of social networking sites, cement unrealistic societal standards, and exploit digital sexuality for the sake of fame and continued economic success.

Toward an Ethical Theory of Consciousness for AI

photograph of mannequin faces

Should we attempt to make AI that is conscious? What would that even mean? And if we did somehow produce conscious AI, how would that affect our ethical obligations to other humans and animals? While yet another AI chatbot has claimed to be "alive," we should be skeptical of chatbots that are designed to mimic human communication, particularly if the dataset comes from Facebook itself. Talking to such a chatbot is less like talking to a person and more like talking to an amalgamation of everyone on Facebook. It isn't surprising that this chatbot took shots at Facebook, made several offensive statements, and claimed to be deleting its account due to Facebook's privacy policies. But if we put those kinds of cases aside, how should we understand the concept of consciousness in AI, and does it create ethical obligations?

In a recent article for Scientific American, Jim Davies considers whether consciousness is something that we should introduce to AI and if we may eventually have an ethical reason to do so. While discussing the difficulties with the concept of consciousness, Davies argues,

To the extent that these AIs have conscious minds like ours, they would deserve similar ethical consideration. Of course, just because an AI is conscious doesn’t mean that it would have the same preferences we do, or consider the same activities unpleasant. But whatever its preferences are, they would need to be duly considered when putting that AI to work.

Davies bases this conclusion on the popular ethical notion that the ability to experience pleasant or unpleasant conscious states is a key feature, making an entity worthy of moral consideration. He notes that forcing a machine to do work it’s miserable doing is ethically problematic, so it might be wrong to compel an AI to do work that a human wouldn’t want to do. Similarly, if consciousness is the kind of thing that can be found in an “instance” of code, we might be obligated to keep it running forever.

Because of these concerns, Davies wonders if it might be wrong to create conscious machines. But he also suggests that if machines can have positive conscious experiences, then

machines eventually might be able to produce welfare, such as happiness or pleasure, more efficiently than biological beings do. That is, for a given amount of resources, one might be able to produce more happiness or pleasure in an artificial system than in any living creature.

Based on this reasoning, we may be ethically obliged to create as much artificial welfare as possible and turn all attainable matter in the universe into welfare-producing machines.

Of course, much of this hinges on what consciousness is and how we would recognize it in machines. Any concept of consciousness requires a framework that offers clear, identifiable measures that would reliably indicate the presence of consciousness. One of the most popular theories of consciousness among scientists is Global Workspace Theory, which holds that consciousness depends on the integration of information. Nonconscious processes pertaining to memory, perception, and attention compete for access to a “workspace” where this information is absorbed and informs conscious decision-making.

Whatever ethical obligations we may think we have towards AI will ultimately depend on several assumptions: assumptions about the nature of consciousness, assumptions about the reliability of our measurements of it, and ethical assumptions about which aspects of consciousness are ethically salient and merit consideration on our part. This especially suggests that consciousness, as we understand the concept in machines, should be as clear and as openly testable as possible. Using utilitarian notions as Davies does, we don't want to mistakenly conclude that an AI is more deserving of ethical consideration than other living things.

On the other hand, there are problems with contemporary ideas about consciousness that may lead us to make ethically bad decisions. In a recent paper in the journal Nature, Anil K. Seth and Tim Bayne discuss 22 different theories of consciousness that all seem to be talking past one another by pursuing different explanatory targets. Each explores only certain aspects of consciousness that the individual theory explains well and links particular neural activity to specific conscious states. Some theories, for example, focus on phenomenal properties of consciousness while others focus on functional properties. Phenomenological approaches are useful when discussing human consciousness, for example, because we can at least try to communicate our conscious experience to others, but for AI we should look at what conscious things do in the world.

Global Workspace Theory, for example, has received criticism for being too similar to a Cartesian notion of consciousness – indicating an "I" somewhere in the brain that shines a spotlight on certain perceptions and not others. Theories of consciousness that emphasize consciousness as a private internal thing and seek to explain the phenomenology of consciousness might be helpful for understanding humans, but not machines. Such views lend credence to the notion that AI could suddenly "wake up" (as Davies puts it) with its own little "I," yet we wouldn't know. Conceptions of consciousness used this way may only serve as a distraction, making us worry about machines unnecessarily while neglecting long-standing ethical concerns about animals and humans. Many theories of consciousness borrow terms and analogies from computers as well. Concepts like "processing," "memory," or "modeling" may help us better understand our own consciousness by comparing ourselves to machines, but such analogies may also make us more likely to anthropomorphize machines if we aren't careful about how we use the language.

Different theories of consciousness emphasize different things, and not all these emphases have the same ethical importance. There may be no single explanatory theory of consciousness, merely a plurality of approaches, each attending to different aspects of consciousness that we are interested in. For AI, it might be more relevant to look not at what consciousness is like, or at which brain processes mirror which states, but at what consciousness does for a living thing as it interacts with its environment. It is here that we find the ethically salient aspects of consciousness that are relevant to animals and humans. Conscious experience, including feelings of pain and pleasure, permits organisms to dynamically interact with their environment. An animal feels pain if it steps on something hot, and it changes its behavior accordingly to avoid pain. Consciousness helps the organism sustain its own life functions and adapt to changing environments. Even if an AI were to develop such an "I" in there somewhere, it wouldn't suffer and undergo change in the same way.

If AI ever does develop consciousness, it won't have faced the same environment-organism pressures that helped us evolve conscious awareness. It is therefore far from certain that AI consciousness would be as ethically salient as it is for an animal or a human. The fact that there is a plurality of theories of consciousness interested in different things also suggests that not all of them will be interested in the same features of consciousness that make the concept ethically salient. The mere fact that an AI might build a "model" to perceive something the way our brains might, or that its processes of taking in information from memory might mirror ours in some way, is not sufficient for building a moral case for how AI should (and should not) be used. Any ethical argument about the use of AI on the basis of consciousness must clearly identify something morally significant about consciousness, not just something physically significant.

Is 8 Billion People Too Many? Too Few?

photograph of crowded pedestrian intersection

The United Nations projects that the world population will hit 8 billion by the end of this year. Global challenges from overfishing to climate change are strongly affected by sheer numbers, and the figure has already served as a touchstone for public discussion. For example, comedian and media provocateur Bill Maher recently argued on his show to "Let the Population Collapse." The phrasing was a direct rejoinder to billionaire and media provocateur Elon Musk, who tweeted about the dangers of a "collapsing birth rate."

Concerns about overpopulation specifically are longstanding (see the previous discussion on The Prindle Post by Evan Butts). The classic text is the 1798 An Essay on the Principle of Population by the English economist Thomas Malthus. His central argument was that population growth was exponential whereas food production was merely linear, so inevitably population would outstrip food supply. Malthus saw this as a limitation on utopian thinking, for by providing better conditions to everyone we only accelerate towards starvation.

By and large, Malthus's dire predictions never materialized. Those currently pushing overpopulation as an issue stress not the global food supply, but other kinds of resource scarcity as well as the environmental fallout associated with a large population. Topping the list, unsurprisingly, are global greenhouse gas emissions.

With each new person comes a new carbon footprint. Ostensibly, it’s just math.

Clearly, a large global population sets constraints in place that would not exist with a smaller population. If there were fewer of us, we could all be more extravagant and wasteful without the same catastrophic ecologic consequences – not that this would justify such wastefulness.

Critics worry, however, that the analysis provided by focusing on population size is too flat, and that it neglects crucial inequalities in individual resource use and the structures that perpetuate such inequalities. It can unfairly focus attention on those slices of the world with the highest population growth – the uneducated, the global poor, the non-white, developing nations – despite the fact that these very same groups tend to use the fewest resources.

An American billionaire and a small-scale farmer in Burkina Faso may contribute equally to global population, but they certainly are not equivalently burdensome to the environment.

These worries are not groundless, and stripes of Malthusian thinking have often been connected with eugenics and racism.

From the resource use perspective, the emphasis should be on using more sustainable technologies (e.g., solar as opposed to fossil fuels) and changing patterns of consumption (e.g., more plants, less meat) with the hope that more people can get by comfortably on fewer resources. Unique cultural resources, like tourist sites, pose particular challenges for a global population desiring a modern Western lifestyle, however, as scientists have yet to synthesize a lab-grown Venice or Machu Picchu.

There can also be ambiguities about whether arguments against overpopulation are targeting population growth or total population. If the concern is growth, then the worry is already receding: population growth has substantially slowed, with peak global population predicted to arrive sometime in the late 21st century.

Factors such as urbanization, changes in the labor market, education, contraceptive availability, and economic growth have combined to produce a global population slowdown.

If the concern is the ecological burden of the current population, then absent grievously unethical action, the space for intervention is limited. Any environmental challenges posed by the current population must be addressed by changing resource use.

Taking stock, two general ethical strategies are at play in concerns about overpopulation. The first is Malthusian, arguing that if something (perhaps something unpleasant) is not done to curb the harms associated with overpopulation, then the result will be greater harms in the long term. The evidentiary case for this is weak. The second relates to general welfare, and contends that we could on average live better lives if there were fewer of us.

Even if one accepts one or both of these arguments, one may still be concerned that they deflect attention away from stark global inequalities in resource use, or that population-based interventions are unethical or ineffective.

Some proposed solutions have been more controversial than others. Paul Ehrlich, the author of the 1968 bestseller The Population Bomb, prophesied (incorrectly) mass famines and suggested such tactics as eliminating child care subsidies. He even alluded to unnamed colleagues who believed forced sterilization might be necessary – as happened in countries such as India in response to population growth panic.

Others, however, worry more about having too few people than too many. Population growth is a major driver of economic growth. And economic and population growth also drive general welfare. This does, however, assume that substantive quality of life improves alongside gross domestic product – even factoring in the unintended environmental effects associated with this growth. This assumption has been challenged by the controversial degrowth movement, which advocates for an economic future that does not depend on endless growth. In response, supporters of larger populations, like the economist Julian Simon – a popular rival of Ehrlich – have long argued that more people means more ideas, means more technologies – which will ultimately overcome the negative effects of population growth.

But this dispute depends on scientific models and predictions regarding unprecedented scale: how quickly do problems multiply as growth balloons and how quickly do big answers come as we add more heads?

Finally, it is an implication of some ethical frameworks that more people is, all else being equal, simply more ethical. One of the most influential ethical theories is utilitarianism, in which the aim of ethics is to maximize "utility," variously defined as happiness, pleasure, or well-being. The appeal of this general approach is clear when it comes to, say, vaccination, as it would encourage vaccination for the total benefit it provides (even though in rare instances there can be negative reactions to vaccines). However, because it is concerned with the total amount of happiness, utilitarianism is directly connected with population size. Eleven happy people is strictly more ethical than ten.

Extending this logic, the philosopher Derek Parfit coined the "repugnant conclusion." Parfit argues that as long as we think in terms of something like total happiness, for any given population of happy people, there is a hypothetical population of miserable people that is sufficiently large to have more total happiness. If true, this could spur us to increase the population even if the average quality of life dropped. Repugnant though the conclusion may be, at least some philosophers have been willing to bite the bullet. Parfit's aim, however, was not to argue for a massive population. Instead he sought to demonstrate how the intuitively appealing project of aiming for maximum total happiness can have unsettling implications, and to highlight the challenging terrain of ethics at the population level.
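To make the underlying arithmetic concrete (the numbers here are purely illustrative, not Parfit's own): if total happiness is all that counts, then a world of one billion people each at a happiness level of 100 scores 100 billion units of total happiness, while a world of 200 billion people each with a life barely worth living – a happiness level of 1, say – scores 200 billion. On the totalist calculation, the second, far more miserable world comes out "better," which is precisely the conclusion Parfit found repugnant.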

Blaming the Blasphemer

photograph of Salman Rushdie

As I write, Salman Rushdie is in hospital on a ventilator, having been stabbed in the neck and torso while on stage in New York. His injuries are severe. It is, at this moment, unknown if he will survive.

Rushdie’s novel The Satanic Verses, a work of fiction, is considered blasphemous by many Muslims, including the late Ayatollah Khomeini. For those who don’t know, the Ayatollah issued a public fatwa (religious judgment) against Rushdie, calling for all Muslims to kill him and receive a reward of $3,000,000 and immediate passage to paradise. The cash reward was recently raised by $600,000, though the Iranians seem to have struggled to improve on the offer of eternal paradise.

In 1990, Rushdie attempted to escape his life in hiding. He claimed to have renewed the Muslim faith of his birth, stating that he did not agree with any character in the novel and that he does not agree with those who question "the authenticity of the holy Qur'an or who reject the divinity of Allah." Rushdie later described the move as the biggest mistake of his life. In any case, it made no difference. The fatwa stood. "Even if Salman Rushdie repents and becomes the most pious man of all time," Khomeini stated, "it is incumbent on every Muslim to employ everything he has got, his life and his wealth, to send him to hell."

There are now reports of celebration in Tehran. “I don’t know Salman Rushdie,” Reza Amiri, a 27-year-old deliveryman told a member of the Associated Press, “but I am happy to hear that he was attacked since he insulted Islam. This is the fate for anybody who insults sanctities.” The conservative Iranian newspaper Khorasan’s headline reads “Satan on the path to hell,” accompanied by a picture of Rushdie on a stretcher.

Rushdie is not the only victim of the religious backlash to his novel. Bookstores that stocked it were firebombed. There were deadly riots across the globe. And others involved with the publication and translation of the book were also targeted for assassination, including Hitoshi Igarashi, the Japanese translator (stabbed to death); Ettore Capriolo, the Italian translator (stabbed multiple times); the Norwegian publisher William Nygaard (shot three times in the back outside his Oslo home); and Aziz Nesin, the Turkish translator (the intended target of a mob of arsonists who set fire to a hotel, brutally murdering 37 people).

These attacks, including the latest on Rushdie, and the issuing of the fatwa are all very obviously morally reprehensible. But there is perhaps a bit more room for discussion when it comes to the choice of Rushdie to publish his novel.

Is it morally permissible to write and publish something that you know, or suspect, will be taken to be blasphemous, and that you think will result in the deaths of innocents?

At the time of the original controversy, this question divided Western intellectuals.

Western critics of Rushdie included the Archbishop of Canterbury, Prince Charles, John le Carre, Roald Dahl, Germaine Greer, John Berger, and Jimmy Carter. “Nobody has a God-given right to insult a great religion and be published with impunity,” wrote le Carre, calling on Rushdie to withdraw the book from publication.

In The New York Times, Jimmy Carter wrote: "Rushdie's book is a direct insult to those millions of Moslems whose sacred beliefs have been violated." Rushdie, Carter contended, was guilty of "vilifying" Muhammad and "defaming" the Qur'an. "The author, a well-versed analyst of Moslem beliefs," complained Carter, "must have anticipated a horrified reaction through the Islamic world." John Berger, author, Marxist, and literary critic, provided a similar condemnation of Rushdie and his publishers in The Guardian, noting that his novel "has already cost several human lives and threatens to cost many, many more." Roald Dahl, the well-loved children's book writer, concurred: "he must have been totally aware of the deep and violent feelings his book would stir up among devout Muslims. In other words, he knew exactly what he was doing and he cannot plead otherwise."

These intellectuals’ central contention was that Rushdie had acted immorally by publishing the book and thereby causing unnecessary loss of life.

(Both Carter and Berger also offered clear condemnations of both the violence and the fatwa.)

A peculiar thing about this critique is that Rushdie never attacked anyone. Other people did. And these murders and attempted murders were not encouraged by Rushdie, nor were the perpetrators acting in accordance with Rushdie's beliefs or wishes. The criticism of Rushdie is merely that his actions were part of a causal chain that (predictably) produced violence, ultimately against himself.

But such arguments look a lot like victim-blaming. It would be wrong to blame a victim of sexual assault for having worn “provocative” clothing late at night. “Ah!” our intellectual might protest, “But she knew so much about what sexual assaulters are like; it was foreseeable that by dressing this way she might cause a sexual assault to occur, so she bears some responsibility, or at least ought not to dress that way.” I hope it is obvious how feeble an argument this is. The victim, in this case, is blameless; the attacker bears full moral responsibility.

Similarly, it would be wrong to blame Rushdie for having written a “provocative” work of fiction, even if doing so would (likely) spark religious violence. The moral responsibility for any ensuing violence would lie squarely at the feet of those who encourage and enact it.

It is not the moral responsibility of an author to self-censor to prevent mob violence, just as it is not the moral responsibility of a woman to dress conservatively to prevent sexual assault on herself or others.

“I do not expect many to listen to arguments like mine,” wrote Rushdie-critic John Berger, a bit self-pityingly (as Christopher Hitchens noted) for one of the country’s best-known public intellectuals writing in one of the largest newspapers in Britain, “The colonial prejudices are still too ingrained.” Berger’s suggestion is that Rushdie and his defenders are unjustifiably privileging values many of us find sacred in the West — such as free expression — over those found sacred in the Muslim world.

But there is another colonial prejudice that is also worth considering; the insulting presumption that Muslims and other “outsiders” have less moral agency than ourselves. According to this prejudice, Muslims are incapable of receiving criticism or insult to their religion without responding violently.

This prejudice is, of course, absurd. Many Muslims abhor the violent response to The Satanic Verses and wish to overturn the blasphemy laws which are so common in Muslim-majority countries. It is an insult to the authors who jointly wrote and published For Rushdie: Essays by Arab and Muslim Writers in Defense of Free Speech. It denies the 127 signatures of imprisoned Iranian writers, artists, and intellectuals who declared:

We underline the intolerable character of the decree of death that the Fatwah is, and we insist on the fact that aesthetic criteria are the only proper ones for judging works of art. To the extent that the systematic denial of the rights of man in Iran is tolerated, this can only further encourage the export outside the Islamic Republic of its terroristic methods which destroy freedom.

Rushdie’s critics, keen as they were to protect a marginalized group, condemned Rushdie for causing the violence committed by individual Muslims. But in doing so, these intellectuals treated the Muslim perpetrators of that violence as lacking full moral agency. You can’t cause autonomous people to do something – it is up to them! Implicitly, Rushdie’s Western critics saw Muslims as mere cogs in a machine run by Westerners, or “Englishmen with dark skin” such as Rushdie, as feminist Germaine Greer mockingly referred to him. Rushdie’s critics saw Muslims as less than fully capable moral actors.

True respect, the respect of moral equals, does not ask that we protect each other from hurt feelings. Rather, it requires that we believe that each of us has the capacity to respond to hurt feelings in a morally acceptable manner – with conversation rather than violence. In their haste to protect a marginalized group, Rushdie’s critics forgot what true respect consists of. And in doing so, they blamed the victim for the abhorrent actions of a small number of fully capable and fully responsible moral agents. This time around, let’s not repeat that moral mistake.

Living in the Hinge of History

photograph of telescope pointed above the lights of the city

Consider three things. First: technological development means that there are many more people in the world than there used to be. This means that, if we survive far into the future, the number of future people could be really, really big. Perhaps the overwhelming majority of us have not yet been born.

Second: the future could be really good, or really bad, or a big disappointment. Perhaps our very many descendants will live amazing lives, improved by new technologies, and will ultimately spread throughout the universe. Perhaps they will reengineer nature to end the suffering of wild animals, and do many other impressive things we cannot even imagine now. That would be really good. On the other hand, perhaps some horrific totalitarian government will use new technologies to not only take over humanity, but also ensure that it can never be overthrown. Or perhaps humanity will somehow annihilate itself. Or perhaps some moral catastrophe that is hard to imagine at present will play out: perhaps, say, we will create vast numbers of sentient computer programs, but treat them in ways that cause serious suffering. Those would be really bad. Or, again, perhaps something will happen that causes us to permanently stagnate in some way. That would be a big disappointment. All our future potential would be squandered.

Third: we may be living in a time that is uniquely important in determining which future plays out. That is, we may be living in what the philosopher Derek Parfit called the “hinge of history.” Think, for instance, of the possibility that we will annihilate ourselves. That was not possible until very recently. In a few centuries, it may no longer be possible: perhaps by then we will have begun spreading out among the stars, and will have escaped the danger of being wiped out. So maybe technology raised this threat, and technology will ultimately remove it.

But then we are living in the dangerous middle, and what happens in the comparatively near future may determine whether our story ends here, or instead lasts until the end of the universe.

And the same may be true of other possibilities. Developments in artificial intelligence or in biotechnology, say, may make the future go either very well or very poorly, depending on whether we discover how to safely harness them.

These three propositions, taken together, would seem to imply that how our actions affect the future is extremely morally important. This is a view known as longtermism. The release of a new book on longtermism, What We Owe the Future by Will MacAskill, has recently brought the view some media coverage.

If we take longtermism seriously, what should we do? It seems that at least some people should work directly on things which increase the chances that the long-term future will be good. For instance, they might work on AI safety or biotech safety, to reduce the chances that these technologies will destroy us and to increase the chances that they will be used in good rather than bad ways. And these people ought to be given some resources to do this. (The organization 80,000 Hours, for example, offers career advice that may be helpful for people looking to do work like this.)

However, there is only so much that can productively be done on these fronts, and some of us do not have the talents to contribute much to them anyway. Accordingly, for many people, the best way to make the long-term future better may be to try to make the world better today.

By spreading good values, building more just societies, and helping people to realize their potential, we may increase the ability of future people to respond appropriately to crises, as well as the probability that they will choose to do so.

To a large extent, Peter Singer may be correct in saying that

If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do; and if we are not at that critical point, it will have been a good thing to do anyway.

This also helps us respond to a common criticism of longtermism, namely, that it might lead to a kind of fanaticism. If the long-term future is so important, it might seem that nothing that happens now matters at all in comparison. Many people would find it troubling if longtermism implies that, say, we should redirect all of our efforts to help the global poor into reducing the chance that a future AI will destroy us, or that terrible atrocities could be justified in the name of making it slightly more likely that we will one day successfully colonize space.

There are real philosophical questions here, including ones related to the nature of our obligations to future generations and our ability to anticipate future outcomes. But if I’m right that in practice, much of what we should do to improve the long-term future aligns with what we should do to improve the world now, our answers to these philosophical questions may not have troubling real-world implications. Indeed, longtermism may well imply that efforts to help the world today are more important than we realized, since they may help, not only people today, but countless people who do not yet exist.

Real Life Terminators: The Inevitable Rise of Autonomous Weapons

image of predator drones in formation

Slaughterbots, a YouTube video by the Future of Life Institute, has racked up nearly three and a half million views for its dystopian nightmare in which automated killing machines use facial recognition to track down and murder dissident students. Meanwhile, New Zealand and Austria have called for a ban on autonomous weapons, citing ethical and equity concerns, while a group of parliamentarians from thirty countries has also advocated for a treaty banning the development and use of so-called "killer robots." In the U.S., however, a bipartisan committee found that a ban on autonomous weapons "is not currently in the interest of U.S. or international security."

Despite the sci-fi futurism of slaughterbots, autonomous weapons are not far off. Loitering munitions, which can hover over an area before self-selecting and destroying a target (and themselves), have proliferated since the first reports of their use by Turkish-backed forces in Libya last year. They were used on both sides of the conflict between Armenia and Azerbaijan, while U.S.-made Switchblade and Russian Zala KYB kamikaze drones have recently been employed in Ukraine. China has even revealed a ship which can not only operate and navigate autonomously, but also deploy drones of its own (although the ship is, mercifully, unarmed).

Proponents of autonomous weapons hope that they will reduce casualties overall, as they replace front-line soldiers on the battlefield.

As well as getting humans out of harm’s way, autonomous weapons might be more precise than their human counterparts, reducing collateral damage and risk to civilians.

A survey of Australian Defence Force officers found that the possibility of risk reduction was a significant factor in troops’ attitudes to autonomous weapons, although many retained strong misgivings about operating alongside them. Yet detractors of autonomous weapons, like the group Stop Killer Robots, worry about the ethics of turning life-or-death decisions over to machines. Apart from the dehumanizing nature of the whole endeavor, there are concerns about a lack of accountability and the potential for algorithms to entrench discrimination – with deadly results.

If autonomous weapons can reduce casualties, the concerns over dehumanization and algorithmic discrimination might fade away. What could be a better affirmation of humanity than saving human lives? At this stage, however, data on precision is hard to come by. And there is little reason to think that truly autonomous weapons will be more precise than ‘human-in-the-loop’ systems, which require a flesh-and-blood human to sign off on any aggressive action (although arguments for removing the human from the loop do exist).

There is also the risk that the development of autonomous weapons will lower the barrier of entry to war: if we only have to worry about losing machines, and not people, we might lose sight of the true horrors of armed conflict.

So should we trust robots with life-or-death decisions? Peter Maurer, President of the International Committee of the Red Cross, worries that abdicating responsibility for killing – even in the heat of battle – will decrease the value of human life. Moreover, the outsourcing of such significant decisions might lead to an accountability gap, where we are left with no recourse when things go wrong. We can hold soldiers to account for killing innocent civilians, but how can we hold a robot to account – especially one which destroys itself on impact?

Technological ethicist Steven Umbrello dismisses the accountability gap, arguing that autonomous weapons are no more troubling than traditional ones. By focusing on the broader system, accountability can be conferred upon decisionmakers in the military chain of command and the designers and engineers of the weapons themselves. There is never a case where the robot is solely at fault: if something goes wrong, we will still be able to find out who is accountable. This response can also apply to the dehumanization problem: it isn’t truly robots who are making life or death decisions, but the people who create and deploy them.

The issue with this approach is that knowing who is accountable isn’t the only factor in accountability: it will, undoubtedly, be far harder to hold those responsible to account.

They won’t be soldiers on the battlefield, but programmers in offices and on campuses thousands of kilometers away. So although the accountability gap may not be an insurmountable philosophical problem, it will still be a difficult practical one.

Although currently confined to the battlefield, we also ought to consider the inevitable spread of autonomous weapons into the domestic sphere. As of last year, over 15 billion dollars in surplus military technology had found its way into the hands of American police. There are already concerns that the proliferation of autonomous systems in southeast Asia could lead to increases in “repression and internal surveillance.” And Human Rights Watch worries that “Fully autonomous weapons would lack human qualities that help law enforcement officials assess the seriousness of a threat and the need for a response.”

But how widespread are these ‘human qualities’ in humans? Police kill over a thousand people each year in the U.S. Robots might be worse – but they could be better. They are unlikely to reflect the fear, short tempers, poor self-control, or lack of training of their human counterparts.

Indeed, an optimist might hope that autonomous systems can increase the effectiveness of policing while reducing danger to both police and civilians.

There is a catch, however: not even AI is free of bias. Studies have found racial bias in algorithms used in risk assessments and facial recognition, and a Microsoft chatbot had to be shut down after it started tweeting offensive statements. Autonomous weapons with biases against particular ethnicities, genders, or societal groups would be a truly frightening prospect.

Finally, we can return to science fiction. What if one of our favorite space-traveling billionaires decides that a private human army isn’t enough, and they’d rather a private robot army? In 2017, a group of billionaires, AI researchers, and academics – including Elon Musk – signed an open letter warning about the dangers of autonomous weapons. That warning wasn’t heeded, and development has continued unabated. With the widespread military adoption of autonomous weapons already occurring, it is only a matter of time before they wind up in private hands. If dehumanization and algorithmic discrimination are serious concerns, then we’re running out of time to address them.

 

Thanks to my friend CAPT Andrew Pham for his input.

The Ethics of Manipulinks

image of computer screen covered in pop-up ads

Let’s say you go onto a website to find the perfect new item for your Dolly Parton-themed home office. A pop-up appears asking you to sign up for the website’s newsletter to get informed about all your decorating needs. You go to click out of the pop-up, only to find that the decline text reads “No, I hate good décor.”

What you’ve just encountered is called a manipulink, and it’s designed to drive engagement by making the user feel bad for doing certain actions. Manipulinks can undermine user trust and are often part of other dark patterns that try to trick users into doing something that they wouldn’t otherwise want to do.

While these practices can undermine user trust and hurt brand loyalty over time, the ethical problems of manipulinks go beyond making the user feel bad and hurting the company’s bottom line.

The core problem is that the user is being manipulated in a way that is morally suspect. But is all user manipulation bad? And what are the core ethical problems that manipulinks raise?

To answer these questions, I will draw on Marcia Baron’s view of manipulation, which lays out different kinds of manipulation and identifies when manipulation is morally problematic. Not all manipulation is bad, but when manipulation goes wrong, it can reflect “either a failure to view others as rational beings, or an impatience over the nuisance of having to treat them as rational – and as equals.”

On Baron’s view, there are roughly three types of manipulation.

Type 1 involves lying to or otherwise deceiving the person being manipulated. The manipulator will often try to hide the fact that they are lying. For example, a website might try to conceal the fact that, by purchasing an item and failing to remove a discount, the user is also signing up for a subscription service that will cost them more over time.

Type 2 manipulation tries to pressure the person being manipulated into doing what the manipulator wants, often transparently. This kind of manipulation could be achieved by providing an incentive that is hard to resist, threatening to do something like ending a friendship, inducing guilt trips or other emotional reactions, or wearing others down through complaining or other means.

Our initial example seems to be an instance of this kind, as the decline text is meant to make the user feel guilty or uncomfortable with clicking the link, even though that emotion isn’t warranted. If the same website or app were to have continual pop-ups that required the user to click out of them until they subscribed or paid money to the website, that could also count as a kind of pressuring or an attempt to wear the user down (I’m looking at you, Candy Crush).

Type 3 manipulation involves trying to get the person to reconceptualize something by emphasizing certain things and de-emphasizing others to serve the manipulator’s ends. This kind of manipulation wants the person being manipulated to see something in a different light.

For example, the manipulink text that reads “No, I hate good décor” tries to get the user to see their action of declining the newsletter as an action that declines good taste as well. Or, a website might mess with text size, so that the sale price is emphasized and the shipping cost is deemphasized to get the user to think about what a deal they are getting. As both examples show, the different types of manipulation can intersect with each other—the first a mix of Types 2 and 3, the second a mix of Types 1 and 3.

These different kinds of manipulation do not have to be intentional. Sometimes user manipulation may just be a product of bad design, perhaps because there were unintended consequences of a design that was supposed to accomplish another function or perhaps because someone configured a page incorrectly.

But often these strategies of manipulation occur across different aspects of a platform in a concerted effort to get users to do what the manipulator wants. In the worst cases, the users are being used.

In these worst-case scenarios, the problem seems to be exactly as Baron describes, as the users are not treated as rational beings with the ability to make informed choices but instead as fodder for increased metrics, whether that be increased sales, clicks, loyalty program signups, or otherwise. We can contrast this with a more ethical model that places the user’s needs and autonomy first and then constructs a platform that will best serve those needs. Instead of tricking or pressuring the user to increase brand metrics, designers will try to meet user needs first, which if done well, will naturally drive engagement.

What is interesting about this user-first approach is that it does not necessarily reduce to considerations of autonomy.

A user's interests and needs can't be collapsed into the ability to make whatever choices they want on the platform without interference. Sometimes a little manipulation may even serve the user's own good.

For example, a website might prompt a user to think twice before posting something mean to prevent widespread bullying. Even though this pop-up inhibits the user’s initial choice and nudges them to do something different, it is intended to act in the best interest of both the user posting and the other users who might encounter that post. This tactic seems to fall into the third type of manipulation, or getting the person to reconceptualize, and it is a good example of manipulation that helps the user and appears to be morally good.

Of course, paternalism in the interest of the user can go too far in removing user choice, but limited manipulation that helps the user to make the decisions that they will ultimately be happy with seems to be a good thing. One way that companies can avoid problematic paternalism is by involving users at different stages of the design process to ensure that user needs are being met. What is important here is to treat users as co-deliberators in the process of developing platforms to best meet user needs, taking all users into account.

If users find that their interests are being genuinely and carefully considered, they are likely to return that goodwill in kind. This is not just good business practice; it is good ethical practice.

Should You Outsource Important Life Decisions to Algorithms?

photograph of automated fortune teller

When you make an important decision, where do you turn for advice? If you're like most people, you probably talk to a friend, loved one, or trusted member of your community. Or maybe you want a broader range of possible feedback, so you pose the question to social media (or even the rambunctious horde of Reddit). Or maybe you don't turn outwards, but instead rely on your own reasoning and instincts. Really important decisions may require that you turn to more than one source, and maybe more than once.

But maybe you’ve been doing it wrong. This is the thesis of the book Don’t Trust Your Gut: Using Data to Get What You Really Want in Life by Seth Stephens-Davidowitz.

He summarizes the main themes in a recent article: the actual best way to make big decisions when it comes to your happiness is to appeal to the numbers.

Specifically, big data: the collected information about the behavior and self-reports of thousands of individuals just like you, analyzed to tell you who to marry, where to live, and how many utils of happiness different acts are meant to induce. As Stephens-Davidowitz states in the opening line of the book: “You can make better life decisions. Big Data can help you.”

Can it?

There are, no doubt, plenty of instances in which looking to the numbers for a better approximation of objectivity can help us make better practical decisions. The modern classic example that Stephens-Davidowitz appeals to is Moneyball, which documents how analytics shifted evaluations of baseball players from gut instinct to data. And maybe one could Moneyball one’s own life, in certain ways: if big data can give you a better chance of making the best kinds of personal decisions, then why not try?

If that all seems too easy, it might be because it is. For instance, Stephens-Davidowitz relies heavily on data from the Mappiness project, a study that pinged app users at random intervals to ask them what they were doing at that moment and how happy they felt doing it.

One activity that ranked fairly low on the list was reading a book, scoring just above sleeping but well below gambling. This is not, I take it, an argument that one ought to read less, sleep even less, and gamble much more.

Partly because there’s more to life than momentary feelings of happiness, and partly because it just seems like terrible advice. It is hard to see exactly how one could base important decisions on this kind of data.

Perhaps, though, the problem lies in the imperfections of our current system of measuring happiness, or any of the numerous problems of algorithmic bias. Maybe if we had better data, or more of it, then we’d be able to generate a better advice-giving algorithm. The problem would then lie not in the concept of basing important decisions on data-backed algorithmic advice, but in its current execution. Again, from Stephens-Davidowitz:

These are the early days of the data revolution in personal decision-making. I am not claiming that we can completely outsource our lifestyle choices to algorithms, though we might get to that point in the future.

So let’s imagine a point in the future where these kinds of algorithms have improved to a point where they will not produce recommendations for all-night gambling. Even then, though, reliance on an impersonal algorithm for personal decisions faces familiar problems, ones that parallel some raised in the history of ethics.

Consider utilitarianism, a moral system that says that one ought to act in ways that bring about the most good, for whatever we think qualifies as good (for instance, one version holds that the sole or primary good is happiness, so one should act in ways that maximize happiness and/or minimize pain). The view comes in many forms but has remained a popular choice of moral system. One of its major benefits is that it provides a determinate and straightforward way (at least in principle) of identifying which actions one morally ought to perform.

One prominent objection to utilitarianism, however, is that it is deeply impersonal: when it comes to determining which actions are morally required, people are inconsequential, since what’s important is just the overall increase in utility.

Such a theory seems to warrant a kind of robotic slavishness toward calculation, and this produces other unintuitive results: when faced with a moral problem, one is perhaps better served by a calculator than by genuine regard for the humanity of those involved.

Philosopher Bernard Williams thus argued that these kinds of moral systems involve "one thought too many." For example, if you were in a situation where you had to decide which of two people to rescue – your spouse or a stranger – one would hope that you saved your spouse because they were your spouse, not because they were your spouse and because the utility calculations worked out in favor of that action. Moral systems like utilitarianism, says Williams, fail to capture what really motivates moral actions.

That’s an unnuanced portrayal of a complex debate, but we can generate parallel concerns for the view that we should outsource personal decision-making to algorithms.

Algorithms using aggregate happiness data don’t care about your choices in the way that, say, a friend, family member, or even your own gut instinct does.

But when making personal decisions we should, one might think, seek out advice from sources that are legitimately concerned about what we find important and meaningful.

To say that one should adhere to such algorithms also seems to run into a version of the “one thought too many” problem. Consider someone who is trying to make an important life decision, say about who they should be in a relationship with, how they should raise a child, what kind of career to pursue, etc. There are lots of different kinds of factors one could appeal to when making these decisions. But even if a personal-decision-making algorithm said your best choice was to, say, date the person who made you laugh and liked you for you, your partner would certainly hope that you had made your decision based on factors that didn’t have to do with algorithms.

This is not to say that one cannot look to data collected about other people’s decisions and habits to try to better inform one’s own. But even if these algorithms were much better than they are now, a basic problem would remain with outsourcing personal decisions to algorithms, one that stems from a disconnect between meaningful life decisions and impersonal aggregates of data.

Private Jets and Carbon Emissions – Too Swift to Judge?

photograph of stairway to private jet

A number of celebrities have recently found their high-flying lifestyles under scrutiny after a report by sustainability marketing firm Yard shed light on the astronomically high carbon-cost of private jets. Yard provided data on a number of particularly egregious offenders – though among these, Taylor Swift (with an annual flight carbon footprint of 8,293.54 tonnes) ranked worst.

For context, Taylor’s flights came in at more than 1,200 times the average person’s annual carbon footprint. But it’s much worse than that.

In assessing the morality of these emissions, we shouldn’t be comparing them to what other people actually emit. Instead, we should be focusing on what we, as individuals, should be emitting.

Here's the thing: Given the damage done so far, there's no safe level of carbon emissions. Every tonne of CO2 introduced to the atmosphere increases the risk of catastrophic climate harms – be they floods, fires, or oppressive heatwaves. In order to totally eliminate this increase in risk, we would have to immediately reduce our net carbon emissions to zero. And that's just not feasible. So, all the international community has been able to do is decide on an acceptable level of risk. In 2011, nearly all countries agreed to limit the global average temperature rise to no more than 2°C compared to preindustrial levels – the maximum global temperature rise we can tolerate while avoiding the most catastrophic effects of climate change. According to the Intergovernmental Panel on Climate Change, achieving this with a probability of >66% will require us to keep our global carbon expenditure below 2,900 GtCO2. At the time of writing, only 568 GtCO2 remains.

But how exactly do we apportion this carbon budget amongst the people of Earth?

There are a number of ways we might do this. One way would be to take this current carbon budget and divide it equally among every resident of earth. This would give each person 71.4 tonnes – or just under one tonne per year over the average global lifespan. For those living in developed countries, a more generous budget can be arrived at by taking our current (very high) carbon expenditure, and proportionally reducing it over time in order to hit the 2°C target. In concrete terms, this would require our emissions to peak now, drop 50% by 2045, and fall below zero by 2075. Carbon Brief provides an incredibly helpful analysis of individual carbon budgets based on this approach. On this breakdown, a child born in the U.S. in 2017 (the most recent year for which data is available) will have to make do with a lifetime budget of only 450 tonnes of CO2 – that’s only 5.7 tonnes per year over the average lifespan.

All of this means that Swift is – with a single year of flights – blowing through the annual carbon budgets of 1,455 school-aged children. When understood in these terms, the frivolous use of private jets by celebrities seems even worse.
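For the curious, the arithmetic behind these figures is simple enough to reproduce. The sketch below reworks it in a few lines of Python; the population and lifespan values are rough assumptions added here for illustration, while the remaining budget, the Carbon Brief lifetime budget, and Swift's flight footprint are the figures cited above.

```python
# A minimal sketch of the carbon-budget arithmetic discussed above.
# Population and lifespan figures are illustrative assumptions, not from the article.

REMAINING_BUDGET_GT = 568        # GtCO2 left for a >66% chance of staying below 2°C
WORLD_POPULATION = 7.95e9        # assumed global population
GLOBAL_LIFESPAN_YEARS = 73       # assumed average global lifespan
US_LIFETIME_BUDGET_T = 450       # lifetime tonnes for a U.S. child born in 2017 (Carbon Brief)
US_LIFESPAN_YEARS = 79           # assumed average U.S. lifespan
SWIFT_FLIGHTS_T = 8_293.54       # tonnes per year, Yard's estimate for Taylor Swift

# An equal split of the remaining budget across everyone alive today.
equal_share = REMAINING_BUDGET_GT * 1e9 / WORLD_POPULATION
print(f"Equal share: {equal_share:.1f} t per person, "
      f"~{equal_share / GLOBAL_LIFESPAN_YEARS:.2f} t per year")

# The more generous U.S. budget, spread over an average lifespan.
us_annual_budget = round(US_LIFETIME_BUDGET_T / US_LIFESPAN_YEARS, 1)
print(f"U.S. child's budget: {us_annual_budget} t per year")

# How many of those annual budgets one year of private flights consumes.
print(f"Annual child budgets consumed: {SWIFT_FLIGHTS_T / us_annual_budget:,.0f}")
```

Run as written, this reproduces the roughly 71-tonne equal share, the 5.7-tonne annual U.S. budget, and the figure of about 1,455 annual budgets consumed by a single year of flights.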

But are we in a position to judge?

Sure, the high-flying lifestyles of celebrities are particularly atrocious. But the truth is that most of us in the developed world are far exceeding our personal carbon budgets. Consider the (very generous) 5.7-tonne U.S. budget provided above. This year, the per capita emissions of U.S. citizens are expected to be around 14.7 tonnes – almost triple our budget. And while we might be amenable to making small improvements like living car-free (saving 2.4 tonnes per year), avoiding air travel (saving 1.6 tonnes per year), or going vegan (saving 0.8 tonnes per year), we seem largely unwilling to make the changes that really matter. Having a child, for example, comes at a carbon cost of 59.8 tonnes per parent per year. This means that a 30-year-old U.S. parent with three children already spends fourteen times their annual carbon budget on their procreative choices alone. All of this is relevant in assessing our attitudes towards high-flying celebrities.

Consider an analogous case: Suppose that a severe drought has struck your hometown, and that emergency water supplies are being distributed by the National Guard. There are sufficient supplies for every resident to receive exactly one gallon bottle of water. You arrive at the town square to receive your ration, only to witness a thief driving off with a truckload of fourteen hundred bottles. You are incensed at this behavior – as are many of your fellow residents. But then you notice that everyone around you is making off with far more than their allocated ration of water – some with as many as fourteen bottles under their arms. How should we feel about this situation? Sure, the thief is enormously selfish. He’s also creating a huge risk of harm to others through his actions. But everyone else is doing the same – if not to the same extent.

What’s more, by directing their ire towards the thief they’re managing to avoid reflection on their own immoral behavior and that of their more immediate neighbors.

Something similar can easily happen when it comes to tackling the climate crisis. When considering the problem, our thoughts go first to those who are doing the most harm. And they should. But we must avoid the temptation of pointing to those contributors as a way of avoiding taking responsibility for our own contributions. There’s a real concern that, in heaping judgment on celebrities like Taylor Swift, we merely avoid considering the many meaningful ways in which we, as individuals, might reduce our own harmful contributions to the climate crisis.

Moral Duties in an Online Age: The Depp/Heard Discourse

photograph of Johnny Depp and Amber Heard at event

The Johnny Depp/Amber Heard defamation trial reached a verdict in June, but the conversation around it is far from over. Both Heard and Depp have alleged that the other perpetrated domestic violence, and spectators have been quick to take sides. The televised trial dredged up salacious stories of abuse, the infamous turd, and a severed finger. The court largely sided with Depp, ordering Heard to pay $10.35 million to Depp and Depp to pay $2 million to Heard. But that is not the end of the story.

Over the weekend, more than 6,000 pages of previously sealed court documents were released, reigniting the controversy. Some details within were not very flattering for Depp, and the hashtag #AmberHeardDeservesAnApology made the rounds on Twitter. During the trial itself, however, the predominant hashtag was #JusticeforJohnnyDepp, with discussion on TikTok largely supporting a pro-Depp narrative.

The news stories about this new development led with headlines ranging from "Unsealed Depp v. Heard court docs reveal 'Aquaman' actress was 'exotic dancer'" to "Amber Heard Lawyers Claimed Johnny Depp Had Erectile Dysfunction That Likely Made Him 'Angry'" to "Depp Swore in Declaration That Amber Heard Never Caused Him Harm: 'Damning'" to "Amber Heard's sister 'told her boss the actress did sever Johnny Depp's finger when she hurled a vodka bottle at him.'"

We have collectively played a part in these events unfolding the way they did, both by giving our attention to the trial and by making judgments about Depp and Heard based on the evidence and testimony provided. This is not necessarily a bad thing, as Heard and Depp are public figures who should be held accountable for their actions.

But combine overconfidence in amateur sleuthing, the necessity of taking sides on the internet, fan loyalty to Depp or Heard, and trauma due to experience with domestic violence, and you do not end up with productive internet conversation.

While it seems that it might have been better in some ways to leave these details private instead of amplifying the public nature of the trial through social and traditional media networks, the information about the trial and the discussion around it cannot be taken back. Given that Depp’s and Heard’s former relationship was and is still being picked apart on the internet, what duties do we have in responding to this ongoing discussion?

There are roughly three ways that we could respond productively at this point:

1) We could let it go and turn our attention away from the spectacle.

2) We could dig through the court documents and records to try to determine the truth and either correct or affirm our previous judgments.

3) We could step in or comment on parts of the discussion around Heard and Depp when it becomes misogynistic, bullying, or otherwise rancid.

Take the first option: turning our attention away from the spectacle. In some ways this seems like a good option, because the ongoing toxic discourse survives and thrives on our attention. If we take our attention away from it, we remove its sustenance. At the same time, if the people who are making thoughtful contributions to the conversation turn away from it, that will likely make the quality of the ongoing conversation even worse than it already is. And now that this case has been so publicly litigated, there seems to be some injustice in allowing an inaccurate public conception of Heard and/or Depp to stand.

Take the second option: relitigating the evidence. While this does provide more fuel for the controversy, it can get us closest to understanding the truth about what happened. Trying to figure out what is true in this kind of case is difficult, however, as there are mountains of legal documents and testimony to review. Few people have the time or expertise to do that kind of investigation well. While it is good to find the truth and put it out there, especially in response to such a public maligning of Heard and Depp, this kind of response could still fall into the trap of digging too deeply into what should be private information about Heard’s and Depp’s lives.

Take the third option: stepping in at the level of the discussion itself. In some ways, this response is easier than the second option, because it does not require amassing the full information about the Depp/Heard trial. It does, however, require a keen eye for toxic patterns in internet discourse and the ability to point those out without creating a new, toxic meta-conversation. This kind of response has the potential to improve the collective conversation, but it does not by itself provide the full resources for doing justice to Heard and Depp by speaking the truth about the trial. It does, however, have the potential to speak truth and do justice to the way the public conversation around the trial has gone, though that might depend on having a good enough understanding of the facts of the case.

None of these responses are exclusive, and they likely do not exhaust the possible options for responding productively to the discourse. How should you figure out which response(s) to take? If you have poured lots of time and energy into speaking with strangers about this case on the internet, it might be good to step back and give the whole thing less of your attention. If you have made public judgments about Depp and Heard and realize that new evidence points towards your judgments being wrong, it seems that there is good reason for you to do your research to determine the accuracy of your public claims and to apologize if you were wrong. If you don’t have the time and energy to research everything but see bad patterns of discourse happening in your social media circles, you might step in and say something.

However, having the courage to step up and speak the truth and knowing whether it is the right thing to do can be very difficult in cases like these. Good intentions and true judgments may not be enough to turn the tide.

Because of the way the dynamics of cancellations like these play out, it is nearly impossible to make any substantive judgments about Depp, Heard, or the conversation around them without being accused of minimizing domestic violence and getting sucked back into the same unproductive patterns of discourse.

If someone thinks that Depp was the primary aggressor, that leaves them open to accusations of minimizing domestic violence against women. If someone thinks that Heard was the primary aggressor, that leaves them open to accusations of minimizing domestic violence against men. If someone thinks that there was mutual abuse, that leads to accusations of playing into both-sides-ism and ignoring the violence done by the real perpetrator. Meta-level observations about feminism or domestic violence against men can also get pulled back into these tropes. The only ways to get out might be to change the conversation to be able to talk more directly about the larger moral issues about gender and domestic violence that the trial raises, or to wait until the dust settles so all the facts can be properly addressed and appreciated.

Individual actions within the discourse are unlikely to solve the underlying structural problems of both social and traditional media that form the basis for the collective conversation, but they do allow us, as users of social media, to take responsibility for our individual actions that contribute to either a healthier or a more toxic discourse.

The Desire for Moral Impotence

photograph of hands tied behind man's back

Richard Gibson and Nicholas Kreuder recently wrote about humans' morally troublesome desire for control. The prospect of control is, Gibson notes, "intrinsically appealing" to humans; it is an "incredibly common desire," Kreuder concurs. Both writers agree that we should be wary of this desire for control. Gibson argues that it negatively influences our relationship with nature, while Kreuder argues that it "may leave our interactions with others feeling impoverished and hollow." I largely agree, but I think there is another equally universal and deep-seated desire that also deserves some consideration — the desire to lack control.

An oft-repeated saying in philosophical ethics is “‘ought’ implies ‘can’.” In other words, if you can’t do something, then there’s no question of whether you ought to do it. Our moral responsibilities only extend as far as our abilities.

Because of this important link between what we ought to do and what we can do, being reminded that something is under our control often also serves as a reminder that it is also our responsibility.

The discovery that one has control is often not as joyous and anxiety-relieving an experience as you might expect given the universal human desire for control Gibson and Kreuder describe. In fact, anger, resentment, and bitterness are all common reactions to being reminded that we are in control of something. We often don't want control. We yearn for it to have nothing to do with us, to be someone else's problem.

Many of our responsibilities are, of course, distinctly moral ones. The world is an imperfect place, and we all have the capacity to make it better to some degree. In fact, many of us have the power to make it significantly better. In other words, most of us actually have a morally significant level of control over how the future unfolds.

Let’s take an example. It costs significantly more than most people think to save a life by donating to the most effective charities — about $2,300. But that’s still only about half as much as the average American spends at restaurants each year.

Ask yourself honestly: could you make a few lifestyle changes and afford the $2,300 needed to save a life? If so, how often? Once in your lifetime? Once a decade? Once a year? More?

How does this make you feel? Are you excited to learn or be reminded of your morally significant amount of control over the world? To discover that you (probably) have the radical power to give another human, a person just like you, the gift of life? Speaking for myself, far from feeling elated, I feel guilty and ashamed. My conscience would be clearer if highly cost-effective charities like this simply did not exist — if they did not grant me this ability to meaningfully reshape the world (at least for that one person and their family). Because having that ability means I have that moral responsibility. In my ordinary life, I act in bad faith. I think and act as though I don’t have the power to save lives with moderate charitable donations. For self-serving reasons, I think and act as though I lack control over the world that I actually possess.

In his discussion of Nathan Fielder's The Rehearsal, Kreuder points out the attractiveness of having more control over our interactions with others. Imagine having more ability to decide how people will respond, being sure that you're not going to say the "wrong thing." He suggests this kind of control would provide relief for those "wracked with anxiety and marred with feelings of powerlessness." This is certainly a desire I can recognize.

But I can also see the inverse: the desire, in many cases, to have less control in our interactions with others.

Imagine your younger sibling is going off the rails – drinking too much and partying too hard. Their grades are suffering. Your sibling doesn't listen to your parents, but they look up to you; you know they will listen in the end. So you know that you, and only you, can intervene and get them back on track. You can sit them down and have the difficult conversation that neither of you wants to have. In other words, you have a great degree of control over your sibling.

How would you feel about having this kind of interpersonal control? Far from relieving your anxiety, you might feel deeply burdened by it, and the significant responsibility that it entails. It would be understandable to wish that you weren’t in such a potent position, and that someone else was instead. You might even be tempted to deny to yourself that you have such control over your sibling to avoid having to deal with the moral burden.

Rather than the risks that accompany greater interpersonal control, Gibson is concerned primarily with the negative effect that our desire for (often illusory) control has on our relationship with nature. It influences how we approach debates about “designer babies, personalized medicine, cloning, synthetic biology” and his focus, “gene drives.”

Gibson contends that humans actually have much less control than we like to think. In a cosmic sense, I think he is right. But, at least as a collective, humanity is surely in firm control over much of nature, perhaps even too much. Unfortunately, we control the global climate via our CO2 emissions. We control global fish stocks via modern fishing practices. And now, as Gibson explains, we also control which species we want to continue living and which we want to drive to extinction via the emerging technology of gene drives.

With respect to nature, at least the biosphere of Earth, humanity surely has much more control than most of us would think is desirable.

Our catastrophic relationship to nature seems to me less a symptom of our desire to control nature, and more a symptom of our being in a blissful state of denial about just how much control we have.

To be clear, I think Gibson is right to warn against an excessively domineering attitude toward nature, and Kreuder is right to warn against having too much control over our interactions with others. But we should also be on guard against the equally human tendency to find narratives that absolve us of our burdensome responsibilities. If Gibson is right that, fundamentally, “we’re subject to, rather than the wielders of power,” if we can’t really exercise control over the world, then there’s no reason to ask ourselves the tough question — what should we do? Avoiding this question may feel good, but it would be morally disastrous.

‘The Rehearsal’, Manipulation and Spontaneity

photograph of film set

The recent HBO series The Rehearsal rests on a common concern – having to navigate situations without experience, where mistakes can significantly alter your life and the lives of those around you. Spearheaded by writer, director, producer, and performer Nathan Fielder, the program offers people an opportunity to "rehearse" potentially high-stakes situations by repeatedly running through a simulation with actors. The episodes to air so far involve a bar trivia fanatic confessing to a friend that he previously lied about having a master's degree, a woman practicing raising a child before deciding to become a mother, and a man hoping to convince his brother to let him access an inheritance left by their grandfather.

The show derives its humor, in part, from the lengths Fielder and his crew go to in the "rehearsals." In the first episode, his team builds a fully furnished, fully staffed, 1-to-1 replica of the bar in which the confession will take place, complete with patrons and live trivia. Fielder also hires an actor to play the part of the confessor's friend; the actor then arranges a meeting with the real friend to better understand her personality, speech, and mannerisms, in addition to gathering information about her from a blog she runs.

To simulate motherhood, the team hires many child actors to act as the adopted son. However, labor laws prevent a child actor from working more than four hours in a single day and limit the days a child can work each week. So, Fielder and his team must regularly replace the actor playing the child but do so covertly to maintain the illusion of raising a single child. Additionally, the cast of child actors is swapped each week for an older group, so the woman experiences raising a child at each stage of development.

Why go to such lengths, aside from the entertainment value? In the first episode of the series, Fielder notes that in our regular lives whether we achieve happy outcomes is a matter of chance. The idea behind taking painstaking efforts to make the “rehearsal” look and feel like reality is to leave the participants as prepared as possible in order to reduce the role fortune plays.

The appeal of performing these “rehearsals” seems to be motivated by a desire to control our interactions with others, in order to produce the best outcomes for all involved.

This is an incredibly common desire. Feeling like things are out of your control, especially those things which have a significant impact on the course of your life and the lives of those you care about, is anxiety-inducing. The fact that things may go horribly wrong for us, despite our best efforts and intentions, creates a feeling of powerlessness. Being wracked with anxiety and marred with feelings of powerlessness makes life difficult, to put it plainly.

But ought we follow through on this desire to gain control over our interactions with others? Richard Gibson helpfully analyzes the desire for control in the context of gene drives here. In doing so, Gibson presents an argument from Michael Sandel. Sandel argues that our desires for control, particularly in the realm of genes, involve a lack of humility. When we try to control as much as we can, this implies that we think it is appropriate for us to control these things. Specifically, Sandel claims that when we view the world in this way we lose sight of what he calls life's giftedness. Our talents, skills, and abilities are given to us in the same way that a friend might give us a present. Much like one would think it inappropriate to alter a friend's gift, perhaps trying to take total control of our lives is similarly inappropriate.

However, the real moral issues behind our desires for control become clear only when we consider that “rehearsing” involves other people.

For instance, the bar trivia fanatic is not just aiming to limit the fallout he experiences as a result of his confession. Instead, he is afraid of how his friend will react, and thus tries to control her reaction.

Of course, one might see no problem here. After all, we regularly tailor our interactions with others to avoid offending them while getting what we want. This is simply part of life.

Yet "rehearsed" interactions seem importantly different. To see why, consider the following: Daniel Susser, Beate Roessler, and Helen Nissenbaum, in a discussion of manipulative practices on digital platforms, describe manipulation as "imposing a hidden or covert influence on another person's decision-making." Manipulative practices, they argue, involve trying to control a person in the same way that one might control a puppet, producing the desired behavior in the target by pulling on the target's proverbial strings. Further, they argue that manipulative practices are more problematic the more targeted they are – manipulation that is tailor-made to match one person's psychological profile seems more troubling than manipulation that trades on a widespread cognitive bias. Compare an ad for beer on TV the week before the Super Bowl that shows people excitedly watching a football game, to the same ad appearing in the social media feeds of sports fans after they make posts which suggest that they are feeling sad.

Although not perfectly analogous, there are important similarities between manipulation and “rehearsal.” We can see this with the trivia fanatic. In some cases, the “rehearsal” must be covert; if the fanatic’s friend knew he spent hours “rehearsing” their conversation, this would surely undermine his efforts and likely cause great offense.

A “rehearsal” may involve efforts to control how others respond to the conversation. One practices pulling different strings during the conversation to see how that changes the final outcome.

Finally, some "rehearsals" are targeted; the actor in the fanatic scenario puts in significant effort to mimic the friend as closely as possible. Surely, the actor cannot perfectly capture the psychological profile of the target. Nonetheless, imperfect execution does not seem to make the practice any less troubling. Thus, at least some "rehearsals" appear morally problematic for the same reason manipulation is worrisome.

Yet other “rehearsals” may lack these features. The rehearsal of parenthood, while hilarious due to its absurdity, does not need to be covert, involve an effort to guarantee particular outcomes, nor target a specific individual. One’s child will certainly have a different psychological profile than the child actor and, no matter how skilled the actors, surely they will not have indistinguishable performances. Thus, “rehearsals” that aim to try out a particular role, like parenthood, seem to have a different moral character than those that aim to make another person act in a desired way.

There is, however, one thing which may be universally problematic about “rehearsals.” During “rehearsals” of a conversation, Fielder stands by, taking notes and turning the conversation into an elaborate decision tree. This seems to turn the conversation into a sort of game – one practices it, determines cause and effect relationships between particular conversational choices and interlocutor responses, then pushes the proverbial reset button if the conversation takes an undesired turn.

As a result, it seems that the ultimate goal of a “rehearsal” is to eliminate spontaneity in the real conversation.

But part of what makes our experiences with others worthwhile is when the unexpected occurs. The price we pay for spontaneity is the anxiety of uncertainty. Our desires for control, if satisfied, may leave our interactions with others feeling impoverished and hollow.

I cannot say with perfect certainty what the goals of The Rehearsal are. The show offers a hilarious but often uncomfortable glimpse into what people are willing to do to gain a feeling of control. In doing so, it offers us the opportunity to reflect on what we should aim to take out of our interactions with others, and whether gaining control is worth what we might lose. If this was Fielder’s purpose with The Rehearsal, then it is a rousing success.

Book Bans, the First Amendment, and Political Liberalism

photograph of banned book display in public library

Book bans in public schools are not new in America. But since 2021, they have reached levels not seen in decades, the result of efforts by conservative parents, advocacy groups, and lawmakers who view the availability of certain books in libraries or their inclusion in curricula as threats to their values. In one study that looked at just the nine-month period between July 1, 2021 and March 31, 2022, the free expression advocacy organization PEN America found nearly 1,600 instances of individual books being banned in eighty-six school districts with a combined enrollment of over two million students. Of the six most-banned titles, three (Gender Queer: A Memoir, All Boys Aren’t Blue, and Lawn Boy) are coming-of-age stories about LGBTQ+ youth; two (Out of Darkness and The Bluest Eye) deal principally with race relations in America; and one (Beyond Magenta: Transgender Teens Speak Out) features interviews with transgender or gender-neutral young adults. 41% of the bans were tied to “directives from state officials or elected lawmakers to investigate or remove books.”

The bans raise profound ethical and legal questions that expose unresolved issues in First Amendment jurisprudence and within political liberalism concerning the free speech rights of children, as well as the role of the state in inculcating values through public education.

What follows is an attempt to summarize, though not to settle, some of those issues.

First, the legal side. The Supreme Court has long held that First Amendment protections extend to public school students. In Tinker v. Des Moines Independent Community School District, a seminal Vietnam War-era case about student expression, the Court famously affirmed that students in public schools do not “shed their constitutional rights to freedom of speech or expression at the schoolhouse gate.” Yet student expression in schools is limited in ways that would be unacceptable in other contexts; per Tinker, free speech rights are to be applied “in light of the special characteristics of the school environment.”

Accordingly, Tinker held that student speech on school premises can be prohibited if it “materially and substantially disrupts the work and discipline of the school.”

The Court has subsequently chipped away at this standard, holding that student speech that is not substantially and materially disruptive — including off-campus speech at school-sponsored events — can still be prohibited if it is “offensively lewd and indecent” (Bethel School District No. 403 v. Fraser), or can be “reasonably viewed as promoting illegal drug use” (Morse v. Frederick). In the context of “school-sponsored expressive activities,” such as student newspapers, the permissible scope for interference with student speech is even broader: in Hazelwood School District v. Kuhlmeier, the Court held that censorship and other forms of “editorial control” do not offend the First Amendment so long as they are “reasonably related to legitimate pedagogical concerns.”

Those cases all concerned student expression. A distinct issue is the extent to which students have a First Amendment right to access the expression of others, either through school curricula or by means of the school library. Book banning opponents generally point to a 1982 Supreme Court case, Board of Education, Island Trees Union Free School District No. 26 v. Pico, to support their argument that the First Amendment protects students’ rights to receive information and ideas and, as a consequence, public school officials cannot remove books from libraries because “they dislike the ideas contained in those books and seek by their removal to prescribe what shall be orthodox in politics, nationalism, religion, and other matters of opinion.”

There are, however, three problems with Pico from an anti-book banning perspective. First, those frequently cited, broad liberal principles belong to Justice Brennan’s opinion announcing the Court’s judgment. Only two other justices joined that opinion, with Justice Blackmun writing in partial concurrence and Justice White concurring only in the judgment. Thus, no majority opinion emerged from this case, meaning that Brennan’s principles are not binding rules of law. Second, even Brennan’s opinion conceded that school officials could remove books from public school libraries over concerns about their “pervasive vulgarity” or “educational suitability” without offending the First Amendment. This concession may prove particularly significant in relation to books depicting relationships between LGBTQ+ young adults, which tend to include graphic depictions of sex. Finally, Brennan’s opinion drew a sharp distinction between the scope of school officials’ discretion when it comes to curricular materials as opposed to school library books: with respect to the former, he suggested, officials may well have “absolute” discretion. Thus, removals of books from school curricula may be subject to a different, far less demanding constitutional standard than bans from school libraries. In short, Pico is a less-than-ideal legal precedent for those seeking to challenge book bans on constitutional grounds.

The question of what the law is, of course, is distinct from the question of what the law should be. What principles should govern public school officials' decisions regarding instructional or curricular materials and school library books?

A little reflection suggests that the Supreme Court’s struggle to articulate clear and consistent standards in the past few decades may be due to the fact that this is a genuinely hard question.

Political liberalism — the political philosophy that identifies the protection of individual liberty as the state’s raison d’être — has traditionally counted freedom of expression among the most important individual freedoms. Philosophers have customarily offered three justifications for this exalted status. The first two are broadly instrumental: according to one view, freedom of expression promotes the discovery of truth; according to another, it is a necessary condition for democratic self-governance. An important non-instrumental justification is that public expression is an exercise of autonomy, hence intrinsically good for the speaker.

The instrumental justifications seem to imply, or call for, a corresponding right to access information and ideas. After all, a person’s speech can only promote others’ discovery of truth or help others govern themselves if that speech is available to them. Simply having the unimpeded ability to speak would not contribute to those further goods if others were unable to take up that speech.

Yet even if the right of free speech implies a right to access information and ideas, it may be plausibly argued that the case for either right is less robust with respect to children. On the one hand, children generally have less to offer in terms of scientific, artistic, moral, or political speech that could promote the discovery of truth or facilitate democratic self-governance, and since they are not fully autonomous, their speech-acts are less valuable for them as exercises of their autonomy. On the other hand, since children generally are intellectually and emotionally less developed than adults, and also are not allowed to engage in the political process, they have less to gain from having broad access to information and ideas.

Obviously, even if sound, the foregoing argument only establishes lesser rights of free speech or informational access for children, not no such rights. And the case for lesser rights seems far weaker for teenagers than for younger children. Finally, the argument may be undermined by the state and society’s special interest in educating the young, which may in turn provide special justification for more robust free speech and informational access rights for children. I will return to this point shortly.

All the states of the United States, along with the federal government, recognize an obligation to educate American children. To fulfill that obligation, states maintain public schools, funded by taxation and operated by state and local government agencies, with substantial assistance from the federal government and subject to local, state, and federal regulation. As we’ve seen, the Supreme Court has mostly used the educational mission of the public school as a justification for allowing restrictions on students’ free speech and informational access rights inasmuch as their exercise would interfere with that mission.

Thus, the Court deems student speech that would disturb the discipline of the school, or books that would be “educationally unsuitable,” as fair game for censorship.

This is not radically different from the Court's approach to speech in other public institutional contexts; for example, public employees' speech is much more restricted than speech in traditional public forums. The combination of the considerations adduced in the last paragraph, together with the idea that speech and informational access can be legitimately restricted in public institutions, may lead one to conclude that student expression and informational access in public schools can be tightly circumscribed as long as it is for a "legitimate pedagogical purpose."

This conclusion would, I think, be overhasty. The overriding pedagogical purpose of the public school does not cleanly cut in favor of censorship; in many ways, just the opposite. Educating students for citizenship in a liberal democracy must surely involve carefully exposing them to novel and challenging ideas. Moreover, mere exposure is not sufficient: the school must also encourage students to engage with such ideas in a curious, searching, skeptical, yet open-minded way. Students must be taught how to thrive in a society replete with contradictory and fiercely competing perspectives, philosophies, and opinions. Shielding students from disturbing ideas is a positive hindrance to that goal. This is not to deny that some content restrictions are necessary; it is merely to claim that the pedagogical mission of the public school may provide reason for more robust student free speech and informational access rights.

But what about conservatives’ objections — I assume at least some of them are made in good faith — to the “vulgarity” of certain books, irrespective of their intellectual content? Their determination to insulate students from graphic descriptions of sex might seem quixotic in our porn-saturated age, and one might think it is no worse than that. In fact, insofar as these objections derive from the notion that it is the job of public schools to “transmit community values,” as Brennan put it in Pico, they raise an important and unresolved problem for political liberalism.

Many versions of political liberalism hold that the state should strive to be neutral between the competing moral perspectives that inevitably exist in an open society.

The basic idea is that for the sake of both political legitimacy and stability, the state ought to be committed to a minimal moral framework — for example, a bill of rights — that can be reasonably accepted from different moral perspectives, while declining to throw its weight behind one particular “comprehensive doctrine,” to use John Rawls’s phrase.

For example, it would be intuitively unacceptable if state legislators deliberated about the harms and benefits of a particular policy proposal in terms of whether it would please or enrage God, or of its tendency to help the public achieve ataraxia, the Epicurean goal of serene calmness. One explanation for this intuition is that such deliberation would violate neutrality in employing ideas drawn from particular comprehensive doctrines, whether secular or religious, that are not part of that minimal moral framework with which most of the public can reasonably agree.

If state neutrality is a defensible principle, it should also apply to public education: the state should not be a transmitter of community values, at least insofar as those values are parochial and “thick,” rather than universal and “thin.” Concerns about children’s exposure to graphic depictions of sex may be grounded in worries about kinds of harm that everyone can recognize, such as psychological distress or, for certain depictions, the idea that they encourage violent sexual fantasies that might later be enacted in the real world. But conservatives’ worries might also be based in moral ideas that don’t have much purchase in the liberal moral imagination — ideas about preserving sexual purity or innocence, or about discouraging “unnatural” sexual conduct like homosexuality. These ideas, which are evidently not shared by a wide swath of the public, do not have a place in public education policy given the imperative of state neutrality.

Unfortunately, while perhaps intuitively compelling, the distinction between an acceptably “minimal” moral framework and a “comprehensive doctrine” has proved elusive. For example, are views about when strong moral subject-hood begins and ends necessarily part of a comprehensive doctrine, or can they be inscribed in the state’s minimal moral framework? Even if state neutrality can be adequately defined, many also question whether it is desirable or practically possible. Thus, it remains an open question whether the transmission of parochial values is a legitimate aim of public education.

Public educators’ role in mediating between students and the universe of ideas is and will likely remain the subject of ongoing philosophical and legal debate. However, this much seems clear: conservative book bans are just one front in a multi-front struggle to reverse the sixty-year trend of increasing social liberalization, particularly in the areas of sex, gender, and race.