
No Fun and Games

image of retro "Level Up" screen

You may not have heard the term, but you’ve probably encountered gamification of one form or another several times today already.

‘Gamification’ refers to the process of embedding game-like elements into non-game contexts to increase motivation or make activities more interesting or gratifying. Game-like elements include attainable goals, rules dictating how the goal can be achieved, and feedback mechanisms that track progress.

For example, Duolingo is a program that gamifies the process of purposefully learning a language. Users are given language lessons and tested on their progress, just like students in a classroom. But these ordinary learning strategies are scaffolded by attainable goals, real-time feedback mechanisms (like points and progress bars), and rewards, making the experience of learning on Duolingo feel like a game. For instance, someone learning Spanish might be presented with the goal of identifying 10 consecutive clothing words, where their progress is tracked in real time by a visible progress bar, and success is rewarded with colorful congratulations from a digital owl. Duolingo is motivating because it gives users concrete, achievable goals and allows them to track progress towards those goals in real time.

Gamification is not limited to learning programs. Thanks to advocates who tout the motivational power of games, increasingly large portions of our lives are becoming gamified, from online discourse to the workplace to dating.

As with most powerful tools, we should be mindful about how we allow gamification to infiltrate our lives. I will mention three potential downsides.

One issue is that particular gamification elements can function to directly undermine the original purpose of an activity. An example is the Snapstreak feature on Snapchat. Snapchat is a gamified application that enables users to share (often fun) photographs with friends. While gamification on Snapchat generally enhances the fun of the application, certain gamification elements, such as Snapstreaks, tend to do the opposite. Snapstreaks are visible records, accompanied by an emoji, of how many days in a row two users have exchanged photographs. Many users feel compelled to maintain Snapstreaks even when they don’t have any interesting content to share. To achieve this, users laboriously send meaningless content (e.g., a completely black photograph) to all those with whom they have existing Snapstreaks, day after day. The Snapstreak feature has, for users like this, transformed Snapchat into a chore. This benefits the company that owns Snapchat by increasing user engagement. But it undermines the fun.
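To make the mechanic concrete, here is a minimal sketch of how a streak counter of this general kind works. It is a generic illustration, not Snapchat’s actual implementation, and the function name and rules are assumptions for the example.

```python
from datetime import date, timedelta

def update_streak(streak: int, last_exchange: date, today: date) -> int:
    """Return the streak length after a new exchange on `today`.

    A second exchange on the same day leaves the count alone, an exchange
    on the very next day extends it, and any longer gap resets it to 1.
    """
    gap = today - last_exchange
    if gap == timedelta(days=0):
        return streak        # already counted today
    if gap == timedelta(days=1):
        return streak + 1    # kept the streak alive
    return 1                 # one missed day wipes out the accumulated record
```

The reset rule is what turns the feature into a chore: the only way to protect the accumulated number is to send something, however meaningless, every single day.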

Relatedly, sometimes an entire gamification structure threatens to erode the quality of an activity by changing the goals or values pursued within it. For example, some have argued that the gamification of discourse on Twitter undermines the quality of that discourse by altering people’s conversational aims. Healthy public discourse in a liberal society will include diverse interlocutors with diverse conversational aims such as pursuing truth, persuading others, and promoting empathy. This motivational diversity is good because it fosters diverse conversational approaches and content. (By analogy, think about the difference between, on the one hand, the conversation you might find at a party with people from many different backgrounds who have many different interests and, on the other hand, the one-dimensional conversation you might find at a party where everyone wants to talk about a job they all share.) Yet Twitter and similar platforms turn discourse into something like a game, where the goal is to accumulate as many Likes, Followers, and Retweets as possible. As more people adopt this gamified aim as their primary conversational aim, the discursive community becomes increasingly motivationally homogeneous, and consequently the discourse becomes less dynamic. This is especially so given that getting Likes and so forth is a relatively simple conversational aim, which is often best achieved by making a contribution that immediately appeals to the lowest common denominator. Thus, gamifying discourse can reduce its quality. And more generally, gamification of an activity can undermine its value.

Third, some worry that gamification designed to improve our lives can actually inhibit our flourishing. Many gamification applications, such as Habitify and Nike Run Club, promise to help users develop new activities, habits, and skills. For example, Nike Run Club motivates users to become better runners. The application tracks users across various metrics such as distance and speed. Users can win virtual trophies, compete with other users, and so forth. These gamification mechanisms motivate users to develop new running habits. Plausibly, though, human flourishing is not just a matter of performing worthwhile activities. It also requires that one is motivated to perform those activities for the right sorts of reasons and that these activities are an expression of worthwhile character traits like perseverance. Applications like Nike Run Club invite users to think about worthwhile activities and good habits as a means of checking externally imposed boxes. Yet intuitively this is a suboptimal motivation. Someone who wakes up before dawn to go on a run primarily because they reflectively endorse running as a worthwhile activity and have the willpower to act on their considered judgment is more closely approximating an ideal of human flourishing than someone who does the same thing primarily because they want to obtain a badge produced by the Nike marketing department. The underlying thought is that we should be intentional not just about what sort of life we want to live but also about how we go about creating that life. The easiest way to develop an activity, habit, or skill is not always the best way if we want to live autonomously and excellently.

These are by no means the only worries about gamification, but they are sufficient to establish the point that gamification is not always and unequivocally good.

The upshot, I think, is that we should be thoughtful about when and how we allow our lives to be gamified in order to ensure that gamification serves rather than undermines our interests. When we encounter gamification, we might ask ourselves the following questions:

    1. Is getting caught up in these gamified aims consistent with the value or point of the relevant activity?
    2. Does getting caught up in this form of gamification change me in desirable or undesirable ways?

Let’s apply these questions to Tinder as a test case.

Tinder is a dating application that matches users who signal mutual interest in one another. Users create a profile that includes a picture and a short autobiographical blurb. Users are then presented with profiles of other users and have the option of either signaling interest (by swiping right) or lack thereof (by swiping left). Users who signal mutual interest have the opportunity to chat directly through the application.

Tinder invites users to think of the dating process as a game where the goals include evaluating others and accumulating as many matches (or right swipes) as possible. This is by design.

“We always saw Tinder, the interface, as a game,” Tinder’s co-founder, Sean Rad, said in a 2014 Time interview. “Nobody joins Tinder because they’re looking for something,” he explained. “They join because they want to have fun. It doesn’t even matter if you match because swiping is so fun.”

The tendency to think of dating as a game is not new (think about the term “scoring”). But Tinder changes the game since Tinder’s gamified goals can be achieved without meaningful human interaction. Does getting caught up in these aims undermine the activity of dating? Arguably it does, if the point of dating is to engage in meaningful human interaction of one kind or another. Does getting caught up in Tinder’s gamification change users in desirable or undesirable ways? Well, that depends on the user. But someone who is motivated to spend hours a day thumbing through superficial dating profiles is probably not in this respect approximating an ideal of human flourishing. Yet this is a tendency that Tinder encourages.

There is a real worry that when we ask the above questions (and others like them), we will discover that many gamification systems that appear to benefit us actually work against our interests. This is why it pays to be mindful about how gamification is applied.

Sexual Violence in the Metaverse: Are We Really “There”?

photograph of woman using VR headset

Sexual harassment can take many forms, whether in an office or on social media. However, there might seem to be a barrier separating “us,” the user of a social media account, from “us,” the avatar or visual representation in a game, since the latter is “virtual” whereas “we” are “real.” Even though we are prone to psychological and social damage through our virtual representations, it seems that we cannot – at least directly – be affected physically. A mean comment may hurt my feelings and change my mood – I might even get physically ill – but no direct physical damage seemed possible. Until now.

Recently, a beta tester of Horizon Worlds – Meta’s VR-based platform – reported that a stranger “simulated groping and ejaculating onto her avatar.” Even more recently, additional incidents concerning children have been reported. According to one report, a safety campaigner “has spoken to children who say they were groomed on the platform and forced to take part in virtual sex.” The same article describes how a “researcher posing as a 13-year-old girl witnessed grooming, sexual material, racist insults and a rape threat in the virtual-reality world.” How should we understand these virtual assaults? Sexual harassment requires no physical presence, but when we ask whether such actions represent a kind of physical violence, things get complicated, as the victim has not been violated in the traditional sense.

This problem has been made more pressing by the thinning of the barrier that separates what is virtual from what is physical. Mark Zuckerberg, co-founder and CEO of Meta, has emphasized “presence” as “one of the basic concepts” of the Metaverse. The goal is to make the virtual space as “detailed and convincing” as possible. In the same video, some virtual items are described as designed to give a “realistic sense of depth and occlusion.” The Metaverse attempts to win the tech race by mimicking the physical sense of presence as closely as possible.

The imitation of the physical sense of presence is not a new thing. Many video games also develop a robust sense of presence. Especially in MMO (massively multiplayer online) games, characters can commonly touch, push, or persistently follow each other, even when it is unwelcome and has nothing to do with one’s progress in the game. We often accept these actions as natural, as an obvious and basic part of the game’s social interaction. It is personal touches like these that encourage gamers to bond with their avatars. They encourage us to feel two kinds of physical presence: present as a user playing a game in a physical environment, and present as a game character in a virtual environment.

But these two kinds of presence mix very easily, and the difference between a user and their avatar can easily be blurred. Having one’s avatar pushed or touched inappropriately has very real psychological effects. It seems that, at some point, these experiences can no longer be considered merely “virtual.”

This line is being further blurred by the push toward Augmented Reality (AR), which places “virtual” items in our world, and Virtual Reality (VR), in which “this” world remains inaccessible to the user during the session. As opposed to classic games’ sense of presence, in AR and VR we explore the game environment mainly within one sense of presence instead of two, from the perspective of a single body. Contrary to our typical gaming experience, these new environments – like that of the Metaverse – may only work if this dual presence is removed or weakened. This suggests that our experience can no longer be thought of as taking place “somewhere else” but always “here.”

Still, at some level, dual presence remains: when we take our headsets off, “this world” waits for us. And so we return to our main moral question: Can we identify an action within the embodied online world as physical? Or, more specifically, is the charge of sexual assault appropriate in the virtual space?

If one’s avatar is taken as nothing but a virtual puppet controlled by the user from “outside,” then it seems impossible to conclude that gamers can be physically threatened in the relevant sense. However, as the barrier separating users from their game characters erodes, the illusion of presence makes the avatar mentally inseparable from the user; in terms of experience, the two become increasingly the same. Since the aim of the Metaverse is to create such a union, one could conclude that sharing the same “space” means sharing the same fate.

These are difficult questions, and both online spaces and the concepts that govern them are still in development. However, recent events should be taken as a warning to consider preventive measures, as these new spaces require new definitions, new moral codes, and new precautions.

The Pascal’s Wager of Cryopreservation

photograph of man trapped under ice

In 1967, James H. Bedford, a psychology professor at the University of California, died. However, unlike most, Bedford didn’t plan to be buried or cremated. Instead, the Life Extension Society took ownership of his body, cooled it, infused it with chemicals, and froze it with dry ice before transferring it into a liquid nitrogen environment and placing it in storage. Bedford’s still-preserved remains reside in Scottsdale, Arizona, under the Alcor Life Extension Foundation’s watchful eye. Upon undergoing this process, Bedford became the world’s first cryon – an individual preserved at sub-zero temperatures after their death, hoping that future medical technology can restore them to life and health. In other words, Bedford’s the real-life version of Futurama’s Philip J. Fry (minus the I.C. Wiener prank).

While Bedford was the first cryon, he is by no means alone. Today, Alcor is home to roughly 190 cryonically-preserved individuals. All of them hoped, before their deaths, that preservation might afford a second chance at life by fending off biological decay until possible restoration. But Alcor is not the only company offering such services. Oregon Cryonics, KrioRus, and the Shandong Yinfeng Life Science Research Institute provide similar amenities. While exact figures are elusive, a recent New York Times article estimates that there are currently 500 cryons globally hoping, after paying between $48,000 and $200,000, to undergo the procedure upon their demise. Indeed, as I have written elsewhere, cryopreservation is no longer housed solely within speculative fiction.

Cryopreservation’s growing popularity might lead one to think that the potential for revival is a sure thing. After all, why would so many people spend so much money on something that isn’t guaranteed? Nevertheless, resurrection is not inevitable. In fact, not a single cryon has ever been revived. Every person who has undergone preservation is still in storage. The reason for this lack of revival is comparatively simple. While we can preserve people relatively well, we don’t have the technology or know-how to revive cryons. So, much like burial and cremation, there’s a (probably good) chance that cryopreservation is, in fact, a one-way trip.

This might lead us to ask why people are willing to invest such significant sums of financial and emotional capital in something that seems like such a poor investment. When money could be spent enhancing one’s life before death, bequeathed to loved ones, or donated to charity, why are people willing to fritter away tens of thousands of dollars on such a slim hope? One potential answer to this uniquely modern dilemma comes from the seventeenth-century philosopher and theologian Blaise Pascal and his argument for why we should believe in God’s existence.

Pascal’s Wager, as it is commonly known, is an argument that seeks to convince people to believe in God, not via an appeal to scripture or as an explanation for why the world exists. Instead, Pascal argued that individuals should believe in God out of self-interest; that is, believing in God is a better bet than not believing in him.

Pascal starts by admitting that we cannot ever truly know if God exists. Such certainty of knowledge is simply unobtainable as God, if they exist, is a divine being residing beyond mortal comprehension. In other words, the existence of God is not something we can ascertain, as God’s existence cannot be proven scientifically or reasoned logically.

However, even though we cannot positively establish God’s existence from evidence or inference, we can make claims about what would happen if we did or did not believe in God, in the case where God exists and in the case where God does not. In his 1994 book chapter, McClennen formulates Pascal’s argument in the form of a decision matrix like the one below:

                    God exists       God does not exist    Total outcome rating
Wager for God       Gain all (+1)    Status quo (0)        +1
Wager against God   Misery (-1)      Status quo (0)        -1

 

Either we believe in God, or we don’t, and either God exists, or they don’t. Out of this combination of possibilities arise four potential outcomes. If God exists and we believe in them, we’re afforded the chance to go to heaven. If God exists and we don’t believe in them, we go to hell and suffer eternal torment. If God doesn’t exist, then it doesn’t matter whether we believe in them or not; the outcome is the same.

So, Pascal argues, in the face of incomplete information, it is best to place our bets on the outcome with the most significant payoff: God’s existence. Even if you’re wrong and God doesn’t exist, the worst result is that everything stays the same. On the other hand, if you wager that God doesn’t exist and they do, an eternity of agony in hell awaits you; if you’re right, the best outcome is that everything stays the same. In other words, the worst outcome if you believe in God is the best outcome if you don’t. Again, if you’re going to gamble, you should put your money on the better payouts.
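One way to make the “total outcome rating” column explicit is as an expected-value comparison. The formulation below is a standard decision-theoretic gloss rather than McClennen’s own notation; it writes p for one’s credence that God exists and reuses the table’s +1/0/-1 payoffs.

$$EU(\text{wager for God}) = p \cdot (+1) + (1 - p) \cdot 0 = p$$

$$EU(\text{wager against God}) = p \cdot (-1) + (1 - p) \cdot 0 = -p$$

For any credence p greater than zero, the first quantity exceeds the second, and even at p = 0 it does no worse. This is the sense in which, on these payoffs, betting on God comes out ahead no matter how the uncertainty is weighted.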

What does this have to do with cryopreservation? Mirroring Pascal’s acknowledgement that we cannot ever truly know if God exists, we also cannot know if the technology required to revive people from cryopreservation will ever be developed. It might, and it might not. Firm knowledge of this is something we simply cannot gain, as it is impossible to know what developments in medicine and technology will occur over the next several centuries. In the face of this uncertainty, we’re left with a similar wager to the one Pascal envisioned. Either we believe that cryopreservation will be entirely successful, enabling curative revival, or we don’t. So, drawing inspiration from McClennen, we can make a matrix mapping the outcomes of such a belief, or lack thereof:

                                 Cryopreservation works                Cryopreservation does not work    Total outcome rating
Wager for cryopreservation       Revived (+1)                          Dead; money wasted (-0.5)         +0.5
Wager against cryopreservation   Missed the chance at a revival (-1)   Dead (0)                          -1

 

Much like gambling on the existence of God, gambling on cryopreservation’s success provides the best outcome (a return to life), whereas wagering against its development provides the worst result (missing out on the chance for more life). Even if cryopreservation turns out not to work and a person wastes their money financing a futile endeavor, that still isn’t as bad an outcome as missing out on the chance of revival. Overall, then, a belief in cryonics affords the best result.
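To see how sensitive this reasoning is to the numbers involved, here is a minimal Python sketch of the matrix above. Summing each row, as the table does, implicitly treats the two possibilities as equally likely; the sketch instead weights them by an assumed (and entirely contestable) probability that revival ever becomes possible. The payoffs mirror the table, while the probability is an illustrative assumption rather than a figure from Alcor, McClennen, or this article.

```python
# Illustrative expected-value calculator for the cryopreservation wager.
# Payoffs follow the decision matrix above; the probability is an assumption.

def expected_value(payoff_if_works: float, payoff_if_fails: float, p_works: float) -> float:
    """Weight each outcome by how likely cryopreservation is to succeed."""
    return p_works * payoff_if_works + (1 - p_works) * payoff_if_fails

p = 0.05  # assumed chance that revival technology is ever developed

wager_for = expected_value(payoff_if_works=+1.0, payoff_if_fails=-0.5, p_works=p)
wager_against = expected_value(payoff_if_works=-1.0, payoff_if_fails=0.0, p_works=p)

print(f"Wager for cryopreservation:     {wager_for:+.3f}")      # -0.425 at p = 0.05
print(f"Wager against cryopreservation: {wager_against:+.3f}")  # -0.050 at p = 0.05

# With these payoffs, wagering for cryopreservation only comes out ahead once
# p exceeds 0.2; the table's row sums correspond to treating p as 0.5.
```

Whether the wager favors cryopreservation thus depends heavily on how probable one takes revival to be and on how the outcomes are scored.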

This form of arguing is common amongst those who advocate for cryopreservation, with many asserting that even if there is a minute chance that cryopreservation will work, it is infinitely preferable to the certainty of death offered by burial or cremation. As The Cryonics Institute asserts, “The Cryonics Institute provides an ambulance ride to the high-tech hospital of the future. When present medical science has given up on you or your loved ones, we seek another solution. The choice is yours – Do you take the chance at life?”

Now, this argument only works if you believe in the validity of Pascal’s original wager, and there are reasons not to. But, when faced with the gaping maw that is one’s demise, isn’t any gamble preferable to the certainty of death?

Digital Degrees and Depersonalization

photograph of college student stressing over books and laptop

In an article titled “A ‘Stunning’ Level of Student Disconnection,” Beth McMurtrie of the Chronicle of Higher Education analyzes the current state of student disengagement in higher education. The article solicits the personal experiences and observations of college and university faculty, as well as student-facing administrative officers and guidance counselors. Faculty members cite myriad causes of the general malaise they see among the students in their classes: classes switching back and forth between in-person and remote settings; global unrest and existential anxiety, stemming from COVID-19 and the recent war between Ukraine and Russia; interrupted high school years that leave young adults unprepared for the specific challenges and demands of college life; the social isolation of quarantines and lockdowns that filled nearly two years of their lives. Some of these circumstances are unavoidable (e.g., global unrest), while others seem to be improving (classroom uncertainty, lockdowns, and mask mandates). Still, student performance and mental health continue to suffer as badly as they did two years ago, and college enrollment is nearly as low as it was at the start of the pandemic.

McMurtrie also takes the time to interview some college students on their experience. The students point to a common element that draws together all the previously-mentioned variables suspected of causing student disengagement: prolonged, almost unceasing, engagement with technology. One college junior quoted in the article describes her sophomore year as a blur, remembering only snippets of early morning Zoom classes, half-slept-through, with camera off, before falling back asleep. Each day seemed to consist of a flow between sleep, internet browsing, and virtual classes. When COVID-19 restrictions subsided and classrooms returned to more of a traditional format, the excessive use of technology that had been mandatory for the past two years left an indelible psychological mark. As McMurtrie reports:

As she returned to the classroom, Lyman found that many professors had come to rely more heavily on technology, such as asking everyone to get online to do an activity. Nor do many of her courses have group activities or discussions, which has the effect of making them still seem virtual. ‘I want so badly to be active in my classroom, but everything just still feels, like, fake almost.’

Numerous scientific studies offer empirical support for the observation that more frequent virtual immersion is positively correlated with higher levels of depersonalization — a psychological condition characterized by the persistent or repeated feeling that “you’re observing yourself from outside your body or you have a sense that things around you aren’t real, or both.” In an article published last month in Scientific Studies, researchers reported the following:

We found that increased use of digital media-based activities and online social e-meetings correlated with higher feelings of depersonalisation. We also found that the participants reporting higher experiences of depersonalisation, also reported enhanced vividness of negative emotions (as opposed to positive emotions).

They further remarked that the study “points to potential risks related to overly sedentary, and hyper-digitalized lifestyle habits that may induce feelings of living in one’s ‘head’ (mind), disconnected from one’s body, self and the world.” In short, spending more time online entails spending more time in one’s “head,” making a greater percentage of one’s life purely cerebral rather than physical. This can lead to a feeling of disconnect between the mind and the body, making all of one’s experiences feel exactly as the undergraduate student described her life during and after the pandemic: unreal.

If the increase and extreme utilization of technology in higher education is even partly to blame for the current student psychological disconnect, instructors and university administrators face a difficult dilemma: should we reduce the use of technology in classes, or not? The answer may at first appear to be an obvious “yes”; after all, if such constant virtual existence is taking a psychological toll on college students, then it seems the right move would be to reduce the amount of online presence required to participate in the coursework. But the problem is complicated by the fact that depersonalization makes interacting with humans in the “real world” extremely psychologically taxing — far more taxing than interacting with others, or completing coursework, online. This fact illuminates the exponentially increasing demand over the past two years for online degrees and online course offerings, the decrease in class attendance for in-person classes, and the rising rates of anxiety and depression among young college students on campus. After being forced into a nearly continuous online existence (the average time spent on social media alone — not counting virtual classes — for young people in the United States is 9 hours per day), we feel wrenched out of the physical world, making reentering the world all the more exhausting. We prefer digital existence because the depersonalization has rendered us unable to process anything else.

Some philosophers, like Martha Nussbaum, refer to these kinds of preferences as “adaptive preferences” — things we begin to prefer as a way of adapting to some non-ideal circumstances. One of Nussbaum’s cases focuses on impoverished women in India who were routinely physically abused by their husbands, but preferred to stay married. Some of the women acknowledge that the abuse was “painful and bad, but, still, a part of women’s lot in life, just something women have to put up with as part of being a woman dependent on men.” Another philosopher, Jon Elster, calls these kinds of desires “sour grapes,” because a fox that originally desires grapes may convince himself the grapes he previously wanted were sour (and therefore not to be desired) if he finds himself unable to access them.

Are in-person classes, social engagement, and physical existence on campus becoming “sour grapes” to us? If we have, to some extent, lost the ability to navigate these spaces with psychological ease, we may convince ourselves that these kinds of interactions are not valuable at all. But as we move further and further from regular (non-virtual) physical interactions with others, the depersonalization continues and deepens. It may be a self-perpetuating problem, with no clear path forward for either students or instructors. Should instructors prioritize meeting students where they currently are and providing virtual education as far as possible? Or should they prioritize moving away from virtual education in the hope of long-term benefits? This is a question that higher education will likely continue to grapple with for many years to come.

Celebrity, Wealth, and Meaning in Life

Color photograph of reality star Paris Hilton sitting on a throne in front of a green screen while many cameras point at her.

People love celebrity and, in particular, they love rich celebrity. Reality TV makes a fortune by playing on people’s voyeuristic desires to see how rich people live. Paris Hilton, the Kardashians, and the Jenners are noteworthy simply for being rich and famous. “The Real Housewives” franchise has been so successful that it has launched iterations of its brand in at least 10 different locales. Many people admire and hold a high opinion of the capacities of Donald Trump simply because he’s perceived as being wealthy. As a culture, we are less likely to convict the rich and famous or to require them to do any hard time for their criminal behavior. We live vicariously through them; we don’t want for them that which we wouldn’t want for ourselves under the same circumstances. After all, each one of us may be inclined to reason, “I myself am just a temporarily embarrassed billionaire.”

This is an attitude that people have long taken toward the rich, and it is one that we would do well to reflect carefully upon. The 18th-century philosopher Adam Smith is a figure that people often associate with capitalism, but Smith was not impressed with the ways in which people in his day viewed wealth. He wrote not only about markets, but also about moral behavior and the kinds of things about which people are inclined to express approval and disapproval. In The Theory of Moral Sentiments, he writes:

This disposition to admire, and almost to worship, the rich and the powerful, and despise, or, at least, to neglect, persons of poor and mean condition, though necessary to both establish and maintain the distinction of ranks and order in society is, at the same time, the great and most universal cause of the corruption of our moral sentiments. That wealth and greatness are often regarded with the respect and admiration which are due only to wisdom and virtue; and that the contempt, of which vice and folly are the only proper objects, is most unjustly bestowed upon poverty and weakness, has been the complaint of moralists in all ages.

Though there may be much to criticize in the idea of a necessary distinction in terms of rank, Smith speaks to our times when he points out that while we venerate the wealthy, we are more likely to engage in what we might today call attribution bias when it comes to the poor. We seem inclined to attribute bad behavior on the part of others to their enduring personality characteristics (for example, their laziness, their self-indulgence, their lack of vision, etc.), and might be contemptuous of them for those reasons, while attributing similar bad behavior on our own part to the various particulars of our circumstances. So, for example, Jane engages in attribution bias when she blames Tom’s failure to get anything done on the weekend on what she views as his laziness but explains her own failure to get anything done that same weekend by the fact that she had a long, hard week at work and needed a rest.

A similar phenomenon occurs when people consider the behavior of the poor. We are more likely to say that a person who is out of work, addicted to drugs, or finds themselves homeless is in one or more of those circumstances because of their vicious traits of character than we are to say that they find themselves where they are due to bad luck, poor treatment, or ill health. Society tends to be contemptuous of such people for that reason, and often even passes retributive legislation that makes these social problems worse. These sentiments prevent us from viewing poverty and its attendant consequences (for example, addiction, criminal behavior, and incarceration) as public health and safety challenges that should be dealt with in compassionate ways.

When it comes to the wealthy, on the other hand, we tend to attribute success to work ethic, talent, innovativeness, and worthiness. Those who rise to the top do so because they deserve to be there; surely there could be no flaws with the system of merit that ensures that anyone with the right set of traits gets where they deserve to be. We admire such people, even when, in fact, they have vicious characters and manipulated and exploited people to get where they are.

The explanation behind how we view the wealthy probably has much to do with how we are encouraged to think about meaning in life. Here in the United States, the “American Dream” is often presented in a way that focuses on the value of material success. People live this dream to the extent that they are able to find work which allows them to purchase an impressive house and fancy cars to store in a large garage. Young people often plan their lives in ways that are focused on maximizing profits, or, at least, they are often encouraged to do so by their parents or their peers and made to feel like failures if they don’t. At some point, many come to believe that this kind of meaning can be theirs if, and only if, they work at it, and those who have not achieved such success must simply not have worked hard enough. Contempt ensues.

As Adam Smith points out, people frequently make the mistake of confusing material success and social status for virtue. He says,

The respect which we feel for wisdom and virtue is, no doubt, different from that which we conceive for wealth and greatness; and it requires no very nice discernment to distinguish the difference. But, notwithstanding this difference, those sentiments bear a very considerable resemblance to one another. In some particular features they are, no doubt, different, but, in the general air of the countenance, they seem to be so very nearly the same, that inattentive observers are very apt to mistake the one for the other.

Similar as they may feel, wealth and status are not the same thing as virtue. If we want to live flourishing lives, it would be wise of us to change our attitudes toward the rich and famous. Philosophers have long engaged in debate regarding meaning in life, but, perhaps unsurprisingly, no philosopher of note has concluded that meaning (or, absent that, a good or virtuous life) consists in attaining wealth and power. Our moral sentiments on this point are increasingly important as oligarchs gain more and more control over the planet. When our attitudes are distorted by the seductive powers of wealth and status, we aren’t in a position to recognize that the things we value most (for example, autonomy, self-respect, the well-being of the planet, etc.) are being bought and sold in a way that recognizes no greater good than the dollar.

The Obligations of Players and Fans

Color photograph of a crowd of football fans, one of whom is dressed up like the Hulk

Recently, Cristiano Ronaldo slapped the phone out of a 14-year-old fan’s hand, as the kid tried to take a photo from the terraces. The kid’s phone was broken, he was apparently bruised, and the police are investigating this “assault.” Meanwhile, Charlotte Hornets forward Miles Bridges threw his mouthguard at an abusive fan, accidentally hitting a girl. And finally, Ronaldo’s fellow Manchester United player Marcus Rashford was also embroiled in some controversy after apparently swearing at a fan who had criticized his performance. Rashford denied swearing, claiming that he instead said “come over here and say it to my face,” which he acknowledged was silly.

These cases are a bit different – Ronaldo hit a child (though there is no suggestion he knew this was a child, and it was hardly violent), Bridges did something perhaps less violent but a bit more disgusting, and Rashford merely swore. How should we judge these sports stars? My inclination is that the Ronaldo and Bridges cases are fairly obvious: they shouldn’t have been violent, and it was obviously wrong to act in that way.* Rashford’s case raises more interesting questions.

In Rashford’s case (and Bridges’s), he was being berated by fans – though for Rashford this was not on the pitch or in the stadium, as it was for Bridges – and those fans then reacted with complete indignation when he dared to respond. And this raises a question: why can fans swear at players, as they so often do, yet when a player raises his middle finger it is (according to the same fans) an outrage?

Perhaps a good starting point is the concept of a role model. People often criticize the behavior of sports stars by saying that they should be exemplars – they should act in a way that encourages other people, especially kids, to behave correctly. If this is true, it explains why fans can, say, swear but players cannot.

But should players be seen as role models? Basketball player Charles Barkley – who had his run-ins with fans – famously said, in a Nike commercial: “I am not a role model… Parents should be role models. Just because I dunk a basketball, doesn’t mean I should raise your kids.” This hits on something very basic: his job is to play basketball, it is to throw a ball through some hoops, and that is what we admire him for. What does being a role model have to do with that?

Alas, this is an overly simplistic view of the role of sports in society. Sports clubs are socially important, and how they do – as well as how players behave – reflects on fans. As Alfred Archer and Benjamin Matheson have argued, sports stars are often representatives of their club or country; when they misbehave, this can bring collective shame on the whole club. When a player is embroiled in a scandal, when a player does something morally disgusting, this makes everyone connected with the club ashamed. And this means that players should be expected to behave in certain ways. (In some scenarios, this might apply to fans, too, such as if they engage in racist chanting – or the Tomahawk Chop – but generally it seems as though players are representatives, fans less so.)

So, an argument like this – though Archer and Matheson are not explicitly trying to argue that players are role models – can ground the idea that players have special obligations to behave appropriately. That said, it might not extend to all sorts of behavior. Perhaps players can be adulterers who are moderately unpleasant to those around them, but they do have an extra obligation not to be morally awful: firstly, they should not be morally awful for the standard moral reasons; secondly, they should not be morally awful because it can bring shame on so many others.

I am partial to the idea that players do have certain obligations to behave appropriately, since they are not merely playing sport. Even so, this does not establish the perceived gap between how players should act and how fans should act.

Firstly, it’s far from clear that Rashford’s behavior rises to the level of being bad enough to bring shame on Manchester United. Secondly, we haven’t explored what obligations fans have. How should they behave toward players?

Start with the idea that fans are supposed to support a team. Support can range from cheering them on in the stadium to decking out your home in a variety of merchandise. But you can support someone while criticizing them: after all, if you support someone you want them to do well, and that can involve telling them when they’re doing badly. When it comes to sports fandom, that might even involve booing if a player doesn’t perform well.

But even if booing is okay, there are limits to this, too: criticism and displeasure are one thing, abuse another. As Baker Mayfield has reminded us, the player is doing their job. Mayfield wonders how keen fans would be on booing if the roles were reversed: “I would love to show up to somebody’s cubicle and just boo the shit out of them and watch them crumble.” Perhaps fans need to bear this in mind, even if booing is occasionally acceptable when a player really underperforms.

Yet even if criticizing a player is acceptable, this seems to cross a line when the player isn’t playing. To intrude on them at work is one thing; to intrude when they’re headed home, or to the team bus, is another. It is to treat the player as having no personal life, yet a personal life is something everybody has a right to.

This brings us back around to Charles Barkley’s complaint about being treated as a role model. Yes, he’s an athlete, and that is what we are meant to respect; where he’s wrong is in thinking this shields him from any non-sporting expectations. But he is right that he is a man with a life to live, and once we get far enough away from the basketball court, we shouldn’t have much interest in what he does with his life (so long as it isn’t too egregious!).

All of this is to say, slapping a kid’s phone out of his hand or throwing a mouthguard is bad, but so is abusing players. Perhaps the real problem in the Rashford incident isn’t that he failed to be a role model – he in fact is a role model who behaves admirably in the public sphere – the problem is that fans lose sight of how they should behave.

 

*Around a week after this incident, Ronaldo’s child died. We do not know what stresses Ronaldo was under that might change how we view this incident. His bereavement obviously changes our view of this situation.

AI and Pure Science

Pixelated image of a man's head and shoulders made up of pink and purple squares

In September 2019, four researchers wrote to the academic publisher Wiley to request that it retract a scientific paper relating to facial recognition technology. The request was made not because the research was wrong or reflected bad methodology, but rather because of how the technology was likely to be used. The paper discussed the process by which algorithms were trained to detect faces of Uyghur people, a Muslim minority group in China. While researchers believed publishing the paper presented an ethical problem, Wiley defended the article noting that it was about a specific technology, not about the application of that technology. This event raises a number of important questions, but, in particular, it demands that we consider whether there is an ethical boundary between pure science and applied science when it comes to AI development – that is, whether we can so cleanly separate knowledge from use as Wiley suggested.

The 2019 article for the journal WIREs Data Mining and Knowledge Discovery discusses discoveries made by the research team in its work on ethnic-group facial recognition, which included datasets of Chinese Uyghur, Tibetan, and Korean students at Dalian University. In response, a number of researchers, disturbed that academics had tried to build such algorithms, called for the article to be retracted. China has been condemned for its heavy surveillance and mass detention of Uyghurs, and this study, along with a number of others, some scientists claim, is helping to facilitate the development of technology which can make this surveillance and oppression more effective. As Richard Van Noorden reports, there has been a growing push by some scientists to get the scientific community to take a firmer stance against unethical facial-recognition research, not only denouncing controversial uses of the technology, but its research foundations as well. They call on researchers to avoid working with firms or universities linked to unethical projects.

For its part, Wiley has defended the article, noting, “We are aware of the persecution of the Uyghur communities … However, this article is about a specific technology and not an application of that technology.” In other words, Wiley seems to be adopting an ethical position based on the long-held distinction between pure and applied science. This distinction is old, tracing back to the time of Francis Bacon in the late 16th and early 17th centuries as part of a compromise between the state and scientists. As Robert Proctor reports, “the founders of the first scientific societies promised to ignore moral concerns” in return for funding and freedom of inquiry, with science keeping out of political and religious matters. In keeping with Bacon’s urging that we pursue science “for its own sake,” many began to distinguish “pure” science, interested in knowledge and truth for their own sake, from applied science, which seeks to use engineering to secure various social goods.

In the 20th century the division between pure and applied science was used as a rallying cry for scientific freedom and to avoid “politicizing science.” This took place against a historical backdrop of chemists facilitating great suffering in World War I, followed by physicists facilitating much more suffering in World War II. Maintaining the political neutrality of science was thought to make it more objective by ensuring value-freedom. The notion that science requires freedom was touted by well-known physicists like Percy Bridgman, who argued:

The challenge to the understanding of nature is a challenge to the utmost capacity in us. In accepting the challenge, man can dare to accept no handicaps. That is the reason that scientific freedom is essential and that artificial limitations of tools or subject matter are unthinkable.

For Bridgman, science just wasn’t science unless it was pure. He explains, “Popular usage lumps under the single word ‘science’ all the technological activities of engineering and industrial development, together with those of so-called ‘pure science.’ It would clarify matters to reserve the word science for ‘pure’ science.” For Bridgman, it is society that must decide how to use a discovery rather than the discoverer, and thus it is society’s responsibility to determine how to use pure science rather than the scientists’. As such, Wiley’s argument seems to echo Bridgman’s. There is nothing wrong with developing the technology of facial recognition in and of itself; if China wishes to use that technology to oppress people, that’s China’s problem.

On the other hand, many have argued that the supposed distinction between pure and applied science is not ethically sustainable. Indeed, many such arguments were driven by the reaction to the proliferation of science during the war. Janet Kourany, for example, has argued that science and scientists have moral responsibilities because of the harms that science has caused, because science is supported through taxes and consumer spending, and because society is shaped by science. Heather Douglas has argued that scientists shoulder the same moral responsibilities as the rest of us not to engage in reckless or negligent research, and that, due to the highly technical nature of the field, it is not reasonable for the rest of society to carry those responsibilities for scientists. While the kind of pure knowledge that Bridgman or Bacon favored has value, that value needs to be weighed against other goods like basic human rights, quality of life, and environmental health.

In other words, the distinction between pure and applied science is ethically problematic. As John Dewey argues, the distinction is a sham because science is always connected to human concerns. He notes,

It is an incident of human history, and a rather appalling incident, that applied science has been so largely made equivalent for use for private and economic class purposes and privileges. When inquiry is narrowed by such motivation or interest, the consequence is in so far disastrous both to science and to human life.

Perhaps this is why many scientists do not accept Wiley’s argument for refusing retraction; discovery doesn’t happen in a vacuum. It isn’t as if we don’t know why the Chinese government has an interest in this technology. So, at what point does such research become morally reckless given the very likely consequences?

This is also why debate around this case has centered on the issue of informed consent. Critics charge that the Uyghur students who participated in the study were likely not fully informed of its purposes and thus could not provide truly informed consent. The fact that informed consent is relevant at all, which Wiley admits, seems to undermine their entire argument, as informed consent in this case appears explicitly tied to how the technology will be used. If informed consent is ethically required, this is not a case where we can simply consider pure research with no regard to its application. These considerations have prompted scientists like Yves Moreau to argue that all unethical biometric research should be retracted.

But regardless of how we think about these specifics, this case serves to highlight a much larger issue: given the large number of ethical issues associated with AI and its potential uses, we need to dedicate much more of our time and attention to the question of whether certain forms of research should be considered forbidden knowledge. Do AI scientists and developers have moral responsibilities for their work? Is it more important to develop this research for its own sake, or are there other ethical goods that should take precedence?

“Born This Way”: Strategies for Gay and Fat Acceptance

Birds-eye view of a crowd of people. Some people are in focus and others are blurred.

In light of the recent discussion around Florida’s “Don’t Say Gay” bill, you might have come across the argument that lesbian, bi/pansexual, and gay people did not choose their sexual orientation and cannot alter it, and that this is what makes homophobia wrong. Call this the “born this way” argument. Interestingly, a similar response is often given to the question: what makes anti-fat bias wrong? The argument states that people cannot usually exert control over the size of their body, and diets don’t usually work. So, we shouldn’t blame them for, or expect them to change, something they can’t control.

Are these good answers? While both “born this way” style arguments have some truth to them, I don’t think that either gives us the best strategy for responding to these kinds of questions. Politically, they can only get us so far.

First, there is still some control that individuals can exercise in both cases. Gay people could choose to be celibate or live in a heterosexual marriage, though, of course, those actions are likely to be highly damaging to their happiness. Fat people could choose to continually stay on some diet and access medical interventions, even though they will likely gain the weight back and suffer in the meantime.

This limited control gives the homophobe/anti-fat person a foot in the door. They might argue that gay people should be celibate or force themselves to live in heterosexual relationships and that we can blame them if they fail to do so. Or that fat people should consistently diet and try to change their bodies through any means necessary. If they fail to do so, the anti-fat person can claim that they are blameworthy for not caring about their health.

It should be obvious why these are undesirable outcomes: neither rationale allows the gay person or the fat person to accept and love these core aspects of themselves. Each still effectively marginalizes gay people and fat people. These strategies simply shift the target of blame from the desires/physical tendencies themselves to the person’s response to those desires/physical tendencies. They require that you reject who you love/your own body.

Second, assume that it would be possible to argue that people can’t control their sexuality or weight at all, even to abstain from relationships or go on diets. The “born this way” style of argument blocks blame, but it doesn’t block the general attitudes that it is worse to be gay/fat and that gay/fat people cannot live full and meaningful lives.

Even if being gay or being fat is, or has been, associated with higher health risks (see, for instance, the recent spate of articles on COVID and obesity), that fact alone is insufficient grounds for seeing these social identities as somehow inferior. For instance, failing to use sunscreen can contribute to poor health and is under personal control, and yet no one considers that behavior grounds for discrimination. Additionally, health risks such as AIDS or diabetes are not fully explainable by individual behaviors — they are also informed by public health responses, or a lack thereof, as well as by other material and social consequences of discrimination. Creating stigma does not help public health outcomes, and it actively harms members of marginalized groups.

These negative associations with fat and gay people fail to take into account the kind of joy that fat and gay people experience when they accept themselves and can live full lives. See, for instance, the deep love that queer people have for each other and the loving families that they create, or the kind of joy felt in appreciating one’s fat body and enjoying living in it. Representation of fat and gay people being happy and living good lives is more likely to lead to health and happiness than campaigns to increase stigma.

Third, the “born this way” style argument, while it can be used to block some of the worst oppressive legislation and attitudes, is not the most helpful for a campaign of liberation. But what would an alternative look like? Probably an argument that shows that homophobic/anti-fat attitudes are wrong, because being gay/fat is a legitimate way to be in the world, and gay/fat people deserve equal respect and rights. In such a world in which gay/fat rights are enshrined by law and respected, gay/fat people can flourish.

With this answer, we haven’t simply blocked the ability to blame gay/fat people, we’ve blocked the judgment that there’s something morally bad or blameworthy about being gay/fat in the first instance. We’ve also avoided thorny issues surrounding what control any given individual has over their situation, and we’ve re-centered the need for positive changes to make life better for gay/fat people, to make them equal citizens, and to encourage their friends and family members to love and accept them. Of course, this project will require that we deal with the specific kinds of oppression that differently legible fat people and different sub-categories of LGBTQ+ people face, as well as how these identities can intersect with each other and with other marginalized identities.

This doesn’t mean that we should totally jettison “born this way” style arguments, but it does mean that we need to re-emphasize building and living into the kind of world we want to see. “Born this way” style arguments might be a part of that strategy, but they can’t be the core of it.

Should Work Pay?

Color photograph of Haines Hall at UCLA, a large red brick building with lots of Romanesque arches

“The Department of Chemistry and Biochemistry at UCLA seeks applications for an assistant adjunct professor,” begins a recent job listing, “on a without salary basis. Applicants must understand there will be no compensation for this position.”

The listing has provoked significant backlash. Many academics have condemned the job as exploitative. They have also noted the hypocrisy of UCLA’s stated support of “equality” while expecting a highly qualified candidate with a Ph.D. to work for free. For context, UCLA pays a salary of $4 million to its head men’s basketball coach, Mick Cronin.

UCLA has now responded to the growing criticism, pointing out:

These positions are considered when an individual can realize other benefits from the appointment that advance their scholarship, such as the ability to apply for or maintain grants, mentor students and participate in research that can benefit society. These arrangements are common in academia.

It is certainly true that such arrangements are fairly common in academia. But are they ethical?

The university’s ethical argument is that the unpaid worker receives significant compensation other than pay. For example, having worked at a prestigious university might advance one’s career in the longer term – adding to one’s “career capital.” The implication is that these benefits are significant enough that the unpaid job is not exploitative.

Similar arguments are given by organizations that offer unpaid internships. The training, mentoring, and contacts an intern receives can be extremely valuable to those starting a new career. Some unpaid internships for prestigious companies or international organizations are generally regarded as so valuable for one’s career that they are extremely competitive, sometimes receiving hundreds of applications for each position.

Employers point out that without unpaid internships, there would be fewer internships overall. Companies and organizations simply do not have the money to pay for all these positions. They argue that the right comparison is not between unpaid and paid internships, but between unpaid internships and nothing. This might explain why so many well-known “progressive” organizations offer unpaid positions despite publicly disavowing the practice. For example, the U.N. has famously competitive unpaid internships, as does the U.K.’s Labour Party, a left-wing political party whose political manifesto promises to ban unpaid internships, and whose senior members have compared the practice to “modern slavery.” Not long ago, the hashtag #PayUpChuka trended when Chuka Umunna, a Labour Member of Parliament, was found to have hired unpaid interns for year-long periods.

Besides the sheer usefulness of these jobs, there is also a libertarian ethical case for unpaid positions. If the workers are applying for these jobs, they are doing so because they are choosing to. They must think the benefits they receive are worth it. How could it be ethical to ban or prevent workers from taking jobs they want to take? “It shouldn’t even need saying,” writes Madeline Grant, “but no one is forced to do an unpaid internship. If you don’t like them, don’t take one—get a paid job, pull pints, study, go freelance—just don’t allow your personal preferences to interfere with the freedoms of others.”

On the other side of the debate, opponents of unpaid jobs argue that the practice is inherently exploitative, and that a worker’s willingness to accept the arrangement does not settle the matter. A historical example illustrates the point. The first Roman fire brigade was created by Marcus Licinius Crassus:

Crassus created his own brigade of 500 firefighters who rushed to burning buildings at the first cry for help. Upon arriving at the fire, the firefighters did nothing while Crassus bargained over the price of their services with the property owner. If Crassus could not negotiate a satisfactory price, the firefighters simply let the structure burn to the ground.

Any sensible homeowner would accept almost any offer from Crassus, so long as it was less than the value of the property. The homeowner would choose to pay those prices for Crassus’ services. But that doesn’t make it ethical. It was an exploitative practice – the context of the choice matters. Likewise, employers may find workers willing to work without compensation. But that willingness to work without compensation could be a sign of the worker’s desperation, rather than his capacity for autonomous choice. If you need to have a prestigious university like UCLA on your C.V. to have an academic career, and if you can’t get a paid position, then you are forced to take an unpaid adjunct professorship.

Critics of unpaid jobs also point out that such practices deepen economic and social inequality. “While internships are highly valued in the job market,” notes Rakshitha Arni Ravishankar, “research also shows that 43% of internships at for-profit companies are unpaid. As a result, only young people from the most privileged backgrounds end up being eligible for such roles. For those from marginalized communities, this deepens the generational wealth gap and actively obstructs their path to equal opportunity.” Not everyone can afford to work without pay. If these unpaid positions are advantageous, as their defenders claim, then those advantages will tend to go toward those who are already well-off, worsening inequality.

There are also forms of unpaid work which are almost universally seen as ethical: volunteering, for instance. Very few object to someone with spare time willingly helping a charity, contributing to Wikipedia, or cleaning up a local park. The reason for this is that volunteering generally lacks many of the ethical complications associated with other unpaid jobs and internships. There are exceptions; some volunteer for a line on their C.V. But volunteering tends to be done for altruistic reasons rather than for goods like career capital and social connections. This means that there is less risk of exploitation of the volunteers. Since volunteers do not have to worry about getting a good reference at the end of their volunteering experience, they are also freer to quit if work conditions are unacceptable to them.

On the ethical scale, somewhere between unpaid internships and volunteering are “hidden” forms of unpaid work that tend to be overlooked by economists, politicians, and society more generally. Most cooking, cleaning, shopping, washing, childcare, and caring for the sick and disabled represent unpaid labor.

Few consider it unethical to perform these forms of unpaid work, or to ask family members to help carry them out. But it is troubling that those who spend their time doing unpaid care work for the sick and disabled are put at a financial disadvantage compared to their peers who take paid work instead. An obvious solution is a “carer’s allowance”: a government payment, funded by general taxation, to those who spend time each week taking care of others. A very meager version of this allowance (roughly $100/week) already exists in the U.K.

These “hidden” forms of unpaid work also have worrying implications for gender equality, as they are disproportionately performed by women. Despite having near-equal representation in the workforce in many Western countries, women perform the majority of unpaid labor, a phenomenon referred to as the “double burden.” For example, an average English female 15-year-old can expect, over her lifetime, to spend more than two years longer performing unpaid care work than the average male 15-year-old. This statistic is not an outlier. The Human Development Report, studying 63 countries, found that 31% of women’s time is spent doing unpaid work, compared to 10% of men’s. A U.N. report finds that, in developed countries, women spend an average of 3 hours and 30 minutes a day on unpaid work and 4 hours and 39 minutes on paid work, while men spend only 1 hour and 54 minutes on unpaid work and 5 hours and 42 minutes on paid work. Finding a way to make currently unpaid work pay, such as a carer’s allowance, could also be part of the solution to this inequality problem.

Is unpaid work ethical? Yes, no, and maybe. Unpaid work spans a wide range of the ethical spectrum. At one extreme, there are clear cases of unpaid work which are morally unproblematic, such as altruistically volunteering for a charity or cooking yourself a meal. At the other extreme, there are cases where unpaid work is clearly unethical exploitation: cases of work that ought to be paid but where employers take advantage of their workers’ weak bargaining positions to deny them the financial compensation to which they are morally entitled. And many cases of unpaid work fall somewhere between these two extremes. In thinking about these cases, we have no alternative but to look in close detail at the specifics: at the power dynamics between employers and employees, the range and acceptability of the options available to workers, and the implications for equality.

Insulin and American Healthcare

photograph of blood sugar recording paraphernalia

On March 31st, the Affordable Insulin Now Act was passed by the House and is now being considered by the Senate. The House bill only applies to people who already have insurance, and caps out-of-pocket costs for insulin at 35 dollars per month. It does not address the uninsured, nor does it directly address the retail price of the drug. For advocates, it stands as a limited but hopefully effective response to the surging cost of life-saving insulin in the U.S., the crucial medicine for the management of diabetes.

Sponsoring congresswoman Angie Craig (D) of Minnesota stated:

Certainly our work to lower drug costs and expand access to healthcare across this nation is not done, but this is a major step forward in the right direction and a chance to make good on our promises to the American people.

It is also true to the original promise of insulin.

Insulin was isolated in 1921, and the first patents for it were sold to the University of Toronto for the price of one dollar as part of a defensive maneuver to ensure insulin could be produced widely and affordably. Nonetheless, insulin became a goldmine. Cheap to manufacture and widely used, it could bring home a tidy profit even at low prices. Since the early days, three pharmaceutical companies, American Eli Lilly, French Sanofi, and Danish Novo Nordisk, have produced the vast majority of global insulin and captured almost the entire U.S. market. Production costs remain low, but insulin is no longer sold at low prices, at least not in the U.S. Eli Lilly’s popular Humalog insulin is sold to wholesalers at $274.70 per vial, compared to $21 when first introduced in 1996. Further costs accrue as insulin makes its way through the thicket of wholesalers, pharmacy benefit managers, and pharmacies before reaching customers.

One question that emerges from the whole mess is: who is to blame for this development? Here, blame needs to be understood in two senses. The first concerns all those actors who are partly causally responsible, such that if they had behaved differently, the price of insulin would not be so high. American healthcare economics is ludicrously complex, and a discussion of the price of insulin quickly blossoms into one about biologics and biosimilars, generics, pricing power, patents, insurance, the FDA, pharmacy benefit managers, and prescription practices. What idiosyncrasies of the American healthcare system allow a drug’s price to increase 1000% without being undercut by competition or stopped by the government? Insulin is not the only example.

But there is a second sense of blame, and that is which actors most directly chose to increase the costs of a life-saving medicine and thus are potentially deserving of moral opprobrium.

The big three manufacturers, with their overwhelming market share and aggressive pricing strategies, are clear targets. However, when investigated by the Senate in 2019, they pointed fingers at pharmacy benefit managers. These companies serve as intermediaries between manufacturers and health insurance companies. Like the market for insulin manufacturing, the pharmacy benefit manager market is highly consolidated, with only three major players: CVS Caremark, Express Scripts, and OptumRx. They can benefit directly from higher manufacturer prices by raking in fees or rebates. While noting that only Eli Lilly and CVS Caremark fully responded to requests for documents, the Senate investigation found problematic tactics on the part of both pharmacy benefit managers and manufacturers, such as leveraging market power and raising prices in lockstep.

The bill, it should be noted, is most directly targeted at insurance companies, rather than these other actors. This invites both an economic and an ethical objection. The economic objection is that if insurance companies are forced to cap prices and absorb the cost of insulin, they may simply turn around and raise premiums to recoup profit. The ethical objection is that it is unfair for the government to intercede and force the costs of, say, aggressive pricing by the manufacturer onto some other party. The caveat to the ethical objection is that each of the three major pharmacy benefit managers has merged with a major insurance company.

What are the business ethics of this all? One approach would be stakeholder theory, which holds that corporate responsibility needs to balance the interests of multiple stakeholders including employees, shareholders, and customers. Pricing a medically-necessary drug out of the range of some customers would presumably be a non-starter from a stakeholder perspective, or at least extremely contentious.

The more permissive approach would be the Friedman doctrine. Developed by the economist Milton Friedman, it argues that the only ethical responsibility of companies is to act in the interest of their shareholders within the rules of the game. This is, unsurprisingly, controversial. Friedman took it as all but axiomatic that the shareholder’s interest is to make as much money as possible as quickly as possible, but the choice is rarely put this bluntly: “Would you, as a shareholder, be okay with slightly lower profit margins if it meant more diabetics would have access to their insulin?” (For Friedman this moral conundrum is not supposed to occur, as his operating assumption is that the best way to achieve collective welfare is through individuals or firms chasing their own interests in the free market.)

Separate from questions of blame and business ethics are the grounds for government intervention in insulin prices. Two approaches stand as most interesting and come at the problem from very different directions. The first is a right to healthcare. Healthcare is what is sometimes referred to as a positive right, which consists of an entitlement to certain resources. There is as yet no formal legal right to healthcare in the United States, but Democratic lawmakers increasingly speak in this framework. Obama contended, “healthcare is not a privilege to the fortunate few, it is a right.” Different ethicists justify the right to healthcare in different ways. For example, Norman Daniels has influentially argued that a right to healthcare serves to preserve meaningful equality of opportunity by shielding us from the caprice of illness. A slightly narrower position would be that the government has a compelling interest in promoting healthcare, even if it does not reach the level of a right.

A completely different ground for intervention would be maintaining fair markets. This gets us to a fascinating split in reactions to insulin prices. Either there is too much free market, or not enough. Advocates of free markets criticize the regulatory landscape that makes it difficult for generic competitors to enter the market, or the use of incremental changes to insulin of contested clinical relevance to maintain drug patents in a practice called “evergreening.”

The “free market” is central to the modern American discussion of healthcare, as it allows healthcare to be discussed not in terms of rights – the claim that everyone deserves healthcare – but in terms of economics. Republican politicians do not argue that people do not deserve healthcare, but rather that programs like Medicare for All are not good ways to provide it.

At the center of this debate, however, is an ambiguity in the term “free market.” On the one hand, a free market describes an idealized economic system with certain features such as low barriers to entry, voluntary trade, and prices that respond to supply and demand. This is the sense of free markets found in introductory textbooks like Gregory Mankiw’s Principles of Macroeconomics. This understanding of a free market is at best a regulative ideal: we can aim at it, but we can never actually achieve it, and all actual markets will depart from the theoretical free market to some degree or another.

On the other hand, the free market is used to refer to a market free from regulations and interference, especially government regulations, although this understanding does not follow from the theoretical conceptualization of the free market. Oftentimes government interference – such as breaking up an oligopoly – is precisely what is needed to move an actual market towards a theoretical free market. Even for advocates of free market economics, interventions and regulations should be evaluated based on their effect, not based on their status as interventions. From this perspective, the current insulin price represents not an ethical failure but a market failure and justifies intervention on those grounds.

The current bill, however, does not try to correct the market, but instead represents a more direct attempt to secure insulin prices (for people with insurance). This is an encouraging development for those who believe essential goods like insulin should not be subject to the whims of the market. More discouragingly, the bill is likely dead in the Senate.

The Philosophical Underpinning of “War Crimes” Statutes

photograph of destroyed apartment buildings

Over the past week, Russian forces have withdrawn from the areas surrounding Kyiv and Chernihiv, both located in Northern Ukraine. Western intelligence agencies believe this is a repositioning, not a retreat. The withdrawal, however, was accompanied by disturbing reports, to put it mildly. Accusations against Russian soldiers reported by Human Rights Watch include executions, repeated rape, torture, threats of violence, and destruction of property aimed against civilians in the area. These revelations come after air strikes against targets such as hospitals and theaters housing civilians.

The international outcry has been severe. U.S. President Joe Biden explicitly referred to Putin as a “war criminal” and called for a war crimes trial. Boris Johnson, Prime Minister of the U.K., stated this conduct “fully qualifies as a war crime.” President Volodymyr Zelensky of Ukraine accused Russia of genocide. However, Russian officials have dismissed the outcry, going so far as to claim that the scenes were staged.

These acts seem to violate the Geneva Conventions, specifically the Fourth Geneva Convention, which establishes protections for civilians in war zones. The convention prohibits violence towards civilians, taking them as hostages, treating them in degrading and/or humiliating ways, and extra-judicial punishments like executions. When violations occur, the Convention tasks its parties with prosecuting responsible individuals through their own legal systems or deferring to international courts, like the International Criminal Court, when appropriate.

It is one thing to recognize nations have agreed to these treaties. However, legal agreement is different from morality. So, we should ask: What moral reason is there to avoid these practices?

A simple justification is a consequentialist one. Targeting civilians massively increases the suffering and death that wars inflict. The idea behind war crimes statutes may simply be to limit the horrific consequences of war by ensuring that the only people targeted are those who are fighting it.

However, consequentialist justifications can always cut the opposite way. One might try to argue that, in the long run, unrestricted warfare could have better consequences than regulated, limited warfare. Much as the possibility of nuclear annihilation has prevented wars between major powers in the latter half of the 20th century and onward, perhaps the possibility of any war becoming (even more) horrific would reduce the number of wars overall.

I am very skeptical of this line of reasoning. Nonetheless, there is a possibility, however remote, that it is correct. So, we should look elsewhere to justify war crimes statutes.

Many have thought long and hard about the morality of conduct in war – jus in bello. These “just war” theorists often determine what considerations justify the use of violence at the individual level and “scale up” this explanation to the level of states. What can we learn from these reflections?

First, violence is only justified against a threat. Suppose someone charges at you with harmful intent, and you could stop the assailant only by striking an innocent bystander.

Would stopping the attack justify striking the innocent bystander? No, this seems wrong. And it remains wrong even if harming the innocent would produce better consequences overall – the fact that you and your assailant would otherwise both be gravely injured does not justify even minorly injuring the bystander.

So, most just war theorists propose a prohibition on the direct targeting of non-combatants. Perhaps the deaths of civilians may be justifiable if they are an unintended, regrettable consequence of an act that produces a desirable outcome. But military decision-makers are morally forbidden from directly and intentionally targeting civilians – an idea known as the doctrine of double effect.

Avoiding the direct targeting of civilians does not, however, give decision-makers moral license to do whatever else they like. Most just war theorists endorse a second criterion called proportionality: the goods gained by an act that unintentionally harms civilians must be proportionate to the harms. Suppose that bombing a mountain pass would slow an advancing army by a day, but would also destroy a village, killing at least one thousand civilians. This act does not target civilians, but it still seems wrong; delaying an advance by a day is not proportionate to the lives of one thousand innocents.

Finally, many just war theorists endorse a criterion of necessity. Even if a decision meets the other two criteria, it should not be adopted unless it is required to produce the good in question. Consider the case of the assailant again. You might be justified in defending yourself by shooting the attacker. However, if you also had a fast-acting tranquilizer gun, this would change things. You could produce the same good – stopping the attack – without producing the same harm. Since the harm of shooting the attacker is no longer necessary, it is no longer permitted.

Let’s extend this to war by re-imagining the mountain pass example. Suppose that the bombing would instead kill just one or two civilians. But we could also render the road impassable by laying road spikes and caltrops and digging covered trenches, resulting in no civilian casualties. So bombing the mountain pass, although not targeting civilians and now proportional, would nonetheless be unnecessary to achieve the goal of delaying the opposing army’s advance. As a result, it would not be justified.

With these criteria in hand, we can now clearly see that many of the Russian military’s actions are not just illegal; they also fail to meet the most minimal standards of jus in bello. Many acts, particularly those in Bucha, directly targeted civilians. As noted earlier, refraining from this is the absolute minimum for moral justification. It is also unclear what purpose, if any, acts like executing civilians serve. Since Russian forces have now withdrawn from these areas, they clearly did not achieve whatever objective they were aimed at, unless the goal was merely to terrorize civilians (as the White House claims). But even this might undermine the Russian effort; why would the Ukrainian people put themselves at the mercy of a military that is unwilling to protect civilians?

Will anyone be held to account? It depends on what you mean. The Biden administration has announced new sanctions; the EU has as well and is proposing additional measures to member states. So there will be at least economic consequences.

Most, however, would like to see the leaders behind these decisions face punishment. Unfortunately, this seems less likely. Russia is party to the Geneva Convention. But in 2019 President Vladimir Putin revoked Russia’s ratification of a protocol allowing members of an independent commission to investigate alleged violations of the Convention. He claimed that such investigations may be politically motivated. This sets the stage for a textbook example of circular reasoning – future investigations will be politically motivated because the Russian regime is not involved with them, and the Russian regime did not want to be involved because these investigations are politically motivated.

Unless the current regime feels compelled to punish the decision-makers directly responsible for these acts (a possibility that strikes me as very unlikely), these crimes will likely go unpunished. Perhaps, in time, a new regime will take power in Russia and will seek to at least acknowledge and investigate these crimes as part of reconciliation. Until then, the unlikelihood of punishment should not stop us from labeling atrocities for what they are, lest we grow numb to them.

Is the Pain of Wild Animals Always Bad?

Close-up color photograph of a screwworm fly on a green leaf.

Should humans intervene to prevent wild animals from suffering? This question has received some attention as of late. Consider, for example, Dustin Crummett’s recent article here at the Post.

In response to this question, I suggest that it is not clear which types of animal suffering are bad, and consequently it is not clear that human beings ought to intervene on animals’ behalf. I will outline what I think are several types of pain and explain why, even with these distinctions in hand, it remains unclear whether human beings should intervene.

Before we begin, notice what this question is not. It is not a question of negative obligation: “should humans act in ways that cause animal suffering?” If we answer “no,” then human beings have a negative obligation not to cause harm. This question of negative obligation arises, for instance, in the recent prohibition of a geothermal project in Nevada, a project which could threaten an endangered species of toad.

Instead, the present question is a positive one: when, if ever, should humans intervene to prevent wild animals from suffering? Crummett’s example of the New World screwworm is poignant and motivates us to intervene on behalf of suffering animals. The New World screwworm causes excruciating pain for its host, and its elimination would not apparently result in ecological harm. In other words, its elimination would only seem to benefit the would-be hosts.

As Crummett argued, human beings ought to reduce wild-animal suffering. To make this point, Crummett entertains an example of a dog that experiences cold, disease, and hunger before dying at an early age. He then uses this example to discern what is bad about the situation, and what would be good about helping such an animal. He writes,

Why is what happens to the dog bad? Surely the answer is something like: because the dog has a mind, and feelings, and these events cause the dog to experience suffering, and prevent the dog from experiencing happiness. Why would the person’s helping the dog be good? Surely the answer is something like: because helping the dog helps it avoid suffering and premature death, and allows it to flourish and enjoy life.

Though this all seems intuitively plausible to me, I remain unconvinced. Even if I assume (for the sake of argument) that humans should prevent animal suffering, it is not clear what counts as suffering.

When I reflect upon pain more generally, it is not apparent to me that all kinds of pain are bad. Sure, I don’t like experiencing pain (except for going to the gym, perhaps). But we are talking about morals and value theory, not experience — when something is morally bad, it is not necessarily reducible to my experiential preference.

So, are all pains bad? Consider some different types of pain. In his recent monograph, philosopher David S. Oderberg distinguishes between three types of pain (distinctions not unlike the ones which St. Augustine posits in his little book, On the Nature of Good):

    1. Useful pain;
    2. Pain achieving;
    3. Useless pain.

A useful pain alerts you to something for a good reason. For example, it is useful to experience pain when you burn your hand on a hot stovetop; it is also useful to experience the pain that accompanies going to the gym.

“Pain achieving” is the pain that can accompany the successful exercise of an organism’s natural operation or function. For example, pain achieving is the pain a child experiences with growing pains or when growing teeth.

Useless pain, in contrast, is pain that may alert you to an issue but serves no purpose. For example, a useless pain is the pain of chronic nerve damage or of a phantom limb. Such pain is useless either because the alert it gives cannot successfully motivate the individual to react, or because there is no underlying issue or malfunction of the body to account for it.

According to Oderberg, only useless pain is bad. While the first two kinds of pain might be unpleasant for the individuals in question, they are not always bad. Indeed, it is good that we experience a high degree of pain when we burn our hands on stovetops — why else would we move them? Surely, if we as human beings only had red lights go off in our peripheral vision whenever we were burned, it would not be as motivating.

Of the three options, Oderberg’s position that only useless pains are bad seems correct.

But notice a further complication. Even when a pain serves a further good, it can be bad in itself. As philosopher Todd Calder points out, while money can be good for its utility, it is intrinsically neutral. So too with pain: it might be good for its utility, yet still bad in itself.

This distinction between types of value explains why pains of utility can still be bad in themselves. While the pain of a sprained ankle is useful because it alerts me to the injury, it can still be bad in itself as a painful experience.

With these distinctions in mind, we come back to the original question: Should humans intervene in wild animal suffering?

It seems that the second distinction, between intrinsic value and utility, does not help us here. For if all pain is intrinsically bad, and human beings ought to prevent all pain, we face moral overload. This is unrealistic and far too onerous. Moreover, this conclusion would require us to intervene in all instances of pain, without discriminating between kinds of pain and degrees of pain. Are we really to consider the case of an animal with a thorn in its side as serious as the case of an animal with a New World screwworm? Certainly not.

The first distinction instead offers a clear answer to the original question: Should humans intervene in wild animal suffering? Only if it is bad. And is the suffering of wild animals bad? If the suffering in question is an instance of useless pain, then yes.

To achieve a resounding “yes” to the original question, we need two things. First, we need a good reason for the assumption we started off with: that human beings are obliged to prevent animal suffering because it is bad (and such prevention amounts to a good act). Is this the case? I have not yet seen a good reason to believe it. Second, we need to see that there are instances of useless pain in wild animal suffering. Could the case of the New World screwworm count as an instance of useless pain? Perhaps. But it looks like it can count as an instance of ‘pain achieving’ as well. Because of this, it is not clear that human beings ought to intervene on behalf of wild animals.

Are Self-Spreading Vaccines the Solution to Potential Future Pandemics?

photograph of wild rabbits in the grass

Human beings are engaging in deforestation on a massive scale. As they do so, they come into contact with populations of animals that were previously living their lives unmolested in the forest. Humans are also increasingly gathering large numbers of animals in small spaces to raise for food. Both of these conditions are known to hasten the spread of disease. For instance, COVID-19 is a zoonotic disease, which means that it has the capacity to jump from one animal species to another. Many experts believe that the virus jumped from horseshoe bats to an intermediary species before finally spreading to human beings. As a result of human encroachment into wild spaces, experts anticipate the rapid spread of other zoonotic diseases in the near future.

In response to this concern, multiple teams of scientists are working on developing “self-spreading vaccines.” The technology to do so has existed for over 20 years. In 1999, scientists conducted an experiment designed to vaccinate wild rabbits against two particularly deadly rabbit diseases: rabbit hemorrhagic disease and myxomatosis. The process, both in 1999 and today, involves “recombinant viruses,” which means that strands of DNA from different organisms are broken and recombined. In the case of the rabbit vaccine, a protein from the rabbit hemorrhagic disease virus was inserted into the myxoma virus, which is known to spread rapidly among rabbit populations. The resulting virus was injected into roughly 70 rabbits. A little over a month later, 56% of the rabbits in the population had developed antibodies for both viruses.

Today, scientists are pursuing self-spreading vaccine technology for Ebola, bovine tuberculosis, and Lassa virus. The research is currently being conducted on species-specific viruses rather than on those that have the capacity to jump from one species to another. However, as the research progresses, it could provide a mechanism for stopping a future pandemic before it starts.

Critics of this kind of program believe that we should adopt the Precautionary Principle, which says that we should refrain from developing potentially harmful technology until we know to a reasonable degree of scientific certainty how the technology will work and what the consequences will be. We do not yet know how these vaccines would function in the wild and how they might potentially affect ecosystems. It may be the case that, without these viruses active in the population, some species will become invasive and end up threatening the biodiversity of the given ecosystem.

On a related note, some argue that we should not use wild animals as test subjects for this new technology. Instead of encroaching further into the land occupied by these animals and then injecting them with vaccines that have not been tested, we should instead try to roll back the environmental damage that we have done. These critics raise similar concerns to those that are raised by critics of geoengineering. When a child messes up their room, we don’t simply allow them to relocate to the bedroom across the hall — we insist that they clean up their mess. Instead of developing increasingly intrusive technology to prevent disease spread from one species to another, we should simply leave wild animals alone and do what we can to plant trees and restore the lost biodiversity in those spaces. If that means that we need to make radical changes to our food systems in order to make such a strategy feasible, then that’s what we need to do.

In the case of genetically engineered crops, there have been some unanticipated consequences for local ecosystems. There have been instances of “transgene escape,” which means that some of the genetic features of an engineered organism are spread to other plants in the local ecosystem. In the case of crops that have been genetically modified to be pesticide resistant, this has led to the emergence of certain “superweeds” that are difficult to eliminate because they, too, are resistant to pesticides. That said, most of the soy and corn grown in the United States are crops that have been genetically modified to be pesticide resistant with very few negative consequences. Nevertheless, in the case of crops, we are dealing with life that is not sentient and cannot suffer. When we make use of these vaccines, we are delivering genetically modified deadly diseases to populations of animals without fully understanding what the consequences might be or if there will be a similar kind of transgene escape that has more serious side effects.

In response to this concern, advocates of the technology argue that we don’t have time to press pause or to change strategy. Deforestation has happened, and we need to be prepared to deal with the potential consequences. The COVID-19 pandemic had devastating impacts on human health and happiness. In addition to the death and suffering it caused, it also wreaked economic havoc on many people. It turned up the temperature of political battles and caused the ruin of many friendships and family units. Advocates of self-spreading vaccines argue that we should do everything in our power to prevent something like this from happening again.

Advocates of the policy also argue that these vaccines would benefit not only human beings, but wild animals as well. They could potentially eradicate serious diseases among animal populations. This could lead to a significant reduction in suffering for these animals. As a practical matter, wild animals can be very difficult to catch, so relying on traditional vaccination methods can prove quite challenging. This new method would only involve capturing a handful of animals, who could then spread the vaccine to the rest of the population.

Some object to this strategy because of a more general concern about the practice of genetic engineering. Those who offer in principle critiques of the process are often concerned about the hubris it demonstrates or worry that human beings are “playing God.” In response, advocates of genetic technology argue that we modify the natural world for our purposes all the time. We construct roads, build hospitals, and transplant organs, for example. The fact that the world does not exist in a natural state unaltered by human beings is only a bad state of affairs if it brings about negative consequences.

This is just one debate in environmental and biomedical ethics that motivates reflection on our new relation to the natural world. What is it to be environmentally virtuous? Is it ethical to use developing technology to modify the natural world to be just the way that human beings want it to be? Ought we to solve problems we have caused by altering the planet and the life on it even further? Or, instead, does respect for nature require us to restore what we have destroyed?

Should We Intervene to Help Wild Animals?

photograph of deer in the snow

The parasitic larvae of the New World screwworm consume the flesh of their living hosts, causing pain which is “utterly excruciating, so much so that infested people often require morphine before doctors can even examine the wound.” At any given time, countless animals suffer this excruciating pain. But not in North America – not anymore. Human beings have eliminated the New World screwworm from North America. This was done to protect livestock herds, but innumerable wild animals also benefit. In fact, eliminating the screwworm from North America has had “no obvious ecological effects.”

All of us should be happy that wild animals in North America no longer suffer the screwworm’s torments. I argued in an earlier post that if something has conscious experiences, then that entity matters morally. Suppose some stray dog experiences cold, hunger, and disease before dying at two years old. This is a bad thing, and if some person had instead helped the dog and given it a nice life, that would have been a good thing. Why is what happens to the dog bad? Surely the answer is something like: because the dog has a mind, and feelings, and these events cause the dog to experience suffering, and prevent the dog from experiencing happiness. Why would the person’s helping the dog be good? Surely the answer is something like: because helping the dog helps it avoid suffering and premature death, and allows it to flourish and enjoy life. But then, the exact same thing can be said about wild animals who do not suffer from the screwworm because humans drove it out.

So we have helped many wild animals by eliminating the New World screwworm, and we should be happy about this. The question then becomes: what if we intentionally intervened in the natural world to help wild animals even further? In South America, they still suffer from the New World screwworm. And they suffer from many other things all over the world: other parasites, disease, starvation, the elements, predation, etc. In principle, there may be quite a lot we can do to alleviate all this. We could eliminate other harmful parasites. We could distribute oral vaccines through bait. (We already do this to combat rabies among wild animals – again, this is for self-interested reasons, so that they don’t serve as a reservoir of diseases which can affect humans. But we could expand this for the sake of the animals themselves.) In the future, perhaps we will even be able to do things which sound like goofy sci-fi stuff now. Perhaps, say, we could genetically reengineer predators into herbivores, while also distributing oral contraceptives via bait to keep this from causing a catastrophic population explosion.

If we can do these things and thereby improve the condition of wild animals, I think we should. In fact, I think it is extremely important that we do so. There are trillions of wild vertebrates, and perhaps quintillions of wild invertebrates. We don’t know exactly where the cut-off for the ability to suffer is. But because there is so much suffering among wild animals, and because there are so many of them, it seems entirely plausible that the overwhelming majority of suffering in the world occurs in the wild. Since this suffering is bad, it is very important that we reduce it, insofar as we can.

Of course, we’d better make sure we know what we’re doing. Otherwise, our attempts to help might, say, upset the delicate balance of some ecosystem and make things worse. But this is not a reason to ignore the topic. It is instead a reason to investigate it very thoroughly, so that we know what we’re doing. The field of welfare biology investigates these questions, and organizations like the Wild Animal Initiative conduct research into how we can effectively help wild animals. It may turn out, of course, that some problems are just beyond our ability to address. But we won’t know which ones those are without doing research like this.

Many people react negatively to the idea that we should intervene to help wild animals. Sometimes they suggest that what happens in the natural world is none of our business, that we have no right to meddle in the affairs of wild animal communities. But aiding wild animal communities is merely doing what we would want others to do for our own communities, were they afflicted with similar problems. If my community suffered widespread disease, starvation, infant mortality, parasitism, attacks from predatory animals, etc. and had no way to address any of these problems on its own, I would be quite happy for outsiders who had the ability to help to step in.

Others worry that intervention would undermine the value of nature itself. They think the untamed savagery of the natural world is part of its grandeur and majesty, and that “domesticating” the natural world by making it less harsh would decrease its value. But, as the philosopher David Pearce has noted, this is plausibly due to status quo bias: an emotional bias in favor of however things currently happen to be.

Suppose we lived in a world where humans had greatly reduced disease, starvation, parasitism, etc. among wild animals, thereby allowing a much higher proportion of wild animals to live long, flourishing lives. Does anyone really think that people in that world would want to put those things back, so as to restore the majesty and grandeur of nature? Surely not! And anyway, I am not at all sure that improving the condition of wild animals would make them less grand or majestic. If someone, say, finds some baby birds whose mother has died and cares for them, are they making nature less grand or majestic – even a little bit?

Still others pose a religious objection: they worry that intervening in nature would mean arrogantly “playing God,” interfering in the natural order God established because we think we can do better. But we already use technology to protect ourselves, and our domestic animals, from natural threats – disease, parasites, predators, etc. And if anything, people think God wants us to do that, likes it when we express love for others by helping them avoid suffering. Why should the situation with wild animals be different? In fact, in this paper, I gave a theological argument in favor of intervening to help wild animals. I note that Judaism, Christianity, and Islam have traditionally viewed humans as having been given a special authority over the world by God, and then argue that, if anything, this gives us a special obligation to exercise this authority in helping wild animals.

So: we should do what we can to help wild animals. As I’ve said, there is quite a lot of work to be done to figure out what is the best way to do this. But that just makes that work more urgent.

The Scourge of Self-Confidence

photograph of boy cliff-jumping into sea

Our culture is in love with self-confidence — defined by Merriam-Webster as trust “in oneself and in one’s powers and abilities.” A Google search of the term yields top results with titles such as “Practical Ways to Improve Your Confidence (and Why You Should)” (The New York Times), “What is Self-Confidence? + 9 Ways to Increase It” (positivepsychology.com), and “How to Be More Confident” (verywellmind.com). Apparently, self-confidence is an especially valued trait in a romantic partner: a Google search for “self-confidence attractive” comes back with titles like “Why Confidence Is So Attractive” (medium.com), “4 Reasons Self-Confidence is Crazy Sexy” (meetmindful.com), and “6 Reasons Why Confidence Is The Most Attractive Quality A Person Can Possess” (elitedaily.com).

I will argue that self-confidence is vastly, perhaps even criminally, overrated. But first, a concession: clearly, some degree of self-confidence is required to think or act effectively. If a person has no faith in her ability to make judgments, she won’t make many of them. And without judgments, thinking and reasoning is hard to imagine, since judgments are the materials of thought. Similarly, if a person has no faith in her ability to take decisions, she won’t take many of them. And since decisions are necessary for much intentional action, such a person will often be paralyzed into inaction.

Nevertheless, the value that we place on self-confidence is entirely inappropriate. The first thing to note is that behavioral psychologists have gathered a mountain of evidence showing that people are significantly overconfident about their ability to make correct judgments or take good decisions. Representative of the scholarly consensus around this finding is a statement in a frequently-cited 2004 article published in the Journal of Research in Personality: “It has been consistently observed that people are generally overconfident when assessing their performance.” Or take this statement, from a 2006 article in the Journal of Marketing Research: “The phenomenon of overconfidence is one of the more robust findings in the decision and judgment literature.”

Furthermore, overconfidence is not a harmless trait: it has real-world effects, many of them decidedly negative. For example, a 2013 study found “strong statistical support” for the presence of overconfidence bias among investors in developed and emerging stock markets, which “contribut[ed] to the exceptional financial instability that erupted in 2008.” A 2015 paper suggested that overconfidence is a “substantively and statistically important predictor” of “ideological extremeness” and “partisan identification.” And in Overconfidence and War: The Havoc and Glory of Positive Illusions, published at the start of the second Iraq War, the Oxford political scientist Dominic Johnson argued that political leaders’ overconfidence in their own virtue and ability to predict and control the future significantly contributed to the disasters of World War I and the Vietnam War. And of course, the sages of both Athens and Jerusalem have long warned us about the dangers of pride.

To be sure, there is a difference between self-confidence and overconfidence. Drawing on the classical Aristotelian model of virtue, we might conceive of “self-confidence” as a sort of “golden mean” between the extremes of overconfidence and underconfidence. According to this model, self-confidence is warranted trust in one’s own powers and abilities, while overconfidence is an unwarranted excess of such trust. So why should the well-documented and baneful ubiquity of overconfidence make us think we overvalue self-confidence?

The answer is that valuing self-confidence to the extent that we do encourages overconfidence. The enormous cultural pressure to be and act more self-confident in order to achieve at work, attract a mate, or make friends is bound to lead to genuine overestimations of ability and to more instances of people acting more confident than they really are. Both outcomes risk bringing forth the rotten fruits of overconfidence.

At least in part because we value self-confidence so much, we have condemned ourselves to suffer the consequences of pervasive overconfidence. As I’ve already suggested, my proposed solution to this problem is not a Nietzschean “transvaluation” of self-confidence, a negative inversion of our current attitude. Instead, it’s a more classical call for moderation: our attitude towards self-confidence should still be one of approval, but approval tempered by an appreciation of the danger of encouraging overconfidence.

That being said, we know that we tend to err on the side of overconfidence, not underconfidence. Given this tendency, and assuming, as Aristotle claimed, that virtue is a mean “relative to us” — meaning that it varies according to a particular individual’s circumstances and dispositions — it follows that we probably ought to value what looks a lot like underconfidence to us. In this way, we can hope to encourage people to develop a proper degree of self-confidence — but no more than that.

Is It Time to Nationalize YouTube and Facebook?

image collage of social media signs and symbols

Social media presents several moral challenges to contemporary society, on issues ranging from privacy to the manipulation of public opinion via adaptive recommendation algorithms. One major ethical concern with social media is its addictive tendencies. For example, Frances Haugen, the whistleblower from Facebook, has warned about the addictive possibilities of the metaverse. Social media companies design their products to be addictive because their business model is based on an attention economy. Governments have struggled with how to respond to the dangers social media creates, floating measures such as independent oversight bodies and new privacy regulations to limit its power. But does the solution to this problem require changing the business model?

Social media companies like Facebook, Twitter, YouTube, and Instagram profit from an attention economy. This means that the primary product of social media companies is the attention of the people using their service, which these companies can leverage to make money from advertisers. As Vikram Bhargava and Manuel Velasquez explain, because advertisers represent the real customers, corporations are free to be more indifferent to their users’ interests. What many of us fail to realize is that,

“built into the business model of social media is a strong incentive to keep users online for prolonged periods of time, even though this means that many of them will go on to develop addictions…the companies do not care whether it is better or worse for the user because the user does not matter; the user’s interests do not figure into the social media company’s decision making.”

As a result of this business model, social media is often designed with persuasive technology mechanisms. Intermittent variable rewards, nudging, and the erosion of natural stopping cues help create a kind of slot-machine effect, and the use of adaptive algorithms that take in user data to customize the user experience only reinforces this. As a result, experts have increasingly recognized social media addiction as a problem. A 2011 survey found that 59% of respondents felt they were addicted to social media. As Bhargava and Velasquez report, social media addiction mirrors many of the behaviors associated with substance addiction, and neuroimaging studies show that the same areas of the brain are active as in substance addiction. There has also been a marked increase in teenage suicide since the introduction of social media, potentially as a consequence of this addiction.

But is there a way to mitigate the harmful effects of social media addiction? Bhargava and Velasquez suggest that measures like addiction warnings, or prompts that make platforms easier to quit, can be important steps. Many have argued that breaking up social media companies like Facebook is necessary because they function like monopolies. However, it is worth considering that breaking up such businesses to increase competition in a field centered around the same business model may not help. If anything, greater competition in the marketplace may only yield new and “innovative” ways to keep people hooked. If the root of the problem is the business model, perhaps it is the business model which should be changed.

For example, since in an attention economy business model the users of social media are not the customers, one way to make social media companies less incentivized to addict their users is to make them customers. Should social media companies using adaptive algorithms be forced to switch to a subscription-based business model? If customers paid for Facebook directly, Facebook would still have an incentive to provide a good experience for users (now its customers), but it would have less incentive to focus its efforts on monopolizing users’ attention. Bhargava and Velasquez, for example, note that on a subscription streaming platform like Netflix, it is immaterial to the company how much users watch; “making a platform addictive is not an essential feature of the subscription-based service business model.”

But there are problems with this approach as well. As I have described previously, social media companies like Meta and Google have significant abilities to control knowledge production and knowledge communication. Even with a subscription model, the ability for social media companies to manipulate public opinion would still be present. Nor would it necessarily solve problems relating to echo-chambers and filter bubbles. It may also mean that the poorest elements of society would be unable to afford social media, essentially excluding socioeconomic groups from the platform. Is there another way to change the business model and avoid these problems?

In the early 20th century, during an age of the rise of a new mass media and its advertising, many believed that this new technology would be a threat to democracy. The solution was public broadcasting such as PBS, BBC, and CBC. Should a 21st century solution to the problem of social media be similar? Should there be a national YouTube or a national Facebook? Certainly, such platforms wouldn’t need to be based on an attention economy; they would not be designed to capture as much of their users’ attention as possible. Instead, they could be made available for all citizens to contribute for free if they wish without a subscription.

Such a platform would not only give the public greater control over how its algorithms operate, but it would also give users greater control over privacy settings. The platform could also be designed to strengthen democracy. Instead of having a corporation like Google determine the results of your video or news search, for instance, the public itself would have a greater say about what news and information is most relevant. It could also bolster democracy by ensuring that recommendation algorithms do not create echo-chambers; users could be exposed to a diversity of posts or videos that don’t necessarily reflect their own political views.

Of course, such a proposal carries problems as well. Cost might be significant; however, a service that replicates the positive social benefits without the “innovative” and expensive process of creating addictive algorithms may partially offset it. Also, depending on the nation, such a service could be subject to abuse. Just as there is a difference between public broadcasting and state-run media (where the government has editorial control), the service would lose its purpose if all content on the platform were controlled directly by the government. Something more independent would be required.

However, another significant minefield for such a project would be agreeing on community standards for content. Obviously, the point would be not to allow the platform to become a breeding ground for misinformation, and so clear standards would be necessary. But there would also have to be agreement that, in the greater democratic interest of breaking free from our echo-chambers, the public accepts that others may post, and they may see, content they consider offensive. We need to be exposed to views we don’t like. In a post-pandemic world, this is a larger public conversation that needs to happen, regardless of how we choose to regulate social media.

Acquitted but Not Forgotten: On the Ethics of Acquitted Conduct Sentencing

black and white photograph of shadow on sidewalk

On March 28th, 2022, the House of Representatives was addressed by Congressman Steve Cohen (TN-09), Chairman of the Judiciary Subcommittee on the Constitution, Civil Rights, and Civil Liberties. The topic of the address was the legality of a practice called acquitted conduct sentencing: the practice of a judge increasing the penalty for a crime based on facts about the defendant’s past — specifically, facts about crimes the defendant was charged with, but later acquitted of. Perhaps surprisingly, such a practice is not only legal, but relatively common. For example, in 2019 Erick Osby of Virginia was charged with seven counts of criminal activity related to the possession of illegal narcotics and firearms. He was acquitted of all but two of the charges, which should have resulted in a prison sentence of between 24 and 30 months. The district court trying him, however, estimated a range of 87 to 108 months due to the five other charges of which he was acquitted. Osby ended up receiving an 84-month (7-year) prison sentence.

In his remarks, Congressman Cohen, along with the co-author of his bill, Kelly Armstrong (Representative for North Dakota's at-large district), presented arguments to Congress for making acquitted conduct sentencing illegal. The reasoning is fairly straightforward: if someone is charged with a crime but later acquitted, that acquittal seems to say that they cannot legally be punished for that particular charge. But when sentences are expanded (in some cases tripled), fines are raised, or obligatory service is extended because of charges the defendant has been acquitted of, the acquittal certainly seems meaningless. Cohen's argument, then, is clear: if we acquit someone of a charge, they should be fully acquitted, meaning those charges should have no bearing on the sentence handed down to the defendant.

Still, the question of acquitted conduct sentencing is not quite as straightforward as that. Juries and judges must base their decisions on a host of factors, some of which concern facts about the defendant's character as well as predictions about how likely they are to reoffend. These are not easy decisions to make, and they are further complicated by the ambiguity of what counts as legally admissible evidence. Acquitting someone of a charge does not entail that no facts relevant to the original charge can be used in the trial. In many cases, it is difficult to say how such evidence should be treated. For example, a charge that ended in acquittal because police mishandled evidence may still be discussed during witness testimony. That testimony, and facts about the defendant's character and behavior, seem (at least in some cases) hard to ignore when considering fair and effective sentencing for other charges. Acquittal, after all, does not mean that the defendant did not commit the crime, only that they cannot legally be punished for it. This could be for a variety of reasons.

Of course, we know that there are many instances of people being charged with crimes of which they are innocent. Mistaken charges happen all the time. Judges and juries may be privy to the original charge and the later acquittal, but may not know the reason for the acquittal. Acquitted conduct sentencing thus leaves defendants in this position to suffer the consequences of someone else's error. Because the people making these legal decisions often have limited, or at least imperfect, access to the relevant information, allowing acquitted conduct sentencing guarantees that cases like this will, sometimes massively, increase defendants' sentences.

So, how should we think about the ethics of acquitted conduct sentencing? Purely consequentialist reasoning may lead us to conclude that we should look at the statistics: what percentage of acquittals are due to innocence, and what percentage are due to bureaucratic missteps? Perhaps the answer will tell us whether allowing or prohibiting acquitted conduct sentencing would generally maximize good outcomes. This would rest on the presumption that, if someone is genuinely guilty of the crime of which they are acquitted, then adjusting their sentence in light of the relevant facts of the acquitted charge is the best way to prevent future harm. But this presumption may, of course, itself be mistaken.

Instead, maybe a just outcome depends on more factors than simply maximizing happiness or minimizing harm. The idea of fairness as a desirable outcome of justice, for instance, is a popular one. We might think about the issue of acquitted conduct sentencing as a question of where the locus of justice lies: is a procedure of justice fair in virtue of the procedure itself, or is it fair just in case the outcomes of the procedure are generally fair? John Rawls, one of the most influential political philosophers of the modern era, argued that what he called perfect procedural justice has two characteristics: (1) there is an independent criterion for what constitutes a just outcome of the particular procedure, and (2) the procedure guarantees that this outcome will be achieved. Of course, such perfection is often unattainable in real life, and we might think that the best we can aim for is imperfect procedural justice, where criterion (1) is met but the procedure cannot guarantee that the just outcome will be achieved. Can our current sentencing procedure meet Rawls' first characteristic? Does it give us an idea of what counts as a just outcome of sentencing? The answer is unclear.

Further, we might question whether outcomes are relevant to justice at all. As a pluralistic society, we might expect wildly differing views about what counts as a fair outcome. But what counts as an impartial (if not fair) procedure is likely less controversial. For example, when healthcare resources are very scarce, some institutions use random-lottery (or weighted-lottery) decision procedures to determine who gets the resources and who does not. Even if the outcome seems "unfair" (because not everybody who needs the resource will receive it), it is hard to contest that everyone had a fair shot at the prize. Not everyone agrees that lotteries are just procedures, but they at least appear to be impartial. Perhaps this is enough to secure procedural justice? The view that the procedure alone, and not the outcome, determines the fairness of a process is what Rawls calls pure procedural justice.

Is the procedure of acquitted conduct sentencing fair? Perhaps an easier question: is it impartial? Likely not. After all, implicit (or explicit) bias can easily result in someone being charged with a crime they did not commit. Members of marginalized groups, then, run a much higher risk of having their sentences expanded on the basis of crimes they did not commit. The procedure is far from impartial, and so the likelihood that it could be part of a just procedural process appears low. While we certainly want judges to have as much relevant information as possible when handing down a sentence, perhaps we can agree with Congressman Cohen that acquitted conduct sentencing is not the way to accomplish this goal.

Reflections on Communal Annihilation or: How I Learned to Stop Worrying and Love the Bomb

photograph of overgrown buildings in Chernobyl exclusion zone

It appears that we are at the moment living under the greatest threat of nuclear war the world has seen in decades.

If you live in a city, or if you (like me) live next to a military base of strategic importance, there is a non-zero chance that you and your community will be annihilated by nuclear weapons in the near future.

I, at least, find this to be unsettling. For me, there’s also something strangely fascinating about the prospect of death by the Bomb. The countless books, movies, and video games that depict nuclear apocalypse – often in vaguely glamorous terms – suggest that I’m not alone.

If ever there was one, this is surely an appropriate time to reflect on the specter that haunts us. That’s what I’d like to do here. In particular, I’d like to ask: How should we think about the prospect of death by the Bomb? And why do we find it fascinating?

***

Let’s start with fascination.

Our fascination with the Bomb is no doubt partly rooted in the technology itself. It wasn’t too long ago that human beings warred with clubs and pointy bits of metal. The Bomb is an awful symbol of humanity’s precipitous technological advancement; to be threatened by it is an awful symbol of our folly.

Even more important, in my view, is that the Bomb has the power to transmute one’s own personal ending into a small part of a thoroughly communal event, the calamitous ending of a community’s life. In this way, the Bomb threatens us with a fascinating death.

To appreciate this point, consider that one of the peculiar things about the prospect of a quotidian death is that the world – my world – should carry on without me. I (you, we) spend my whole life carving out a unique place in a broad network of relations and enterprises. My place in my world is part of what makes me who I am, and I naturally view my world from the perspective of my place in it. Contemplating the prospect of my world going on without me produces an uncanny parallax. I see that I am but a small, inessential part of my world, a world which will not be permanently dimmer after the spotlight of my consciousness is extinguished.

This peculiarity sometimes strikes us as absurd, an indignity even. Wittgenstein, for example, expressed this when he said that “at death the world does not alter, but comes to an end.” It can also be a source of consolation and meaning. Many people are comforted by the belief that their loved ones will live on after they die, or, optimistically, that they have made a positive impact on others that will extend beyond the confines of their life. We are told that death can even be noble and good when one lays down one’s life for one’s friends.

The Bomb is different from run-of-the-mill existential threats in that it brings the prospect of a death that isn’t characterized by this peculiarity. If the Bomb were to strike my (your) city, my world would not carry on without me. Humanity and the Earth might survive. But my world – my family, friends, colleagues, students, acquaintances, the whole stage on which I strut and fret – would for the most part disappear along with me in one totalizing conflagration. At death, my world really would come to an end.

From an impartial point of view, this is clearly worse than my suffering a fatal heart attack or dying in some other quotidian way. But this is not the point of view that usually dominates when we think about the prospect of our own deaths. What sort of difference does it make from a self-interested point of view if our world dies with us?

A tempting thought is that the extent to which this would make a difference to a person is directly proportional to how much they care about others. If I don’t care about anyone besides myself, then I will be indifferent to whether my world dies with me. The more I care about others, the more I will care about who dies with me.

While I think there’s truth in this, it strikes me as an oversimplification. A thought experiment may help us along.

Imagine a people much like us except that they are naturally organized into more or less socially discrete cohorts with highly synchronized life cycles, like periodical cicadas. Every twenty years or so, a new cohort spontaneously springs up from the dust. The people in any given cohort mostly socialize with one another, befriending, talking, trading, fighting, and loving among themselves. They live out their lives together for some not entirely predictable period, somewhere between ten and one hundred years. Then they all die simultaneously.

It seems to me that death would have a recognizable but nevertheless rather distinct significance for periodical people. On the one hand, as it is for us, death would be bad for periodical people when it thwarts their desires, curtails their projects, and deprives them of good things. On the other hand, there would be no cause to worry about leaving dependents or grieving intimates behind. There would be little reason to fear missing out. Death might seem less absurd to them, but at the unfortunate expense of the powerful sources of consolation and meaning available to us.

Perhaps most importantly, that most decisive of personal misfortunes, individual annihilation, would invariably be associated with a much greater shared misfortune. In this way, death would be a profoundly communal event for periodical people. And this would reasonably make a difference in how a periodical person thinks about their own death. It’s not that the communality would necessarily make an individual’s death less bad. It’s more that assessments of the personal significance of events are generally affected by the broader contexts in which those events occur. When a personal misfortune is overshadowed by more terrible things, when it is shared – especially when it is shared universally among one’s fellows – that personal misfortune does not dominate one’s field of vision as it normally would. Perversely, this can make it seem more bearable.

When we contemplate the Bomb, we are in something like the position of periodical people. The usual other-related cares, the usual absurdity, the usual sources of consolation and meaning do not apply. The prospect of collective annihilation includes my death, of course. But weirdly, that detail fades into the background, as it seems almost insignificant in relation to the destruction of my world. This is a strange way of viewing the prospect of my own annihilation, one that produces a different sort of uncanny parallax. I think this is key to our fascination.

There may be something else, too. We live in a highly individualistic and competitive society where the bonds of community and fellow feeling have grown perilously thin. The philosopher Rick Roderick has suggested that in a situation like ours, there’s something attractive, even “utopian,” about the possibility that in its final hour our fragmented community might congeal into one absolutely communal cry. Of course, if this suggestion is even remotely plausible, it is doubly bleak, as it points not only to the prospect of our communal death but also to the decadence of our fragmented life.

***

I’ve tried to gesture at an explanation as to why the Bomb can be a source of fascination as well as trepidation. Along the way, some tentative insights have emerged, which relate to how we ought to think about this unique existential threat.

Then again, I recently had a conversation with my much wiser and more experienced nonagenarian grandparents, which makes me question whether I didn't start this circuitous path on the wrong foot. To my surprise, when I asked them about these things, my grandparents told me that during the Cold War they didn't really think about the Bomb at all. My grandfather, Don, gave me a pointed piece of advice:

“There’s not a darn thing we can do about it. You know, if it’s going to happen, you better go ahead and live your life.”

Perhaps, then, I (and you, reader, since you made it this far) have made a mistake. Perhaps the best thing to do is simply not to think about the Bomb at all.