
Due Attention: Addictive Tech, the Stunted Self, and Our Shrinking World

photograph of crowd in subway station

In his recent article, Aaron Schultz asks whether we have a right to attentional freedom. The consequences of a life lived with our heads buried in our phones – consequences not only for individuals but for society at large – are only becoming more and more visible. At least partly to blame are tech’s (intentionally) addictive qualities, and Schultz documents the way AI attempts to maximize our engagement by taking an internal X-ray of our preferences while we surf different platforms. Schultz’s concern is that as better and better mousetraps get built, we see more and more of our agency erode each day. Someday, we’ll come to see the importance of attentional freedom – freedom from being reduced to prey for these technological wonders. Hopefully, that occurs before it’s too late.

Attention is a crucial concept to consider when thinking about ourselves as moral beings. Simone Weil, for instance, claims that attention is what distinguishes us from animals: when we pay attention to our body, we aim at bringing consciousness to our actions and behaviors; when we pay attention to our mind, we strive to shut out intrusive thoughts. Attention is what allows us, from a theoretical perspective, to avoid errors, and from a moral, practical perspective, to avoid wrongdoing.

Technological media capture our attention in an almost involuntary manner. What often starts as a simple distraction – TikTok, Instagram, video games – may quickly lead to addiction, triggering compulsive behaviors with severe implications.

That’s why China, in 2019, imposed limits on gaming and social media use. Then, in 2021, in an attempt to further control and reduce the mental and physical health problems of its young population, stricter limits on online gaming during school days were enforced, and children’s and teenagers’ use was limited to one hour a day on weekends and holidays.

In Portugal, meanwhile, there is a crisis among children who, from a very young age, are being diagnosed with addictions to online gaming and gambling – addictions that compromise their living habits and routines, such as going to school, being with others, or taking a shower. In Brazil, a recent study found that 28% of adolescents show signs of hyperactivity and mental disorder stemming from tech use, to the point that they forget to eat or sleep.

The situation is no different in the U.S., where a significant part of the population uses social media and young people spend most of their time in front of a screen, developing mental conditions that inhibit social interaction. Between online gaming and social media use, we are witnessing a new kind of epidemic, one that attacks the very foundations of what it is to be human: the ability to relate to the world and to others.

The inevitable question is: should Western countries follow the Chinese example of controlling tech use? Should it be the government’s job to determine how many hours per day are acceptable for a child to remain in the online space?

For some, the prospect of Big Brother’s protection might look appealing. But let us remember Tocqueville’s warning about the despotism and tutelage inherent in this temptation – in making the State the steward of our interests. Not only is the strategy paternalistic, curbing one’s autonomy and the freedom to make one’s own choices, but it is also totalitarian in its predisposition, permitting the State to control one more sphere of our lives.

This may seem an exaggeration. Some may think that the situation’s urgency demands the strong hand of the State. However, while an unrestrained use of social media and online gaming may have severe implications for one’s constitution, we should recognize the problem for what it is. Our fears concerning technology and addiction are merely a symptom of another more profound problem: the difficulty one has in relating to others and finding one’s place in the world.

What authors like Hannah Arendt, Simone Weil, Tocqueville, and even Foucault teach us is that the construction of our moral personality requires laying roots in the world. Limiting online access will not, by itself, resolve the underlying problem. You may actually end up throwing children into an abyss of solitude and despair by exacerbating the difficulties they have in communicating. We must ask: how might we rescue the experience of association, of togetherness, of sharing physical spaces and projects?

Here is where we come back to the concept of attention. William James defined attention as the

taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness is of its essence. It implies withdrawal from some things in order to deal effectively with others. 

That is something that social media, despite capturing our (in)voluntary attention, cannot give us. So our withdrawal into social media must be countered with a positive proposal of attentive activity: (re)learning how to look, interpret, think, and reflect upon things and, most of all, how to listen and be with others. More than 20 years ago, Robert Putnam documented the loss of social capital in Bowling Alone. Simone Weil detailed our sense of “uprootedness” fifty years before that. Unfortunately, today we’re still looking for a cure that will have us trading in our screens for something we can actually do attentively together. Legislation alone is unlikely to fill that void.

From Conscience to Constitution: Should the Government Mandate Virtue?

photograph of cards, dice, chips, cigarettes, and booze

You have probably heard it said that you can’t legislate morality, that making laws that require people to do the right thing is both ineffective and authoritarian. Nevertheless, in his recent Atlantic article entitled “America Has Gone Too Far in Legalizing Vice,” Matthew Loftus encourages politicians to do just that. Loftus argues that, by legalizing sports betting and recreational marijuana, states are neglecting to consider the countless addicts that will result, and that lawmakers should do more to outlaw these harmful vices.

On Loftus’s view, public policy plays a role in the habits that we form, and creating an environment where more people succumb to their vices is neither good for addicts nor the political communities that will be left picking up the pieces. A substantial portion of gambling revenue comes from those who struggle with addiction, and legalizing marijuana is linked to higher rates of drug abuse. If these activities remained illegal, then fewer people would get hooked.

On this score, it seems that Loftus is obviously correct. Our environments play a significant role in the habits we adopt. If I am surrounded by responsible peers, I will be more likely to study for my next exam, while if many of my friends are cutting class, I will be more likely to skip out as well. These choices then form my habits. In the good case, my habits will be virtues like temperance, honesty, and diligence. In the bad case, my habits will lead me into all sorts of vice, including destructive addictions like gambling and drug use.

But even if it is true that our environments form our habits, the question still remains whether it’s the government’s place to guide us towards virtue instead of vice.

In a democracy founded on the rights to “life, liberty, and the pursuit of happiness,” it may be too heavy-handed for political leaders to require us, or even nudge us, to live a certain way.

This concern is amplified by the fact that many of the philosophers who have been the staunchest advocates of state-sanctioned virtue have not been very enthusiastic about democracy. According to Plato, a well-functioning political community should mirror the way that virtuous individuals conduct their lives, while for Aristotle, the purpose of government is to help citizens to live flourishing lives of virtue. But Plato also held that we should all be ruled by philosopher kings, a class of highly educated rulers, and that the freedoms granted within democracies would inevitably lead to anarchy. Likewise, Aristotle thought that monarchy and aristocracy are superior to democracy. An emphasis on character formation through the law might also lead to rejecting democracy as a promising form of government rather than embracing important constitutional freedoms.

These considerations reveal that there is some tension between allowing citizens the freedom to conduct their own lives and passing laws that promote virtue. Part of this tension arises because we often disagree about what is morally best, a fact that the political philosopher John Rawls called reasonable pluralism. Intelligent, well-intentioned citizens can find themselves at odds over many key moral questions.

Is gambling a harmless pastime or a serious moral vice? Is access to abortion a central human right, or the murder of an innocent human being? By enforcing policies that promote particular virtues, lawmakers may have to come down on one side or the other of these ongoing debates.

Furthermore, even in cases where we can agree on what is morally best, it is not clear that the law should prevent us from doing things that we know are to our detriment. Certainly the law should prevent us from interfering with how others choose to pursue happiness, but if we are only hurting ourselves, then why is that anyone’s business besides our own? Part of making room for the pursuit of happiness is allowing citizens to decide for themselves what they pursue, not limiting them to only a menu of government-approved options.

All of this, however, overlooks the fact that promoting certain virtues might be an unavoidable aim even for democratic governments. If it is true that political institutions should enable their citizens to freely pursue their vision of the good life, this goal cannot be accomplished by being completely hands off.

To form and pursue their understanding of the good, citizens need wisdom, discernment, courage, and perseverance, amongst other virtues. These virtues are necessary, not because the government wants to control our lives, but because without them we would be incapable of controlling our own lives.

We would instead be left to the dictates of momentary desires or, in the worst case scenario, crippling addictions from which we cannot recover.

This insight opens up a potential middle road between fully laissez-faire public squares and domineering, authoritarian governments. According to the philosopher Martha Nussbaum, political institutions should cultivate the capabilities necessary for their citizens to pursue self-directed lives. By promoting these capabilities, or virtues, governments ensure that their citizens are able to pursue their own unique visions of the good.

This approach allows that the law can encourage citizens in virtue in a way that creates and supports their ability to choose the life that they want to lead. On this model, the rule of law would not be completely value neutral, but it would make space for people to be able to choose many of their own values.

Forbidding certain kinds of vice, like preventing adults from gambling or using addictive substances, would for the most part be off the table. Unless the government wants to endorse a more robust picture of what a good life is like, the default position would be to let those who can choose their own informed goals pursue those ends. Recreational activities, like football or freediving, come with substantial dangers, but it is typically left up to individuals whether they want to take on those risks. In contrast, protecting those who are still forming the ability to choose their own life paths, like forbidding Juul from marketing to children, would be well within the purview of government officials.

Of course, just having laws that promote virtue does not ensure that anyone will become particularly moral. While they may succeed in outlawing vice, laws simply compel behavior, and those who begrudgingly comply out of fear of punishment would not for that reason become deeply good. The law, rather, would act as a guide for what kinds of values might be worth adopting, and citizens can then decide whether or not they want to choose these ideals for themselves. Policies like sin taxes, for instance, allow states to discourage vice without outright banning it.

Thus, even a view like Nussbaum’s leaves plenty of room for people to develop their own distinctive moral characters. Democracies can lay the groundwork for citizens to live meaningful and fulfilling lives, but at the end of the day, it is up to them to decide what values their lives will ultimately serve.

A Right To Attentional Freedom?

collage of various people on their phones

The White House recently posted a proposal for an AI Bill of Rights. In California, there is a bill that aims to hold social media companies accountable for getting young children addicted to their platforms. Several of these companies also face a federal lawsuit for emotionally and physically harming their users.

For those who use technology on a day-to-day basis, these developments are likely unsurprising. There is an intuition, backed by countless examples, that our technology harms us and that those who have created the technology are somehow responsible. Many of us find ourselves doomscrolling or stuck on YouTube for hours because of infinite scrolling.

Less settled is precisely how these technologies are bad for us and how exactly these companies wrong us.

The California bill and the lawsuit both argue that one notable form of harm can be understood through the lens of addiction. They argue that social media companies are harming a particularly vulnerable group, namely young adults and children, by producing an addictive product.

While this way of understanding the problem certainly has plausibility, one might favor other ways of explaining the problem. The way that we frame the moral relationship users have with technology will shape legal argumentation and future regulation. If our aim is to forge a morally sound relationship between users, technology, and producers, it is important to get the moral story right.

What makes social media addictive is that it has become especially adept at producing content that users want to engage with. Complex algorithms learn their users’ predilections and can accurately predict the kinds of things people want to see. AI’s ability to manipulate us so effectively highlights our failure to recognize the importance of attention – a valuable good that has gone underappreciated for far too long.

First, our attention is limited. We cannot attend to everything before us and so each moment of attention is accompanied with non-attention. If I am paying attention to a film, then I am not paying attention to the cars outside, or the rain falling, or the phone in my pocket.

Second, attention is susceptible to outside influence. If someone is talking loudly while a film plays, I may become distracted. I may want to watch the film closely, but the noise pulls my attention away.

Third, attention is related to many foundational moral rights. Take, for instance, freedom of thought. We might think that a society with no laws about what you are allowed to think, read, or say thereby guarantees freedom of thought. However, unless your attention is respected, freedom of thought cannot be secured.

We need only think of Kurt Vonnegut’s story “Harrison Bergeron” to show what this claim misses. In it, Harrison Bergeron lives in a society that goes to great lengths to ensure equality. In order to make sure everyone remains equal, those who are born with natural talents are given artificial burdens. For Harrison, who is exceptional both physically and mentally, one particularly clever tactic is used to ensure he does not think too much. Periodically, a loud, harsh sound is played through an earpiece. This makes it impossible for Harrison to focus.

The relevant point here is that even if no law exists that prohibits you from thinking whatever you please, reading what you want, or discussing what you wish, your freedom of thought can be indirectly overridden.

By utilizing the fact that your attention is limited and not fully voluntary, another party can prevent you from thinking freely. Thus, although our rights may be respected on paper, assaults on our attention may inhibit us from utilizing the capacities these rights are supposed to protect in practice.

When we interact with technology, we must give our attention over to it. Furthermore, much of the technology we interact with on a day-to-day basis is designed specifically to maintain and increase user engagement. As a result of these design choices, we have developed technology that is highly effective at capturing our attention.

As predictive technology improves, machines will also improve their ability to distract us. The result will be that more people spend more time using the technology (e.g., watching videos, reading news pieces, viewing content produced by other users). The more time people spend using this technology, the less they can spend attending to other things.

If our attention is limited, can be controlled from the outside, and is vital for utilizing other morally important capacities, it seems clear that it is something that should be treated with respect.

Consider how we tend to think that it is rude to distract someone while they are trying to concentrate. It rarely feels satisfying if the person causing the distraction simply replies “Just ignore me.” This response denies a crucial reality of the nature of attention, viz., it is often non-voluntary.

Furthermore, it would be even worse if the distracting person tried to mask their presence and distract someone secretly – and yet this is precisely what a great deal of our technology does. It exploits the non-voluntary nature of our attention, overrides attentional freedom, and does so in the most discreet way possible. Technology could instead be designed to respect our attentional freedom – for example, by periodically prompting the user to consider doing something else rather than endlessly presenting more content to engage with.

Rather than focusing on technology’s tendency to encourage addictive behavior in young people, I would like us to think about the effects technology has on all users’ attentional freedom.

Technology that is designed to distract you is harmful because it overrides your attentional freedom. When you use this technology, you are less free. This analysis must overcome at least two challenges, both centered around consent.

The first is that we consent to use these products. To argue that my phone wrongfully harms me because it is distracting seems like arguing that a book wrongfully harms me if it is so gripping that I cannot put it down.

However, while a book may be enticing, and may even be created in the hope that it captures attention, the book does not learn what captures attention. There is a difference between something capturing your attention because it is interesting and something that learns your preferences and sets about satisfying them. What makes AI-driven technology unique is its capacity to fine-tune the kinds of things it offers you in real time. It knows what you click on, what you watch, and how long you engage. It also relies on the involuntary part of attention to keep you engaged.

The second argument is about general human interaction. If it is wrong to affect someone’s attention, then daily interactions must be wrong. For instance, if someone walks down the street and asks me to take a flier for a show, do they wrong me by distracting me? Do all interactions require explicit consent lest they be moral violations? If our moral analysis of attention forces us to conclude that even something as trivial as a stranger saying hello to you constitutes a moral wrong because it momentarily distracts you, we will have either gone wrong somewhere along the way, or else produced a moral demand that is impossible to respect.

To answer this second objection, one thing we can say is this. When someone distracts you, they do not necessarily wrong you. Someone who tries to hand you a flier in the street effectively asks for your attention, and you have the opportunity to deny this request with fairly little effort. Notably, if the person who asks for your attention continues to pester you, and follows you down the road as you walk, their behavior no longer seems blameless and quickly turns into a form of harassment. When someone intentionally tries to override your attentional freedom, the moral problem emerges. Because attentional freedom is connected to a set of important freedoms (e.g., freedom of thought, freedom of choice, etc.), if one can override another’s attentional freedom, they can override other important freedoms indirectly.

If technology harms us because we become addicted to it, then we have reason to protect children from it. We may even have reason to provide more warnings for adults, like we do with addictive substances. However, if we stop our analysis at addiction, we miss something important about how this technology operates and how it harms us. When we see that technology harms us because it overrides our attentional freedom, we will need to do more than simply protect children and warn adults. Several new questions emerge: Can we design technology to preserve attentional freedom, and if so, what changes should we make to existing technology? How can we ensure that technology does not exploit the non-voluntary part of our attention? Are some technologies too effective at capturing our attention, such that they should not be on the market? Is there a right to attentional freedom?

No Fun and Games

image of retro "Level Up" screen

You may not have heard the term, but you’ve probably encountered gamification of one form or another several times today already.

‘Gamification’ refers to the process of embedding game-like elements into non-game contexts to increase motivation or make activities more interesting or gratifying. Game-like elements include attainable goals, rules dictating how the goal can be achieved, and feedback mechanisms that track progress.

For example, Duolingo is a program that gamifies the process of purposefully learning a language. Users are given language lessons and tested on their progress, just like students in a classroom. But these ordinary learning strategies are scaffolded by attainable goals, real-time feedback mechanisms (like points and progress bars), and rewards, making the experience of learning on Duolingo feel like a game. For instance, someone learning Spanish might be presented with the goal of identifying 10 consecutive clothing words, where their progress is tracked in real-time by a visible progress bar, and success is awarded with a colorful congratulation from a digital owl. Duolingo is motivating because it gives users concrete, achievable goals and allows users to track progress towards those goals in real time.

Gamification is not limited to learning programs. Thanks to advocates who tout the motivational power of games, increasingly large portions of our lives are becoming gamified, from online discourse to the workplace to dating.

As with most powerful tools, we should be mindful about how we allow gamification to infiltrate our lives. I will mention three potential downsides.

One issue is that particular gamification elements can function to directly undermine the original purpose of an activity. An example is the Snapstreak feature on Snapchat. Snapchat is a gamified application that enables users to share (often fun) photographs with friends. While gamification on Snapchat generally enhances the fun of the application, certain gamification elements, such as Snapstreaks, tend to do the opposite. Snapstreaks are visible records, accompanied by an emoji, of how many days in a row two users have exchanged photographs. Many users feel compelled to maintain Snapstreaks even when they don’t have any interesting content to share. To achieve this, users laboriously send meaningless content (e.g., a completely black photograph) to all those with whom they have existing Snapstreaks, day after day. The Snapstreak feature has, for users like this, transformed Snapchat into a chore. This benefits the company that owns Snapchat by increasing user engagement. But it undermines the fun.

Relatedly, sometimes an entire gamification structure threatens to erode the quality of an activity by changing the goals or values pursued in an activity. For example, some have argued that the gamification of discourse on Twitter undermines the quality of that discourse by altering people’s conversational aims. Healthy public discourse in a liberal society will include diverse interlocutors with diverse conversational aims such as pursuing truth, persuading others, and promoting empathy. This motivational diversity is good because it fosters diverse conversational approaches and content. (By analogy, think about the difference between, on the one hand, the conversation you might find at a party with people from many different backgrounds who have many different interests and, on the other hand, the one-dimensional conversation you might find at a party where everyone wants to talk about a job they all share). Yet Twitter and similar platforms turn discourse into something like a game, where the goal is to accumulate as many Likes, Followers, and Retweets as possible. As more people adopt this gamified aim as their primary conversational aim, the discursive community becomes increasingly motivationally homogeneous, and consequently the discourse becomes less dynamic. This is especially so given that getting Likes and so forth is a relatively simple conversational aim, which is often best achieved by making a contribution that immediately appeals to the lowest common denominator. Thus, gamifying discourse can reduce its quality. And more generally, gamification of an activity can undermine its value.

Third, some worry that gamification designed to improve our lives can sometimes actually inhibit our flourishing. Many gamification applications, such as Habitify and Nike Run Club, promise to help users develop new activities, habits, and skills. For example, Nike Run Club motivates users to become better runners. The application tracks users across various metrics such as distance and speed. Users can win virtual trophies, compete with other users, and so forth. These gamification mechanisms motivate users to develop new running habits. Plausibly, though, human flourishing is not just a matter of performing worthwhile activities. It also requires that one is motivated to perform those activities for the right sorts of reasons and that these activities are an expression of worthwhile character traits like perseverance. Applications like Nike Run Club invite users to think about worthwhile activities and good habits as a means of checking externally-imposed boxes. Yet intuitively this is a suboptimal motivation. Someone who wakes up before dawn to go on a run primarily because they reflectively endorse running as a worthwhile activity and have the willpower to act on their considered judgment is more closely approximating an ideal of human flourishing than someone who does the same thing primarily because they want to obtain a badge produced by the Nike marketing department. The underlying thought is that we should be intentional not just about what sort of life we want to live but also how we go about creating that life. The easiest way to develop an activity, habit, or skill is not always the best way if we want to live autonomously and excellently.

These are by no means the only worries about gamification, but they are sufficient to establish the point that gamification is not always and unequivocally good.

The upshot, I think, is that we should be thoughtful about when and how we allow our lives to be gamified in order to ensure that gamification serves rather than undermines our interests. When we encounter gamification, we might ask ourselves the following questions:

    1. Is getting caught up in these gamified aims consistent with the value or point of the relevant activity?
    2. Does getting caught up in this form of gamification change me in desirable or undesirable ways?

Let’s apply these questions to Tinder as a test case.

Tinder is a dating application that matches users who signal mutual interest in one another. Users create a profile that includes a picture and a short autobiographical blurb. Users are then presented with profiles of other users and have the option of either signaling interest (by swiping right) or lack thereof (by swiping left). Users who signal mutual interest have the opportunity to chat directly through the application.

Tinder invites users to think of the dating process as a game where the goals include evaluating others and accumulating as many matches (or right swipes) as possible. This is by design.

“We always saw Tinder, the interface, as a game,” Tinder’s co-founder, Sean Rad, said in a 2014 Time interview. “Nobody joins Tinder because they’re looking for something,” he explained. “They join because they want to have fun. It doesn’t even matter if you match because swiping is so fun.”

The tendency to think of dating as a game is not new (think about the term “scoring”). But Tinder changes the game, since its gamified goals can be achieved without meaningful human interaction. Does getting caught up in these aims undermine the activity of dating? Arguably it does, if the point of dating is to engage in meaningful human interaction of one kind or another. Does getting caught up in Tinder’s gamification change users in desirable or undesirable ways? Well, that depends on the user. But someone who is motivated to spend hours a day thumbing through superficial dating profiles is probably not in this respect approximating an ideal of human flourishing. Yet this is a tendency that Tinder encourages.

There is a real worry that when we ask the above questions (and others like them), we will discover that many gamification systems that appear to benefit us actually work against our interests. This is why it pays to be mindful about how gamification is applied.

Re-evaluating Addiction: The Immoral Moralizing of Alcoholics Anonymous

photograph of church doorway with chairs arranged

As of 2019, Alcoholics Anonymous boasts more than 2 million members across 150 countries, making it the most widely implemented form of addiction treatment worldwide. The 12-step program has become ubiquitous within medical science and popular culture alike, to the extent that most of us take its potency for granted. According to Eoin F. Cannon’s The Saloon and the Mission: Addiction, Conversion, and the Politics of Redemption in American Culture, A.A. has “spread its ideas and its phraseology as a natural language of recovery, rather than as a framework with an institutional history and a cultural genealogy . . . A.A.’s deep cultural penetration is most evident in the way the recovery story it fostered can convey intensely personal, experiential truth, largely free from the implications of persuasion or imitation that attached to its precursors.” And yet, medical science continues to debate the effectiveness of A.A., or whether it is effective at all. Critics have pointed out that the organization’s moral approach to suffering and redemption leaves much to be desired for many addicts.

It’s worth beginning with a basic overview of the social and historical context of A.A. The organization has its roots in the Oxford Group, a fellowship of Christian evangelical ministers who believed in the value of confession for alleviating the inherent sinfulness of humanity. Bill Wilson, who would go on to co-found Alcoholics Anonymous in 1935, was a member of this group, and based many of the founding principles of his organization on the teachings of the Oxford Group. A.A. was also rooted in a much broader historical moment. As Cannon explains, “A.A. embraced the disease concept of alcoholism in an era of rising medical authority and popular psychology. It formulated a spirituality that used the language of traditional Christian piety but was personal and pragmatic enough to sit comfortably with postwar prosperity.” Also crucial was “the evangelical energies and professional expertise of its early members, many of whom were experienced in marketing and public relations.” A.A.’s marketing was so effective at embedding the organization in popular culture that virtually all depictions of addiction and recovery have been colored by the 12-steps-approach, even into the 21st century.

Furthermore, the Great Depression was ending as the group achieved national prominence, and its philosophy was closely aligned with that of the New Deal. As Cannon explains, the pain of the economic crisis (which was characterized by contemporaries as a kind of drunken irresponsibility) was transformed into an opportunity for a moral makeover, a narrative pushed by FDR and the New Deal that A.A. either seized upon or unconsciously imitated. Cannon explains that “[the stories in which] recovering narrators described their experiences of decline and crisis drew on the same kind of social material that, writ large, defined the national problem: the bewildering failure of self-reliant individualism, as evidenced in job loss, privation, and family trauma. A.A. narrative, just like FDR’s New Deal story, interpreted this failure as a hard-earned lesson about the limits of self-interest.” In this sense, A.A. is hardly apolitical or ahistorical. It was forged by the political and economic currents of the early 20th century, and its ascendancy was hardly natural or inevitable.

The spiritual dimension of A.A. is impossible to ignore. The Oxford Group’s foundational influence is evident in the famous 12-step program: for example, steps 2 and 3 read,

“2. Came to believe that a Power greater than ourselves could restore us to sanity.

3. Made a decision to turn our will and our lives over to the care of God as we understood Him.”

The final step, “Having had a spiritual awakening as the result of these Steps, we tried to carry this message to alcoholics, and to practice these principles in all our affairs,” sounds like a call for religious conversion. Most would agree that medical treatment should be secular, so why is alcoholism an exception?

Furthermore, an emphasis on spirituality doesn’t necessarily make addiction treatment more effective. A 2007 study published in the Journal for the Scientific Study of Religion acknowledges that “Studies focusing on religiosity as a protective factor tend to show a weak to moderate relationship to substance use and dependence . . . Studies that have examined religiosity as a personal resource in treatment recovery also tend to report weak to moderate correlations with treatment.” However, the 2007 study takes issue with this data. The researchers argue that most previous studies rely “on the assumption that religiosity, although an outcome of socialization, is an internal attribute that functions as a resource to promote conventional behavior . . . An alternative model to this individualistic, psychological framework is a sociological model where religion is viewed as an attribute of a social group.” In other words, we focus too much on how religion functions for individuals instead of how religion functions in a social context.

Instead, this study uses the “moral community” hypothesis, first articulated by sociologists Stark and Bainbridge, as a framework for understanding addiction treatment. This theory argues that individual interactions with religion (how much importance you place on it or which specific beliefs you subscribe to) are not as important as your entrenchment in a religious community, which is the ultimate predictor of long-term commitment to treatment and of its effectiveness. The results of the 2007 study seem to support this idea; the data “revealed that an increase in church attendance and involvement in self-help groups were better predictors of posttreatment alcohol and drug use than the measure of individual religiosity.” The study found that “individuals with higher levels of religiosity tended to have higher levels of commitment” to A.A., but more broadly, “in some programs religiosity functioned as a positive resource whereas in other programs it served as a hindrance to recovery.” In other words, whether religion helps depends on the person and how easily they assimilate into their moral community. Perhaps those who have already incorporated organized religion into their lives will be better equipped for group participation in the context of addiction recovery. What all of this seems to suggest is that A.A. is only effective if you’re already receptive to its framework, which hardly makes it a cure-all for alcoholism. Non-Christians and atheists who drink are more or less left out in the cold.

In fact, there are very few studies that conclusively support A.A. as the best or only treatment plan for alcoholism. As writer Gabrielle Glaser pointed out in an article for The Atlantic, “Alcoholics Anonymous is famously difficult to study. By necessity, it keeps no records of who attends meetings; members come and go and are, of course, anonymous. No conclusive data exist on how well it works.” The few studies that have tested A.A.’s effectiveness tend to find less than impressive results. For example, psychiatrist Lance Dodes estimated in his 2015 book The Sober Truth: Debunking the Bad Science Behind 12-Step Programs and the Rehab Industry that the actual rate of success for the A.A. program is somewhere between 5 and 8 percent, based on retention rates and long-term commitment. As of 2017, there are 275 research centers devoted to studying alcohol addiction worldwide. The majority of research is conducted in multi-disciplinary research institutions, and nearly half of all research on alcoholism comes out of the U.S., which, given how prominent the A.A. approach is here, may skew which facets of addiction receive researchers’ attention.

Despite a dearth of proof, A.A. claims to have a 75 percent success rate. According to the movement’s urtext Alcoholics Anonymous: The Story of How Many Thousands of Men and Women Have Recovered from Alcoholism (affectionately referred to as “The Big Book” by seasoned A.A. members),

“Rarely have we seen a person fail who has thoroughly followed our path. Those who do not recover are people who cannot or will not completely give themselves to this simple program, usually men and women who are constitutionally incapable of being honest with themselves . . . They are not at fault; they seem to have been born that way.”

While alcoholism can have a genetic component, the idea that some people are simply doomed to be incurable because of the way they were born (or that any treatment plan for addiction can be “simple”) is deeply troubling. Reading this passage from the Big Book, one can’t help but notice a parallel to early 20th-century eugenicists like Walter Wasson, who argued in 1913 that alcoholics (whom he labeled “mental defectives”) should be “segregated and prevented from having children” so as not to pass down their condition and further pollute the gene pool. Eugenicists believed that alcoholism was incurable, and while A.A. ostensibly believes that it can be cured, it still holds that some are genetically destined to always drink. If its treatment plan doesn’t work for you, it’s simply your own fault, and you’ll never be able to get help at all.

Since its post-Depression inception, A.A. has relied on a moral framework that places blame on the individual rather than society at large. Alcoholism is understood as an innate failure of the individual, not a complex condition brought about by a number of economic, social, and genetic factors. As one former A.A. member explained,

“The AA programme makes absolutely no distinction between thoughts and feelings – a key factor in cognitive behavioural therapy, which is arguably a more up-to-date form of mental health technology. Instead, in AA, alcoholism is caused by ‘defects of character,’ which can only be taken away by surrender to a higher power. So, in many ways, it’s a movement based on emotional subjugation . . . anything you achieve in AA is through God’s will rather than your own. You have no control over your life, but the higher power does.”

Many individuals have found comfort and support in A.A., but it seems that the kind of moral community it offers is only accessible to those with a religious bent and predisposition to the treatment plan. For those who drink to escape crushing poverty, racial inequality, or the drudgery of capitalism, A.A. often offers pseudoscience instead of results, moralizing condemnation instead of medical treatment and genuine understanding.

Smoking Legislation and the E-Cigarette Epidemic

photograph of Juul pods with strawberries, raspberries, a peach, and a cocktail

At the end of 2019, President Trump signed legislation raising the federal minimum age for tobacco and nicotine purchase from 18 to 21. This move to raise the federal smoking age was made in response to the popularity of e-cigarettes among teen users and the resulting e-cigarette epidemic. To combat this public health crisis, attempts have also been made to ban flavored e-cigarettes. For e-cigarettes to stay on the market, vape companies will need to prove that their products do more good than harm. This proposed legislation applies to all e-cigarette companies, threatening the smaller vapor manufacturers as well as Juul Labs, which accounts for seventy-five percent of the nine-billion-dollar industry.

Juul pods, marketed at millennials and teen users, contain twice the amount of nicotine found in traditional freebase nicotine e-cigarettes. These products are especially addictive and are sold in a variety of fruity flavors, making them very appealing to children. It’s unsurprising, then, that America’s youth are hooked. In fact, one can hardly walk across a college campus or use the bathroom of a high school without seeing a Juul user “fiending.”

But Juul Labs isn’t just selling its products to children; children are its target demographic. Although e-cigarette executives publicly claim that nicotine vaporizing devices have always been about offering a safer smoking alternative to traditional combustible cigarettes, the company’s social media advertising tactics from its inception, as well as interviews with investors and employees, suggest that children have always been a main marketing target. The use of youthful brand ambassadors who fit the young demographic, and of advertisements featuring millennials at parties, demonstrates the company’s clear attempts to market the sleek e-cigarette device to young people. A study conducted by the University of Michigan two years ago emphasized the dramatic rise in e-cigarette use among high school students – a generation with historically low tobacco use – in just a single year. And many blame Juul Labs and its irresponsible marketing tactics for creating a generation of kids addicted to nicotine.

Even scarier than the addiction it causes are the health risks. Throughout the summer of 2019, thousands of teens were hospitalized and 39 e-cigarette-related deaths were reported to the CDC. Although the vaping illness was linked to vitamin E acetate, an ingredient in illicit THC vape cartridges, since the outbreak legislators have had the full support of concerned parents across the nation in curbing teen vaping.

Another issue with Juul Labs is its association with Big Tobacco. While it may seem as though e-cigarette companies are the tobacco industry’s biggest competitors, for the most part the tobacco industry and the vaping industry are becoming more and more intertwined. Altria, maker of Marlboro cigarettes, recently bought a 35% stake in Juul Labs for $12.8 billion, and the e-cigarette company’s CEO was replaced by K.C. Crosthwaite, an Altria executive. These changes left employees concerned and angered by the new relationship with Altria. How can a company whose mission is to provide a safer smoking alternative to combustible cigarettes be associated so closely with Big Tobacco?

While the vaping epidemic presents real dangers, and the specific targeting of kids seems objectionable, many wonder whether the FDA should regulate e-cigarettes quite so heavily. The regulation of the vaping industry is a case of paternalism, where one’s choices are interfered with in order to promote one’s well-being and long-term interests. Some are concerned that raising the federal minimum smoking age is an overextension of the government’s authority, especially considering there is a lack of evidence that nicotine e-cigarettes cause significant health issues. Similarly, because there are fewer immediate consequences of teen nicotine use (compared to teen alcohol use, for example), such regulations may appear overcautious. There are more practical concerns at play as well; if vape products are banned, teens may be pushed to use combustible cigarettes or illicit vaping products that have been linked to respiratory disease. Although some are concerned about the restriction of personal choice, others view such laws as similar to mandatory seatbelt and compulsory child education laws.

Issues of classism and racism are as rooted in the e-cigarette industry as they were in the tobacco industry. Because a large amount of stigma surrounds combustible cigarettes in the United States, smoking cigarettes is especially frowned upon by the middle class, and the habit is associated with those of a lower socioeconomic class, according to British economist Roger Bate. Middle-class adult vapers are conditioned to feel ashamed for smoking traditional combustible cigarettes. Similarly, many feel wronged that e-cigarettes are being regulated so heavily when menthol cigarettes, which are claimed to be more addictive and are most commonly used by African Americans, remain on the market. Tobacco companies’ racially targeted marketing of addictive menthol cigarettes is eerily similar to Juul’s early advertising blitzes. However, it seems that it is only when “young white people [are affected], then action is taken really quickly,” according to LaTroya Hester, spokeswoman for the National African American Tobacco Prevention Network.

Ultimately, any form of governmental intervention will cause debate about which personal liberties warrant being curbed, what our “best interests” are, and who is best positioned to know what those interests actually are. Juul and other e-cigarette companies might be blameworthy, but for many it’s not clear that the government should go to such great lengths to save us from ourselves.

When Moral Arguments Don’t Work

photograph of machines at a coal mine at dawn

In 2019, the global issue of the climate emergency has taken center stage, with the School Strike for Climate movement, led by Swedish teen activist Greta Thunberg, mobilizing 4 million people on September 20 to strike in the largest climate demonstration in human history.

It is of course well understood, by scientists and by much of the public, that burning fossil fuels is releasing carbon dioxide (and other greenhouse gases) into the atmosphere and causing the world to warm. Since pre-industrial times the world’s climate has warmed by an average of 1°C, and on the current trend of greenhouse gas emissions, it will warm by more than 3°C by the end of the century.

Though it is becoming harder for climate change deniers to evade the existential implications of inaction, globally, governments are still prevaricating and fossil fuel companies are doubling down.

The climate and ecological emergency is clearly a grave moral issue, yet for many of those with the power to act, moral imperatives to do so are either unrecognized or unheeded. This raises the question of whether moral arguments on this issue have lost their power to move those in whose hands action lies.

Right now Australia is a case in point; the country is in the grip of an unprecedented bushfire emergency.

More than 3 million hectares have burned in the state of New South Wales (NSW) alone already this bushfire season; in excess of 20 percent of the national park area of the Blue Mountains, adjacent to Sydney, has been razed, and a ‘mega-fire’ that emergency crews say cannot be extinguished continues to rage. Sydney, the state capital and Australia’s largest city, has been blanketed in toxic bushfire smoke for several weeks, and the city’s already low water supply is in danger of being poisoned by toxic ash. Out-of-control fires are burning as well in the states of Queensland and Western Australia.

This horrifying start to the summer has sparked a national conversation about the reality of global heating for an already drought and bushfire prone country, and also about the exponential costs of government inaction. It has elicited pleas from large sections of the public, and from professional organizations representing front-line health and emergency services, for the government to own up to its moral responsibility – all of which appear to be falling on profoundly deaf ears.

Back in 2007 then Labor Prime Minister Kevin Rudd stated: “Climate change is the great moral challenge of our generation.”

Yet, here we are in 2019 with a Liberal-National government that is determined to continue subsidizing the coal industry and whose refusal to countenance climate action is scuppering hopes of an effective international agreement.

Just last week, as the country burned, the latest round of climate talks in Madrid ended in stalemate, and Australia was accused of cheating (by claiming ‘carryover credits’ to meet its Paris target) and of frustrating global efforts to secure meaningful action.

The moral argument for climate action is not registering at a political level here, and it is impossible to miss the fact that this failure tracks the government’s support for Australia’s coal industry.

This week 22 medical groups have called on the Australian government to phase out fossil fuels and close down the coal industry, due to what they are calling a major public health crisis. Dr. Kate Charlesworth, a fellow of the Royal Australasian College of Physicians, said: “To protect health, we need to shift rapidly away from fossil fuels and towards cleaner, healthier and safer forms of energy.”

At the same time the Emergency Leaders for Climate Action are calling for a national summit to address climate change, and are criticizing the government for its failure to address the climate emergency.

Yet Michael McCormack, Deputy Prime Minister and currently Acting Prime Minister, told a press conference held in the incident control centre for a state-wide bushfire emergency that “… We need more coal exports.”

Given that moral imperatives are traditionally thought to be some of the strongest motivations we have for action, why aren’t the moral arguments cutting through?

The obvious, though depressing, answer is that the rapacious demands of neoliberal capitalism have managed to drown out the principled stance of moral analysis.

There is a plethora of literature available on the relationship of capitalism, neoliberalism, overconsumption and climate change. One need read no further, for example, than Naomi Klein’s 2014 book ‘This Changes Everything’ to understand the mechanisms by which neoliberal capitalism has caused the climate crisis and has systematically frustrated efforts to combat it.

If climate change is the great moral challenge of our generation, it is rapidly becoming its great moral failure. But since moral language is not working it is perhaps time, for pragmatic reasons, to deploy another set of concepts.

One suggestion is to recalibrate our analysis from the ethical to the clinical by thinking of the problem as one of addiction. Caution is obviously needed here – we do not want to make the mistake of assuming diminished responsibility. The point is, rather, that the concept of addiction allows the compulsive, subconscious elements to be taken into account as part of our understanding of the degree of difficulty we face in solving this problem.

Addiction is a psychological and physical inability to stop consuming (a chemical, drug, activity, or substance), even though it is causing psychological and physical harm. A person with an addiction uses a substance, or engages in a behavior, for which the rewarding effects provide a compelling incentive to repeat the activity, despite detrimental consequences. Traditionally, at least in western thought, ethics is a rational activity but we seem to be facing a situation where the rational is struggling to break through the dark and self-destructive compulsions of the addiction.

The coal industry is killing us, and the degree of difficulty of interrupting deeply entrenched patterns of addiction reflects the degree of difficulty of interrupting the Australian government’s commitment to the coal industry. Of course the issue is vastly larger than just the coal industry as such, but the Australian government’s relationship to coal is emblematic of the entrenched patterns of consumption to which all of us in rich countries are similarly addicted.

As we try to free ourselves from the grip of what is now threatening our very existence, moral arguments may be less effective than existential ones, and thinking in clinical terms may possibly arm us with the practical understanding we need to appreciate the difficulty of the kind of work that has, now, to be done.

States’ Rights, Sports, and the Harm of Gambling

Image of gamblers in a sports betting hall.

The Supreme Court has struck down a 1992 federal law that prohibited states from authorizing sports gambling. This week, Justice Alito provided his reasoning in favor of protecting states’ rights, wanting to keep the federal government from interfering with state legislatures making their own rulings on wagering on professional and amateur sports – which is already legal in Nevada. Many states anticipated the Supreme Court ruling and have been mobilizing to profit from their newfound avenue for revenue. Citizens will be able to start wagering on sports in New Jersey, for instance, in the next two weeks or so.


Addiction, Free Will, and St. Anselm

A photo of pills spilling out of a bottle.

By now, all of us have been touched in some way by the opioid epidemic in the United States. While there is ample medical science, social science, and political science to explain the phenomenon of addiction, our anecdotal personal experience shapes our ethical judgments. Is addiction a choice or a disease? If it is a disease, is it acquired because of voluntary behavior, or caused by biological or societal factors? Can addicts simply stop using drugs?


Drug Addiction: Criminal Behavior or Public Health Crisis?

It is painfully obvious that the United States is in the midst of an epidemic of opioid abuse. According to the US Department of Health and Human Services (DHHS), more people died from drug overdoses in 2014 than any other recorded year, and the majority of those overdose deaths involved opioids. DHHS and the Centers for Disease Control (CDC) claim that an increase in the prescription of pain medication is a primary driver of the opioid epidemic. According to the CDC, the amount of prescription opioids sold in the US has nearly quadrupled since 1999. However, Americans do not report higher levels of pain than they did in 1999.


On Providing Safe Spaces for Drug Use

Under new legislation in Maryland, spaces will be provided for illegal narcotics to be ingested in clean facilities under the supervision of medical professionals. There are nearly 100 such facilities worldwide, largely in Europe, where they have existed since the early 1980s. In the United States, where rates of accidental death from opioid overdose have “quadrupled since the late 1990s,” these facilities are still largely a controversial possibility.
