
Due Attention: Addictive Tech, the Stunted Self, and Our Shrinking World

photograph of crowd in subway station

In his recent article, Aaron Schultz asks whether we have a right to attentional freedom. The consequences of a life lived with our heads buried in our phones – consequences not only for individuals but for society at large – are becoming ever more visible. At least partly to blame are tech’s (intentionally) addictive qualities, and Schultz documents the way AI attempts to maximize our engagement by taking an internal X-ray of our preferences as we surf different platforms. Schultz’s concern is that as better and better mousetraps get built, more of our agency erodes each day. Someday we’ll come to see the importance of attentional freedom – freedom from being reduced to prey for these technological wonders. Hopefully that day comes before it’s too late.

Attention is a crucial concept to consider when thinking about ourselves as moral beings. Simone Weil, for instance, claims that attention is what distinguishes us from animals: when we pay attention to our body, we aim at bringing consciousness to our actions and behaviors; when we pay attention to our mind, we strive to shut out intrusive thoughts. Attention is what allows us, from a theoretical perspective, to avoid errors, and from a moral, practical perspective, to avoid wrongdoing.

Technological media capture our attention in an almost involuntary manner. What often starts as a simple distraction – TikTok, Instagram, video games – can quickly lead to addiction, triggering compulsive behaviors with severe implications.

That’s why China, in 2019, imposed limits on online gaming and social media use. Then, in 2021, in an attempt to further curb the mental and physical health problems of its young population, it enforced stricter limits on online gaming during school days and restricted children’s and teenagers’ play to one hour a day on weekends and holidays.

In Portugal, meanwhile, there is a crisis among children who, from a very young age, are being diagnosed with addiction to online gaming and gambling – an addiction that compromises basic habits and routines such as going to school, being with others, or taking a shower. In Brazil, a recent study found that 28% of adolescents show signs of hyperactivity and mental disorder linked to tech use, to the point that they forget to eat or sleep.

The situation is no different in the U.S., where a significant part of the population uses social media and young people spend most of their time in front of a screen, developing a series of mental conditions that inhibit social interaction. Between online gaming and social media use, we are witnessing a new kind of epidemic, one that attacks the very foundations of what it is to be human: the capacity to relate to the world and to others.

The inevitable question is: should Western countries follow the Chinese example of controlling tech use? Should it be the government’s job to determine how many hours per day are acceptable for a child to remain in the online space?

For some, the prospect of Big Brother’s protection might look appealing. But let us remember Tocqueville’s warning about the despotism and tutelage inherent in this temptation – of making the State the steward of our interests. Not only is the strategy paternalistic, curbing one’s autonomy and the freedom to make one’s own choices, but it is also totalitarian in its predisposition, granting the State control over one more sphere of our lives.

This may seem an exaggeration. Some may think that the urgency of the situation demands the strong hand of the State. However, while unrestrained use of social media and online gaming may have severe consequences for one’s well-being, we should recognize the problem for what it is. Our fears concerning technology and addiction are merely a symptom of a more profound problem: the difficulty one has in relating to others and finding one’s place in the world.

What authors like Hannah Arendt, Simone Weil, Tocqueville, and even Foucault teach us is that the construction of our moral personality requires laying roots in the world. Limiting online access will not, by itself, resolve the underlying problem. It may actually end up throwing children into an abyss of solitude and despair by exacerbating the difficulties they have in communicating. We must ask: how might we rescue the experience of association, of togetherness, of sharing physical spaces and projects?

Here is where we return to the concept of attention. William James famously said that attention is the

taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others.

That is something that social media, despite capturing our (in)voluntary attention, cannot give us. So our withdrawal into social media must be countered with a positive proposal of attentive activity: to (re)learn how to look, interpret, think, and reflect upon things, and most of all to (re)learn how to listen and be with others. More than 20 years ago, Robert Putnam documented the loss of social capital in Bowling Alone. Simone Weil detailed our sense of “uprootedness” fifty years before that. Unfortunately, today we’re still looking for a cure that would have us trading in our screens for something that we can actually do attentively together. Legislation alone is unlikely to fill that void.

A Game Worth Dying For?

image of "Game Over" screen displayed on monitor

There’s a game mechanic called permadeath. The idea behind it is simple: if your character – whether in a video game, board game, tabletop RPG, or any other medium – dies, they stay dead. So instead of the standard gaming fare of extra lives or revival at a save point, in games with permadeath you lose all your equipment, items, coins, etc., and are considered entirely dead. Some of the most famous games that use this feature include The Long Dark, XCOM: Enemy Unknown, and DayZ.

The purpose of permadeath is relatively simple. It drives up the tension by driving up the stakes.

If you know your character comes back to life when they’re killed, then there’s little risk. The time you invest in a game is safe because it won’t be lost when you get hit by a fireball or trip into a bottomless pit. You can simply dust yourself off and try again.

But, if you’re at risk of losing that progress, the time, effort, and emotions you’ve put into a game become far more precious. Knowing that one wrong move means all that progress gets thrown into the bin means that every step, every look around the corner, and every opening of a mysterious box has tension. Knowing that in-game death means starting over again after spending days reaching a game’s final stages means your investment skyrockets.

However, a game’s stakes are rarely anything more valuable than time. Sure, losing all your progress can be frustrating when a ghoul kills your character in the game’s final moments, but you’re still able to get up and walk away.

While your character may face oblivion, you, as the player, don’t. You may think you’ve wasted your time, but ultimately, that’s all that would have been wasted (and if you had fun, is it really wasted?).

But in early November 2022, Palmer Luckey, the founder of the VR firm Oculus, claimed to have designed a headset that carries permadeath out of the game and into reality – a headset that kills you if you die in-game.

The headset is fitted with three explosive charges. Luckey wired these to detect a certain shade of red flashing at a specific frequency. So, when your character dies in-game and the VR headset displays that shade of red, the explosives detonate and the player’s brain is destroyed. The system is still in its developmental stages, with the headset currently serving as a piece of office art. However, Luckey has stated that he wants to explore its potential further, eventually ensuring that it is tamperproof and cannot be removed by external parties. In effect, he wants to prevent someone from helping the player remove the headset if they change their mind after starting the game. A game that works with the headset would also need to be created – specifically, one that avoids using the triggering shade and frequency of red before the character, and consequently the player, meets their end.

The prospect of someone using such a headset raises numerous questions. These include whether someone could genuinely consent to use the headset and whether Luckey would be a murderer if/when someone died while using it to play a game.

We may return to these in another article. For now, however, I want to focus on why Palmer Luckey created this maniacal contraption.

Luckey says he got the idea from the manga and anime series Sword Art Online. It features a VR headset called the NerveGear, which allows total immersion in a virtual world. The headset is released alongside the titular Sword Art Online game. Ten thousand players sign in when the game launches but soon discover they cannot sign out and are trapped within the game. The game’s designer then appears to the players and tells them they must beat all 100 floors of the game’s monster-infested mega-castle if they want to escape. At this point, he also reveals that death in the game results in death in real life. The idea of an immersive virtual world captured Luckey’s imagination, as he writes in his blog:

The idea of tying your real life to your virtual avatar has always fascinated me – you instantly raise the stakes to the maximum level and force people to fundamentally rethink how they interact with the virtual world and the players inside it. Pumped up graphics might make a game look more real, but only the threat of serious consequences can make a game feel real to you and every other person in the game. This is an area of videogame mechanics that has never been explored, despite the long history of real-world sports revolving around similar stakes.

At first, this prospect might strike many as patently absurd. It seems that few, if any, would sign up to play a game that could result in death. Games are usually a form of escapism from real life’s woes, and a game that includes as a mechanic one of life’s (arguably) most significant downsides – mortality – seems to run entirely counter to this goal.

But, with some consideration, Luckey’s perspective on risk’s relationship with gaming seems to hold at least some value, specifically concerning gaming’s attempts at raising the stakes. Games have little material value in and of themselves – it’s what makes them games. This is one of the reasons gaming, in its various forms (including sports), is closely tied to gambling.

Gambling raises the stakes of what is happening in the game and gives it real-world value and impact. For example, you’re much more likely to care about who wins a round of Super Smash Bros if you have money riding on the outcome.

The in-game risk is given real-world form, and the greater the value bet, the greater one’s emotional and cognitive investment is; you care more when there’s more on the line. When it comes to putting things on the line, there’s nothing more valuable than your life.

Also, while it might seem madness to design a game that kills the player if they fail to perform, countless people already undertake recreational activities that involve the prospect of death if mistakes are made. Skydiving is an obvious one.

Plummeting out of a plane and reaching terminal velocity, with only a couple of layers of fabric preventing you from dying upon impact with the earth, is a risk most of us don’t have to take. But the prospect of death in this context doesn’t have people up in arms demanding that skydiving be stopped.

On the contrary, the activity’s value is, in some measure, derived from the inseparable risk of immeasurable harm. It’s arguably what makes diving out of a plane different from indoor skydiving; despite all the measures put in place, you’re aware that death is a potential outcome.

So, provided safeguards are put in place to prevent system errors, and the games offer players a beyond excellent chance of survival, is it such an obscene prospect?

Having said that, if offered the chance to play an almost unlosable game on Luckey’s murderous headset, you can be sure I’d say no.

No Fun and Games

image of retro "Level Up" screen

You may not have heard the term, but you’ve probably encountered gamification of one form or another several times today already.

‘Gamification’ refers to the process of embedding game-like elements into non-game contexts to increase motivation or make activities more interesting or gratifying. Game-like elements include attainable goals, rules dictating how the goal can be achieved, and feedback mechanisms that track progress.

For example, Duolingo is a program that gamifies the process of purposefully learning a language. Users are given language lessons and tested on their progress, just like students in a classroom. But these ordinary learning strategies are scaffolded by attainable goals, real-time feedback mechanisms (like points and progress bars), and rewards, making the experience of learning on Duolingo feel like a game. For instance, someone learning Spanish might be presented with the goal of identifying 10 consecutive clothing words, where their progress is tracked in real time by a visible progress bar, and success is rewarded with a colorful congratulation from a digital owl. Duolingo is motivating because it gives users concrete, achievable goals and allows users to track progress towards those goals in real time.
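
To see that structure plainly, here is a minimal sketch of the three elements at work – an attainable goal, real-time feedback, and a reward. The Python below is a toy illustration with invented names, not Duolingo’s actual implementation:

    # Toy model of a gamified goal: an attainable target, real-time feedback,
    # and a reward on completion. Purely illustrative; all names are invented.
    class GamifiedGoal:
        def __init__(self, description, target, reward):
            self.description = description  # the attainable goal
            self.target = target            # the rule: what counts as "done"
            self.progress = 0               # feedback: tracked in real time
            self.reward = reward            # the gratifying payoff

        def record_success(self):
            """Advance progress and print immediate feedback."""
            self.progress = min(self.progress + 1, self.target)
            filled = round(20 * self.progress / self.target)
            print(f"[{'#' * filled}{'-' * (20 - filled)}] "
                  f"{self.progress}/{self.target}")
            if self.progress == self.target:
                print(f"Goal complete! Reward: {self.reward}")

    goal = GamifiedGoal("Identify 10 clothing words in a row", target=10,
                        reward="a colorful congratulation from a digital owl")
    for _ in range(10):
        goal.record_success()

Nothing in this loop teaches Spanish, of course; the point is how little machinery it takes to make repetition feel like progress.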

Gamification is not limited to learning programs. Thanks to advocates who tout the motivational power of games, increasingly large portions of our lives are becoming gamified, from online discourse to the workplace to dating.

As with most powerful tools, we should be mindful about how we allow gamification to infiltrate our lives. I will mention three potential downsides.

One issue is that particular gamification elements can function to directly undermine the original purpose of an activity. An example is the Snapstreak feature on Snapchat. Snapchat is a gamified application that enables users to share (often fun) photographs with friends. While gamification on Snapchat generally enhances the fun of the application, certain gamification elements, such as Snapstreaks, tend to do the opposite. Snapstreaks are visible records, accompanied by an emoji, of how many days in a row two users have exchanged photographs. Many users feel compelled to maintain Snapstreaks even when they don’t have any interesting content to share. To achieve this, users laboriously send meaningless content (e.g., a completely black photograph) to all those with whom they have existing Snapstreaks, day after day. The Snapstreak feature has, for users like this, transformed Snapchat into a chore. This benefits the company that owns Snapchat by increasing user engagement. But it undermines the fun.
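
The streak mechanic itself takes only a few lines to describe. Assuming a simple day-granularity rule – the streak grows by one for each consecutive day on which both users send something, and otherwise breaks – the logic might look like the following hypothetical reconstruction (not Snapchat’s actual code):

    # Hypothetical streak logic: the streak continues only if both users sent
    # content today and the last exchange happened yesterday.
    from datetime import date, timedelta

    def update_streak(streak, last_exchange, today, a_sent, b_sent):
        if not (a_sent and b_sent):
            return 0                      # one side went silent: streak broken
        if today == last_exchange + timedelta(days=1):
            return streak + 1             # consecutive day: streak grows
        return 1                          # exchange after a gap: start over

    # A completely black photograph still counts as "sent" -- which is exactly
    # why the feature invites meaningless daily exchanges.
    print(update_streak(42, date(2023, 3, 1), date(2023, 3, 2), True, True))  # 43

Notice that the rule cares only that something was sent, not that it was worth sending; the chore-like behavior described above falls straight out of the mechanic.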

Relatedly, sometimes an entire gamification structure threatens to erode the quality of an activity by changing the goals or values pursued in an activity. For example, some have argued that the gamification of discourse on Twitter undermines the quality of that discourse by altering people’s conversational aims. Healthy public discourse in a liberal society will include diverse interlocutors with diverse conversational aims such as pursuing truth, persuading others, and promoting empathy. This motivational diversity is good because it fosters diverse conversational approaches and content. (By analogy, think about the difference between, on the one hand, the conversation you might find at a party with people from many different backgrounds who have many different interests and, on the other hand, the one-dimensional conversation you might find at a party where everyone wants to talk about a job they all share). Yet Twitter and similar platforms turn discourse into something like a game, where the goal is to accumulate as many Likes, Followers, and Retweets as possible. As more people adopt this gamified aim as their primary conversational aim, the discursive community becomes increasingly motivationally homogeneous, and consequently the discourse becomes less dynamic. This is especially so given that getting Likes and so forth is a relatively simple conversational aim, which is often best achieved by making a contribution that immediately appeals to the lowest common denominator. Thus, gamifying discourse can reduce its quality. And more generally, gamification of an activity can undermine its value.

Third, some worry that gamification designed to improve our lives can sometimes actually inhibit our flourishing. Many gamification applications, such as Habitify and Nike Run Club, promise to help users develop new activities, habits, and skills. For example, Nike Run Club motivates users to become better runners. The application tracks users across various metrics such as distance and speed. Users can win virtual trophies, compete with other users, and so forth. These gamification mechanisms motivate users to develop new running habits. Plausibly, though, human flourishing is not just a matter of performing worthwhile activities. It also requires that one is motivated to perform those activities for the right sorts of reasons and that these activities are an expression of worthwhile character traits like perseverance. Applications like Nike Run Club invite users to think about worthwhile activities and good habits as a means of checking externally-imposed boxes. Yet intuitively this is a suboptimal motivation. Someone who wakes up before dawn to go on a run primarily because they reflectively endorse running as a worthwhile activity and have the willpower to act on their considered judgment is more closely approximating an ideal of human flourishing than someone who does the same thing primarily because they want to obtain a badge produced by the Nike marketing department. The underlying thought is that we should be intentional not just about what sort of life we want to live but also how we go about creating that life. The easiest way to develop an activity, habit, or skill is not always the best way if we want to live autonomously and excellently.

These are by no means the only worries about gamification, but they are sufficient to establish the point that gamification is not always and unequivocally good.

The upshot, I think, is that we should be thoughtful about when and how we allow our lives to be gamified in order to ensure that gamification serves rather than undermines our interests. When we encounter gamification, we might ask ourselves the following questions:

    1. Is getting caught up in these gamified aims consistent with the value or point of the relevant activity?
    2. Does getting caught up in this form of gamification change me in desirable or undesirable ways?

Let’s apply these questions to Tinder as a test case.

Tinder is a dating application that matches users who signal mutual interest in one another. Users create a profile that includes a picture and a short autobiographical blurb. Users are then presented with profiles of other users and have the option of either signaling interest (by swiping right) or lack thereof (by swiping left). Users who signal mutual interest have the opportunity to chat directly through the application.
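
The matching rule this describes is minimal: a match exists exactly when two users have each swiped right on the other. Here is a sketch of that logic, with invented names and data structures (not Tinder’s actual code):

    # Mutual-interest matching: record each right swipe, and report a match
    # as soon as the interest becomes mutual. Purely illustrative.
    right_swipes = set()  # (swiper, swipee) pairs

    def swipe_right(swiper, swipee):
        """Record interest; return True if this completes a match."""
        right_swipes.add((swiper, swipee))
        return (swipee, swiper) in right_swipes

    swipe_right("alice", "bob")         # False: interest is one-sided so far
    print(swipe_right("bob", "alice"))  # True: mutual interest unlocks chat

The asymmetry matters for what follows: each swipe is nearly costless, so the gamified goal – accumulating matches – can be pursued at volume, with no human interaction required.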

Tinder invites users to think of the dating process as a game where the goals include evaluating others and accumulating as many matches (or right swipes) as possible. This is by design.

“We always saw Tinder, the interface, as a game,” Tinder’s co-founder Sean Rad said in a 2014 Time interview. “Nobody joins Tinder because they’re looking for something,” he explained. “They join because they want to have fun. It doesn’t even matter if you match because swiping is so fun.”

The tendency to think of dating as a game is not new (think about the term “scoring”). But Tinder changes the game since Tinder’s gamified goals can be achieved without meaningful human interaction. Does getting caught up in these aims undermine the activity of dating? Arguably it does, if the point of dating is to engage in meaningful human interaction of one kind or another. Does getting caught up in Tinder’s gamification change users in desirable or undesirable ways? Well, that depends on the user. But someone who is motivated to spend hours a day thumbing through superficial dating profiles is probably not in this respect approximating an ideal of human flourishing. Yet this is a tendency that Tinder encourages.

There is a real worry that when we ask the above questions (and others like them), we will discover that many gamification systems that appear to benefit us actually work against our interests. This is why it pays to be mindful about how gamification is applied.

Sexual Violence in the Metaverse: Are We Really “There”?

photograph of woman using VR headset

Sexual harassment can take many forms, whether in an office or on social media. However, there might seem to be a barrier separating “us” as the user of a social media account from “us” as an avatar or visual representation in a game, since the latter is “virtual” whereas “we” are “real.” Even though we can suffer psychological and social damage through our virtual representations, it seems that we cannot – at least directly – be harmed physically. A mean comment may hurt my feelings and change my mood – I might even get physically ill – but no direct physical damage seemed possible. Until now.

Recently, a beta tester of Horizon Worlds – Meta’s VR-based platform – reported that a stranger “simulated groping and ejaculating onto her avatar.” Even more recently, additional incidents concerning children have been reported. One article notes that a safety campaigner “has spoken to children who say they were groomed on the platform and forced to take part in virtual sex.” The same article describes how a “researcher posing as a 13-year-old girl witnessed grooming, sexual material, racist insults and a rape threat in the virtual-reality world.” How should we understand these virtual assaults? While sexual harassment requires no physical presence, when we attempt to consider whether such actions represent a kind of physical violence, things get complicated, as the victim has not been violated in the traditional sense.

This problem has been made more pressing by the thinning of the barrier that separates the virtual from the physical. Mark Zuckerberg, co-founder and CEO of Meta, has emphasized the concept of “presence” as “one of the basic concepts” of the Metaverse. The goal is to make the virtual space as “detailed and convincing” as possible. In the same video, some virtual items are designed to give a “realistic sense of depth and occlusion.” The Metaverse attempts to win the tech race by mimicking the physical sense of presence as closely as possible.

The imitation of the physical sense of presence is not a new thing. Many video games also cultivate a robust sense of presence. Especially in MMO (massively multiplayer online) games, characters can commonly touch, push, or persistently follow each other, even when it is unwelcome and has nothing to do with one’s progress in the game. We often accept these actions as natural, as an obvious and basic part of the game’s social interaction. It is personal touches like these that encourage gamers to bond with their avatars. They encourage us to feel two kinds of physical presence: presence as a user playing a game in a physical environment, and presence as a character in a virtual environment.

But these two kinds of presence mix very easily, and the difference between the user and the avatar can easily be blurred. Having one’s avatar pushed or touched inappropriately has very real psychological effects. It seems that at some point, these experiences can no longer be considered merely “virtual.”

This line is being further blurred by the push toward Augmented Reality (AR), which places “virtual” items in our world, and Virtual Reality (VR), in which “this” world remains inaccessible to the user during the session. As opposed to classic games’ sense of presence, in AR and VR we explore the game environment mainly within one sense of presence instead of two, from the perspective of a single body. Contrary to our typical gaming experience, these new environments – like that of the Metaverse – may only work if this dual presence is removed or weakened. This suggests that our experience can no longer be thought of as taking place “somewhere else” but always “here.”

Still, at some level, dual presence remains: when we take our headsets off, “this world” waits for us. And so we return to our main moral question: can we identify an action within the embodied online world as physical? Or, more specifically, is the charge of sexual assault appropriate in the virtual space?

If one’s avatar is taken as nothing but a virtual puppet controlled by the user from “outside,” then it seems impossible to conclude that gamers can be physically threatened in the relevant sense. However, as the barrier separating users from their game characters erodes, the illusion of presence makes the avatar mentally inseparable from the user; experientially, the two become increasingly the same. Since the aim of the Metaverse is to create such a union, one could conclude that sharing the same “space” means sharing the same fate.

These are difficult questions, and online spaces, as well as the concepts that govern them, are always in development. However, recent events should be taken as a warning to consider preventive measures, as these new spaces require new definitions, new moral codes, and new precautions.

It’s Just a Game: The Ethics of Tom Clancy’s Not-So-Elite Squad

image of man in military gear firing weapon

Video game company Ubisoft has recently received a fresh wave of backlash, this time for its latest mobile game app: Tom Clancy’s Elite Squad. The mechanics of the game are nothing revolutionary – tap here to make person X shoot person Y – and it has received a number of poor reviews for its heavy-handed monetization and boring gameplay. The problem is not so much the style of game as it is the backstory. Here is how the plot is described in the game’s introduction:

“The world is in an alarming state: wars, corruption and poverty have made it more unstable than ever. As the situation keeps worsening, anger is brewing. From between the cracks, a new threat has emerged to take advantage of escalating civil unrest. They are known as UMBRA: a faceless organization that wants to build a new world order. They claim to promote an egalitarian utopia to gain popular support, while behind the scenes, UMBRA organizes deadly terrorist attacks to generate even more chaos, and weaken governments, at the cost of many innocent lives. Simultaneously, they have been hacking social media to discredit world leaders and rally people to their cause. Under immense pressure, world leaders have come together to authorize a new international cross agency unit designed to combat UMBRA. It is clear, playing by the rules will not win this fight. The leader of this unconventional squad will need to recruit elite soldiers from every corner of the world, including the criminal underworld. As commander of this unprecedented squad, we need you to put an end to UMBRA’s campaign of chaos. Welcome to Tom Clancy’s Elite Squad.”

Beyond sounding like it was written by a middle-schooler who used a thesaurus for every word except “squad,” the game’s message, many critics have noted, is a dangerous one, in that it appears to lend credence to right-wing conspiracy theories that protesters are somehow being controlled by evil members of the deep state. Additionally, it seems to advocate violence against those protesters – after all, “playing by the rules will not win this fight” – who are villainized for, bizarrely, wanting to create an “egalitarian utopia.”

(You control the guys shooting the people waving the flag reading “freedom”)

It gets much worse: the symbol that Ubisoft chose to represent the antagonists bears a striking resemblance to that used by the Black Lives Matter movement. So close, in fact, that Ubisoft issued an apology, and promised to remove the symbol from the next update. Of course, the plot of protesters being secret terrorists remains central to the game.

Hanlon’s razor advises us to “never attribute to malice that which is adequately explained by stupidity.” That being said, there might not be enough stupidity available to adequately explain Ubisoft’s choice of plot and imagery. It seems clear not only that an apology was warranted, but that significant changes to the game ought to be made. The outrage online has been widespread, and deserved.

But wait: why make such a big deal out of this? It is, after all, nothing more than a dumb mobile game from a franchise whose best days are likely long behind it. And it’s not like we haven’t had Tom Clancy’s work glorifying the military for decades already, in the forms of books, movies, and TV shows – featuring protagonists with jaw-droppingly original names like “Jack Ryan”, “John Clark”, and “Jack Ryan Jr.” – not to mention dozens of video games. These are works of fiction, though, and people are able to differentiate such works from reality. So really, we shouldn’t be bothered by the latest in a long line of predictable Tom Clancy-branded properties.

There is, I think, something to be said about the “it’s just a game” response. For instance, while research on the effects of violent video games on their players is ongoing, there is a good amount of evidence that long-term exposure to such games has no effect on levels of aggression, pro-social behavior, impulsivity, or cognition in general. So the thought that a mobile game in which one shoots protesters is going to have a direct impact on the number of people who are going out and shooting protesters – something that has become a real problem as of late – is too quick.

The relevant worry, then, is not so much that the game will be a direct cause of future violence, but rather that the fact that a company would create such a game, with such a premise, at this particular moment in time, helps to normalize a false and dangerous narrative of events that is very much taken seriously by some people.

Here, then, is a difference between Tom Clancy’s Elite Squad and the kinds of video games that have historically received moral outrage and are the subject of the studies mentioned above: those falling into the latter category do not tend to advance a narrative that promotes a particular political agenda. Consider, for example, a violent video game like Mortal Kombat (one of the games that sparked early congressional hearings into depictions of violence in video games in the early 1990s), which involves graphic acts of decapitation and pixelated blood. These acts are so obviously far removed from what are generally considered good societal values that it is easy to see how one could separate the acts depicted in the video game from those promoted in the real world. On the other hand, when a game tells you that protesters who wave a flag remarkably similar to that used by the Black Lives Matter movement are driven by ulterior motives, are “hacking social media,” and ought to be dealt with in any way possible – a message that seems to be condoned by right-wing media outlets – the lines between fantasy and reality become much less distinct.

In one sense, Tom Clancy’s Elite Squad is just a game. In another sense, it is a symbol of an uninformed and intolerant worldview that has potentially real and damaging consequences. With any luck, Ubisoft’s decisions will turn out to be the result of an enormous amount of stupidity, and not an equivalent amount of malice.

The Culture of Crunch: The Video Game Industry and Overwork

Banner for the game "Red Dead Redemption 2"

This month sees the release of one of the most highly anticipated video games of the year, Red Dead Redemption 2. The game was created by video game supergiant Rockstar Games, best known for its Grand Theft Auto series. However, the co-founder of Rockstar Games, Dan Houser, has recently been the target of controversy for stating in a tweet, as well as in an interview with Vulture, that employees at Rockstar had, in the weeks leading up to the game’s release, been working “100-hour weeks.” While Houser later clarified that he did not mean to imply that all employees were working such hours, or that any employee was required to do so, the statement nevertheless reignited discussion about the seemingly ubiquitous occurrence of “crunch” in the video game industry.

“Crunch” is generally defined as a period in which employees put in work weeks much longer than 40 hours, often unpaid, in the weeks or months leading up to the completion of a project. Take This, a non-profit that describes itself as “serving the game community/industry” by providing resources, guidelines, and training about mental health issues in the game community, explains in a whitepaper that crunch is often the product of unrealistic deadlines, and that employers feel it is required for “creativity and esprit de corps.” Take This describes typical crunch times as involving 60- to 80-hour work weeks, although some in the industry have reported even more significant demands on their time. For example, one of the early catalysts for more public discussion of crunch came in 2004 in a blog post by the spouse of an Electronic Arts employee, describing periods in which employees worked up to 12 hours a day, seven days a week.

Take This also describes the toll that crunch takes on the mental and physical health of employees: “Long work hours might mean giving up sleep, eating poorly, overindulging in caffeinated drinks, and otherwise abandoning healthy habits”, with “major risk factors for health problems that include insomnia, depression, heart disease, stroke, and on-the-job injuries”. Studies reported in the whitepaper also strongly support the idea that crunch is actually detrimental to the quality of the finished product, as well as the company itself: excessive crunch time tends to result in more numerous software defects and lower critic ratings, and more significant costs to the company in terms of dealing with employee turnover.

It seems clear that there are a number of ethical problems surrounding crunch in the video game industry. First and foremost, even if crunch does end up producing a higher quality video game (although we have seen reason to think that it doesn’t), it seems that detriments to the mental and physical well-being of employees are costs that outweigh any potential benefits. It would then seem to be a generally unethical practice to make significant crunch mandatory.

However, while companies like Rockstar have clarified that there is no explicit expectation of crunch for their employees, there may be more subtle factors that leave employees feeling as though crunch is expected of them. For instance, Matt Webster at gamesindustry.biz describes a number of practices that can create an implicit requirement for crunch, and that companies have an obligation to avoid. Webster suggests a number of best practices, including setting realistic expectations for completion, regularly seeking feedback from employees and monitoring their health, and curbing rewards for bad behaviors; for example, Webster notes how praising someone for working excessive hours – “She’s just passionate” – reinforces detrimental behavior.

Webster’s observations speak to a second ethical concern surrounding the crunch phenomenon, namely the obligations that companies have towards their employees to try to mitigate the effects of crunch. Eliminating those effects will take more than just explicitly decrying the practice: one may also be required to establish a workplace culture in which employees do not feel implicitly obligated to engage in crunch. In addition to the above best practices, Webster suggests that those in leadership positions ought to modify their own behavior to set the right kind of precedent for their employees. “Like the behaviors you want to see,” Webster recommends, adding, “if you believe everyone should leave at six o’clock…then leave at six.”

We have seen that, given the detrimental effects on employees, employers have moral reasons not to require crunch. However, since crunch can be a product not only of explicit policy but of implicit behavior, the actions of those like Houser – someone who does not require crunch but still engages in it – may still be morally problematic. After all, if all of your bosses are working 100-hour weeks, you will no doubt feel pressure to start putting in a lot of overtime yourself.

One final worry has to do with how we as consumers of games that are partly the result of crunch ought to behave. Jessica Conditt at Engadget, for example, reports mixed feelings when appreciating the artistic qualities of Red Dead Redemption 2 while knowing that many of those qualities were the product of significant crunch at Rockstar:

“While I admire these in-game moments, they’re also the ones that shake me out of Red Dead Redemption 2’s spell the most abruptly. The more beautiful the scene, the more obvious how much talent and work has gone into it, the more I think about the people behind it and how many 80-hour weeks they might have endured; how their emotional and physical health must have fared; how many family milestones they may have missed. The more I think about crunch.”

Conditt suggests that, as a minimum, both those in the video game industry and consumers of games ought to engage in an open discussion about the consequences of crunch. Given that, according to some estimates, Red Dead Redemption 2 is expected to sell 25 million copies in the first six months after its release, we can hopefully expect a lot of those conversations in the near future.

Evolving Apocalyptic Narratives and the Ethics of Fallout

An in-game screenshot of Fallout 4, where a man points a weapon at a zombie.

Since the first atomic bombs fell on Hiroshima and Nagasaki in 1945, pop culture has imagined and re-imagined apocalyptic narratives. From the “atomic pop” that proliferated on the radio in the 1940s and 50s to the 2008 and 2015 releases of the post-apocalyptic video games Fallout 3 and Fallout 4, this recurring theme exemplifies how ingrained the apocalyptic narrative is in Western culture. However, a shift can be seen from the apocalyptic fears of the years just after the bomb to the post-apocalyptic heroic narratives told today in video games like Fallout. Although the apocalypse was once seen as the ultimate end, post-apocalyptic narratives make room for life afterwards, a life inherently fraught with ethical dilemmas about how to rebuild society. When did apocalyptic narratives shift from ultimate annihilation to heroic stories about rebuilding society, and how does Fallout provide a moral compass for navigating the post-apocalypse?

Continue reading “Evolving Apocalyptic Narratives and the Ethics of Fallout”

Growing Pains in the Rapid Rise of eSports

From August 7 to 12, the Dota 2 Championships are taking place in Seattle, Washington. Eighteen qualifying teams will compete for a combined prize pool currently estimated at $23.8 million. The large prize pools and high participation and viewership make Dota 2 a rival to more traditional sports: The International’s first prize last year was comparable to the cash rewards in sports like tennis, cricket, and golf, out-pacing them all in terms of grand prize. Thus, though Dota 2 isn’t competing with the most lucrative sports like football, there is a real sense in which eSports are rivaling traditional, physical sports. Since 2014, more people have watched the League of Legends world championships than the NBA finals.

Continue reading “Growing Pains in the Rapid Rise of eSports”

Moral Panic and the “Blue Whale Game”

Over the last few months, there have been reports of a deadly internet game, “The Blue Whale.” Allegedly, teenage gamers participate by following instructions provided by the game’s designers. These instructions include watching horror films and waking up in the middle of the night. The challenge goes on for 50 days, and the final instruction is to commit suicide.

Continue reading “Moral Panic and the “Blue Whale Game””

The Imperialism of Animal Crossing

When I first popped the cartridge for Animal Crossing: New Leaf into my Nintendo 3DS, I had no idea I would be playing a game about imperialism. I had played iterations of the cute “life simulator,” complete with its talking animal villagers and customizable houses, since it first came to the United States on the GameCube in 2002. The colorful art style and simplistic premise of New Leaf checked all the right nostalgia boxes, and I was excited to see what the latest iteration in the series had to offer. Considering imperialistic narratives was hardly the priority.

Continue reading “The Imperialism of Animal Crossing”

Are You Your Avatar?

The online world has always been one of seemingly endless possibilities. In this space, it has been said, anything can happen and anything can be changed, including one’s own identity. And while this has been the case with many games, others have upended this model entirely. One of them, the online survival game Rust, is doing so to provoke debate about a topic rarely considered: race in the online world.

Continue reading “Are You Your Avatar?”