Why Debate?

In December of 2025, CBS and Bank of America announced the “Things That Matter” series of town hall debates. According to editor-in-chief Bari Weiss,

We believe that the vast majority of Americans crave honest conversation and civil, passionate debate… In a moment in which people believe that truth is whatever they are served on their social media feed, we can think of nothing more important than insisting that the only way to get to the truth is by speaking to one another.

Debates are popular these days. The YouTube channel Jubilee, for example, claims to “Provoke human connection” by producing videos where one central figure debates numerous contrarians (often 20 or more), with recent video titles including: “Doctor Mike vs. 20 RFK Jr. Supporters,” “Andrew Callahan vs. 20 Conspiracy Theorists,” and “Piers Morgan vs. 20 Woke Liberals.” These videos receive millions of views, and the Jubilee channel itself boasts over 10 million subscribers.

Our current moment is often presented as one in which polarization is running amok. Think pieces and books abound on the need to bridge social divisions, and debating issues is often seen as the best way to reach across political aisles to address those issues rationally and productively. But despite their popularity and promise of healing divides, maybe what we need is a little less debate.

What do I mean by “debate”? While they can take different forms, debates are generally ways of engaging with issues in which parties take opposing stances, present arguments for their views, and criticize and respond to the arguments of their opponents. In engaging in a debate, the ultimate goal is to “win”: in formal debates, a “winner” or “loser” may be declared, while in more informal debates, it is left to an audience or the participants themselves to reach a verdict.

Issues can be debated in better or worse ways. A debate where people are screaming and insulting each other isn’t a good one; hence, we see “civility” commonly espoused as a characteristic of a quality debate. There is no shortage of additional advice about how to debate well: books on proper debating techniques like Win Every Argument: The Art of Debating, Persuading, and Public Speaking and Good Arguments: What the Art of Debating Can Teach Us About Listening Better and Disagreeing Well emphasize rhetorical strategies alongside proper debate decorum, with the promise of helping their readers get the edge over their ideological rivals.

If we’re going to debate, then we should want to do it well. One of the main criticisms of Jubilee, for example, is not that it features debates, but that the debates it features are bad: participants arrive unprepared and talk over one another, many of the arguments are specious, and the overall vibe is more chaotic than civilized. However, we might think that something like the “Things That Matter” series, which promises well-informed speakers and civil engagement, would be the ideal way to address divisive political issues.

The lineup of the “Things That Matter” series, however, illustrates one of the major problems with how political and ideological issues are presented for debate, at least in the media: while debates are often presented as ways to critically evaluate views from a neutral position, choosing which issues to debate is itself a political act.

For example, one of the purported issues that “matter” in the CBS series concerns feminism, with the question “Has feminism failed women?” being up for debate. While it is certainly worthwhile to discuss feminism and its relation to American politics, the question is presented in a way that places two positions on equal footing: one that claims that feminism has failed women, and one that says it hasn’t. Using debate to address political issues thus presents them as being equally worthy of consideration, even when they’re not. There are plenty of examples of this kind of “bothsidesism” in the popular debate landscape: flat earthers vs. scientists, antivaxxers vs. doctors, and so on. The mere fact that people disagree about an issue, then, does not mean that it ought to be presented as a topic for debate.

Some commenters have also criticized the “Things That Matter” series in particular for its choice of debate topics. In addition to “Has feminism failed women?”, the series includes the questions “Does America need God?” and “Should Gen Z believe in the American dream?” Again, these are certainly topics some Americans disagree about, and they may even be ones that have some well-formulated arguments on both sides. However, in the current moment, they are arguably far from the most pressing. Choosing an issue for debate, especially as part of a major national televised event, thus elevates it as being particularly important. Again, choosing which issues to debate becomes a political act.

These might be problems when it comes to big, televised events or social media channels with millions of followers. But surely, when it comes to our own personal disagreements, we should still encourage engaging with people we disagree with in ways that emphasize civility and well-formulated arguments. What, then, would it mean to say that we should be debating less with each other?

There are ways to engage with contentious issues that don’t involve debating them. Philosophers, arguably, do not debate; instead, they discuss different views. While the difference between debates and discussions might seem semantic, the philosopher’s discussion is motivated by following the force of arguments and working collaboratively to find the best such arguments in order to gain a deeper understanding. While philosophical argumentation can also be done in better or worse ways, the virtuous philosophical argument is not motivated by a desire to defeat an opponent.

Indeed, because debates encourage the mindset of wanting to emerge as a “winner,” they encourage obstinacy in the face of evidence that one might be wrong. Sometimes, though, the most reasonable course of action in the face of arguments is to change your mind. That this would constitute “losing” a debate underscores both the artificiality of debates and how ill-suited the debate mindset is to engaging with complex political issues.

Debating isn’t all bad. As many books about debating point out, to debate well, you need to know how to do other important things, such as analyzing and formulating arguments. But the idea that the best approach to socially divisive issues is to debate them fails to acknowledge how debates can amplify political differences rather than bridge them.

When Is Foreign Intervention Justified?

The current US administration has become increasingly interested in the affairs of other countries. This includes the military strikes on Iranian nuclear sites in 2025, as well as the more recent ousting and detention of President Maduro in Venezuela, consideration of further strikes on Iran during the ongoing protests, and, most surprisingly, aggressive saber-rattling towards long-time American ally Greenland. Each of these situations concerns the thorny issue of when and how interference in the internal affairs of a sovereign nation is justified.

National sovereignty is the bedrock of the post-WWII international order. Each nation is an absolute (or near absolute) authority within the territory it controls. Moreover, nations relate to each other as equals, or at least as equals under international law. Practically speaking, there are, of course, vast differences in political and economic power between nations.

The justifications for such a system of sovereign nations bridge power and ethics. National sovereignty as a norm is ideal for individual nations interested in running their country without foreign obstructions. And nations unsurprisingly have a vested interest in preserving a system where the nation as a unit of political organization is paramount. Even the United Nations is designed to respect national sovereignty. This setup is especially amenable to powerful nations, such as the United States and China, who would likely be unenthusiastic about a global system that did not preserve their advantages.

Nonetheless, there are ethical defenses of sovereignty to be made. First, a sincere commitment to sovereignty, and especially to sovereign equality, can help to prevent war and conflict between nations. Second, sovereign equality is in the interest of smaller nations, who would otherwise be at risk of being constantly victimized, or even taken over, by more powerful nations. (Again, the actual implementation of sovereign equality is imperfect. Powerful nations do throw their weight around. Less powerful nations do occasionally get squashed.) Finally, national sovereignty can relate to self-determination. If we believe that a people have a right to self-determination, that is, to collectively decide on governance, then respect for national sovereignty is one way to respect self-determination.

The sticking point of national sovereignty is that while it may protect nations from the interference of other nations, it does not protect the people of a nation from harms enacted or permitted by their own country. Hence, if we want systems of government that exist fundamentally for the well-being of their people, then there must be limits on sovereignty. But, given the importance of sovereignty to the current global order, and short of overturning that order entirely, violations of sovereignty should be carefully considered.

One reason for violating sovereignty relates to self-determination. If respect for national sovereignty flows from respect for self-determination, then an oppressive or authoritarian government might not deserve the presumption in favor of non-interference. The thought is that a state which does not serve its people is no true state at all. This argument, however, does not have much traction in the current global order. Instead, the more common justifications involve preventing atrocities or self-defense.

For example, humanitarian intervention approaches contend that sovereignty should be violated to prevent genocides and other atrocities, usually through military action. Ethically, we may also consider a nation engaged in such monstrous acts to have violated a covenant with its people and to lack the political legitimacy to claim sovereignty. The exact way humanitarian intervention works varies – which atrocities? who gets to decide? – but central is that the motivation for intervention is humanitarian, as opposed to merely being in the strategic interest of a specific nation. Similarly, the Responsibility to Protect approach, adopted by the United Nations in 2005, represents a shared commitment to protect people from mass atrocity crimes such as genocide. The final stage of the Responsibility to Protect entails intervention, albeit only under certain conditions and after other approaches have been exhausted.

Self-defense, or, more controversially, defense against an imminent threat, is also a commonly offered reason for violating national sovereignty. The self-defense case is clear-cut: if a country is being attacked, it can violate sovereignty and attack back. Imminent threat is more slippery, demanding an assessment of the danger and the need for preemptive action.

Besides the justification for intervention, it is also important to consider who should be able to intervene. For interventions other than self-defense, an organization like the UN, or a similar form of global governance, allows for a collective international regime with an agreed upon set of conditions for the violation of sovereignty. This provides a structure for intervention that, at least in theory, does not generally corrode national sovereignty or render it merely the privilege of powerful states. Likewise, time permitting, an international organization like the UN could provide a neutral actor to assess a threat. (Although there are certainly those who would allege the UN is unduly influenced by its more powerful members, or that ostensibly humanitarian interventions can have ulterior motives.)

This puts us in a clearer position to understand why recent US interventions, or potential interventions, are generating such fear and controversy.

While there was at least a plausible humanitarian justification for intervening in Venezuela (although certainly no better than for many other countries), the US was quite forthright about pursuing oil there. It was also a largely unilateral action. The US recently pulled back from the brink of intervening (again) in Iran, although, in light of intense violence against protestors, there would at least be a potential humanitarian motivation (even if there are undoubtedly strategic interests at play). It remains to be seen how the situation with Greenland will develop. However, there is no plausible humanitarian justification nor imminent threat. The US has been clear that what it wants from this one-time ally are its natural resources and strategic location. In other words, it has nakedly abandoned any pretense of a higher ethical justification for the violation of sovereignty.

Critics might allege that Venezuela, at least, is business as usual for US foreign policy. The US has long had an interventionist foreign policy, from coups in Latin America, to the Vietnam War, to the 1999 intervention in Kosovo (with NATO), to the wars in Iraq and Afghanistan. However, even while engaging in these actions, the US generally maintained the rhetorical trappings of respect for sovereignty, coalition building, and international cooperation. So, while the US was perhaps hypocritical, it was not completely dismissive of the post-WWII system, a system of which it was one of the primary architects and beneficiaries. Like Russia’s action in Ukraine, US action in Greenland would signal that there are no boundaries for foreign intervention other than the respective power of the nations involved. A marked departure indeed.

Can Algorithms Really Treat Us Fairly?

The world is increasingly inundated by algorithmic decision-making. Everything we experience online is the result of a calculation. It is sometimes difficult to remember that what we see is not chosen randomly, or without reason. There are motivations behind the online experiences we have, mostly driven by engagement metrics and revenue generation.

Beyond our consumption choices, algorithms are used in high-stakes, real-world conditions. Courts use algorithmic systems like COMPAS to predict the likelihood of recidivism. Car insurance companies use AI algorithms to determine whether we should pay more or less for our insurance. Health care companies are using AI to approve and deny patient coverage.

As our society increasingly embeds algorithmic decision-making into its infrastructure, it will become ever more important to know whether, and precisely how, algorithms can treat us fairly. A great deal of the current research on algorithmic fairness focuses on the best ways to remove biased data inputs and how to adjust outcomes so that they conform to a fair distribution based on some theory of outcome fairness (which is itself a contentious subject). There are, however, far fewer arguments that consider the possibility that algorithms and fairness may be an impossible union, and that some degree of unfairness may always be baked into algorithms. But on what grounds could we make this claim?

The reason is not simply that we can’t make outcomes fair, or that we can’t remove bias from data (though perhaps these things are true). The reason has to do with the very nature of how algorithms work.

To make this argument, I will consider fairness from a Rawlsian perspective. Rawls explicitly and repeatedly states that his conception of fairness concerns the basic structure of society. According to Rawls, the basic structure consists of a political constitution, an independent judiciary, legally recognized forms of property, an economic structure, and a family structure. For Rawls, then, fairness is relevant with respect to these general social practices, including decision making that affects these areas.

Fairness for Rawls also requires engagement between a particular kind of person, namely those who are rational and reasonable. Rational people are those who pursue what is in their own best interests, while reasonable people are those who are willing both to propose fair terms of cooperation and to accept them when proposed by others. If we are to be engaged in the creation of a fair social arrangement, we must be engaged with people who are both rational and reasonable; otherwise, fairness is likely impossible to create.

Crucially, we must also see one another as free and equal. What makes us equal, for Rawls, is that we possess two basic moral powers: the capacity to form a conception of justice and the capacity to form a conception of the good. What makes us free, according to Rawls, is that we conceive of ourselves and of each other as having the moral power to form, revise, and pursue a conception of the good. We regard ourselves as self-authenticating sources of valid claims. Simply put, we think of ourselves and one another as free.

If we are going to make algorithms fair in a Rawlsian sense, we need to reconcile how algorithms treat people with Rawls’s idea that a fair society is one constituted by free and equal persons. So, what is the argument for why algorithms may be unable to treat us fairly? It goes something like this:

Justice requires treating people as free. This entails (1) treating people as capable of conceiving of the good and pursuing it, and (2) treating people as self-authenticating sources of valid claims.

However, algorithms cannot, in principle, do this. Algorithmic systems do not have minds, and so they do not think of us as free, for the simple reason that they do not think at all. Treating someone as algorithmically predictable is conceptually incompatible with treating them as free in the Rawlsian sense. Numbers cannot adequately represent the kind of Rawlsian freedom that justice requires.

Algorithms can be deterministic, probabilistic, or non-deterministic. For those who believe that determinism and free will are incompatible, it should be obvious why deterministic algorithms cannot treat people as free. But even for those who accept that determinism and freedom are compatible, a deterministic algorithm still treats humans as deterministic systems, not as free agents.

Probabilistic and non-deterministic algorithms may sound as though they leave room for treating people as free, but it is not clear how they do so. The burden of proof seems to lie with those who claim that such algorithms are fair. If algorithms are supposed to treat people as free, then we need a reason to believe that treating people probabilistically or non-deterministically is equivalent to treating them as free.

Consider a concrete example. A court uses COMPAS to evaluate a defendant’s eligibility for bail. The probabilistic algorithm determines that there is a 75% chance that the defendant, if released, would flee, and on that basis the court denies bail. In what sense does the algorithm treat the defendant as free merely because it relies on a probabilistic calculation?

The crucial question that proponents of algorithmic fairness must answer is this: How can an algorithm treat a human being as free when its basic functioning relies on the predictability and quantification of human behavior?

While algorithmic harm and bias are real, there are reasons to be optimistic that we might make algorithms fairer, or more just, on the outcome side. One can define fair outcomes and then adjust decision-making parameters until those outcomes are achieved.
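To see what this outcome-side adjustment amounts to in practice, here is a minimal Python sketch. It is purely illustrative: the function names, the toy scores, and the choice of demographic parity (equal approval rates across groups) as the definition of a “fair outcome” are my own assumptions, not any deployed system’s method.

```python
def approval_rate(scores, threshold):
    """Fraction of candidates whose score clears the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def equalize_rates(scores_a, scores_b, threshold_a):
    """Toy outcome adjustment: search for a decision threshold for
    group B whose approval rate best matches group A's rate at
    threshold_a (demographic parity). Candidate thresholds are the
    observed scores in group B."""
    target = approval_rate(scores_a, threshold_a)
    best_t, best_gap = threshold_a, float("inf")
    for t in sorted(set(scores_b)):
        gap = abs(approval_rate(scores_b, t) - target)
        if gap < best_gap:
            best_t, best_gap = t, gap
    return best_t, best_gap

# Hypothetical risk scores for two groups (illustrative numbers only)
group_a = [0.2, 0.5, 0.8, 0.9]   # approval rate at threshold 0.5 is 0.75
group_b = [0.1, 0.3, 0.4, 0.6]
t_b, gap = equalize_rates(group_a, group_b, threshold_a=0.5)
```

Notice that nothing in this procedure engages with the people behind the scores; it only redistributes outcomes by tuning a parameter, which is precisely the procedural gap the argument above is concerned with.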

However, if justice and fairness require that our procedures themselves be fair, and if fairness demands treating people as free, then we must also ask whether algorithmic systems can ever satisfy this requirement.

We might wonder why algorithms need to be just in this particular way. One might argue that the solution is straightforward: all that is required is to ensure that a human being remains in the decision-making loop. If a human uses an algorithm and then offers a justification to another human, is that not enough to satisfy the requirement of treating people as free?

I think the answer depends on a few things. First, when a decision-maker justifies a decision by appealing to an algorithm, we can ask whether there are additional reasons for the decision and how central the algorithm’s role was in arriving at it. In cases where the sole justification is the algorithmic output, the presence of a human decision-maker is merely performative. We might as well remove the human entirely and allow an anthropomorphized system to deliver the decision.

However, if there are independent reasons a person can give beyond the algorithmic output, then perhaps the conditions of Rawlsian fairness can be preserved. Even then, this would depend on whether the reasons offered still treat the person subject to the decision as free.

The upshot of this argument is not simply that algorithmic systems can be poorly designed and need to use better, less biased data, or produce fairer outcomes (though surely both would be ideal). Rather, it is that there may be a fundamental mismatch between predictability and justice. If treating people as free requires engaging them as self-authenticating sources of claims, then any system that governs by prediction rather than justification may fall short of fairness in a Rawlsian sense.

This leaves us with an urgent question to answer: are we willing to trade procedural justice for the benefits of algorithmic decision-making, or do certain decisions demand forms of reasoning that algorithms cannot, in principle, provide, such that we should forgo the use of algorithms and AI entirely?

The Sloppy Hyperreality of Sora

It must have been a wondrous thing to enter a cathedral in medieval Europe. The stained-glass images of saints and angels would have shone like kaleidoscopes in an otherwise earthy world. Their rainbow patterns, beaming through the haze of incense, would have echoed the cantor’s chants, plangent and unintelligible, reverberating through the nave, casting sacred stories in a rarefied light. By providing a space for extraordinary physical sensations, the cathedral would have drawn the mind from the ordinariness of the everyday toward life’s sacredness and significance.

Today, the cathedral is a mere curiosity. We encounter more images scrolling for an hour on Instagram than a medieval peasant would have encountered in their entire life. The enchantment of the stained-glass saint is hardly visible against the blooming, buzzing digital confusion that surrounds us today. Our relationship to images has changed. Pictures hit differently.

There may be no better place to look for a pointed illustration of this confusion than Sora, OpenAI’s new TikTok-style social media app populated exclusively by AI-generated videos. Like TikTok, Sora feels like a digital conveyor belt designed to jam as much mechanized content into our conscious minds as possible. Videos on the platform vary in style, tone, and content. Many are uncanny. All are fake. The most distinctive feature of Sora is that users can create videos featuring likenesses of real people. By recording three words and a turn of your head, you can depict yourself in nearly any kind of fictional situation and, with permission, you can do the same with the likenesses of other users. You can even create videos depicting long-dead cultural figures like Martin Luther King, Jr. and Bob Ross.

OpenAI markets Sora using the term hyperreal: “Turn your ideas into videos with hyperreal motion and sound.” OpenAI’s use of this term presumably refers to the visual quality of the videos, which are nearly indistinguishable from lens-based recordings. The concept of hyperreality, however, has a deeper cultural pedigree. Ironically, critical theorists have used the term to describe a worrisome shift under capitalist visual culture, one that tech firms like OpenAI have been exacerbating in recent decades.

One of its earliest uses appeared in a 1975 essay by the Italian philosopher Umberto Eco, titled “Travels in Hyperreality” — a critical travelogue documenting a road trip Eco took across the United States. Eco was fascinated by Americans’ obsession with simulations and replicas, such as wax museums, ghost town attractions, and, that ultimate “degenerate utopia,” Disneyland. People are drawn to such simulations, he suggests, because they package our messy realities into perfect, controllable images. These images are pleasant and easy to consume, and so they seem more understandable, and eventually more real, than their referents. This blurs the distinction between the fake and the authentic, which Eco describes as a culture of hyperreality.

The term gained wider traction, however, through the nearly contemporary writings of French philosopher Jean Baudrillard, who used it more radically to diagnose what he saw as the widespread and fundamentally alienating effect of image-saturated capitalist culture. For Baudrillard, hyperreality is a state in which cultural signs have detached from reality completely. Historically, images could be said to represent the real world either faithfully or falsely. Under capitalist culture, however, signs only reference themselves, pinging back and forth between advertisements, television, movies, magazines, newscasts, and now, social media, making the line between the fake and the authentic no longer relevant.

According to Baudrillard, the detachment of our visual culture from reality has massive consequences for human subjectivity. Immersed in this symbolic game of pinball, we gradually lose our connection to meaning as rooted in embodied human experience. Within the hyperreal, you are unable to interpret the world, or form a judgment about what matters, except by reference to the sign systems around you. You watch a sunset over the ocean, and you are awestruck because it looks almost as beautiful as a movie; you fall in love, and you are excited because you find yourself in your own Cinderella story. While it could be said that sign systems have always structured our experiences, the difference is that these are self-referential and self-generating, rooted in the attempt to dominate human consciousness for profit.

We thus live our lives in a simulation of meaning — what Baudrillard calls the simulacrum — yet we continue to think our beliefs, desires, and preferences are our own. As a simulation, hyperreality is an inescapable paradigm that covers its tracks. One effect of the hyperreal is a flattening out of our cultural signs. Internet culture has already deepened this effect. Taylor Swift, Zohran Mamdani, and Carrie Bradshaw inhabit the same plane of consciousness as we scroll through our seemingly personalized TikTok feeds. This flattening leads to a loss of historical consciousness, as everything becomes an image in the stream. Although it may be comfortable, the simulacrum draws us away from concerns and activities that are integral to human flourishing. There are no shadows in the grocery store; the television never stops flashing. There is no reason to think on death, or, for that matter, anything beyond the soothing and profitable signs in which we are immersed.

As others have suggested, generative AI seems to represent a culmination of Baudrillard’s hyperreality. Self-generating, detached from physical reality, yet sneakily naturalistic, images and videos produced by AI replicate many of the key features of the simulacrum. While making videos of yourself and your friends on Sora seems like meaningless fun on the surface, if there is any truth in Baudrillard’s observations, then we must consider the deeper impact these videos may have on our subjectivities as well as our relationships to others.

Disneyland, for all its fakery and capitalist mechanization, is still made and run by humans, as were the advertisements and media images when Baudrillard developed his analysis of the hyperreal. The near eradication of embodied humanity in generative AI’s processes, coupled with the increasingly heightened naturalism of these images, threatens to deepen the negative effects of the hyperreal by creating a media landscape that is nearly, or even completely, non-human. The Dead Internet Theory already claims that most of the internet, including social media, has been taken over by bots and AI-generated “slop.” Since the release of Sora 2 in September 2025, its videos are increasingly found on other social media platforms, often with the Sora watermark removed. Our media culture seems to be drifting further and further from embodied human experience.

However, it is also possible that the extreme mechanization of videos like those produced by Sora might pull the veil back and compel at least some of us to distrust media so deeply that we seek ways to reconnect with embodied experience. For Baudrillard, Disneyland played an important role in the maintenance of hyperreality. Its hyperbolic simulations made the surrounding environs of LA — the strip malls, billboards, Porsche dealerships, etc. — seem real by comparison. This comparative realness, Baudrillard argued, mollifies us by obfuscating the fact that our everyday life is completely and utterly shaped by the simulacrum. However, slop videos, and the technology behind those videos, render the artificiality of our sign systems more salient, at least for technologically savvy users. If you know you’re looking at slop, you know you’re looking at an image that is in some profound way detached from embodied human experience. For some people, this awareness might produce a tear in the veil. To the extent that humans still aim for an authentic engagement with reality, this tear could actually motivate an attempt to escape rather than deepen immersion in the hyperreal.

Yet, escape is not as easy as stepping away from the computer. Our built environment is affected by people’s expectations about what reality should look like, and these expectations are in turn shaped by visual culture. Traveling to Rome in pursuit of an authentic experience, tourists seek out cafés that conform to a Hollywood image. The act of ordering and drinking a cappuccino becomes a performance of that image. When the children of today, raised on Italian brainrot, begin to travel, how will their associations and expectations of Rome change? What kind of images will shape the cities of tomorrow? A better escape route might involve seeking embodied experiences in non-built environments. “Touch grass,” as they say. Yet the fact that “touch grass” is itself a meme embodies the difficulties with this route. Capitalist sign systems digest their own critiques. There may be no escape.

Throughout Silicon Valley, tech companies preach that generative AI will usher in a new age of transcendence, pushing humanity beyond the limitations of our frail, embodied selves. While it’s true that massive increases in computing power will undoubtedly lead to biomedical breakthroughs and other scientific advances, we must ask ourselves: at what cost? It is telling that OpenAI, whose core mission is to develop “artificial general intelligence that benefits all of humanity,” should create a product that seemingly takes us in the opposite direction.

Do Great Video Games Require Suffering?

A Soulslike is a video game that shares similarities with installments from the Dark Souls series, as well as other similar games from developer FromSoftware. Infamous for their high levels of difficulty, the Dark Souls games are loved by many and despised by many others, and are often treated as a bar for differentiating “hardcore” or serious gamers from their more casual counterparts. Many games in the Soulslike genre are also highly rated: recent additions to the genre like Elden Ring and Hollow Knight: Silksong have received almost universal acclaim, with reviewers’ criticisms almost always pertaining to how difficult they are.

There have been debates in the gaming community about whether such games should be easier, or at least come with an optional easy mode, so that more players could enjoy them. However, many fans and developers argue that the high degree of difficulty is essential to the experience. For example, in a recent interview, the developer of an upcoming Soulslike argued that while they would be open to adding an easy mode, “what connects Soulslike players is that they are all struggling in these games” and that such struggle “has a huge value” for them. This sentiment echoes a statement from FromSoftware’s president Hidetaka Miyazaki, who stated that in their games, “hardship is what gives meaning to the experience.”

For those who aren’t fans of such games, it might sound odd that some people voluntarily put themselves through “hardship” and “struggle.” What, then, is the relationship between suffering and the value of one’s experience when it comes to playing Soulslikes or other difficult video games? And would adding the option to make these games easier really detract from their value?

We might first wonder whether there really is value in beating a very difficult video game. Certainly, Soulslike players seem to think so, insofar as they enjoy the experience of overcoming significant obstacles. We might think, though, that while it might be fun to finally defeat a Dark Souls boss after hours of trying and failing, there is nothing truly valuable about doing so: it is, after all, just a video game, and one’s time could have been used more effectively to create art, solve problems, or do something other than move some pixels around on a screen.

This kind of argument, though, is too dismissive. We can account for the value of defeating a difficult video game by virtue of the fact that doing so is an achievement. Philosopher Gwen Bradford argues that achievements are valuable because in achieving something we are allowed to “exercise characteristic human capacities, to fully express aspects of our nature, and to fully ‘be all we can be’.” Specifically, Bradford notes that one of the fundamental human capacities that we exercise in achieving something is the will: achieving something requires willing ourselves to overcome obstacles, and by strengthening our will, we become a better version of ourselves.

Here we have an argument for the value of beating a difficult video game: doing so is an achievement that exercises the will. But does that mean that a difficult video game has to be brutally difficult, to the point where the creators openly acknowledge that their players are suffering? Again, we have an argument from Bradford that suggests that Soulslikes do, in fact, bestow opportunities for even greater achievements. She considers the following example:

Smith and Jones are both writers. They both write novels of equal value. Smith’s experience writing the novel is fairly typical, alternating between the usual frustrations of the writing process, and periods that are enjoyable and productive. By contrast, Jones encounters tremendous obstacles while writing his novel: he loses his wife, his house, his dog, he struggles with mental health issues, and finds the writing process utterly agonizing.

According to Bradford, while Smith and Jones have done the same thing, Jones has achieved something far greater, precisely because of all the obstacles and suffering he went through. Here we also have an argument as to why Soulslikes shouldn’t be made easier, or come with an easy mode: in doing so, we remove the obstacles that allow for a more significant achievement.

It’s not clear, though, that all obstacles one has to overcome make an achievement more valuable. Here’s an example: I am partially colorblind, and so there are some video games that are harder for me to play because I can’t tell the difference between certain colors in certain situations. In these cases, the games are harder for me, and I have to work harder to overcome obstacles created by my own misbehaving cones. While we might think that overcoming these obstacles represents a greater achievement for me, it also doesn’t seem like it would be doing me a disservice to provide a game mode that allowed me to perceive colors more easily.

Colorblindness is an example of an obstacle that may make an achievement more significant, but does not make for a more satisfying or meaningful experience playing a game. Of course, there are other obstacles I face when it comes to playing difficult video games, as well. For example, as I have gotten older, my reflexes have slowed, I have lost patience when dying over and over again, and I just don’t have enough time to get really good at difficult games. When compared to someone who is, say, younger than me and has more time on their hands, it seems like I have to overcome more obstacles than they do. Are these also obstacles that game designers should make accommodations for, perhaps in the form of a game mode for elder Millennials with full-time jobs?

Fans of Soulslikes would likely say that this is a step too far: while an obstacle like colorblindness should be accommodated, being employed shouldn’t. But it’s not clear why not. After all, when it comes to how much we are suffering when playing a Soulslike, I, as a man in his forties with a job, am suffering more than someone with spryer joints and more free time. In terms of quantity of suffering and number of obstacles to overcome, then, I would be having the same experience on an easier game mode than someone else would on standard difficulty.

So here we have the makings of an argument: if the suffering one experiences when playing a Soulslike is valuable because it creates the conditions needed for greater achievements, but different people suffer to different degrees because of their abilities and circumstances, then creating an easier mode in a difficult game will allow more players to experience equivalently significant achievements.

When we consider how game developers might accommodate more players, however, we run into a problem. After all, adjusting a game’s difficulty is not necessarily a simple matter of adjusting the value of a few variables; it might impact the intended experience and vision of the game designers. Perhaps, then, there still needs to be a baseline level of difficulty that allows a Soulslike to maintain the essence of what makes it a Soulslike, regardless of whether some people will suffer more than others when playing it.

So let’s return to our first question: we’ve seen that suffering through a difficult video game can result in a significant achievement, but is such suffering necessary? Could we, for example, still have a rewarding experience playing a game in the Dark Souls series if it wasn’t punishingly difficult?

Maybe we could. As an example, consider two copies of Alexandre Dumas’ 1846 classic The Count of Monte Cristo: one is a normal, standard paperback, and one is a hardcover version that snaps shut if you don’t read the pages quickly enough. Both contain the same plot, characters, and daring adventure, but reading one involves significantly more suffering than the other.

Presumably, we wouldn’t think that reading the booby-trapped version of The Count of Monte Cristo is a more valuable experience than reading the normal version just because it involves more suffering. This case illustrates that mere suffering does not necessarily add anything of value to a challenging experience. Indeed, this kind of argument has been used to criticize some Soulslikes for inflicting suffering for suffering’s sake: there is a point at which a game can become unnecessarily unfair or difficult such that the struggle no longer adds anything meaningful to the overall experience.

We can, then, account for the experiences of those who enjoy Soulslikes and other very difficult games by acknowledging that beating them constitutes a significant achievement, while also recognizing that suffering by itself does not make beating such games more meaningful achievements. At the same time, if part of the value of playing these games does come from a shared experience of struggle, then we might still want developers to include options to allow more players to struggle together.