
What Reddit Can Teach Us About Moral Philosophy


Moral philosophy is enjoying a moment in popular culture. Shows like The Good Place have made ethics accessible to broad audiences, and publishing houses churn out books on the philosophical underpinnings of franchises like Star Wars and Game of Thrones. One example of this trend can be found on Reddit, a social media site that hosts myriad topic-based forums. In particular, the subreddit “Am I the Asshole?” (AITA) exemplifies pop culture’s breezy and accessible approach to moral philosophy, while also shedding light on how and why we engage with ethical questions.

This extremely popular subreddit boasts over two million subscribers, and claims to offer “A catharsis for the frustrated moral philosopher in all of us, and a place to finally find out if you were wrong in an argument that’s been bothering you.” Reddit users post stories about relationship problems, family squabbles, and workplace tension. Any conflict will do, so long as it doesn’t involve physical violence and the original poster has some reason to believe they were in the wrong. Those who comment on these stories are required to pass judgment, expressed in the subreddit’s shorthand, followed by a brief explanation of their ruling. A person can be judged NTA (not the asshole), YTA (you’re the asshole), ESH (everyone sucks here, for situations where all parties did something indefensible), or NAH (no assholes here, for situations where no one is in the wrong). A bot eventually sifts through these comments, and the post is labeled with the most popular judgment.
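To make the mechanics concrete, here is a minimal Python sketch of the tallying logic described above: pull each comment’s judgment acronym and label the post with the most common one. The acronyms are real, but everything else is illustrative; the subreddit’s actual bot follows its own, more involved rules.

```python
import re
from collections import Counter

# The four judgments recognized by the subreddit.
JUDGMENT_PATTERN = re.compile(r"\b(NTA|YTA|ESH|NAH)\b")

def extract_judgment(comment):
    """Return the first judgment acronym in a comment, or None."""
    match = JUDGMENT_PATTERN.search(comment.upper())
    return match.group(1) if match else None

def label_post(comments):
    """Label a post with the most popular judgment among its comments."""
    tally = Counter(
        judgment for judgment in map(extract_judgment, comments)
        if judgment is not None
    )
    return tally.most_common(1)[0][0] if tally else None

comments = [
    "NTA, your neighbor was out of line.",
    "YTA. You could have just said no.",
    "NTA - boundaries matter.",
]
print(label_post(comments))  # -> NTA
```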

As the word “asshole” implies, this isn’t the place to rail against class oppression or the cruelties of fate. AITA focuses on everyday interpersonal drama, and it’s understood that being labeled YTA isn’t necessarily a judgment on the entirety of a person’s character. Every judgment is situational, though commenters may point out larger patterns of problematic thought or behavior if they emerge, and the subreddit operates under a shared understanding that many of us act immorally without malicious intent. Even in the somewhat rigid lexicon of judgment, there’s room for shades of gray, for the ambiguities of social life. As Tove Danovich writes in an article for The Ringer, “The scope of the problems on AITA, even when the judgment is a difficult one to make, is human, and therefore more manageable. They’re medium questions asked and answered by medium people who just want to be a little bit better.”

Even though the subreddit’s scope is somewhat limited, one aspect of AITA’s culture offers a window into the role narrative plays in shaping our sense of right and wrong. Scrolling through the front page, one very frequently encounters stories where the original poster was indisputably in the right: “Someone ran me over with their car, and as I went flying over their windshield, I accidentally dented their front hood. AITA?” So many users were annoyed by these posts that they started their own parody subreddit, “Am I the Angel,” where the saintly and oblivious tone of AITA posts is mocked. An ungenerous interpretation of these posts would be that some people just want a pat on the head. They aren’t actually looking for a moral judgment; they want to vent about a situation they already recognize as unfair. Alternatively, one could argue that we often lack perspective on our own lives, and what seems obviously wrong to an impartial third party may be less transparent from the inside. But one AITA user suggests a different interpretation. On a post from a man who no longer wanted to let his disabled neighbor park in their driveway, Reddit user boppitywop commented, “I think the majority of these posts are because people feel guilty, and they are looking to assuage their guilt. [The original poster] is not the asshole but they’ve made someone’s life a lot more inconvenient and doesn’t feel good about it. [This subreddit] serves the purpose of socially normalizing something that a person feels bad about.”

This comment reveals both the limits and the potential of AITA. Some situations are morally intractable, and require far more than interpersonal skills (or an understanding of one’s wrongdoing) to address effectively. But the commenter also correctly points out the social function of storytelling. AITA posts help users renegotiate the boundaries between right and wrong in a way that feels deeply communal. Norms are both established and questioned in this online space. The judgment system may feel open-and-shut, but reading through the comment section of popular posts reveals an ongoing dialogue with the moral philosophy of everyday life.

But in any narrative, language often betrays the biases or intentions of the teller. One ubiquitous trope you’ll notice if you fall down the AITA rabbit hole is what I would call the “sudden turn”: in the middle of an encounter, the antagonist of the story will begin to bawl, shriek, or throw a tantrum without clear provocation. The other person is portrayed as irrational or inscrutable, and one often feels the gap here where their perspective on the situation could fit. Commenters are often very perceptive about the original poster’s word choice, but the way the story is told inevitably colors our judgment of the encounter. This sense of messiness and instability accurately reflects how we experience conflict, and reminds us that all moral arguments, whether large or small, contain some speck of subjectivity.

It’s a simple truth that judging people we don’t know is fun, sometimes even addictive. The voyeuristic element of AITA is certainly worthy of critique, but at the same time, anonymity is crucial to the communal storytelling experience. In an era where few define themselves by a single ethical belief system, AITA helps readers wade through the mire of modern life, and testifies to a universal desire to understand what we owe to one another.

The Remote of Morty and the Ring of Gyges


A recent episode of the sci-fi comedy Rick and Morty presented a variation on an idea previously seen in Groundhog Day, among other stories. In it, Rick invents for Morty a remote that allows him to “save” his life at a certain point, try out different experiences, and “load” the save to return to the save point with no consequences. In this piece, I hope to explore what it means for consequences to matter morally, and whether we should be thinking in terms of ultimate consequences at all. Before that, however, let us explore how exactly this remote works.

The remote is meant to mimic the way many video games work, where one can save and, upon failing or dying, “load” the save to return to the save point. In a video game, a “save state” is a file that contains information about the save point: a record of the values of the different variables changeable through play. It is conceivable that one could record the “save state” of the actual universe, since the state of the universe is determined by the variable excitations of certain “fields,” like the electromagnetic field or the Higgs-induced mass field, as well as by the distribution of those excitations in the fabric of spacetime. While practically impossible, it is imaginable that someone could record the values of all of these and so generate a save state of the universe at a given time. Indeed, if the simulation argument is true, something like this would be the case.
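The idea is easier to see in miniature. Here is a minimal Python sketch, assuming the “universe” can be represented as a dictionary of variables: saving records every value, and loading restores them, erasing whatever happened in between.

```python
import copy

class SaveStateRemote:
    """Snapshot and restore a world's variables, like a video-game save state."""

    def __init__(self, world):
        self.world = world      # the mutable "universe": variable -> value
        self.snapshot = None

    def save(self):
        """Record the value of every variable at this moment."""
        self.snapshot = copy.deepcopy(self.world)

    def load(self):
        """Restore every variable to its saved value, discarding everything since."""
        if self.snapshot is not None:
            self.world.clear()
            self.world.update(copy.deepcopy(self.snapshot))

world = {"king": "alive", "queen": "safe", "shepherd": "tending sheep"}
remote = SaveStateRemote(world)
remote.save()
world["king"] = "dead"   # consequences accrue between save and load...
remote.load()            # ...and are erased on reload
print(world["king"])     # -> alive
```

Note that, as the scenario requires, the remote user’s own memory must be excluded from the snapshot; in this sketch, nothing outside `world` is restored, which plays that role.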

So, suppose you, mortal and small as you are, possessed a remote that allowed you to contact the Simulators and signal to them to load a previous save state (excepting, presumably, the state of your mind, as otherwise you would not remember your experiences between saving and reloading, rendering the remote useless). The moral dimension of this scenario comes with this question: would you continue to act in accordance with virtue if you knew your actions had “no consequences” beyond how they affected your mind? If you would, why? People are already comfortable with what they call “victimless” crimes. Doing wrong before reloading might be the ultimate victimless crime.

The reader of Plato cannot help but be reminded by this scenario of the story of the Ring of Gyges. In The Republic, Plato presents, through the character Glaucon, the story of a man who finds a ring, the so-called “Ring of Gyges,” which allows the wearer to become invisible. With the power of the ring, the man, a shepherd, rapes the queen of the land and kills the king, taking his place (in fact, this man is supposed to be the ancestor of Gyges, a historical king of Lydia). Glaucon then asks Socrates to imagine two such rings, one placed on a just man and another on an unjust man, and to consider whether their actions would differ. Glaucon suggests not only that the two would almost certainly act the same, but that if the just man refrained from unjust actions he would be foolish for doing so, while the man who acts unjustly would be happier.

One response to Glaucon’s argument is that the unjust man would not be happy because people generally feel empathetic pain when they hurt others, and feel guilt afterward for acting unjustly. This pain and guilt would mean the just man would end up happier, though he would lack the material comforts the unjust man might obtain. However, this response is not as helpful with the remote scenario, at least at first glance.

Consider the person who gleefully begins to use the remote and does all sorts of horrible things to people, just for fun or out of curiosity. Why would they feel guilt? Upon reloading a save, none of those people they hurt would feel hurt or even remember the experience. In some sense, those minds—the ones that experienced the harm induced by the remote user—do not exist. So the user might feel empathetic pain while they commit atrocities, or before they reload the save, but afterward it is not obvious these feelings would remain.

So suppose the ancestor of Gyges found this remote—instead of the ring—and did as he did, raping the queen, killing the king, and taking over rulership of the land. Our intuition is that those actions are wrong. But, once the shepherd reloads his save and becomes a shepherd once again, do those actions remain wrong? In other words, suppose the shepherd told his friend about what he had done and the friend believed him. Would the friend judge the shepherd as a bad person?

Most of us likely believe that something immoral is taking place, but it proves particularly difficult to justify this intuition. A natural response to this question, for example, is to say “Of course! Anyone who is capable of something so horrible must be a bad person.” However, as we have learned from the Holocaust, ordinary people can tolerate or aid in horrible actions. Some of those whom Americans often consider moral exemplars, the Founding Fathers, owned slaves. While those who perpetrated these harms did do something wrong, it seems fair to say that we are not so different from them that we would be incapable of acting likewise, in the right (or rather “wrong”) circumstances. We are all capable of great evil, it seems, but we rarely judge each other merely on the basis of what we think others are capable of. We judge each other for actual harms we perpetrate. And on some definition of “actual,” those who are harmed by someone who uses the remote before reloading a save are not really “actual.”

“But,” you may retort, “while you’re right that we don’t judge people for merely being capable of horrible actions, we do judge them for ‘following through,’ so to speak. Isn’t committing the action, even if it gets undone by reloading, still morally blameworthy?” And you might be right. But we also usually require that someone know that what they are doing is horrible before we hold them fully responsible. A person with an intellectual disability who assaults someone in anger is not usually thought responsible for their actions in the same way someone capable of understanding the harms of their actions would be. Supposing that the shepherd sees nothing wrong with his actions, given that they have no permanent consequences, he does not seem to be doing wrong knowingly. He might recognize that, in other circumstances, his actions would be wrong. A soldier does not knowingly murder as the serial killer does, since the soldier thinks there is a justification for his actions while the serial killer does not. Likewise, the shepherd thinks that under these circumstances no permanent harm is wrought, and so he believes that he does no wrong.

The critical flaw with the shepherd seems to be his obsession with consequences as the only morally relevant criterion. More specifically, there is a problem with his judging actions only by their ultimate consequences. Suppose the shepherd did as he did, with the rape and the murder, and never reloaded his save. In this case, the shepherd clearly does wrong, and the existence of the remote is irrelevant; it is as though it never existed. But in any of the cases where he does reload the save, any actions he takes to hurt other people (if they are indeed wrong) will be wrong in spite of the fact that his victims will not remember experiencing this parallel-reality harm. These actions will be wrong even if the people he wrongs never even exist after he reloads the save (say, if those people were born, lived, and died all between the time he saved and the time he reloaded).

Ultimately, if our current understanding of physics is correct, the stars will all be swallowed by black holes, those black holes will eventually evaporate, and the whole universe will become a homogeneous soup of photons. No matter what course of action we take, this will be the result. It is a natural consequence of the second law of thermodynamics: entropy must always increase. Not only are our lives temporary due to death, but the consequences of our lives, of all the lives of all people who will ever live, are temporary, ending in this same final result. In a sense, we are in a similar position to all those who make up the shepherd’s alternate reality; we will all eventually be erased.
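For readers who want the law stated precisely: in its simplest textbook form, the second law says that the total entropy $S$ of an isolated system (and the universe as a whole is the ultimate isolated system) never decreases:

$$\Delta S \geq 0$$

No intervention from within the system can make the total go down, which is why no course of action we take can avert the dissipation described above.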

And yet we cannot help but believe our actions are meaningful and that it matters that people act in accordance with virtue, even in these outlandish remote-user scenarios. That we are temporary does not mean that harms perpetrated against us are insignificant. But if this is true, then the suffering of a child in one of the shepherd’s erased timelines who grew up without her father, a child who really “never existed,” at least in terms of having any impact on our final reality, really matters too. The alternative is the denial that any of our actions have moral significance, given that the fate of the universe is the same regardless.

This remote is fantastical, but, like the Ring of Gyges, it provokes responses that bring some foundational moral principles into focus. The story of the Ring of Gyges solidifies our belief that one ought to do right not because the law forces one to, but because one simply ought to do right. Various explanations for this conviction have been given. One common explanation is that doing wrong harms the doer. In a similar vein, the German philosopher Immanuel Kant believed that harming animals was not wrong in itself, but still held that people should not do it, since, by becoming comfortable harming animals, people might become more comfortable hurting humans. But if you are skeptical of these sorts of arguments (perhaps because they seem too doer-centric) and still think harming people while using the remote is wrong, then we are left to conclude that what is right or wrong is not so in virtue of ultimate consequences, but because doing right or wrong benefits or harms conscious people, whether they exist for a day or a lifetime, and whether the actions they take impact humanity for millennia or not at all.

Computer Simulations and the Ethics of Predicting Human Behavior


In an episode of the British sketch comedy series That Mitchell and Webb Look, a minister of finance sits across from two aides, who are expressing their frustration at how to deal with a recent recession. They have run a number of scenarios through a computer simulation: increasing or decreasing the value-added tax, lowering or raising interest rates, and every combination thereof all fail to produce any positive result. The minister then suggests adding a new variable to the simulation: “Have you tried ‘kill all the poor’?” At his behest, the aides run the simulation, which shows that it wouldn’t have any positive result, either. The minister insists that he merely wanted to see what the computer would say, as an intellectual exercise, and would not have followed its advice even if the results had been different.

Although this example is clearly fictitious, computer simulations that model human behavior have become a reality, and they bring with them a number of ethical problems. For instance, a recent article published in The Atlantic reports the results of the Modeling Religion Project, a project which addresses questions about “the most compelling features of religion” by “turning to an unconventional source: computer modeling and simulation.” According to the project, some of these models “examine processes of group formation, religious leadership, extremism and violence, terror management, ritual patterns, and much more.” One such model, called MERV, models “mutually escalating religious violence,” while another, called NAHUM, models “terror management theory.” One recent publication coming out of the project, titled “Can we predict religious extremism?”, provides a tentative answer of “yes.”

These and other models have been used to test out various policies in an artificial environment. For example, the Modeling Religion in Norway project is currently modeling policy decisions concerning the immigration of refugees into Norway: “Governments and organizations seek policies that will encourage cohesion over conflict,” the project outline states, “but it’s hard to know what ideas will lead to harmony and tolerance, facilitating integration between local and immigrant communities. Problems like these need a road-map that can point us towards a better future, and tools for considering all of the possible outcomes.” One study suggested that, because Norwegians have a strong social safety net, religiosity is expected to continue to decrease in Norway: one factor that predicts higher degrees of religiosity is a feeling of “existential anxiety” (to put it bluntly, the researchers suggest that the less worried one is about dying, the less religious one will tend to be).
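To give a feel for how such simulations work in general, here is a toy agent-based sketch in Python. It is emphatically not the project’s actual model: the mechanism (a strong safety net damping existential anxiety, which in turn damps religiosity) follows the study’s stated logic, but every number and function here is invented for illustration.

```python
import random

def simulate_religiosity(safety_net, years=50, n_agents=1000, seed=0):
    """Toy model: each year, an agent's existential anxiety nudges their
    religiosity up or down; a strong safety net damps that anxiety."""
    rng = random.Random(seed)
    religiosity = [rng.random() for _ in range(n_agents)]  # start uniform in [0, 1]
    for _ in range(years):
        for i in range(n_agents):
            anxiety = rng.random() * (1.0 - safety_net)    # safety net damps anxiety
            drift = 0.05 * (anxiety - 0.25)                # high anxiety -> more religious
            religiosity[i] = min(1.0, max(0.0, religiosity[i] + drift))
    return sum(religiosity) / n_agents

# Compare a weak safety net with a strong, Norway-like one.
print(simulate_religiosity(safety_net=0.2))  # higher average religiosity
print(simulate_religiosity(safety_net=0.9))  # lower average religiosity
```

Even in a toy like this, the worry raised below is visible: the output depends entirely on the modeler’s assumptions (the drift rate, the anxiety threshold), and a policymaker reading only the printed numbers sees none of them.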

While such models might be interesting as an intellectual exercise, there are a host of ethical concerns when it comes to relying on them to influence policy decisions. First and foremost, there is the concern about how accurate we should expect such models to be. Human behavior is complex, and the number of variables that influence that behavior is immense, so it seems near impossible for such models to make perfectly accurate predictions. Of course, these models do not purport to tell the future, and so they could still at least potentially be useful for predicting broad trends and changes, and in that regard may still be useful in guiding policy decisions.

Perhaps an even more significant problem arises when the example from comedic fiction becomes disturbingly close to reality. As The Atlantic reports, Wesley Wildman, one of the directors of the Modeling Religion Project, reported having developed a model which suggested that the best course of action when dealing with an extremist religious group with a charismatic leader was to assassinate that leader. Wildman was, understandably, troubled by the result, stating that he felt “deeply uncomfortable that one of my models accidentally produced a criterion for killing religious leaders.” Wildman also stated that, according to a different model, if one wanted to decrease secularization in a society one could do so by “triggering some ecological disaster,” a thought that he reports “keeps me up at night.”

Some of the results produced by these models clearly prescribe immoral actions: there seem to be very few, if any, situations that would justify the triggering of an ecological disaster, and it seems that assassination, if it should ever be an option, should be a very last resort, not the first course of action one considers. Such simulations may model courses of action that are most efficient, but that is hardly to say that they are the ones that are most morally responsible. As Wildman himself noted, the results of these simulations are only potentially useful when one has taken into consideration all of the relevant ethical factors.

One might worry about whether these kinds of simulations should be performed at all. After all, if the model predicted that an immoral course of action would produce a desired benefit, this information might be used to try to justify performing that action. As The Atlantic reports:

[Wildman’s colleague] added that other groups, like Cambridge Analytica, are doing this kind of computational work, too. And various bad actors will do it without transparency or public accountability. ‘It’s going to be done. So not doing it is not the answer.’ Instead, he and Wildman believe the answer is to do the work with transparency and simultaneously speak out about the ethical danger inherent in it.

If one is indeed worried about the ethical ramifications of such models, this kind of reasoning is unlikely to provide much comfort: even if it is a fact that “if I don’t do it, someone else will,” that does not absolve one of moral responsibility (since one, in fact, did it!). It is also difficult to see what being transparent and speaking out about the ethical danger of a simulation’s proposed course of action would do to mitigate the damage.

On the other hand, we might not think that there is anything necessarily wrong with merely running simulations: simulating ecological disasters is a far cry from the real thing, and we might think that there’s nothing inherently unethical in merely gathering information. Wildman certainly does seem right about one thing: regardless of whether we think that these kinds of models are useful, it is clear that they cannot be relied upon responsibly without any consideration of the moral ramifications of their results.