The Remote of Morty and the Ring of Gyges
The latest episode of the sci-fi comedy Rick and Morty presented a variation on an idea previously seen in Groundhog Day, among other stories. In it, Rick invents for Morty a remote that allows him to “save” his life at a certain point, try out different experiences, and “load” the save to return to the save point with no consequences. In this piece, I hope to explore what it means for consequences to matter morally and whether we should be thinking in terms of ultimate consequences at all. Before that, however, let us explore further how exactly this remote works.
The remote is meant to mimic the way many video games work, where one is able to save and, if one fails or dies, “load” that save to return to the save point. In a video game, a “save state” is a file that contains information about the save point and amounts to a record of the values of the different variables changeable by playing the game. It is conceivable that one could record the “save state” of the actual universe, since the state of the universe is determined by the variable excitations of certain “fields,” like the electromagnetic field or the Higgs-induced mass field, as well as by the distribution of those excitations in the fabric of spacetime. While practically impossible, it is imaginable that someone could record the values of all of these and so generate a save state of the universe at a given time. Indeed, if the simulation argument is true, something like this would be the case.
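To make the analogy concrete, here is a minimal sketch of the save/load mechanic in Python, assuming a toy game whose entire state is a handful of variables; the names (GameState, save, load) are illustrative, not drawn from any real game engine:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class GameState:
    """The complete state of a (very small) game world."""
    health: int
    position: tuple
    inventory: tuple

def save(state: GameState) -> GameState:
    # A save state is just a snapshot of every variable's current value.
    # Because the dataclass is frozen, the snapshot cannot be mutated later.
    return state

def load(snapshot: GameState) -> GameState:
    # Loading discards whatever has happened since and resumes from the snapshot.
    return snapshot

# Play, save, take risks, then reload as if nothing happened.
start = GameState(health=100, position=(0, 0), inventory=("sword",))
checkpoint = save(start)
after_fight = replace(start, health=5, position=(3, 7))  # things went badly
restored = load(checkpoint)
assert restored == start  # the intervening "timeline" leaves no trace
```

The remote in the episode works the same way, except that the "variables" being snapshotted are the fields and particles of the whole universe, and the one thing excluded from the reload is the user's own memory.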
So, suppose you, mortal and small as you are, possessed a remote that allowed you to contact the Simulators and signal to them to load a previous save state (excepting, presumably, the state of your mind, as otherwise you would not remember your experiences between saving and reloading, rendering the remote useless). The moral dimension of this scenario comes with this question: would you continue to act in accordance with virtue if you knew your actions had “no consequences” beyond how they affected your mind? If you would, why? People are already comfortable with what they call “victimless” crimes. Doing wrong before reloading might be the ultimate victimless crime.
The reader of Plato cannot help but be reminded of the story of the Ring of Gyges by this scenario. In The Republic, Plato presents, through the character Glaucon, the story of a man who finds a ring, the so-called “Ring of Gyges,” which allows the wearer to become invisible. With the power of the ring, the man, a shepherd, rapes the queen of the land and kills the king, taking his place (in fact this man is supposed to be the ancestor of Gyges, a historical king of Lydia). Glaucon then asks Socrates to imagine two such rings, one placed on a just man, the other on an unjust man, and to consider whether their actions would differ. Glaucon indicates that not only would they almost certainly act the same, but that if the just man refrained from unjust actions he would actually be foolish for doing so, while the man who acts unjustly would be happier.
One response to Glaucon’s argument is that the unjust man would not be happy because people generally feel empathetic pain when they hurt others, and feel guilt afterward for acting unjustly. This pain and guilt would mean the just man would end up happier, though he would lack the material comforts the unjust man might obtain. However, this response is not as helpful with the remote scenario, at least at first glance.
Consider the person who gleefully begins to use the remote and does all sorts of horrible things to people, just for fun or out of curiosity. Why would they feel guilt? Upon reloading a save, none of those people they hurt would feel hurt or even remember the experience. In some sense, those minds—the ones that experienced the harm induced by the remote user—do not exist. So the user might feel empathetic pain while they commit atrocities, or before they reload the save, but afterward it is not obvious these feelings would remain.
So suppose the ancestor of Gyges found this remote—instead of the ring—and did as he did, raping the queen, killing the king, and taking over rulership of the land. Our intuition is that those actions are wrong. But, once the shepherd reloads his save and becomes a shepherd once again, do those actions remain wrong? In other words, suppose the shepherd told his friend about what he had done and the friend believed him. Would the friend judge the shepherd as a bad person?
Most of us likely believe that something immoral is taking place, but it will prove particularly difficult to justify this intuition. A natural response to this question, for example, is to say “Of course! Anyone who is capable of something so horrible must be a bad person.” However, as we have learned from the Holocaust, ordinary people can tolerate or aid in horrible actions. Some of those whom Americans often consider moral exemplars, the Founding Fathers, owned slaves. While those who perpetrated these harms did actually do something wrong, it seems fair to say that we are not so different from them that we would be incapable of acting likewise, in the right (or rather “wrong”) circumstances. We are all capable of great evil, it seems, but we rarely judge each other merely on the basis of what we think others are capable of. We judge each other for actual harms we perpetrate. On some definition of “actual,” those who are harmed by someone who uses the remote before they reload a save are not really “actual.”
“But,” you may retort, “while you’re right that we don’t judge people for merely being capable of horrible actions, we do judge them for ‘following through,’ so to speak. Isn’t committing the action, even if it gets undone by reloading, still morally blameworthy?” And you might be right. But we also usually require that someone know that what they are doing is horrible. A person with an intellectual disability who assaults someone in anger is not usually thought responsible for their actions in the same way someone capable of understanding the harms of their actions would be. Supposing that the shepherd sees nothing wrong with his actions, given that they have no permanent consequences, he does not seem to be doing wrong knowingly. He might recognize that, in other circumstances, his actions would be wrong. A soldier does not knowingly murder as does the serial killer, since the soldier thinks there is a justification for his actions while the serial killer does not. Likewise, the shepherd thinks that under these circumstances no permanent harm is wrought, and so he believes that he does no wrong.
The critical flaw with the shepherd seems to be his obsession with consequences as the only morally relevant criterion. More specifically, there is a problem with his judging actions only by their ultimate consequences. Suppose the shepherd did as he did, with the rape and the murder, and never reloaded his save. In this case, the shepherd clearly does wrong and the existence of the remote is irrelevant; it is as though it never existed. But in any of the cases where he does reload the save, any actions he takes to hurt other people (if they are indeed wrong) will be wrong in spite of the fact that his victims will not remember experiencing this parallel-reality harm. These actions will be wrong even if the people he wrongs never even exist after he reloads the save—say, if those people were born, lived, and died all between the time he saved and the time he reloaded.
Ultimately, if our current understanding of physics is correct, the stars will all be swallowed by black holes, those black holes will eventually evaporate, and the whole universe will be a homogeneous soup of photons. No matter what course of action we take, this will be the result. It is a natural consequence of the second law of thermodynamics: entropy must always increase. Not only are our lives temporary due to death, but the consequences of our lives, of all the lives of all the people who will ever live, are temporary, ending in this same final result. In a sense, we are in a similar position to all those who make up the shepherd’s alternate reality; we will all eventually be erased.
And, yet, we cannot help but believe our actions are meaningful and that it matters that people act in accordance with virtue—even in these outlandish remote-user scenarios. That we are temporary does not mean that harms perpetrated against us are insignificant. But, if this is true, then the suffering of the child who grew up without her father, a child who herself “never existed,” at least in terms of having any impact on our final reality, really matters too. The alternative is the denial that any of our actions have moral significance, given that the fate of the universe is the same regardless.
This remote is fantastical, but, like the Ring of Gyges, it provokes responses that make clear some foundational moral principles. The story of the Ring of Gyges solidifies our belief that one ought to do right not because the law forces you to do so, but because you simply ought to do right. Various explanations for this conviction have been given. One common explanation is that doing wrong harms the doer. In a similar vein, the German philosopher Immanuel Kant believed that harming animals was not wrong in itself but still said that people should not do it, since, by becoming comfortable harming animals, people might become more comfortable hurting humans. But, if you’re skeptical of these sorts of arguments (perhaps because they seem too doer-centric) and still think harming people while using the remote is wrong, then we are left to conclude that what is right or wrong is not so in virtue of ultimate consequences, but because doing right or wrong benefits or harms conscious people, whether they exist for a day or a lifetime, whether the actions they take impact humanity for millennia or not at all.