
‘The Rehearsal’, Manipulation and Spontaneity


The recent HBO series The Rehearsal rests on a common concern: having to navigate situations without experience, where mistakes can significantly alter your life and the lives of those around you. Spearheaded by writer, director, producer, and performer Nathan Fielder, the program offers people an opportunity to “rehearse” potentially high-stakes situations by repeatedly running through a simulation with actors. The episodes to air so far involve a bar trivia fanatic confessing to a friend that he previously lied about having a master’s degree, a woman practicing raising a child before deciding whether to become a mother, and a man hoping to convince his brother to let him access an inheritance left by their grandfather.

The show derives its humor, in part, from the lengths Fielder and his crew go to in the “rehearsals.” In the first episode, his team builds a fully furnished, staffed, and patroned 1-to-1 replica of the bar in which the confession will take place, complete with live trivia. Fielder hires an actor to play the part of the confessor’s friend; the actor then arranges a meeting with the real friend to better understand her personality, speech, and mannerisms, in addition to gathering information about her from a blog she runs.

To simulate motherhood, the team hires many child actors to play the adopted son. However, labor laws prevent a child actor from working more than four hours in a single day and limit the number of days a child can work each week. So Fielder and his team must regularly swap out the actor playing the child, but do so covertly to maintain the illusion of raising a single child. Additionally, each week the role passes to a group of older actors, so the woman experiences raising a child at each stage of development.

Why go to such lengths, aside from the entertainment value? In the first episode of the series, Fielder notes that in our regular lives whether we achieve happy outcomes is a matter of chance. The idea behind taking such painstaking efforts to make the “rehearsal” look and feel like reality is to leave the participants as prepared as possible, reducing the role fortune plays.

The appeal of performing these “rehearsals” seems to be motivated by a desire to control our interactions with others, in order to produce the best outcomes for all involved.

This is an incredibly common desire. Feeling like things are out of your control, especially those things which have a significant impact on the course of your life and the lives of those you care about, is anxiety-inducing. The fact that things may go horribly wrong for us, despite our best efforts and intentions, creates a feeling of powerlessness. Being wracked with anxiety and mired in feelings of powerlessness makes life difficult, to put it plainly.

But ought we to follow through on this desire to gain control over our interactions with others? Richard Gibson helpfully analyzes the desire for control in the context of gene drives here. In doing so, Gibson presents an argument from Michael Sandel. Sandel argues that our desire for control, particularly in the realm of genes, involves a lack of humility. When we try to control as much as we can, this implies that we think it is appropriate for us to control these things. Specifically, Sandel claims that when we view the world in this way we lose sight of what he calls life’s giftedness. Our talents, skills, and abilities are given to us in the same way that a friend might give us a present. Much as one would think it inappropriate to alter a friend’s gift, perhaps trying to take total control of our lives is similarly inappropriate.

However, the real moral issues behind our desires for control become clear only when we consider that “rehearsing” involves other people.

For instance, the bar trivia fanatic is not just aiming to limit the fallout he experiences as a result of his confession. Instead, he is afraid of how his friend will react, and thus tries to control her reaction.

Of course, one might see no problem here. After all, we regularly tailor our interactions with others to avoid offending them while getting what we want. This is simply part of life.

Yet “rehearsed” interactions seem importantly different. To see why, consider the following: Daniel Susser, Beate Roessler, and Helen Nissenbaum, in a discussion of manipulative practices on digital platforms, describe manipulation as “imposing a hidden or covert influence on another person’s decision-making.” Manipulative practices, they argue, involve trying to control a person in the same way that one might control a puppet, producing the desired behavior in the target by pulling on the target’s proverbial strings. Further, they argue that manipulative practices are more problematic the more targeted they are – manipulation that is tailor-made to match one person’s psychological profile seems more troubling than manipulation that trades on a widespread cognitive bias. Compare an ad for beer on TV the week before the Super Bowl that shows people excitedly watching a football game with the same ad appearing in the social media feeds of sports fans after they make posts which suggest that they are feeling sad.

Although not perfectly analogous, there are important similarities between manipulation and “rehearsal.” We can see this with the trivia fanatic. In some cases, the “rehearsal” must be covert; if the fanatic’s friend knew he spent hours “rehearsing” their conversation, this would surely undermine his efforts and likely cause great offense.

A “rehearsal” may involve efforts to control how others respond to the conversation. One practices pulling different strings during the conversation to see how that changes the final outcome.

Finally, some “rehearsals” are targeted; the actor in the fanatic scenario puts in significant effort to mimic the friend as closely as possible. Surely, the actor cannot perfectly capture the psychological profile of the target. Nonetheless, this imperfection does not seem to change the moral picture. Thus, at least some “rehearsals” appear morally problematic for the same reason manipulation is worrisome.

Yet other “rehearsals” may lack these features. The rehearsal of parenthood, while hilarious in its absurdity, does not need to be covert, involve an effort to guarantee particular outcomes, or target a specific individual. One’s child will certainly have a different psychological profile than the child actor and, no matter how skilled the actors, their performances will not be indistinguishable from the real thing. Thus, “rehearsals” that aim to try out a particular role, like parenthood, seem to have a different moral character than those that aim to make another person act in a desired way.

There is, however, one thing which may be universally problematic about “rehearsals.” During “rehearsals” of a conversation, Fielder stands by, taking notes and turning the conversation into an elaborate decision tree. This seems to turn the conversation into a sort of game – one practices it, determines cause and effect relationships between particular conversational choices and interlocutor responses, then pushes the proverbial reset button if the conversation takes an undesired turn.

As a result, it seems that the ultimate goal of a “rehearsal” is to eliminate spontaneity in the real conversation.

But part of what makes our experiences with others worthwhile is when the unexpected occurs. The price we pay for spontaneity is the anxiety of uncertainty. Our desires for control, if satisfied, may leave our interactions with others feeling impoverished and hollow.

I cannot say with perfect certainty what the goals of The Rehearsal are. The show offers a hilarious but often uncomfortable glimpse into what people are willing to do to gain a feeling of control. In doing so, it offers us the opportunity to reflect on what we should hope to take away from our interactions with others, and whether gaining control is worth what we might lose. If this were Fielder’s purpose with The Rehearsal, then it is a rousing success.

Computer Simulations and the Ethics of Predicting Human Behavior


In an episode of the British sketch comedy series That Mitchell and Webb Look, a minister of finance sits across from two aides, who are expressing their frustration at how to deal with a recent recession. They have run a number of scenarios through a computer simulation: increasing or decreasing the value-added tax, lowering or raising interest rates, and various combinations thereof all fail to produce any positive result. The minister then suggests adding a new variable to the simulation: “Have you tried ‘kill all the poor’?” At his behest, the aides run the simulation, which shows that it wouldn’t have any positive result either. The minister insists that he merely wanted to see what the computer would say, as an intellectual exercise, and would not have followed its advice even if the results had been different.

Although this example is clearly fictitious, computer simulations that model human behavior have become a reality, and they bring with them a number of ethical problems. For instance, a recent article in The Atlantic reports results of the Modeling Religion Project, which addresses questions about “the most compelling features of religion” by “turning to an unconventional source: computer modeling and simulation.” According to the project, some of these models “examine processes of group formation, religious leadership, extremism and violence, terror management, ritual patterns, and much more.” One such model, called MERV, models “mutually escalating religious violence,” while another, called NAHUM, models “terror management theory.” Indeed, one recent publication from the project, “Can we predict religious extremism?”, provides a tentative answer of “yes.”

These and other models have been used to test out various policies in an artificial environment. For example, the Modeling Religion in Norway project is currently modeling policy decisions concerning the immigration of refugees into Norway: “Governments and organizations seek policies that will encourage cohesion over conflict,” the project outline states, “but it’s hard to know what ideas will lead to harmony and tolerance, facilitating integration between local and immigrant communities. Problems like these need a road-map that can point us towards a better future, and tools for considering all of the possible outcomes.” One study suggested that, since Norwegians have a strong social safety net, religiosity is expected to continue to decrease in Norway, since one factor that predicts higher degrees of religiosity is a feeling of “existential anxiety” (to put it bluntly, the researchers suggest that the less worried one is about dying, the less religious one will tend to be).

While such models might be interesting as an intellectual exercise, there are a host of ethical concerns when it comes to relying on them to influence policy decisions. First and foremost, there is the concern about how accurate we should expect such models to be. Human behavior is complex, and the number of variables that influence it is immense, so it seems nearly impossible for such models to make perfectly accurate predictions. Of course, these models do not purport to be able to tell the future, and so they could still at least potentially be useful for predicting broad trends and changes, and in that regard may still be useful in guiding policy decisions.

Perhaps an even more significant problem, however, arises when the example of comedic fiction becomes disturbingly close to reality. As The Atlantic reports, Wesley Wildman, one of the directors of the Modeling Religion Project, developed a model that suggested that the best course of action when dealing with an extremist religious group led by a charismatic figure was to assassinate that leader. Wildman was, understandably, troubled by the result, stating that he felt “deeply uncomfortable that one of my models accidentally produced a criterion for killing religious leaders.” Wildman also stated that, according to a different model, if one wanted to decrease secularization in a society one could do so by “triggering some ecological disaster,” a thought that he reports “keeps me up at night.”

Some of the results produced by these models clearly prescribe immoral actions: there seem to be very few, if any, situations that would justify the triggering of an ecological disaster, and it seems that assassination, if it should ever be an option, should be a very last resort, not the first course of action one considers. Such simulations may model courses of action that are most efficient, but that is hardly to say that they are the ones that are most morally responsible. As Wildman himself noted, the results of these simulations are only potentially useful when one has taken into consideration all of the relevant ethical factors.

One might worry about whether these kinds of simulation should be performed at all. After all, if it turned out that the model predicted that an immoral course of action should result in a desired benefit, this information might be used to attempt to justify performing those actions. As The Atlantic reports:

[Wildman] added that other groups, like Cambridge Analytica, are doing this kind of computational work, too. And various bad actors will do it without transparency or public accountability. ‘It’s going to be done. So not doing it is not the answer.’ Instead, he and Wildman believe the answer is to do the work with transparency and simultaneously speak out about the ethical danger inherent in it.

If one is indeed worried about the ethical ramifications of such models, this kind of reasoning is unlikely to provide much comfort: even if it is a fact that “if I don’t do it, someone else will,” that does not absolve one of moral responsibility (since one, in fact, did it!). It is also difficult to see what being transparent and speaking out about the ethical dangers of a simulation’s proposed course of action would do to mitigate the damage.

On the other hand, we might not think that there is anything necessarily wrong with merely running simulations: simulating ecological disasters is a far cry from the real thing, and we might think that there’s nothing inherently unethical in merely gathering information. Wildman certainly does seem right about one thing: regardless of whether we think that these kinds of models are useful, it is clear that they cannot be relied upon responsibly without any consideration of the moral ramifications of their results.