Technology, World Affairs

Computer Simulations and the Ethics of Predicting Human Behavior

By Kenneth Boyd
21 Jan 2019

In an episode of the British sketch comedy series That Mitchell and Webb Look, a minister of finance is sitting across from two aides, who are expressing their frustration at how to deal with a recent recession. They have run a number of scenarios through a computer simulation: increasing or decreasing the value-added tax, lowering or raising interest rates, and every combination thereof all fail to produce any positive result. The minister then suggests adding a new variable to the simulation: “Have you tried ‘kill all the poor’?” At his behest, the aides run the simulation, which shows that this wouldn’t have any positive result either. The minister insists that he merely wanted to see what the computer would say, as an intellectual exercise, and would not have followed its advice even if the results had been different.

Although this example is clearly fictitious, computer simulations that model human behavior have become a reality, and they bring with them a number of ethical problems. For instance, a recent article published at The Atlantic reports the results of the Modeling Religion Project, which addresses questions about “the most compelling features of religion” by “turning to an unconventional source: computer modeling and simulation.” According to the project, some of these models “examine processes of group formation, religious leadership, extremism and violence, terror management, ritual patterns, and much more.” One such model, called MERV, simulates “mutually escalating religious violence,” while another, called NAHUM, models “terror management theory.” Indeed, one recent publication from the project, titled “Can we predict religious extremism?”, offers a tentative answer of “yes.”

These and other models have been used to test out various policies in an artificial environment. For example, the Modeling Religion in Norway project is currently modeling policy decisions concerning the immigration of refugees into Norway: “Governments and organizations seek policies that will encourage cohesion over conflict,” the project outline states, “but it’s hard to know what ideas will lead to harmony and tolerance, facilitating integration between local and immigrant communities. Problems like these need a road-map that can point us towards a better future, and tools for considering all of the possible outcomes.” One study suggested that, because Norwegians have a strong social safety net, religiosity is expected to continue to decrease in Norway: one factor that predicts higher degrees of religiosity is a feeling of “existential anxiety” (to put it bluntly, the researchers suggest that the less worried one is about dying, the less religious one will tend to be).
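To make the logic of that prediction concrete, here is a minimal toy sketch of how an agent-based model of this dynamic might be structured. To be clear, this is not the Modeling Religion in Norway project’s actual model: the agents, parameters, and update rule below are invented purely to illustrate the assumed chain from a stronger safety net, to lower existential anxiety, to lower religiosity.

```python
import random

def run_simulation(safety_net: float, n_agents: int = 1000,
                   n_steps: int = 50, seed: int = 0) -> float:
    """Return mean religiosity after n_steps for a given safety-net strength.

    safety_net ranges from 0.0 (no safety net) to 1.0 (very strong safety net).
    """
    rng = random.Random(seed)
    # Each agent starts with a random religiosity level in [0, 1].
    religiosity = [rng.random() for _ in range(n_agents)]
    for _ in range(n_steps):
        for i in range(n_agents):
            # Hypothetical assumption: existential anxiety falls as the
            # safety net strengthens, with some individual-level noise.
            anxiety = min(1.0, max(0.0, (1.0 - safety_net) + rng.gauss(0.0, 0.1)))
            # Hypothetical update rule: each agent's religiosity drifts
            # slowly toward its current level of existential anxiety.
            religiosity[i] += 0.1 * (anxiety - religiosity[i])
    return sum(religiosity) / n_agents

if __name__ == "__main__":
    for net in (0.2, 0.5, 0.9):
        print(f"safety net {net:.1f} -> mean religiosity {run_simulation(net):.2f}")
```

Run with increasing safety-net strengths, this toy model settles at lower average religiosity, which is the shape of the trend the researchers describe; the real models involve far more variables, empirical calibration, and validation.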

While such models might be interesting as an intellectual exercise, there is a host of ethical concerns when it comes to relying on them to influence policy decisions. First and foremost, there is the concern of how accurate we should expect such models to be. Human behavior is complex, and the number of variables that influence it is immense, so it seems nearly impossible for such models to make perfectly accurate predictions. Of course, these models do not purport to tell the future, and so they could still be useful for predicting broad trends and changes, and in that regard may still help guide policy decisions.

Perhaps an even more significant problem, however, arises when the example from comedic fiction becomes disturbingly close to reality. As The Atlantic reports, Wesley Wildman, one of the directors of the Modeling Religion Project, developed a model which suggested that the best course of action when dealing with extremist religious groups with charismatic leaders was to assassinate said leader. Wildman was, understandably, troubled by this result, stating that he felt “deeply uncomfortable that one of my models accidentally produced a criterion for killing religious leaders.” Wildman also stated that, according to a different model, if one wanted to decrease secularization in a society, one could do so by “triggering some ecological disaster,” a thought that he reports “keeps me up at night.”

Some of the results produced by these models clearly prescribe immoral actions: there seem to be very few, if any, situations that would justify triggering an ecological disaster, and assassination, if it should ever be an option, ought to be a very last resort, not the first course of action one considers. Such simulations may identify the courses of action that are most efficient, but that is hardly to say that they are the most morally responsible. As Wildman himself noted, the results of these simulations are only potentially useful once one has taken into consideration all of the relevant ethical factors.

One might worry about whether these kinds of simulations should be performed at all. After all, if a model predicted that an immoral course of action would result in a desired benefit, this information might be used to try to justify performing that action. As The Atlantic reports:

[Wildman] added that other groups, like Cambridge Analytica, are doing this kind of computational work, too. And various bad actors will do it without transparency or public accountability. ‘It’s going to be done. So not doing it is not the answer.’ Instead, he and Wildman believe the answer is to do the work with transparency and simultaneously speak out about the ethical danger inherent in it.

If one is indeed worried about the ethical ramifications of such models, this kind of reasoning is unlikely to provide much comfort: even if it is a fact that “if I don’t do it, someone else will,” that does not absolve one of moral responsibility (since one, in fact, did it!). It is also difficult to see what being transparent about, and openly discussing, the ethical dangers of a simulation’s proposed course of action would do to mitigate the damage.

On the other hand, we might not think that there is anything necessarily wrong with merely running simulations: simulating ecological disasters is a far cry from the real thing, and we might think that there’s nothing inherently unethical in merely gathering information. Wildman certainly does seem right about one thing: regardless of whether we think that these kinds of models are useful, it is clear that they cannot be relied upon responsibly without any consideration of the moral ramifications of their results.

Ken Boyd holds a PhD in philosophy from the University of Toronto. His philosophical work concerns the ways that we can best make sure that we learn from one another, and what goes wrong when we don’t. You can read more about his work at kennethboyd.wordpress.com