
The Ethics of AI Behavior Manipulation


Recently, news came from California that police were playing loud, copyrighted music while responding to criminal activity. While investigating a stolen vehicle report, officers were filmed blasting Disney songs like those from the movie Toy Story. The police were doing this to make it easier to get footage of their activities taken down: if the footage contains copyrighted music, the reasoning goes, a streaming service like YouTube will flag it and remove it.

A case like this presents several ethical problems, but in particular it highlights an issue of how AI can change the way that people behave.

The police were taking advantage of what they knew about the algorithm to manipulate events in their favor. This raises obvious questions: Does the way AI affects our behavior present unique ethical concerns? Should we be worried about how our behavior is adapting to suit an algorithm? When is it wrong to use one’s understanding of an algorithm for one’s own benefit? And, if there are ethical concerns about algorithms having this effect on our behavior, should they be designed in ways that encourage us to act ethically?

It is already well-known that algorithms can affect your behavior by creating addictive impulses. Not long ago, I noted how the attention economy incentivizes companies to make their recommendation algorithms as addictive as possible, but there are other ways in which AI is altering our behavior. Plastic surgeons, for example, have noted a rise in what is being called “Snapchat dysmorphia,” or patients who desperately want to look like their Snapchat filter. The rise of deepfakes is also encouraging manipulation and deception, making it more difficult to tell reality apart from fiction. Recently, philosophers John Symons and Ramón Alvarado have even argued that such technologies undermine our capacity as knowers and diminish our epistemic standing.

Algorithms can also manipulate people’s behavior by creating measurable proxies for otherwise immeasurable concepts. Once the proxy is known, people begin to strategically manipulate the algorithm to their advantage. It’s like knowing in advance what a test will include and then simply teaching to the test. YouTubers chase whatever feature, function, length, or title they believe the algorithm will pick up on, hoping to turn their video into a viral hit. It’s been reported that music artists like Halsey are frustrated by record labels who want a “fake viral moment on TikTok” before they will release a song.

This is problematic not only because viral TikTok success may be a poor proxy for musical success, but also because the features the algorithm looks for in a video may have nothing to do with musical success at all.

This looks like a clear example of someone adapting their behavior to suit an algorithm for bad reasons. On top of that, the lack of transparency creates a market for those who know more about the algorithm and can manipulate it to take advantage of those who do not.

Should greater attention be paid to how algorithms generated by AI affect the way we behave? Some may argue that these kinds of cases are nothing new. The rise of the internet and new technologies may have changed the means of promotion, but trying anything to drum up publicity is something artists and labels have always done. Arguments about airbrushing and body image also predate the debate about deepfakes. However, if there is one aspect of this issue that appears unique, it is the scale at which algorithms can operate – a scale which dramatically affects their ability to alter the behavior of great swaths of people. As philosopher Thomas Christiano notes (and many others have echoed), “the distinctive character of algorithmic communications is the sheer scale of the data.”

If this is true, and one of the most distinctive aspects of AI’s ability to change our behavior is the scale at which it is capable of operating, do we have an obligation to design these algorithms so as to make people act more ethically?

For example, in the book The Ethical Algorithm, the authors present the case of an app that gives directions. When the algorithm is deciding which directions to give you, it could try to ensure that your route is the most efficient one for you. However, by doing the same for everyone it could create a great deal of congestion on some roads while other roads go under-used, making for an inefficient use of infrastructure. Alternatively, the algorithm could be designed to coordinate traffic, producing a more efficient overall solution, but at the cost of giving you personally less efficient directions. Should an app cater to your self-interest or to the city’s overall best interest?
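To make the trade-off concrete, here is a minimal sketch in Python of the kind of dilemma the authors describe. The two-road network and its travel-time functions are hypothetical illustrations (a version of Pigou’s classic congestion example), not an implementation of any real navigation app: if each driver independently takes whichever road is fastest for them, everyone crowds onto the short road; a coordinated split lowers the average travel time, but some drivers end up with slower directions than they would have chosen for themselves.

```python
# A toy comparison of "selfish" routing vs. a coordinated assignment.
# Hypothetical network: road A is a long highway that always takes 1 hour;
# road B is a short road whose travel time grows with the share of drivers on it.

N = 1000  # number of candidate splits to search over


def road_a_time(load_fraction):
    # Long highway: one hour regardless of how many drivers use it.
    return 1.0


def road_b_time(load_fraction):
    # Short road that congests: travel time equals the fraction of drivers on it.
    return load_fraction


def average_time(fraction_on_b):
    """Average travel time when `fraction_on_b` of drivers use the short road."""
    return (1 - fraction_on_b) * road_a_time(1 - fraction_on_b) + \
        fraction_on_b * road_b_time(fraction_on_b)


# Selfish equilibrium: the short road is never slower than the highway
# (its time tops out at 1.0), so every self-interested driver takes it.
selfish = average_time(1.0)

# Coordinated optimum: search over possible splits for the lowest average time.
splits = [i / N for i in range(N + 1)]
best_split = min(splits, key=average_time)
coordinated = average_time(best_split)

print(f"Everyone routed selfishly: average travel time = {selfish:.2f} h")
print(f"Coordinated split ({best_split:.0%} on short road): "
      f"average travel time = {coordinated:.2f} h")
```

In this toy network, selfish routing leaves every driver with an hour-long trip, while the coordinated split brings the average down to 45 minutes, but only by sending half the drivers onto the slower highway.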

These issues have already led to real-world changes in behavior as people attempt to cheat the algorithm to their benefit. In 2015, there were reports of people submitting false reports of traffic accidents and traffic jams to the app Waze in order to deliberately re-route traffic elsewhere. Cases like this highlight the ethical issues involved. An algorithm can systematically change behavior, and just like trying to ease congestion, it can attempt to achieve better overall outcomes for a group without everyone having to deliberately coordinate. However, anyone who becomes aware of the system of rules and how they operate will have the opportunity to try to leverage those rules to their advantage, just like the YouTube algorithm expert who knows how to make your next video go viral.

This in turn raises issues about transparency and trust. The fact that it is known that algorithms can be biased and discriminatory weakens the trust that people may have in an algorithm. To resolve this, the natural urge is to make algorithms more transparent. If the algorithm is transparent, then everyone can understand how it works, what it is looking for, and why certain things get recommended. It also prevents those who would otherwise understand or reverse-engineer the algorithm from leveraging insider knowledge for their own benefit. However, as Andrew Burt of the Harvard Business Review notes, this introduces a paradox.

The more transparent you make the algorithm, the greater the chances that it can be manipulated and the larger the security risks that you incur.

This trade-off between security, accountability, and manipulation will only become more important as algorithms are used more widely and begin to affect more people’s behavior. If there is going to be public trust, some outline of an algorithm’s specific purposes and intentions, as they pertain to its potential large-scale effect on human behavior, should be a matter of record. Particularly when we look to cases like climate change or even the pandemic, we see the benefit of coordinated action, and there is clearly a growing need to address whether algorithms should be designed to support these collective efforts. There also needs to be greater focus on how proxies are selected when measuring something, and on whether those approximations continue to make sense once it is known that there are deliberate efforts to manipulate them and turn them to an individual’s advantage.

Computer Simulations and the Ethics of Predicting Human Behavior


In an episode of the British sketch comedy series That Mitchell and Webb Look, a minister of finance is sitting across from two aides, who are expressing their frustration at how to deal with a recent recession. They have run a number of scenarios through a computer simulation: increasing or decreasing the value-added tax, lowering or raising interest rates, or any combination thereof fails to produce any positive result. The minister then suggests adding a new variable to the simulation: “Have you tried ‘kill all the poor’?” At his behest, the aides run the simulation, which shows that it wouldn’t have any positive result either. The minister insists that he merely wanted to see what the computer would say, as an intellectual exercise, and would not have followed its advice even if the results had been different.

Although this example is clearly fictitious, computer simulations that model human behavior have become a reality, and they bring with them a number of ethical problems. For instance, a recent article published in The Atlantic reports the results of the Modeling Religion Project, a project which addresses questions about “the most compelling features of religion” by “turning to an unconventional source: computer modeling and simulation.” According to the project, some of these models “examine processes of group formation, religious leadership, extremism and violence, terror management, ritual patterns, and much more.” One such model, called MERV, models “mutually escalating religious violence,” while another, called NAHUM, models “terror management theory.” Indeed, one recent publication coming out of the project, titled “Can we predict religious extremism?”, provides a tentative answer of “yes.”

These and other models have been used to test out various policies in an artificial environment. For example, the Modeling Religion in Norway project is currently modeling policy decisions concerning the immigration of refugees into Norway: “Governments and organizations seek policies that will encourage cohesion over conflict,” the project outline states, “but it’s hard to know what ideas will lead to harmony and tolerance, facilitating integration between local and immigrant communities. Problems like these need a road-map that can point us towards a better future, and tools for considering all of the possible outcomes.” One study suggested that religiosity is expected to continue decreasing in Norway because of the country’s strong social safety net: one factor that predicts higher degrees of religiosity is a feeling of “existential anxiety” (to put it bluntly, the researchers suggest that the less worried one is about dying, the less religious one will tend to be).

While such models might be interesting as an intellectual exercise, there is a host of ethical concerns when it comes to relying on them to influence policy decisions. First and foremost is the concern about how accurate we should expect such models to be. Human behavior is complex, and the number of variables that influence it is immense, so it seems nearly impossible for such models to make perfectly accurate predictions. Of course, these models do not purport to tell the future, and so they could still be useful for predicting broad trends and changes, and in that regard may still help guide policy decisions.

Perhaps an even more significant problem, however, is when the example from comedic fiction becomes disturbingly close to reality. As The Atlantic reports, Wesley Wildman, one of the directors of the Modeling Religion Project, reported having developed a model which suggested that the best course of action when dealing with an extremist religious group with a charismatic leader was to assassinate that leader. Wildman was, understandably, troubled by the result, stating that he felt “deeply uncomfortable that one of my models accidentally produced a criterion for killing religious leaders.” Wildman also stated that, according to a different model, if one wanted to decrease secularization in a society, one could do so by “triggering some ecological disaster,” a thought that he reports “keeps me up at night.”

Some of the results produced by these models clearly prescribe immoral actions: there seem to be very few, if any, situations that would justify the triggering of an ecological disaster, and it seems that assassination, if it should ever be an option, should be a very last resort, not the first course of action one considers. Such simulations may model courses of action that are most efficient, but that is hardly to say that they are the ones that are most morally responsible. As Wildman himself noted, the results of these simulations are only potentially useful when one has taken into consideration all of the relevant ethical factors.

One might worry about whether these kinds of simulation should be performed at all. After all, if it turned out that the model predicted that an immoral course of action should result in a desired benefit, this information might be used to attempt to justify performing those actions. As The Atlantic reports:

[Wildman] added that other groups, like Cambridge Analytica, are doing this kind of computational work, too. And various bad actors will do it without transparency or public accountability. ‘It’s going to be done. So not doing it is not the answer.’ Instead, he and Wildman believe the answer is to do the work with transparency and simultaneously speak out about the ethical danger inherent in it.

If one is indeed worried about the ethical ramifications of such models, this kind of reasoning is unlikely to provide much comfort: even if it is a fact that “if I don’t do it, someone else will,” that does not absolve one of moral responsibility (since one, in fact, did it!). It is also difficult to see what being transparent and speaking out about the ethical dangers of a simulation’s proposed course of action would actually do to mitigate the damage.

On the other hand, we might not think that there is anything necessarily wrong with merely running simulations: simulating ecological disasters is a far cry from the real thing, and we might think that there’s nothing inherently unethical in merely gathering information. Wildman certainly does seem right about one thing: regardless of whether we think that these kinds of models are useful, it is clear that they cannot be relied upon responsibly without any consideration of the moral ramifications of their results.