
A Right To Attentional Freedom?

By Aaron Schultz
7 Mar 2023

The White House recently posted a proposal for an AI Bill of Rights. In California, there is a bill that aims to hold social media companies accountable for getting young children addicted to their platforms. Several of these companies also face a federal lawsuit for emotionally and physically harming their users.

For those who use technology on a day-to-day basis, these developments are likely unsurprising. There is an intuition, backed by countless examples, that our technology harms us and that those who have created the technology are somehow responsible. Many of us find ourselves doomscrolling or stuck on YouTube for hours because of infinite scrolling.

Less settled is precisely how these technologies are bad for us and how exactly these companies wrong us.

The California bill and the lawsuit both argue that one notable form of harm can be understood through the lens of addiction. They argue that social media companies are harming a particularly vulnerable group, namely young adults and children, by producing an addictive product.

While this way of understanding the problem is certainly plausible, one might favor other ways of explaining it. How we frame the moral relationship between users and technology will shape legal argumentation and future regulation. If our aim is to forge a morally sound relationship between users, technology, and producers, it is important to get the moral story right.

What makes social media addictive is that it has become especially adept at serving content users want to engage with. Complex algorithms learn their users' predilections and can accurately predict the kinds of things people want to see. AI's ability to manipulate us so effectively highlights our failure to recognize the importance of attention, a valuable good that has gone underappreciated for far too long. Three features of attention are worth noting.

First, our attention is limited. We cannot attend to everything before us, and so each moment of attention is accompanied by non-attention. If I am paying attention to a film, then I am not paying attention to the cars outside, or the rain falling, or the phone in my pocket.

Second, attention is susceptible to outside influence. If someone is talking loudly while a film plays, I may become distracted. I may want to watch the film closely, but the noise pulls my attention away.

Third, attention is related to many foundational moral rights. Take, for instance, freedom of thought. We might think that a society with no laws about what you are allowed to think, read, or say thereby guarantees freedom of thought. However, unless your attention is respected, freedom of thought cannot be secured.

We need only think of Kurt Vonnegut's story "Harrison Bergeron" to see what this assumption misses. In it, the title character lives in a society that goes to great lengths to ensure equality: those born with natural talents are given artificial burdens. For Harrison, who is exceptional both physically and mentally, one particularly clever tactic is used to ensure he does not think too much. Periodically, a loud, harsh sound is played through an earpiece, making it impossible for him to focus.

The relevant point here is that even if no law exists that prohibits you from thinking whatever you please, reading what you want, or discussing what you wish, your freedom of thought can be indirectly overridden.

By exploiting the fact that your attention is limited and not fully voluntary, another party can prevent you from thinking freely. Thus, although our rights may be respected on paper, assaults on our attention can prevent us, in practice, from exercising the capacities those rights are supposed to protect.

When we interact with technology, we must give our attention over to it. Furthermore, much of the technology we interact with on a day-to-day basis is designed specifically to maintain and increase user engagement. As a result of these design choices, we have developed technology that is highly effective at capturing our attention.

As predictive technology improves, machines will get better at distracting us. As a result, more people will spend more time using the technology (e.g., watching videos, reading news pieces, viewing content produced by other users), and the more time people spend on it, the less they can spend attending to other things.

If our attention is limited, can be controlled from the outside, and is vital for utilizing other morally important capacities, it seems clear that it is something that should be treated with respect.

Consider how we tend to think that it is rude to distract someone while they are trying to concentrate. It rarely feels satisfying if the person causing the distraction simply replies, "Just ignore me." This response denies a crucial fact about the nature of attention, viz., that it is often non-voluntary.

Furthermore, it would be even worse if the distracting person tried to mask their presence and distract someone secretly, and yet this is precisely what a great deal of our technology does. It exploits the non-voluntary nature of our attention, overrides attentional freedom, and does so in the most discreet way possible. Technology could instead be designed to respect our attentional freedom rather than covertly undermine it; for example, it might periodically prompt the user to consider doing something else instead of endlessly presenting more content to engage with.

Rather than focusing on technology’s tendency to encourage addictive behavior in young people, I would like us to think about the effects technology has on all users’ attentional freedom.

Technology that is designed to distract you is harmful because it overrides your attentional freedom. When you use this technology, you are less free. This analysis must overcome at least two challenges, both centered on consent.

The first is that we consent to use these products. To argue that my phone wrongfully harms me because it is distracting seems like arguing that a book wrongfully harms me if it is so gripping that I cannot put it down.

However, while a book may be enticing, and may even be written in the hope that it captures attention, the book does not learn what captures your attention. There is a difference between something capturing your attention because it is interesting and something that learns your preferences and sets about satisfying them. What makes AI-driven technology unique is its capacity to fine-tune what it offers you in real time. It knows what you click on, what you watch, and how long you engage, and it relies on the involuntary part of attention to keep you engaged.

The second argument concerns ordinary human interaction. If it is wrong to affect someone's attention, then daily interactions must be wrong. For instance, if someone on the street asks me to take a flier for a show, do they wrong me by distracting me? Do all interactions require explicit consent lest they be moral violations? If our moral analysis of attention forces us to conclude that even something as trivial as a stranger saying hello constitutes a moral wrong because it momentarily distracts you, then we have either gone wrong somewhere along the way or produced a moral demand that is impossible to respect.

To answer this second objection, we can say this: when someone distracts you, they do not necessarily wrong you. Someone who tries to hand you a flier in the street effectively asks for your attention, and you can deny this request with fairly little effort. Notably, if the person continues to pester you and follows you down the road as you walk, their behavior no longer seems blameless and quickly turns into a form of harassment. The moral problem emerges when someone intentionally tries to override your attentional freedom. Because attentional freedom is connected to a set of important freedoms (e.g., freedom of thought, freedom of choice), whoever can override your attentional freedom can indirectly override those other freedoms as well.

If technology harms us because we become addicted to it, then we have reason to protect children from it. We may even have reason to provide more warnings for adults, like we do with addictive substances. However, if we stop our analysis at addiction, we miss something important about how this technology operates and how it harms us. When we see that technology harms us because it overrides our attentional freedom, we will need to do more than simply protect children and warn adults. Several new questions emerge: Can we design technology to preserve attentional freedom, and if so, what changes should we make to existing technology? How can we ensure that technology does not exploit the non-voluntary part of our attention? Are some technologies too effective at capturing our attention, such that they should not be on the market? Is there a right to attentional freedom?

Aaron Schultz is currently an Assistant Professor at Michigan State University. His past research has focused on Buddhist responses to wrongdoing and problems related to the justification of state punishment. Currently, he is interested in the moral and political problems presented by artificial intelligence.