
Due Attention: Addictive Tech, the Stunted Self, and Our Shrinking World

photograph of crowd in subway station

In his recent article, Aaron Schultz asks whether we have a right to attentional freedom. The consequences of a life lived with our heads buried in our phones – consequences not only for individuals but for society at large – are becoming ever more visible. At least partly to blame are tech's (intentionally) addictive qualities, and Schultz documents the way AI attempts to maximize our engagement by taking an internal X-ray of our preferences as we move between platforms. Schultz's concern is that as better and better mousetraps get built, more of our agency erodes each day. Someday, we'll come to recognize the importance of attentional freedom – freedom from being reduced to prey for these technological wonders. Hopefully, that recognition arrives before it's too late.

Attention is a crucial concept to consider when thinking about ourselves as moral beings. Simone Weil, for instance, claims that attention is what distinguishes us from animals: when we pay attention to our body, we aim to bring consciousness to our actions and behaviors; when we pay attention to our mind, we strive to shut out intrusive thoughts. Attention is what allows us, from a theoretical perspective, to avoid errors, and from a moral, practical perspective, to avoid wrongdoing.

Technological media capture our attention in an almost involuntary manner. What often starts as a simple distraction – TikTok, Instagram, video games – may quickly lead to addiction, triggering compulsive behaviors with severe implications.

That's why China, in 2019, imposed limits on gaming and social media use. Then, in 2021, in an attempt to further curb the mental and physical health problems of its young population, stricter limits on online gaming were enforced for school days, and children's and teenagers' use was limited to one hour a day on weekends and holidays.

In Portugal, meanwhile, there is a crisis among children who, from a very young age, are being diagnosed with addictions to online gaming and gambling – addictions that compromise daily habits and routines such as going to school, spending time with others, or showering. In Brazil, a recent study showed that 28% of adolescents display signs of hyperactivity and mental disorder linked to tech use, to the point that they forget to eat or sleep.

The situation is no different in the U.S., where a significant portion of the population uses social media and young people spend most of their time in front of a screen, developing mental conditions that inhibit social interaction. Between online gaming and social media use, we are witnessing a new kind of epidemic, one that attacks the very foundations of what it is to be human: the ability to relate to the world and to others.

The inevitable question is: should Western countries follow the Chinese example of controlling tech use? Should it be the government’s job to determine how many hours per day are acceptable for a child to remain in the online space?

For some, the prospect of Big Brother's protection might look appealing. But let us remember Tocqueville's warning about the despotism and tutelage inherent in this temptation – in making the State the steward of our interests. Not only is the strategy paternalistic, curbing one's autonomy and the freedom to make one's own choices, but it is also totalitarian in its predisposition, permitting the State to control one more sphere of our lives.

This may seem an exaggeration. Some may think the situation's urgency demands the strong hand of the State. However, while unrestrained use of social media and online gaming may have severe implications for one's constitution, we should recognize the problem for what it is. Our fears concerning technology and addiction are merely a symptom of a more profound problem: the difficulty one has in relating to others and finding one's place in the world.

What authors like Hannah Arendt, Simone Weil, Tocqueville, and even Foucault teach us is that the construction of our moral personality requires laying roots in the world. Limiting online access will not, by itself, resolve the underlying problem. We may in fact end up throwing children into an abyss of solitude and despair by exacerbating the difficulties they have in communicating. We must ask: how might we rescue the experience of association, of togetherness, of sharing physical spaces and projects?

Here is where we return to the concept of attention. William James described attention as the

taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness is of its essence. It implies withdrawal from some things in order to deal effectively with others. 

That is something that social media, despite capturing our (in)voluntary attention, cannot give us. Our withdrawal into social media must therefore be countered with a positive program of attentive activity: (re)learning how to look, interpret, think, and reflect upon things, and above all how to listen and be with others. More than 20 years ago, Robert Putnam documented the loss of social capital in Bowling Alone. Simone Weil detailed our sense of "uprootedness" fifty years before that. Unfortunately, we are still looking for a cure that would have us trade in our screens for something we can actually do attentively together. Legislation alone is unlikely to fill that void.

Calibrating Trust Amidst Information Chaos

photograph of Twitter check mark on iphone with Twitter logo in background

It’s been a tumultuous past few months on Twitter. Ever since Elon Musk’s takeover, there have been almost daily news stories about some change to the company or platform, and while there’s no doubt that Musk has his share of fans, many of the changes he’s made have not been well-received. Many of these criticisms have focused on questionable business decisions and almost unfathomable amounts of lost money, but Musk’s reign has also produced a kind of informational chaos that makes it even more difficult to identify good sources of information on Twitter.

For example, one early change that received a lot of attention was the introduction of the "paid blue check mark," whereby anyone could pay for what had previously been a feature reserved for notable figures on Twitter. This infamously led to a slew of impersonators creating fake accounts, most notably the phony Eli Lilly account that had real-world consequences. In response, changes were made: the paid check system was modified, then re-modified, then color-coded, then the colors changed, and it is now unclear how the system will work in the future. Additional changes have been proposed, such as a massive increase in the character limit for tweets, although it's not clear whether they will be implemented. Others have recently made their debut, such as a "view count" added to each tweet, next to "replies," "retweets," and "likes."

It can be difficult to keep up with all the changes. This is not a mere annoyance: since it's not clear what will happen next, or what some of the symbols on tweets really represent anymore – such as the aforementioned check marks – it can be difficult for users to find their bearings and identify trustworthy sources.

More than a mere cause of confusion, informational chaos presents a real risk of undermining the stability of online indicators that help people evaluate online information.

When evaluating information on social media, people appeal to a range of factors to determine whether they should accept it, for better or for worse. These factors include visible metrics on posts, such as how many times a post has been approved of – be it in the form of a "like," a "heart," an "upvote," etc. – shared, or interacted with via comments, replies, or other measures. This might seem a blunt and perhaps ineffective way of evaluating information, but it's not just that people tend to believe what's popular: given that on many social media platforms it's easy to misrepresent oneself and generally just make stuff up, users tend to look to aspects of their experience that cannot easily be faked. While it's of course not impossible to fabricate numbers of likes, retweets, and comments, it is at least more difficult to do so, and so these markers often serve as quick heuristics for deciding whether some content is worth engaging with.
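To make the idea of a "quick heuristic" concrete, here is a minimal sketch – my own illustration, not anything a platform actually runs; the weights and threshold are invented – of how hard-to-fake engagement counts might be folded into a single worth-a-look score:

```python
import math

def engagement_score(likes: int, shares: int, comments: int) -> float:
    """Log-scaled so the jump from 0 to 100 likes matters more than the
    jump from 10,000 to 10,100; weights are arbitrary, with shares
    weighted highest since sharing costs the user the most."""
    return (1.0 * math.log1p(likes)
            + 2.0 * math.log1p(shares)
            + 0.5 * math.log1p(comments))

def worth_a_look(likes: int, shares: int, comments: int,
                 threshold: float = 8.0) -> bool:
    return engagement_score(likes, shares, comments) >= threshold

print(worth_a_look(1200, 300, 450))  # True: heavily engaged with
print(worth_a_look(3, 0, 1))         # False: barely any signal
```

The point is not the particular numbers but the shape of the heuristic: visible, costly-to-fake counts get compressed into a single snap judgment about whether content deserves attention.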

There are other markers, too. People use the endorsement of sources they already trust when evaluating an unknown source, and the Eli Lilly debacle showed how people treated the blue check mark as at least an indicator of authenticity – unsurprisingly, given its original function. Similar markers play the same role on other social media sites: the "verified badge" on Instagram, for example, at least tells users that a given account is authentic, although it's not clear how far "authenticity" translates into "credibility."

(For something so often coveted among influencers and influencer-wannabes, there appears to be surprisingly little research on the actual effects of verification on users' trust: some studies suggest that it makes little to no difference to perceived trustworthiness or engagement, while others suggest the opposite.)

In short: the online world is messy, and it can be hard to get one’s bearings when evaluating the information that comes at one constantly on social media.

This is why making sudden changes to even superficial markers of authenticity and credibility can make this problem significantly worse. While people might not be the best at interpreting these markers in the most reliable ways, having them be stable can at the very least allow us to consider how we should respond to them.

It's not as though this is the first change to how people evaluate entries on social media. In late 2021, YouTube removed publicly visible counts of how many dislikes videos received, a change that arguably made it more difficult to identify spam, off-topic, or otherwise low-quality videos at a glance. While relying on a heuristic like "don't trust videos with a bunch of dislikes" will not always lead you to the best results, a stable set of indicators can at least help users calibrate their levels of trust.

So, it's not that users will be unable to adjust to changes to their favorite online platforms. But with numerous changes of uncertain value or longevity comes disorientation. Combine that with Musk's recent unbanning of accounts previously deemed problematic – and the resulting increase in misinformation spreading around the site – and conditions become even worse for those looking for trustworthy sources of information online.

Moral Duties in an Online Age: The Depp/Heard Discourse

photograph of Johnny Depp and Amber Heard at event

The Johnny Depp/Amber Heard defamation trial reached a verdict in early June, but the conversation around it is far from over. Both Heard and Depp have alleged that the other perpetrated domestic violence, and spectators have been quick to take sides. The televised trial dredged up salacious stories of abuse, the infamous turd, and a severed finger. The court largely sided with Depp, ordering Heard to pay $10.35 million to Depp and Depp to pay $2 million to Heard. But that is not the end of the story.

Over the weekend, more than 6,000 pages of sealed court documents were released, reigniting the controversy. Some details within were not very flattering for Depp, and the hashtag #AmberHeardDeservesAnApology made the rounds on Twitter. During the trial itself, however, #JusticeforJohnnyDepp had been the predominant hashtag, with discussion on TikTok largely supporting a pro-Depp narrative.

News stories about this new development led with headlines ranging from "Unsealed Depp v. Heard court docs reveal 'Aquaman' actress was 'exotic dancer'" to "Amber Heard Lawyers Claimed Johnny Depp Had Erectile Dysfunction That Likely Made Him 'Angry'" to "Depp Swore in Declaration That Amber Heard Never Caused Him Harm: 'Damning'" to "Amber Heard's sister 'told her boss the actress did sever Johnny Depp's finger when she hurled a vodka bottle at him.'"

We have collectively played a part in these events unfolding the way they did, both by giving our attention to the trial and by making judgments about Depp and Heard based on the evidence and testimony provided. This is not necessarily a bad thing, as Heard and Depp are public figures who should be held accountable for their actions.

But combine overconfidence in amateur sleuthing, the necessity of taking sides on the internet, fan loyalty to Depp or Heard, and trauma due to experience with domestic violence, and you do not end up with productive internet conversation.

While it seems that it might have been better in some ways to leave these details private instead of amplifying the public nature of the trial through social and traditional media networks, the information about the trial and the discussion around it cannot be taken back. Given that Depp’s and Heard’s former relationship was and is still being picked apart on the internet, what duties do we have in responding to this ongoing discussion?

There are roughly three ways that we could respond productively at this point:

1) We could let it go and turn our attention away from the spectacle.

2) We could dig through the court documents and records to try to determine the truth and either correct or affirm our previous judgments.

3) We could step in or comment on parts of the discussion around Heard and Depp when it becomes misogynistic, bullying, or otherwise rancid.

Take the first option: turning our attention away from the spectacle. In some ways this seems like a good option, because the ongoing toxic discourse survives and thrives on our attention. If we take our attention away from it, we remove its sustenance. At the same time, if the people who are making thoughtful contributions to the conversation turn away from it, that will likely make the quality of the ongoing conversation even worse than it already is. And now that this case has been so publicly litigated, there seems to be some injustice in allowing an inaccurate public conception of Heard and/or Depp to stand.

Take the second option: relitigating the evidence. While this does provide more fuel for the controversy, it can get us closest to understanding the truth about what happened. Trying to figure out what is true in this kind of case is difficult, however, as there are mountains of legal documents and testimony to review. Few people have the time or expertise to do that kind of investigation well. While it is good to find the truth and put it out there, especially in response to such a public maligning of Heard and Depp, this kind of response could still fall into the trap of digging too deeply into what should be private information about Heard’s and Depp’s lives.

Take the third option: stepping in at the level of the discussion itself. In some ways, this response is easier than the second option, because it does not require amassing the full information about the Depp/Heard trial. It does, however, require a keen eye for toxic patterns in internet discourse and the ability to point those out without creating a new, toxic meta-conversation. This kind of response has the potential to improve the collective conversation, but it does not by itself provide the full resources for doing justice to Heard and Depp by speaking the truth about the trial. Still, it can speak truth about, and do justice to, the way the public conversation around the trial has gone, though doing that well might depend on having a good enough understanding of the facts of the case.

None of these responses is exclusive of the others, and they likely do not exhaust the options for responding productively to the discourse. How should you figure out which response(s) to take? If you have poured lots of time and energy into speaking with strangers about this case on the internet, it might be good to step back and give the whole thing less of your attention. If you have made public judgments about Depp and Heard and realize that new evidence points toward your judgments being wrong, there is good reason to do your research, determine the accuracy of your public claims, and apologize if you were wrong. If you don't have the time and energy to research everything but see bad patterns of discourse in your social media circles, you might step in and say something.

However, having the courage to step up and speak the truth and knowing whether it is the right thing to do can be very difficult in cases like these. Good intentions and true judgments may not be enough to turn the tide.

Because of the way the dynamics of cancellations like these play out, it is nearly impossible to make any substantive judgments about Depp, Heard, or the conversation around them without being accused of minimizing domestic violence and getting sucked back into the same unproductive patterns of discourse.

If someone thinks that Depp was the primary aggressor, that leaves them open to accusations of minimizing domestic violence against women. If someone thinks that Heard was the primary aggressor, that leaves them open to accusations of minimizing domestic violence against men. If someone thinks that there was mutual abuse, that leads to accusations of playing into both-sides-ism and ignoring the violence done by the real perpetrator. Meta-level observations about feminism or domestic violence against men can also get pulled back into these tropes. The only ways to get out might be to change the conversation to be able to talk more directly about the larger moral issues about gender and domestic violence that the trial raises, or to wait until the dust settles so all the facts can be properly addressed and appreciated.

Individual actions within the discourse are unlikely to solve the underlying structural problems of both social and traditional media that form the basis for the collective conversation, but they do allow us, as users of social media, to take responsibility for our individual actions that contribute to either a healthier or a more toxic discourse.

Anti-Maskers and the Dangers of Collective Endorsement

photograph of group of hands raised

Tensions surrounding the coronavirus pandemic continue to run high, especially in parts of America where measures to control the spread of the virus have become a political issue. Recently, some of these tensions erupted in protests by "anti-maskers": in Florida, for example, a group of such individuals marched through a Target, telling people to take off their masks and playing the song "We're Not Gonna Take It." Presumably the "it" they were no longer interested in taking was what they perceived to be a violation of personal liberties, as they felt they were being forced to wear masks against their will. While evidence of the effectiveness of masks at keeping oneself and others safe continues to grow, there nevertheless remains a vocal minority that believes otherwise.

A lot of thought has been put into the question of why people continually ignore good scientific evidence, especially when the consequences of doing so are potentially dire. There is almost certainly no single, easy answer. However, one potential reason is worth focusing on: anti-maskers, like many others who reject the best available scientific evidence on a range of issues, tend to trust sources they find on social media over more reputable outlets. For instance, one investigation of why anti-maskers hold their beliefs pointed to the effects of Facebook groups in which such beliefs are discussed and shared. Indeed, despite Facebook's efforts to contain the spread of such misinformation, anti-masker Facebook groups remain easy to find.

However, the question remains: why would anyone believe a group of random Facebook users over scientific experts? The answer is no doubt multifaceted as well. But one reason may come down to trust, and to the fact that the ways we determine who is trustworthy work differently online than they do in other contexts.

As frequent internet users will already know, it can often be difficult to identify trustworthy sources of information online. One reason is that the internet offers varying degrees of anonymity: the consequence is that one may not have much information about the person one is talking with, especially given that people can fabricate aspects of their identities in online environments. Furthermore, interacting with others through text boxes on a screen is a very different kind of interaction from one that occurs face-to-face. Researchers have shown that we pick up on various "communication cues" when interacting with one another, including verbal cues like tone of voice, volume, and rate of speech, and visual cues like facial expressions and body language. These cues matter when we judge whether to believe what another person is saying, and they are largely absent from much online communication.

With less information about each other to go on when interacting online, we tend to look to other sources of information when determining whom to trust. One thing internet users appeal to is endorsement. When reading posts on social media or message board sites, we tend to put more trust in those with the most hearts, or likes, or upvotes. This is perhaps most apparent when deciding what product to buy: we gravitate not only toward the highest-rated products but toward those with the greatest number of high ratings (a single 5-star review doesn't mean much, but hundreds of high reviews mean a lot more). The same can be the case when determining which information to believe: if your post has thousands of endorsements, I'm probably going to at least give it a look; if it has very few, I'll probably pass it by.
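The intuition that hundreds of high ratings outweigh a single 5-star review can be made precise with a confidence-weighted score such as a Bayesian average. Here is a minimal sketch – my own illustration; the prior values are arbitrary, and no retailer's actual formula is being quoted:

```python
def bayesian_average(ratings, prior_mean=3.0, prior_weight=10):
    """Confidence-weighted average: with few ratings the score stays
    near the prior; as ratings accumulate, the observed mean dominates."""
    n = len(ratings)
    if n == 0:
        return prior_mean
    observed_mean = sum(ratings) / n
    return (prior_weight * prior_mean + n * observed_mean) / (prior_weight + n)

print(round(bayesian_average([5]), 2))           # 3.18: one 5-star review barely moves the score
print(round(bayesian_average([4, 5] * 100), 2))  # 4.43: 200 strong reviews dominate the prior
```

A single perfect review is pulled back toward a middling prior, while a large body of strong reviews earns a score close to its observed average – exactly the pattern of trust described above.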

There is good reason to trust information that is highly endorsed. As noted above, it can be hard to determine whom to trust online because it's not clear whether someone really is who they say they are. It's easy for me to join a Facebook group and tell everyone I'm an epidemiologist, for example, and without any further information about me you've got little other than my word to go on. What's much harder to fake, though, is a whole bunch of likes, or hearts, or upvotes. The thought, then, is that if enough other people endorse something, that's good reason to trust it. Here is one reason why people getting their information from social media might trust it more than information coming from experts: it is highly endorsed by many other members of their group.

At the same time, people might be more willing to believe those with whom they interact online precisely because they are interacting with them. When a scientific body like the CDC tells you to wear a mask, information travels in only one direction. In online groups, by contrast, it can be much easier to trust those one is interacting with rather than merely deferring to. This is another of the problems raised by online communication: while plenty of good information is available, it is easier to trust people one can engage with than people from whom one simply takes orders.

Again, the fact that the problem is complex and multifaceted means there will be no one-size-fits-all solution. That said, it is worth thinking about how those with good information might establish relationships of trust with those who need it, given the unique qualities of online environments.

Twitter and Disinformation

black and white photograph of carnival barker

At the recent CES event, Twitter's director of product management Suzanne Xie announced proposed changes to Twitter that are slated to begin rolling out in a beta version this year. They represent fundamental changes to the way conversations happen on the platform, including the ability to restrict tweets to limited groups of users (as opposed to posting them globally) and, perhaps the biggest change, tweets that cannot be replied to (what Twitter is calling "statements"). Xie stated that the changes are meant to prevent what Twitter sees as unhealthy behaviors by its users, including "getting ratio'd" (when one's tweet receives a very high ratio of replies to likes, which is taken to represent general disapproval) and "getting dunked on" (a phenomenon in which the replies to one's tweet are highly critical, often detailing why the original poster was wrong).

If you have spent any amount of time on Twitter, you have no doubt come across the kind of toxic behavior the platform has become infamous for: rudeness, insults, and aggression are commonplace. So one might think that any change that reduces this toxicity should be welcomed.

The changes that Twitter is proposing, however, could have some seriously negative consequences, especially when it comes to the potential for spreading misinformation.

First things first: when people act aggressively and threateningly on Twitter, they are acting badly. While many parts of the internet can seem like cesspools of vile opinion (various corners of YouTube, Facebook, and basically every comment section on any news website), Twitter has long had the reputation of being a place where nasty prejudices of every kind you can imagine run amok. Twitter itself has recognized that people who use the platform to express racist, sexist, homophobic, and transphobic views (among others) are a problem, and it has in the past taken some measures to curb this kind of behavior. It would be a good thing, then, if Twitter could take further steps to actually deter it.

The problem with allowing users to tweet in such a way that the tweet cannot receive any feedback, though, is that the community provides valuable information about the quality and trustworthiness of a tweet's content. Consider first the phenomenon of "getting ratio'd." While Twitter lets users endorse tweets – in the form of "hearts" – it has no explicit mechanism for users to show disapproval: there is no non-heart equivalent. In the absence of a disapproval mechanism, Twitter users generally take a high ratio of replies to hearts as an indication of disapproval (there are exceptions: someone who asks a question or seeks advice may receive many replies, resulting in a relatively high ratio that signals engagement rather than disapproval). Community signaling of disapproval can provide important information, especially when it concerns individuals in positions of power. For example, if a politician makes a false or spurious claim, their getting ratio'd can indicate to others that the information being presented should not be accepted uncritically. Without such a mechanism, it is much more difficult to gauge the quality of information.
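As a rough illustration of how the community's ratio heuristic works – my own sketch, not anything Twitter computes or publishes; the threshold and the question carve-out are hypothetical choices – the signal can be captured in a few lines:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    replies: int
    likes: int

def looks_ratioed(tweet: Tweet, threshold: float = 2.0) -> bool:
    """Flag probable community disapproval: far more replies than likes.
    Questions and requests for advice draw replies without signaling
    disapproval, so they are crudely exempted here."""
    if "?" in tweet.text:
        return False
    if tweet.likes == 0:
        return tweet.replies > 0
    return tweet.replies / tweet.likes >= threshold

print(looks_ratioed(Tweet("Global warming is a hoax.", 8000, 900)))          # True
print(looks_ratioed(Tweet("What's the best way to learn piano?", 500, 40)))  # False
```

Removing replies removes the numerator: with "statements," there is simply nothing left for a heuristic like this – or for readers eyeballing the same ratio – to measure.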

In addition to the quantity of responses that contribute to a ratio, the content of those responses can help others determine whether a tweet should be accepted. Consider, for example, a world leader who does not believe that global warming is occurring and who tweets as much to their many followers. If this tweet were made as a mere statement, without the possibility of a conversation afterwards, those who believe its content would never be exposed to arguments that correctly show it to be false.

A concern with limiting the kinds of conversations that can occur on Twitter, then, is that preventing replies can seriously limit the ability of the community to indicate that one is spreading misinformation. This is especially worrisome, given recent studies that suggest that so-called “fake news” can spread very quickly on Twitter, and in some cases much more quickly than the truth.

At this point, before the changes have been implemented, it is unclear whether the benefits will outweigh the costs. And while one should always be cautious when getting information from Twitter, in the absence of any possibility for community feedback it is perhaps worth employing an even healthier skepticism in the future.

Is Reddit Run By Angry Warlords?

Reddit is heralded as "The Front Page of the Internet," but it is now coming under increased scrutiny. T.C. Sottek writes:

As Reddit trips over itself trying to contain its stolen nude photo problem, CEO Yishan Wong finally addressed the controversy on Saturday by releasing a remarkably clueless manifesto. Reddit, he wrote, is “not just a company running a website where one can post links and discuss them, but the government of a new type of community.” So, then, what type of government is Reddit? It’s the kind any reasonable person would want to overthrow.

What's the nude photo problem? During the celebrity nude photo leak, a redditor named "John" created a subreddit to serve as a repository for the stolen images. Reddit refused to take the photos down, in the name of free speech and of not compelling virtuous behavior. When The Washington Post published information about the subreddit's creator in a scathing exposé, the redditor cried privacy violation, and many redditors rallied to John's defense. This is why Sottek says that Reddit's government is a failed state: "a weak feudal system that's actually run by a small group of angry warlords who use 'free speech' as a weapon."

The whole situation raises a host of interesting ethical issues.

  1. Is the value of free speech so important that people should be entitled to post private, stolen images? (That seems to be the Reddit position.)
  2. Did The Washington Post do something ethically wrong when it published its piece criticizing John and divulging his personal information? (That's what this article suggests.)

Let us know what you think.