On August 18, Elon Musk suggested that the blocking feature on Twitter, which allows users to prevent another user from interacting with them in any way on the site, would soon be removed. (Musk rebranded Twitter “X,” though it still uses the domain twitter.com. For clarity, I’ll refer to this social media site as Twitter, as many of its users still do.) Musk claimed that the feature would be limited to blocking direct messages, adding in a subsequent post that the feature “makes no sense.” This declaration was met with a good deal of criticism and disbelief among Twitter users.
Twitter’s CEO Linda Yaccarino later walked back the statement, claiming that the company is “building something better than the current state of block and mute.” Musk’s proposal may be unworkable anyway, since the guidelines for both the App Store and the Google Play Store appear to require that user-generated content be blockable in any app they distribute.
But Musk’s suggestion raises the question of whether blocking on social media is something users have a right to. I won’t attempt to comment on any relevant legal right, but let’s consider the users’ moral rights.
First, a blocking ban violates our right to privacy. We have a right not to expose ourselves to content, and to people, on social media sites. The privacy achieved with blocking on social media goes in two directions: blocking keeps one’s own posts from being viewed by another user, and it prevents that user from contacting or interacting with the person who blocked them. By preventing another person from viewing their posts, a user can limit who accesses their personal information, thoughts, photos, and social interactions with other users. Even when posts are public, users who aren’t public figures acting in a public capacity have good reason to value the ability to retain some privacy when they want it. Blocking is essential for achieving this privacy.
Privacy is also a matter of being able to prevent someone else from entering your space. Twitter is a place where people meet others, interact, and learn about the world. It facilitates a unique kind of community-building, and thus is a unique kind of place: one that can be at once public and intimate, involving networks of friends and parasocial relationships. Just as the ability to prevent an arbitrary person from broadcasting their thoughts into your home is essential for privacy, so too is the ability to block someone’s interactions on social media an important means of privacy in that space.
Second, the ability to block an account on social media is necessary for safety. Blocking allows users to prevent further harassment, unwanted private messages, and hate speech from another user, thus protecting their mental health. Much like a restraining order, blocking also protects a user from another’s attempts to maintain unwanted contact or to stalk them through the site. Blocking alone doesn’t accomplish these goals perfectly, but it is necessary for achieving them for anyone who uses social media.
Important to both of the above points is the lack of a feasible alternative to Twitter. It’s not always possible for someone to simply move to another social media platform to avoid unwanted interactions. Not all platforms have the same functions or norms. Twitter’s default public settings (and the fact that it permits anonymous accounts) make it a much different place from Facebook, which defaults to more private posts and requires users’ full names. Twitter has been a successful home for activism and real-time crisis information. Despite recent attempts to launch competing sites, no other social media site compares to Twitter in reach and, for better or worse, ethos. One can’t simply leave the party to avoid unwanted interactions as one does in real life; there is no viable alternative place to go.
Third, blocking gives users more agency than reporting accounts for suspension or banning. Blocking is immediate, carried out by the user themselves, and not dependent on another entity’s approval. It is also more efficient than filing a report, because it requires neither the time nor the effort that goes into adjudicating one. Nor does blocking depend on the blocked user having violated the site’s terms of use, such as its rules against hate speech. If I can block another user for any personal reason whatsoever, I have much greater control over my social life online.
With these considerations in mind, it’s worth pointing out that one personal account blocking another is not a case of government censorship or online moderation. People are free to block for any reason whatsoever, without being beholden to principles about what a government or business may rightly censor. Moral considerations apply whenever people act toward one another, so this is not to say that blocking could never be wrong in a particular case. But individuals do not have a blanket moral obligation to let others say whatever they want to them, even if neither a government nor the site itself has standing to prevent the person from saying it.
One worry you might have is that blocking could intensify echo chambers. An echo chamber is a social community bound by shared beliefs and insulated from opposing evidence. If a person always blocks people who challenge their political ideas, they will likely find themselves in an environment that’s not particularly conducive to forming justified beliefs. This effect is intensified if a user’s blocks are fed into the algorithm that determines which posts show up in their feed. If the algorithm favors displaying accounts the user is likely to have much in common with, then each block gives it highly useful information, likely insulating the user from differing viewpoints well beyond the specific account they blocked.
Outrage breeds engagement, so the algorithm may instead use information about blocks to show posts that might get the user riled up. But seeing chiefly the most extreme opposing views does not necessarily weaken an echo chamber. In fact, if one’s political opponents are shown mostly in the extreme forms most likely to generate engagement, their presence might actually serve as evidence that one’s own side of the issue is the rational one. So even an algorithm that treats blocks as an indication of what outrages the user, and therefore feeds the user more of it, could contribute to a situation in which one is insulated from opposing viewpoints. This issue stems from the broader structure and profit incentives of social media, but it is worth considering alongside the issues of privacy, safety, and agency discussed above, in part because the ability to foster an environment conducive to forming justified beliefs is itself an important part of safety and agency.
Although it is imperfect, blocking on social media protects users’ privacy, safety, and agency. It is not a matter of government or corporate censorship, and it is necessary for protecting the moral rights of users. Contrary to Musk’s claim, a blocking feature makes moral sense.