
The Right to Block on Social Media

photograph of couple looking at smartphone behind window

On August 18, Elon Musk suggested that the blocking feature on Twitter, which allows users to prevent another user from interacting with them in any way on the site, would soon be removed. (Musk rebranded Twitter “X,” though it still uses the domain twitter.com. For clarity, I’ll refer to this social media site as Twitter, as many of its users still do.) Musk claimed that the feature would be limited to blocking direct messages, adding in a subsequent post that the feature “makes no sense.” This declaration was met with a good deal of criticism and disbelief among Twitter users.

Twitter’s CEO Linda Yaccarino later walked back the statement, claiming that the company is “building something better than the current state of block and mute.” Musk’s plan may be unlikely to materialize anyway, since the guidelines for both the App Store and the Google Play Store appear to require that any app offering user-generated content give users the ability to block it.

But Musk’s suggestion raises the question of whether blocking on social media is something users have a right to. I won’t attempt to comment on any relevant legal right, but let’s consider the users’ moral rights.

First, a blocking ban violates our right to privacy. We have a right not to expose ourselves to content — and to people — on social media sites. The privacy achieved with blocking on social media goes in two directions: blocking keeps one’s own posts from being viewed by another user, and it also prevents the other user from contacting or interacting with the person who blocked them. In preventing another person from viewing one’s posts, a person can limit who accesses their personal information, thoughts, photos, and social interactions with other users. Even when posts are public, for users who aren’t public figures acting in a public capacity, the ability to retain some privacy when desired is valuable. Blocking is essential for achieving this privacy.

Privacy is also a matter of the ability to prevent someone else from entering your space. Twitter is a place where people meet others, interact, and learn about the world. It facilitates a unique kind of community-building, and thus is a unique kind of place — one that can at once be both public and intimate, involving networks of friends and parasocial relationships. Just as the ability to prevent an arbitrary person from broadcasting their thoughts into your home is essential for privacy, so also the ability to block interactions from someone on social media is an important means of privacy in that space.

Second, the ability to block an account on social media is necessary for safety. Blocking allows users to prevent future harassment, private messages, or hate speech from another user, thus protecting their mental health. By similar reasoning to a restraining order, the ability to block also protects the user from another user’s attempt to maintain unwanted contact or to stalk them through the site. Blocking alone doesn’t accomplish these goals perfectly, but it is necessary for achieving them for anyone who uses social media.

Important to both the above points is the lack of a feasible alternative to Twitter. It’s not always possible for someone to simply use another form of social media to prevent unwanted interactions. Not all platforms have the same functions or norms. The default public settings of Twitter (and its permitting anonymous accounts) make it a much different place from Facebook, which defaults to private posts and requires full names from its users. Twitter has been a successful home for activism and real-time crisis information. Despite recent attempts to launch competing sites, no other social media site compares to Twitter in terms of reach and, for better and worse, ethos. One can’t simply leave the party to avoid interactions as one does in real life; there’s no viable alternative place to go.

Third, blocking gives users more agency than reporting accounts for suspension or banning. Blocking is immediate, user-achieved, and not dependent on another entity’s approval. It is also more efficient, because it does not require the time or effort that goes into adjudicating reports. Nor does blocking depend on the blocked user having violated one of the site’s terms of use, such as rules against hate speech. If I can block another user for any personal reason whatsoever, I have much greater control over my social life online.

With these considerations in mind, it’s worth pointing out that one personal account blocking another is not a case of government censorship or online moderation. People are free to block for any reason whatsoever, without being beholden to principles about what a government or business may rightly censor. There are moral considerations when people act towards each other in any situation, so this is not to say that no moral considerations could make blocking wrong in a particular case. But individuals do not have a blanket moral obligation to allow others to say whatever they want to them, even though a government or the site itself might have no standing to prevent the person from saying it.

One worry you might have is that blocking could intensify echo chambers. An echo chamber is a social community bound by shared beliefs and insulated from opposing evidence. If a person always blocks people who challenge their political ideas, they will likely find themselves in an environment that’s not particularly conducive to forming justified beliefs. This effect is intensified if it should turn out that the blocking actions of a user are then fed into the algorithm that determines what posts show up on one’s social feed. If the algorithm favors displaying accounts the user is likely to find much in common with, then blocking gives highly useful information that would likely result in some further insulation from differing viewpoints beyond the specific account that the user blocks.

Outrage breeds engagement, so the algorithm may instead use information about blocks to show posts that might get the user riled up. But seeing chiefly the most extreme of opposing views does not necessarily diminish the strength of an echo chamber. In fact, if one’s political opponents are shown mostly in the extreme form most likely to generate engagement, their presence might actually serve as evidence that one’s own side of the issue is the rational one. So, even an algorithm that uses blocks as an indication of what outrages the user — and therefore feeds the user more of that — could contribute to a situation where one is insulated from opposing viewpoints. This issue stems from the broader structure and profit incentives of social media, but it is worth considering alongside the issues of privacy, safety, and agency discussed above — in part because the ability to foster an environment that is conducive to forming justified beliefs is itself an important part of safety and agency.

Although it is imperfect, blocking on social media protects users’ privacy, safety, and agency. It is not a matter of government or corporate censorship, and it is necessary for protecting the moral rights of users. Contrary to Musk’s claim, a blocking feature makes moral sense.

Eric Schneiderman and the Moral Wrongs of Hypocrisy

Image of New York Attorney General Eric Schneiderman

Eric Schneiderman had served as the New York Attorney General since 2011 and was a strong opponent of President Trump’s policies to end DACA. Most recently, he sued the Weinstein Company over sexual harassment and civil rights violations while being a vocal supporter of the #MeToo movement. His clear stance as an advocate for civil rights, and specifically feminist goals, has made the circumstances of his recent resignation particularly frustrating. Schneiderman resigned as New York Attorney General the first week in May in response to claims that in four past relationships, he had physically assaulted his partners.

Continue reading “Eric Schneiderman and the Moral Wrongs of Hypocrisy”

NFL to Cheerleaders: Down Girl!

Photo of cheerleaders performing at the 2006 Pro Bowl

I’ve always thought there was a problem with cheerleading. However great they are as athletes and dancers, cheerleaders give the impression that a woman’s place on an athletic field is to cheer on the men. But now we’re learning that there are also problems for cheerleaders. NFL cheerleaders are subject to a truly bizarre list of conduct requirements, as well as regular sexual harassment.

The story has been told in a series of New York Times articles (April 4, April 10, April 17, April 17, and April 24), but perhaps most compellingly in this interview of Bailey Davis, a former New Orleans Saints cheerleader, on the New York Times podcast, “The Daily.”

Continue reading “NFL to Cheerleaders: Down Girl!”

Does Implicit Bias Explain Gender Discrimination?

Photo of men's and women's bathroom stall signs

Implicit bias is a concept that’s been enormously useful to feminists grappling with the way progress for women has stalled in some areas. Women are still under 5 percent of CEOs of Fortune 500 companies. They still make considerably less per hour than men for doing the same work. Women are still just 20 percent of PhD engineers and around the same percentage of philosophers. They still haven’t made it into the pantheon of US presidents, and only 23 of the 100 current members of the US Senate are women.

It’s all difficult to explain, especially if you don’t believe that women as a group have distinctive interests or aptitudes. But then, what’s going on? Outright sexism and misogyny aren’t exactly rare in the US, but neither are they common. Thus, if you suspect bias is at the root of the underrepresentation problem, implicit bias is a welcome concept.

Continue reading “Does Implicit Bias Explain Gender Discrimination?”

Trusting Women and Epistemic Justice

An anonymous woman holding up a sign that says #MeToo


Over the past three months, public figures have been exposed as serial sexual harassers and perpetrators of sexual assault. Survivors of harassment and assault have raised new awareness of toxic masculinity and its effects in a short period of time.

However, as time goes on, supporters of the movement have voiced rising concerns that something is bound to go awry. There is an undercurrent of worry that an untrustworthy individual will make an errant claim, providing fodder for skeptics and bringing the momentum of the movement to a halt. In response, it may seem like more vetting or investigation of claims is the way forward. On the other hand, wouldn’t it be unfortunate to erode trust and belief in women’s stories in hopes of preserving the very momentum that exists in service of hearing women’s voices?

Continue reading “Trusting Women and Epistemic Justice”

Is There a Problem With Scientific Discoveries Made by Harassers?

A scientist taking notes next to a rack of test tubes.

The question about bias in science is in the news again.

It arose before, in the summer, when the press got hold of an inflammatory internal memo that Google employees had been circulating around their company. The memo’s author, James Damore, now formerly of Google, argued that Google’s proposed solutions to eradicating the gender gap in software engineering are flawed. They’re flawed, Damore thought, because they assume that the preponderance of men in “tech and leadership positions” is a result only of social and institutional biases, and they ignore evidence from evolutionary psychology suggesting that biologically inscribed differences in “personality,” “interests,” and “preferences” explain why women tend not to hold such positions.

Continue reading “Is There a Problem With Scientific Discoveries Made by Harassers?”

Harvey Weinstein and Addressing Hollywood’s Unacceptable Reality

A photo of the Hollywood sign at sunset.

On October 5, The New York Times released a report detailing various instances of sexual assault perpetrated by Hollywood producer and executive Harvey Weinstein on many of his female colleagues. The allegations span a period of 30 years, as Weinstein’s power in the film industry protected him from consequences. “Movies were his private leverage,” the report reads, as Weinstein often offered promotions and bonuses to his female colleagues in exchange for sexual acts, and silenced those who spoke out with payments that ranged between $80,000 and $150,000.

Continue reading “Harvey Weinstein and Addressing Hollywood’s Unacceptable Reality”

Fighting Obscenity with Automation

When it comes to policing offensive content online, Facebook’s moderators have their work cut out for them. With billions of users, filtering out offensive content ranging from pornographic images to videos promoting graphic violence and extremism is a never-ending task. For the most part, this job falls on teams of staffers who spend their days sifting through offensive content manually. The decisions of these staffers — which posts get deleted, which posts stay up — would be controversial in any case. Yet the politically charged context of content moderation in the digital age has left some users feeling censored by Facebook’s policies, sparking a debate over automated alternatives.

Continue reading “Fighting Obscenity with Automation”