
Censorship on Social Media

By Emily Troyer
9 Nov 2016

It’s no secret that Republican presidential nominee Donald Trump is known for writing inflammatory posts on his social media platforms. Facebook employees have questioned how to handle them; some argued that Trump’s posts calling for a ban on Muslim immigration “should be removed for violating the site’s rules on hate speech.” Facebook CEO Mark Zuckerberg halted that talk decisively when he said, “we can’t create a culture that says it cares about diversity and then excludes almost half the country because they back a political candidate.” A Wall Street Journal article uses the episode as a stepping stone to ask whether presidential candidates should have more wiggle room than the average Joe when it comes to potentially damaging posts. Regardless of who is posting, however, such incidents raise a broader question: to what extent is it appropriate for social media platforms to censor posts at all?

Every social network has a somewhat different target audience and purpose, and Reddit and Facebook are two popular platforms that illustrate the difference. Facebook, at its most literal, keeps a log of the people a user knows and updates that user on their happenings. Reddit, on the other hand, aims to be a platform where people share ideas, articles, and pictures of interest. Both sites include some form of approval or disapproval of posts: likes on Facebook and upvotes on Reddit. Both also involve idea-sharing, which is perhaps more central to Reddit’s purpose but is an integral part of Facebook as well. Because each wants to maintain and build an online community that keeps users engaged, the issue of what can be said online becomes a complicated one.

Reddit operates under a censorship policy that essentially bans content if it “is illegal, encourages or incites violence, threatens, harasses, or bullies or encourages others to do so, is personal and confidential information, impersonates someone in a misleading or deceptive manner, [or] is spam.” To help decipher what “incites violence,” or when a comment crosses into bullying or harassment, Reddit specifies that the line is crossed when a member no longer feels safe posting at all or fears for their safety in real life. Reddit additionally clarifies that “being annoying, vote brigading, or participating in a heated argument is not harassment.”

The Reddit community is divided into “subreddits” devoted to specific topics, ranging from r/politics to r/funny. Each subreddit can add its own requirements and is free to govern its domain more strictly if its moderators think that is best. Reddit’s main goal is to preserve the genuineness of the site’s content while ensuring the site can be widely enjoyed. That goal itself captures the tension between keeping speech genuine and organic and not infringing on others’ right to safety. Where is the line? Are Trump’s comments about forbidding Muslims from entering the United States really bullying, or simply unpleasant? Many Reddit users were up in arms when Reddit banned the forum r/fatpeoplehate for harassment. The controversy stemmed from the fact that, at the time, subreddits like r/rapingwomen, which harassed and threatened women, and r/atheism, where a top post revealed a couple’s intimate personal information without their consent, remained on the site. One user argued that the difference was that posts from r/fatpeoplehate made the front page of Reddit while posts from the other two did not. Popularity again becomes the point of contention. Should the number of people a post reaches influence whether it is considered dangerous enough to remove?

Facebook’s content moderation policies lean more toward protecting users’ “safe space” than toward users’ ability to say whatever they want. Its policy states that while differing opinions can enhance discussion of tough topics, Facebook “may remove certain kinds of sensitive content or limit the audience that sees it… to help balance the needs, safety, and interests of a diverse community.” The language of this policy reflects a desire to foster an environment of like-minded views and protect users’ sense of security while still weighing the extent to which diverse, controversial views contribute to political discourse. One must wonder whether public forums like Facebook should be a “safe space” when it comes to political views, and whether it is more beneficial or detrimental to be confronted with troublesome and distasteful comments. Reddit is slightly different because users create the subreddits, and each subreddit becomes a “safe space” of sorts for those who hold the views voiced within it; only users who subscribe to a subreddit receive notifications from it. Similarly, Facebook lets users filter what they personally see. If both networks offer these narrowing options, should either take down posts or forums unless they are directly violent or harass another user?

Clearly, this is a multifaceted issue. Because social media is still so new, it is difficult to discern exactly what these communities should be and how far they should allow discourse to go. The differing approaches, and the controversies that have arisen, at two of the largest social networks show how complicated it can be to judge when rights have been violated. The people who govern social media platforms are not legal authorities and are not necessarily trained to navigate legal grey areas. With this in mind, who, exactly, should be able to judge the appropriateness of online dialogue?

Emily is a junior from Fishers, IN. She is majoring in Economics and minoring in Chemistry and Spanish. Emily is the Treasurer for College Republicans, a member of Alpha Chi Omega, and a member of Timmy Global Health.