
Fighting Obscenity with Automation

By Conner Gordon
21 Jul 2016

When it comes to policing offensive content online, Facebook’s moderators often have their work cut out for them. With well over a billion users on the platform, filtering out content that ranges from pornographic images to videos promoting graphic violence and extremism is a never-ending task. For the most part, this job falls to teams of staffers who spend their days sifting through offensive content manually. The decisions of these staffers – which posts get deleted, which posts stay up – would be controversial in any case. Yet the politically charged context of content moderation in the digital age has left some users feeling censored by Facebook’s policies, sparking a debate about automated alternatives.

Take, for example, recent discussions about police brutality and Black Lives Matter on the social media platform. As reported by NPR’s Aarti Shahani, Facebook users on both sides of the debate have expressed frustration at what they perceive as inconsistent moderation policy. One user of color, for example, noted that an innocuous post expressing solidarity with the black community was temporarily taken down by Facebook’s moderators for containing offensive content. Another expressed frustration that a bloody illustration of a masked man slitting a policeman’s throat was allowed to remain on the site. Such episodes have left some users questioning the efficacy of Facebook’s moderation policy, especially when it deals with the politically charged speech that often makes its home on the platform.

Concerns about bias in hate speech moderation have become all the more relevant as observers question the objectivity of other forms of human-led content moderation at the social media giant. In May, for example, Gizmodo reported that former employees of Facebook admitted prioritizing liberal news topics on the social network’s trending news feature, burying conservative voices in the process. The admission of bias would be a problem on its own, especially for a site that plays such a fundamental role in bringing news to the public. Yet Facebook’s role as a host for millions of informal debates on a variety of issues makes accusations of editorial bias all the more damaging.

Questions surrounding human screening of offensive speech are only reinforced by the evolving problems such measures are meant to confront. As noted by al-Bab’s Brian Whitaker, purveyors of hate speech have already introduced varying degrees of automation, including automated Twitter accounts, to spread hate and vitriol. Such tactics will no doubt challenge the model of using humans to moderate instances of hate speech, especially when faced with the near-limitless capacity of automated hate speech to evade censure.

In response to the pressures of human-led content moderation, some have looked to automating the process as a viable alternative. Instead of humans combing through social media posts on a time-consuming, case-by-case basis, algorithms would handle editorial decision-making automatically. Developing such software could help ease the load on human content moderators, making the endless task of screening online media for offensive content that much easier. Such initiatives have benefits for existing content moderators, as well. Some working in the industry have noted how damaging it is to view graphic, violent images for eight hours a day; increased automation, then, could reduce the mental strain on these workers.
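To make the idea concrete, here is a minimal, purely illustrative sketch of what rule-based screening might look like, written in Python with made-up patterns. It is not a description of any company’s actual system, which would rely on far richer signals than a short list of hand-written rules.

```python
import re

# Purely illustrative: a toy rule-based screener with hypothetical patterns.
# Real moderation software uses far richer signals than a short rule list.
BLOCKED_PATTERNS = [
    re.compile(r"\bkill\s+all\s+\w+", re.IGNORECASE),  # hypothetical rule
    re.compile(r"\bjoin\s+isis\b", re.IGNORECASE),     # hypothetical rule
]

def flag_post(text: str) -> bool:
    """Return True if the post matches any blocked pattern."""
    return any(pattern.search(text) for pattern in BLOCKED_PATTERNS)

for post in ["Standing in solidarity with the black community.",
             "Everyone should join ISIS today."]:
    print(flag_post(post), "-", post)
```

Even this toy version makes the trade-off visible: the rules apply instantly and uniformly, but they only catch what their authors thought to write down.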

By some accounts, companies have already begun investigating these options; in June, Reuters reported that Facebook and Google had begun using software to automatically detect and delete extremist videos. And while such efforts are not yet widespread, they present an appealing option in what has become a perpetual struggle for many websites and social media networks.

While the creation of automated hate speech screeners could address the problems sites like Facebook continue to encounter, such tools do not come without drawbacks. On a surface level, the utility of such software must be questioned – especially given how quickly language shifts online. In recent months, for example, neo-Nazis have begun using an “echo,” a series of three parentheses surrounding a name, to target Jewish public figures for harassment. As pointed out by Mic’s Cooper Fleishman and Anthony Smith, Twitter’s own search function has difficulty finding tweets that include the echo, making the problem hard to track for affected users. To be effective, then, an automated hate speech blocker would have to be continually updated to recognize all forms of hate speech online – speech that, as the echo shows, can quickly arise in new, and increasingly abstract, forms.
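As a rough illustration of why such markers slip past ordinary tools, consider a simple pattern check for the echo. This is a hypothetical Python sketch, not a claim about how Twitter’s search actually works; the point is that search tooling often ignores punctuation, so a screener has to be told about each new convention explicitly.

```python
import re

# The "echo" wraps a name in triple parentheses, e.g. (((Name))).
# Punctuation like this is often stripped by search tooling, so a
# pattern-based screener must encode the convention explicitly.
ECHO_PATTERN = re.compile(r"\(\(\([^()]+\)\)\)")

def contains_echo(text: str) -> bool:
    """Return True if the text contains a triple-parentheses echo."""
    return bool(ECHO_PATTERN.search(text))

print(contains_echo("Have you seen what (((Smith))) wrote?"))  # True
print(contains_echo("Have you seen what Smith wrote?"))        # False
```

The fragility is the point: as soon as harassers switch to a new marker, a pattern list like this is already out of date.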

Beyond issues of utility, however, are questions of whether automating hate speech screening is ethically advisable. Unlike a team of human screeners, such as the people Facebook employs, automated software could be harder to correct when the process goes wrong. A human screener deals with instances of hate speech on a case-by-case basis, making minute corrections relatively easy, albeit time-consuming. And as Shahani’s article notes, human screeners are also able to take context into account – in Facebook’s case, determining whether an offensive image was shared to condemn or glorify the underlying message. It remains to be seen whether an automated screener could approach similar situations with the same finesse.

Forms of automation are also prone to the same biases that have driven complaints about social media content moderation. Certainly, implementing an algorithm to delete offensive content could reduce the chances of individual workers letting political bias determine what counts as offensive. Yet little stands in the way of the software’s creators intentionally or unwittingly coding similar biases into the software itself. In this regard, while it may solve some problems posed by human-led content moderation, an automated approach carries its own potential issues of bias.

The questions surrounding automated content moderation also underscore the changing role of private companies as guardians of speech. Teleread’s Chris Meadows notes that, in contrast to government-enforced censorship, companies have more flexibility in determining which kinds of speech should be prohibited on their products. However, Meadows also argues that private forms of communication, such as social media, have expanded to the point where heavy-handed speech policies “can take on some of the same worrisome qualities as government censorship of old media.” With non-governmental sites like Facebook playing such a vital role in public discourse, then, the stakes of adopting automated technologies to moderate speech are even higher.

In this regard, the benefits offered by automated content moderation are not without their own pitfalls. Certainly, automation could help reduce accusations of editorial bias and lighten the strain on workers often forced to confront the internet’s most offensive and graphic content. However, it remains to be seen whether such automation could approach situations with the same finesse as a human screener. Indeed, that skillfulness seems critical to the task of content moderation in the first place. Striking the delicate balance between moderation and expression is not easy, especially regarding politically sensitive topics. And while automation could make this balance more standardized, it certainly would not end the debate over how sites moderate expression.

Conner was a Graduate Fellow at the Prindle Institute from 2016-2018. Conner's writing focuses on memory, politics and culture. He is currently an MFA candidate at the University of Oregon.