
Content Moderation and Emotional Trauma

By Megan Fritts
15 Mar 2022
[Image: a wall of TVs, each displaying something different]

In the wake of the Russian invasion of Ukraine, which has been raging since February 24, 2022, Facebook (now known as “Meta”) recently announced changes to some of its content-moderation rules. In particular, Meta will now allow some calls for violence against “Russian invaders,” though the company emphasized that credible death threats against specific individuals would still be banned.

“As a result of the Russian invasion of Ukraine we have temporarily made allowances for forms of political expression that would normally violate our rules like violent speech such as ‘death to the Russian invaders.’ We still won’t allow credible calls for violence against Russian civilians,” spokesman Andy Stone said.

This recent announcement has reignited discussion of the rationale — or lack thereof — behind content-moderation rules. The Washington Post reported on the high-level debate around social media content-moderation guidelines: how these guidelines are often reactionary, inconsistently applied, and not principle-based.

Facebook frequently changes its content moderation rules and has been criticized by its own independent Oversight Board for having rules that are inconsistent. The company, for example, created an exception to its hate speech rules for world leaders but was never clear which leaders got the exception or why.

Still, politicians, academics, and lobbyists continue to call for stricter content moderation. Take, for example, the “Health Misinformation Act of 2021,” introduced by Senators Amy Klobuchar (D-Minnesota) and Ben Ray Luján (D-New Mexico) in July 2021. This bill, a response to online misinformation during the COVID-19 pandemic, would revoke certain legal protections for any interactive computer service, e.g., social media websites, that “promotes…health misinformation through an algorithm.” The purpose of the bill is to incentivize internet companies to combat the spread of misinformation through more aggressive content moderation.

What is often left out of these discussions, however, is the means by which content moderation happens. It is often assumed that such a monumental task must be left to algorithms, which can scour mind-numbing amounts of content at breakneck speed. However, much of the labor of content moderation is performed by humans. And in many cases, these human content moderators are poorly paid workers in developing nations. For example, employees at Sama, a Kenyan technology company that directly employs Facebook’s Kenya-based content moderators, “remain some of Facebook’s lowest-paid workers anywhere in the world.” While U.S.-based moderators are typically paid a starting wage of $18/hour, Sama moderators make an average of $2.20/hour. And that figure reflects a pay increase granted only a few weeks ago; prior to it, Sama moderators made $1.50/hour.

Such low wages, especially for labor outsourced to poor or developing nations, are nothing new. Content moderation, however, can be a particularly harrowing — in some cases, traumatizing — line of work. In their paper “Corporeal Moderation: Digital Labour as Affective Good,” Dr. Rae Jereza interviews one content moderator, Olivia, about her daily work, which includes identifying “non‐moving bod[ies],” visible within a frame, “following an act of violence or traumatic experience that could reasonably result in death,” so that videos containing dead bodies can be flagged as disturbing content. Olivia confesses to watching violent or otherwise disturbing content before her shifts in an effort to desensitize herself to the material she would have to pick through as part of her job. The content she was asked to moderate ranged over many categories, including “hate speech, child exploitation imagery (CEI), adult nudity and more.”

Many kinds of jobs involve potentially traumatizing duties: military personnel, police, first responders, slaughterhouse and factory farm workers, and social workers all work jobs with high rates of trauma and other kinds of emotional and psychological distress. Some of these jobs are also compensated very poorly — factory and industrial farms, for example, primarily hire immigrants (many undocumented) willing to work for pennies on the dollar in dangerous conditions. Poorly compensated, high-risk jobs tend to be filled by people in the most desperate circumstances, and these workers often end up in dangerous employment situations that they are nevertheless unable or unwilling to leave. Such arrangements may constitute exploitation: someone exploits another person when they take unfair advantage of the other’s vulnerable state. But not all instances of exploitation leave the exploited person worse off, all things considered. The philosopher Jason Brennan describes the following case of exploitation:

Drowning Man: Peter’s boat capsizes in the ocean. He will soon drown. Ed comes along in a boat. He says to Peter, “I’ll save you from drowning, but only if you provide me with 50% of your future earnings.” Peter angrily agrees.

In this example, the drowning man is made better off even though his vulnerability was taken advantage of. As in this case, certain unpleasant or dangerous lines of work may be exploitative yet may ultimately leave the exploited employees better off. After all, most people would prefer poor working conditions to life in extreme poverty. Still, there seems to be a clear moral difference between different instances of mutually beneficial exploitation. Requiring interest on a loan to a financially desperate acquaintance may be exploitative to some extent, but it is surely not as morally egregious as forcing someone to give up their child in exchange for saving their life. What we demand in exchange for the benefit matters morally. Can it even be permissible to demand emotional and mental vulnerability in exchange for a living wage (or possibly less)?

Additionally, there is something distinctive about content moderation: the traumatic material moderators view on any given day is not a potential hazard of the job — it is the whole job. How should we think about the permissibility of hiring people to moderate content too disturbing for the eyes of the general public? How can we ask some people to weed out traumatizing, pornographic, racist, and threatening posts so that others don’t have to see them? Fixing the low compensation rates may help with some of the sticky ethical issues concerning this sort of work. Yet it is unclear whether any amount of compensation can truly make hiring people for this line of work permissible. How can you put a price on mental well-being, on humane sensitivity to violence and hate?

On the other hand, the alternatives are similarly bleak. There seem to be few good options when it comes to cleaning up the dregs of virtual hate, abuse, and shock-material.

Megan is an Assistant Professor of Philosophy at the College of St. Scholastica in Duluth, MN. Her research interests span a wide array of topics, including technology ethics, human agency, and the work of Nietzsche and Kierkegaard.