
Facebook Live’s Violence Problem

By Rachel Robison-Greene
31 May 2017

On the evening of Easter Sunday, 74-year-old Cleveland resident Robert Godwin was enjoying a walk through his neighborhood after a holiday meal with his family when he was approached, at random, by self-described “monster” Steve Stephens.  Stephens, whom the media dubbed “The Facebook Killer,” blamed what was about to happen to Godwin on his broken relationship with his girlfriend before shooting Godwin in the head, killing him instantly.

The case made national headlines when Stephens posted a video of the crime to his personal Facebook account.  In a series of rambling posts on the social media site, Stephens claimed that Godwin was the fourteenth person he had killed, though police could not verify this claim.  He also attempted to explain his behavior, claiming that he had snapped in response to a difficult relationship with his girlfriend.  A $50,000 reward was offered for information leading to Stephens’s apprehension.  Police tracked him down and, as they approached to arrest him, he shot and killed himself.

When reached for comment, representatives of Facebook had this to say: “We work hard to keep a safe environment on Facebook, and are in touch with law enforcement in emergencies when there are direct threats to physical safety.”  Facebook took the video down after receiving reports about it, though the company has been criticized for not acting quickly enough.

The case has generated public debate about the role that social media companies should play in monitoring and censoring content posted by users.  In the past year, Facebook has added a feature that allows users to “go live,” sharing video in real time.  Some users have, unwisely, used this feature to film themselves committing crimes.

Some of the crimes caught on film are misdemeanors committed by young people who don’t realize that they shouldn’t post pictures of themselves drinking alcohol while underage. Others unwisely post evidence of illegal drug use. Other crimes, like the one Stephens committed, are more serious.  The deaths of Rodney Hess, Philando Castile, and Antonio Perkins were all captured on Facebook Live.  Crimes such as rape, torture, and extreme cruelty to animals have also been streamed live, and perpetrators have been known to post videos of their crimes to the site after the fact.

The main argument for the position that Facebook should take these videos down as quickly as possible concerns their violent content.  First, perpetrators should not be given a platform from which to build an infamous reputation for their crimes.  The motivation behind posting a serious crime to a social media website is somewhat mysterious, but a desire for respect or fear from the perpetrator’s social group is a likely candidate.  If the criminal truly is motivated by such considerations, justice seems to require that they be unsuccessful.  Violent people shouldn’t become well known for their violence.  This is a problem that many people have with the infamy of serial killers—they become notorious at the expense of their victims, who are defined in the public eye by the crime perpetrated against them.  A further concern is that, when violent crimes and explicit content are posted on the internet and discussed at length in public forums, other people who might be considering a similar act may be motivated to carry it out.  It isn’t clear whether empirical data supports this conclusion, but many argue that it is best to err on the side of caution.

A second argument for quickly censoring violent content has to do with what people expect to see when scrolling Facebook.  Of course, everyone has their own social circle, and the content that a person sees when they log on is largely determined by the people that they have decided to “friend.”  As a general rule, though, people don’t expect to see gruesome images when they log on.  Many people scroll their news feeds in plain view of their children, not expecting to encounter violent or otherwise explicit images that they would prefer that their children didn’t see.  Many adults have delicate constitutions themselves when it comes to these types of things, and they would prefer to refrain from using the site at all rather than be presented with content that they may find deeply unsettling.

There are also compelling arguments on the other side of the issue.  Many Facebook users are concerned about the ways in which censorship by Facebook administrators limits their free speech.  The concern here is not to protect the speech rights of a killer like Stephens, but rather that the best way to deal with hateful speech is to let it occur and then let the resulting counter-speech take its inevitable effect.

The immediate response to such a concern might be something like this: it is the government that is prevented, in many cases, from violating free speech rights, not a company like Facebook.  Facebook can operate however it wants, and if users feel that they are being unfairly restricted, they can simply not use the social network.  The response to this argument might be that it is a moral right rather than a legal right that is being appealed to here.  Like the internet in general, Facebook is a mixed bag in terms of the harms and benefits it brings to society.  However, free speech has been shown, time and time again, to be a touchstone of social justice, and Facebook provides an almost unprecedented opportunity for it to occur on a massive scale.

The idea that free speech is a touchstone of social justice underlies another main consideration in favor of keeping ostensibly violent or objectionable material up on the site and letting private users work it out.  Do we really want social media administrators to determine what kind of content is violent and offensive?  Consider the following example: a black couple falls victim to what they (and perhaps the rest of us as well) would take to be police brutality.  One of them has the presence of mind to record what is taking place and subsequently posts the captured event to Facebook to let the public be the judge of what has happened.  Such occurrences do take place.  This is an instance of violent content posted to social media that might serve an important social end.

Moreover, there is disagreement over what type of content counts as offensive or explicit.  If a woman posts a picture of herself breastfeeding her baby and someone reports it as explicit, should it be taken down?  What about a woman taking ownership of her mastectomy scars?

Finally, and perhaps most obviously, some argue that videos like the one Stephens captured and posted should remain up for purely pragmatic reasons pertaining to the apprehension and successful conviction of the criminal.  When people see such videos, there is an increased chance that someone will come forward with important information that helps lead to the perpetrator’s conviction.

Rachel is an Assistant Professor of Philosophy at Utah State University. Her research interests include the nature of personhood and the self, animal minds and animal ethics, environmental ethics, and ethics and technology. She is the co-host of the pop culture and philosophy podcast I Think Therefore I Fan.