
Nasty, Brutish and Online: Is Facebook Revealing a Hobbesian Dystopia?


The motto and mission of Facebook, as Mark Zuckerberg (founder and CEO), Facebook spokespeople, and executives have repeated over the years ad nauseam, is to “make the world a better place by making it more open and connected.” The extent to which Facebook has changed our social and political world can hardly be overstated. Yet, over the past several years, as Facebook has grown into a behemoth with 2.2 billion monthly and 1.4 billion daily active users worldwide, the problems that have emerged from its capacity to foment increasingly hysterical and divisive ideas, to turbocharge negative messages and incendiary speech, and to disseminate misinformation raise serious questions about the ideal of openness and connectedness.

The problems, now well documented, that have attended Facebook’s meteoric rise indicate a serious, perhaps even deliberate, lack of critical engagement with what being ‘more open and connected’ might really entail – with how those ideals can manifest themselves in new, powerful, and malign ways. The question here is whether Facebook is, or can be – as Zuckerberg unwaveringly believes – a force for good in the world; or whether it has facilitated, even encouraged, some of the baser, darker aspects of human nature and human behavior to emerge in a quasi-Hobbesian “state of nature” scenario.

Thomas Hobbes was a seventeenth-century social contract theorist. One of the central tenets of his political philosophy, with obvious implications for his view of the moral nature of people, was that in a “state of nature” – that is, without government, laws, or rules to which humans voluntarily (for our benefit) submit – we would exist in a state of aggression, discord, and war. Hobbes famously argued that, under such conditions, life would be “nasty, brutish, and short.” He thought that morality emerged when people were prepared to give up some of their unbridled freedom to harm others in exchange for protection from being harmed by others.

The upside was that legitimate sovereign power could keep our baser instincts in check and could lead to a relatively harmonious society. The social contract, therefore, is a rational choice made by individuals for their own self-preservation. This version of the nature and role of social organization does, to be sure, rest on a bleak view of human nature. Was Hobbes in any way right that a basic aspect of human nature is cruel and amoral? And does this have anything to do with the kinds of behaviors that have emerged on Facebook through its ideal of fostering openness and connectivity, largely free from checks and controls?

Though Facebook has recently been forced to respond to questions about its massive surveillance operation, about data breaches such as the Cambridge Analytica scandal, about the use of the platform to spread misinformation and propaganda to influence elections, and about its use for stoking hatred, inciting violence, and aiding genocide, Mark Zuckerberg remains optimistic that Facebook is a force for good in the world – part of the solution rather than the problem.

In October 2018, PBS’s Frontline released a two-part documentary entitled The Facebook Dilemma, in which several interviewees claimed that, from unique positions of knowledge ‘on the ground’ or ‘in the world,’ they tried to warn Facebook about various threats of propaganda, fake news, and other methods being used on the platform to sow division and incite violence. The program meticulously details repeatedly missed, or ducked, opportunities for Facebook executives, and Mark Zuckerberg himself, to comprehend and take seriously the egregious nature of some of these problems.

When forced to speak about these issues, Facebook spokespeople and Zuckerberg himself have consistently repeated the line that they were slow to act on threats and slow to understand the use of Facebook by people with pernicious agendas. This is doubtless true, but to say that Facebook was merely unsuspecting or inattentive to the potential harms the platform might attract is putting it very mildly, and it implies that Facebook’s response, or lack thereof, was rather benign. While not making the company blameless exactly, the line appears designed to neutralize blame: ‘we are only to blame insofar as we didn’t notice; and we are not really to blame, because we didn’t notice.’

Though Facebook does take some responsibility for monitoring and policing what is posted on the site (removing explicit sexual content, sexual abuse material, and clear hate speech), it has taken a very liberal view of moderating content. From this perspective it could certainly be argued that the company is to some extent culpable in the serious misuse of its product.

The single most important reason that so many malign uses of Facebook have been able to occur is the lax nature of editorial control over what appears on the site, and over how it is prioritized or shared, taken together with Facebook’s unprecedented capacity to offer granular, finely targeted advertising. It may be that Facebook has a philosophical defense for taking such a liberal stance, such as championing and defending free speech.

Take, for example, Facebook’s ‘newsfeed’ feature. Tim Sparapani, Facebook Director of Public Policy from 2009 to 2011, told Frontline, “I think some of us had an early understanding that we were creating, in some ways, a digital nation-state. This was the greatest experiment in free speech in human history.” Sparapani added, “We had to set up some ground rules. Basic decency, no nudity and no violent or hateful speech. And after that, we felt some reluctance to interpose our value system on this worldwide community that was growing.” Facebook has consistently fallen back on the ‘free speech’ defense, but it is disingenuous for the company to claim to be merely a conduit for people to say what they like, when the site’s algorithms, determined by (and functioning in service of) its business model, play an active role in deciding what users see.

In the Facebook newsfeed, the more hits a story gets, the more the site’s algorithms prioritize it. Not only is there no mechanism for differentiating between truth and falsehood here, or between stories which are benign and those which are pernicious, but people are more likely to respond to (by ‘liking’ and ‘sharing’) stories with more outrageous or hysterical claims – stories which are less likely to be true and more likely to cause harm.
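To see how little it takes for such a feedback loop to arise, consider a minimal sketch of engagement-only ranking, written in Python purely for illustration. It is not Facebook’s actual code, and the weights are invented; the point is only that a score built entirely from reactions contains no signal for truth or harm.

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    likes: int = 0
    shares: int = 0
    comments: int = 0

def engagement_score(story: Story) -> int:
    # Hypothetical weights: a share counts for more than a like.
    # Nothing in this score measures accuracy or potential harm.
    return story.likes + 2 * story.comments + 3 * story.shares

def rank_feed(stories: list[Story]) -> list[Story]:
    # The most-engaged-with stories are surfaced first, where they
    # attract still more engagement on the next pass.
    return sorted(stories, key=engagement_score, reverse=True)

feed = rank_feed([
    Story("Council publishes meeting minutes", likes=40, shares=2),
    Story("Outrageous (and false) claim about a rival group",
          likes=900, shares=450, comments=600),
])
for story in feed:
    print(engagement_score(story), story.headline)
```

In this toy example the false, outrage-driven story tops the feed simply because it provokes more reactions; any system ranking on engagement alone will behave the same way, whatever its real weights.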

Roger McNamee, an early Facebook investor, told Frontline:  “…In effect, polarization was the key to the model – this idea of appealing to people’s lower-level emotions; things like fear and anger to create greater engagement and, in the context of Facebook, more time on site, more sharing, and therefore, more advertising value.” Because Facebook makes its money from micro-targeted advertising, the more engagement a story gets, the better the product Facebook has to sell to advertisers, who can target individuals based on what Facebook learns about them from their active responses. It is therefore in Facebook’s interest to cause people to react.

Facebook profits when stories are shared, and it is very often the fake, crazy stories, and those with the most far-flung rhetoric, that are most shared. But why should it be the case that people are more likely to respond to such rhetoric? This brings us back to Hobbes, and to the question about the ‘darker’ aspects of human nature: is there something to be gleaned here about what people are like – what they will say and do if no one is stopping them?

The ‘real-world’ problems associated with fake news, such as violence in Egypt, Ukraine, the Philippines, and Myanmar, have emerged in the absence of a guiding principle – an epistemic foundation in the form of a set of ethics based on a shared conception of civilized discourse and a shared conception of the importance of truth. In this analogy, the editorial process might be thought of as a kind of social contract, and the effects of removing it might be read as having implications for what humans in a ‘state of nature,’ where behavior is unchecked, are really like. Perhaps too much openness and connectivity does not, after all, necessarily make the world a better place, and might sometimes make it a worse one.

The conclusion seems unavoidable that Facebook has provided something like a Hobbesian state of nature by relaxing, removing, or failing to use all but the most basic editorial controls. Yet it is equally true that Facebook has actively facilitated, encouraged, and profited from all the nasty and brutish stuff. If the Hobbesian analogy is borne out, perhaps it is time to revisit the question of what kinds of controls need to be implemented for the sake of rational self- (and social) preservation.