
Sexual Violence in the Metaverse: Are We Really “There”?

photograph of woman using VR headset

Sexual harassment can take many forms, whether in an office or on social media. Still, there might seem to be a barrier separating “us” as users of a social media account from “us” as avatars or visual representations in a game, since the latter are “virtual” whereas “we” are “real.” Even though we are prone to experience psychological and social damage through our virtual representations, it seems that we cannot, at least not directly, be affected physically. A mean comment may hurt my feelings and change my mood; I might even become physically ill. But no direct physical harm seemed possible. Until now.

Recently, a beta tester of Horizon Worlds, a VR-based platform of Meta, reported that a stranger “simulated groping and ejaculating onto her avatar.” Even more recently, additional incidents involving children have been reported. According to one article, a safety campaigner “has spoken to children who say they were groomed on the platform and forced to take part in virtual sex.” The same article describes how a “researcher posing as a 13-year-old girl witnessed grooming, sexual material, racist insults and a rape threat in the virtual-reality world.” How should we understand these virtual assaults? Sexual harassment requires no physical presence, but when we ask whether such actions represent a kind of physical violence, things get complicated, as the victim has not been violated in the traditional sense.

This problem has been made more pressing by the thinning of the barrier that separates the virtual from the physical. Mark Zuckerberg, co-founder and CEO of Meta, has emphasized the concept of “presence” as “one of the basic concepts” of the Metaverse. The goal is to make the virtual space as “detailed and convincing” as possible. In the same video, virtual items are shown that are designed to give a “realistic sense of depth and occlusion.” The Metaverse attempts to win the tech race by mimicking the physical sense of presence as closely as possible.

The imitation of the physical sense of presence is not new. Many video games also cultivate a robust sense of presence. Especially in MMO (massively multiplayer online) games, characters can commonly touch, push, or persistently follow each other, even when it is unwelcome and has nothing to do with one’s progress in the game. We often accept these actions as natural, as an obvious and basic part of the game’s social interaction. It is personal touches like these that encourage gamers to bond with their avatars. They encourage us to feel two kinds of physical presence: presence as a user playing a game in a physical environment, and presence as a game character in a virtual environment.

But these two kinds of presence mix readily, and the difference between user and avatar is easily blurred. Having one’s avatar pushed or touched inappropriately has very real psychological effects. It seems that, at some point, these experiences can no longer be considered merely “virtual.”

This line is being further blurred by the push toward Augmented Reality (AR), which places “virtual” items in our world, and Virtual Reality (VR), in which “this” world remains inaccessible to the user during the session. Unlike classic games, AR and VR have us explore the game environment mainly within one sense of presence instead of two, from the perspective of a single body. Contrary to our typical gaming experience, these new environments, like that of the Metaverse, may only work if this dual presence is removed or weakened. This suggests that our experience can no longer be thought of as taking place “somewhere else” but always “here.”

Still, at some level, dual presence remains: when we take our headsets off, “this world” waits for us. And so we return to our central moral question: can we identify an action within the embodied online world as physical? Or, more specifically, is the charge of sexual assault appropriate in the virtual space?

If one’s avatar is taken as nothing but a virtual puppet controlled by the user from “outside,” then it seems impossible to conclude that gamers can be physically threatened in the relevant sense. However, as the barrier separating users from their game characters erodes, the illusion of presence makes the avatar mentally inseparable from the user; experientially, the two become increasingly the same. Since the aim of the Metaverse is to create precisely such a union, one could conclude that sharing the same “space” means sharing the same fate.

These are difficult questions, and online spaces, as well as the concepts that govern them, are still in development. However, recent events should be taken as a warning to consider preventive measures, as these new spaces require new definitions, new moral codes, and new precautions.

Nasty, Brutish and Online: Is Facebook Revealing a Hobbesian Dystopia?

Mark Zuckerberg giving a speech against a blue background

The motto and mission of Facebook, as Mark Zuckerberg (founder and CEO), Facebook spokespeople, and executives have repeated over the years ad nauseam, is to “make the world a better place by making it more open and connected.” The extent to which Facebook has changed our social and political world can hardly be overstated. Yet, over the past several years, as Facebook has grown into a behemoth with currently 2.2 billion monthly and 1.4 billion daily active users worldwide, the problems that have emerged from its capacity to foment increasingly hysterical and divisive ideas, to turbocharge negative messages and incendiary speech, and to disseminate misinformation raise serious questions about the ideal of openness and connectedness.

The problems, now well documented, that have attended Facebook’s meteoric rise indicate that there has been a serious, perhaps even deliberate, lack of critical engagement with what being “more open and connected” might really entail, and with how those ideals can manifest themselves in new, powerful, and malign ways. The question here is whether Facebook is, or can be, as Zuckerberg unwaveringly believes, a force for good in the world; or whether, rather, it has allowed, even encouraged, some of the baser, darker aspects of human nature and human behavior to emerge in a quasi-Hobbesian “state of nature” scenario.

Thomas Hobbes was a social contract theorist of the seventeenth century. One of the central tenets of his political philosophy, with obvious implications for his view of the moral nature of people, was that in a “state of nature,” that is, without government, laws, or rules to which humans voluntarily submit for our own benefit, we would exist in a state of aggression, discord, and war. Hobbes famously argued that, under such conditions, life would be “nasty, brutish, and short.” He thought that morality emerged when people were prepared to give up some of their unbridled freedom to harm others in exchange for protection from being harmed by others.

The upside was that legitimate sovereign power could keep our baser instincts in check and could lead to a relatively harmonious society. The social contract, therefore, is a rational choice made by individuals for their own self-preservation. This account of the nature and role of social organization does, to be sure, rest on a bleak view of human nature. Was Hobbes in any way right that a basic aspect of human nature is cruel and amoral? And does this have anything to do with the kinds of behaviors that have emerged on Facebook through its ideal of fostering openness and connectivity, largely free from checks and controls?

Though Facebook has recently been forced to respond to questions about its massive surveillance operation, about data breaches such as the Cambridge Analytica scandal, about the use of the platform to spread misinformation and propaganda to influence elections, and about its use for stoking hatred, inciting violence, and aiding genocide, Mark Zuckerberg remains optimistic that Facebook is a force for good in the world, part of the solution rather than the problem.

In October 2018, PBS’s Frontline released a two-part documentary entitled The Facebook Dilemma, in which several interviewees claimed that, from unique positions of knowledge “on the ground” or “in the world,” they tried to warn Facebook about propaganda, fake news, and other methods being used on the platform to sow division and incite violence. The program meticulously details repeatedly missed, or ducked, opportunities for Facebook executives, and Mark Zuckerberg himself, to comprehend and take seriously the egregious nature of some of these problems.

When forced to speak about these issues, Facebook spokespeople and Zuckerberg himself have consistently repeated the line that they were slow to act on threats and to understand the use of Facebook by people with pernicious agendas. This is doubtless true, but to say that Facebook was unsuspecting or inattentive to the potential harms the platform might attract is putting it very mildly, and indeed appears to cast Facebook’s response, or lack thereof, as rather benign. While not exactly making the company blameless, the line appears designed to neutralize blame: “we are only to blame insofar as we didn’t notice; also, we are not really to blame, because we didn’t notice.”

Though Facebook does take some responsibility for monitoring and policing what is posted on the site (removing explicit sexual content, sexual abuse material, and clear hate speech), it has taken a very liberal view of moderating content. From this perspective, it could certainly be argued that the company is to some extent culpable for the serious misuse of its product.

The single most important reason that so many malign uses of Facebook have been able to occur is the lax nature of editorial control over what appears on the site and how it is prioritized or shared, taken together with Facebook’s absolutely unprecedented capacity to offer granular, finely targeted advertising. It may be that Facebook has a philosophical defense for taking such a liberal stance, such as championing and defending free speech.

Take, for example, Facebook’s ‘newsfeed’ feature. Tim Sparapani, Facebook Director of Public Policy from 2009 to 2011, told Frontline, “I think some of us had an early understanding that we were creating, in some ways, a digital nation-state. This was the greatest experiment in free speech in human history.” Sparapani added, “We had to set up some ground rules. Basic decency, no nudity and no violent or hateful speech. And after that, we felt some reluctance to interpose our value system on this worldwide community that was growing.” Facebook has consistently fallen back on the ‘free speech’ defense, but it is disingenuous for the company to claim to be merely a conduit for people to say what they like, when the site’s algorithms, determined by (and functioning in service of) its business model, play an active role.

In the Facebook newsfeed, the more hits a story gets, the more the site’s algorithms prioritize it. Not only is there no mechanism here for differentiating truth from falsehood, or benign stories from pernicious ones, but people are more likely to respond (by “liking” and “sharing”) to stories with more outrageous or hysterical claims, stories which are less likely to be true and more likely to cause harm.
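To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python of engagement-weighted ranking. The scoring rule, the weights, and the example posts are invented for illustration only; they are not Facebook’s actual (proprietary) algorithm. The point is simply that a score built from interaction counts alone is blind to truth and to harm.

```python
# Hypothetical illustration of engagement-weighted feed ranking.
# The Post fields, the weights, and the example data are invented;
# this is not Facebook's actual algorithm.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    shares: int


def engagement_score(post: Post) -> float:
    # Rank purely by interaction volume; nothing in this score
    # distinguishes true stories from false or harmful ones.
    return post.likes + 2 * post.shares


feed = [
    Post("Measured local news report", likes=40, shares=5),
    Post("Outrageous, unverified claim", likes=300, shares=120),
]

# The most-engaged post is shown first, where it attracts still more
# likes and shares on the next pass: the feedback loop described above.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(post.text)
```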

Roger McNamee, an early Facebook investor, told Frontline: “…In effect, polarization was the key to the model – this idea of appealing to people’s lower-level emotions; things like fear and anger to create greater engagement and, in the context of Facebook, more time on site, more sharing, and therefore, more advertising value.” Because Facebook makes its money through micro-targeted advertising, the more engagement a story gets, the better the product Facebook has to sell to advertisers, who can target individuals based on what Facebook learns about them from their active responses. It is therefore in Facebook’s interest to cause people to react.

Facebook profits when stories are shared, and it is very often the fake, crazy stories, or those with the most far-flung rhetoric, that are most shared. But why should people be more likely to respond to such rhetoric? This brings us back to Hobbes and the question about the ‘darker’ aspects of human nature: is there something to be gleaned here about what people are like, about what they will say and do if no one is stopping them?

The ‘real-world’ problems associated with fake news, such as violence in Egypt, Ukraine, the Philippines, and Myanmar, have emerged in the absence of a guiding principle: an epistemic foundation in the form of a set of ethics based on a shared conception of civilized discourse and a shared conception of the importance of truth. In this analogy, the editorial process might be thought of as a kind of social contract, and the effects of removing it might be read as having implications for what humans are really like in a ‘state of nature,’ where behavior is unchecked. Perhaps too much openness and connectivity does not, after all, necessarily make the world a better place, and might sometimes make it a worse one.

The conclusion seems unavoidable that Facebook has provided something like a Hobbesian state of nature by relaxing, removing, or failing to use all but the most basic editorial controls. It is equally true that Facebook has facilitated, encouraged, and profited from all the nasty and brutish stuff. If the Hobbesian analogy is borne out, perhaps it is time to revisit the question of what kinds of controls need to be implemented for the sake of rational self- (and social) preservation.

 

Privacy and a Year in the Life of Facebook

Photograph of Mark Zuckerberg standing with a microphone

Mark Zuckerberg, the CEO of Facebook, declared on January 4 that he would “fix Facebook” in 2018. Since then, the year has brought scandal after scandal, and Facebook has provided a running case study in how to protect, and how to value, informational privacy. On March 17, The New York Times and The Guardian revealed that Cambridge Analytica used information gleaned from Facebook users to attempt to influence voters’ behavior. Zuckerberg had to testify before Congress and rolled out new data privacy practices. In April, the Cambridge Analytica scandal was revealed to be more far-reaching than previously thought, and in June it was revealed that Facebook shared data with other companies such as Apple, Microsoft, and Samsung. The UK fined Facebook the legal maximum for illegal handling of user data related to Cambridge Analytica. In September, a hack accessed 30 million users’ data. In November, another New York Times investigation revealed that Facebook had failed to be sufficiently forthcoming about Russian political manipulation on the site, and on December 18 more documents came out showing that Facebook offered user data, even from private messages, to companies including Microsoft, Netflix, Spotify, and Amazon.

The repeated use of data about Facebook users without their knowledge or consent, often to manipulate their future behavior as consumers or voters, has led to Facebook’s financial decline and a loss of public trust. The right to make your own decisions regarding access to information about your life is called informational privacy. The tension in discussions over the value of privacy can be articulated as one between the purported right to be left alone, on the one hand, and society’s supposed right to know about its members, on the other. The rapid rise of technology that can collect and disseminate information about individuals raises the question of whether the value of privacy should shift along with this shift in actual privacy practices, or whether greater efforts need to be devoted to protecting the informational privacy of members of society.

The increase in access to personal information is just one impact of the rise of information technology. Technological advances have also changed the meaning of personal information. For instance, commonly used apps and social media have made it easier to track a person’s physical whereabouts; moreover, Facebook’s data is so useful precisely because so much can be extrapolated about a person from seemingly unrelated behaviors, changing what sorts of information may be considered sensitive. Cambridge Analytica was able to use Facebook data to attempt to sway voting behavior because of correlations between activity on the site and political behavior. Advertising companies can take advantage of the same data to better target consumers.

When ethicists and policymakers began discussing the right to privacy, considerations centered on large and personal life choices and on protecting public figures from journalists. The aspects of our lives we would typically consider most central to the value of privacy are our health, our religious and political beliefs, and other matters deemed personal, such as romantic and sexual practices and financial situations. The rise of data analysis that comes with social media renders a great deal of our behavior potentially revelatory: what pictures we post, what posts we like, how frequently we use particular language, and so on can be suggestive of a variety of further aspects of our lives and behaviors.

If information about our behavior on platforms such as Facebook is revealing of the more traditionally conceived private domain of our lives, should this information be protected? Or should we reconceive what counts as private? One suggestion has been to acknowledge the brute economic fact of the rise of these technologies: this data is worth money. One could then abstract away from the moral value of, or right to, privacy and focus instead on ownership: if individuals own the data about themselves, they are perhaps owed the profits from the use of that data.

There are also moral reasons to protect personal data. If others have unrestricted access to an individual’s whereabouts, health information, passwords protecting financial accounts, and so on, that information could be used to harm the individual. Security and a right to privacy could thus be justified as harm prevention. They could also be justified via the right to autonomy, as data about one’s life can be used to unduly influence one’s choices. This is exacerbated by the way data changes relevance and import depending on the sphere in which it is used. For instance, health data used in your healthcare dealings has a different significance than the same data in the hands of potential employers. If individuals have less control over their personal data, this can lead to discrimination and disadvantage.

Thus there are both economic, or property, considerations and moral considerations for protecting personal data. Zuckerberg failed to “fix” Facebook in 2018, but greater transparency about protections, and regulation of how platforms can use data, would be positive steps toward respecting the value of privacy in 2019.