
A (Spoiler-Free) Discussion of the Classism and Ableism of Spoilers

photograph of Star Wars robots on film set

On Friday, the first two episodes of Obi-Wan Kenobi — the latest installment in the ever-growing Star Wars franchise — were released on Disney+. The episodes went live at midnight Pacific Time, yet within minutes of their release, YouTube was rife with reaction and review videos featuring thumbnails spoiling all kinds of details from the show.

This kind of behavior isn’t the sole realm of malicious internet trolls.

Many otherwise reputable entertainment sites do the same thing, posting spoilerific headlines and thumbnails only days — or sometimes even hours — after a movie or television episode premieres. Sometimes, even the content creators themselves are guilty of this behavior. Last year’s Spider-Man: No Way Home featured many surprising cameos from the last two decades of Spider-Man films. Some of these cameos were clearly advertised in trailers preceding the film’s cinematic release, but others (arguably, the best) were preserved for theatergoers to discover on opening night. Sadly, however, Sony Pictures decided to spoil these very same cameos in the marketing for the film’s home video release, preventing anyone waiting to watch the movie at home from experiencing the same sense of surprise and wonder as theatergoers.

These spoilers are certainly annoying, but are they morally wrong? This is a question taken up by Richard Greene in his recent book Spoiler Alert!, and previously touched upon by fellow Prindle Post author A.G. Holdier. Here, however, I want to argue not only that spoilers are morally wrong, but that they are wrong precisely because they are inherently classist and ableist.

Spoilers are classist because certain barriers exist to immediately consuming entertainment upon release, and these barriers are more easily overcome by those of a higher socio-economic status.

Take, for example, the premiere episodes of Obi-Wan Kenobi. If you wanted to completely remove the risk of being spoiled for these episodes — and lived on the East Coast of the USA — you’d need to be up at 3 a.m. on Friday morning to watch them. Many people — including lower- to middle-income earners working a standard 9-to-5 job — are simply unable to do this. There are financial barriers, too. Going to the cinema isn’t cheap. The average cost of a movie ticket is $9.16, meaning that a family of four will pay more than $36.00 to see the latest release on the big screen (ridiculously expensive popcorn not included). For many families, then, waiting for the home video release (where a movie can be rented for less than five dollars) is the only financially viable way of enjoying new movies.

Spoilers are ableist for similar reasons. While cinemas strive to provide better accessibility for those with mobility issues and audio and visual impairments, there are still many people for whom the theatergoing experience is unattainable. Those who are neurodiverse, have an intellectual disability, are immunocompromised, or suffer from ADHD are often unable to enjoy films during their theatrical run, and must instead wait for these movies to come to home video. Spoilers strip these individuals of the chance to enjoy the very same surprises as those who can attend theaters.

The current pandemic provides yet another reason why someone may avoid the theater. Released on December 17, 2021, Spider-Man: No Way Home arrived just as the Omicron variant was beginning to spread through the U.S. — ultimately leading to the highest-ever daily COVID case count just a few weeks later. For many people, seeing a movie in the cinema simply wasn’t worth the risk of spreading an infection that could greatly harm — and possibly even kill — their fellow attendees. Yet these individuals — those who sacrificed their own enjoyment in order to keep others safe — are the ones who suffer most when a company like Sony Pictures releases home video trailers spoiling some of the biggest cameos of the film.

As we’ve seen, spoilers disproportionately affect those who are less well-off, those who are less able, and those who are simply trying to do what’s right in the midst of a global pandemic.

But are spoilers really all that harmful? It would seem so. Studios clearly understand the entertainment value of surprise; it’s why they fiercely guard plot details and issue watertight non-disclosure agreements to cast and crew. And we can appreciate the reasons for this. There’s nothing quite like the unanticipated return of a favorite character, or a delicious plot twist that — despite your countless speculations — you never saw coming. Further, as Holdier previously noted, spoilers prevent us from taking part in a shared community experience — and may cause us to feel socially excluded as a result.

We might justify this harm on consequentialist grounds if there were some greater good to be achieved. But there isn’t. It’s not entirely clear why entertainment sites or YouTube reviewers feel the need to wantonly spoil details of a new show or movie. While there’s obviously a financial motive in gaining clicks and views, it’s unclear how sharing spoilerific details in a headline or thumbnail furthers this end (especially since burying such details in the middle of an article or video would surely force people to click or view more).

Some might claim that they prefer to know plot details in advance — and there’s even evidence suggesting that spoilers might cause certain people to enjoy some stories more. But here’s the thing: you only get one chance to enjoy a story spoiler-free, and we should let people make this choice for themselves. The kinds of spoilers discussed here — those thrust to the top of a newsfeed, or to the main page of YouTube, or aired on network television — are unavoidable. They don’t give people a choice. What’s more, these spoilers disproportionately harm the underprivileged — and it’s the inherent classism and ableism of these spoilers that makes them so morally wrong.

When It Comes to Privacy, We Shouldn’t Have to “EARN-IT”

photograph of laptop with a lock with keys on it

At the moment, the subject on everyone’s minds is COVID-19, and for good reason: the number of infected and dying in the United States and around the world is growing every day. But as Kenneth Boyd has pointed out, a number of other subjects are being ignored. There is a massive locust swarm devastating crops in East Africa. There is an ongoing oil war driving gas prices down and decimating financial markets. And, in the United States, Congress is considering a bill that would have significant negative impacts on privacy and free speech on the internet. The bill in question is the EARN-IT Act, and the reason it is not capturing popular attention is obvious: viruses are scary, fast-moving, and make their way into people’s homes. This bill is complex, and understanding its ramifications for people’s rights to privacy and free speech requires a good deal of legal context.

But first, what is the EARN-IT Act? Legislators are clearly not marketing it as an attack on privacy and free speech, since such a bill would be widely unpopular. Instead, the EARN-IT Act is presented as a necessary measure to combat the widespread problem of child pornography or, as the act would admirably rename it, “child sexual abuse materials” on the internet. This is a big problem. Right now, a ring of child abusers using the encrypted messaging app Telegram is being uncovered in South Korea. Proponents of the bill view the encryption used by Telegram, by other apps like WhatsApp, and soon by Facebook as a tool for child abusers to evade government detection and prosecution. They see the owners of these apps as neglecting their responsibility to monitor the content going through their servers by encrypting that content so even they cannot see it. Essentially, these companies seem to know that child abuse is a problem on their platforms, and instead of putting in the effort to find and report it, they simply blindfold themselves by encrypting their users’ content.

So how will the EARN-IT Act resolve this seemingly willful ignorance and bring child abusers to justice? Well, here is where the issue gets complex and requires legal context. The act creates a government committee tasked with drawing up “best practices” for companies to follow to minimize the spread of child sexual abuse materials on their websites. These recommendations would be effectively binding: if companies failed to follow them, they could lose something called “Section 230 immunity,” which ordinarily keeps them from being prosecuted when child sexual abuse materials are found on their websites. Right now, if the government finds these materials on a hard drive belonging to you or me, we would go to jail for at least five years. But if those same materials are found on Facebook’s or Telegram’s hard drives, the site owners will not go to jail, all due to that Section 230 immunity. Understanding why such a difference makes any sense requires understanding the history behind it and the distinction between speakers (the ones who create and share child sexual abuse materials) and distributors (sites like Facebook or Telegram that child abusers may use to share the evidence of their abuse).

In legislation prior to the internet, the legal burden for illegal speech (which, though it sounds weird to say, includes images) fell only on publishers and speakers, not distributors. If a book containing illegal content was sold in a bookstore (a “distributor”), the bookstore would not be responsible; only the author (the “speaker”) of the content and the publisher would be. Obviously, the author would know he broke the law, and presumably his publisher should have had the sense to check what it was publishing. But the store that sold the book could not bear this responsibility, since it might sell thousands upon thousands of titles and could not spend the time checking each one. If the government put that responsibility on bookstores, owners might be afraid to sell more titles than they could reasonably read through and check themselves. Fewer ordinary writers would be able to get their works to their audiences. So, many authors would not bother writing, knowing their books would never be sold. While the government would never directly force them to stop speaking, authors would be indirectly silenced. As Supreme Court Justice William Brennan put it in the Court’s unanimous opinion in Smith v. California, the law cannot “have the collateral effect of inhibiting the freedom of expression, by making the individual the more reluctant to exercise it.”

The question, then, is whether Facebook or Telegram should count as distributors or publishers. In 1996, Congress decided the issue with Section 230 of the Communications Decency Act. In this section was the following provision: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Let’s break this down. An “interactive computer service” would be any website or app people share content on (like Facebook or Telegram). The “provider” would be the owner, and “user[s]” would be, of course, anyone who used the site. Any other “information content provider” would be another person sharing information on the “interactive computer service.” If someone (the “information content provider”) posts illegal content on Facebook (the “interactive computer service”), then neither the owners of Facebook (the “provider”) nor someone who reshares that content (a “user”) is legally responsible for it. They are not legally responsible because they are not “treated as the publisher or speaker” of that content. So, when child sexual abuse materials are shared using Facebook’s or Telegram’s servers, those companies have immunity.

The problem is that this immunity gives sites no incentive to find and remove these materials. There is no penalty for allowing them to spread, so long as the site owners never see them. And, with encryption, they can’t see them. One binding “recommendation” the EARN-IT Act’s committee might make is to require sites to build a “backdoor” into their encryption that would allow the government to bypass it. One of the bill’s main sponsors, Senator Lindsey Graham, has said that he intends for the act to end encryption like this for all websites. In reference to Facebook’s plans to institute end-to-end encryption of its messaging app, he has said, “We’re not going to go blind and let this abuse go forward in the name of any other freedom.”

Essentially, if two people on Telegram are texting, it is as though they are both going into a locked house to talk, where only they have the keys; these keys are unique to that particular house. A “backdoor” would be an unlocked entrance to every house, through which anyone who knew about it could enter and listen to the conversations people are having. There are two serious problems with this proposal: first, the government would be able to see any information from users, without any warrant and without checking with the site owners; second, since there is no way to guarantee that only the government ever finds out about such a backdoor, anyone who discovered it could access all of your personal information online. Privacy on the internet would quickly disappear.
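To make the locked-house analogy concrete, here is a minimal Python sketch, using the pyca/cryptography library, of one way a backdoor can be implemented: key escrow, where every conversation key is also stored encrypted under a master key held by a third party. This is an invented illustration, not how Telegram or WhatsApp actually work (real end-to-end messaging uses asymmetric key exchange), but it shows why a single master key unlocks every “house.”

```python
# A toy sketch of key escrow, NOT a real messaging protocol.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# End-to-end encryption: each conversation has its own key (the unique
# house key). Only the participants hold it; the service sees ciphertext.
conversation_key = Fernet.generate_key()
channel = Fernet(conversation_key)
ciphertext = channel.encrypt(b"meet at noon")

# The "backdoor": every conversation key is also encrypted under a master
# (escrow) key held by some third party -- a hypothetical government agency
# here. The service stores this escrowed copy alongside the ciphertext.
escrow_master_key = Fernet.generate_key()
escrow = Fernet(escrow_master_key)
escrowed_copy = escrow.encrypt(conversation_key)

# Anyone holding the master key -- lawfully or not -- can recover the
# conversation key and read the message, without either participant knowing.
recovered_key = escrow.decrypt(escrowed_copy)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'meet at noon'
```

The last two lines are the crux: nothing in the scheme distinguishes a warrant-bearing investigator from a thief who has stolen the escrow key, which is exactly the second problem noted above.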

Now, it is important to remember that a recommendation to build backdoors into all websites is not a necessary consequence of the EARN-IT Act. The mere principle that the government could take away our privacy on the internet may be objectionable, but nothing in the act guarantees that anyone will actually lose that privacy. Such an abridgment of our right to privacy would occur only if the committee ultimately decided to include backdoors in its recommendations. One of the bill’s other sponsors, Senator Richard Blumenthal, has said this would not be the case. But there is nothing stopping the committee from making such a recommendation either, and that is where the trouble lies. If the committee acts in good faith, doing what is right and respecting our right to privacy, there will be no problem.

But, of course, politicians and governmental committees do not always act in good faith. The PATRIOT Act was enacted in the wake of 9/11, ostensibly to fight terrorism. As we all know, there was a darker side to this act, including the creation of a number of programs that allowed widespread wiretapping of ordinary citizens, among other violations of people’s rights. None of these harms were actual until they were. “Power corrupts, and absolute power corrupts absolutely” is so common a quotation as to have become proverbial. All the EARN-IT Act’s committee has to do to end privacy on the internet is make a simple recommendation and threaten companies with the loss of Section 230 immunity. And since, without Section 230 immunity, site owners could face serious jail time, sites would either have to manually check every post, every text, and every image going through their servers (a virtual impossibility at the scale of internet content sharing) or would have to end encryption as instructed.

The internet is a wonderful and horrible thing, much like the human beings who compose it. The ability to communicate with anyone around the world is an amazing thing, and to be able to do so privately is even more amazing. But this amazing technology can, like every technology, be used both for good and for evil. Are we willing to sacrifice our ability to communicate privately online in order to wholly eliminate child sexual abuse on the internet? What value does privacy really have? The proposal and passage of bills like the EARN-IT Act threaten some of our most fundamental rights, both to speech and to privacy. Like the PATRIOT Act before it, it coats this dangerous abridgment of our rights in a veneer of justice, telling us that the cost to our freedom is worth it to right some wrong. As Graham would have it, we cannot “let this abuse go forward in the name of any other freedom.” But we can, and we must. If privacy is to be a true right, then it cannot be “earned.” The EARN-IT Act would have our right to privacy reduced in this way, and so it cannot be supported unless the powers of the proposed committee are sharply limited. Our rights are unalienable. The government’s right to limit our rights is not. If anyone, citizens or government, needs to “earn” their rights, it is them, not us.

Nasty, Brutish and Online: Is Facebook Revealing a Hobbesian Dystopia?

Mark Zuckerberg giving a speech against a blue background

The motto and mission of Facebook — as Mark Zuckerberg (founder and CEO), Facebook spokespeople, and executives have repeated over the years ad nauseam — is to “make the world a better place by making it more open and connected.” The extent to which Facebook has changed our social and political world can hardly be overstated. Yet, over the past several years, as Facebook has grown into a behemoth with currently 2.2 billion monthly and 1.4 billion daily active users worldwide, the problems that have emerged from its capacity to foment increasingly hysterical and divisive ideas, to turbocharge negative messages and incendiary speech, and to disseminate misinformation raise serious questions about the ideal of openness and connectedness.

The problems, now well documented, that have attended Facebook’s meteoric rise indicate that there has been a serious, perhaps even deliberate, lack of critical engagement with what being ‘more open and connected’ might really entail, in terms of how those ideals can manifest themselves in new, powerful, and malign ways. The question here is whether Facebook is, or is able to be, as Zuckerberg unwaveringly believes, a force for good in the world; or whether, rather, it has facilitated, even encouraged, some of the baser, darker aspects of human nature and human behavior to emerge in a quasi-Hobbesian “state of nature” scenario.

Thomas Hobbes was a seventeenth-century social contract theorist. One of the central tenets of his political philosophy, with obvious implications for his view of the moral nature of people, was that in a “state of nature” — that is, without government, laws, or rules to which humans voluntarily (for our benefit) submit — we would exist in a state of aggression, discord, and war. Hobbes famously argued that, under such conditions, life would be “nasty, brutish, and short.” He thought that morality emerged when people were prepared to give up some of their unbridled freedom to harm others in exchange for protection from being harmed by others.

The upside was that legitimate sovereign power could keep our baser instincts in check and could lead to a relatively harmonious society. The social contract, therefore, is a rational choice made by individuals for their own self-preservation. This version of the nature and role of social organization does, to be sure, rest on a bleak view of human nature. But was Hobbes in any way right that a basic aspect of human nature is cruel and amoral? And does this have anything to do with the kinds of behaviors that have emerged on Facebook through its ideal of fostering openness and connectivity, largely free from checks and controls?

Facebook has recently been forced to respond to questions about its massive surveillance operation; about data breaches such as the Cambridge Analytica scandal; about the use of the platform to spread misinformation and propaganda to influence elections; and about its use for stoking hatred, inciting violence, and aiding genocide. Yet Mark Zuckerberg remains optimistic that Facebook is a force for good in the world — part of the solution rather than the problem.

In October 2018, PBS’s Frontline released a two-part documentary entitled The Facebook Dilemma, in which several interviewees claimed that, from unique positions of knowledge ‘on the ground’ or ‘in the world,’ they had tried to warn Facebook about various threats of propaganda, fake news, and other methods being used on the platform to sow division and incite violence. The program meticulously details opportunities, repeatedly missed or ducked, for Facebook company executives, and Mark Zuckerberg himself, to comprehend and take seriously the egregious nature of some of these problems.

When forced to speak about these issues, Facebook spokespeople and Zuckerberg himself have consistently repeated the line that they were slow to act on threats and slow to understand how people with pernicious agendas were using Facebook. This is doubtless true, but to say that Facebook was merely unsuspecting or inattentive to the potential harms the platform might attract is putting it very mildly, and it casts Facebook’s response, or lack thereof, as rather benign. While not making the company blameless exactly, the framing appears designed to neutralize blame: ‘we are only to blame insofar as we didn’t notice; and we are not really to blame, because we didn’t notice.’

Though Facebook does take some responsibility for monitoring and policing what is posted on the site (removing explicit sexual content, sexual abuse material, and clear hate speech), it has taken a very liberal view of moderating content. From this perspective, it could certainly be argued that the company is to some extent culpable in the serious misuse of its product.

The single most important reason that so many malign uses of Facebook have been able to occur is the laxness of editorial control over what appears on the site and how it is prioritized or shared, taken together with Facebook’s utterly unprecedented capacity to offer granular, finely targeted advertising. It may be that Facebook has a philosophical defense for taking such a liberal stance, such as championing and defending free speech.

Take, for example, Facebook’s ‘newsfeed’ feature. Tim Sparapani, Facebook Director of Public Policy from 2009 to 2011, told Frontline, “I think some of us had an early understanding that we were creating, in some ways, a digital nation-state. This was the greatest experiment in free speech in human history.” Sparapani added, “We had to set up some ground rules. Basic decency, no nudity and no violent or hateful speech. And after that, we felt some reluctance to interpose our value system on this worldwide community that was growing.” Facebook has consistently fallen back on the ‘free speech’ defense, but it is disingenuous for the company to claim to be merely a conduit for people to say what they like, when the site’s algorithms, determined by (and functioning in service of) its business model, play an active role.

In the Facebook newsfeed, the more hits a story gets, the more the site’s algorithms prioritize it. Not only is there no mechanism here for differentiating truth from falsehood, or benign stories from pernicious ones, but people are more likely to respond (by ‘liking’ and ‘sharing’) to stories making outrageous or hysterical claims — precisely the stories that are less likely to be true and more likely to cause harm.
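To see how this dynamic can be amplified without anyone intending it, consider the toy Python sketch below. The stories and weights are invented, and Facebook’s actual ranking system is proprietary and far more complex; the point is simply that a purely engagement-based score has no input representing truth.

```python
# A toy model of engagement-driven feed ranking. The scoring rule and
# stories are invented; this is not Facebook's actual algorithm.
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    likes: int
    shares: int
    is_true: bool  # known to us, but invisible to the ranking function

def engagement_score(story: Story) -> int:
    # Shares are weighted more heavily: each share spreads the story further.
    return story.likes + 3 * story.shares

feed = [
    Story("Council passes routine budget", likes=40, shares=2, is_true=True),
    Story("OUTRAGE: shadowy cabal behind budget!", likes=900, shares=400, is_true=False),
]

# Rank purely by engagement; note that is_true plays no role whatsoever.
for story in sorted(feed, key=engagement_score, reverse=True):
    print(engagement_score(story), story.headline)
```

Run it and the false, outrage-driven story tops the feed by a wide margin, 2,100 points to 46; the field recording accuracy simply never enters the calculation.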

Roger McNamee, an early Facebook investor, told Frontline: “…In effect, polarization was the key to the model – this idea of appealing to people’s lower-level emotions; things like fear and anger to create greater engagement and, in the context of Facebook, more time on site, more sharing, and therefore, more advertising value.” Because Facebook makes its money by micro-targeted advertising, the more engagement a story gets, the better the product Facebook has to sell to advertisers, who can target individuals based on what Facebook learns about them from their active responses. It is therefore in Facebook’s interest to cause people to react.

Facebook profits when stories are shared, and it is very often the fake, crazy stories, and those with the most far-flung rhetoric, that are most shared. But why should it be the case that people are more likely to respond to such rhetoric? This brings us back to Hobbes and the question about the ‘darker’ aspects of human nature: is there something to be gleaned here about what people are like, about what they will say and do if no one is stopping them?

The ‘real-world’ problems associated with fake news (such as violence in Egypt, Ukraine, the Philippines, and Myanmar) have emerged in the absence of a guiding principle: an epistemic foundation, in the form of a set of ethics, based on a shared conception of civilized discourse and a shared conception of the importance of truth. In this analogy, the editorial process might be thought of as a kind of social contract, and the effects of removing it might be read as having implications for what humans in a ‘state of nature,’ where behavior is unchecked, are really like. Perhaps too much openness and connectivity does not, after all, necessarily make the world a better place, and might sometimes make it a worse one.

The conclusion seems unavoidable: by relaxing, removing, or failing to use all but the most basic editorial controls, Facebook has produced something like a Hobbesian state of nature. It is equally true that Facebook has facilitated, encouraged, and profited from all the nasty and brutish stuff. If the Hobbesian analogy is borne out, perhaps it is time to revisit the question of what kinds of controls need to be implemented for the sake of rational self (and social) preservation.


Is Google Obligated to Stay out of China?

Photograph of office building display of Google China

Recently, news broke that Google was once again considering developing a version of its search engine for China. Google has not offered an official version of its website in China since 2010, when it withdrew its services due to concerns about censorship: the Chinese government has placed significant constraints on what its citizens can access online, typically involving information about global and local politics, as well as information that does not paint the Chinese government in a positive light. This censorship apparatus is often referred to as “The Great Firewall of China.” One notorious example involves searches for “Tiananmen Square”: if you are outside of China, chances are your search results will prominently include information concerning the 1989 student-led protest and subsequent massacre of civilians by Chinese troops, along with the famous picture of a man standing down a column of tanks; within China, however, search results return information about Tiananmen Square predominantly as a tourist destination, with nothing about the protests.
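What building a search engine that “abides by” such requirements might amount to can be shown with a deliberately crude Python sketch. The blocklist and results below are invented, and real censorship infrastructure is vastly more sophisticated, but the basic structure is the same: results matching banned topics are silently dropped before the user ever sees them.

```python
# A toy illustration of blocklist-based search censorship.
# The banned terms and search results are invented for this example.
BANNED_TOPICS = {"1989 protest", "tank man"}

def censored_results(results):
    """Silently drop any result that mentions a banned topic."""
    return [r for r in results
            if not any(topic in r.lower() for topic in BANNED_TOPICS)]

results = [
    "Tiananmen Square: top ten sights for tourists",
    "Tank Man: the iconic photo from the 1989 protest",
    "History of the 1989 protest and its aftermath",
]
print(censored_results(results))
# ['Tiananmen Square: top ten sights for tourists']
# Only the tourism result survives, and nothing signals to the user
# that anything was removed.
```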

While the Chinese government has not lifted any of their online restrictions since 2010, Google nevertheless is reportedly considering re-entering the market. The motivation for doing so is obvious: it is an enormous market, and would be extremely profitable for the company to have a presence in China. However, as many have pointed out, doing so would seem to be in violation of Google’s own mantra: “Don’t be evil!” So we should ask: would it be evil for Google to develop a search engine for China that abided by the requirements for censorship dictated by the Chinese government?

One immediate worry is with the existence of the censorship itself. There is no doubt that the Chinese government is actively restricting its citizens from accessing important information about the world. This kind of censorship is often considered a violation of free speech: not only are Chinese citizens restricted from sharing certain kinds of information, they are also prevented from acquiring information that would allow them to engage in conversations with others about political and other important matters. That people should not be censored in this way is encapsulated in the UN’s Universal Declaration of Human Rights:

Article 19. Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

The right to freedom of expression is what philosophers will sometimes refer to as a “negative right”: it’s a right to not be restricted from doing something that you might otherwise be able to do. So while we shouldn’t say that Google is required to provide its users with all possible information out there, we should say that Google should not actively prevent people from acquiring information that they should otherwise have access to. While the UN’s declaration does not have any official legal status, at the very least it is a good guideline for evaluating whether a government is treating its citizens in the right way.

It seems that we should hold the Chinese government responsible for restricting the rights of its citizens. But if Google were to create a version of their site that adhered to the censorship guidelines, should Google itself be held responsible as well? We might think not: after all, they didn’t create the rules; they are merely following them. What’s more, the censorship would occur with or without Google’s presence, so it does not seem as though they would be violating any more rights by entering the market.

But this doesn’t seem like a good enough excuse. Google would be, at the very least, complicit: they are fully aware of the censorship laws and of how they harm citizens, and they would be choosing to actively profit by following those rules. Furthermore, it is not as if Google is forced to abide by these rules: they are not, say, a local business that has no other choice but to follow the rules in order to survive. Instead, it would be their choice to return to a market that they once left because of moral concerns. The fact that they would merely be following the rules again this time around does not seem to absolve them of any responsibility.

Perhaps Google could justify its re-entry into China in the following way: the dominant search engine in China is Baidu, which has a whopping 75% of the market share. Google, then, would be able to provide Chinese citizens with an alternative. However, unless Google is actually willing to flout censorship laws, offering an alternative hardly seems to justify their presence in the Chinese market: if Google offers the same travel tips about Tiananmen Square as Baidu does but none of its more important history, then having one more search engine is no improvement.

Finally, perhaps we should think that Google, in fact, really ought to enter the Chinese market, because doing so would fulfill a different set of obligations Google has, namely those towards its shareholders and those otherwise invested in the business. Google is a business, after all, and as such should take measures to be as profitable as it reasonably can for those who have a stake in its success. Re-entering the Chinese market would almost certainly be a very profitable endeavor, so we might think that, at least when it comes to those invested in the business, Google has an obligation to do so. One way to think about Google’s position, then, is that it is forced to make a moral compromise: it has to make a moral sacrifice, in this case knowingly engaging in censorship practices, in order to fulfill the other obligations it has towards its shareholders.

Google may very well be faced with a conflict of obligations of this kind, but that does not mean it should compromise in a way that favors profits: there are, after all, lots of ways to make money, and not everything done for a buck is a justifiable compromise. When weighing the interests of those invested in Google, a company that is by any reasonable definition thriving, against being complicit in the online censorship of a quarter of a billion people, the balance of moral consideration seems to point clearly in only one direction.

The Celebrity Nude Leak: What’s in a View?

By now, most people have heard that nude photos of nearly 100 celebrities, including actress Jennifer Lawrence, were stolen and posted to the internet by a hacker. The resultant leak has sparked both an FBI investigation and significant public outcry. On one hand, it is relatively easy to evaluate the morality of the hacker’s actions. But do those who simply view the photos share the blame?
