
Trump v. Facebook, and the Future of Free Speech


On July 7th, former President Donald Trump announced his intention to sue Facebook, Twitter, and Google for banning him from posting on their platforms. Facebook initially banned Trump following the January 6th insurrection, and Twitter and Google soon followed suit. Trump’s ban poses not only legal questions concerning the First Amendment, but also moral questions concerning whether social media companies owe a duty to guarantee free speech.

Does Trump have any moral standing when it comes to his ban from Facebook, Twitter, and Google? How can we balance the value of free expression with the rights of social media companies to regulate their platforms?

After the events of January 6th, Trump was immediately banned from social media platforms. In announcing the initial ban, Facebook CEO Mark Zuckerberg offered a brief justification: “We believe the risks of allowing the President to continue to use our service during this period are too great.” Following Trump’s exit from office, Facebook extended the ban to two years. Twitter opted for a permanent ban, and YouTube has banned him indefinitely.

Though this came as a shock to many, some argued that Trump’s ban should have come much sooner. Throughout his presidency, Trump regularly used social media to communicate with his base, at times spreading false information. While some found this communication style unpresidential, it arguably brought the Office of the President closer to the American public than ever before. Trump’s use of Twitter engaged citizens who might not have otherwise engaged with politics and even reached many who did not follow him. Though there is value in allowing the president to authentically communicate with the American people, Trump’s use of the social media space has been declared unethical by many; he consistently used these communiques to spread falsehoods, issue personal attacks, campaign, and fund-raise.

But regardless of the merits of Trump’s lawsuit, it raises important questions about the role that social media platforms play in modern society. The First Amendment, and its protections of free speech, apply only to federal government regulation of speech (and to state regulation of speech, as incorporated by the 14th Amendment). This protection has generally not extended to private businesses or individuals that are not directly funded by or affiliated with the government. General forums, however, such as the internet, have been considered a “free speech zone.” Though they operate on the internet, social media companies have not been granted similar “free speech zone” status. The Supreme Court has acknowledged that the “vast democratic forums of the Internet” serve an important function in the exchange of views, but it has refused to extend the responsibility to protect free speech beyond state actors, or those performing traditional and exclusive government functions. The definition of a state actor is nebulous, but the Supreme Court has drawn hard lines, recently holding that private entities which provide publicly accessible forums are not inherently performing state action. Recognizing the limits of the First Amendment, Trump has attempted to bridge the gap between private and state action in his complaint, arguing that Facebook, Twitter, and Google censored his speech due to “coercive pressure from the government” and that therefore their “activities amount to state action.”

Though this argument may be somewhat of a stretch legally, it is worth considering whether or not social media platforms play an important enough role in our lives to consider them responsible for providing an unregulated forum for speech. Social media has become such a persistent and necessary feature of our lives that Supreme Court Justice Clarence Thomas has argued that they should be considered “common carriers” and subject to heightened regulation in a similar manner to planes, telephones, and other public accommodations. And perhaps Justice Thomas has a point. About 70% of Americans hold an active social media account and more than half of Americans rely upon social media for news. With an increasing percentage of society not only using social media, but relying upon it, perhaps social media companies would be better treated as providers of public accommodations rather than private corporations with the right to act as gatekeepers to their services.

Despite Americans’ growing dependence on social media, some have argued that viewing social media as a public service is ill-advised. In an article in the National Review, Jessica Melugin argues that there is neither a strong legal nor a practical basis for treating social media entities as common carriers. First, Melugin argues that exclusion is central to the business model of social media companies, which generate their revenue by choosing which advertisements to feature. Second, forcing social media companies to allow any and all speech to be published on their platforms may be more akin to compelling speech than to preventing its suppression. Lastly, social media companies, unlike other common carriers, face consistent market competition. Though Facebook, Instagram, and Twitter appear to dominate for now, companies such as Snapchat and TikTok represent growing and consistent competition.

Another consideration which weighs against applying First Amendment duties to social media companies is the widespread danger of propaganda and misinformation made possible by their algorithmic approach to boosting content. Any person can post information, whether true or false, which has the potential to reach millions of people. Though an increasing number of Americans rely on social media for news, studies have found that those who do so tend to be less informed and more exposed to conspiracy theories. Extremists have also found a safe haven on social media platforms to connect and plan terrorist acts. With these considerations in mind, allowing social media companies to limit the content on their platforms may be justified in combating the harmful tendencies of an ill-informed and conspiracy-laden public, and perhaps even in preventing violent attacks.

Despite the pertinent moral questions posed by Trump’s lawsuit, he is likely to lose. Legal experts have argued that Trump’s suit “has almost no chance of success.” However, the legal standing of Trump’s claims does not necessarily dictate their morality, which is equally worthy of consideration. Though Trump’s lawsuit may fail, the role that social media companies play in the regulation of speech and information will only continue to grow.

Is Shaming an Important Moral Tool?


Misbehaving students at Washington Middle School last month couldn’t expect their bad behavior to go unnoticed by their peers and teachers. A list titled “Today’s Detention” was projected onto the wall of the cafeteria, making the group of students to be punished public knowledge. This particular incident made local news, but it’s just one instance of a phenomenon known as an “accountability wall.” These take different forms: sometimes they involve displays of grades or other achievements, and sometimes they focus on bad behaviors. The motivation for such public displays of information is to encourage good behavior and hard work from students.

Middle school administrators aren’t the only ones employing this strategy.  Judges around the country have participated in “creative sentencing,” using shaming to motivate the reduction or elimination of criminal behavior. For example, a district court judge in North Carolina sentenced a man convicted of domestic abuse to carry a poster around town reading, “This is the face of domestic abuse” for four hours a day, seven days in a row.  

The Internet ensures that the audience for public shaming will be wide in scope. Shaming behavior on social media ranges from photos of pugs wearing signs indicating that they “Ate Mommy’s Shoes” all the way to doxing—the sharing of names and addresses of people who participate in socially unpopular activities.

All of this is not entirely without warrant. Some emotions play a central role in morality—emotions like pride, guilt, and shame. We’re social beings, and as such, one of the ways that we protect against bad behavior in our social circles is to hold one another accountable. Imagine, for example, that Tom has a habit of not keeping his promises. He develops a bad reputation as an unreliable, untrustworthy member of the group. He may begin to feel guilt or shame for his behavior as a result, and he may then begin to actually do the things he has said he is going to do. The recognition that his peers feel he ought to feel badly about his behavior has the desired effect—it changes Tom’s behavior. It seems, then, that shame can be a powerful tool in governing the behavior of members of a social group.

Shaming might play other important social roles as well.  First, it often makes the public aware of problematic behavior. It picks out people that some members of the population might want to avoid. For example, the revelation that Mike is a white supremacist who attended a white nationalist rally may prevent a potential employer from making the mistake of hiring Mike.

Second, public shaming may serve as a deterrent. If Sam, the regional manager of a small company, witnesses other people in his position being called out for sexual harassment of employees, perhaps Sam will stop harassing his employees out of fear of being publicly treated the same way.

Third, shaming might be an important way of reinforcing our community values and making good on our commitment to speaking out against unacceptable behavior. After all, some of the most egregious human rights atrocities happened because of, or were prolonged by, the silence of people who knew better, could have spoken out, but did nothing.

On the other hand, there are some pretty compelling arguments against the practice of shaming as well. Often, shaming manifests in ways that end up humiliating another person for actions they have performed. Humiliation is, arguably, inconsistent with an attitude of respect for the dignity of persons. In response, some might argue that though humiliation may be a terrible thing to experience, many of the behaviors for which people are being shamed are comparatively much worse. For example, is it really so wrong to humiliate someone for being a white supremacist?

In practice, shaming has real legs—stories about bad behavior travel fast. The details that provide context for the behavior are often not ready at hand and, most of the time, no one is looking at the context to begin with. Even if it’s true that shaming has an important place in moral life, this will presumably only be true when the shaming is motivated by the actual facts—after all, a person shouldn’t be shamed if they don’t deserve to be.

The question of ‘deserving’ is important to the resolution of the question of whether shaming is ever morally defensible. The practice of shaming can be seen as retributive—the assumption being made is that the person being shamed for their actions is fully morally responsible for those actions. A variety of factors including environment, socialization, and biology contribute to, and perhaps, at least in some cases, even determine what a person does. If societies are going to maintain the position that retributivism is necessary for fairness, they better be sure that they are using those retributivist tools in ways that are, themselves, fair. Similar actions don’t all have similar backstories, and being sensitive to the nuance of individual cases is important.  

The motivation for shaming behavior tends to be bringing about certain kinds of results such as behavior modification and deterrence. The question of whether shaming actually changes or deters behavior is an empirical one. Given the potential costs, for the practice to be justified, we should be exceptionally confident that it actually works.

A careful look at the real intentions behind any particular act of shaming is warranted as well. Sometimes people’s intentions aren’t transparent even to themselves. Moral reflection and assessment are, of course, very important. Sometimes, however, the real motivation for shaming behaviors is power and political influence. It’s important to know the difference.

Even if the evidence allowed us to conclude that shaming adults is a worthwhile enterprise, it would not follow that what is appropriate for adults is appropriate for children. Young people are in a very active stage of self-creation and learning. Shaming behavior might be a recipe for lifelong self-esteem issues.

Finally, given that shaming has the potential for bringing about such negative consequences, it’s useful to ask: is there a better way to achieve the same result?

Nasty, Brutish and Online: Is Facebook Revealing a Hobbesian Dystopia?


The motto and mission of Facebook – as Mark Zuckerberg (founder and CEO), Facebook spokespeople, and executives have repeated over the years ad nauseam – is to “make the world a better place by making it more open and connected.” The extent to which Facebook has changed our social and political world can hardly be overstated. Yet, over the past several years, as Facebook has grown into a behemoth with currently 2.2 billion monthly and 1.4 billion daily active users worldwide, the problems that have emerged from its capacity to foment increasingly hysterical and divisive ideas, to turbocharge negative messages and incendiary speech, and to disseminate misinformation raise serious questions about the ideal of openness and connectedness.

The problems, now well documented, that have attended Facebook’s meteoric rise indicate that there has been a serious, perhaps even deliberate, lack of critical engagement with what being ‘more open and connected’ might really entail in terms of how those ideals can manifest themselves in new, powerful, and malign ways. The question here is whether Facebook is, or is able to be – as Zuckerberg unwaveringly believes – a force for good in the world; or, rather, whether it has facilitated, even encouraged, some of the baser, darker aspects of human nature and human behavior to emerge in a quasi-Hobbesian “state of nature” scenario.

Thomas Hobbes was a social contract theorist in the seventeenth century. One of the central tenets of his political philosophy, with obvious implications for his view of the moral nature of people, was that in a “state of nature” – that is, without government, laws, or rules to which humans voluntarily submit for our benefit – we would exist in a state of aggression, discord, and war. Hobbes famously argued that, under such conditions, life would be “nasty, brutish, and short.” He thought that morality emerged when people were prepared to give up some of their unbridled freedom to harm others in exchange for protection from being harmed by others.

The upside was that legitimate sovereign power could keep our baser instincts in check, and could lead to a relatively harmonious society. The social contract, therefore, is a rational choice made by individuals for their own self-preservation. This version of the nature and role of social organization does, to be sure, rest on a bleak view of human nature. Was Hobbes in any way right that a basic aspect of human nature is cruel and amoral? And does this have anything to do with the kinds of behaviors that have emerged on Facebook through its ideal of fostering openness and connectivity, largely free from checks and controls?

Though Facebook has recently been forced to respond to questions about its massive surveillance operation, about data breaches such as the Cambridge Analytica scandal, about use of the platform to spread misinformation and propaganda to influence elections, and about its use for stoking hatred, inciting violence, and aiding genocide, Mark Zuckerberg remains optimistic that Facebook is a force for good in the world – part of the solution rather than the problem.

In October 2018, PBS’s Frontline released a two-part documentary entitled The Facebook Dilemma, in which several interviewees claimed that, from unique positions of knowledge ‘on the ground’ or ‘in the world,’ they tried to warn Facebook about various threats of propaganda, fake news, and other methods being used on the platform to sow division and incite violence. The program meticulously details repeatedly missed, or ducked, opportunities for Facebook company executives, and Mark Zuckerberg himself, to comprehend and take seriously the egregious nature of some of these problems.

When forced to speak about these issues, Facebook spokespeople and Zuckerberg himself have consistently repeated the line that they were slow to act on threats and to understand the use of Facebook by people with pernicious agendas. This is doubtless true, but to say that Facebook was unsuspecting or inattentive to the potential harms the platform might attract is putting it very mildly. Indeed, the framing makes Facebook’s response, or lack thereof, seem rather benign; while not rendering the company blameless exactly, it appears designed to neutralize blame: ‘we are only to blame insofar as we didn’t notice; and we are not really to blame, because we didn’t notice.’

Though Facebook does take some responsibility for monitoring and policing what is posted on the site (removing explicit sexual content, sexual abuse material, and clear hate speech), it has taken a very liberal view in terms of moderating content. From this perspective it could certainly be argued that the company is to some extent culpable in the serious misuse of its product.

The single most important reason that so many malign uses of Facebook have been able to occur is the lax nature of editorial control over what appears on the site, and how it is prioritized or shared, taken together with Facebook’s absolutely unprecedented capacity to offer granular, fine-tuned highly specific targeted advertising. It may be that Facebook has a philosophical defense for taking such a liberal stance, like championing and defending free speech.

Take, for example, Facebook’s ‘newsfeed’ feature. Tim Sparapani, Facebook Director of Public Policy from 2009 to 2011, told Frontline, “I think some of us had an early understanding that we were creating, in some ways, a digital nation-state. This was the greatest experiment in free speech in human history.” Sparapani added, “We had to set up some ground rules. Basic decency, no nudity and no violent or hateful speech. And after that, we felt some reluctance to interpose our value system on this worldwide community that was growing.” Facebook has consistently fallen back on the ‘free speech’ defense, but it is disingenuous for the company to claim to be merely a conduit for people to say what they like, when the site’s algorithms, determined by (and functioning in service of) its business model, play an active role.

In the Facebook newsfeed, the more hits a story gets, the more the site’s algorithms prioritize it. Not only is there no mechanism for differentiating between truth and falsehood here, or between stories which are benign and those which are pernicious, but people are more likely to respond to (by ‘liking’ and ‘sharing’) stories with more outrageous or hysterical claims – stories which are less likely to be true and more likely to cause harm.
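The dynamic described above can be sketched as a toy model. This is purely illustrative (Facebook’s actual ranking system is proprietary and far more complex); the names, weights, and scoring rule here are all assumptions chosen to show how a ranker keyed only to engagement surfaces content with no regard for its truth:

```python
# Toy illustration (not Facebook's actual code): a feed ranker that
# orders stories purely by engagement signals, with no check on accuracy.
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    likes: int
    shares: int

def engagement_score(story: Story) -> int:
    # Assumed weighting: shares spread a story to new audiences,
    # so they count more heavily than likes.
    return story.likes + 3 * story.shares

def rank_feed(stories: list[Story]) -> list[Story]:
    # Highest engagement first. Nothing here consults truthfulness,
    # so outrageous claims that attract reactions rise to the top.
    return sorted(stories, key=engagement_score, reverse=True)
```

Note that `rank_feed` never asks whether a story is true; if false or hysterical stories reliably attract more likes and shares, this kind of scoring will rank them above sober reporting by construction.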

Roger McNamee, an early Facebook investor, told Frontline:  “…In effect, polarization was the key to the model – this idea of appealing to people’s lower-level emotions; things like fear and anger to create greater engagement and, in the context of Facebook, more time on site, more sharing, and therefore, more advertising value.” Because Facebook makes its money by micro-targeted advertising, the more engagement with a story, the more ‘hits’ it gets, the better the product Facebook has to sell to advertisers who can target individuals based on what Facebook can learn about them from their active responses. It is therefore in Facebook’s interest to cause people to react.

Facebook profits when stories are shared, and it is very often the fake, crazy stories, and/or those with the most far-flung rhetoric, that are most shared. But why should it be the case that people are more likely to respond to such rhetoric? This brings us back to Hobbes, and the question about the ‘darker’ aspects of human nature: is there something to be gleaned here about what people are like – what they will say and do if no one is stopping them?

The ‘real-world’ problems associated with fake news, such as violence in Egypt, Ukraine, the Philippines, and Myanmar, have emerged in the absence of a guiding principle – an epistemic foundation in the form of a set of ethics based on a shared conception of civilized discourse, and a shared conception of the importance of truth. In this analogy, the editorial process might be thought of as a kind of social contract, and the effects of removing it might be read as having implications for what humans in a ‘state of nature,’ where behavior is unchecked, are really like. Perhaps too much openness and connectivity does not, after all, necessarily make the world a better place, and might sometimes make it a worse one.

The conclusion seems unavoidable that Facebook has provided something like a Hobbesian state of nature by relaxing, removing, or failing to use all but the most basic editorial controls. It is equally true that Facebook has facilitated, encouraged, and profited from all the nasty and brutish stuff. If the Hobbesian analogy is borne out, perhaps it is time to revisit the question of what kinds of controls need to be implemented for the sake of rational self (and social) preservation.

 

The Rise of Political Echo Chambers


Anyone who has spent even a little bit of time on the internet is no doubt familiar with its power to spread false information, as well as the insular communities that are built around the sharing of such information. Examples of such groups can readily be found on your social media of choice: anti-vaccination and climate change denial groups abound on Facebook, while groups like the subreddit “The Donald” boast 693,000 subscribers (self-identified as “patriots”) who consistently propagate racist, hateful, and false claims made by Trump and members of the far-right. While the existence of these groups is nothing new, it is worth considering their impact and ethical ramifications as 2019 gets underway.

Theorists have referred to these types of groups as echo chambers: groups in which a certain set of viewpoints and beliefs is shared amongst members, but in such a way that views from outside the group are either paid no attention or actively regarded as misleading. Social media groups are often presented as examples: an anti-vaxx Facebook group, for example, may consist of members who share their views with other members of the group, but either ignore or consider misleading the tremendous amount of evidence that their beliefs are mistaken. These views tend to propagate because the more one sees one’s beliefs shared and repeated (in other words, “echoed back”), the more confident one becomes that they are actually correct.

The potential dangers of echo chambers have received a lot of attention recently, with some blaming such groups for contributing to the decrease in the rate at which parents vaccinate their children, and to increased political partisanship. Philosopher C. Thi Nguyen compares echo chambers to “cults,” arguing that their existence can in part explain what appears to be an increasing disregard for the truth. Consider, for example, The Washington Post’s recent report that Trump made 7,645 false or misleading claims since the beginning of his presidency. While some of these claims required more complex fact-checking than others, numerous claims (e.g., that the border wall is already being built, or those concerning the size of his inauguration crowd) are much more easily assessed. The fact that Trump supporters continue to believe and propagate his claims can be partly explained by the existence of echo chambers: if one is a member of a group in which similar views are shared and outside sources are ignored or considered untrustworthy, then it is easier to understand how such claims can continue to be believed, even when patently false.

The harms of echo chambers, then, are wide ranging and potentially significant. As a result it would seem that we have an obligation to attempt to break out of any echo chambers we happen to find ourselves in, and to convince others to get out of theirs. Nguyen urges us to attempt to “escape the echo chamber” but emphasizes that doing so might not be easy: members of echo chambers will continue to receive confirmation from those that they trust and share their beliefs, and, because they distrust outside sources of information, will not be persuaded by countervailing evidence.

As 2019 begins, the problem of echo chambers is perhaps getting worse. As a recent Pew Research Center study reports, polarization along partisan lines has been steadily increasing since the beginning of Trump’s presidency on a wide range of issues. Trump’s consistent labeling of numerous news sources and journalists as untrustworthy is clearly contributing to the problem: Trump supporters will be more likely to treat information provided by those sources deemed “fake news” as untrustworthy, and thus will fail to consider contradictory evidence.

So what do we do about the problem of echo chambers? David Robert Grimes at The Guardian suggests that while echo chambers can be comforting – it is nice, after all, to have our beliefs validated and not to have to challenge our convictions – such comfort hardly outweighs the potential harms. Instead, Grimes suggests that “we need to become more discerning at analysing our sources” and that “we must learn not to cling to something solely because it chimes with our beliefs, and be willing to jettison any notion when it is contradicted by evidence.”

Grimes’ advice is reminiscent of the American philosopher Charles Sanders Peirce, who considered the type of person who forms beliefs using what he calls a “method of tenacity,” namely someone who sticks to their beliefs no matter what. As Peirce notes, such a path is comforting – “When an ostrich buries its head in the sand as danger approaches,” Peirce says, “it very likely takes the happiest course. It hides the danger, and then calmly says there is no danger; and, if it feels perfectly sure there is none, why should it raise its head to see?” – but nevertheless untenable, as no one can remain an ostrich for very long, and will thus be forced to come into contact with ideas that ultimately force them to address challenges to their beliefs. Peirce insists that we instead approach our beliefs scientifically, where “the scientific spirit requires a man to be at all times ready to dump his whole cart-load of beliefs, the moment experience is against them.”

Hopefully 2019 will see more people taking the advice of Grimes and Peirce seriously, so that the comfort of being surrounded by familiar beliefs, and of not having to perform any critical introspection, no longer wins out over a concern for truth.

Privacy and a Year in the Life of Facebook


Mark Zuckerberg, the CEO of Facebook, declared on January 4 that he would “fix Facebook” in 2018. Since then, the year has contained scandal after scandal, and throughout it Facebook has provided a case study in questions about how to protect or value information privacy. On March 17, the New York Times and The Guardian revealed that Cambridge Analytica used information gleaned from Facebook users to attempt to influence voters’ behavior. Zuckerberg had to testify before Congress and rolled out new data privacy practices. In April, the Cambridge Analytica scandal was revealed to be more far-reaching than previously thought, and in June it was revealed that Facebook shared data with other companies such as Apple, Microsoft, and Samsung. The UK fined Facebook the legal maximum for illegal handling of user data related to Cambridge Analytica. In September, a hack exposed 30 million users’ data. In November, another New York Times investigation revealed that Facebook had failed to be sufficiently forthcoming about Russia’s political manipulation on the site, and on December 18 more documents came out showing that Facebook offered user data, even from private messages, to companies including Microsoft, Netflix, Spotify, and Amazon.

The repeated use of data regarding users of Facebook without their knowledge or consent, often to manipulate their future behavior as consumers or voters, has led to Facebook’s financial decline and loss of public trust. The right to make your own decisions regarding access to information about your life is called informational privacy. We can articulate the tension in discussions over the value of privacy as between the purported right to be left alone, on the one hand, and the supposed right of society to know about its members on the other. The rapid increase in technology that can collect and disseminate information about individuals raises the question of whether the value of privacy should shift along with this shift in actual privacy practices or whether greater efforts need to be devoted to protect the informational privacy of members of society.

The increase in access to personal information is just one impact of the rise of information technology. Technological advances have also affected the meaning of personal information. For instance, commonly used apps and social media have made it easier to track your physical whereabouts; moreover, the data from Facebook is so useful precisely because so much can be extrapolated about a person from seemingly unrelated behaviors, changing what sorts of information may be considered sensitive. Cambridge Analytica was able to use Facebook data to attempt to sway voting behavior because of correlations between activity on the social media site and political behavior. Advertising companies can take advantage of the data to better target consumers.

When ethicists and policy makers began discussing the right to privacy, considerations centered on large and personal life choices and protecting public figures from journalists. The aspects of our lives that we would typically consider most central to the value of privacy would be aspects of our health, say, our religious and political beliefs, and other aspects of life deemed personal such as romantic and sexual practices and financial situations. The rise of data analysis that comes with social media renders a great deal of our behaviors potentially revelatory: what pictures we post, what posts we like, how frequently we use particular language, etc. can be suggestive of a variety of further aspects of our life and behaviors.

If information regarding our behavior on platforms such as Facebook is revealing of the more traditionally conceived private domain of our lives, should this information be protected? Or should we reconceive what counts as private? One suggestion has been to acknowledge the brute economic fact of the rise of these technologies: this data is worth money. It could thus be possible to abstract away from the moral value of, or right to, privacy and focus instead on the reality that data is worth something; if individuals own the data about themselves, they are perhaps owed the profits from the use of their data.

There are also moral reasons to protect personal data. If others have unrestricted access to an individual’s whereabouts, health information, passwords to financial accounts, and so on, that access can be used to harm the individual. Security and a right to privacy could thus be justified as harm prevention. They could also be justified by appeal to a right to autonomy, since data about one’s life can be used to unduly influence one’s choices. This concern is exacerbated by the way data changes in relevance and import depending on the sphere in which it is used. For instance, health data has one significance in your dealings with healthcare providers and quite another if potential employers have access to it. When individuals have less control over their personal data, discrimination and disadvantage can follow.

Thus there are both economic or property considerations and moral considerations in favor of protecting personal data. Zuckerberg failed to “fix” Facebook in 2018, but greater transparency about protections, and regulation of how platforms can use data, would be positive steps toward respecting the value of privacy in 2019.

The Ethics of Facebook’s Virtual Cemeteries

A photo of reporters taking pictures of the Facebook logo with their phones.

In May, Facebook reported hitting 1.94 billion users—a statistic that speaks to the tremendous popularity and influence of the social network.  As any Facebook user knows, members must take the good aspects of the technology with the bad.  The network can be a great place to reconnect with old friends, to make new ones, and to keep in touch with loved ones who live far away.  Unfortunately, conversations on Facebook also frequently end friendships. Facebook profiles and posts often tell us far more about people than may seem warranted by the intimacy level of our relationship with them.

Continue reading “The Ethics of Facebook’s Virtual Cemeteries”

Mental Health, Information Literacy and the Slenderman Stabbing Case

A sidewalk chalk drawing of Slenderman.

On May 31, 2014, two 12-year-old girls lured a friend, also 12, into the woods with the promise of a game of hide-and-seek.  Once there, one of the girls pinned their friend down, while the other stabbed her 19 times with a long-bladed kitchen knife, causing serious injuries to major organs and arteries.  The young perpetrators then fled the scene, leaving their young friend to die of her injuries.  Miraculously, the victim survived.  She was able to crawl to a road where a cyclist found her and went for help.  

Continue reading “Mental Health, Information Literacy and the Slenderman Stabbing Case”

Doxxing for Social Justice

In 2015, after Lindsey Graham said that Donald Trump should “stop being a jackass,” Trump read Graham’s personal cell phone number aloud to a crowd at one of his campaign rallies and urged people to call the number. Journalists who dialed the number were directed to an automated voicemail account reporting “Lindsey Graham is not available.” His voicemail inbox was, unsurprisingly, full.

Continue reading “Doxxing for Social Justice”

Law Enforcement Surveillance and the Protection of Civil Liberties

In a sting operation conducted by the FBI in 2015, over 8,000 IP addresses in 120 countries were collected in an effort to take down the website Playpen and its users. Playpen was a communal website that operated on the Dark Web through the Tor browser. The site was used, essentially, to collect images of child pornography and extreme child abuse. At its peak, Playpen had a community of around 215,000 members and more than 117,000 posts, with 11,000 unique visitors a week.

Continue reading “Law Enforcement Surveillance and the Protection of Civil Liberties”

Should You Have the Right to Be Forgotten?

In 2000, nearly 415 million people used the Internet. By July 1, 2016, that number was estimated to have grown to nearly 3.425 billion, or about 46% of the world’s population. Moreover, there are now about 1.04 billion websites on the world wide web. Maybe one of those websites contains something you would rather keep out of public view, perhaps some evidence of a youthful indiscretion or an embarrassing social media post. Not only do you have to worry about friends and family finding out, but now nearly half of the world’s population has near instant access to it, if they know how to find it. Wouldn’t it be great if you could just get Google to take those links down?

This question came before a court in the European Union in 2014. A man petitioned for the right to request that Google remove from its search results a link to an announcement of the forced sale of one of his properties, arising from old social security debts. Believing that the information was no longer relevant, since the sale had concluded years before, he wanted Google to remove the link. Google refused. Eventually, the court sided with the petitioner, ruling that search engines must consider requests from individuals to remove links to pages that appear in a search on their name. The decision recognized for the first time the “right to be forgotten.”

This right, legally speaking, now exists in Europe. Morally speaking, however, the debate is far from over. Many worry that the right to be forgotten threatens a dearly cherished right to free speech. I, however, think some accommodation of this right is justified on the basis of an appeal to the protection of individual autonomy.

First, what are rights good for? Human rights matter because their enforcement helps protect the free exercise of agency—something that everyone values if they value anything at all. Alan Gewirth points out that the aim of all human rights is “that each person have rational autonomy in the sense of being a self-controlling, self-developing agent who can relate to other persons on a basis of mutual respect and cooperation.” Now, virtually every life goal we have requires the cooperation of others. We cannot build a successful career, start a family, or be good citizens without other people’s help. Since an exercise of agency that has no chance of success is, in effect, worthless, the effective enforcement of human rights requires that our opportunities to cooperate with others not be severely constrained.

Whether people want to cooperate depends on what they think of us. Do they think of us as trustworthy, for example? Here is where “the right to be forgotten” comes in. This right promotes personal control over access to personal information that may unfairly influence another person’s estimation of our worthiness for engaging in cooperative activities—say, in being hired for a job or qualifying for a mortgage.

No doubt, you might think, we have a responsibility to ignore irrelevant information about someone’s past when evaluating their worthiness for cooperation. “Forgive and forget” is, after all, a well-worn cliché. But do we need legal interventions? I think so. First, information on the internet is often decontextualized. We find disparate links reporting personal information in a piecemeal way. Rarely do we find sources that link these pieces of information together into a whole picture. Second, people do not generally behave as skeptical consumers of information. Consider the anchoring effect, a widely shared human tendency to attribute more relevance to the first piece of information we encounter than we objectively should. Combine these considerations with the fact that the internet has exponentially increased our access to personal information about others, and you have reason to suspect that we can no longer rely upon the moral integrity of others alone to disregard irrelevant personal information. We need legal protections.

This argument is not intended to be a conversation stopper, but rather an invitation to explore the moral and political questions that the implementation of such a right would raise. What standards should be used to determine if a request should be honored? Should search engines include explicit notices in their search results that a link has been removed, or should it appear as if the link never existed in the first place? Recognizing the right to be forgotten does not entail the rejection of the right to free speech, but it does entail that these rights need to be balanced in a thoughtful and context-sensitive way.

The Socioeconomic Divide of Dating Apps

Many are familiar with the popular dating app “Tinder,” best known for its quick “swiping” method of indicating interest in nearby users and creating “matches.” In an apparent effort to move away from its reputation as a convenient “hook-up” app and back toward its original dating purpose, Tinder recently announced that profiles will now feature work and education information. The change doesn’t go as far as apps like Luxy, which screen out users with less education or lower incomes, but it does carry possibly problematic consequences. Tinder has marketed the change as a response to user requests for added profile details to help make more “informed choices.” Yet some are wary that it comes with an ulterior motive.

Continue reading “The Socioeconomic Divide of Dating Apps”

Dissecting the Deathstagram

For many, morbid curiosity is a common yet unsettling feeling. It is difficult to be sure where it comes from, but its pull when viewing death is strangely magnetic. It would be easy to assume that morbid curiosity is immoral, some perverse feeling not shared by the rest of the population. According to The Atlantic’s Leah Sottile, however, it may be more common than expected. Documenting celebrity death sites like FindADeath.com, Sottile’s piece makes clear that this curiosity is not only widespread, but also potent enough to form entire communities where it takes center stage.

Continue reading “Dissecting the Deathstagram”

I Am The Lorax, I Tweet for the Trees

Lovers of social media, rejoice! It appears that even the furthest expanses of nature are not beyond the range of wireless internet. This was only further underscored this morning, when Japanese officials announced that Mount Fuji, the country’s iconic, snow-capped peak, will be equipped with free Wi-Fi in the near future. Tourists and hikers alike will now be able to post from eight hotspots on the mountain, in a move likely to draw scorn from some environmental purists.

Continue reading “I Am The Lorax, I Tweet for the Trees”

Are You Your Avatar?

The online world has always been one of seemingly endless possibilities. In this space, it has been said, anything can happen and anything can be changed, including one’s own identity. And while this has been the case with many games, others have upended this model entirely. One of them, the online survival game Rust, is doing so to provoke debate about a topic rarely considered: race in the online world.

Continue reading “Are You Your Avatar?”

Crowdsourcing Justice

The video begins abruptly. Likely recorded on a phone, the footage is shaky and blurry, yet the subject is sickeningly unmistakable: a crying infant being repeatedly and violently dunked into a bucket of water. First it is held by the arms, then upside down by one leg, then grasped by the face as an unidentified woman pulls it through the water. Near the end of the video, the infant falls silent, the only remaining audio the splashing of water and murmured conversation as the child is dunked again and again.

Continue reading “Crowdsourcing Justice”

When Memories 404

The Internet is forever. Think before you post. Once something is uploaded, it can’t be taken back. These prophetic warnings, parroted in technology literacy PSAs and middle school lectures all over the country, remind us to think about our online presence, to consider what will come up when we Google our name fifteen years from now.

Continue reading “When Memories 404”