
Sexual Violence in the Metaverse: Are We Really “There”?


Sexual harassment can take many forms, whether in an office or on social media. However, there might seem to be a barrier separating “us” as users of a social media account from “us” as avatars or visual representations in a game, since the latter are “virtual” whereas “we” are “real.” Even though we are prone to psychological and social harm directed at our virtual representations, it seems that we cannot – at least directly – be harmed physically. A mean comment may hurt my feelings and change my mood – I might even get physically ill – but no direct physical damage seemed possible. Until now.

Recently, a beta tester of Horizon Worlds – a VR-based platform of Meta – reported that a stranger “simulated groping and ejaculating onto her avatar.” Even more recently, additional incidents, this time concerning children, have been reported. One safety campaigner has “spoken to children who say they were groomed on the platform and forced to take part in virtual sex.” The same article describes how a “researcher posing as a 13-year-old girl witnessed grooming, sexual material, racist insults and a rape threat in the virtual-reality world.” How should we understand these virtual assaults? Sexual harassment requires no physical presence, but when we ask whether such actions amount to a kind of physical violence, things get complicated, as the victim has not been violated in the traditional sense.

This problem has been made more pressing by the thinning of the barrier that separates what is virtual from what is physical. Mark Zuckerberg, co-founder and CEO of Meta, has emphasized the concept of “presence” as “one of the basic concepts” of the metaverse. The goal is to make the virtual space as “detailed and convincing” as possible. In the same video, some virtual items are shown to be designed to give a “realistic sense of depth and occlusion.” Meta is attempting to win the tech race by mimicking the physical sense of presence as closely as possible.

The imitation of the physical sense of presence is not a new thing. Many video games also cultivate a robust sense of presence. Especially in MMO (massively multiplayer online) games, characters can commonly touch, push, or persistently follow each other, even when it is unwelcome and has nothing to do with one’s progress in the game. We often accept these actions as natural, an obvious and basic part of the game’s social interaction. It is personal touches like these that encourage gamers to bond with their avatars. They encourage us to feel two kinds of physical presence: present as a user playing a game in a physical environment, and present as a game character in a virtual environment.

But these two kinds of presence mix very easily, and the difference between a user and the avatar can easily be blurred. Having one’s avatar pushed or touched inappropriately has very real psychological effects. It seems that at some point, these experiences can no longer be considered merely “virtual.”

This line is being further blurred by the push toward Augmented Reality (AR), which places “virtual” items in our world, and Virtual Reality (VR), in which “this” world remains inaccessible to the user during the session. As opposed to classic games’ sense of presence, in AR and VR we explore the game environment mainly within one sense of presence instead of two, from the perspective of a single body. Contrary to our typical gaming experience, these new environments – like that of the metaverse – may only work if this dual presence is removed or weakened. This suggests that our experience can no longer be thought of as taking place “somewhere else” but always “here.”

Still, at some level, dual presence remains: when we take our headsets off, “this world” waits for us. And so we return to our main moral question: can we identify an action within the embodied online world as physical? Or, more specifically, is the charge of sexual assault appropriate in the virtual space?

If one’s avatar is taken as nothing but a virtual puppet controlled by the user from “outside,” then it seems impossible to conclude that gamers can be physically threatened in the relevant sense. However, as the barrier separating users from their game characters erodes, the illusion of presence makes the avatar mentally inseparable from the user; in terms of experience, they become increasingly the same. Since the aim of the metaverse is to create such a union, one could conclude that sharing the same “space” means sharing the same fate.

These are difficult questions, and these online spaces, as well as the concepts which govern them, are still in development. However, recent events should be taken as a warning to consider preventive measures, as these new spaces require new definitions, new moral codes, and new precautions.

Is It Time to Nationalize YouTube and Facebook?


Social media presents several moral challenges to contemporary society on issues ranging from privacy to the manipulation of public opinion via adaptive recommendation algorithms. One major ethical concern with social media is its addictive tendencies. For example, Frances Haugen, the whistleblower from Facebook, has warned about the addictive possibilities of the metaverse. Social media companies design their products to be addictive because their business model is based on an attention economy. Governments have struggled with how to respond to the dangers that social media creates, floating ideas such as independent oversight bodies and new privacy regulations to limit its power. But does the solution to this problem require changing the business model?

Social media companies like Facebook, Twitter, YouTube, and Instagram profit from an attention economy. This means that the primary product of social media companies is the attention of the people using their service, which these companies can leverage to make money from advertisers. As Vikram Bhargava and Manuel Velasquez explain, because advertisers represent the real customers, corporations are free to be more indifferent to their users’ interests. What many of us fail to realize is that,

“built into the business model of social media is a strong incentive to keep users online for prolonged periods of time, even though this means that many of them will go on to develop addictions…the companies do not care whether it is better or worse for the user because the user does not matter; the user’s interests do not figure into the social media company’s decision making.”

As a result of this business model, social media is often designed with persuasive technology mechanisms. Intermittent variable rewards, nudging, and the erosion of natural stopping cues help create a kind of slot-machine effect, and the use of adaptive algorithms that take in user data in order to customize the user experience only reinforces this. As a result, many experts have increasingly come to recognize social media addiction as a problem. A 2011 survey found that 59% of respondents felt they were addicted to social media. As Bhargava and Velasquez report, social media addiction mirrors many of the behaviors associated with substance addiction, and neuroimaging studies show that the same areas of the brain are active as in substance addiction. There has also been a marked increase in teenage suicide following the introduction of social media, which some point to as a potential consequence of this addiction.
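
To make the “slot-machine effect” concrete, here is a minimal, purely illustrative sketch (in Python) of why intermittent variable rewards are harder to walk away from than predictable ones. The hit rates, function names, and numbers are assumptions invented for the example, not a description of any platform’s actual code.

```python
import random

random.seed(0)

def variable_ratio_feed(n_refreshes, hit_rate=0.25):
    """Each refresh pays off (shows a 'rewarding' post) with some probability,
    like a variable-ratio schedule on a slot machine: you never know whether
    the next pull will hit."""
    return [random.random() < hit_rate for _ in range(n_refreshes)]

def fixed_ratio_feed(n_refreshes, every=4):
    """A predictable feed: a rewarding post arrives every `every` refreshes,
    so there is a natural stopping point after each payoff."""
    return [(i + 1) % every == 0 for i in range(n_refreshes)]

# Roughly the same average payoff, but only one schedule tells you when to stop.
print("variable:", "".join("*" if hit else "." for hit in variable_ratio_feed(20)))
print("fixed:   ", "".join("*" if hit else "." for hit in fixed_ratio_feed(20)))
```

Behavioral psychology associates the first, unpredictable pattern with the most persistent checking behavior, which is the authors’ point: the pull of the feed is a design choice, not an accident.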

But is there a way to mitigate the harmful effects of social media addiction? Bhargava and Velasquez suggest that measures like addiction warnings, or prompts that make the platforms easier to quit, could be important steps. Many have argued that breaking up social media companies like Facebook is necessary as they function like monopolies. However, breaking up such businesses to increase competition in a field centered around the same business model may not help. If anything, greater competition in the marketplace may only yield new and “innovative” ways to keep people hooked. If the root of the problem is the business model, perhaps it is the business model which should be changed.

For example, since in an attention economy business model the users of social media are not the customers, one way to make social media companies less incentivized to addict their users is to make the users customers. Should social media companies using adaptive algorithms be forced to switch to a subscription-based business model? If customers paid for Facebook directly, Facebook would still have an incentive to provide a good experience for users (now its customers), but it would have less incentive to focus its efforts on monopolizing users’ attention. Bhargava and Velasquez, for example, note that on a subscription streaming platform like Netflix, it is immaterial to the company how much users watch; “making a platform addictive is not an essential feature of the subscription-based service business model.”

But there are problems with this approach as well. As I have described previously, social media companies like Meta and Google have significant abilities to control knowledge production and knowledge communication. Even with a subscription model, the ability of social media companies to manipulate public opinion would still be present. Nor would it necessarily solve problems relating to echo-chambers and filter bubbles. It may also mean that the poorest members of society would be unable to afford social media, essentially excluding entire socioeconomic groups from the platform. Is there another way to change the business model and avoid these problems?

In the early 20th century, as new forms of mass media and the advertising that funded them rose to prominence, many believed that this new technology would be a threat to democracy. The solution was public broadcasting such as PBS, BBC, and CBC. Should a 21st-century solution to the problem of social media be similar? Should there be a national YouTube or a national Facebook? Certainly, such platforms wouldn’t need to be based on an attention economy; they would not be designed to capture as much of their users’ attention as possible. Instead, they could be made available for all citizens to use and contribute to for free, without a subscription.

Such a platform would not only give the public greater control over how its algorithms operate, but also greater control over privacy settings. The platform could also be designed to strengthen democracy. Instead of having a corporation like Google determining the results of your video or news search, for instance, the public itself would now have a greater say about what news and information is most relevant. It could also bolster democracy by ensuring that recommendation algorithms do not create echo-chambers; users could be exposed to a diversity of posts or videos that don’t necessarily reflect their own political views.

Of course, such a proposal carries problems as well. The cost might be significant, though a service replicating the positive social benefits without the “innovative” and expensive process of creating addictive algorithms might partially offset it. Also, depending on the nation, such a service could be subject to abuse. Just as there is a difference between public broadcasting and state-run media (where the government has editorial control), the service would lose its purpose if all content on the platform were controlled directly by the government. Something more independent would be required.

However, another significant minefield for such a project would be agreeing on community standards for content. Obviously, the point would be not to allow the platform to become a breeding ground for misinformation, and so clear standards would be necessary. At the same time, in the greater democratic interest of breaking free from our echo-chambers, the public would have to accept that others may post, and that they themselves may see, content they consider offensive. We need to be exposed to views we don’t like. In a post-pandemic world, this is a larger public conversation that needs to happen, regardless of how we choose to regulate social media.

Content Moderation and Emotional Trauma


In the wake of the Russian invasion of Ukraine, which has been raging violently since February 24th of 2022, Facebook (now known as “Meta”) recently announced its decision to change some of its content-moderation rules. In particular, Meta will now allow for some calls for violence against “Russian invaders,” though Meta emphasized that credible death threats against specific individuals would still be banned.

“As a result of the Russian invasion of Ukraine we have temporarily made allowances for forms of political expression that would normally violate our rules like violent speech such as ‘death to the Russian invaders.’ We still won’t allow credible calls for violence against Russian civilians,” spokesman Andy Stone said.

This recent announcement has reignited a discussion of the rationale — or lack thereof — of content moderation rules. The Washington Post reported on the high-level discussion around social media content moderation guidelines: how these guidelines are often reactive, inconsistently applied, and not principle-based.

Facebook frequently changes its content moderation rules and has been criticized by its own independent Oversight Board for having rules that are inconsistent. The company, for example, created an exception to its hate speech rules for world leaders but was never clear which leaders got the exception or why.

Still, politicians, academics, and lobbyists continue to call for stricter content moderation. For example, take the “Health Misinformation Act of 2021”, introduced by Senators Amy Klobuchar (D-Minnesota) and Ben Ray Luján (D-New Mexico) in July of 2021. This bill, a response to online misinformation during the COVID-19 pandemic, would revoke certain legal protections for any interactive computer service, e.g., social media websites, that “promotes…health misinformation through an algorithm.” The purpose of this bill is to incentivize internet companies to take greater measures to combat the spread of misinformation by engaging in content-moderation measures.

What is often left out of these discussions, however, is the means by which content moderation happens. It is often assumed that such a monumental task must be left up to algorithms, which can scour through mind-numbing amounts of content at breakneck speed. However, much of the labor of content moderation is performed by humans. And in many cases, these human content moderators are poorly paid laborers working in developing nations. For example, employees at Sama, a Kenyan technology company that is the direct employer of Facebook’s Kenya-based content moderators, “remain some of Facebook’s lowest-paid workers anywhere in the world.” While U.S.-based moderators are typically paid a starting wage of $18/hour, Sama moderators make an average of $2.20/hour. And that figure reflects a pay increase granted only a few weeks ago; prior to it, Sama moderators made $1.50/hour.

Such low wages, especially for labor outsourced to poor or developing nations, are nothing new. However, content moderation can be a particularly harrowing — in some cases, traumatizing — line of work. In the paper “Corporeal Moderation: Digital Labour as Affective Good,” Dr. Rae Jereza interviews one content moderator named Olivia about her daily work, which includes identifying “non‐moving bod[ies]”, visible within a frame, “following an act of violence or traumatic experience that could reasonably result in death.” The purpose is to allow videos containing dead bodies to be flagged as disturbing content. This content moderator confesses to watching violent or otherwise disturbing content prior to her shift, in an effort to desensitize herself to the content she would have to pick through as part of her job. The content that she was asked to moderate ranged over many categories, including “hate speech, child exploitation imagery (CEI), adult nudity and more.”

Many kinds of jobs involve potentially traumatizing duties: military personnel, police, first responders, slaughterhouse and factory farm workers, and social workers all work jobs with high rates of trauma and other kinds of emotional/psychological distress. Some of these jobs are also compensated very poorly — for example, factory and industrial farms primarily hire immigrants (many undocumented) willing to work for pennies on the dollar in dangerous conditions. Poorly-compensated high-risk jobs tend to be filled by people in the most desperate conditions, and these workers often end up in dangerous employment situations that they are nevertheless unable or unwilling to leave. Such instances may constitute a case of exploitation: someone exploits someone else when they take unfair advantage of the other’s vulnerable state. But not all instances of exploitation leave the exploited person worse-off, all things considered. The philosopher Jason Brennan describes the following case of exploitation:

Drowning Man: Peter’s boat capsizes in the ocean. He will soon drown. Ed comes along in a boat. He says to Peter, “I’ll save you from drowning, but only if you provide me with 50% of your future earnings.” Peter angrily agrees.

In this example, the drowning man is made better-off even though his vulnerability was taken advantage of. Just like this case, certain unpleasant or dangerous lines of work may be exploitative, but may ultimately make the exploited employees better-off. After all, most people would prefer poor work conditions to life in extreme poverty. Still, there seems to be a clear moral difference between different instances of mutually-beneficial exploitation. Requiring interest on a loan given to a financially-desperate acquaintance may be exploitative to some extent, but is surely not as morally egregious as forcing someone to give up their child in exchange for saving their life. What we demand in exchange for the benefit morally matters. Can it even be permissible to demand emotional and mental vulnerability in exchange for a living wage (or possibly less)?

Additionally, there is something unique about content moderation in that the traumatic material moderators view on any given day is not a potential hazard of the job — it is the whole job. How should we think about the permissibility of hiring people to moderate content too disturbing for the eyes of the general public? How can we ask some people to weed out traumatizing, pornographic, racist, threatening posts, so that others don’t have to see it? Fixing the low compensation rates may help with some of the sticky ethical issues concerning this sort of work. Yet, it is unclear whether any amount of compensation can truly make hiring people for this line of work permissible. How can you put a price on mental well-being, on humane sensitivity to violence and hate?

On the other hand, the alternatives are similarly bleak. There seem to be few good options when it comes to cleaning up the dregs of virtual hate, abuse, and shock-material.

Ukraine, Digital Sanctions, and Double Effect: A Response


Kenneth Boyd recently wrote a piece on the Prindle Post on whether tech companies, in addition to governments, have an obligation to help Ukraine by way of sanctions. Various tech companies and media platforms, such as TikTok and Facebook, are ready sources of misinformation about the war. This raises the question of whether imposing bans on such platforms would help deter Putin by raising the costs of the invasion of Ukraine and by silencing misinformation. It is no surprise, then, that the digital minister of Ukraine, Mykhailo Fedorov, has approached Apple, Google, Meta, Netflix, and YouTube to block Russia from their services in different capacities. These methods would undoubtedly be less effective than financial sanctions, but the question is an important one: are tech companies permitted or obligated to intervene?

One of the arguments Kenneth entertains against this position is that there could be side effects on the citizens of Russia who do not support the attack on Ukraine. There are bystanders for whom banning media platforms would cause damage (how will some people reach their loved ones?). While such sanctions are potentially helpful in the larger picture of deterring Putin from continuing acts of aggression, is the potential cost morally acceptable in this scenario? If the answer is no, that is a mark against tech and media companies enacting such sanctions.

I want to make two points. First, this question of permissible costs is equally applicable to any government deciding to put sanctions on Russia. When the EU, Canada, the U.K., and the U.S. put economic sanctions on Russia’s central bank and its access to SWIFT, for instance, this effectively caused a run on cash and is likely the beginning of an inflation problem for Russians. This affects everyone in Russia, from those in the government to ‘mere civilians,’ including those protesting the war. As such, this cost must be addressed in the moral deliberation over whether to execute such an act.

Second, the Doctrine of Double Effect (DDE) helps us see why unintentionally harming bystanders is morally permissible in this scenario (not, mind you, in the case of innocent bystanders in Ukraine). So long as non-governmental institutions are the kind of entities morally permitted or obligated to respond (a question worth discussing, which Kenneth also raises), DDE applies equally to both types of institution when imposing sanctions with possible side effects.

What does the Doctrine of Double Effect maintain? The bumper sticker version is the following from the BBC: “[I]f doing something morally good has a morally bad side-effect, it’s ethically OK to do it providing the bad side-effect wasn’t intended. This is true even if you foresaw that the bad effect would probably happen.”

The name, one might guess, addresses the two effects one action produces. This bumper sticker version has considerable appeal. For instance, killing in self-defense falls under this. DDE is also applicable to certain cases of administering medicine with harmful side effects and explains the difference between suicide and self-sacrifice.

A good litmus question is whether and when a medical doctor is permitted to administer a lethal dose of medicine. It depends on the intentions, of course, but the bumper sticker version doesn’t capture whether the patient must be mildly or severely ill, whether there are other available options, etc.

The examples and litmus question should prime the intuitions for this doctrine. The full version of DDE (which the criteria below roughly follow) maintains that an agent may intentionally perform an action that will bring about evil side effects so long as the following conditions are all satisfied at once:

  1. The action performed must in itself be morally good or neutral;
  2. The good action and effect(s), and not the evil effect, are intended;
  3. The evil effect cannot be the means to achieve the good effect — the good must be achieved at least as directly as the evil;
  4. There must be proportionality between the good and the evil, in which the evil is no greater than the good, and the good serves as a sufficiently serious reason for the act in question.

One can easily see how this applies to killing in self-defense. While it is impermissible to kill someone in cold blood, or even to kill someone who is plotting your own death, it is morally permissible to kill someone in self-defense. This is the case even if one foresees that the act of defense will require lethal effort.

As is evident, DDE does not justify the death of individuals in Ukraine who are unintentionally killed (say, in a bombing). For the act of untempered aggression is itself immoral and so fails the very first condition.

Now, apply these criteria to the question of tech companies that may impose sanctions to achieve a certain good and, with it, an evil.

What are the relevant goods and evils? In this case, the good is at least that of deterring Putin from further aggression and stopping misinformation. The bad is the consequences for locals: for instance, the anti-war protestors in Russia who use these platforms to communicate their situation, and perhaps the individuals who rely on these media outlets to stay in contact with loved ones.

This type of act hits all four marks: the action is neutral, the good effects are the ones intended (presumably this is the case), the evil effects are not the means of achieving this outcome and are no more direct than the good effects, and the good far outweighs the evil caused by this.

That the evil is equal to or less than the good achieved in this scenario might not be apparent. But consider how civilians have other means of reaching loved ones, and how traditional news outlets (not only TikTok and Facebook) remain prominent ways to communicate information. These are both goods. And thankfully, they would not be entirely lost because of such potential sanctions.

As should be clear, the potential bad side effects are not a good reason to refrain from imposing media and tech sanctions on Russia. This is not to say that there is therefore a decisive reason to impose sanctions. All we have done in this case is see that the respective side effects are not sufficient to rule out sanctions and that the action meets all four criteria. And this shows that it is morally permissible.

Russia, Ukraine, and Digital Sanctions


Russian aggression towards Ukraine has prompted many responses across the world, with a number of countries imposing (or at least considering imposing) sanctions against Russia. In the U.S., Joe Biden recently announced a set of financial sanctions that would cut off Russian transactions with U.S. banks, and restrict Russian access to components used in high tech devices and weapons. In Canada, Justin Trudeau also announced various sanctions against Russia, and many Canadian liquor stores stopped selling Russian vodka. While some of these measures will likely be more effective than others – not having access to U.S. banks probably stings a bit more than losing the business of the Newfoundland and Labrador Liquor Corporation – there is good reason for governments to impose sanctions as a way to attempt to deter further aggression from Russia.

It is debatable whether the imposition of sanctions by governments is enough (providing aid to Ukraine in some form, for example, also seems called for), but it certainly seems like something they should do: if we accept the view that powerful governments have at least some moral obligation to help keep the peace, then sanctioning Russia is something such governments ought to do.

What about corporations? Do they have any such obligations? Companies are certainly within their rights to stop doing business with Russia, or to cut off services they would normally supply, if they see fit. But do the moral obligations that apply to governments apply to private businesses, as well?

Ukraine’s digital minister Mykhailo Fedorov may think that they do. He recently asked Apple CEO Tim Cook to stop supplying Apple products to Russia, and to cut off Russian access to the App Store. “We need your support,” wrote Fedorov, “in 2022, modern technology is perhaps the best answer to the tanks, multiple rocket launchers … and missiles.” Fedorov also asked Meta, Google, and Netflix to stop providing services to Russia, and asked Google to block YouTube channels that promote Russian propaganda.

It is not surprising that Fedorov singled out tech companies. It has been well-documented that Facebook and YouTube have been major sources of misinformation in the past, and the current conflict between Russia and Ukraine is no exception. A lot has already been said about how tech companies have obligations to attempt to stem the flow of misinformation on their respective platforms, and in this sense, they clearly have obligations towards Ukraine to make sure that their inaction does not contribute to the proliferation of damaging information.

It is a separate question, though, as to whether a company like Apple ought to suspend its service in Russia as a form of sanction. We can consider arguments on either side.

Consider first an argument in favor: like a lot of other places in the world, many people in Russia rely on the services of companies like Apple, Meta, and Google in their daily lives, as do members of Russia’s government and military. Cutting Russia off from these services would then be disruptive in ways that may be comparable to the sanctions imposed by the governments of other countries (and in some cases could very well be more disruptive). If these companies are in a position to help Ukraine by imposing such digital sanctions, then we might think they ought to.

Indeed, this kind of obligation may stem from a more general obligation to help victims of unjust aggression. For instance, I may have some such obligation: given that I am a moderately well-off Westerner with an interest in global justice, we might think that I should (say) avoid buying Russian products and give money to charities that aid the people of Ukraine. If I were in a position to make a more significant difference – say, if I were the CEO of a large company popular in Russia – we might then think that I should do more, in a way that is proportional to the power and influence I have.

However, we could also think of arguments opposed to the idea that tech companies have obligations to impose digital sanctions. For instance, we might think that corporations are not political entities, and thus have no special obligations when it comes to matters of global politics. This is perhaps a simplistic view of the relationship between corporations and governments; regardless, we still might think that corporations simply aren’t the kinds of entities that stand in the same relation to global politics as governments do. These private entities don’t (or shouldn’t) have similar responsibilities to impose sanctions or otherwise help keep the peace.

One might also worry about the effect digital sanctions might have on Russian civilians. For example, lack of access to tech could cause collateral damage by preventing groups of protestors from communicating with one another, or by hampering efforts to debunk propaganda and other forms of misinformation. While many forms of sanctions have indirect impacts on civilians, digital sanctions have immediate and direct impacts that one might think should be avoided.

While some tech companies have already begun taking actions to address misinformation from Russia, whether Fedorov’s request will be granted by tech giants like Apple remains to be seen.

Trump v. Facebook, and the Future of Free Speech


On July 7th, former President Donald Trump announced his intention to sue Facebook, Twitter, and Google for banning him from posting on their platforms. Facebook initially banned Donald Trump following the January 6th insurrection and Twitter and Google soon followed suit. Trump’s ban poses not only legal questions concerning the First Amendment, but also moral questions concerning whether or not social media companies owe a duty to guarantee free speech.

Does Trump have any moral standing when it comes to his ban from Facebook, Twitter, and Google? How can we balance the value of free expression with the rights of social media companies to regulate their platforms?

After the events of January 6th, Trump was immediately banned from social media platforms. In its initial ban, the CEO of Facebook, Mark Zuckerberg, offered a brief justification: “We believe the risks of allowing the President to continue to use our service during this period are too great.” Following Trump’s exit from office, Facebook decided to extend Trump’s ban to two years. Twitter opted for a permanent ban, and YouTube has banned him indefinitely.

Though this came as a shock to many, some argued that Trump’s ban should have come much sooner. Throughout his presidency, Trump regularly used social media to communicate with his base, at times spreading false information. While some found this communication style unpresidential, it arguably brought the Office of the President closer to the American public than ever before. Trump’s use of Twitter engaged citizens who might not have otherwise engaged with politics and even reached many who did not follow him. Though there is value in allowing the president to authentically communicate with the American people, Trump’s use of the social media space has been declared unethical by many; he consistently used these communiques to spread falsehoods, issue personal attacks, campaign, and fund-raise.

But regardless of the merits of Trump’s lawsuit, it raises important questions regarding the role that social media platforms play in modern society. The First Amendment, and its protections regarding free speech, only apply to federal government regulation of speech (and to state regulation of speech, as incorporated by the 14th Amendment). This protection has generally not extended to private businesses or individuals who are not directly funded by or affiliated with the government. General forums such as the internet, however, have been considered a “free speech zone.” Though they operate on the internet, social media companies have not been granted similar “free speech zone” status. The Supreme Court has acknowledged that the “vast democratic forums of the Internet” serve an important function in the exchange of views, but it has refused to extend the responsibility to protect free speech beyond state actors, or those performing traditional and exclusive government functions. The definition of state actors is nebulous, but the Supreme Court has drawn hard lines, recently holding that private entities which provide publicly accessible forums are not inherently performing state actions. Recognizing the limits of the First Amendment, Trump has attempted to bridge the gap between private and state action in his complaint, arguing that Facebook, Twitter, and Google censored his speech due to “coercive pressure from the government” and that therefore their “activities amount to state action.”

Though this argument may be somewhat of a stretch legally, it is worth considering whether social media platforms play an important enough role in our lives to hold them responsible for providing an unregulated forum for speech. Social media has become such a persistent and necessary feature of our lives that Supreme Court Justice Clarence Thomas has argued that these platforms should be considered “common carriers” and subject to heightened regulation in a similar manner to planes, telephones, and other public accommodations. And perhaps Justice Thomas has a point. About 70% of Americans hold an active social media account and more than half of Americans rely upon social media for news. With an increasing percentage of society not only using social media but relying upon it, perhaps social media companies would be better treated as providers of public accommodations rather than as private corporations with the right to act as gatekeepers to their services.

Despite Americans’ growing dependence on social media, some have argued that viewing social media as a public service is ill-advised. In an article in the National Review, Jessica Melugin argues that there is neither a strong legal nor a strong practical basis for considering social media entities common carriers. First, Melugin argues that exclusion is central to the business model of social media companies, which generate their revenue by choosing which advertisements to feature. Second, forcing social media companies to allow any and all speech to be published on their platforms may be more akin to compelling speech than to preventing its suppression. Lastly, social media companies, unlike other common carriers, face consistent market competition. Though Facebook, Instagram, and Twitter appear to have taken over for now, companies such as Snapchat and TikTok represent growing and consistent competition.

Another consideration which weighs against applying First Amendment duties to social media companies is the widespread danger of propaganda and misinformation made possible by their algorithmic approach to boosting content. Any person can post information, whether true or false, which has the potential to reach millions of people. Though an increasing number of Americans rely on social media for news, studies have found that those who do so tend to be less informed and more exposed to conspiracies. Extremists have also found a safe haven on social media platforms to connect and plan terrorist acts. With these considerations in mind, allowing social media companies to limit the content on their platforms may be justified in combating the harmful tendencies of an ill-informed and conspiracy-laden public and perhaps even in preventing violent attacks.

Despite the pertinent moral questions posed by Trump’s lawsuit, he is likely to lose. Legal experts have argued that Trump’s suit “has almost no chance of success.” However, the legal standing of Trump’s claims does not necessarily dictate their morality, which is equally worthy of consideration. Though Trump’s lawsuit may fail, the role that social media companies play in the regulation of speech and information will only continue to grow.

Facebook Groups and Responsibility


After the Capitol riot in January, many looked to the role that social media played in the organization of the event. A good amount of blame has been directed at Facebook groups: such groups have often been the target of those looking to spread misinformation, as there is little oversight within them. Furthermore, if set to “private,” these groups run an especially high risk of becoming echo chambers, as there is much less opportunity for information to flow freely within them. The algorithms that Facebook uses to populate your feed are also part of the problem: more popular groups are more likely to be recommended to others, which has given some of the more pernicious groups a much broader range of influence than they would have had otherwise. As noted recently in the Wall Street Journal, while it was not long ago that Facebook saw groups as the heart of the platform, abuses of the feature have forced the company to make some significant changes to how they are run.

The spread of misinformation in Facebook groups is a complex and serious problem. Some proposals have been made to try to ameliorate it: Facebook itself implemented a new policy under which groups in the categories that have caused the most trouble – civic and health groups – would not be promoted during the first three weeks of their existence. Others have called for more aggressive proposals. For instance, a recent article in Wired suggested that:

“To mitigate these problems, Facebook should radically increase transparency around the ownership, management, and membership of groups. Yes, privacy was the point, but users need the tools to understand the provenance of the information they consume.”

A worry with Facebook groups, as with a lot of online communication generally, is that it can be difficult to tell what the source of information is, as one might post information anonymously or under the guise of a username. Perhaps with more information about who is in charge of a group, then, one would be able to make a better decision as to whether to accept the information that one finds within it.

Are you part of the problem? If you’re actively infiltrating groups with the intent of spreading misinformation, or building bot armies to game Facebook’s recommendation system, then the answer is clearly yes. I’m guessing that you, gentle reader, don’t fall into that category. But perhaps you are a member of a group in which you’ve seen misinformation swirling about, even though you yourself didn’t post it. What is the extent of your responsibility if you’re part of a group that spreads misinformation?

Here’s one answer: you are not responsible at all. After all, if you didn’t post it, then you’re not responsible for what it says, or for whether anyone else believes it. For example, let’s say you’re interested in local healthy food options, and join the Healthy Food News Facebook group (this is not a real group, as far as I know). You might then come across some helpful tips and recipes, but also may come across people sharing their views that new COVID-19 vaccines contain dangerous chemicals that mutate your DNA (they don’t). This might not be interesting to you, and you might think that it’s bunk, but you didn’t post it, so it’s not your problem.

This is a tempting answer, but I think it’s not quite right. The reason has to do with how Facebook groups work, and how people are inclined to find information plausible online. As noted above, sites like Facebook employ various algorithms to determine which information to recommend to their users. A big factor that goes into such suggestions is how popular a topic or group is: the more engagement a post gets, the more likely it is to show up in your news feed, and the more popular a group is, the more likely it will be recommended to others. What this means is that mere membership in such a group will contribute to that group’s popularity, and thus potentially to the spread of the misinformation it contains.
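
To see the dynamic in miniature, here is a toy sketch (in Python) of a popularity-driven recommender. It illustrates the general principle described above, not Facebook’s actual algorithm; the group names, interaction weights, and scoring function are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Group:
    name: str
    members: int = 0
    likes: int = 0
    shares: int = 0

    def engagement_score(self) -> float:
        # Illustrative weights: every interaction, however casual, raises the score.
        return 1.0 * self.members + 0.5 * self.likes + 2.0 * self.shares

def recommend(groups, top_k=1):
    """Rank groups purely by engagement, with no regard for whether
    the content circulating in them is accurate."""
    return sorted(groups, key=lambda g: g.engagement_score(), reverse=True)[:top_k]

groups = [
    Group("Healthy Food News", members=5000, likes=1200, shares=300),
    Group("Local Gardening Tips", members=4800, likes=900, shares=250),
]
# Joining or liking shifts the ranking, even if you never post anything yourself.
print([g.name for g in recommend(groups)])
```

On a model like this, the only way to avoid boosting a group’s reach is to stop feeding its score, which is the intuition behind the argument that follows.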

Small actions within such a group can also have potentially much bigger effects. For instance, in many cases we put little thought into “liking” or reacting positively to a post: perhaps we read it quickly and it coheres with our worldview, so we click a thumbs-up, and don’t really give it much thought afterwards. From our point of view, liking a post does not mean that we wholeheartedly believe it, and it seems that there is a big difference between liking something and posting it yourself. However, these kinds of engagements influence the extent to which that post will be seen by others, and so if you’re not liking in a conscientious way, you may end up contributing to the spread of bad information.

What does this say about your responsibilities as a member of a Facebook group? There are no doubt many such groups that are completely innocuous, where people do, in fact, only share helpful recipes or perhaps even discuss political issues in a calm and reasoned way. So it’s not as though you necessarily have an obligation to quit all of your Facebook groups, or to get off the platform altogether. However, otherwise innocent actions like clicking “like” on a post can have much worse effects in groups in which misinformation is shared, and merely being a member of such a group can contribute to its popularity and thus to the extent to which it is suggested to others. This means that if you find yourself a member of such a group, you should leave it.

Is the Future of News a Moral Question?


In the face of increasing calls to regulate social media over monopolization, privacy concerns, and the spread of misinformation, the Australian government might become the world’s first to force companies like Google and Facebook to pay to license Australian news articles featured in those sites’ news feeds. The move comes after years of declining revenue for newspapers around the world as people increasingly get their news online instead of in print. But is there a moral imperative to make sure that local journalism is sustainable, and if so, what means of achieving this are appropriate?

At a time when misinformation and conspiracy theories have reached a fever pitch, the state of news publication is in dire straits. From 2004 to 2014, revenue for U.S. newspapers declined by over 40 billion dollars. Because of this, several local newspapers have closed and news staff have been cut. In 2019 it was reported that 1 in 5 papers had closed in the United States. COVID has not helped the situation: in 2020 ad revenue was down 42% from the previous year. Meanwhile, the revenue raised from digital advertising has grown exponentially, and estimates suggest that as much as 80% of online news is derived from newspapers. Unfortunately, most of that ad revenue goes to companies like Facebook and Google rather than to news publishers themselves.

This situation is not unique to the United States. Newspapers have been in decline in places like the United Kingdom, Canada, Australia, certain European nations, and more. Canadian newspapers recently published a blank front page to highlight the disappearance of news. In Australia, for example, circulation has fallen by over two-thirds since 2003. Last year over 100 newspapers closed down. This is part of the reason Australia has become the first nation to pursue legislation requiring companies like Google and Facebook to pay for the news that they use in their feeds. Currently for every $100 spent on advertising, Google takes $53 and Facebook receives $28. Under the proposed legislation, such companies would be forced to negotiate commercial deals to license the use of their news material. If they refuse to negotiate, they face stiff penalties of potentially 10 million dollars or more.

The legislation has been strongly opposed by Google and Facebook, which have employed tactics like lobbying legislators and starting campaigns on YouTube to get content creators to oppose the bill. Google has also threatened to block Australians from its services, telling the public, “The way Aussies search everyday on Google is at risk from new government regulation.” (Meanwhile, it has recently been taking some steps to pay for news.) Facebook has also suggested that it will pull out of Australia; however, the government has stated that it will not “respond to threats” and has said that paying for news will be “inevitable.” Australia is not the only jurisdiction that is moving against Google and Facebook to protect local news. Just recently, several newspapers in West Virginia filed a lawsuit against Google and Facebook for anti-competitive practices relating to advertising, claiming that they “have monopolized the digital advertising market, thereby strangling a primary source of revenue for newspapers.”

This issue takes on a moral salience when we consider the relative importance of local journalism. People who live in areas where the local news has disappeared report only hearing about big things like murders, while stories on local government, business, and community issues go uncovered. For instance, “As newsrooms cut their statehouse bureaus, they also reduced coverage of complex issues like utility and insurance regulation, giving them intermittent and superficial attention.” Without such news it becomes more difficult to deal with corruption and there is less accountability. Empirical research suggests that local journalism can help reduce corruption, increase the responsiveness of elected officials, and encourage political participation. The importance of local journalism is such that the decline of newspapers has been labeled a threat to democracy. Indeed, studies show that when people rely more on national news and social media for information, they are more vulnerable to misinformation and manipulation.

Other nations, such as Canada, have taken a different approach by having the federal government subsidize local news across the country with over half a billion dollars in funding. Critics, however, argue that declining newspapers are a matter of old models failing to adapt to new market forces. While many newspapers have tried to embrace the digital age, these steps can create problems. For example, some news outlets have tried to entice readers with a larger social media presence and by making the news more personalized. But if journalists are more focused on getting clicks, they may be less likely to cover important news that doesn’t already demand attention. Personalizing news also plays to our biases, making it less likely that we will encounter different perspectives, and more likely that we will create a filter bubble that will echo our own beliefs back to us. This can make political polarization worse. Indeed, a good example of this can be found in the current shift amongst the political right in the U.S. away from Fox News to organizations like NewsMax and One America News because they reflect a narrower and narrower set of perspectives.

Google and Facebook – and others opposed to legislation like that proposed in Australia – argue that both sides benefit from the status quo. They argue that their platforms bring readers to newspapers. Google, for example, claims that it facilitated 3.44 billion visits to Australian news in 2018. And both Google and Facebook emphasize that news provides limited economic value to their platforms. However, this seems like a strange argument to make; if the news doesn’t matter much for your business, why not simply remove the news feeds rather than wage a costly legal and PR battle?

Professor of Media Studies Amanda Lotz argues that the primary business of commercial news media has been to attract an audience for advertisers. This worked so long as newspapers were one of the only means of accessing information. With the internet this is no longer the case; “digital platforms are just more effective vehicles for advertisers seeking to buy consumers’ attention.” She argues that the news needs to get out of the advertising business: save journalism rather than the publishers. One way to do this would be to strengthen independent public broadcasters or to provide incentives to non-profit journalism organizations. This raises an important moral question for society: has news simply become a necessary public good, like firefighting and policing, that is not subject to the free market? If so, then the future of local news may be a moral question of whether news has any business in business.

Come into My Parler


Efforts to curtail and limit the effect of disinformation reached a fever pitch in the run-up to the 2020 election for President of the United States. The prominent social media platforms Facebook and Twitter, after long resisting significant top-down control of user-posted content, began actively combating misinformation. Depending on who you ask, this change of course either amounts to seeing reason or to abandoning it. In the latter camp are those ditching Facebook and Twitter for the relative newcomer Parler.

Parler bills itself as a free speech platform, exerting top-down control only in response to criminal activity and spam. This nightwatchman approach to moderation makes clear the political orientation of Parler’s founders and those people who have dumped mainstream platforms and moved over to Parler. Libertarian political philosophy concerning the proper role of state power was famously described by American philosopher Robert Nozick as relegating the state to the role of nightwatchman: leaving citizens to do as they please and only intervening to sanction those who break the minimal rules that underpin fair and open dealing.

Those making the switch characterize Facebook and Twitter, on the other hand, as becoming increasingly tyrannical. Any attempt to curate and fact-check introduces bias, claims Parler co-founder John Matze, whereas Parler aims to be a “neutral platform,” according to co-founder Rebekah Mercer. This kind of political and ideological neutrality is a hallmark aspiration of libertarianism and classical liberalism.

However, Parler’s pretension became hypocrisy as it banned leftist parody accounts and pornography. This is neither surprising nor, on its own, bad. As some have pointed out, every social media site faces the same set of issues with content and largely responds to them the same way. Still, Parler’s aspiration of libertarian neutrality when it comes to speech content makes its terms of service, which allow it to remove user content “at any time and for any reason or no reason,” and its policy of kicking users off the platform “even where the [terms of service] have been followed” particularly obnoxious.

But suppose that Parler stuck to its professed principles. What would it mean to be politically or ideologically neutral, and why would fact-checking compromise it? A simple way of thinking about the matter is embodied by Parler’s espoused position toward speech content: no speech will be treated differently by those in power simply on the basis of its message, regardless of whether that message is Democratic or Republican, liberal or conservative, capitalist or socialist. Stepping from the merely political to the ideological, to remain neutral would be to think that no speech content was false simply on its face. Here is where the “problem” of fact-checking arises.

We live, so we keep being told, in a “post-truth” society. Whatever this exactly means, its practical import is that distinct groups in society disagree fundamentally both over their political goals and over how to achieve them. The idea of fact-checking as a neutral arbiter between disagreeing parties breaks down in these situations because supposed facts will appear neutral only to parties who already agree about how to see the world at a basic level. That is, the appearance of a fact-value distinction will evaporate. (The distinction between facts (i.e., how the world allegedly is without regard to any agent’s perceptions) and values (i.e., how the world ought to be according to a given agent’s goals/preferences) is argued by many to be untenable.)

In this atmosphere, fact-checking takes on the hue of a litmus test, examining statements for their ideological bona fides. When a person’s claim is fact-checked and found wanting, it will appear to them not that a disinterested judge cast a stoic gaze out onto the world to see whether it is as the person says; instead, the person will feel that the judge looked into their own heart and rejected the claim as undesirable. When people feel this way, they will not stick around and continue to engage. Instead, they’ll pack up and go where they think their claims will get “fair” treatment. None of this is to say that fact-checking is necessarily a futile or oppressive exercise. However, it is a reason not to treat it as a panacea for all disagreement.

Owning a Monopoly on Knowledge Production


With Elizabeth Warren’s call to break up companies like Facebook, Google, and Amazon, there has been increasing attention to the role that large corporations play on the internet. Limited competition within different markets has become an important area of concern, but much of the debate centers on the economic and legal factors involved (such as whether there should be greater antitrust enforcement); the philosophical and moral issues have not received as much attention. If a select few corporations are responsible for the kinds of information we get to see, they are capable of exerting a significant influence on our epistemic standards, practices, and conclusions. This makes the issue a moral one as well.

Last year Facebook co-founder Chris Hughes surprised many with his call for Facebook to be broken up. Referencing America’s history of breaking up monopolies such as Standard Oil and AT&T, Hughes charged that Facebook dominates social networking and faces no market-based accountability. Earlier, Elizabeth Warren had also called for large companies such as Facebook, Google, and Amazon to be broken apart, claiming that they have bulldozed competition and are using private information for profit. Much of the focus on the issue has been on the mergers of companies like Facebook and Instagram or Google and Nest. The argument holds that these mergers are anti-competitive and are creating economic problems. According to lawyer and professor Tim Wu, “If you took a hard look at the acquisition of WhatsApp and Instagram, the argument that the effect of those acquisitions have been anticompetitive would be easy to prove for a number of reasons.” For one, he cites the significant effect that such mergers have had on innovation.

Still, others have argued that breaking up such companies would be a bad idea. They note that a concept like social networking is not clearly defined, and thus it is difficult to say that a company like Facebook constitutes a monopoly in its market. Also, unlike Standard Oil, companies like Facebook or Instagram are not essential services for the economy, which undermines potential legal justifications for breaking them up. Most of these corporations also offer their services for free, which means that the typical concerns about monopolies and anticompetitive practices, namely rising prices and costs of service, do not apply. Those who argue this way tend to suggest instead that the problem lies with the capitalist system or with a lack of proper regulation of these industries.

Most proponents and opponents focus on the legal and economic factors involved. However, there are epistemic factors at stake as well. Social epistemologists study questions like “how do groups come to know things?” and “how can communities of inquirers affect what individuals come to accept as knowledge?” In recent years, philosophers like Kevin Zollman have provided accounts of how individual knowers are affected by communication within their network of fellow knowers. Some of these studies demonstrate that the communication structure of an epistemic network, that is, which beliefs, evidence, and testimony get shared and with whom, can affect what conclusions an epistemic community will settle on. How the evidence, beliefs, and testimony of other knowers within the network are shared will affect what other people in the network take to be rational.
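To make the structural point vivid, consider a minimal toy sketch. It is written for this discussion rather than drawn from Zollman’s own models or the studies cited above; the agents, starting credences, and simple averaging rule are all illustrative assumptions. Each agent repeatedly averages its credence in some claim with the credences of the agents it listens to; changing nothing but the shape of the network changes what the community converges on.

```python
# Toy epistemic network (illustrative only): agents repeatedly average their
# credence with the credences of the agents they listen to.

def run_consensus(neighbors, credences, rounds=50):
    """neighbors maps each agent to the list of agents it listens to."""
    for _ in range(rounds):
        updated = {}
        for agent, peers in neighbors.items():
            pool = [credences[agent]] + [credences[p] for p in peers]
            updated[agent] = sum(pool) / len(pool)
        credences = updated
    return credences

# A "hub" network: everyone listens to agent 0, who listens to no one.
hub = {0: [], 1: [0], 2: [0], 3: [0], 4: [0]}
# A "ring" network: everyone listens to two peers; no privileged node.
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}

start = {0: 0.9, 1: 0.2, 2: 0.3, 3: 0.25, 4: 0.35}

print(run_consensus(hub, dict(start)))   # everyone ends up near the hub's 0.9
print(run_consensus(ring, dict(start)))  # everyone settles near the group average, 0.4
```

The sketch is not a model of any real platform. It only shows that, holding the individuals and their starting beliefs fixed, who occupies the central position in the network largely determines what the community ends up treating as settled.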

Once we factor in the ways that a handful of corporations are able to influence the communication of information in epistemic communities on the internet, a real concern emerges. Google and Facebook are responsible for roughly 70% of referral traffic on the internet. The figure varies by category of content: Facebook refers 87% of “lifestyle” content, Google refers 84% of job postings, and together the two are responsible for 79% of referral traffic regarding the world economy. Internet search is a common way of getting knowledge and information, and Google controls almost 90% of that field.

What this means is that a few companies are responsible for communicating the vast amounts of information, beliefs, and testimony shared by knowers all over the world. If we think of a global epistemic community, or even of smaller sub-communities, learning and eventually knowing things through the referrals of services like Google or Facebook, then a few large corporations are capable of affecting what we can know and what we will call knowledge. As Hughes noted in his criticism of Facebook, Mark Zuckerberg alone can decide how to configure Facebook’s algorithms to determine what people see in their News Feed, what messages get delivered, and what constitutes violent and incendiary speech. If a person comes to adopt many or most of their beliefs because of what they are exposed to on Facebook, then Zuckerberg alone can significantly determine what that person can know.

A specific example of this kind of dominance is YouTube. In the online video hosting marketplace, YouTube holds a significantly larger share than competitors like Vimeo or Dailymotion. Content creators know this all too well: YouTube’s policies on content and monetization have led many on the platform to lament the lack of competition. Creators are often confused about why certain videos get demonetized, what is and is not acceptable content, and what standards should be followed. In recent weeks the demonetization of history-focused channels has drawn particular attention. For example, a channel devoted to the history of the First World War had over 200 videos demonetized. Many of these channels have had to begin censoring themselves based on what they think is not allowed, and have started censoring words that would be totally acceptable on network television.

The problem isn’t merely one of monetization either. If a video is demonetized, it will no longer be promoted and recommended by YouTube’s algorithm. Thus, if you wish to learn something about history on YouTube, Google is going to play a large role in determining who gets to learn what. This can affect the ways that people evaluate information on these (sometimes controversial) topics, and thus what epistemic communities will call knowledge. Some of these content creators have begun looking for alternatives to YouTube because of these issues; however, it remains to be seen whether those alternatives will offer a real source of competition. In the meantime, much of the information that gets referred to us comes from a select few companies. These voices have significant influence (intentionally or not) over what we as an epistemic community come to know or believe.

This makes the issue of competition an epistemic one, but it is also inherently a moral one. This is because, as a global society, we are capable of regulating, in one way or another, the ways in which corporations impact our lives. This raises an important moral question: is it morally acceptable for a select few companies to determine what constitutes knowledge? Having information referred to us by corporations provides the opportunity for some to benefit over others, and we as a global society will have to determine whether we are okay with the significant influence they wield.

Who fact-checks the fact-checkers?

photograph of magnifying glass examining text

If you’re reading something about Facebook in the news these days, chances are you’re reading about how bad it is at preventing people from posting false or misleading information (either that, or it’s about concerns that Facebook is not good at keeping your personal information private). The platform has become notorious for being a place where conspiracy theories are allowed to run amok, and where pseudo- or anti-scientific views can receive strong endorsement by its user base. In an attempt to curb the spread of misinformation, Facebook has recently employed a number of fact-checking services. While Facebook has made use of fact-checkers for a while now, the number of people responsible for checking the entirety of user output has in the past been tiny, a problem to which Facebook has recently responded by quadrupling the number of its American fact-checking partners. There are a number of websites that offer fact-checking services and can provide ratings on posts indicating whether a claim is true or false, or whether it presents information in a misleading way. The hope is that such fact-checking will help stop the spread of false information on Facebook overall, and especially with regard to that which can be actively damaging, such as false claims that vaccines are unsafe.

While making use of fact-checkers seems like a good move on Facebook’s part, some have recently expressed concerns that one of the fact-checking websites that Facebook employs in the US (there are different fact-checking services employed for different countries, a full list of which can be found here) is politically biased: the site Check Your Fact, which is a subsidiary of the website Daily Caller. The Daily Caller is an unambiguously right-wing and pro-Trump website that often publishes articles denying climate change, and whose founder has expressed white supremacist views. There are concerns, then, that false or misleading claims made on Facebook that support a right-wing political agenda may not receive the same kind of scrutiny as other claims because of the political affiliation of one of the fact-checkers.

Vox recently noted one incident of this type, in which a conservative fact-checking partner that Facebook formerly used – the now-defunct Weekly Standard – was over-aggressive in designating a headline critical of then-Supreme Court nominee Brett Kavanaugh as false. Instead of controlling for false information, the fact-checking website in a sense created it, improperly flagging a headline that was, at worst, slightly misleading as outright false.

There are concerns, then, not only about the truth or falsity of individual claims being made on Facebook, but also about whether claims that fact-checkers are making about those claims are themselves true or false. What, then, are we supposed to do when faced with a claim on Facebook that has been fact-checked? Can we fact-check the fact-checkers?

There are, in fact, organizations that attempt to do just that. For instance, Facebook only uses fact-checkers that are certified by Poynter’s International Fact-Checking Network, an organization that evaluates fact-checkers on the basis of a code of principles, including “nonpartisanship and fairness,” “open and honest corrections,” and transparency of sources, funding, organization, and methodology. While all of these principles sound like good ones, we might still worry about whether such an organization can really pick out the reliable fact-checkers from the unreliable ones. For instance, Check Your Fact does, in fact, pass the standards of the International Fact-Checking Network.

What, then, of concerns about the partisanship of Facebook’s fact-checking partners? Are they overblown? Or should we go one step further, and fact-check those who fact-check the fact-checkers?

While this is perhaps not a bad idea, most people are probably not going to take the time to research the organization that determines the standards for fact-checkers when scrolling through Facebook. There is, however, perhaps a more pressing matter: in addition to how reliable these fact-checkers are – that is to say, how good they are at determining which claims are true, false, or misleading – there are also concerns about how effective they are – that is to say, how good they are at actually making it known that a false or misleading claim is, in fact, false or misleading. As reported at Poynter, there is reason to think that even if a claim is properly fact-checked as false, more people read the original false claim than the report showing that it is false. A worry, then, is that since information moves so quickly on Facebook it is often incredibly difficult for fact-checkers to keep up.

We might be worried about the efficacy of Facebook fact-checking for another reason, namely that people who have their posts fact-checked as false will probably not be deterred from posting similar claims in the future. After all, if you believe that the information you are sharing is true, the fact that a website tells you it is false may lead you not to reconsider your views, but instead simply to think that the fact-checking websites are wrong or biased.

So what are we to make of this complicated situation? Despite concerns about reliability and efficacy, making use of fact-checkers still seems to be a step in the right direction for Facebook: anything that can make any progress, even a little, towards stemming the tide of misinformation online is a good thing. What we perhaps should take away from all this is that fact-checking can be used as one tool among many for determining which Facebook posts you should pay attention to and which you should ignore.

The Ethics of Brand Humanization

close-up photo of Wendy's logo

Brand humanization is becoming increasingly common in all arenas of advertisement, but it’s perhaps most noticeable on social media. This strategy is exactly what it sounds like: corporations create social media accounts to interact directly with customers, and try to make their brand seem as human and relatable as possible. It’s ultimately used to make companies more approachable, more customer-oriented. The official Twitter account for Wendy’s, for example, has amassed an audience of nearly three million followers. Much of its popularity has to do with its willingness to interact with customers, as when the account famously roasted other Twitter users, or when it posts memes to reach out to a younger demographic. The goal is to make the brand itself feel like a real person, to remind the consumer of the human being on the other end of the interaction.

In an article advising brands how to humanize themselves in the eyes of consumers, Meghan M. Biro, a marketing strategist and regular contributor to Forbes, describes how a presence on social media allows companies,

“to build emotional connections with their customers, to become a part of their lives, both in their homes and—done right—in their hearts. The heart of this is ongoing, online dialogue. Both parties benefit. The customer’s idiosyncratic (and sometimes maddening) needs and wants can be met. The company gets increased sales, of course, but also instant feedback on its products—every online chat has the potential to yield an actionable nugget of knowledge.”

The tactic of presenting ads as a mutually beneficial conversation between consumer and brand has become increasingly prominent in recent years. Studies have shown that millennials hate being advertised to, so companies are adopting strategies like the one Biro recommends to restructure the consumer-company interaction in a way that feels less manipulative. However, not everyone believes this new arrangement is truly mutually beneficial. In an article for The New Inquiry, Kate Losse takes a critical view of conversational advertising. “The corporation,” she notes, “while needing nothing emotional from us, still wants something: our attention, our loyalty, our love for its #brand, which it can by definition never return, either for us individually or for us as a class of persons. Corporations are not persons; they live above persons, with rights and profits superseding us.” On the subject of using memes as marketing, she says, “The most we can get from the brand is the minor personal branding thrill of retweeting a corporation’s particularly well-mixed on-meme tweet to show that we ‘get’ both the meme and the corporation’s remix of it.” In this sense, the back-and-forth conversational approach is much more one-sided than it seems.

There is, however, a difference between traditional marketing strategies and the tactics employed by social media accounts to gain popularity. If you follow Wendy’s on Twitter, it’s because you choose to follow them, because you want to see their content on your feed. For those who don’t want to be directly advertised to, it’s as simple as not following (or, if you want to be more thorough, blocking) corporate Twitter accounts. Responding to transparent advertising with a sarcastic meme, an increasingly common and often funny response to these kinds of tweets, only gives the brand more exposure online, so the best strategy is not to engage at all.

Furthermore, a 2015 study on brand humanization conducted at Vrije Universiteit Amsterdam adds another dimension to this issue. Studying the positive correlation between social media presence and a brand’s reputation, the researchers noted that “the fact that exposure to corporate social media activity is, to a large degree, self-chosen raises the question whether these results reflect a positive effect of exposure on brand attitudes, or rather the reverse causal effect–that consumers who already have positive brand attitudes are more likely to choose to expose themselves to selected brand content.” No extensive studies have been done on this yet, but it might provide valuable insight into the actual impact of corporate Twitter accounts.

Using a Facebook page to take questions or criticism from consumers seems like a harmless and even productive approach to marketing through social media. Even corporate Twitter accounts posting memes, while not as beneficial to the consumer as companies like to present it, are hardly unethical. But brand humanization can steer companies into murky moral waters when they try too hard to be relatable.

In December of 2018, the verified Twitter account for Steak-umm, an American frozen steak company, posted a tweet that produced significant backlash. The tweet reads, “why are so many young people flocking to brands on social media for love, guidance, and attention? I’ll tell you why. they’re isolated from real communities, working service jobs they hate while barely making ends meet, and are living w/ unchecked personal/mental health problems.” A similar tweet from February of 2019, posted by the beverage company Sunny-D, reads cryptically, “I can’t do this anymore.” Both of these messages demonstrate two things. First, there is the strategy employed by modern companies of speaking to customers in the more humanizing first person, moving away from the collective corporate “we” to the individual (and therefore more relatable) “I”. The voice of corporations has changed: where brands were once desperate to come across as serious and professional, brands marketing to a twenty-something demographic now want to sound cool and detached, and to speak with the voice of an individual rather than a disembodied conglomerate of shareholders and executives.

Secondly, these brands are now appropriating and parroting millennial “depression culture”, which is often expressed through frustration at capitalism and its insidious effect on the individual. To quote Kate Losse again, “It isn’t enough for Denny’s [another prominent presence on the social media scene] to own the diners, it wants in on our alienation from power, capital, and adulthood too.” There is something invasive and inauthentic about this kind of marketing, and furthermore, something ethically troubling about serious issues being used as props to sell frozen food. The point of the Steak-umm tweet may be salient, but the moral implications of a corporate Twitter account appropriating social justice issues to gain attention left many uneasy. As John Paul Rollert, a professor of business and ethics at the University of Chicago, said in an interview with Vice, “It can’t say anything good about society when depressed people feel their best outlet is the Twitter account for Steak-umm.”

Nasty, Brutish and Online: Is Facebook Revealing a Hobbesian Dystopia?

Mark Zuckerberg giving a speech against a blue background

The motto and mission of Facebook – as Mark Zuckerberg (founder and CEO), Facebook spokespeople, and executives have repeated over the years ad nauseam – is to “make the world a better place by making it more open and connected.” The extent to which Facebook has changed our social and political world can hardly be overstated. Yet, over the past several years, as Facebook has grown into a behemoth with 2.2 billion monthly and 1.4 billion daily active users worldwide, the problems that have emerged from its capacity to foment increasingly hysterical and divisive ideas, to turbocharge negative messages and incendiary speech, and to disseminate misinformation raise serious questions about the ideal of openness and connectedness.

The problems, now well documented, that have attended Facebook’s meteoric rise indicate that there has been a serious, perhaps even deliberate, lack of critical engagement with what being ‘more open and connected’ might really entail, in terms of how those ideals can manifest themselves in new, powerful, and malign ways. The question here is whether Facebook is, or is able to be – as Zuckerberg unwaveringly believes – a force for good in the world; or, rather, whether it has facilitated, even encouraged, the emergence of some of the baser, darker aspects of human nature and human behavior in a quasi-Hobbesian “state of nature” scenario.

Thomas Hobbes was a social contract theorist in the seventeenth century. One of the central tenets of his political philosophy, with obvious implications for his view of the moral nature of people, was that in a “state of nature” – that is, without government, laws, or rules to which humans voluntarily (for our benefit) submit – we would exist in a state of aggression, discord, and war. Hobbes famously argued that, under such conditions, life would be “nasty, brutish, and short.” He thought that morality emerged when people were prepared to give up some of their unbridled freedom to harm others in exchange for protection from being harmed by others.

The upside was that legitimate sovereign power could keep our baser instincts in check, and could lead to a relatively harmonious society. The social contract, therefore, is a rational choice made by individuals for their own self-preservation. This version of the nature and role of social organization does, to be sure, rest on a bleak view of human nature. Was Hobbes in any way right that a basic aspect of human nature is cruel and amoral? And does this have anything to do with the kinds of behaviors that have emerged on Facebook through its ideal of fostering openness and connectivity, largely free from checks and controls?

Though Facebook has recently been forced to respond to questions about its massive surveillance operation, about data breaches such as the Cambridge Analytica scandal, about the use of the platform to spread misinformation and propaganda to influence elections, and about its use for stoking hatred, inciting violence, and aiding genocide, Mark Zuckerberg remains optimistic that Facebook is a force for good in the world – part of the solution rather than the problem.

In October 2018, PBS’s Frontline released a two-part documentary entitled The Facebook Dilemma, in which several interviewees claimed that, from unique positions of knowledge ‘on the ground’ or ‘in the world,’ they tried to warn Facebook about various threats of propaganda, fake news, and other methods being used on the platform to sow division and incite violence. The program meticulously details repeatedly missed, or ducked, opportunities for Facebook executives, and Mark Zuckerberg himself, to comprehend and take seriously the egregious nature of some of these problems.

When forced to speak about these issues, Facebook spokespeople and Zuckerberg himself have consistently repeated the line that they were slow to act on threats and to understand the use of Facebook by people with pernicious agendas. This is doubtless true, but to say that Facebook was merely unsuspecting of, or inattentive to, the harms the platform might attract is putting it very mildly; indeed, it casts Facebook’s response, or lack thereof, as rather benign. While not exactly claiming blamelessness, the line works to neutralize blame: ‘we are only to blame insofar as we didn’t notice; also, we are not really to blame because we didn’t notice.’

Though Facebook does take some responsibility for monitoring and policing what is posted on the site (removing explicit sexual content, sexual abuse material, and clear hate speech), it has taken a very liberal view in terms of moderating content. From this perspective it could certainly be argued that the company is to some extent culpable in the serious misuse of its product.

The single most important reason that so many malign uses of Facebook have been able to occur is the lax nature of editorial control over what appears on the site, and over how it is prioritized or shared, taken together with Facebook’s absolutely unprecedented capacity to offer granular, finely tuned, highly specific targeted advertising. It may be that Facebook has a philosophical defense for taking such a liberal stance, like championing and defending free speech.

Take, for example, Facebook’s ‘newsfeed’ feature. Tim Sparapani, Facebook Director of Public Policy from 2009 to 2011, told Frontline, “I think some of us had an early understanding that we were creating, in some ways, a digital nation-state. This was the greatest experiment in free speech in human history.” Sparapani added, “We had to set up some ground rules. Basic decency, no nudity and no violent or hateful speech. And after that, we felt some reluctance to interpose our value system on this worldwide community that was growing.” Facebook has consistently fallen back on the ‘free speech’ defense, but it is disingenuous for the company to claim to be merely a conduit for people to say what they like, when the site’s algorithms, determined by (and functioning in service of) its business model, play an active role.

In the Facebook newsfeed, the more hits a story gets, the more the site’s algorithms prioritize it. Not only is there no mechanism for differentiating between truth and falsehood here, nor between stories which are benign and those which are pernicious, but people are more likely to respond to (by ‘liking’ and ‘sharing’) stories with more outrageous or hysterical claims – stories which are less likely to be true and more likely to cause harm.
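As a minimal sketch of why this matters, consider a feed ordered purely by engagement. This is a hypothetical ranking rule, not Facebook’s actual algorithm; the stories, numbers, and weights are invented for illustration. Because accuracy never enters the score, the most outrage-inducing item wins the top slot.

```python
# Hypothetical engagement-only ranking (not Facebook's real algorithm):
# the score counts reactions and shares, and ignores accuracy entirely.

stories = [
    {"headline": "Measured policy analysis",   "likes": 120, "shares": 15,  "accurate": True},
    {"headline": "Outrageous but false claim", "likes": 900, "shares": 400, "accurate": False},
    {"headline": "Routine local news",         "likes": 60,  "shares": 5,   "accurate": True},
]

def engagement_score(story):
    # Shares weighted more heavily than likes; truth never enters the formula.
    return story["likes"] + 3 * story["shares"]

feed = sorted(stories, key=engagement_score, reverse=True)
for story in feed:
    print(engagement_score(story), story["headline"])
```

Any real newsfeed is vastly more complicated, but the sketch isolates the structural worry: a scoring function keyed only to reactions and shares is indifferent to whether a story is true, benign, or harmful.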

Roger McNamee, an early Facebook investor, told Frontline:  “…In effect, polarization was the key to the model – this idea of appealing to people’s lower-level emotions; things like fear and anger to create greater engagement and, in the context of Facebook, more time on site, more sharing, and therefore, more advertising value.” Because Facebook makes its money by micro-targeted advertising, the more engagement with a story, the more ‘hits’ it gets, the better the product Facebook has to sell to advertisers who can target individuals based on what Facebook can learn about them from their active responses. It is therefore in Facebook’s interest to cause people to react.

Facebook profits when stories are shared, and it is very often the fake, crazy stories, and those with the most far-flung rhetoric, that are shared most. But why should it be the case that people are more likely to respond to such rhetoric? This brings us back to Hobbes, and the question about the ‘darker’ aspects of human nature: is there something to be gleaned here about what people are like – what they will say and do if no one is stopping them?

The ‘real-world’ problems associated with fake news, such as violence in Egypt, Ukraine, the Philippines, and Myanmar, have emerged in the absence of a guiding principle – an epistemic foundation in the form of a set of ethics based on a shared conception of civilized discourse and a shared conception of the importance of truth. In this analogy, the editorial process might be thought of as a kind of social contract, and the effects of removing it might be read as having implications for what humans in a ‘state of nature,’ where behavior is unchecked, are really like. Perhaps too much openness and connectivity does not, after all, necessarily make the world a better place, and might sometimes make it a worse one.

The conclusion seems unavoidable: Facebook has provided something like a Hobbesian state of nature by relaxing, removing, or failing to use all but the most basic editorial controls, and it has facilitated, encouraged, and profited from all the nasty and brutish stuff that followed. If the Hobbesian analogy is borne out, perhaps it is time to revisit the question of what kinds of controls need to be implemented for the sake of rational self (and social) preservation.

 

Privacy and a Year in the Life of Facebook

Photograph of Mark Zuckerberg standing with a microphone

Mark Zuckerberg, the CEO of Facebook, declared on January 4 that he would “fix Facebook” in 2018. Since then, the year has contained scandal after scandal, and throughout it Facebook has provided a case study in how to protect or value informational privacy. On March 17, the New York Times and The Guardian revealed that Cambridge Analytica used information gleaned from Facebook users to attempt to influence voters’ behavior. Zuckerberg had to testify before Congress and rolled out new data privacy practices. In April, the Cambridge Analytica scandal was revealed to be more far-reaching than previously thought, and in June it was revealed that Facebook shared data with other companies such as Apple, Microsoft, and Samsung. The UK fined Facebook the legal maximum for illegal handling of user data related to Cambridge Analytica. In September, a hack accessed 30 million users’ data. In November, another New York Times investigation revealed that Facebook had failed to be sufficiently forthcoming about Russia’s political manipulation on the site, and on December 18 more documents came out showing that Facebook offered user data, even from private messages, to companies including Microsoft, Netflix, Spotify, and Amazon.

The repeated use of data about Facebook users without their knowledge or consent, often to manipulate their future behavior as consumers or voters, has led to Facebook’s financial decline and loss of public trust. The right to make your own decisions regarding access to information about your life is called informational privacy. We can articulate the tension in discussions over the value of privacy as one between the purported right to be left alone, on the one hand, and the supposed right of society to know about its members, on the other. The rapid increase in technology that can collect and disseminate information about individuals raises the question of whether the value of privacy should shift along with this shift in actual privacy practices, or whether greater efforts need to be devoted to protecting the informational privacy of members of society.

The increase in access to personal information is just one impact of the rise of information technology. Technological advances have also affected the meaning of personal information. For instance, commonly used apps and social media have made it easier to track your physical whereabouts, but Facebook’s data is also so useful because so much can be extrapolated about a person from seemingly unrelated behaviors, changing what sorts of information may be considered sensitive. Cambridge Analytica was able to use Facebook data to attempt to sway voting behavior because of correlations between activity on the social media site and political behavior. Advertising companies can take advantage of the data to better target consumers.

When ethicists and policy makers began discussing the right to privacy, considerations centered on large and personal life choices and protecting public figures from journalists. The aspects of our lives that we would typically consider most central to the value of privacy would be aspects of our health, say, our religious and political beliefs, and other aspects of life deemed personal such as romantic and sexual practices and financial situations. The rise of data analysis that comes with social media renders a great deal of our behaviors potentially revelatory: what pictures we post, what posts we like, how frequently we use particular language, etc. can be suggestive of a variety of further aspects of our life and behaviors.

If information regarding our behavior on platforms such as Facebook is revealing of the more traditionally conceived private domain of our lives, should this information be protected? Or should we reconceive what counts as private? One suggestion has been to acknowledge the brute economic fact accompanying the rise of these technologies: this data is worth money. It could thus be possible to abstract away from the moral value of, or right to, privacy and focus instead on the reality that data is worth something; if individuals own the data about themselves, perhaps they are owed the profits from the use of that data.

There are also moral reasons to protect personal data. If others have unrestricted access to a person’s whereabouts, health information, or the passwords protecting their financial accounts, that information could be used to harm them. Security and a right to privacy thus could be justified as harm prevention. They could also be justified via a right to autonomy, as data about one’s life can be used to unduly influence one’s choices. This is exacerbated by the way data changes relevance and import depending on the sphere in which it is used. For instance, health data used in your healthcare dealings has a different significance than the same data in the hands of a potential employer. If individuals have less control over their personal data, this can lead to discrimination and disadvantage.

Thus there are both economic or property considerations and moral considerations for protecting personal data. Zuckerberg failed to “fix” Facebook in 2018, but greater transparency about protections, and regulation of how platforms can use data, would be positive steps toward respecting the value of privacy in 2019.

How Much Should We Really Use Social Media?

Photograph of a person holding a smartphone with Instagram showing on the screen

Today, we live in a digital era. Modern technology has drastically changed how we go about our everyday lives. It has changed how we learn, for we can retrieve almost any information instantaneously, and even teachers can engage with students through the internet. Money is exchanged digitally. Technology has also changed how we are entertained, for we watch what we want on our phones. But perhaps one of the most popular and equally controversial changes that modern technology has brought to society is how we communicate: social media. We live in an era where likes and retweets reign supreme. People document their every thought using platforms such as Facebook and Twitter. They share every aspect of their lives through platforms like Instagram. Social media acts as a way to connect people who never would have connected without it, but its effects can also be negative. Given all the controversy that surrounds social media, should we be using it as often as we do?

If you were to walk down the street, wait in line at a restaurant, go to a sporting event, or go anywhere at all, you’d most likely see people on their phones, scrolling through various social media platforms or sharing the most recent funny dog video. And this phenomenon is happening everywhere, all the time. Per Jessica Brown, a staff writer for the BBC, three billion people, around 40 percent of the world’s population, use social media. Brown adds that we spend an average of two hours per day on social media, which translates to half a million pieces of content shared every minute. How does this constant engagement with social media affect us?

According to Amanda Macmillan of Time Magazine, in a survey that aimed to gauge the effect social media platforms have on mental health, Instagram performed the worst. Per Macmillan, the platform was associated with high levels of anxiety, depression, bullying, and other negative symptoms. Other social media platforms, but Instagram especially, can cause FOMO, or the “fear of missing out”: users scroll through their feeds and see their friends having fun that they cannot experience. For women users, there is also the pressure of unrealistic body images. In the survey that ranked social media platforms by their effect on users, one participant explained that Instagram makes girls and women feel that their bodies aren’t good enough because other users add filters and alter their pictures to look “perfect,” or like the ideal image of beauty. The manipulation of images on Instagram can leave users with low self-esteem, anxiety, and a general sense of insecurity. The negativity that users feel because of what others post can create a toxic environment. Would the same effects occur if people spent less time on social media? If so, maybe users need to take a hard look at how much time they are spending. Or social media platforms could more closely monitor the content that is posted in order to prevent some of the mental health effects that some users experience.

Although Instagram can have adverse effects on mental health, it can also create a positive environment for self-identity and self-expression, and it can be a place of community building and support as well. However, such positive outcomes require all users to cooperate in making the digital space a positive environment. Based on the survey of social media platforms, though, this does not seem to be the case, and currently the pros of platforms like Instagram seem to be far outweighed by the cons.

Although Facebook and Twitter were ranked higher than Instagram in terms of negatively affecting users’ mental health, they can still have adverse effects as well. In a survey of 1,800 people, women were found to be more stressed than men, and a large factor in their stress was Twitter. However, it was also found that the more women used Twitter, the less stressed they became. Twitter’s role as both a stressor and a coping mechanism likely depends on the type of content that women were interacting with. In another survey, researchers found that participants reported lower moods after using Facebook for twenty minutes compared to those who just browsed the internet, though the weather that day (e.g., rainy or sunny) could also have been a factor in users’ moods.

Although social media can have adverse effects on the mental health of its users, it is also a great way to connect with others. It can act as a cultural bridge, bringing people from all across the globe together, and it is a way to share content that can be positive and unite people with similar beliefs. With these positives and negatives in mind, should we change how much we are using social media, or at least try to regulate it? People could take it upon themselves simply to stay off social media sites, although in the digital age we live in, that might be a hard feat to pull off. After all, too much of a good thing can be a bad thing, as the surveys on social media demonstrate. But perhaps we should be looking at the way we use social media rather than the time we spend on it. If users share positive content and strive to create a positive online presence and community, other users might not face the mental health issues that arise after social media use. But then again, people should be free to post whatever content they want. At the end of the day, users have their own agendas for how they manage their social media. So perhaps it is up to every individual to look at their own health and their social media usage, and to regulate it based on what they see in themselves.

Facebook and the Rohingya Genocide

Photograph of a long line of people in a refugee camp



The Rohingya are a mixed ethno-religious group that have lived in Myanmar’s Rakhine province for centuries. The Rohingya are mostly Muslim, though a minority of Hindus exist among their number. Both religious identities are vastly outnumbered by the 88% Buddhist population of Myanmar. Despite their long residence in the area, the Rohingya are not among the eight major ethnic groups recognized by the government. Instead, the Burmese government has systematically worked to strip the Rohingya of citizenship, characterizing them as ethnic and religious outsiders, chiefly referred to as “Bengalis.”

Stringent restrictions on mobility, employment, and eventually voting rights left the now-stateless Rohingya completely disenfranchised over a period of decades, leading them to be labeled “the most persecuted people in the world.” Amnesty International and Desmond Tutu described the Burmese treatment of the Rohingya as apartheid.

In 2016, men with knives and sharpened sticks attacked police outposts along the Burmese-Bangladesh border, killing a handful of officers. The Arakan Rohingya Salvation Army (ARSA) claimed responsibility for these attacks as a protest against the harms suffered by the Rohingya. The Burmese military retaliated with a full-scale pogrom against the Rohingya, one that continues to the present day.

The military instituted a reign of terror, using murder, rape, and torture against this already battered people. Some 392 villages were partially or wholly destroyed, and 10,000 Rohingya deaths is considered a conservative estimate of the bloodshed. According to the UN’s count, 723,000 Rohingya have fled to neighboring countries.

Recent UN reports found evidence of a concerted, premeditated effort on the part of Burmese generals to engage in ethnic cleansing. Aung San Suu Kyi, leader of Myanmar and 1991 Nobel Peace Prize winner, who herself once suffered at the hands of the Burmese military, has long ignored or denied atrocities against the Rohingya, eliciting international censure. Buddhist monks like Ashin Wirathu, though idealized in the Western imagination as the Platonic realization of the pacifist, play a significant role in advocating violence against Muslims in the name of Buddhist nationalism.

In this systematic decimation of the Rohingya, Burmese authorities found help from a surprising quarter: Facebook. The UN ascribed to Facebook a fundamental role in the dissemination of hate and disinformation. For most people in Myanmar, Facebook is the only source of information, which made it easy for military generals to deploy the platform as a covert propaganda tool. Their efforts reached 12 million users (a large chunk of the national population of 51 million). Recently, in response to intense international scrutiny, Facebook finally announced that it was removing the accounts of twenty military individuals and organizations, provoking a greater outcry among the Burmese than the Rohingya genocide itself.

Facebook hate speech throughout the Burmese ethnic cleansing was not just a concerted military operation. It flourished among political parties in Myanmar. An analysis by Buzzfeed News found that, of four thousand posts by Burmese politicians, one in ten contained hate speech that violated Facebook’s community standards. Examples included “othering” comments comparing Rohingya to animals, misogynistic statements against Muslim women saying that they were “too ugly to rape,” claims that the Rohingya faked their tragedies and that Muslims were seeking to out-populate Buddhists, and direct threats of bloodshed. After months of inaction, when confronted by a Buzzfeed representative, Facebook finally began to take some of these posts down.

It is surprising that, in the words of writer Casey Newton, it took “a genocide to remove someone from Facebook.” It is slightly less surprising in light of Facebook’s policies and track record on dealing with hate speech on scales less than genocide. Through numerous shared user experiences, we see a picture forming of Facebook’s extraordinarily crude application of their officially “neutral” policy. Within the less extreme North American context, women regularly get suspended by Facebook administrators for calling out men who threaten them with rape and violence (while their harassers suffer no consequences). Meanwhile, black children are not a protected group, although white men are. Danielle Citron, law professor at the University of Maryland and expert on information privacy, notes that Facebook’s context-blind algorithms purporting to curb hate speech ultimately serve to “protect the people who least need it and take it away from those who really need it.”

Facebook possesses the resources to hire experts on best practices in regulating hate speech and propaganda, even in highly volatile contexts. And yet the platform falls wide of the mark in confronting hate speech, harassment, and disinformation even in stable democracies. What is holding it back?

Facebook’s culpable vulnerability to becoming a propaganda machine and fuel for unsavory regimes will continue unless civil society devises clear norms to demand of it and other social media platforms. We must work to translate social, scientific, and political knowledge about how hate and violence are generated in local contexts into standards these platforms can be held to. We must also establish minimum standards for internal oversight on social media so that plausible deniability on the part of corporations can no longer be an option. Facebook is a reminder that corporations are not guided by the advancement of humankind but by markets and users. Being indifferent to outcomes, their platforms can nurture community building, the spread of knowledge, and skill-building, or they can foster intense group identification, disinformation, hatred, and government propaganda. As Facebook is currently the global giant of social media, synonymous with the Internet itself in Myanmar, it is up to us as members of the international community to hold it, along with the other players in this tragedy, accountable.

The Ethics of Facebook’s Virtual Cemeteries

A photo of reporters taking pictures of the Facebook logo with their phones.

In May, Facebook reported hitting 1.94 billion users—a statistic that speaks to the tremendous popularity and influence of the social network.  As any Facebook user knows, members must take the good aspects of the technology with the bad.  The network can be a great place to reconnect with old friends, to make new ones, and to keep in touch with loved ones who live far away.  Unfortunately, conversations on Facebook also frequently end friendships. Facebook profiles and posts often tell us far more about people than may seem warranted by the intimacy level of our relationship with them.

Continue reading “The Ethics of Facebook’s Virtual Cemeteries”

Censorship on Social Media

It’s no secret that Republican presidential nominee Donald Trump has been known to write inflammatory posts on his social media platforms. Facebook employees have been questioning how to deal with these; some say that Trump’s posts calling for a ban on Muslim immigration “should be removed for violating the site’s rules on hate speech.” Facebook CEO Mark Zuckerberg definitively halted this talk when he said, “we can’t create a culture that says it cares about diversity and then excludes almost half the country because they back a political candidate.” A Wall Street Journal article uses this as a stepping stone to discuss whether or not presidential candidates should have more wiggle room than the average Joe when it comes to potentially damaging posts. Regardless of who is posting, however, such incidents raise the question: to what extent is it appropriate for social media platforms to censor posts at all?

Continue reading “Censorship on Social Media”

Fighting Obscenity with Automation

When it comes to policing offensive content online, Facebook’s moderators often have their work cut out for them. With billions of users, filtering out offensive content ranging from pornographic images to videos promoting graphic violence and extremism is a never-ending task. And, for the most part, this job largely falls on teams of staffers who spend most of their days sifting through offensive content manually. The decisions of these staffers – which posts get deleted, which posts stay up – would be controversial in any case. Yet the politically charged context of content moderation in the digital age has left some users feeling censored over Facebook’s policies, sparking a debate on automated alternatives.

Continue reading “Fighting Obscenity with Automation”

Social Media Vigils and Mass Shootings

In the wake of the largest mass shooting in the United States to date, Facebook and other social media sites have been flooded with posts honoring the victims in Orlando. Many such posts include the faces of the victims, rainbow banners and “share if you’re praying for Orlando” posts. Although there is nothing particularly harmful about sharing encouraging thoughts through social media, opinions are surfacing that it might do more harm than good.

Continue reading “Social Media Vigils and Mass Shootings”

The Socioeconomic Divide of Dating Apps

Many are familiar with the popular dating app “Tinder,” best known for its quick “swiping” method of indicating interest in nearby users and creating “matches.” In an apparent effort to get away from its reputation as simply a convenient “hook-up” app and move closer to its original dating purpose, Tinder recently announced that profiles will now feature work and education information. The change doesn’t go as far as apps like Luxy, which screen out those with less education or a lower income entirely, but it does carry possibly problematic consequences. Tinder has marketed this change as a response to user requests for added profile details that help users make more “informed choices.” Yet some are wary that this change comes with an ulterior motive.

Continue reading “The Socioeconomic Divide of Dating Apps”