
Free Speech and the Media Matters Lawsuit

image of 'no signal' TV screen with test pattern

In November 2023, Elon Musk filed a lawsuit against Media Matters (a left-leaning nonprofit dedicated to “monitoring, analyzing, and correcting conservative misinformation”) in response to its investigative report suggesting that X, formerly known as Twitter, ran corporate advertisements alongside Nazi content. As a result, corporations including IBM and Comcast pulled their ads, further damaging a company whose reputation and finances were already bruised. Media Matters is just the latest to raise concerns about an increase in Nazi and white nationalist content on Twitter enabled by its updated content policies.

But Musk is a self-described “free speech absolutist,” and his laissez-faire attitude towards hate speech has informed those policy decisions. While Musk’s position might seem extreme, it is not without precedent. The ACLU, for example, has taken a similar stance. Though it has come out against Musk in response to a lawsuit similar to the Media Matters one, it also represented white nationalist and Unite the Right organizer Jason Kessler in Virginia courts. The ACLU defended this action in a public statement, writing that the government should not be the arbiter of “when and whether the voices opposing a person’s speech can be preferred,” even when that speech is “deeply offensive to others.”

The question of free speech, especially hate speech and misinformation, is nothing new, even on social media. Facebook has long been criticized for its tolerance of hate speech and misinformation, while Twitter has time and again drawn public ire, with both Republicans and Democrats raising accusations of censorship. To explore this problem, it is first helpful to consider what limiting free speech is not. While the United States Constitution does guarantee the right to free speech under the First Amendment, this pertains to governmental interference. The First Amendment does not obviously prohibit Musk’s permissive content policies, Media Matters’s attack on those policies, or corporations’ decisions to spend their advertising dollars elsewhere.

Even so, social media’s power as a public forum complicates things. Social media corporations may be private, but they play an outsized role socially and politically, often with negative results. We saw this during the COVID pandemic, for example, when social media misinformation was responsible for increased mortality rates. Due to the magnitude of social media’s impact, there might be grounds for restricting speech in those private spheres, and some places already do so. Germany, for example, has strict hate speech laws that have been expanded to include internet speech, though not without controversy. (One German judge recently ruled that the law was government overreach and an infringement on free speech.) Likewise, in the United States, there are some limits imposed even on constitutionally guaranteed freedoms: while the Second Amendment guarantees the right to bear arms, some guns, like short-barreled shotguns and automatic weapons, are nonetheless illegal.

While there may be good reasons for favoring permissive free speech policies, they can quickly lead to what the Austrian-British philosopher Karl Popper called the paradox of tolerance. A tolerant society, one that is permissive of a plurality of viewpoints and expressions, must be tolerant of all opinions — except those that are intolerant. Intolerance undermines the very conditions for a tolerant society. In a free society, we should allow for any belief so long as it can be countered by reason. But an intolerant position, on Popper’s account, is one that refuses rational argument. And those who refuse to participate in rational, common discourse often express their positions through coercion, threatening to destroy the tolerant and, with them, free society. One example might be a Holocaust denier who ignores the historical and testimonial evidence supporting those events as historical fact. This person’s discourse is either irrational or not in good faith; in either case, they are not engaging in rational argument. Popper would say it is no surprise, then, that we often see violent speech and actions coming from those who hold this view.

Applying Popper’s criterion for identifying intolerant speech, however, may be especially difficult in our current socio-political climate. As Thomas Hobbes said, when it comes to the perception of our own rationality, “almost all men think they have [it] in a greater degree” than any other. With so much misinformation swirling around, the marketplace of ideas has lost shared conceptions of evidence and reasons. Since the grounds for discourse are themselves in question, both parties to a dispute are open to accusations of irrationality from the other. Mob rule then decides which opinions are disqualified, and this leads right into Popper’s worries about intolerance. So, while his paradox might be a helpful way of framing the problem, it does not offer much practical advice for escaping it.

We might gain more traction, however, by looking to the British philosopher John Stuart Mill. Mill is something like a free speech absolutist himself, arguing that personal liberties must be “absolute and unqualified.” Like Popper, Mill argues that society must allow for unpopular opinions, or it is not a free society at all. When we silence people we disagree with, we assume that our beliefs are true beyond correction. Likewise, when our beliefs go unchallenged, we hold them superficially and without much conviction. When unpopular ideas are expressed, they provide an opportunity to refine our own opinions and understand them more clearly. For reasons like these, Mill says we should allow freedom of speech and expression with almost no limits. There is, however, one important exception: the harm principle. Our vast personal freedoms — including the freedom of speech — end when they harm others. There is already legal precedent for restricted speech in the United States on these grounds: fraud and incitement to violence are not protected speech, for example, due to their harmful consequences. A plurality of opinions, however unpopular, should then be welcomed, so long as expressions of those opinions do not cause these kinds of injury to others.

Where does that leave us?

For both Popper and Mill, unqualified free speech rests upon the benefits of free discourse outweighing the risks of abhorrent speech; as a free society, we must allow space for persuasion. But an additional factor not yet considered here is that social media platforms like Twitter, Facebook, and Instagram are different from other media that reach large audiences, such as radio, television, and other internet platforms like Substack. Timelines and newsfeeds are uniquely addictive. While social media has the illusion of offering the public a marketplace of ideas, newsfeeds and timelines are designed to engross users indefinitely, limiting their exposure to opinions that are new or different from their own. Social media can be a tool for discourse, but it is often the antithesis of what Popper and Mill envisioned when advocating for unrestricted speech — an echo chamber that is susceptible to validating poor arguments and calcifying opinions without any opportunity for refutation. While this might be enough of a reason for a free speech absolutist to limit certain speech on social media, there remains the tremendous challenge of determining what such algorithms should filter and how.

This is no easy task. Musk’s lawsuit is premised on the economic harm that Media Matters’s speech allegedly caused X. Yet advertisers, motivated by the economic risks of being associated with Nazi content, could make similar arguments. Media Matters’s report is motivated by different harms, namely the social, psychological, and physical harms that it believes unrestricted white nationalist content causes. These different types of harm are not easily parsed, and one harm often indirectly causes another; someone physically harmed may not be able to return to work, for example. Yet, in the Media Matters case, direct harms like political polarization and stoked racism (social), increased hate crimes (physical), and doxing or threats (psychological) are arguably more destructive than the direct economic harm of lost corporate revenue. Of course, that is only if the types of hate speech the report draws attention to are directly responsible for causing those harms.

Does this lead us back to the same challenges facing the paradox of tolerance? Perhaps not. Where the paradox of tolerance faces challenges due to the difficulty of assessing rational discourse, cases of harm might be more easily measured. One important first step could be listening to members of communities affected by hate speech, rather than assuming on their behalf that there is or is not harm. When navigating the difficult problems of internet free speech and its limits, we might find it helpful to begin not by defining free speech, but by asking what counts as harm.

The Right to Block on Social Media

photograph of couple looking at smartphone behind window

On August 18, Elon Musk suggested that the blocking feature on Twitter, which allows users to prevent another user from interacting with them in any way on the site, would soon be removed. (Musk rebranded Twitter “X,” though it still uses the domain twitter.com. For clarity, I’ll refer to this social media site as Twitter, as many of its users still do.) Musk claimed that the feature would be limited to blocking direct messages, adding in a subsequent post that the feature “makes no sense.” This declaration was met with a good deal of criticism and disbelief among Twitter users.

Twitter’s CEO Linda Yaccarino later walked back the statement, claiming that the company is “building something better than the current state of block and mute.” Musk’s plan may be unlikely to materialize anyway, since the guidelines for both the App Store and the Google Play Store appear to require that any app offering user-generated content allow that content to be blocked.

But Musk’s suggestion raises the question of whether blocking on social media is something users have a right to. I won’t attempt to comment on any relevant legal right, but let’s consider the users’ moral rights.

First, a blocking ban violates our right to privacy. We have a right not to expose ourselves to content — and to people — on social media sites. The privacy achieved with blocking on social media goes in two directions: blocking keeps one’s own posts from being viewed by another user, and it also prevents the other user from contacting or interacting with the person who blocked them. In preventing another person from viewing one’s posts, a person can limit who accesses their personal information, thoughts, photos, and social interactions with other users. Even when posts are public, for users who aren’t public figures acting in a public capacity, the ability to retain some privacy when desired is valuable. Blocking is essential for achieving this privacy.

Privacy is also a matter of the ability to prevent someone else from entering your space. Twitter is a place where people meet others, interact, and learn about the world. It facilitates a unique kind of community-building, and thus is a unique kind of place — one that can at once be both public and intimate, involving networks of friends and parasocial relationships. Just as the ability to prevent an arbitrary person from broadcasting their thoughts into your home is essential for privacy, so also the ability to block interactions from someone on social media is an important means of privacy in that space.

Second, the ability to block an account on social media is necessary for safety. Blocking allows users to prevent future harassment, private messages, or hate speech from another user, thus protecting their mental health. By reasoning similar to that behind a restraining order, the ability to block also protects the user from another user’s attempt to maintain unwanted contact or to stalk them through the site. Blocking alone doesn’t accomplish these goals perfectly, but it is necessary for achieving them for anyone who uses social media.

Important to both the above points is the lack of a feasible alternative to Twitter. It’s not always possible for someone to simply use another form of social media to prevent unwanted interactions. Not all platforms have the same functions or norms. The default public settings of Twitter (and its permitting anonymous accounts) make it a much different place from Facebook, which defaults to private posts and requires full names from its users. Twitter has been a successful home for activism and real-time crisis information. Despite recent attempts to launch competing sites, no other social media site compares to Twitter in terms of reach and, for better and worse, ethos. One can’t simply leave the party to avoid interactions as one does in real life; there’s no viable alternative place to go.

Third, blocking gives users more agency than reporting users for suspension or banning. Blocking is immediate, user-achieved, and not dependent on another entity’s approval. It is more efficient than reporting users for suspension or banning, because it does not require either the time or effort that goes into deciding the results of these reports. Neither does blocking depend on the blocked user having violated one of the terms of use on the site, such as rules against hate speech. If I can block another user for any personal reason whatsoever, I have much greater control over my social life online.

With these considerations in mind, it’s worth pointing out that one personal account blocking another is not a case of government censorship or online moderation. People are free to block for any reason whatsoever, without being beholden to principles about what a government or business may rightly censor. Moral considerations apply whenever people act towards each other, so this is not to say that nothing could make blocking wrong in a particular case. But individuals do not have a blanket moral obligation to allow others to say whatever they want to them, even though a government or the site itself might have no standing to prevent the person from saying it.

One worry you might have is that blocking could intensify echo chambers. An echo chamber is a social community bound by shared beliefs and insulated from opposing evidence. If a person always blocks people who challenge their political ideas, they will likely find themselves in an environment that’s not particularly conducive to forming justified beliefs. This effect is intensified if it should turn out that the blocking actions of a user are then fed into the algorithm that determines what posts show up on one’s social feed. If the algorithm favors displaying accounts the user is likely to find much in common with, then blocking gives highly useful information that would likely result in some further insulation from differing viewpoints beyond the specific account that the user blocks.
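
To make the worry concrete, here is a minimal sketch, in Python, of how a ranking step might fold block signals into personalization. Everything in it is hypothetical – the names, the scoring rule, and the assumption that blocks are used this way at all:

    # Hypothetical sketch: folding block actions into feed ranking.
    # All names and weights are invented for illustration.

    def rank_feed(candidate_posts, blocked_authors, similarity):
        """Order posts, down-weighting anything similar to blocked accounts."""
        def score(post):
            base = post["engagement_score"]
            # Penalize posts whose authors resemble accounts the user blocked.
            penalty = max(
                (similarity(post["author"], author) for author in blocked_authors),
                default=0.0,
            )
            return base * (1.0 - penalty)
        return sorted(candidate_posts, key=score, reverse=True)

If the penalty is weighted heavily, each block quietly prunes not just one account but a whole neighborhood of similar viewpoints from the feed.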

Outrage breeds engagement, so the algorithm may instead use information about blocks to show posts that might get the user riled up. But seeing chiefly the most extreme of opposing views does not necessarily diminish the strength of an echo chamber. In fact, if one’s political opponents are shown mostly in an extreme form most likely to generate engagement, their presence might actually serve as evidence that one’s own side of the issue is the rational one. So, even an algorithm that uses blocks as an indication of what outrages the user — and therefore feeds the user more of that — could contribute to a situation where one is insulated from opposing viewpoints. This issue stems from the broader structure and profit incentives of social media, but it is worth considering alongside the issues of privacy, safety, and agency discussed above — in part because the ability to foster an environment that is conducive to forming justified beliefs is itself an important part of safety and agency.

Although it is imperfect, blocking on social media protects users’ privacy, safety, and agency. It is not a matter of government or corporate censorship, and it is necessary for protecting the moral rights of users. Contrary to Musk’s claim, a blocking feature makes moral sense.

Calibrating Trust Amidst Information Chaos

photograph of Twitter check mark on iphone with Twitter logo in background

It’s been a tumultuous past few months on Twitter. Ever since Elon Musk’s takeover, there have been almost daily news stories about some change to the company or platform, and while there’s no doubt that Musk has his share of fans, many of the changes he’s made have not been well-received. Many of these criticisms have focused on questionable business decisions and almost unfathomable amounts of lost money, but Musk’s reign has also produced a kind of informational chaos that makes it even more difficult to identify good sources of information on Twitter.

For example, one early change that received a lot of attention was the introduction of the “paid blue check mark,” whereby one could pay for what was previously a feature reserved for notable figures on Twitter. This infamously led to a slew of impersonators creating fake accounts, the most notable being the phony Eli Lilly account that had real-world consequences. In response, changes were made: the paid check system was modified, then re-modified, then color-coded, then the colors changed, and now it’s not clear how the system will work in the future. Additional changes have been proposed, such as a massive increase in the character limit for tweets, although it’s not clear whether they will be implemented. Others have recently made their debut, such as a “view count” that has been added to each tweet, next to “replies,” “retweets,” and “likes.”

It can be difficult to keep up with all the changes. This is not a mere annoyance: since it’s not clear what will happen next, or what some of the symbols on Tweets really represent anymore – such as those aforementioned check marks – it can be difficult for users to find their bearings in order to identify trustworthy sources.

More than a mere cause of confusion, informational chaos presents a real risk of undermining the stability of online indicators that help people evaluate online information.

When evaluating information on social media, people appeal to a range of factors to determine whether they should accept it, for better or for worse. Some of these factors include visible metrics on posts, such as how many times a post has been approved of – be it in the form of a “like” or a “heart” or an “upvote,” etc. – shared, or interacted with in the form of comments, replies, or other measures. This might seem to be a blunt and perhaps ineffective way of evaluating information, but it’s not just that people tend to believe what’s popular: given that on many social media platforms it’s easy to misrepresent oneself and generally just make stuff up, users tend to look to aspects of their social media experience that cannot easily be faked. While it’s of course not impossible to fabricate numbers of likes, retweets, and comments, it is at least more difficult to do so, and so these kinds of markers often serve as quick heuristics to determine if some content is worth engaging with.

There are others. People will use the endorsement of sources they trust when evaluating an unknown source, and the Eli Lilly debacle showed how people used the blue check mark at least as an indicator of authenticity – unsurprisingly, given its original function. Similar markers play the same role on other social media sites – the “verified badge” on Instagram, for example, at least gives users the information that the given account is authentic, although it’s not clear how much “authenticity” translates to “credibility.”

(For something that is so often coveted among influencers and influencer-wannabes, there appears to be surprisingly little research on the actual effects of verification on levels of trust among users: some studies seem to suggest that it makes little to no difference in perceived trustworthiness or engagement, while others suggest the opposite.)

In short: the online world is messy, and it can be hard to get one’s bearings when evaluating the information that comes at one constantly on social media.

This is why making sudden changes to even superficial markers of authenticity and credibility can make this problem significantly worse. While people might not be the best at interpreting these markers in the most reliable ways, having them be stable can at the very least allow us to consider how we should respond to them.

It’s not as though this is the first change that’s been made to how people evaluate entries on social media. In late 2021, YouTube removed publicly-visible counts of how many dislikes videos received, a change that arguably made it more difficult to identify spam, off-topic, or otherwise low-quality videos at a glance. While relying on a heuristic like “don’t trust videos with a bunch of dislikes” is not always going to lead you to the best results, having a stable set of indicators can at least help users calibrate their levels of trust.

So, it’s not that users will be unable to adjust to changes to their favorite online platforms. But numerous changes of uncertain value or longevity bring disorientation. Combined with Musk’s recent unbanning of accounts that were previously deemed problematic, and the resulting overall increase in misinformation spreading around the site, conditions are made even worse for those looking for trustworthy sources of information online.

No Fun and Games

image of retro "Level Up" screen

You may not have heard the term, but you’ve probably encountered gamification of one form or another several times today already.

‘Gamification’ refers to the process of embedding game-like elements into non-game contexts to increase motivation or make activities more interesting or gratifying. Game-like elements include attainable goals, rules dictating how the goal can be achieved, and feedback mechanisms that track progress.

For example, Duolingo is a program that gamifies the process of purposefully learning a language. Users are given language lessons and tested on their progress, just like students in a classroom. But these ordinary learning strategies are scaffolded by attainable goals, real-time feedback mechanisms (like points and progress bars), and rewards, making the experience of learning on Duolingo feel like a game. For instance, someone learning Spanish might be presented with the goal of identifying 10 consecutive clothing words, where their progress is tracked in real time by a visible progress bar, and success is rewarded with a colorful congratulation from a digital owl. Duolingo is motivating because it gives users concrete, achievable goals and allows users to track progress towards those goals in real time.
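
The structure is simple enough to sketch in code. Here is a minimal illustration, in Python, of the three game-like elements at work (an attainable goal, a rule, and real-time feedback); the names, thresholds, and reward message are invented for the example, not drawn from Duolingo’s actual code:

    # Hypothetical sketch of a gamified vocabulary drill.

    GOAL = 10  # the attainable goal: ten correct answers in a row

    def run_drill(questions, check_answer):
        streak = 0
        for question in questions:
            if check_answer(question):
                streak += 1
                # Real-time feedback: a visible progress bar.
                print(f"Progress: [{'#' * streak}{'.' * (GOAL - streak)}]")
            else:
                streak = 0  # the rule: one miss resets all progress
                print("Streak lost! Starting over.")
            if streak >= GOAL:
                print("¡Felicidades! Goal achieved.")  # the reward cue
                return True
        return False

The drilling itself is ordinary practice; what gamifies it is the visible progress, the reset rule, and the reward cue at the end.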

Gamification is not limited to learning programs. Thanks to advocates who tout the motivational power of games, increasingly large portions of our lives are becoming gamified, from online discourse to the workplace to dating.

As with most powerful tools, we should be mindful about how we allow gamification to infiltrate our lives. I will mention three potential downsides.

One issue is that particular gamification elements can function to directly undermine the original purpose of an activity. An example is the Snapstreak feature on Snapchat. Snapchat is a gamified application that enables users to share (often fun) photographs with friends. While gamification on Snapchat generally enhances the fun of the application, certain gamification elements, such as Snapstreaks, tend to do the opposite. Snapstreaks are visible records, accompanied by an emoji, of how many days in a row two users have exchanged photographs. Many users feel compelled to maintain Snapstreaks even when they don’t have any interesting content to share. To achieve this, users laboriously send meaningless content (e.g., a completely black photograph) to all those with whom they have existing Snapstreaks, day after day. The Snapstreak feature has, for users like this, transformed Snapchat into a chore. This benefits the company that owns Snapchat by increasing user engagement. But it undermines the fun.
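
The mechanic is precise enough to state in a few lines. Here is a minimal sketch of streak bookkeeping, in Python, with invented names (an assumption about the general shape of such a feature, not Snapchat’s actual implementation):

    # Hypothetical sketch of Snapstreak-style bookkeeping.

    from datetime import date, timedelta

    def update_streak(streak_days, last_exchange, today=None):
        """Return the new streak count given the date of the last exchange."""
        today = today or date.today()
        if today - last_exchange <= timedelta(days=1):
            return streak_days + 1  # exchanged in time: the streak grows
        return 0                    # one missed day erases the whole record

The asymmetry is the design’s point: a streak built up over months can vanish after a single missed day, which is exactly the pressure that turns photo-sharing into a chore.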

Relatedly, sometimes an entire gamification structure threatens to erode the quality of an activity by changing the goals or values pursued in an activity. For example, some have argued that the gamification of discourse on Twitter undermines the quality of that discourse by altering people’s conversational aims. Healthy public discourse in a liberal society will include diverse interlocutors with diverse conversational aims such as pursuing truth, persuading others, and promoting empathy. This motivational diversity is good because it fosters diverse conversational approaches and content. (By analogy, think about the difference between, on the one hand, the conversation you might find at a party with people from many different backgrounds who have many different interests and, on the other hand, the one-dimensional conversation you might find at a party where everyone wants to talk about a job they all share). Yet Twitter and similar platforms turn discourse into something like a game, where the goal is to accumulate as many Likes, Followers, and Retweets as possible. As more people adopt this gamified aim as their primary conversational aim, the discursive community becomes increasingly motivationally homogeneous, and consequently the discourse becomes less dynamic. This is especially so given that getting Likes and so forth is a relatively simple conversational aim, which is often best achieved by making a contribution that immediately appeals to the lowest common denominator. Thus, gamifying discourse can reduce its quality. And more generally, gamification of an activity can undermine its value.

Third, some worry that gamification designed to improve our lives can sometimes actually inhibit our flourishing. Many gamification applications, such as Habitify and Nike Run Club, promise to help users develop new activities, habits, and skills. For example, Nike Run Club motivates users to become better runners. The application tracks users across various metrics such as distance and speed. Users can win virtual trophies, compete with other users, and so forth. These gamification mechanisms motivate users to develop new running habits. Plausibly, though, human flourishing is not just a matter of performing worthwhile activities. It also requires that one is motivated to perform those activities for the right sorts of reasons and that these activities are an expression of worthwhile character traits like perseverance. Applications like Nike Run Club invite users to think about worthwhile activities and good habits as a means of checking externally-imposed boxes. Yet intuitively this is a suboptimal motivation. Someone who wakes up before dawn to go on a run primarily because they reflectively endorse running as a worthwhile activity and have the willpower to act on their considered judgment is more closely approximating an ideal of human flourishing than someone who does the same thing primarily because they want to obtain a badge produced by the Nike marketing department. The underlying thought is that we should be intentional not just about what sort of life we want to live but also how we go about creating that life. The easiest way to develop an activity, habit, or skill is not always the best way if we want to live autonomously and excellently.

These are by no means the only worries about gamification, but they are sufficient to establish the point that gamification is not always and unequivocally good.

The upshot, I think, is that we should be thoughtful about when and how we allow our lives to be gamified in order to ensure that gamification serves rather than undermines our interests. When we encounter gamification, we might ask ourselves the following questions:

    1. Is getting caught up in these gamified aims consistent with the value or point of the relevant activity?
    2. Does getting caught up in this form of gamification change me in desirable or undesirable ways?

Let’s apply these questions to Tinder as a test case.

Tinder is a dating application that matches users who signal mutual interest in one another. Users create a profile that includes a picture and a short autobiographical blurb. Users are then presented with profiles of other users and have the option of either signaling interest (by swiping right) or lack thereof (by swiping left). Users who signal mutual interest have the opportunity to chat directly through the application.

Tinder invites users to think of the dating process as a game where the goals include evaluating others and accumulating as many matches (or right swipes) as possible. This is by design.

“We always saw Tinder, the interface, as a game,” Tinder’s co-founder, Sean Rad, said in a 2014 Time interview. “Nobody joins Tinder because they’re looking for something,” he explained. “They join because they want to have fun. It doesn’t even matter if you match because swiping is so fun.”

The tendency to think of dating as a game is not new (think about the term “scoring”). But Tinder changes the game since Tinder’s gamified goals can be achieved without meaningful human interaction. Does getting caught up in these aims undermine the activity of dating? Arguably it does, if the point of dating is to engage in meaningful human interaction of one kind or another. Does getting caught up in Tinder’s gamification change users in desirable or undesirable ways? Well, that depends on the user. But someone who is motivated to spend hours a day thumbing through superficial dating profiles is probably not in this respect approximating an ideal of human flourishing. Yet this is a tendency that Tinder encourages.

There is a real worry that when we ask the above questions (and others like them), we will discover that many gamification systems that appear to benefit us actually work against our interests. This is why it pays to be mindful about how gamification is applied.

Is It Time to Nationalize YouTube and Facebook?

image collage of social media signs and symbols

Social media presents several moral challenges to contemporary society on issues ranging from privacy to the manipulation of public opinion via adaptive recommendation algorithms. One major ethical concern with social media is its addictive tendencies. For example, Frances Haugen, the whistleblower from Facebook, has warned about the addictive possibilities of the metaverse. Social media companies design their products to be addictive because their business model is based on an attention economy. Governments have struggled with how to respond to the dangers that social media creates, considering measures such as independent oversight bodies and new privacy regulations to limit its power. But does the solution to this problem require changing the business model?

Social media companies like Facebook, Twitter, YouTube, and Instagram profit from an attention economy. This means that the primary product of social media companies is the attention of the people using their service, which these companies can leverage to make money from advertisers. As Vikram Bhargava and Manuel Velasquez explain, because advertisers represent the real customers, corporations are free to be more indifferent to their users’ interests. What many of us fail to realize is that,

“built into the business model of social media is a strong incentive to keep users online for prolonged periods of time, even though this means that many of them will go on to develop addictions…the companies do not care whether it is better or worse for the user because the user does not matter; the user’s interests do not figure into the social media company’s decision making.”

As a result of this business model, social media is often designed with persuasive technology mechanisms. Intermittent variable rewards, nudging, and the erosion of natural stopping cues help create a kind of slot-machine effect, and the use of adaptive algorithms that take in user data to customize the user experience only reinforces this. As a result, many experts have increasingly recognized social media addiction as a problem. A 2011 survey found that 59% of respondents felt they were addicted to social media. As Bhargava and Velasquez report, social media addiction mirrors many of the behaviors associated with substance addiction, and neuroimaging studies show that the same areas of the brain are active as in substance addiction. It is also well known that the introduction of social media has been followed by a marked increase in teenage suicide, a potential consequence of this addiction.

But is there a way to mitigate the harmful effects of social media addiction? Bhargava and Velasquez suggest that things like addiction warnings, or prompts that make platforms easier to quit, can be important steps. Many have argued that breaking up social media companies like Facebook is necessary because they function like monopolies. However, it is worth considering that breaking up such businesses to increase competition in a field centered on the same business model may not help. If anything, greater competition in the marketplace may only yield new and “innovative” ways to keep people hooked. If the root of the problem is the business model, perhaps it is the business model that should be changed.

For example, since in an attention economy business model users of social media are not the customers, one way to make social media companies less incentivized to addict their users is to make them customers. Should social media companies using adaptive algorithms be forced to switch to a subscription-based business model? If customers paid for Facebook directly, Facebook would still have an incentive to provide a good experience for users (now being their customers), but it would have less incentive to focus its efforts on monopolizing users’ attention. Bhargava and Velasquez, for example, note that on a subscription streaming platform like Netflix, it is immaterial to the company how much users watch; “making a platform addictive is not an essential feature of the subscription-based service business model.”

But there are problems with this approach as well. As I have described previously, social media companies like Meta and Google have significant abilities to control knowledge production and knowledge communication. Even with a subscription model, the ability of social media companies to manipulate public opinion would still be present. Nor would it necessarily solve problems relating to echo chambers and filter bubbles. It may also mean that the poorest members of society would be unable to afford social media, essentially excluding whole socioeconomic groups from the platform. Is there another way to change the business model and avoid these problems?

In the early 20th century, amid the rise of new mass media and its advertising, many believed that this new technology would be a threat to democracy. The solution was public broadcasting, such as PBS, the BBC, and the CBC. Should a 21st-century solution to the problem of social media be similar? Should there be a national YouTube or a national Facebook? Certainly, such platforms wouldn’t need to be based on an attention economy; they would not be designed to capture as much of their users’ attention as possible. Instead, they could be made available for all citizens to contribute to for free, if they wish, without a subscription.

Such a platform would not only give the public greater control over how its algorithms operate, but also greater control over privacy settings. The platform could also be designed to strengthen democracy. Instead of having a corporation like Google determine the results of your video or news search, for instance, the public itself would have a greater say about what news and information is most relevant. It could also bolster democracy by ensuring that recommendation algorithms do not create echo chambers; users could be exposed to a diversity of posts or videos that don’t necessarily reflect their own political views.

Of course, such a proposal carries problems as well. The cost might be significant; however, a service that replicates the positive social benefits without the “innovative” and expensive process of creating addictive algorithms may partially offset that. Also, depending on the nation, such a service could be subject to abuse. Just as there is a difference between public broadcasting and state-run media (where the government has editorial control), the service would lose its purpose if all content on the platform were controlled directly by the government. Something more independent would be required.

However, another significant minefield for such a project would be agreeing on community standards for content. Obviously, the point would be not to allow the platform to become a breeding ground for misinformation, and so clear standards would be necessary. But in the greater democratic interest of breaking free from our echo chambers, the public would also have to agree to accept that others may post, and that they themselves may see, content they consider offensive. We need to be exposed to views we don’t like. In a post-pandemic world, this is a larger public conversation that needs to happen, regardless of how we choose to regulate social media.

The Ethics of Protest Trolling

image of repeating error windows

There is a new Trump-helmed social media site being developed, and it’s been getting a lot of attention from the media. Called “Truth Social,” the site and associated app initially went up for only a few hours before being taken offline due to trolling. It turns out the site’s security was not exactly top-of-the-line: users were able to claim handles that one would think had been reserved – including “donaldjtrump” and “mikepence” – and then used their new accounts to post a variety of images that few people would want associated with their name.

This isn’t the first time a far-right social media site has been targeted by internet pranksters. Upon its release, GETTR, a Twitter clone founded by one of Trump’s former spokespersons, was flooded with hentai and other forms of cartoon pornography. While a defining feature of far-right social media thus far has been a fervor for “free speech” and a rejection of “cancel culture,” it is clear that such sites do not want this particular kind of content clogging up their feeds.

Those familiar with the internet will recognize posting irrelevant, gross, and generally not-suitable-for-work images on sites in this manner as acts of trolling. So, here’s a question: is it morally permissible to troll?

The question quickly becomes complicated when we realize that “trolling” is not a well-defined act and potentially encompasses many different forms of behavior. There has been some philosophical work on the topic: for example, in the excellently titled “I Wrote this Paper for the Lulz: The Ethics of Internet Trolling,” philosopher Ralph DiFranco distinguishes five different forms of trolling.

There’s malicious trolling, which is intended to specifically harm a target, often through the use of offensive images or slurs. There’s also jocular trolling, actions that are not done out of any intention to harm, but rather to poke fun at someone in a typically lighthearted manner. While malicious trolling seems to be generally morally problematic, jocular trolling can certainly also risk crossing a moral line (e.g., when “it’s just a prank, bro!” videos go wrong).

There’s also state-sponsored trolling, which was a familiar point of discussion during the 2016 U.S. elections, wherein companies in Russia were accused of creating fake profiles and posts in order to support Trump’s campaign; concern trolling, wherein someone feigns sympathy in an attempt to elicit a genuine response, which they then ridicule; and subcultural trolling, wherein someone again pretends to be authentically engaged, this time in a discussion or issue, in order to elicit genuine engagement from the target. Again, it’s easy to see how many of these kinds of acts can be morally troubling: intentionally interfering with elections and feigning sincerity to provoke someone else generally seem like the kinds of behaviors one ought not perform.

What about the kinds of acts we’re seeing being performed on Truth Social, and that we’ve seen on other far-right social media apps like GETTR? They seem to be a form of trolling, but do they fall into any of the above categories? And what should we think about their moral status?

As we saw above, trolling captures a wide variety of phenomena, and not all of them have been fully articulated. I think that the kind of trolling I’m focusing on here – i.e., that which is involved in snatching up high-profile usernames and clogging up feeds with irrelevant images – doesn’t neatly fit into any of the above categories. Instead, let’s call it something else: protest trolling.

Protest trolling has a lot of the hallmarks of other forms of trolling – it often involves acts that are meant to distract a particular target or targets, and involves what the troll finds funny (e.g., inappropriate pictures of Sonic the Hedgehog). Unlike other forms of trolling, however, it is not necessarily done in “good fun,” nor is it necessarily meant to be malicious. Instead, it’s meant to express one’s principled disagreement with a target, be it an individual, group, or platform.

Compare, for example, a protest of a new university policy that involves a student sit-in. A group of students will coordinate their efforts to disrupt the activities of those in charge, an act that expresses their disagreement with the institution, governance, and/or authority figure. The act itself is intentionally disruptive, but is not itself motivated by malice: they are not acting in this way because they want others to be harmed, even though some harm may come about as a result.

While the analogy to the case of online trolling is imperfect, there are, I think, some important similarities between a student sit-in and the flooding of right-wing social media with irrelevant content. Both are primarily meant to disrupt, without specifically intending harm, and both are directed towards a perceived threat to one’s core values. For instance, we have seen how right-wing media has been implicated in violence, both in the form of violent acts and in attacks directed at members of marginalized groups. One might thereby be concerned that a whole social network dedicated to the expression of such views could result in similar harms, and is thus worth protesting.

Of course, in the case of online trolling there may be other intentions at play: for example, the choice of material that’s been used to disrupt these services is clearly meant to shock, gross-out, and potentially even offend its core users. Furthermore, not every such action will have principled intentions: some will simply want to jump on the bandwagon because it seems fun, as opposed to actually expressing a principled disagreement.

There are, then, many tangled issues surrounding the intentions and execution of different forms of protest trolling. However, just as many cases of real-life protesting are disruptive without being unethical, so, too, may cases of protest trolling be potentially morally unproblematic.

Trump v. Facebook, and the Future of Free Speech

photograph of trump speech playing on phone with Trump's Twitter page displayed in the background

On July 7th, former President Donald Trump announced his intention to sue Facebook, Twitter, and Google for banning him from posting on their platforms. Facebook initially banned Donald Trump following the January 6th insurrection, and Twitter and Google soon followed suit. Trump’s ban poses not only legal questions concerning the First Amendment, but also moral questions concerning whether social media companies owe a duty to guarantee free speech.

Does Trump have any moral standing when it comes to his ban from Facebook, Twitter, and Google? How can we balance the value of free expression with the rights of social media companies to regulate their platforms?

After the events of January 6th, Trump was immediately banned from social media platforms. In its initial ban, the CEO of Facebook, Mark Zuckerberg, offered a brief justification: “We believe the risks of allowing the President to continue to use our service during this period are too great.” Following Trump’s exit from office, Facebook decided to extend Trump’s ban to two years. Twitter opted for a permanent ban, and YouTube has banned him indefinitely.

Though this came as a shock to many, some argued that Trump’s ban should have come much sooner. Throughout his presidency, Trump regularly used social media to communicate with his base, at times spreading false information. While some found this communication style unpresidential, it arguably brought the Office of the President closer to the American public than ever before. Trump’s use of Twitter engaged citizens who might not have otherwise engaged with politics and even reached many who did not follow him. Though there is value in allowing the president to authentically communicate with the American people, Trump’s use of the social media space has been declared unethical by many; he consistently used these communiques to spread falsehoods, issue personal attacks, campaign, and fund-raise.

But regardless of the merits of Trump’s lawsuit, it raises important questions regarding the role that social media platforms play in modern society. The First Amendment, and its protections regarding free speech, only apply to federal government regulation of speech (and to state regulation of speech, as incorporated by the 14th Amendment). This protection has generally not extended to private businesses or individuals who are not directly funded or affiliated with the government. General forums, however, such as the internet, have been considered a “free speech zone.” While located on the internet, social media companies have not been granted a similar “free speech zone” status. The Supreme Court has acknowledged that the “vast democratic forums of the Internet” serve an important function in the exchange of views, but it has refused to extend the responsibility to protect free speech beyond state actors, or those performing traditional and exclusive government functions. The definition of state actors is nebulous, but the Supreme Court has drawn hard lines, recently holding that private entities which provide publicly accessible forums are not inherently performing state actions. Recognizing the limits of the First Amendment, Trump has attempted to bridge the gap between private and state action in his complaint, arguing that Facebook, Twitter, and Google censored his speech due to “coercive pressure from the government” and therefore their “activities amount to state action.”

Though this argument may be somewhat of a stretch legally, it is worth considering whether or not social media platforms play an important enough role in our lives to consider them responsible for providing an unregulated forum for speech. Social media has become such a persistent and necessary feature of our lives that Supreme Court Justice Clarence Thomas has argued that they should be considered “common carriers” and subject to heightened regulation in a similar manner to planes, telephones, and other public accommodations. And perhaps Justice Thomas has a point. About 70% of Americans hold an active social media account and more than half of Americans rely upon social media for news. With an increasing percentage of society not only using social media, but relying upon it, perhaps social media companies would be better treated as providers of public accommodations rather than private corporations with the right to act as gatekeepers to their services.

Despite Americans’ growing dependence on social media, some have argued that viewing social media as a public service is ill-advised. In an article in the National Review, Jessica Melugin argues that there is not a strong legal or practical basis for considering social media entities common carriers. First, Melugin argues that exclusion is central to the business model of social media companies, which generate their revenue by choosing which advertisements to feature. Second, forcing social media companies to allow any and all speech to be published on their platforms may be more akin to compelling speech than to preventing its suppression. Lastly, social media companies, unlike other common carriers, face consistent market competition. Though Facebook, Instagram, and Twitter appear to have taken over for now, companies such as Snapchat and TikTok represent growing and consistent competition.

Another consideration which weighs against applying First Amendment duties to social media companies is the widespread danger of propaganda and misinformation made possible by their algorithmic approach to boosting content. Any person can post information, whether true or false, which has the potential to reach millions of people. Though an increasing number of Americans rely on social media for news, studies have found that those who do so tend to be less informed and more exposed to conspiracies. Extremists have also found a safe haven on social media platforms to connect and plan terrorist acts. With these considerations in mind, allowing social media companies to limit the content on their platforms may be justified in combating the harmful tendencies of an ill-informed and conspiracy-laden public, and perhaps even in preventing violent attacks.

Despite the pertinent moral questions posed by Trump’s lawsuit, he is likely to lose. Legal experts have argued that Trump’s suit “has almost no chance of success.” However, the legal standing of Trump’s claims does not necessarily dictate their morality, which is equally worthy of consideration. Though Trump’s lawsuit may fail, the role that social media companies play in the regulation of speech and information will only continue to grow.

“Fake News” Is Not Dangerously Overblown

image of glitched "FAKE NEWS" title accompanied by bits of computer code

In a recent article here at The Prindle Post, Jimmy Alfonso Licon argues that the problem of “fake news” might be less serious than the hype surrounding it suggests. By pointing to several recent studies, Licon highlights that concerns about social standing actually prevent a surprisingly large percentage of people from sharing fake news stories on social media; as he says, “people have strong incentives to avoid sharing fake news when their reputations are at stake.” Instead, it looks like many folks who share fake news do so because of pre-existing partisan biases (not necessarily because of their gullibility about or ignorance of the facts). If this is true, then calls to regulate speech online (or elsewhere) in an attempt to mitigate the spread of fake news might end up doing more harm than good (insofar as they unduly censor otherwise free speech).

To be clear: despite the “clickbaity” title of this present article, my goal here is not to argue with Licon’s main point; the empirical evidence is indeed consistently suggesting that fake news spreads online not simply because individual users are always fooled into believing a fake story’s content, but rather because the fake story serves other purposes for its sharers, such as flattering partisan commitments or simply being entertaining.

On some level, this is frustratingly difficult to test: given the prevalence of expressive responding and other artifacts that can contaminate survey data, it is unclear how to interpret an affirmation of, say, the (demonstrably false) “immense crowd size” at Donald Trump’s presidential inauguration — does the subject genuinely believe that the pictures show a massive crowd or are they simply reporting this to the researcher as an expression of partisan allegiance? Moreover, a non-trivial amount of fake news (and, for that matter, real news) is spread by users who only read a story’s headline without clicking through to read the story itself. All of this, combined with additional concerns about the propagandistic politicization of the term ‘fake news,’ as when politicians invoke the concept to avoid responding to negative accusations against them, has led some researchers to argue that the “sloppy, arbitrary” nature of the term’s definition renders it effectively useless for careful analyses.

However, whereas Licon is concerned about potentially unwarranted threats to free speech online, I am concerned about what the reality of “fake news” tells us about the nature of online speech as a whole.

Suppose that we are having lunch and, during the natural flow of our conversation, I tell you a story about how my cat drank out of my coffee cup this morning; although I could communicate the details to you in various ways (depending on my story-telling ability), one upshot of this speech act would be to assert the following proposition:

1. My cat drank my coffee.

To assert something is to (as explained by Sanford Goldberg) “state, report, contend, or claim that such-and-such is the case. It is the act through which we tell others things, by which we inform an audience of this-or-that, or in which we vouch for something.” Were you to later learn that my cat did not drink my coffee, that I didn’t have any coffee to drink this morning, or that I don’t live with a cat, you would be well within your rights to think that something has gone wrong with my speech (most basically: I lied to you by asserting something that I knew to be false).

The kinds of conventions that govern our speech are sometimes described by philosophers of language as “norms” or “rules,” with a notable example being the knowledge norm of assertion. When I assert Proposition #1 (“My cat drank my coffee”), you can rightfully think that I’m representing myself as knowing the content of (1) — and since I can only know (as opposed to merely believe) something that is true, I furthermore am representing (1) as true when I assert it. This, then, is one of the problems with telling a lie: I’m violating how language is supposed to work when I tell you something false; I’m breaking the rules governing how assertion functions.

Now to add a wrinkle: what if, after hearing my story about my cat and coffee, you go and repeat the story to someone else? Assuming that you don’t pretend like the story happened to you personally, but you instead explain how (1) describes your friend (me) and you’re simply relaying the story as you heard it, then what you’re asserting might be something like:

2. My friend’s cat drank his coffee.

If this other person you’re speaking to later learns that I was lying about (1), that means that you’re wrong about (2), but it doesn’t clearly mean that you’re lying about (2) — you thought you knew that (2) was true (because you foolishly trusted me and my story-telling skills). Whereas I violated one or more norms of assertion by lying to you about (1), it’s not clear that you’ve violated those norms by asserting (2).

It’s also not clear how any of these norms might function when it comes to social media interaction and other online forms of communication.

Suppose that instead of speaking (1) in a conversation, I write about it in a tweet. And suppose that instead of asserting (2) to someone else, you simply retweet my initial post. While at first glance it might seem right to say that the basic norms of assertion still apply as before here, we’ve already seen (in the second paragraph of this article) that fake news spreads precisely because internet users seemingly aren’t as constrained in their digital speech acts. Maybe you retweet my story because you find it amusing (but don’t think it’s true) or because you believe that cat-related stories should be promoted online — we could imagine all sorts of possible reasons why you might retransmit the (false) information of (1) without believing that it’s true.

Some might point out that offline communication can often manifest some of these non-epistemic elements of communication, but C. Thi Nguyen shows how the mechanics of social media intentionally encourage this kind of behavior. Insofar as a platform like Twitter gamifies our communication by rewarding users with attention and acclaim (via tools such as “likes” and “follower counts”), it promotes information spreading online for many reasons beyond the basic knowledge norm of assertion. Similarly, Lucy McDonald argues that this gamification model (although good for maintaining a website’s user base) demonstrably harms the quality of the information shared throughout that platform; when people care more about attracting “likes” than communicating truth, digital speech can become severely epistemically problematic.

Now, add the concerns mentioned above (and by Licon) about fake news and it might be easy to see how those kinds of stories (and all of their partisan enticements) are particularly well-suited to spread through social media platforms (designed as they are to promote engagement, regardless of accuracy).

So, while Licon is right to be concerned about the potential over-policing of online speech by governments or corporations interested in shutting down fake news, it’s also the case that conversational norms (for both online and offline speech) are important features of how we communicate — the trick will be to find a way to manifest them consistently and to encourage others to do the same. (One promising element of a remedy — that does not approximate censorship — involves platforms like Twitter explicitly reminding or asking people to read articles before they share them; a growing body of evidence suggests that these kinds of “nudges” can help promote more epistemically desirable online norms of discourse in line with those well-developed in offline contexts.)

Ultimately, then, “fake news” seems less like a rare digital phenomenon and more like a conspicuous indicator of a wider problem for communication in the 21st century. Rather than being “dangerously overblown,” the problem of fake news is a proverbial canary in the coal mine for the epistemic ambiguities of online speech acts.

Do Hashtags Make Political Discourse Worse?

image of hashtags on sticky note

Are hashtags ruining political discourse? On Twitter, the hashtag now serves little technical purpose following upgrades to the site’s search function, yet hashtags remain incredibly popular in the political sphere as a means of bringing attention to, or giving one’s thoughts on, a subject of significant public interest. While some suggest that hashtags facilitate better public debate, there is good reason to believe that they also make political discussion less rational and more polarized. If expressing political ideas through hashtags makes political discourse worse off, then their continued use poses a morally significant decision for anyone choosing to use them.

Let’s begin by considering a hashtag like #defundthepolice. The past year has drawn more attention to the idea of police reform. While much of this attention centered on the reduction of police violence following the death of George Floyd, greater focus has also been attached to reconsidering the institutions and meaning of policing. For example, questions have been raised about whether police should be responding to mental health crises. In the case of Daniel Prude, for instance, police responded to someone in a state of delirium, and the encounter resulted in Prude’s death. Walter Wallace Jr., who suffered from bipolar disorder, also died at the hands of police responding to a domestic disturbance. 13-year-old Linden Cameron is now paralyzed after being shot by police responding to a call that a juvenile was having a “violent psychological issue.”

These cases remind us that police are called to handle a wide variety of social disturbances and that the best way to handle such cases might vary, which can render traditional policing a bad fit. It is worth noting that the concept of policing and the means by which public order has been maintained have changed over time. For example, not that long ago, it was a novel idea to have a police force that went out and actively investigated crimes, or to have uniformed officers use military-style training and tactics. And yet, many concepts of policing and their institutions come from a time before any significant advances were made in understanding and treating mental health, and before contemporary methods for social work were devised. The question, therefore, is whether the concept of policing, and the means employed, still best suit the problems for which they were created, as we understand those problems today.

In terms of the debate about reform, it is important to note that this isn’t an issue of big or small government. Police already respond to these calls anyway; taxpayers already pay for these services (and the lawsuits that follow from them). Deciding what kinds of problems we think police should respond to, whether social workers should be involved, and how to assess effectiveness are all matters for careful community debate, evidence-gathering, and experimentation. But does a slogan like “#defundthepolice” actually make this deliberative process worse?

Part of the problem with a phrase like “defund the police” is that it is incredibly vague: Does defund mean to reduce in budget? Does it mean elimination of police? Does it mean reform to traditional policing? But the problem is even more complicated. The broader question facing the public is about redefining what “policing” even is (and should be) given our current understanding of the contemporary problems it is meant to address. Given this, we may choose to redefine social work and policing such that they may blur. Police officers do not have to be the only people involved in ensuring safety and preventing disorder. So, what does “the police” mean? Are we talking about a specific police institution, such as in the case of Minneapolis where the police department itself may be eliminated? Or does “the police” refer to the entire concept of civil protection in general?

Part of the problem with the popularization of a phrase like #defundthepolice is that it not only makes political discourse more ambiguous, but that it also has the potential to limit our thinking. We do a disservice to ourselves and our concepts by presuming a narrow definition of things like “policing” in our shared vocabulary and collective imagination. And the ambiguity introduced with such slogans may make it more difficult to achieve political consensus.

A recent study by Eugenia Ha Rim Rho and Melissa Mazmanian of the University of California compared people who read news that includes hashtags with people who read news that does not. They found that when people were exposed to a politically charged hashtag like #BlackLivesMatter or #MeToo, they were more likely to use partisan language to discuss the news, and more likely to focus on assumed political biases rather than the social issue discussed in the news content. The study notes that “those shown news posts with hashtags exhibited more black-and-white and less emotionally temperate rhetorical patterns compared to the control group” and found that nuanced understandings of content get drowned out by the hashtag. Such findings reinforce the idea that hashtags are potentially harmful to political discourse.

On the other hand, some researchers have argued that political hashtags facilitate better and more meaningful conversations. Such hashtags are known to increase narrative agency by allowing for personal and communal storytelling. For example, the ability to share personal stories using the #MeToo hashtag contributes to political discourse on the issue of sexual harassment by offering perspective and by making the nature of the problem clearer. Hashtags can also make it easier to draw attention to important issues that might not receive attention otherwise, and allow for more opportunity for contribution by each participant in the discussion. As a recent paper argues, Twitter offers the potential for non-elite actors to engage in content creation and framing in communities that form in response to an event or issue.

Hashtags can also be used as part of heuristic processing, making it easier to understand topics and events. This can facilitate communication, organization, and cooperation in response to social issues. As a form of “hashtag activism,” the use of hashtags may make people more likely to be engaged. According to a 2014 study, almost 60% of Americans felt that tweeting or posting about something is an effective form of advocacy. As Bev Gooden, a creator of the hashtag #whyIstayed, notes, “I think the beauty of hashtag activism is that it creates an opportunity for sustained engagement.”

So while there is a lot of promise in the idea of hashtags as a tool to rally and inform, hashtags also have the potential to rally to the point of obstinacy and to misinform. Of course, it is unlikely that all hashtags will always have the same effects on political discourse, and so the choice of when, and how, to use them ultimately becomes an important individual moral question about how best to contribute to public democratic discourse, one that demands we carefully consider nuance and context.

QAnon and Two Johns

photograph of 'Q Army' sign displayed at political rally

In recent years, threats posed to and by free speech on the internet have grown larger and more concerning. Such problems as authoritarian regimes smothering dissent and misinformation campaigns targeting elections and public health have enjoyed quite a share of the limelight. Social media platforms have sought (and struggled) to address such challenges. Recently, a new and insidious threat posed by free speech has emerged: far-right conspiracy theories. The insurrection of January 6th unveiled the danger of speech promoting such beliefs, namely those embraced by the QAnon theory. The insurrection demonstrated that speech promoting the anti-government extremist theory can not only engender violence but existentially threaten the United States. Such speech threatens harm by manipulating individuals into believing in the necessity of violence to combat the schemes of a secretive, satanic elite. In the days following the insurrection, social media platforms rushed to combat this threat. Twitter alone removed more than 70,000 QAnon-focused accounts from its platform.

This bold but wise move was met with resistance, however. Right-wing media commentators were quick to decry this and similar policies as totalitarian censorship. Legal experts retorted that, as private entities, social media companies can restrict speech on their platform as they please. This is because the First Amendment to the U.S. Constitution protects citizens from legal restrictions on free speech, not the rules of private organizations. Such legal experts may be perfectly correct, and unequivocally siding with them might seem to offer a temptingly quick way to dismiss fanatic right-wing commentators. Nevertheless, caring only about government restrictions on speech seems perilous: such a stance neglects the great importance of social restrictions on speech.

The weight of social restrictions on speech (and behavior, more generally) is very real. Jean-Jacques Rousseau referred to such social restrictions as moral laws. He even seemed to regard this class of laws as more fundamental than the constitutional, civil, and criminal classes. Moral laws are inscribed in the very “hearts of the citizens” and include “morals, customs, and especially opinion.” Violations of these laws are typically penalized with either criticism or ostracism (or both). The emergence of “cancel culture” provides conspicuous examples (for better or worse) of this structure in action, from Gina Carano to John Schnatter. First, an individual (typically, a public figure) violates a moral law (frequently, customary prohibitions on racist speech). Then, the individual receives a punishment (often, in the form of damage to reputation and career). The prohibitions on QAnon-focused Twitter accounts are a form of ostracism: those promoting QAnon beliefs have been expelled from the Twitter community for transgressing moral laws, namely peace (by promoting violence) and honesty (by promoting misinformation). As Twitter has become an integral forum for political discourse (politicians, like former President Trump, heavily rely on the platform to both court popular support and bash their rivals), this Twitter expulsion amounts to marginalization within, or partial expulsion from, general public discourse. Upon considering this, the real restrictiveness of such prohibitions on speech should now be evident.

Once the real strength of social restrictions on speech is acknowledged, a certain tension becomes apparent: that between our liberties concerning speech and our liberties in regard to property. To elaborate, there appears to be a tension between the free speech of Twitter users and the property rights of Twitter shareholders (particularly, the right to set and enforce private restrictions on the speech shared over the platform they own). Efforts to balance the two can perhaps be aided by the wisdom of two great Johns: John Locke and Jean-Jacques Rousseau. Their writings offer some thought-provoking perspectives on the grounds and scope of each of the parties’ freedoms.

John Locke believed that rights are derived from nature. He thought they were contained in what he called the Law of Nature: “no one ought to harm another in [their] Life, Health, Liberty, or Possessions.” Certainly, this general rule implies the rights to free speech and property. Moreover, it follows that those particular rights extend only so far as they accord with that rule. Locke’s theory can thus affirm both natural rights and natural limits to them. Stated in Lockean terms, then, the now-removed QAnon accounts apparently promoted speech which transgressed natural limits on the right to free speech (by promoting violence).

Unlike Locke, Jean-Jacques Rousseau held that rights are derived from social agreement, not nature. He held that this social agreement takes the form of continuous negotiation by all members of the “body politic”: manifold “individual wills” are boiled down into an all-binding “general will.” On this view, the rights to free speech and property extend only so far as social agreement allows. Rousseau’s theory can thus recognize the value of including diverse individuals in social discourse while also recognizing the validity of socially-established regulations on that discourse. Understood in this perspective, Twitter expelled the QAnon accounts for violating regulations on social discourse (namely, by supporting violence and thus threatening the process of discourse itself).

Locke’s and Rousseau’s perspectives can provide a useful guide to assessing the issues related to free speech and the internet. Each perspective offers a framework which seems reasonable and yet is opposed to the other. Considering both, then, should allow for multi-sided and nuanced discussion. Employing these two frameworks (and other conceivable ones), as well as considering the opinions of more recent thinkers, can potentially enrich public discourse surrounding free speech and the internet.

In the Limelight: Ethics for Journalists as Public Figures

photograph of news camera recording press conference

Journalistic ethics are the evolving standards that dictate the responsibilities reporters have to the public. As members of the press, news writers play an important role in the accessibility of information, and unethical journalistic practices can have a detrimental impact on the knowledgeability of the population. Developing technology is a major factor in changes to journalism and the way journalists navigate ethical dilemmas. Both the field of journalism, and its ethics, have been revolutionized by the internet.

Increased access to social media and other public platforms of self-expression has expanded the role of journalists as public figures. The majority of journalistic ethical concerns focus on journalists’ actions in the scope of their work. But as the idea of privacy changes and more people feel comfortable sharing their lives online, journalists’ actions outside of their work come further under scrutiny. Increasingly, questions of ethics in journalism include journalists’ non-professional lives. What responsibilities do journalists have as public-facing individuals?

As a student of journalism, I am all too aware that there is no common consensus on the issue. At the publication I write for, staff members are restricted from participating in protests for the duration of their employment. In a seminar class, a professional journalist discussed workplace moratoriums they’d encountered on publicly stating political leanings and one memorable debate about whether or not it was ethical for journalists to vote — especially in primaries, on the off-chance that their vote or party affiliation could become public. Each of these scenarios stems from a common fear that a journalist will become untrustworthy to their readership due to their actions outside of their work. With less than half the American public professing trust in the media, according to Gallup polls, journalists are facing intense pressure to prove themselves worthy of trust.

Journalists have a duty to be as unbiased as possible in their reporting — this is a well-established standard of journalism, promoted by groups like the Society of Professional Journalists (SPJ). How exactly they accomplish that is changing in the face of new technologies like social media. Should journalists avoid publicizing their personal actions and opinions, opting out of any personal social media? Or should they restrict those actions and opinions entirely to avoid any risk of them becoming public? Where do we draw the lines?

The underlying assumption here is that combating biased reporting comes down to the personal responsibility of journalists to either minimize their own biases or conceal them. At least a part of this assumption is flawed. People are inherently biased; a person cannot be completely impartial. Anyone who attempts to pretend otherwise actually runs a greater risk of being swayed by these biases because they become blind to them. The ethics code of the SPJ advises journalists to “avoid conflicts of interest, real or perceived. Disclose unavoidable conflicts.” Although this was initially written to apply to journalists’ professional lives, I believe that short second sentence is a piece of the solution. “Disclose unavoidable conflicts.” More effective than hiding biases is being clear about them. Journalists should be open about any connections or political leanings that intersect with their field. Doing so provides the public with all the information and the opportunity to judge the issues for themselves.

I don’t mean to say that journalists should be required to make parts of their private lives public if they don’t intersect with their work. However, they should not be asked to hide them either. Although most arguments don’t explicitly suggest journalists hide their biases, they either suggest journalists avoid public action that could reveal a bias or avoid any connection that could result in a bias — an entirely unrealistic and harmful expectation. Expecting journalists to either pretend to be bias-free or to isolate themselves from the issues they cover as much as possible results in either dishonesty or “parachute journalism” — journalism in which reporters are thrust into situations they do not understand and don’t have the background to report on accurately. Fostering trust with readers and deserving that trust should not be accomplished by trying to turn people into something they simply cannot be, but by being honest about any potential biases and working to ensure the information is as accurate as possible regardless.

The divide between a so-called “public” or “professional” life and a “private” life is not always as clear as we might like, however. Whether they like it or not, journalists are at least semi-public figures, and many use social media to raise awareness for their work and the topics they cover, while also using social media in more traditional, personal ways. In these situations, it can become more difficult to draw a line between sharing personal thoughts and speaking as a professional.

In early 2020, New York Times columnist Ben Smith wrote a piece criticizing New Yorker writer Ronan Farrow for his journalism, including, in some cases, the accuracy or editorializing of tweets Farrow had posted. Despite my impression that Smith’s column was in itself inaccurate, poorly researched, and hypocritical, it raised important questions about the role of Twitter and other social media in reporting. A phrase I saw numerous times afterwards was “tweets are not journalism” — a criticism of the choice to place the same importance on, and apply the same journalistic standards to, Farrow’s Twitter account as his published work.

Social media makes it incredibly easy to share information, opinions, and ideas. It is far faster than many other traditional methods of publishing. It can be, and has been, a powerful tool for journalists to make corrections and updates in a timely manner and to make those corrections more likely to be viewed by people who have already read a story and might not check it again. If a journalist intends them to be, tweets can, in fact, be journalism.

Which brings us back to the issue of separating public from private. Labeling advocacy, commentary, and advertisement (and keeping them separated) is an essential part of ethical journalism. But which parts of these standards should be extrapolated to social media, and how? Many individuals will use separate accounts to make this distinction. Having a work account and a personal account, typically with stricter privacy settings, is not uncommon. It does, however, prevent many of the algorithmic tricks people may use to make their work accessible, and accessibility is an important part of journalism. Separating personal and public accounts effectively divides an individual’s audience and prevents journalists from forming more personal connections to their audience in order to publicize their work. It also sacrifices the engagement benefits of more frequent posting that come from using a single account. By being asked to abstain from a large part of what is now ordinary communication with the public, journalists are being asked to hinder their effectiveness.

Tagging systems within social media currently provide the best method for journalists to mark and categorize these differences, but there’s no “standard practice” amongst journalists on social media to help readers navigate these issues, and so long as debates about journalistic ethics outside of work focus on trying to restrict journalists from developing biases at all, it won’t become standard practice. Adapting to social media means shifting away from the idea that personal bias can be prevented by isolating individuals from controversial issues, and toward helping readers and journalists understand, acknowledge, and deconstruct biases in media for themselves by promoting transparency and conversation.

Trump and the Dangers of Social Media

photograph of President Trump's twitter bio displayed on tablet

In the era of Trump, social media has been both the medium through which political opinions are disseminated and a subject of political controversy itself. Every new incendiary tweet feeds into another circular discussion about the role sites like Twitter and Facebook should have in political discourse, and the recent attack on the U.S. Capitol by right-wing terrorists is no different. In what NPR described as “the most sweeping punishment any major social media company has ever taken against Trump,” Twitter has banned the president from using their platform. Not long before Twitter’s announcement, Facebook banned him as well, and now Parler, the conservative alternative to Twitter, has been removed from the app store by Apple.

While these companies are certainly justified in their desire to prevent further violence, is this all too little, too late? Much in the same way that members of the current administration have come under fire for resigning with only two weeks left in office, and not earlier, it seems that social media sites could have acted sooner to squash disinformation and radical coordination, potentially averting acts of domestic terror like this one.

At the same time, there isn’t a simple way to cleanse social media sites of white supremacist violence; white supremacy is insidious and often very difficult to detect through an algorithm. This places social media sites in an unwinnable situation: if you allow QAnon conspiracy theories to flourish unchecked, then you end up with a wide base of xenophobic militants with a deep hatred for the left. But if you force conspiracy theorists off your site, they either migrate to new, more accommodating platforms (like Parler), or resort to an ever-evolving lexicon of dog-whistles that are much harder to keep track of.

Furthermore, banning Trump supporters from social media sites only feeds into their imagined oppression; what they view as “censorship” (broad social condemnation for racist or simply untrue opinions) only serves as proof that their First Amendment rights are being trampled upon. This view, of course, ignores the fact that the First Amendment is something the government upholds, not private companies, which Trump-appointee Justice Kavanaugh affirmed in the Supreme Court in 2019. But much in the same way that the Confederacy’s romantic appeal relies on its defeat, right-wing pundits who are banned from tweeting might become martyrs for their base, adding more fuel to the fire of their cause. As David Graham points out, that process has already begun; insurrectionists are claiming the status of victims, and even Republican politicians who condemn the violence in one moment tacitly validate the rage of conspiracy theorists in another.

The ethical dilemma faced by social media sites at this watershed moment encompasses more than just politics. It also encompasses the idea of truth itself. As Andrew Marantz explained in The New Yorker,

“For more than five years now, a complacent chorus of politicians and talking heads has advised us to ignore Trump’s tweets. They were just words, after all. Twitter is not real life. Sticks and stones may break our bones, but Trump’s lies and insults and white-supremacist propaganda and snarling provocations would never hurt us.” But, Marantz goes on, “The words of a President matter. Trump’s tweets have always been consequential, just as all of our online excrescences are consequential—not because they are always noble or wise or true but for the opposite reason. What we say, online and offline, affects what we believe and what we do—in other words, who we are.”

We have to rise above our irony and detachment, and understand as a nation that language is not divorced from reality. Conspiracy theories, which depend in large part on language games and fantasy, must be addressed to prevent further violence, and only an openness to truth can help us move beyond them.

Come into My Parler

photograph of reflection of skyline on Chicago Bean

Efforts to curtail and limit the effect of disinformation reached a fever pitch in the run-up to the 2020 election for President of the United States. Prominent social media platforms, Facebook and Twitter, after long resistance to exerting significant top-down control of user-posted content, began actively combating misinformation. Depending on who you ask, this change of course either amounts to seeing reason or abandoning it. In the latter camp are those ditching Facebook and Twitter for relative newcomer, Parler.

Parler bills itself as a free speech platform, exerting top-down control only in response to criminal activity and spam. This nightwatchman approach to moderation makes clear the political orientation of Parler’s founders and those people who have dumped mainstream platforms and moved over to Parler. Libertarian political philosophy concerning the proper role of state power was famously described by American philosopher Robert Nozick as relegating the state to the role of nightwatchman: leaving citizens to do as they please and only intervening to sanction those who break the minimal rules that underpin fair and open dealing.

Those making the switch characterize Facebook and Twitter, on the other hand, as becoming increasingly tyrannical. Any attempt to curate and fact-check introduces bias, claims Parler co-founder John Matze, whereas Parler aims to be a “neutral platform,” according to co-founder Rebekah Mercer. This kind of political and ideological neutrality is a hallmark aspiration of libertarianism and classical liberalism.

However, Parler’s pretension became hypocrisy when it banned leftist parody accounts and pornography. This is neither surprising nor, on its own, bad. As some have pointed out, every social media site faces the same set of issues with content and largely responds to them in the same way. Still, Parler’s aspiration of libertarian neutrality when it comes to speech content makes its terms of service, which allow it to remove user content “at any time and for any reason or no reason,” and its policy of kicking users off the platform “even where the [terms of service] have been followed,” particularly obnoxious.

But suppose that Parler stuck to its professed principles. What would it mean to be politically or ideologically neutral, and why would fact-checking compromise it? A simple way of thinking about the matter is embodied by Parler’s espoused position toward speech content: no speech will be treated differently by those in power simply on the basis of its message, regardless of whether that message is Democratic or Republican, liberal or conservative, capitalist or socialist. Stepping from the merely political to the ideological, to remain neutral would be to think that no speech content was false simply on its face. Here is where the “problem” of fact-checking arises.

We live, so we keep being told, in a “post-truth” society. Whatever this exactly means, its practical import is that distinct groups of society disagree fundamentally both over their goals and how to achieve them, politically. The idea of fact-checking as a neutral arbiter between disagreeing parties breaks down in these situations because supposed facts will appear neutral only to parties who agree about how to see the world at a basic level. That is, the appearance of a fact-value distinction will evaporate. (The distinction between facts (i.e., how the world allegedly is without regard to any agents’ perceptions) and values (i.e., how the world ought to be according to a given agent’s goals/preferences) is argued by many to be untenable.)

In this atmosphere, fact-checking takes on the hue of a litmus test, examining statements for their ideological bona fides. When a person’s claim is fact-checked and found wanting, it will appear to them not that a disinterested judge cast a stoic gaze out onto the world to see whether it is as the person says; instead, the person will feel that the judge looked into their own heart and rejected the claim as undesirable. When people feel this way, they will not stick around and continue to engage. Instead, they’ll pack up and go where they think their claims will get “fair” treatment. None of this is to say that fact-checking is necessarily a futile or oppressive exercise. However, it is a reason not to treat it as a panacea for all disagreement.

What Would Kierkegaard Make of Twitter?

photograph of Twitter homepage on computer screen

In the weeks leading up to Election Day 2020, Twitter and other social media companies announced they would be voluntarily implementing new procedures to discourage the spread of misinformation across their platforms; on November 12th, Twitter indicated that it would maintain some of those procedures indefinitely, arguing that they were successful in slowing the spread of election misinformation. In general, the procedures in question are examples of “nudges” designed to subtly influence the user to think twice before spreading information further through the social network; dubbed “friction” by the social media industry, examples include labeling (and, in some cases, hiding) tweets containing misleading, disputed, or unverified claims, and double-prompting a user who attempts to share a link to an article that they have not opened. While the general effectiveness of social media friction remains unclear (although at least one study related to COVID-19 misinformation has shown promise), Twitter has argued that their recent policy changes have led to a 29% reduction in quote-tweeting (where a user simultaneously comments on and shares a tweet) and a 20% overall reduction in tweet-sharing, both of which have slowed the spread of misleading information.

We currently have no shortage of ethical questions arising from the murky waters of social networks like Twitter. From the viral spread of “fake news” and propaganda to the problems of epistemic bubbles and echo chambers to malicious agents spearheading disinformation campaigns to the fostering of violence-producing communities like QAnon and more, alerts about the risks posed by social media platforms abound (including here at The Prindle Post, such as Desdemona Lawrence’s article from August of 2018). Given the size of Twitter’s user base (it was the fourth-most-visited website by traffic in October 2020, with over 353 million users visiting the site over 6.1 billion times), even relatively uncommon problems could still manifest in significant numbers, and no clear solution has arisen for limiting the spread of falsehoods that would not also limit benign Twitter usage.

But is there such a thing as benign Twitter usage?

The early existentialist philosopher and theologian Søren Kierkegaard might think not. Writing from Denmark in the mid-1800s, Kierkegaard was exceedingly skeptical of the social movements of his day; as he explains in The Present Age: On the Death of Rebellion, “A revolutionary age is an age of action; ours is the age of advertisement and publicity. Nothing ever happens but there is immediate publicity everywhere.” Instead of living full, meaningful lives, Kierkegaard criticized his contemporaries for simply desiring to talk about things in ways that, ultimately, amounted to little more than gossip. Moreover, Kierkegaard saw how this would underlie a superficial love for showing off to “the Public” (the abstract collection of people made up of “individuals at the moments when they are nothing”); all this “talkativeness” would produce a constant “state of tension” that, in the end, “exhausts life itself.” Towards the end of his essay, Kierkegaard summarizes his criticism of his social environment by saying that “Everyone knows a great deal, we all know which way we ought to go and all the different ways we can go, but nobody is willing to move.”

This all probably sounds unsettlingly familiar to anyone with a Twitter account.

Instead of giving into the seductions and the talkativeness of the present age, Kierkegaard argues for the value of silence, saying that “only someone who knows how to remain essentially silent can really talk — and act essentially” (that is, act in a way that would give one’s life genuine meaning). Elsewhere, in the first Godly Discourse of The Lily of the Field and the Bird of the Air, Kierkegaard draws a lesson from birds and flowers about the value of quietly focusing on what genuinely matters. As a Christian theologian, Kierkegaard locates ultimate value in “the Kingdom of God” and argues that lilies and birds do not speak, but are simply present in the world in a way that mimics a humble, unassuming, simple presence before God. The earnestness or authenticity that comes from learning how to live in silence allows a person to avoid the distractions prevalent in the posturing of social games. “Out there with the lily and the bird,” Kierkegaard writes, “you perceive that you are before God, which most often is quite entirely forgotten in talking and conversing with other people.”

Indeed, the talkativeness and superficiality inherent to the operation of social media networks like Twitter would trouble Kierkegaard to no end, even before considering the myriad ways in which such networks can be abused. And, in a similar way, whatever we now consider to be of ultimate importance (be that Kierkegaard’s God or something else), the phenomenology of distraction away from its pursuit is no small thing. Twitter can (and should) continue to try and address its role in the spread of misinformation and the like, but no matter how much friction it creates for its users, it seemingly can’t promote contemplative silence: “talkativeness” is a necessary Twitter feature.

So, Kierkegaard would likely not be interested in the Twitter Bird much at all; instead, he would say, we should attend to the birds of the air and the lilies of the field so that we can learn how to silently begin experiencing life and other things that truly matter.

Misericordia and Trump’s Illness

photograph of screen displaying Trump's Twitter profile

Is it okay to feel joy or mirth at another person’s misfortune? In most cases, the answer is clearly ‘no.’  But what if that person is Donald Trump? If my Facebook feed is any indicator, many people are having such feelings and expressing them unapologetically. On one approach to normative ethics known as virtue ethics, the main question to ask about this is: what does this response tell us about our character? Is it compatible with good character for someone to express joy over Trump’s illness and possible demise?

For Aristotle, who is one of the originators of this approach to ethics, a virtue is a good quality of a person’s desires, emotions, and thoughts. A person has a virtue, an excellence of character, when their desires, emotions, and thinking reflect the value that the objects of these desires, emotions, and thoughts have in the context of a well-lived human life. If we are intemperate, we overvalue pleasures of eating, drinking, and sex relative to other goods such as knowledge and family; if we are cowardly, we over-value physical safety, placing it above friendship and community. Applying this framework to feeling joy over Trump’s illness, there is a question of whether we are appropriately reacting to that human being’s suffering and misfortune.

The question isn’t settled by the fact that in most cases we would condemn expressions of joy at a rival or opponent’s misfortune. Virtue ethicists favor taking context into account; it really is a matter of whether we are feeling appropriately toward this person in this context. In many cases in which we might feel Schadenfreude, we can recognize that the stakes of our disagreement or competition are simply not comparable to the value of life and freedom from suffering. If I am competing with another person for a job, say, his falling seriously ill and missing an important interview should not be an occasion for joy. After all, there are other jobs, presumably, but not another life for my rival. For that reason, to display joy at the misfortune reveals a flawed character.

Aristotle, it seems to me, did not quite have what it takes to capture this thought. Although he conceived of the virtues in a powerful way that many to this day take seriously, he did not have a clear label for a virtue that came to be prominent in the Christian tradition that followed him. Thomas Aquinas gives a privileged place to the virtue of charity. For him, this is a virtue that, at least in part, comes from God, a so-called ‘infused’ virtue. Our capacity to love God and our fellow human beings appropriately goes beyond our natural resources and requires an infusion of grace. But one aspect of charity seems not to require this infusion, and that is the virtue of mercy or misericordia: a virtue that disposes us to respond to the suffering of others with a sadness that motivates us to works of mercy, among which are enumerated visiting the sick and giving comfort to the afflicted. This is a virtue that stems from our human nature, which is susceptible to disease and injury, and we all have reason to want our disease and injury to be greeted with concern and care rather than indifference or mockery. It seems clear that in most cases, expressing joy at another’s sickness would be a clear indicator of lacking the virtue of mercy, a defect in our capacity to love our fellow human beings as they should be loved.

The case of Trump strikes me as more complex than the case of a rival for a job. After all, he has caused real suffering for many people, including thousands of children locked in detention centers. It seems to me that people inclined to feel joy at Trump’s suffering have felt enormous and, to my mind, appropriate anguish over the impact of Trump’s policies. Further, he has himself created the conditions that have led to the prevalence of the very illness that he has caught. Hence, his illness may seem a just comeuppance for someone who has at every turn shown himself to be self-serving, oblivious to the impact of his decisions on others, and therefore who himself clearly lacks the virtue of mercy.

And so, does the lack of mercy in someone, including someone whose decisions are so consequential for the well-being of others, justify joy at their suffering, or does that joy indicate a lack of mercy? It seems to me clearly the latter. It might seem as though I am responding appropriately to the goods at stake in feeling joy at Trump’s illness: I might say that the value of ending the suffering of children in detention centers is reflected in the joy I feel at the illness and possible disablement or death of the person who caused the children’s suffering. Clearly, it would be a joyous occasion if those detention centers were closed, but that isn’t what I am rejoicing over in joy over Trump’s illness. After all, there is no certainty that his demise will bring an end to those detention centers. And so, it is really a desire for revenge: anger and a sense of powerlessness over what he has done occasion the desire to harm the cause of my anger. It might seem, then, that anger is never appropriate, inasmuch as mercy is a virtue, or else that there is some inner conflict between the virtues. Yet, this need not be so. For Aquinas, there is appropriate hatred and anger, only it is not directed at the person. Instead, it is directed at acts: we can appropriately hate and feel anger at Trump’s acts and wish them to be counteracted or thwarted, but not in ways that conflict with the value of his life. It is, of course, understandable that these feelings get out of our control, all the more so the more immediately our lives have been touched by what Trump’s opponents take to be his unjust and self-serving acts. Anyone who has lost someone to COVID-19 in the United States can legitimately point to the President’s deeds as a contributing cause of their loved one’s suffering and death. It is difficult to contain our hatred and anger to the acts and not extend them to the person behind the acts. Still, we might wish we did not have such feelings, and recognize that they don’t reflect our deeply considered values. Such, I think, is the right stance to take on expressions of joy over Trump’s illness.

Parler and the Problems of a “Free Speech” Social Network

image of many blank speech bubbles forming a cloud

Twitter is something of a mess. It has been criticized by individuals from both ends of the political spectrum for either not doing enough to stem the tide of misinformation and hateful content, or for doing too much and restricting what some see as their right to free expression. Recently, some of those who have chastised the platform for restricting free speech have called for a move to a different social media platform, one where opinions – particularly conservative opinions – could be expressed without fear of censorship. A Twitter alternative that has seen substantial growth recently is called Parler: calling itself the “Free Speech Social Network,” it gained almost half a million users in a single week, partially because of a backlash to Twitter’s recent fact-checking of a tweet made by Donald Trump. Although the CEO of Parler stated that he wanted the platform to be a space in which anyone on the political spectrum could participate in discussions without fear of censorship, there is no question that it has become dominated by those on the political right.

It is perhaps easy to understand the appeal of such a platform: if one is worried about censorship, or if one wants to engage with those who have divergent political opinions, one might think that a forum with fewer restrictions on what can be expressed would be beneficial for productive debate. After all, some have expressed concern about online censorship, specifically in terms of what is seen as an overreactive “cancel culture,” in which individuals are punished (some say disproportionately) for expressing their opinions. For example, consider the following from a recent article in Harper’s Magazine, titled “A Letter on Justice and Open Debate”:

“The restriction of debate, whether by a repressive government or an intolerant society, invariably hurts those who lack power and makes everyone less capable of democratic participation. The way to defeat bad ideas is by exposure, argument, and persuasion, not by trying to silence or wish them away.”

So, what better way to defeat bad ideas than to provide a platform in which they can be brought out into the open, carefully considered, and argued away? Isn’t a “Free Speech Social Network” a good idea?

Not really. An assumption behind the argument in favor of a platform that allows uncensored expression of opinions is that, while the platform may see an increase in the number of hateful or uninformed views, the benefits of having those ideas in the open to analyze and argue against will outweigh the costs. Indeed, the hope is that a lack of censorship or fact-checking will make debate more productive, and that by allowing the expression of “bad ideas” we can, in fact, “defeat” them. In reality, the platform is awash with dangerous misinformation and conspiracy theories, and while contrarian views are occasionally presented, there is little in the way of productive debate to be found.

Here’s an example. Libertarian politician Ron Paul has over 400 thousand followers on Parler, and his videos from the “Ron Paul Institute for Peace and Prosperity” receive thousands of positive votes and comments. Many of these videos have recently expressed skepticism about the dangers of coronavirus: specifically, they call into question the efficacy of tests for the virus, claim that reports of numbers of cases have been inflated or fabricated, and argue that being made to wear facemasks is a violation of personal liberties. These views fall squarely into the camp of “bad ideas.” One might hope, though, that the community would respond with good reasons and rational debate.

Instead, we get a slew of even worse misinformation. For example, here is a representative sample of some recent comments on Paul’s video titled “Should We Trust The Covid Tests?”:

“My friends husband is world renown doctor. He is getting calls from doctors all over USA and World that tell him CV-19 Numbers are being forged.”

“Nurse all over are saying they are testing the same persons over and over and just building up the numbers not counting them as the same case, but seperate cases. Am against shut down period.”

“No. Plain and simple. COVID tests are increasingly being proven to be lies. Unless you believe the worthless MSM liberal sheep lie pushers.”

These kinds of comments are prevalent and, as can be seen, are not defeating bad ideas, but rather reinforcing them.

Herein lies the problem: productive debate will not just magically happen once we unleash all the bad ideas into a forum. While some may be examined and defeated, others will receive support and become stronger for having been given the room to grow. Without putting any kind of restriction on the expression of misleading and false information, we risk emboldening those looking to spread politically-motivated misinformation and conspiracy theories. The result is that these bad ideas become more difficult to defeat, not easier.

If one is concerned that potential censorship on social media networks like Twitter will stifle debate, what Parler has shown so far is that a “free speech” social network is good for little other than expressing views that one would be banned for expressing elsewhere. Contrary to Parler’s stated motivations and the concerns expressed in the Harper’s letter, mere exposure is not a panacea for the problem of the bad ideas being expressed on the internet.

Retweets, Endorsements, and Indirect Speech Acts

image of retweet icon

Over the weekend, President Trump engaged in a rare retraction, deleting a retweet of a video of pro-Trump protesters at a Florida retirement village. Midway through this video, a man in a golf cart sporting ‘Trump 2020’ and ‘America First’ placards raises his fist and clearly shouts ‘white power’ at a group of anti-Trump protesters. The retweet stayed up for around three hours on Saturday morning, before it was taken down after uproar. In subsequent statements, the White House press secretary Kayleigh McEnany has tried to maintain both that the 45th president of the United States watched the video before retweeting it, and that he nonetheless didn’t hear the slogan shouted in the middle of the video. We might find this a little difficult to believe, given his record of sharing white supremacist slogans and iconography.

Setting to one side the question of whether the president actually watched the video before sharing it, this example opens up a more general question: when should one be held responsible for one’s retweets? Is it possible to hide behind the defense that a retweet involves someone else speaking (and in this case making a white supremacist hand gesture), or does retweeting involve repeating what someone else has said, meaning that a retweeter can be held just as responsible as the original poster?

One way to make sense of our responsibilities for sharing other peoples’ words is to deny that there is an important distinction between tweeting and retweeting. On this view, when we share other people’s words, we make them our own, meaning that we put our credibility behind them, express belief in them, and take responsibility for them.

This view faces a number of problems.

The Oxford philosopher G.E. Moore observed that it is absurd to make a claim while denying that one believes that claim. The sentence ‘I went to the park yesterday, but I don’t believe that I did’ is perfectly grammatical, but it is a very strange thing to say. Explanations of so-called Moorean sentences differ, but almost everyone agrees that uttering a Moorean sentence is a strange thing to do. By contrast, it is perfectly possible to retweet an article with the comment that you don’t believe its headline claim. Here’s an example:

(To be clear, I don’t have any strong views about the number of bikes sold, and Cycling Weekly is a reputable source: this is just an example.) Relatedly, there is a whole genre of tweets in which a fact-checker retweets an article or picture, along with a claim that the article is false.

If retweeting were equivalent to tweeting, this genre of debunking tweet would involve making a claim and denying it. This wouldn’t just be absurd: it would be a flat-out contradiction.

Retweets that involve promises, requests, or questions similarly don’t behave like tweets. If you tweet a promise to your partner to clean your house every day in August, and I retweet it, I haven’t thereby promised to clean your house too!

These differences suggest that we ought to draw a pretty clear distinction between tweeting and retweeting.

A natural strategy in thinking about kinds of online communication is to look for features of offline communication that have similar features. There are two offline devices of communication that are good candidates for making sense of retweets: quotation and pointing.

In a recent paper, Neri Marsili explores the view that retweets function like quotation. This view takes the original format of retweets — a sentence prefaced by ‘RT’ — seriously and claims that retweeting is like putting quotation marks round a sentence and saying so-and-so said: […]. This view can deal with retweeting with a comment by treating it as a quotation embedded into a longer sentence. It is perfectly reasonable for you to say “Josh said that he went to the park yesterday, but I don’t believe that he did,” or “Josh said that he went to the park yesterday, but he didn’t.”

The problem with this view comes from the diversity of retweets. Besides retweets of sentences, we also find retweets of pictures, gifs, polls, and videos. Unlike sentences, gifs and the like aren’t the kinds of things that one can put in quotation marks, so this view can’t be correct.

An alternative view, suggested by Jessica Pepp, Eliot Michaelson, and Rachel Sterken (and ultimately endorsed by Marsili), treats retweeting as akin to pointing. Pointing is an extremely common and flexible referential device associated with words like ‘this’ and ‘that’. By itself, it can function as a device for directing attention. If we were on a walk together, I might stop and point to draw your attention to an interesting bird. We can also use it to make claims about the world (“that [points] is a very ugly chair”), to answer questions (asked “which student cheated on the test?”, one can simply point), and even to make commands (“give me that [points]!”). One piece of evidence for this view is the fact that it is extremely natural to use ‘this’ and ‘that’ with retweets; in fact some tweets are simply labelled with an imperious ‘THIS’.

The proposal is that retweets function like pointing, with the comments functioning like the sentence that refers to the object pointed towards. On this view, disbelieving and debunking retweets work a bit like the sentences “I don’t believe this [points]” and “this [points] is false” which are clearly reasonable sentences.

So far, we’ve got a bit clearer on how to think about what kind of communicative action retweeting is, but we haven’t yet addressed the issue of responsibility for retweeting. On the view under consideration, a plain retweet is purely referential; it’s like pointing to a bird whilst on a walk to draw others’ attention to it. Retweets with comments may clarify whether the speaker means to endorse the retweeted comment, but merely retweeting doesn’t clarify whether one has endorsed the claim.

Here we can bring in another piece of philosophical technology: indirect speech acts. Indirect speech acts involve performing one direct communicative act as a means to performing another, indirect act. For example, directly asking the question “do we have any beer in the fridge?” might involve indirectly making a request for you to get me a beer. Indirect speech acts are highly conventionalized and context-sensitive. If I’m clearly drawing up a shopping list, asking “do we have any beer in the fridge?” will probably function as a straight question (unless I have a habit of drinking a beer while writing lists).

The suggestion is that retweeting can involve two distinct speech acts: a direct referential act and an indirect act of endorsement. We might think about retweeting an article in order to endorse it as being a little bit like opening a newspaper on an interesting article and leaving it in the spot where your partner goes to have their morning coffee.

Frustratingly, this means that there is no easy answer to the question of what responsibility we bear for retweets. As we’ve just seen, indirect speech acts are highly context-dependent. There may be some internet communities where the conventions around retweeting involve strong endorsement. If I share an article about a new treatment for COVID-19 into a Facebook group for medical professionals, I might be endorsing both the headline claim of the article, and the supplementary claims it makes. By contrast, if I share an article about the performance benefits of a new Nike running shoe into a running group that habitually shares different studies, and where it is common knowledge that these studies are based on shaky science, I might merely be drawing attention to a new piece of information.

What happens when a communicative situation lacks clear norms about the significance of retweeting? Well, things get messy. One person might retweet a controversial article meaning to call attention to its argument, and be interpreted as endorsing it wholesale. Another person might share a picture of a protest meaning to endorse the cause of the protesters, and be interpreted as mocking or belittling them. In this kind of situation, context collapse is rife, and it becomes difficult to rely on shared presuppositions and conventions about communication.

In this defective speech situation, it is extremely difficult to make sense of which indirect speech acts we are performing. When we hold one another responsible for indirect speech acts associated with retweets, we are not implementing established norms for indirect communication; we are trying to create conventions for indirect communication based on sharing content online.

What kinds of conventions do we want to have? Regina Rini suggests that we ought to have a convention whereby retweeting conveys endorsement of the central claims in a retweeted article, accompanied by robust practices of holding users accountable for what they share. An alternative convention would be that retweeting doesn’t convey endorsement of any of the claims in an article (perhaps it merely conveys that something is interesting), in which case we could hold one another to much lower standards. A third possibility is to have a bundle of different conventions for different situations. Maybe the context of political speech involves endorsement of all claims and robust accountability, and contexts of private speech are much more relaxed. This conclusion is unsatisfying, but it does help clarify what is at stake in debates about retweets: we aren’t trying to describe independent and general conventions, but to create linguistic communities that can meet our intellectual needs.

Regulating Companies to Free People’s Speech

photograph of ipad with Trump's twitter profile sitting atop various blurred newspaper front pages featuring him

US President Donald Trump has signed an executive order instructing the Federal Communications Commission (FCC) to review legislation that shields social media platforms, like Twitter and Facebook, from liability for content posted by their users. This move appears to be a retaliatory gesture against Twitter for appending links to fact-checking sites to President Trump’s tweets claiming that mail-in ballots for the upcoming elections are vulnerable to fraud. This is the second time President Trump has drafted an executive order to review this kind of legislation. The first time was in August 2019. But this isn’t simply (another) Trump temper tantrum. Rather, it is the latest push in a concerted and bipartisan effort to bring so-called “Big Tech” companies to heel. These efforts in general face a long road of legal and philosophical challenges, and Trump’s effort in particular is likely doomed to failure.

The relevant legislation is the Telecommunications Act of 1996, and more specifically the “Good Samaritan” clause of Section 230 therein. This clause states that no “provider or user” of an “interactive computer service” can be sued for civil harm because of “good faith efforts” to restrict access to “objectionable” material posted by other users of their service. Other portions of Section 230 give providers and users of interactive computer services immunity against being sued for any civil harm caused by content posted by other users. Essentially, companies like Twitter, Facebook, and Google are given broad discretion to handle the content posted on their sites as they see fit.

Conservatives and Republicans complain that Big Tech companies harbor anti-conservative political bias, which they enforce through their platforms’ outsized influence on the dissemination of news and opinion. Texas’ Senator Ted Cruz has argued that Facebook has censored and suppressed conservative expression on its platform. President Trump’s frequent screeds against CNN, The Washington Post, and Twitter echo the same sentiment. In 2018, Google CEO Sundar Pichai was grilled by Republican lawmakers about alleged anti-conservative bias in his company’s handling of search results. In 2019, Missouri’s Senator Josh Hawley introduced a bill to amend Section 230 to remove its broad protections from liability. Hawley’s bill was specifically geared toward addressing alleged anti-conservative bias and offered reinstatement of Section 230’s protections only to companies that submitted themselves to an audit showing that they pursued “politically neutral” practices.

Liberal and Democratic concerns focus largely on the spread of harmful misinformation and disinformation by foreign actors aimed at influencing US elections. But there are two points of bipartisan agreement. The first concerns the scope and magnitude of Big Tech’s influence on the public exchange of information. Agreement here manifests itself in the criteria lawmakers have put forward as triggering expanded liability, namely size. Senator Josh Hawley’s 2019 bill targeted companies with “30 million monthly active users in the US, more than 300 million active monthly users worldwide, or more than $500 million in global annual revenue.” The second point of agreement concerns posted content related to human trafficking for sex work. Legislation amending the Telecommunications Act of 1996 to curtail such trafficking was passed with bipartisan support in 2018.

All of this bears on the right to freedom of speech, the interpretation of which is a perpetually contentious issue. Conservatives complaining about censorship and suppression allege that their freedom of speech is being infringed by the actions of Big Tech. However, a recent judicial decision made short work of one such complaint. The US Court of Appeals dismissed a suit claiming that Twitter, Facebook, Apple, and Google had conspired to suppress conservative speech. In their ruling, the judges noted that the First Amendment only protects free speech from interference by government action. This illustrates an important point about the nature of rights that is often missed.

Rights can be thought of as comprising three elements: a right-holder, an obligation, and an obliged party. With the right to freedom of speech, the right-holder is any legal person (which includes corporations), the obligation is to refrain from suppression/censorship, and the obliged party is the US government. Constitutional rights tend to follow this pattern. Other rights oblige parties other than just the government. A family can sue someone for killing their mother, or the state may prosecute on the murder victim’s behalf, because a right to life is both understood to exist at common law and is also enshrined by legislation in statutes against homicide. Here the right-holder is any individual person, the obligation is to refrain from killing the right-holder, and the obliged party is every other individual person. (Incidentally, both of these are examples of negative rights: rights which entitle their bearers to protection from specific harmful treatment. There are also positive rights, which entitle their bearers to the provision of specific goods, services, or treatment.)

As a matter of principle, there is no general legal basis for complaints against Big Tech for suppressing or censoring expression. These companies are not government actors and so are not obviously bound by the right to free speech as expressed in the First Amendment. The US Court of Appeals decision mentioned above says as much. Further, these companies are themselves legal persons with respect to political speech under US law. This was one of the bases of the US Supreme Court’s (in)famous Citizens United decision. Because corporations are people too, their political speech is protected. Twitter flagging President Trump’s posts with fact-checking tags is just Twitter exercising its own speech in competition with President Trump’s speech. This is the much-vaunted “marketplace of ideas” of which conservatives are usually enamored.

As a matter of law, Trump’s draft executive order is largely toothless because the text of Section 230’s Good Samaritan clause allows Big Tech companies to take “good faith” actions to “restrict access to … material” even when “such material is constitutionally protected.” Despite the opinion of legislators, there is not even a whiff of a political neutrality requirement. While such a requirement, the FCC’s Fairness Doctrine, once applied to broadcasters, it ceased being enforced in 1987 and was formally repealed in 2011. The push to stop enforcing it was led by Mark Fowler, US President Ronald Reagan’s FCC chairman, because the requirement was seen as violating First Amendment protections.

Government infringement on freedom of speech is subject in court to strict scrutiny. Part of the strict scrutiny standard is that the infringement promotes a “compelling government interest.” If the government exercises its authority over private individuals or groups under the auspices of protecting freedom of speech, what standards will it demand be met? The entire point of rights like the freedom of speech is to permit persons acting in a private capacity to determine things for themselves. As many critics and advocacy groups have pointed out, allowing the government to set these standards harms free speech rather than protects it. Legislators appear to remember this only when it suits their political needs.

Twitter Bots and Trust

photograph of chat bot figurine in front of computer

Twitter has once again been in the news lately, which you know can’t be a good thing. The platform recently made two sets of headlines: in the first, news broke that a number of Twitter accounts were making identical tweets in support of Mike Bloomberg and his presidential campaign, and in the second, reports came out of a significant number of bots making tweets denying the reality of human-made climate change.

While these incidents differ in a number of ways, they both illustrate one of the biggest problems with Twitter: given that we might not know anything about who is behind an actual tweet – whether it is a real person, a paid shill, or a bot – it is difficult to know who or what to trust. This is especially problematic when it comes to the kind of disinformation tweeted out by bots about issues like climate change, where it can be difficult to tell not only whether a tweet comes from a trustworthy source, but also whether its content makes any sense.

Here’s the worry: let’s say that I see a tweet declaring that “anthropogenic climate change will result in sea levels rising 26–55 cm in the 21st century with 67% confidence.” Not being a scientist myself, I don’t have a good sense of whether or not this is true. Furthermore, if I were to look into the matter, there’s a good chance that I wouldn’t be able to determine whether the relevant studies that were performed were good ones, whether the prediction models were accurate, etc. In other words, I don’t have much to go on when determining whether I should accept what is tweeted out at me.

This problem is an example of what epistemologists have referred to as the problem of expert testimony: if someone tells me something that I don’t know anything about, then it’s difficult for me, as a layperson, to be critical of what they’re telling me. After all, I’m not an expert, and I probably don’t have the time to go and do the research myself. Instead, I have to accept or reject the information on the basis of whether I think the person providing me with information is someone I should listen to. One of the problems with receiving such information over Twitter, then, is that it’s very easy to prey on that trust.

Consider, for example, a tweet from a climate-change denier bot that stated “Get real, CNN: ‘Climate Change’ dogma is religion, not science.” While this tweet does not provide any particular reason to think that climate science is “dogma” or “religion,” it can create doubt in other information from trustworthy sources. One of the co-authors of the bot study worries that these kinds of messages can also create an illusion of “a diversity of opinion,” with the result that people “will weaken their support for climate science.”

The problem with the pro-Bloomberg tweets is similar: without a way of determining whether a tweet is actually coming from a real person as opposed to a bot or a paid shill, messages that defend Bloomberg may be ones intended to create doubt in tweets that are critical of him. Of course, in Bloomberg’s case it was a relatively simple matter to determine that the messages were not, in fact, genuine expressions of support for the former mayor, as dozens of tweets were identical in content. But a competently run network of bots could potentially have a much greater impact.
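To see why such copy-paste campaigns are easy to spot, here is a minimal sketch of the kind of duplicate-content check a researcher might run. Everything in it (the function name flag_copypasta, the min_accounts threshold, the toy data) is a hypothetical illustration, not the method reporters actually used:

```python
from collections import defaultdict

def flag_copypasta(tweets, min_accounts=10):
    """Flag tweet texts posted verbatim by many distinct accounts,
    a crude signal of a coordinated bot or shill network."""
    groups = defaultdict(set)  # normalized text -> accounts that posted it
    for account, text in tweets:
        # Collapse case and whitespace so trivially altered copies still match
        normalized = " ".join(text.lower().split())
        groups[normalized].add(account)
    # Keep only texts shared word-for-word by at least `min_accounts` accounts
    return {text: accounts for text, accounts in groups.items()
            if len(accounts) >= min_accounts}

# Toy usage (hypothetical data): two accounts posting the same text
# trips the deliberately lowered threshold.
sample = [
    ("user_a", "Mike will get it done!"),
    ("user_b", "Mike will  get it done!"),
    ("user_c", "I have doubts about this plan."),
]
print(flag_copypasta(sample, min_accounts=2))
```

Real detection efforts typically weigh further signals as well (posting times, account ages, follower networks), but with dozens of verbatim duplicates, exact matching alone was enough to give the pro-Bloomberg tweets away.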

What should one do in this situation? As has been written about before here, it is always a good idea to be extra vigilant when it comes to getting one’s information from Twitter. But our epistemologist friends might be able to help us out with some more specific advice. When dealing with information that we can’t evaluate on the basis of content alone – say, because it’s about something that I don’t really know much about – we can look to some other evidence about the providers of that information in order to determine whether we should accept it.

For instance, philosopher Elizabeth Anderson has argued that there are generally three categories of evidence that we can appeal to when trying to decide whether we should accept some information: someone’s expertise (with factors including testifier credentials, and whether they have published and are recognized in their field), their honesty (including evidence about conflicts of interest, dishonesty and academic fraud, and misleading statements), and the extent to which they display epistemic responsibility (including evidence about the ways in which they have engaged with the scientific community in general and with their peers specifically). This kind of evidence isn’t a perfect indication of whether someone is trustworthy, and it might not be the easiest to find. When one is trying to get good information from an environment that is potentially infested with bots and other sources of misleading information, though, gathering as much evidence as one can about one’s source may be the most prudent thing to do.

Twitter and Disinformation

black and white photograph of carnival barker

At the recent CES event, Twitter’s director of product management Suzanne Xie announced some proposed changes to Twitter, which are slated to begin rolling out in a beta version this year. They represent fundamental and important changes to the way conversations happen on the platform, including the ability to direct tweets to limited groups of users (as opposed to posting globally) and, perhaps the biggest change, tweets that cannot be replied to (what Twitter is calling “statements”). Xie stated that the changes were meant to prevent what Twitter sees as unhealthy behavior by its users, including “getting ratio’d” (when one’s tweet receives a very high ratio of replies to likes, which is taken to represent general disapproval) and “getting dunked on” (a phenomenon in which the replies to one’s tweet are very critical, often going into detail about why the original poster was wrong).

If you have spent any amount of time on Twitter, you have no doubt come across the kind of toxic behavior the platform has become infamous for: rudeness, insults, and aggression are commonplace. So one might think that any change that reduces this toxicity should be welcomed.

The changes that Twitter is proposing, however, could have some seriously negative consequences, especially when it comes to the potential for spreading misinformation.

First things first: when people act aggressively and threateningly on Twitter, they are acting badly. While there are many parts of the internet that can seem like cesspools of vile opinions (various parts of YouTube, Facebook, and basically every comment section on any news website), Twitter has long had the reputation of being a place where nasty prejudices of any kind you can imagine run amok. Twitter itself has recognized that people who use the platform to express racist, sexist, homophobic, and transphobic views (among others) are a problem, and it has in the past taken some measures to curb this kind of behavior. It would be a good thing, then, if Twitter could take further steps to actually deter this kind of behavior.

The problem with allowing users the ability to tweet in such a way that the tweet cannot be replied to, though, is that the community can provide valuable information about the quality and trustworthiness of a tweet’s content. Consider first the phenomenon of “getting ratio’d”. While Twitter gives users the ability to endorse tweets – in the form of “hearts” – it does not have any explicit mechanism in place that allows users to show their disapproval: there is no non-heart equivalent. In the absence of a disapproval mechanism, Twitter users generally take a high ratio of replies-to-hearts to be an indication of disapproval (there are exceptions to this: when someone asks a question or seeks out advice, they may receive a lot of replies, resulting in a relatively high ratio that signals engagement as opposed to disapproval). Community signaling of disapproval can provide important information, especially when it concerns individuals in positions of power. For example, if a politician makes a false or spurious claim, their getting ratio’d can indicate to others that the information being presented should not be accepted uncritically. In the absence of such a mechanism, it is much more difficult to judge the quality of information.

In addition to the quantity of responses that contribute to a ratio, the content of those responses can also help others determine whether the content of a tweet should be accepted. Consider, for example, a world leader who does not believe that global warming is occurring, and who tweets as much to their many followers. If this tweet were merely made as a statement, without the possibility of a conversation occurring afterwards, those who believe its content would not be exposed to arguments that correctly show it to be false.

A concern with limiting the kinds of conversations that can occur on Twitter, then, is that preventing replies can seriously limit the ability of the community to indicate that one is spreading misinformation. This is especially worrisome, given recent studies that suggest that so-called “fake news” can spread very quickly on Twitter, and in some cases much more quickly than the truth.

At this point, before the changes have been implemented, it is unclear whether the benefits will outweigh the costs. And while one should always be cautious when getting information from Twitter, in the absence of any possibility for community feedback it is perhaps worth employing an even healthier skepticism in the future.

The Ethics of Brand Humanization

close-up photo of Wendy's logo

Brand humanization is becoming increasingly common in all arenas of advertising, but it’s perhaps most noticeable on social media. The strategy is exactly what it sounds like: corporations create social media accounts to interact directly with customers and try to make their brands seem as human and relatable as possible. It’s ultimately used to make companies more approachable, more customer-oriented. The official Twitter account for Wendy’s, for example, has amassed an audience of nearly three million followers. Much of its popularity has to do with its willingness to interact with customers, as when the account famously roasted other Twitter users, or when it posts memes to reach out to a younger demographic. The goal is to make the brand itself feel like a real person, to remind the consumer of the human being on the other end of the interaction.

In an article advising brands how to humanize themselves in the eyes of consumers, Meghan M. Biro, a marketing strategist and regular contributor to Forbes, describes how a presence on social media allows companies,

“to build emotional connections with their customers, to become a part of their lives, both in their homes and—done right—in their hearts. The heart of this is ongoing, online dialogue. Both parties benefit. The customer’s idiosyncratic (and sometimes maddening) needs and wants can be met. The company gets increased sales, of course, but also instant feedback on its products—every online chat has the potential to yield an actionable nugget of knowledge.”

The tactic of presenting ads as a mutually beneficial conversation between consumer and brand has become increasingly prominent in recent years. Studies have shown that millennials hate being advertised to, so companies are adopting strategies like the one Biro recommends to restructure the consumer-company interaction in a way that feels less manipulative. However, not everyone believes this new arrangement is truly mutually beneficial. In an article for The New Inquiry, Kate Losse takes a critical view of conversational advertising. “The corporation,” she notes, “while needing nothing emotional from us, still wants something: our attention, our loyalty, our love for its #brand, which it can by definition never return, either for us individually or for us as a class of persons. Corporations are not persons; they live above persons, with rights and profits superseding us.” On the subject of using memes as marketing, she says, “The most we can get from the brand is the minor personal branding thrill of retweeting a corporation’s particularly well-mixed on-meme tweet to show that we ‘get’ both the meme and the corporation’s remix of it.” In this sense, the back-and-forth conversational approach is much more one-sided than it seems.

There is, however, a difference between traditional marketing strategies and the tactics employed by social media accounts to gain popularity. If you follow Wendy’s on Twitter, it’s because you choose to follow them, because you want to see their content on your feed. For those who don’t want to be directly advertised to, it’s as simple as not following (or, if you want to be more thorough, blocking) corporate Twitter accounts. Responding to transparent advertising with a sarcastic meme, an increasingly common and often funny response to this kind of tweet, only gives the brand more exposure online, so the best strategy is not to engage at all.

Furthermore, a 2015 study on brand humanization conducted at the Vrije Universiteit Amsterdam adds another dimension to this issue. When studying the positive correlation between social media presence and a brand’s reputation, the researchers noted that “the fact that exposure to corporate social media activity is, to a large degree, self-chosen raises the question whether these results reflect a positive effect of exposure on brand attitudes, or rather the reverse causal effect–that consumers who already have positive brand attitudes are more likely to choose to expose themselves to selected brand content.” No extensive studies have been done on this yet, but further research might provide valuable insight into the actual impact of corporate Twitter accounts.

Using a Facebook page to take questions or criticism from consumers seems like a harmless and even productive approach to marketing through social media. Even a corporate Twitter account posting memes, while not as beneficial to the consumer as companies like to present it, is hardly unethical. But brand humanization can steer companies into murky moral waters when they try too hard to be relatable.

In December of 2018, the verified Twitter account for Steak-umm, an American frozen steak company, posted a tweet that produced significant backlash. The tweet reads, “why are so many young people flocking to brands on social media for love, guidance, and attention? I’ll tell you why. they’re isolated from real communities, working service jobs they hate while barely making ends meet, and are living w/ unchecked personal/mental health problems.” A similar tweet from February of 2019, posted by the beverage company Sunny-D, reads cryptically, “I can’t do this anymore.” Both of these messages demonstrate two things. Firstly, there is the strategy employed by modern companies of speaking to customers in the more humanizing first person, moving away from the collective corporate “we” to the individual (and therefore more relatable) “I”. The voice of corporations has changed: once, brands were desperate to come across as serious and professional; now, brands marketing to a twenty-something demographic want to sound cool and detached, and to speak with the voice of an individual rather than a disembodied conglomerate of shareholders and executives.

Secondly, these brands are now appropriating and parroting millennial “depression culture”, which is often expressed through frustration at capitalism and its insidious effect on the individual. To quote Kate Losse again, “It isn’t enough for Denny’s [another prominent presence on the social media scene] to own the diners, it wants in on our alienation from power, capital, and adulthood too.” There is something invasive and inauthentic about this kind of marketing, and furthermore, something ethically troubling about serious issues being used as props to sell frozen food. The point of the Steak-umm tweet may be salient, but the moral implications of a corporate Twitter account appropriating social justice issues to gain attention left many uneasy. As John Paul Rollert, a professor of business and ethics at the University of Chicago, said in an interview with Vice, “It can’t say anything good about society when depressed people feel their best outlet is the Twitter account for Steak-umm.”

How Much Should We Really Use Social Media?

Photograph of a person holding a smartphone with Instagram showing on the screen

Today, we live in a digital era. Modern technology has drastically changed how we go about our everyday lives. It has changed how we learn, for we can retrieve almost any information instantaneously. Even teachers can engage with students through the internet. Money is exchanged digitally. Technology has also changed how we are entertained, for we watch what we want on our phones. But perhaps one of the most popular and equally controversial changes that modern technology has brought to society is how we communicate. Social media. We live in an era where likes and retweets reign supreme. People document their every thought using platforms such as Facebook and Twitter. They share every aspect of their lives through platforms like Instagram. Social media acts as a way to connect people who never would have connected without it, but its effects can also be negative. Given all the controversy that surrounds social media, should we be using it as often as we do?

If you were to walk down the street, or go wait in line at a restaurant, or go to a sporting event, or go anywhere, you’d most likely see people on their phones. They’re scrolling through various social media platforms or sharing the most recent funny dog video. And this phenomenon is happening everywhere and all the time. Per Jessica Brown, a staff writer for BBC, three billion people, which is around 40 percent of the world’s population, use social media. Brown went on to explain that we spend an average of two hours per day on social media, which translates to half a million pieces of content shared every minute. How does this constant engagement with social media affect us?

According to Amanda Macmillan of Time Magazine, in a survey that aimed to gauge the effect that social media platforms have on mental health, Instagram performed the worst. Per Macmillan, the platform was associated with high levels of anxiety, depression, bullying, and other negative symptoms. Other social media platforms, but Instagram especially, can cause FOMO, or the “fear of missing out.” Users will scroll through their feeds and see their friends having fun that they cannot experience. For women users, there is the added pressure of unrealistic body images. Based on the survey that ranked social media platforms and their effects on users, one participant explained that Instagram makes girls and women feel that their bodies aren’t good enough because other users add filters and alter their pictures to look “perfect,” or like the ideal image of beauty. The manipulation of images on Instagram can leave users with low self-esteem and anxiety, and feeling insecure about themselves overall. The negativity that users feel because of what others post can create a toxic environment. Would the same effects occur if people spent less time on social media? If so, maybe users need to take a hard look at how much time they are spending. Or social media platforms could more closely monitor the content that is being posted, to prevent some of the mental health effects that some users experience.

Although Instagram can have adverse effects on mental health, it can also create a positive environment for self-identity and self-expression. It can be a place of community-building and support as well. However, such positive outcomes require all users to cooperate and work to make the digital space a positive environment. Based on the survey of social media platforms, though, this does not seem to be the case, and currently the pros of platforms like Instagram seem to be far outweighed by the cons.

Although Facebook and Twitter were ranked higher than Instagram in terms of negatively affecting users’ mental health, they can still have adverse effects as well. In a survey of 1,800 people, women were found to be more stressed than men, and a large factor in their stress was Twitter. However, it was also found that the more women used Twitter, the less stressed they became. Twitter’s acting as both a stressor and a coping mechanism likely comes down to the type of content women were interacting with. In another survey, researchers found that participants reported lower moods after using Facebook for twenty minutes compared to those who just browsed the internet. But the weather that day (e.g., rainy or sunny) could also have been a factor in users’ moods.

Although social media can have adverse effects on the mental health of its users, it is also a great way to connect with others. It can act as a cultural bridge, bringing people from all across the globe together. It’s a way to share content that can be positive and to unite people with similar beliefs. With the positives and negatives in mind, should we change how much we are using social media? Or at least try to regulate it? People could take it upon themselves simply to try to stay off social media sites, although in the digital age we live in, that might be a hard feat to pull off. After all, too much of a good thing can be a bad thing, as the surveys on social media demonstrate. But perhaps we should be looking at the way we use social media rather than the time we spend on it. If users share positive content and strive to create a positive online presence and community, other users might not deal with the mental health issues that arise after social media use. But then again, people should be free to post whatever content they want. At the end of the day, users have their own agendas for how they manage their social media. So perhaps it’s up to each individual to look at their own health and their social media usage, and to regulate it based on what they see in themselves.