
Is It Time to Nationalize YouTube and Facebook?


Social media presents several moral challenges to contemporary society, on issues ranging from privacy to the manipulation of public opinion via adaptive recommendation algorithms. One major ethical concern with social media is its addictive tendencies. Frances Haugen, the Facebook whistleblower, has warned about the addictive possibilities of the metaverse, for example. Social media companies design their products to be addictive because their business model is based on an attention economy. Governments have struggled with how to respond to the dangers social media creates, weighing ideas such as independent oversight bodies and new privacy regulations to limit the power of these companies. But does the solution to this problem require changing the business model itself?

Social media companies like Facebook, Twitter, YouTube, and Instagram profit from an attention economy. This means that the primary product of social media companies is the attention of the people using their service, which these companies can leverage to make money from advertisers. As Vikram Bhargava and Manuel Velasquez explain, because advertisers represent the real customers, corporations are free to be more indifferent to their users’ interests. What many of us fail to realize is that,

“built into the business model of social media is a strong incentive to keep users online for prolonged periods of time, even though this means that many of them will go on to develop addictions…the companies do not care whether it is better or worse for the user because the user does not matter; the user’s interests do not figure into the social media company’s decision making.”

As a result of this business model, social media is often designed with persuasive technology mechanisms. Intermittent variable rewards, nudging, and the erosion of natural stopping cues help create a kind of slot-machine effect, and the use of adaptive algorithms that take in user data to customize the user experience only reinforces this. Experts have increasingly recognized social media addiction as a problem; a 2011 survey found that 59% of respondents felt they were addicted to social media. As Bhargava and Velasquez report, social media addiction mirrors many of the behaviors associated with substance addiction, and neuroimaging studies show activity in the same areas of the brain as in substance addiction. There has also been a marked increase in teenage suicide following the introduction of social media, which some point to as a potential consequence of this addiction.
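To make the slot-machine comparison concrete, here is a minimal sketch, assuming a hypothetical feed with invented content pools and an arbitrary 30% reward probability, of the kind of intermittent variable reward schedule such designs rely on. The point is simply that the user can never predict which refresh will pay off, which is what keeps them pulling.

```python
import random

def refresh_feed(ordinary_items, engaging_items, reward_probability=0.3):
    """Toy model of an intermittent variable reward schedule: on each
    refresh the user *might* get a highly engaging item, but never
    knows when -- the unpredictability itself drives repeated checking."""
    feed = []
    for _ in range(10):  # ten posts per refresh, no natural stopping cue
        if random.random() < reward_probability:
            feed.append(random.choice(engaging_items))   # the "payout"
        else:
            feed.append(random.choice(ordinary_items))   # filler content
    return feed

# Hypothetical content pools, purely for illustration.
ordinary = ["acquaintance's lunch photo", "ad", "old meme"]
engaging = ["close friend tagged you", "your post went viral", "breaking drama"]

for pull in range(3):
    feed = refresh_feed(ordinary, engaging)
    hits = sum(item in engaging for item in feed)
    print(f"refresh {pull + 1}: {hits} rewarding items out of {len(feed)}")
```

A slot machine works the same way: it is the unpredictable reward schedule, more than the reward itself, that hooks the player.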

But is there a way to mitigate the harmful effects of social media addiction? Bhargava and Velasquez suggest that measures like addiction warnings, or prompts that make the platforms easier to quit, can be important steps. Many have argued that breaking up social media companies like Facebook is necessary because they function like monopolies. However, breaking up such businesses to increase competition in a field centered on the same business model may not help. If anything, greater competition in the marketplace may only yield new and “innovative” ways to keep people hooked. If the root of the problem is the business model, perhaps it is the business model which should be changed.

For example, since users are not the customers in an attention economy business model, one way to weaken social media companies’ incentive to addict their users is to make users the customers. Should social media companies using adaptive algorithms be forced to switch to a subscription-based business model? If customers paid for Facebook directly, Facebook would still have an incentive to provide a good experience for users (now its customers), but it would have less incentive to focus its efforts on monopolizing users’ attention. Bhargava and Velasquez, for example, note that on a subscription streaming platform like Netflix, it is immaterial to the company how much users watch; “making a platform addictive is not an essential feature of the subscription-based service business model.”

But there are problems with this approach as well. As I have described previously, social media companies like Meta and Google have significant abilities to control knowledge production and knowledge communication. Even with a subscription model, the ability of social media companies to manipulate public opinion would still be present. Nor would it necessarily solve problems relating to echo chambers and filter bubbles. It may also mean that the poorest members of society would be unable to afford social media, essentially excluding entire socioeconomic groups from the platform. Is there another way to change the business model and avoid these problems?

In the early 20th century, as a new mass media and its advertising rose to prominence, many believed that this new technology would be a threat to democracy. One solution was public broadcasting, such as the BBC, the CBC, and later PBS. Should a 21st-century solution to the problem of social media be similar? Should there be a national YouTube or a national Facebook? Certainly, such platforms wouldn’t need to be based on an attention economy; they would not be designed to capture as much of their users’ attention as possible. Instead, they could be made freely available for all citizens to use and contribute to, without a subscription.

Such a platform would not only give the public greater control over how its algorithms operate, but also greater control over privacy settings. The platform could likewise be designed to strengthen democracy. Instead of having a corporation like Google determine the results of your video or news search, for instance, the public itself would have a greater say about what news and information is most relevant. It could also bolster democracy by ensuring that recommendation algorithms do not create echo chambers; users could be exposed to a diversity of posts or videos that don’t necessarily reflect their own political views.
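What it might mean, mechanically, for a public platform’s recommender to resist echo chambers can be sketched very roughly. The snippet below is only an illustration under assumed names: the candidate pool, the leaning labels, and the diversity_quota parameter are all invented, and a real system would be far more sophisticated. It simply reserves a fixed share of every recommendation batch for items outside the user’s inferred political leaning.

```python
import random

def recommend(candidates, user_leaning, batch_size=10, diversity_quota=0.3):
    """Fill most of the batch with items matching the user's inferred
    leaning, but reserve a fixed quota for items from other viewpoints."""
    similar = [c for c in candidates if c["leaning"] == user_leaning]
    different = [c for c in candidates if c["leaning"] != user_leaning]

    n_diverse = int(batch_size * diversity_quota)   # e.g. 3 of 10 slots
    n_similar = batch_size - n_diverse

    batch = random.sample(similar, min(n_similar, len(similar)))
    batch += random.sample(different, min(n_diverse, len(different)))
    random.shuffle(batch)   # mix the out-of-bubble picks in with the rest
    return batch

# Hypothetical candidate pool tagged with a rough, invented political leaning.
pool = [{"title": f"video {i}",
         "leaning": random.choice(["left", "right", "center"])}
        for i in range(100)]

for item in recommend(pool, user_leaning="left"):
    print(item["leaning"], "-", item["title"])
```

The design choice being illustrated is that exposure to other viewpoints becomes a guaranteed quota rather than something left to whatever happens to maximize watch time.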

Of course, such a proposal carries problems as well. The cost might be significant; however, a service that replicates the positive social benefits without the “innovative” and expensive work of building addictive algorithms might partially offset it. Also, depending on the nation, such a service could be subject to abuse. Just as there is a difference between public broadcasting and state-run media (where the government has editorial control), the service would lose its purpose if all content on the platform were controlled directly by the government. Something more independent would be required.

However, another significant minefield for such a project would be agreeing on community standards for content. Obviously, the point would be not to allow the platform to become a breeding ground for misinformation, and so clear standards would be necessary. At the same time, in the greater democratic interest of breaking free from our echo chambers, the public would have to accept that others may post, and they may see, content they consider offensive. We need to be exposed to views we don’t like. In a post-pandemic world, this is a larger public conversation that needs to happen, regardless of how we choose to regulate social media.

Mill’s Dilemma


One of the most famous defenders of free speech, John Stuart Mill, argued against the silencing of unpopular opinion on the grounds that we would potentially miss out on true ideas and that we need a check on our own fallibility. Contained in this reasoning is the idea that, in the marketplace of ideas, truth would hopefully emerge. But that marketplace is evolving into several niche submarkets. The use of algorithms and the creation of filter bubbles mean that we no longer share a common pool of accepted facts. The recent US election has revealed just how polarized the electorate is, and this raises a moral question about the extent to which a democratic public is obligated to look outside its bubbles.

Regardless of whether the current president concedes at this point, it is difficult to think that the conspiracy theories about the 2020 election will disappear, and this will have long-term ramifications for the country. What this means is that at least two sets of citizens will have very different factual understandings of the world, especially after January. The fake news being spread about the election is filling a demand for a particular version of events, and on the right this demand is now being met by news sources whose content is ever more divorced from the reporting the rest of us get. For example, the backlash by Trump supporters over Fox News’ projected win for Biden has led many on the right to label the network “too liberal” and to switch to alternatives that are more willing to reflect the reality that is desired rather than the reality that exists. Similarly, conservatives who feel that their views have been censored on Facebook or Twitter have been drifting towards new platforms which are tailor-made to reflect their beliefs and are rife with misinformation.

The long-term concern, of course, is that as different political perspectives confine themselves to their own apps, sites, and feeds, the filter bubble effect becomes more pronounced. The concern that Mill had about censorship in the marketplace of ideas isn’t the problem. The problem is that the pluralistic marketplaces that have emerged, and the different sets of political worldviews they have created, are becoming insular, isolated from one another, and thus more unrecognizable to each other. This is a problem for several reasons. Many have already pointed out that it allows misinformation to spread, but the issue is more complicated.

Political bubbles of information and the echo chamber effect make it easier for people all across the political spectrum to escape that check on their own fallibility. They also make addressing real-world problems like climate change and COVID-19 more complicated. As one nurse has said, people are literally using their last breaths to proclaim that COVID isn’t real as they die from the disease. When recently asked about the fact that President Trump received over 70 million votes in the election, former President Obama opined that the nation is clearly divided and that the worldview presented in rightwing media is powerful. He noted, “It’s very hard for our democracy to function if we are operating on completely different sets of facts.”

As many experts have testified, this split in worldview is not going away. The moral issue isn’t merely that so many people can believe falsehoods or that truths may be buried; it’s the way that “facts,” as understood within an epistemic bubble, are related to each other, and the way political problems get defined by those relations, all of which leads to incommensurability. The moral issue is thus practical: how does a society where everyone is free to create their own worldview based on their preferences, and to have their views echoed back to them, function when we can’t recognize what the other side is talking about? As the election debates demonstrated, certain dog whistles or narratives will resonate with some and not be recognized by others. Even if we put facts, fact-checking, and truth aside, do we still have a moral obligation to look outside of our own bubble and understand what our political opponents are saying?

In a recent paper in Episteme on the subject, C. Thi Nguyen argues that we need to distinguish between epistemic bubbles and echo chambers. In the former, information is left out because a consumer is only provided certain sources. For example, if I open links to certain kinds of articles in a news feed, an algorithm may begin to provide more articles just like them and exclude articles that I am less likely to open, thus creating an epistemic bubble. On the other hand, if I specifically avoid or exclude certain sources, I am creating an echo chamber. As Nguyen describes it, “Both are structures of exclusion—but epistemic bubbles exclude through omission, while echo chambers exclude by manipulating trust.” Breaking free from an echo chamber is far more difficult because the chamber itself uses distrust of non-members to epistemically discredit them.
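Nguyen’s “exclusion through omission” is, at bottom, a feedback loop, and the loop is easy to caricature in code. The sketch below is a toy simulation with invented topic labels and an arbitrary update rule, not any platform’s actual algorithm: clicked topics get a small boost, ignored topics quietly fade, and within a couple hundred iterations whole categories of content stop appearing even though nobody ever decided to hide them.

```python
import random
from collections import Counter

# Invented topic labels and update rule -- a caricature, not a real system.
topics = ["politics-left", "politics-right", "science", "sports", "cooking"]
weights = {t: 1.0 for t in topics}            # start with no preference at all

def user_clicks(topic):
    """Suppose this user reliably clicks one topic and ignores the rest."""
    return topic == "politics-left"

shown = Counter()
for step in range(200):
    # Recommend a topic in proportion to its current weight.
    topic = random.choices(topics, weights=[weights[t] for t in topics])[0]
    shown[topic] += 1
    if user_clicks(topic):
        weights[topic] *= 1.1                 # clicked content gets boosted ...
    else:
        weights[topic] *= 0.95                # ... ignored content quietly fades

print("what the feed showed over time:", dict(shown))
print("final weights:", {t: round(w, 2) for t, w in weights.items()})
```

Nothing here censors anything; the omission falls out of optimizing for clicks, which is exactly what makes an epistemic bubble easier to pop than an echo chamber, where trust itself has been re-engineered.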

Trust is obviously important. Attempts to undermine fake news outlets or engage in censorship have only seemed to inspire more distrust. Fox News tries to maintain journalistic integrity by projecting an election result, but this breaks the trust of Fox News viewers, who leave for another network that will reflect their wishes. Because Twitter tags misleading tweets, conservatives are opting for other means of sharing their views. It seems the more the so-called mainstream media tries to combat the spread of disinformation, the more it creates distrust. Simply trying to correct misinformation will not work either. Studies of disinformation campaigns reveal just how difficult it is to correct, because even once a false claim is corrected, it is often the false claim that is remembered.

So, what is the alternative? As mainstream media attempts to prevent the spread of misinformation on their own platforms, trust in those platforms declines. And those who are left watching mainstream media, even if they do want truth, lose a check on their own biases and perspectives. Do the rest of us have an obligation to look at Newsmax, Breitbart, or Parler just so we can see what epistemic framework the other side is coming from? It may not be good for the cause of truth, but it might be necessary for the cause of democracy, and for eventually getting the country to recognize and respond to the same problems. It may be that the only way to rebuild the epistemic trust required to break free from our echo chambers is to engage with our adversaries rather than merely fact-check them. If we can prevent the marketplace of ideas from balkanizing, there may still be hope that through the exchange of ideas truth will eventually emerge. On the other hand, it may only allow more disinformation to spread even more easily. Mill’s dilemma is still our dilemma.

YouTube and the Filter Bubble


If you were to get a hold of my laptop and go to YouTube, you’d see a grid of videos that are “recommended” to me, based on videos I’ve watched in the past and channels I’ve subscribed to. To me, my recommendations are not surprising: clips from The Late Show, a few music videos, and a bunch of videos about chess (don’t judge me). There are also some that are less expected – one about lockpicking, for example, and something called “Bruce Lee Lightsabers Scene Recreation (Dual of Fates edit).” All of this is pretty par for the course: YouTube will generally populate your own personalized version of your homepage with videos from channels you’re familiar with, and ones that it thinks you might like. In some cases this leads you down interesting paths to videos you’d like to see more of (that lockpicking one turned out to be pretty interesting) while in other cases they’re total duds (I just cannot suspend my disbelief when it comes to lightsaber nunchucks).

A concern with YouTube making these recommendations, however, is that one will get stuck seeing the same kind of content over and over again. While this might not be a worry when it comes to videos that are just for entertainment, it can be a much bigger problem when it comes to videos that present false or misleading information, or promote generally hateful agendas. This phenomenon – where one tends to be presented with similar kinds of information and sources based on one’s search history and browsing habits – is well documented, and results in what some have called a “filter bubble.” The worry is that once you watch videos of a particular type, you risk getting stuck in a bubble where you’ll be presented with many similar kinds of videos, making it more and more difficult to come across videos that may come from more reputable sources.

YouTube is well aware that its platform hosts all sorts of awful content, and it has been attempting to combat it, although with mixed results. In a statement released in early June, YouTube stated that it was focused on removing a variety of types of hateful content, specifically by “prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.” They provided some examples of the content they were targeting, including “videos that promote or glorify Nazi ideology” and “content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.” They have not, however, been terribly successful in their efforts thus far: as Gizmodo reports, there are plenty of channels on YouTube making videos about conspiracy theories, white nationalism, and anti-LGBTQ hate groups that have not yet been removed from the site. So worries about filter bubbles full of hateful and misleading content persist.

There is another reason to be worried about the potential filter bubbles created by YouTube: if I am not in your bubble, then I will not know what kind of information you’re being exposed to. This can be a problem for a number of reasons: first, given my own YouTube history, it is extremely unlikely that a video about the “dangers” of vaccines, or videos glorifying white supremacy, will show up in my recommendations. Those parts of YouTube are essentially invisible to me, meaning that it is difficult to really tell how prevalent and popular these videos are. Second, since I don’t know what’s being recommended to you, I won’t know what kind of information you’re being exposed to: you may be exposed to a whole bunch of garbage that I don’t know exists, which makes it difficult for us to have a productive conversation if I don’t know, say, what you take to be a reputable source of information, or what the information conveyed by that source might be. 
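To see how completely two personalized homepages can diverge, consider a rough sketch under invented assumptions: a made-up catalog of 500 videos, single topic labels, and a naive scoring rule that ranks videos by how often their topic appears in a user’s watch history. Real recommenders are vastly more complex, but even this crude version produces two homepages with essentially nothing in common, which is precisely why each bubble is invisible from the other.

```python
import random

# Invented topic labels, purely for illustration.
TOPICS = ["chess", "late-night clips", "music", "anti-vax", "white nationalism"]

def personalized_recs(history, catalog, k=10):
    """Crude stand-in for a recommender: rank the whole catalog by how
    often each video's topic appears in the user's watch history."""
    topic_counts = {}
    for video in history:
        topic_counts[video["topic"]] = topic_counts.get(video["topic"], 0) + 1
    ranked = sorted(catalog,
                    key=lambda v: topic_counts.get(v["topic"], 0),
                    reverse=True)
    return ranked[:k]

# Hypothetical catalog: 500 videos, each carrying a single topic label.
catalog = [{"id": i, "topic": random.choice(TOPICS)} for i in range(500)]

my_history = [v for v in catalog if v["topic"] == "chess"][:20]
your_history = [v for v in catalog if v["topic"] == "anti-vax"][:20]

my_homepage = {v["id"] for v in personalized_recs(my_history, catalog)}
your_homepage = {v["id"] for v in personalized_recs(your_history, catalog)}

print("videos on both of our homepages:", len(my_homepage & your_homepage))
```

Run it and the overlap is typically zero: neither of us ever sees what the other’s homepage looks like, let alone what it leaves out.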

There is, however, a way to see what’s going on outside of your bubble: simply create a new Google account, sign into YouTube, and its algorithms will quickly build you a new profile of recommended videos. I ran this experiment, and within minutes had created a profile that would be very out of character for me, but would fit someone with very different political views. For example, the top videos recommended to me on my fake account are the following:

FACTS NOT FEELINGS: Shapiro demolishes & humiliates little socialist comrade

CEO creates ‘Snowflake Test’ to weed out job applicants

Tucker: Not everyone in 2020 Democratic field is a lunatic

What Young Men NEED To Understand About Relationships – Jordan Peterson

This is not to say that I want to be recommended videos that push a misleading or hateful agenda, nor would I recommend that anyone actively go and seek them out. But one of the problems filter bubbles create is that if I’m not in your bubble, then I’m not going to know what’s going on in there. YouTube, then, not only makes it much easier for someone to get caught up in a bubble of terrible recommended content, but also makes that content more difficult to combat.

Of course, this is also not to say that every alternative viewpoint has to be taken seriously: while it may be worth knowing what kinds of reasons antivaxxers are providing for their views, for example, I am under no obligation to take those views seriously. But with more and more people getting their news and seeking out political commentary from places like YouTube, next time you’re clicking through your recommendations it might be a good idea to consider what is not being shown to you. While creating a YouTube alter-ego is optional, it is worth keeping in mind that successfully communicating and having productive discussions with each other requires that we at least know where the other person is coming from, and this might require making more active efforts to get out of one’s filter bubble.