A Duty to Pop?: On Our Obligation to Hear from Others

In my previous column, I surveyed recent criticisms of social media platforms, such as Bluesky, where the userbase skews toward one end of the political spectrum. Many argue that these platforms are echo chambers and that continuing to use them has various negative effects for the users, so users ought to leave for less polarized platforms.

Drawing on the work of C. Thi Nguyen, I argued that social media platforms, at worst, enable users to create epistemic bubbles, and thus one can resolve problems associated with them by seeking additional information from other sources. Further, I contended that polarization on social media platforms is just another facet of our general background polarization, so it is not obvious why we ought to pop this particular bubble first.

However, one might argue that there is something troubling about knowingly remaining in any epistemic bubble. Thus, even if we cannot show that the epistemic bubbles which social media lend themselves towards are especially troubling, then our mere knowledge of them implies that we ought to pop those bubbles. Further still, adding new voices and perspectives to one’s social media feed is much easier than moving, starting a new hobby, etc. So perhaps we should strive to pop epistemic bubbles in general, but we ought to specifically target bubbles on social media platforms because they are the least costly to pop.

But what precisely is troubling about knowingly remaining in an epistemic bubble? One argument is that it inhibits one's search for the truth. In the second chapter of On Liberty, John Stuart Mill offers a famous defense of the freedom of thought and discussion. In an argument often echoed in debates about free speech today, Mill contends that allowing open and broad discussion of any idea, however fringe, is vital for the pursuit and acquisition of knowledge. Even repugnant and widely rejected ideas, Mill contends, must be carefully examined lest our rejection of them become a dead dogma.

Consider the following. Most people in liberal democracies endorse racial equality, the idea that an individual’s race is irrelevant to her moral worth, character, and merit. Yet if pressed to justify this belief, someone may find herself pausing; having reflexively endorsed it after living among institutions that also endorse it, many are unable to articulate why this view is more defensible than the alternatives. A proponent of Mill would argue that racial equality has become a dead dogma. It is the cultural default view, so to speak, and anyone who questions it, even for the sake of argument, may face social punishment. As a result, many may know that they accept it as true but cannot explain why it is true.

So, without open discourse and discussion, people may find themselves unable to defend their beliefs or explain why alternative views ought to be rejected. This may not make a difference for us personally, but it might leave some in a troubling situation. If people are not able to articulate why their beliefs are true, those beliefs may rest on more precarious ground. Perhaps part of the reason for online radicalization is that people have not been inoculated, so to speak, against views on the radical fringes; when one cannot articulate why one believes a basic moral tenet of one's society, encountering a cogent argument against its truth may be a profoundly troubling experience.

So how does this connect to epistemic bubbles? The less that we encounter positions we disagree with, the more error prone we may become in our beliefs. The less frequently that our beliefs are challenged, the less we are forced to seriously scrutinize them. Without scrutiny, we may come to accept falsehoods without realizing it. However, so long as we remain within an epistemic bubble, we will not encounter challenges to our beliefs. Thus, knowingly remaining in such a bubble appears troubling from an epistemic perspective.

(Of course, it is worth noting that Mill seems to view our challengers as well-meaning truth seekers. If our interlocutors are trolls, motivated by hate or simply hoping to win an argument via rhetorical tricks, rather than discovering the truth, it is not clear that defending our beliefs against theirs benefits us and may in fact be an epistemic detriment for any audience to the discussion.)

The epistemic importance of encountering different beliefs may dovetail with a second troubling aspect of knowingly remaining in an epistemic bubble. When we perform an action, typically we do so based on our beliefs. If I am hungry, I enter the kitchen because I believe that I can find something to eat there; if I believe the fridge is empty, then I would leave in search of food. What I believe shapes what I do.

Of particular relevance here, though, is the link between beliefs and public policy. The policies which people endorse and the candidates for whom they vote are shaped by their beliefs, both about what is true and what values we ought to uphold. Further, the policies we collectively enact will impact the lives of others. When it comes to matters like, say, health care policy, who lives and who dies depends in part upon which policies we adopt. Thus, one may argue that we have a moral duty to seek the truth in matters where our beliefs, and thus our decisions, have the potential to significantly impact others, as they do in the policy domain. To the extent that we knowingly remain in epistemic bubbles, particularly bubbles on matters relevant to policy, one may argue that we are violating our duties to others, specifically, our duties to hear arguments from differing perspectives, provided that hearing such accounts plays a role in discovering the truth.

This argument offers an important insight but its conclusion may be hasty. Ethical theories often distinguish between two sorts of duties. Some duties are perfect duties. Perfect duties are ones that we can complete. This is because perfect duties are the duties to refrain from wrongdoing. For instance, simply by refusing to kill others, you complete your duty to not kill. Other duties are imperfect duties. In contrast, we cannot complete these duties because they require active undertakings, rather than just refraining. For instance, most believe that we have a general duty to provide aid to others in need. However, in the world’s current conditions, no matter how much you aid others, more need will always remain. Thus, most theorists contend that imperfect duties have latitude; because you cannot complete them, you have some choice in when and how to fulfill them.

The distinction between perfect and imperfect duties now helps us see clearly the issue with the earlier argument. It seems that our duties to hear arguments from others in the pursuit of truth must be imperfect duties; we cannot possibly hear all arguments, nor can we discover all truths. Thus, this duty has latitude. We are not required to always pursue it, even if we have a general epistemic reason and a moral duty to seek out the perspectives of others, especially those who disagree with us.

As a result, it seems the arguments that we ought to leave platforms on which we have created epistemic bubbles may overreach. If social media were our only source of information and our only point of contact with those who think differently from us, then remaining in such bubbles would indeed make us culpable, both as seekers of truth and as members of society. But imperfect duties, like our duty to hear from those we disagree with, are not ones that we must follow at all times. So long as we are doing enough to discharge the duty in other places and at other times, we have permission to opt out at least sometimes.

This suggests that the call to abandon social spaces free from ideological conflict is too hasty. Social media is ultimately, like any other technology, a tool. How we ought to use it depends on our purposes. Confronting alternative viewpoints needn’t be our sole driving motive. To make the point, we might consider a humorous post from Bluesky user Leon that describes a party with ideologically like-minded friends as an echo chamber. So long as you are putting in an effort elsewhere to engage with those who think differently than you, you should not be troubled that some spaces you occupy, digitally or physically, are not ideologically diverse.

Is It Time to Nationalize YouTube and Facebook?

Social media presents several moral challenges to contemporary society on issues ranging from privacy to the manipulation of public opinion via adaptive recommendation algorithms. One major ethical concern with social media is its addictive tendencies. For example, Frances Haugen, the Facebook whistleblower, has warned about the addictive possibilities of the metaverse. Social media companies design their products to be addictive because their business model is based on an attention economy. Governments have struggled with how to respond to the dangers that social media creates, considering responses like independent oversight bodies and new privacy regulations to limit the power of social media companies. But does the solution to this problem require changing the business model?

Social media companies like Facebook, Twitter, YouTube, and Instagram profit from an attention economy. This means that the primary product of social media companies is the attention of the people using their service, which these companies can leverage to make money from advertisers. As Vikram Bhargava and Manuel Velasquez explain, because advertisers represent the real customers, corporations are free to be more indifferent to their users’ interests. What many of us fail to realize is that,

“built into the business model of social media is a strong incentive to keep users online for prolonged periods of time, even though this means that many of them will go on to develop addictions…the companies do not care whether it is better or worse for the user because the user does not matter; the user’s interests do not figure into the social media company’s decision making.”

As a result of this business model, social media is often designed with persuasive technology mechanisms. Intermittent variable rewards, nudging, and the erosion of natural stopping cues help create a kind of slot-machine effect, and the use of adaptive algorithms that take in user data to customize the user experience only reinforces this. Accordingly, experts have increasingly recognized social media addiction as a problem. A 2011 survey found that 59% of respondents felt they were addicted to social media. As Bhargava and Velasquez report, social media addiction mirrors many of the behaviors associated with substance addiction, and neuroimaging studies show that the same areas of the brain are active as in substance addiction. There has also been a marked increase in teenage suicide following the introduction of social media, which some point to as a potential consequence of this addiction.
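
To make the slot-machine comparison concrete, here is a minimal, purely illustrative sketch of the intermittent variable reward pattern described above. The probabilities, function names, and messages are hypothetical and do not describe any real platform's implementation.

```python
import random

def refresh_feed(reward_probability=0.3):
    """Toy 'pull to refresh': sometimes it pays off, sometimes it doesn't."""
    if random.random() < reward_probability:
        return ["a new post picked just for you"]  # variable, unpredictable reward
    return []                                      # nothing this time; keep pulling

# A fixed schedule (say, new content on every fifth refresh) would be easy to
# walk away from; it is the unpredictability that erodes natural stopping cues.
for attempt in range(5):
    print(attempt, refresh_feed())
```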

But is there a way to mitigate the harmful effects of social media addiction? Bhargava and Velasquez suggest that measures like addiction warnings, or prompts that make platforms easier to quit, can be important steps. Many have argued that breaking up social media companies like Facebook is necessary because they function like monopolies. However, it is worth considering that breaking up such businesses to increase competition in a field centered around the same business model may not help. If anything, greater competition in the marketplace may only yield new and “innovative” ways to keep people hooked. If the root of the problem is the business model, perhaps it is the business model that should be changed.

For example, since under an attention economy business model the users of social media are not the customers, one way to make social media companies less incentivized to addict their users is to make them customers. Should social media companies using adaptive algorithms be forced to switch to a subscription-based business model? If customers paid for Facebook directly, Facebook would still have an incentive to provide a good experience for users (now its customers), but it would have less incentive to focus its efforts on monopolizing users’ attention. Bhargava and Velasquez, for example, note that on a subscription streaming platform like Netflix, it is immaterial to the company how much users watch; “making a platform addictive is not an essential feature of the subscription-based service business model.”

But there are problems with this approach as well. As I have described previously, social media companies like Meta and Google have significant abilities to control knowledge production and knowledge communication. Even with a subscription model, the ability of social media companies to manipulate public opinion would still be present. Nor would it necessarily solve problems relating to echo chambers and filter bubbles. It may also mean that the poorest members of society would be unable to afford social media, essentially excluding entire socioeconomic groups from the platform. Is there another way to change the business model and avoid these problems?

In the early 20th century, during the rise of a new mass media and its advertising, many believed that this new technology would be a threat to democracy. The solution was public broadcasting, such as PBS, the BBC, and the CBC. Should a 21st-century solution to the problem of social media be similar? Should there be a national YouTube or a national Facebook? Certainly, such platforms wouldn’t need to be based on an attention economy; they would not be designed to capture as much of their users’ attention as possible. Instead, they could be made available for all citizens to contribute to for free, if they wish, without a subscription.

Such a platform would not only give the public greater control over how its algorithms operate, but also greater control over privacy settings. The platform could also be designed to strengthen democracy. Instead of having a corporation like Google determine the results of your video or news search, for instance, the public itself would have a greater say about what news and information is most relevant. It could also bolster democracy by ensuring that recommendation algorithms do not create echo chambers; users could be exposed to a diversity of posts or videos that don’t necessarily reflect their own political views.
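
As a rough illustration of that last idea, here is a minimal sketch, under hypothetical assumptions, of a feed ranker that reserves a share of slots for content outside a user's usual interests. The names, data shapes, and the 30% figure are invented for illustration; nothing here describes how any existing platform actually works.

```python
import random

def rank_feed(items, user_interests, diversity_share=0.3, feed_size=10):
    """Build a feed that mixes familiar topics with out-of-bubble exposure.

    items: list of dicts like {"title": str, "topic": str}
    user_interests: set of topics the user already engages with
    """
    familiar = [i for i in items if i["topic"] in user_interests]
    outside = [i for i in items if i["topic"] not in user_interests]

    n_outside = min(int(feed_size * diversity_share), len(outside))  # reserved slots
    n_familiar = feed_size - n_outside

    feed = familiar[:n_familiar] + random.sample(outside, n_outside)
    random.shuffle(feed)  # don't bury the unfamiliar items at the bottom
    return feed
```

The point of the reserved slots is simply that exposure to unfamiliar viewpoints becomes an explicit design goal rather than a by-product of engagement maximization.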

Of course, such a proposal carries problems as well. The cost might be significant; however, a service that replicates the positive social benefits without the “innovative” and expensive process of creating addictive algorithms may partially offset it. Also, depending on the nation, such a service could be subject to abuse. Just as there is a difference between public broadcasting and state-run media (where the government has editorial control), the service would lose its purpose if all content on the platform were controlled directly by the government. Something more independent would be required.

However, another significant minefield for such a project would be agreeing on community standards for content. Obviously, the point would be not to allow the platform to become a breeding ground for misinformation, and so clear standards would be necessary. At the same time, in the greater democratic interest of breaking free from our echo chambers, the public would have to agree to accept that others may post, and that they themselves may see, content they consider offensive. We need to be exposed to views we don’t like. In a post-pandemic world, this is a larger public conversation that needs to happen, regardless of how we choose to regulate social media.

Mill’s Dilemma

One of the most famous defenders of free speech, John Stuart Mill, argued against the silencing of unpopular opinion on the grounds that we would potentially miss out on true ideas and that we need to check our own fallibility. Contained in this reasoning is the idea that, in the marketplace of ideas, truth would hopefully emerge. But that marketplace is evolving into several niche submarkets. The use of algorithms and the creation of filter bubbles means that we no longer share a common pool of accepted facts. The recent US election has revealed just how polarized the electorate is, and this raises a moral question about the extent to which a democratic public is obligated to look outside of its bubbles.

Regardless of whether the current president concedes at this point, it is difficult to think that the conspiracy theories about the 2020 election will disappear, and this will have long-term ramifications for the country. What this means is that at least two sets of citizens will have very different factual understandings of the world, especially after January. The fake news being spread about the election is filling a demand for a particular version of events, and on the right this demand is now being met by news sources whose content is ever more divorced from the reporting that the rest of us get. For example, the backlash by Trump supporters over Fox News’ projected win for Biden has led many on the right to label the network as “too liberal” and to switch to alternatives that are more willing to reflect the reality that is desired rather than the reality that exists. Similarly, conservatives feeling that their views have been censored on Facebook or Twitter have been drifting towards new platforms which are tailor-made to reflect their beliefs and are rife with misinformation.

The long-term concern, of course, is that as different political perspectives confine themselves to their own apps, sites, and feeds, the filter bubble effect becomes more pronounced. The concerns that Mill had about censorship in the marketplace of ideas aren’t the problem. The problem is that the pluralistic marketplaces that have emerged, and the different sets of political worldviews they have created, are becoming insular and isolated from one another and thus more unrecognizable to each other. This is a problem for several reasons. Many have already pointed out that it allows misinformation to spread, but the issue is more complicated.

The political bubbles of information and the echo chamber effect are making it easier for those all across the political spectrum to escape that check on our fallibility. They also make addressing real-world problems like climate change and COVID-19 more complicated. As one nurse has said, people are literally using their last breaths proclaiming that COVID isn’t real as they die from the disease. When recently asked about the fact that President Trump received over 70 million votes in the election, former President Obama opined that the nation is clearly divided and that the worldview presented in rightwing media is powerful. He noted, “It’s very hard for our democracy to function if we are operating on completely different sets of facts.”

As many experts have testified, this split in worldview is not going away. The moral issue isn’t merely that so many people can believe falsehoods or that truths may be buried; it’s the way that “facts,” as understood within an epistemic bubble, are related to each other, and the way political problems get defined by those relations, all of which leads to incommensurability. The moral issue is thus practical: how does a society function when everyone is free to create their own worldview based on their preferences and have their views echoed back to them, and we can’t recognize what the other side is talking about? As the election debates demonstrated, certain dog whistles or narratives will resonate with some and not be recognized by others. Even if we put facts, fact-checking, and truth aside, do we still have a moral obligation to look outside of our own bubble and understand what our political opponents are saying?

In a recent paper from Episteme on the subject, C. Thi Nguyen argues that we need to distinguish between epistemic bubbles and echo chambers. In the former, information is left out because a consumer is only provided certain sources. For example, if I open links to certain kinds of articles in a news feed, an algorithm may begin to provide more articles just like them and exclude articles that I am less likely to open, thus leading to an epistemic bubble. On the other hand, if I specifically avoid or exclude certain sources, I am creating an echo chamber. As described, “Both are structures of exclusion—but epistemic bubbles exclude through omission, while echo chambers exclude by manipulating trust.” Breaking free from an echo chamber is far more difficult because it works by using distrust of non-members to epistemically discredit them.
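
To illustrate the "exclusion through omission" side of that distinction, here is a minimal, hypothetical sketch of a click-driven feed. Nothing has to tell the user to distrust other sources; topics the user never opens simply score zero and fall out of view. The data, names, and scoring are invented for illustration and are not Nguyen's, nor any real platform's.

```python
from collections import Counter

def update_profile(profile, clicked_item):
    """Record which topics the user actually opens."""
    profile[clicked_item["topic"]] += 1
    return profile

def rank_by_engagement(items, profile, feed_size=3):
    """Rank items by how often the user has clicked their topic before.

    Topics the user never clicks score zero and quietly drop off the feed:
    exclusion by omission, not by manipulated trust.
    """
    return sorted(items, key=lambda i: profile[i["topic"]], reverse=True)[:feed_size]

# Minimal usage example: the user only ever opens one kind of story.
profile = Counter()
items = [{"title": f"story {n}", "topic": t}
         for n, t in enumerate(["local news", "opposing op-eds", "sports"] * 3)]
for item in items:
    if item["topic"] == "local news":
        update_profile(profile, item)

# With only "local news" ever clicked, the visible feed contains nothing else.
print([i["topic"] for i in rank_by_engagement(items, profile)])
```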

Trust is obviously important. Attempts to undermine fake news outlets or engage in censorship have only seemed to inspire more distrust. Fox News tries to maintain journalistic integrity by projecting the election results, but this breaks the trust of Fox News viewers, who leave for another network that will reflect their wishes. Because Twitter tags misleading tweets, conservatives are opting for other means of sharing their views. It seems the more that the so-called mainstream media tries to combat the spread of disinformation, the more it creates distrust. Simply trying to correct misinformation will not work either. Studies of disinformation campaigns reveal just how difficult correction is, because even once a false claim is corrected, it is often the false claim that is remembered.

So, what is the alternative? As mainstream media attempts to prevent the spread of misinformation on their own platforms, trust in those platforms declines. And those who are left watching mainstream media, even if they do want truth, lose a check on their own biases and perspectives. Do the rest of us have an obligation to look at Newsmax, Breitbart, or Parler just so we can see what epistemic framework the other side is coming from? It may not be good for the cause of truth, but it might be necessary for the cause of democracy and for eventually getting the country to recognize and respond to the same problems. It may be that the only way to rebuild the epistemic trust required to break free from our echo chambers is to engage with our adversaries rather than merely fact-check them. By preventing the marketplace of ideas from balkanizing, there may still be hope that through the exchange of ideas truth will eventually emerge. On the other hand, it may only allow more disinformation to spread even more easily. Mill’s dilemma is still our dilemma.

YouTube and the Filter Bubble

If you were to get a hold of my laptop and go to YouTube, you’d see a grid of videos that are “recommended” to me, based on videos I’ve watched in the past and channels I’ve subscribed to. To me, my recommendations are not surprising: clips from The Late Show, a few music videos, and a bunch of videos about chess (don’t judge me). There are also some that are less expected – one about lockpicking, for example, and something called “Bruce Lee Lightsabers Scene Recreation (Dual of Fates edit).” All of this is pretty par for the course: YouTube will generally populate your own personalized version of your homepage with videos from channels you’re familiar with, and ones that it thinks you might like. In some cases this leads you down interesting paths to videos you’d like to see more of (that lockpicking one turned out to be pretty interesting) while in other cases they’re total duds (I just cannot suspend my disbelief when it comes to lightsaber nunchucks).

A concern with YouTube making these recommendations, however, is that one will get stuck seeing the same kind of content over and over again. While this might not be a worry when it comes to videos that are just for entertainment, it can be a much bigger problem when it comes to videos that present false or misleading information, or promote generally hateful agendas. This phenomenon – where one tends to be presented with similar kinds of information and sources based on one’s search history and browsing habits – is well documented, and results in what some have called a “filter bubble.” The worry is that once you watch videos of a particular type, you risk getting stuck in a bubble where you’ll be presented with many similar kinds of videos, making it more and more difficult to come across videos that may come from more reputable sources.

YouTube is well aware that there are all sorts of awful content on its platform, and has been attempting to combat it, although with mixed results. In a statement released in early June, YouTube stated that it was focused on removing a variety of types of hateful content, specifically by “prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.” They provide some examples of such content that they were targeting, including “videos that promote or glorify Nazi ideology” and “content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.” They have not, however, been terribly successful in their efforts thus far: as Gizmodo reports, there are plenty of channels on YouTube making videos about conspiracy theories, white nationalism, and anti-LGBTQ hate groups that have not yet been removed from the site. So worries about filter bubbles full of hateful and misleading content persist.

There is another reason to be worried about the potential filter bubbles created by YouTube: if I am not in your bubble, then I will not know what kind of information you’re being exposed to. This can be a problem for a number of reasons: first, given my own YouTube history, it is extremely unlikely that a video about the “dangers” of vaccines, or videos glorifying white supremacy, will show up in my recommendations. Those parts of YouTube are essentially invisible to me, meaning that it is difficult to really tell how prevalent and popular these videos are. Second, since I don’t know what’s being recommended to you, I won’t know what kind of information you’re being exposed to: you may be exposed to a whole bunch of garbage that I don’t know exists, which makes it difficult for us to have a productive conversation if I don’t know, say, what you take to be a reputable source of information, or what the information conveyed by that source might be. 

There is, however, a way to see what’s going on outside of your bubble: simply create a new Google account, sign into YouTube, and its algorithms will quickly build you a new profile of recommended videos. I ran this experiment, and within minutes had created a profile that would be very out of character for myself, but would fit with the profile of someone with very different political views. For example, the top videos recommended to me on my fake account are the following:

FACTS NOT FEELINGS: Shapiro demolishes & humiliates little socialist comrade

CEO creates ‘Snowflake Test’ to weed out job applicants

Tucker: Not everyone in 2020 Democratic field is a lunatic

What Young Men NEED To Understand About Relationships – Jordan Peterson

This is not to say that I want to be recommended videos that push a misleading or hateful agenda, nor would I recommend that anyone actively go and seek them out. But one of the problems in creating filter bubbles is that if I’m not in your bubble then I’m not going to know what’s going on in there. YouTube, then, not only makes it much easier for someone to get caught up in a bubble of terrible recommended content, but also makes it more difficult to combat it.

Of course, this is also not to say that every alternative viewpoint has to be taken seriously: while it may be worth knowing what kinds of reasons antivaxxers are providing for their views, for example, I am under no obligation to take those views seriously. But with more and more people getting their news and seeking out political commentary from places like YouTube, next time you’re clicking through your recommendations it might be a good idea to consider what is not being shown to you. While creating a YouTube alter-ego is optional, it is worth keeping in mind that successfully communicating and having productive discussions with each other requires that we at least know where the other person is coming from, and this might require taking more active efforts to try to get out of one’s filter bubble.