
A Duty to Pop?: On Our Obligation to Hear from Others

In my previous column, I surveyed recent criticisms of social media platforms, such as Bluesky, where the userbase skews toward one end of the political spectrum. Many argue that these platforms are echo chambers and that continuing to use them has various negative effects for the users, so users ought to leave for less polarized platforms.

Drawing on the work of C. Thi Nguyen, I argued that social media platforms, at worst, enable users to create epistemic bubbles, and thus one can resolve problems associated with them by seeking additional information from other sources. Further, I contended that polarization on social media platforms is just another facet of our general background polarization, so it is not obvious why we ought to pop this particular bubble first.

However, one might argue that there is something troubling about knowingly remaining in any epistemic bubble. Thus, even if we cannot show that the epistemic bubbles which social media lend themselves to are especially troubling, our mere knowledge of them implies that we ought to pop those bubbles. Further still, adding new voices and perspectives to one’s social media feed is much easier than moving, starting a new hobby, etc. So perhaps we should strive to pop epistemic bubbles in general, but we ought to specifically target bubbles on social media platforms because they are the least costly to pop.

But what precisely is troubling about knowingly remaining in an epistemic bubble? One argument is that it inhibits someone’s search for the truth. In the second chapter of On Liberty, John Stuart Mill offers a famous defense of the freedom of thought and discussion. In an argument often echoed in debates about free speech today, Mill contends that allowing open and broad discussion of any idea, however fringe, is vital for the pursuit and acquisition of knowledge. Even repugnant and widely rejected ideas, Mill contends, must be carefully examined lest our rejection of them become a dead dogma.

Consider the following. Most people in liberal democracies endorse racial equality, the idea that an individual’s race is irrelevant to her moral worth, character, and merit. Yet, if someone is pressed to justify this belief, they may find themselves pausing; having reflexively endorsed it after living within institutions that also endorse it, many are unable to articulate why this view is more defensible than the alternatives. A proponent of Mill would argue that racial equality has become a dead dogma. It is the cultural default view, so to speak, and anyone who questions it even for the sake of argument may face social punishment. As a result, many may know that they accept it as true but cannot explain why it is true.

So, without open discourse and discussion, people may find themselves unable to defend their beliefs or explain why alternative views ought to be rejected. This may not make a difference for us personally, but it might leave some in a troubling situation. If people are not able to articulate why their beliefs are true, those beliefs may rest on more precarious ground. Perhaps part of the reason for online radicalization is that people have not been inoculated, so to speak, against views on the radical fringes; when one cannot articulate why one believes a basic moral tenet of one’s society, encountering a cogent argument against its truth may be a profoundly troubling experience.

So how does this connect to epistemic bubbles? The less that we encounter positions we disagree with, the more error prone we may become in our beliefs. The less frequently that our beliefs are challenged, the less we are forced to seriously scrutinize them. Without scrutiny, we may come to accept falsehoods without realizing it. However, so long as we remain within an epistemic bubble, we will not encounter challenges to our beliefs. Thus, knowingly remaining in such a bubble appears troubling from an epistemic perspective.

(Of course, it is worth noting that Mill seems to view our challengers as well-meaning truth seekers. If our interlocutors are trolls, motivated by hate or simply hoping to win an argument via rhetorical tricks, rather than discovering the truth, it is not clear that defending our beliefs against theirs benefits us and may in fact be an epistemic detriment for any audience to the discussion.)

The epistemic importance of encountering different beliefs may dovetail with a second troubling aspect of knowingly remaining in an epistemic bubble. When we perform an action, typically we do so based on our beliefs. If I am hungry, I enter the kitchen because I believe that I can find something to eat there; if I believe the fridge is empty, then I would leave in search of food. What I believe shapes what I do.

Of particular relevance here, though, is the link between beliefs and public policy. The policies which people endorse and the candidates for whom they vote are shaped by their beliefs, both about what is true and what values we ought to uphold. Further, the policies we collectively enact will impact the lives of others. When it comes to matters like, say, health care policy, who lives and who dies depends in part upon which policies we adopt. Thus, one may argue that we have a moral duty to seek the truth in matters where our beliefs and thus our decisions have the potential to significantly impact others, as they do in the policy domain. To the extent that we knowingly remain in epistemic bubbles, particularly bubbles on matters relevant to policy, one may argue that we are violating our duties to others, specifically, our duties to hear arguments from differing perspectives, provided that hearing such accounts plays a role in discovering the truth.

This argument offers an important insight but its conclusion may be hasty. Ethical theories often distinguish between two sorts of duties. Some duties are perfect duties. Perfect duties are ones that we can complete. This is because perfect duties are the duties to refrain from wrongdoing. For instance, simply by refusing to kill others, you complete your duty to not kill. Other duties are imperfect duties. In contrast, we cannot complete these duties because they require active undertakings, rather than just refraining. For instance, most believe that we have a general duty to provide aid to others in need. However, in the world’s current conditions, no matter how much you aid others, more need will always remain. Thus, most theorists contend that imperfect duties have latitude; because you cannot complete them, you have some choice in when and how to fulfill them.

The distinction between perfect and imperfect duties now helps us see clearly the issue with the earlier argument. It seems that our duties to hear arguments from others in the pursuit of truth must be imperfect duties; we cannot possibly hear all arguments, nor can we discover all truths. Thus, this duty has latitude. We are not required to always pursue it, even if we have a general epistemic reason and a moral duty to seek out the perspectives of others, especially those who disagree with us.

As a result, it seems the arguments that we ought to leave platforms on which we have created epistemic bubbles may overreach. If epistemic bubbles were our only sources of information, and we never reached out to those who think differently from us, then we would indeed act culpably both as seekers of truth and as members of society. But imperfect duties, like our duty to hear from those we disagree with, are not ones that we must follow at all times. So long as we are doing enough to discharge the duty in other places and at other times, we have permission to opt out at least sometimes.

This suggests that the demand that we abandon social spaces free from ideological conflict is too hasty. Social media is ultimately, like any other technology, a tool. How we ought to use it depends on our purposes. Confronting alternative viewpoints needn’t be our sole driving motive. To make the point, we might consider a humorous post from Bluesky user Leon that describes a party with ideologically like-minded friends as an echo chamber. However, as long as you are putting in an effort elsewhere to engage with those who think differently than you, you should not be troubled that some spaces you occupy, digitally or physically, are not ideologically diverse.

Hearing Voices: Social Media and Echo Chambers

Following Donald Trump’s re-election, many users of X questioned whether they should remain on the platform given Elon Musk’s extensive public and financial support of Trump. Researchers at the Queensland University of Technology found that, following Musk’s endorsement of Trump, posts by Republican-leaning accounts received significantly more views than posts by Democrat-leaning accounts. This was true even when both had a similar number of likes and reposts, suggesting X’s algorithm amplified the reach of Republican-leaning accounts. Further still, many users felt the platform changed in other ways after Musk bought the site – prominent users regularly post pro-Nazi content, hate speech on the site increased by as much as 50%, and, troubling for creators hoping to promote their work, X’s algorithm suppresses posts with external links.

Many users switched to a platform called Bluesky. Initially incubated at Twitter, Bluesky began as an experiment in “decentralizing” social media platforms. Developers aimed to create an open protocol which independent social media platforms could then adopt, allowing their users to access content from other platforms running the same protocol. After Musk bought Twitter, Bluesky became an independent entity, launched a closed beta in February 2023, then released publicly in February 2024. After the U.S. presidential election in November 2024, the website’s userbase expanded at a rapid rate, rising from about 10 million users in September to over 30 million by the end of January 2025.

Given that many users left X for political reasons, some criticized the exodus. Discussing this in December, Kenneth Boyd helpfully chronicles some critiques that users who left Twitter for Bluesky were willfully entering an echo chamber. However, Boyd concludes that our duties to preserve our own well-being can outweigh our duty to engage in discourse, especially when the other party in the discourse is motivated by hatred or a desire to troll.

But in recent days, critiques have re-emerged. An op-ed in The Washington Post argues that liberals, by migrating to platforms like Bluesky, undermine their political causes. Billionaire Mark Cuban, in re-posting this editorial, commented that minimal diversity of thought is hurting the platform’s growth. Similarly, an opinion piece in Slate contends that having only like-minded voices on the platform inhibits both the Democratic party and growth of Bluesky itself.

Given the recent resurgence in discourse, it is worth revisiting the issue. Ultimately, there are two questions we must consider: 1) How do we spot an echo chamber? 2) Is it wrong to stay?

Let’s begin by considering what an echo chamber is. In doing so, I rely on the work of C. Thi Nguyen. Nguyen distinguishes between echo chambers and what he calls epistemic bubbles. Although we often use “bubble” and “echo chamber” interchangeably, Nguyen contends there are important theoretical and practical differences between them. According to Nguyen, one is in an epistemic bubble whenever one’s regular sources of information exclude certain perspectives. The term “bubble” is apt for two reasons. First, it accurately describes the situation. Bubbles clearly divide the interior and exterior. In this case, the perspectives covered by one’s community are inside the bubble, while those excluded are outside. Second, it makes the solution clear. Bubbles may be popped; once something pierces them, they are destroyed. To leave an epistemic bubble, one must merely be exposed to information and perspectives normally not covered within one’s community.

However, echo chambers pose a greater challenge, argues Nguyen. Among the features that differentiate echo chambers from epistemic bubbles is that echo chambers utilize what he calls disagreement reinforcement mechanisms. Unlike epistemic bubbles, echo chambers promote distrust of all information that comes from sources outside the chamber. To accomplish this, influential figures in the echo chamber may frame otherwise contrary information in a way that promotes both rejecting that information and further trusting the prominent voices within the current community.

For instance, imagine an isolated religious community. The leaders instruct followers that the outside world is corrupted, that most people are possessed by wicked spirits, and that these spirits will try to tear adherents away from the community so that they too become corrupted. Suppose a member flees the compound and encounters an outsider. This outsider, upon learning about the religion and its teachings, cautions that it is a cult and that the member should never return. This may, in fact, cause the member to place further trust in the community from which she fled – the leaders accurately predicted what outsiders would say and recommend. Hence, this warning serves as a disagreement reinforcement mechanism, as encountering a different perspective reinforces one’s trust in the prominent perspective within the echo chamber.

Although Nguyen argues there are other differences between echo chambers and epistemic bubbles, this is perhaps the most important and troubling feature: echo chambers encourage distrust of outsiders and provide their members with an explanation that enables them to dismiss all dissent.

With this taxonomy in hand, we can get a clearer handle on the situation for Bluesky and other social media platforms. A platform is an echo chamber only if it is designed such that (a) simply by virtue of being on the platform, users must consume content that (b) encourages them to distrust and reject external sources of information, and (c) dissenting voices are either absent or their posts are suppressed, algorithmically or otherwise. Absent these features, the platform itself is not an echo chamber; at most, users can create or enter an echo chamber within it. But ultimately, without intervention by the designers of a social media platform, no platform is an echo chamber. These communities require intentional construction.

Further still, a platform is not automatically an epistemic bubble either. One may create an epistemic bubble on a platform through one’s choices about whom to follow and what content to consume. If a social media platform tends to disproportionately represent certain perspectives, then one may unintentionally create an epistemic bubble in one’s feed. But this happens naturally in many domains – where we live, shop, and worship (if we do) tends to correlate highly with our political beliefs. Given that we are likely to form friendships with people who live near us and share physical spaces with us, social interaction in general has a propensity to create epistemic bubbles, at least when polarization is part of our background conditions. So, epistemic bubbles on social media platforms may simply be one symptom of a larger problem. And it is unclear why we ought to address this symptom first. Perhaps we should see a flood of columns exhorting liberals to take up hunting, or ads informing conservatives about the quality of Trader Joe’s proprietary brand products.

Of course, one might contend that this is simply a matter of semantics and that it does not matter whether a platform is an epistemic bubble or an echo chamber. But this misses the larger importance of the point. Whether a given community is an echo chamber or an epistemic bubble determines what we must do to address it. As noted earlier, we can simply pop epistemic bubbles by exposing those within one to the perspectives and views that it does not cover.

Echo chambers, however, pose a greater challenge. The presentation of previously unheard information may in fact lead one to double down. Nguyen argues that breaking out of an echo chamber requires first developing a trusting relationship with the individual who presents the contrary evidence. If one views the presenter of the information as knowledgeable, trustworthy, and well-meaning, then one must weigh this contrary information against the echo chamber’s disagreement reinforcement mechanism. In time, this may cause one to lose confidence in the most prominent voices and beliefs within the echo chamber. Unfortunately, this process requires a great deal of time and patience on the part of the party outside the echo chamber.

Thankfully, though, if my above analysis is apt, (most) social media platforms are epistemic bubbles at worst, rather than echo chambers. Thus, one may not need to leave the platform; one must simply ensure that it is not one’s only source of information. Of course, one might argue that one has significant reason to leave a platform even if it merely lends itself to creating an epistemic bubble. We will consider ways of arguing this in my next column.

Is It Time to Nationalize YouTube and Facebook?

image collage of social media signs and symbols

Social media presents several moral challenges to contemporary society, on issues ranging from privacy to the manipulation of public opinion via adaptive recommendation algorithms. One major ethical concern with social media is its addictive tendencies. For example, Frances Haugen, the Facebook whistleblower, has warned about the addictive possibilities of the metaverse. Social media companies design their products to be addictive because their business model is based on an attention economy. Governments have struggled with how to respond to the dangers social media creates, considering measures such as independent oversight bodies and new privacy regulations to limit its power. But does the solution to this problem require changing the business model?

Social media companies like Facebook, Twitter, YouTube, and Instagram profit from an attention economy. This means that the primary product of social media companies is the attention of the people using their service, which these companies can leverage to make money from advertisers. As Vikram Bhargava and Manuel Velasquez explain, because advertisers represent the real customers, corporations are free to be more indifferent to their users’ interests. What many of us fail to realize is that,

“built into the business model of social media is a strong incentive to keep users online for prolonged periods of time, even though this means that many of them will go on to develop addictions…the companies do not care whether it is better or worse for the user because the user does not matter; the user’s interests do not figure into the social media company’s decision making.”

As a result of this business model, social media is often designed with persuasive technology mechanisms. Intermittent variable rewards, nudging, and the erosion of natural stopping cues help create a kind of slot-machine effect, and the use of adaptive algorithms that take in user data in order to customize the user experience only reinforces this. As a result, many experts have increasingly recognized social media addiction as a problem. A 2011 survey found that 59% of respondents felt they were addicted to social media. As Bhargava and Velasquez report, social media addiction mirrors many of the behaviors associated with substance addiction, and neuroimaging studies show that the same areas of the brain are active as in substance addiction. As a potential consequence of this addiction, there has also been a marked increase in teenage suicide following the introduction of social media.

But is there a way to mitigate the harmful effects of social media addiction? Bhargava and Velasquez suggest that measures like addiction warnings, or prompts that make platforms easier to quit, can be important steps. Many have argued that breaking up social media companies like Facebook is necessary, as they function like monopolies. However, it is worth considering that breaking up such businesses to increase competition in a field centered around the same business model may not help. If anything, greater competition in the marketplace may only yield new and “innovative” ways to keep people hooked. If the root of the problem is the business model, perhaps it is the business model which should be changed.

For example, since in an attention economy business model the users of social media are not the customers, one way to make social media companies less incentivized to addict their users is to make them customers. Should social media companies using adaptive algorithms be forced to switch to a subscription-based business model? If customers paid for Facebook directly, Facebook would still have an incentive to provide a good experience for users (now its customers), but it would have less incentive to focus its efforts on monopolizing users’ attention. Bhargava and Velasquez, for example, note that on a subscription streaming platform like Netflix, it is immaterial to the company how much users watch; “making a platform addictive is not an essential feature of the subscription-based service business model.”

But there are problems with this approach as well. As I have described previously, social media companies like Meta and Google have significant abilities to control knowledge production and knowledge communication. Even with a subscription model, the ability of social media companies to manipulate public opinion would remain. Nor would it necessarily solve problems relating to echo chambers and filter bubbles. It may also mean that the poorest members of society would be unable to afford social media, essentially excluding entire socioeconomic groups from the platform. Is there another way to change the business model and avoid these problems?

In the early 20th century, during the rise of a new mass media and its advertising, many believed that this new technology would be a threat to democracy. The solution was public broadcasting such as PBS, BBC, and CBC. Should a 21st-century solution to the problem of social media be similar? Should there be a national YouTube or a national Facebook? Certainly, such platforms wouldn’t need to be based on an attention economy; they would not be designed to capture as much of their users’ attention as possible. Instead, they could be made available for free, without a subscription, for all citizens to use and contribute to if they wish.

Such a platform would not only give the public greater control over how its algorithms operate, but also greater control over privacy settings. The platform could also be designed to strengthen democracy. Instead of having a corporation like Google determine the results of your video or news search, for instance, the public itself would have a greater say about what news and information is most relevant. It could also bolster democracy by ensuring that recommendation algorithms do not create echo chambers; users could be exposed to a diversity of posts or videos that don’t necessarily reflect their own political views.

Of course, such a proposal carries problems as well. Cost might be significant; however, a service that replicates the positive social benefits without the “innovative” and expensive process of creating addictive algorithms may partially offset that. Also, depending on the nation, such a service could be subject to abuse. Just as there is a difference between public broadcasting and state-run media (where the government has editorial control), the service would lose its purpose if all content on the platform were controlled directly by the government. Something more independent would be required.

However, another significant minefield for such a project would be agreeing on community standards for content. Obviously, the point would be not to allow the platform to become a breeding ground for misinformation, and so clear standards would be necessary. Yet, in the greater democratic interest of breaking free from our echo chambers, the public would also have to accept that others may post, and that they themselves may see, content they consider offensive. We need to be exposed to views we don’t like. In a post-pandemic world, this is a larger public conversation that needs to happen, regardless of how we choose to regulate social media.

Mill’s Dilemma

image of crowd with empty circle in the middle

One of the most famous defenders of free speech, John Stuart Mill, argued against the silencing of unpopular opinion on the grounds that we would potentially miss out on true ideas and that we need to check our own fallibility. Contained in this reasoning is the idea that, in the marketplace of ideas, truth would hopefully emerge. But that marketplace is evolving into several niche submarkets. The use of algorithms and the creation of filter bubbles mean that we no longer share a common pool of accepted facts. The recent US election has revealed just how polarized the electorate is, and this raises a moral question about the extent to which a democratic public is obligated to look outside of its bubbles.

Regardless of whether the current president concedes at this point, it is difficult to think that the conspiracy theories about the 2020 election will disappear, and this will have long-term ramifications for the country. What this means is that at least two sets of citizens will have very different factual understandings of the world, especially after January. The fake news being spread about the election is filling a demand for a particular version of events, and on the right this demand is now being met by news sources whose content is ever more divorced from the reporting that the rest of us get. For example, the backlash by Trump supporters over Fox News’ projected win for Biden has led many on the right to label the network as “too liberal” and to switch to alternatives that are more willing to reflect the reality that is desired rather than the reality that exists. Similarly, conservatives feeling that their views have been censored on Facebook or Twitter have been drifting towards new platforms which are tailor-made to reflect their beliefs and are rife with misinformation.

The long-term concern of course is that as different political perspectives confine themselves to their own apps, sites, and feeds, the filter bubble effect becomes more pronounced. The concerns that Mill had about censorship in the marketplace of ideas aren’t the problem. The problem is that the pluralistic marketplaces that have emerged, and the different sets of political worldviews they have created, are becoming insular and isolated from one another, and thus more unrecognizable to each other. This is a problem for several reasons. Many have already pointed out that it allows misinformation to spread, but the issue is more complicated.

The political bubbles of information and the echo chamber effect are making it easier for those all across the political spectrum to escape that check on our fallibility. They also make addressing real-world problems like climate change and COVID-19 more complicated. As one nurse has said, people are literally using their last breaths to proclaim that COVID isn’t real as they die from the disease. When recently asked about the fact that President Trump received over 70 million votes in the election, former President Obama opined that the nation is clearly divided and that the worldview presented in rightwing media is powerful. He noted, “It’s very hard for our democracy to function if we are operating on completely different sets of facts.”

As many experts have testified, this split in worldview is not going away. The moral issue isn’t merely that so many people can believe falsehoods or that truths may be buried; it’s the way that “facts,” as understood within an epistemic bubble, are related to each other and how political problems get defined by those relations, all of which leads to incommensurability. The moral issue is thus practical: how does a society where everyone is free to create their own worldview based on their preferences and have their views echoed back to them function when we can’t recognize what the other side is talking about? As the election debates demonstrated, certain dog whistles or narratives will resonate with some and not be recognized by others. Even if we put facts, fact-checking, and truth aside, do we still have a moral obligation to look outside of our own bubble and understand what our political opponents are saying?

In a recent paper from Episteme on the subject, C. Thi Nguyen argues that we need to distinguish between epistemic bubbles and echo chambers. In the former, information is left out because a consumer is only provided certain sources. For example, if I open links to certain kinds of articles in a news feed, an algorithm may begin to provide more articles just like them and exclude articles that I am less likely to open, thus leading to an epistemic bubble. On the other hand, if I specifically avoid or exclude certain sources, I am creating an echo chamber. As described, “Both are structures of exclusion—but epistemic bubbles exclude through omission, while echo chambers exclude by manipulating trust.” Breaking free from an echo chamber is far more difficult because it involves using distrust of non-members to epistemically discredit them.

Trust is obviously important. Attempts to undermine fake news outlets or engage in censorship have only seemed to inspire more distrust. Fox News tries to maintain journalistic integrity by accurately projecting an election, but this breaks the trust of Fox News viewers, who leave for another network that will reflect their wishes. Since Twitter began tagging misleading tweets, conservatives have been opting for other means of sharing their views. It seems the more that the so-called mainstream media tries to combat disinformation, the more it creates distrust. Simply trying to correct misinformation will not work either. Studies of disinformation campaigns reveal just how difficult correction is, because even once a false claim is corrected, it is often the false claim that is remembered.

So, what is the alternative? As mainstream media attempts to prevent the spread of misinformation on their own platforms, trust in those platforms declines. And those who are left watching mainstream media, even if they do want truth, lose a check on their own biases and perspectives. Do the rest of us have an obligation to look at Newsmax, Breitbart, or Parler just so we can see what epistemic framework the other side is coming from? It may not be good for the cause of truth, but it might be necessary for the cause of democracy and for eventually getting the country to recognize and respond to the same problems. It may be that the only way to rebuild the epistemic trust required to break free from our echo chambers is to engage with our adversaries rather than merely fact-check them. By preventing the marketplace of ideas from balkanizing, there may still be hope that through the exchange of ideas truth will eventually emerge. On the other hand, it may only cause more disinformation to spread even more easily. Mill’s dilemma is still our dilemma.

YouTube and the Filter Bubble

photograph of bubble floating

If you were to get a hold of my laptop and go to YouTube, you’d see a grid of videos that are “recommended” to me, based on videos I’ve watched in the past and channels I’ve subscribed to. To me, my recommendations are not surprising: clips from The Late Show, a few music videos, and a bunch of videos about chess (don’t judge me). There are also some that are less expected – one about lockpicking, for example, and something called “Bruce Lee Lightsabers Scene Recreation (Dual of Fates edit).” All of this is pretty par for the course: YouTube will generally populate your own personalized version of your homepage with videos from channels you’re familiar with, and ones that it thinks you might like. In some cases these recommendations lead you down interesting paths to videos you’d like to see more of (that lockpicking one turned out to be pretty interesting), while in other cases they’re total duds (I just cannot suspend my disbelief when it comes to lightsaber nunchucks).

A concern with YouTube making these recommendations, however, is that one will get stuck seeing the same kind of content over and over again. While this might not be a worry when it comes to videos that are just for entertainment, it can be a much bigger problem when it comes to videos that present false or misleading information, or promote generally hateful agendas. This phenomenon – in which one tends to be presented with similar kinds of information and sources based on one’s search history and browsing habits – is well documented, and results in what some have called a “filter bubble.” The worry is that once you watch videos of a particular type, you risk getting stuck in a bubble where you’ll be presented with many similar kinds of videos, making it increasingly difficult to come across videos from more reputable sources.

YouTube is well aware that all sorts of awful content exist on its platform, and has been attempting to combat it, although with mixed results. In a statement released in early June, YouTube stated that it was focused on removing a variety of types of hateful content, specifically by “prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.” It provides some examples of the content it was targeting, including “videos that promote or glorify Nazi ideology” and “content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.” YouTube has not, however, been terribly successful in its efforts thus far: as Gizmodo reports, there are plenty of channels on YouTube making videos about conspiracy theories, white nationalism, and anti-LGBTQ hate groups that have not yet been removed from the site. So worries about filter bubbles full of hateful and misleading content persist.

There is another reason to be worried about the potential filter bubbles created by YouTube: if I am not in your bubble, then I will not know what kind of information you’re being exposed to. This can be a problem for a number of reasons. First, given my own YouTube history, it is extremely unlikely that a video about the “dangers” of vaccines, or a video glorifying white supremacy, will show up in my recommendations. Those parts of YouTube are essentially invisible to me, meaning that it is difficult to really tell how prevalent and popular these videos are. Second, since I don’t know what’s being recommended to you, I won’t know what kind of information you’re being exposed to: you may be exposed to a whole bunch of garbage that I don’t know exists, which makes it difficult for us to have a productive conversation if I don’t know, say, what you take to be a reputable source of information, or what the information conveyed by that source might be.

There is, however, a way to see what’s going on outside of your bubble: simply create a new Google account, sign into YouTube, and its algorithms will quickly build you a new profile of recommended videos. I ran this experiment, and within minutes had created a profile that would be very out of character for myself, but would fit with the profile of someone with very different political views. For example, the top videos recommended to me on my fake account are the following:

FACTS NOT FEELINGS: Shapiro demolishes & humiliates little socialist comrade

CEO creates ‘Snowflake Test’ to weed out job applicants

Tucker: Not everyone in 2020 Democratic field is a lunatic

What Young Men NEED To Understand About Relationships – Jordan Peterson

This is not to say that I want to be recommended videos that push a misleading or hateful agenda, nor would I recommend that anyone actively go and seek them out. But one of the problems in creating filter bubbles is that if I’m not in your bubble then I’m not going to know what’s going on in there. YouTube, then, not only makes it much easier for someone to get caught up in a bubble of terrible recommended content, but also makes it more difficult to combat it.

Of course, this is also not to say that every alternative viewpoint has to be taken seriously: while it may be worth knowing what kinds of reasons antivaxxers are providing for their views, for example, I am under no obligation to take those views seriously. But with more and more people getting their news and seeking out political commentary from places like YouTube, next time you’re clicking through your recommendations it might be a good idea to consider what is not being shown to you. While creating a YouTube alter-ego is optional, it is worth keeping in mind that successfully communicating and having productive discussions with each other requires that we at least know where the other person is coming from, and this might require more active efforts to get out of one’s filter bubble.

The Rise of Political Echo Chambers


Anyone who has spent even a little bit of time on the internet is no doubt familiar with its power to spread false information, as well as the insular communities that are built around the sharing of such information. Examples of such groups can readily be found on your social media of choice: anti-vaccination and climate change denial groups abound on Facebook, while groups like the subreddit “The Donald” boast 693,000 subscribers (self-identified as “patriots”) who consistently propagate racist, hateful, and false claims made by Trump and members of the far-right. While the existence of these groups is nothing new, it is worth considering their impact and ethical ramifications as 2019 gets underway.

Theorists have referred to these types of groups as echo chambers, namely groups in which a certain set of viewpoints and beliefs are shared amongst their members, but in such a way that views from outside the group are either paid no attention or actively thought of as misleading. Social media groups are often presented as examples: an anti-vaxx Facebook group, for example, may consist of members who share their views with other members of the group, but either ignore or consider misleading the tremendous amount of evidence that their beliefs are mistaken. These views tend to propagate because the more one sees one’s beliefs shared and repeated (in other words, “echoed back”), the more confident one becomes that they’re actually correct.

The potential dangers of echo chambers have received a lot of attention recently, with some blaming such groups for contributing to the decrease in the rate of parents vaccinating their children, and to increased political partisanship. Philosopher C. Thi Nguyen compares echo chambers to “cults,” arguing that their existence can in part explain what appears to be an increasing disregard for the truth. Consider, for example, The Washington Post’s recent report that Trump had made 7,645 false or misleading claims since the beginning of his presidency. While some of these claims required more complex fact-checking than others, numerous claims (e.g., that the border wall is already being built, or those concerning the size of his inauguration crowd) are much more easily assessed. The fact that Trump supporters continue to believe and propagate his claims can be partly explained by the existence of echo chambers: if one is a member of a group in which similar views are shared and outside sources are ignored or considered untrustworthy, then it is easier to understand how such claims can continue to be believed, even when patently false.

The harms of echo chambers, then, are wide-ranging and potentially significant. As a result, it would seem that we have an obligation to attempt to break out of any echo chambers we happen to find ourselves in, and to convince others to get out of theirs. Nguyen urges us to attempt to “escape the echo chamber” but emphasizes that doing so might not be easy: members of echo chambers will continue to receive confirmation from those whom they trust and who share their beliefs, and, because they distrust outside sources of information, will not be persuaded by countervailing evidence.

As 2019 begins, the problem of echo chambers is perhaps getting worse. As a recent Pew Research Center study reports, polarization along partisan lines has been steadily increasing since the beginning of Trump’s presidency on a wide range of issues. Trump’s consistent labeling of numerous news sources and journalists as untrustworthy is clearly contributing to the problem: Trump supporters will be more likely to treat information provided by those sources deemed “fake news” as untrustworthy, and thus will fail to consider contradictory evidence.

So what do we do about the problem of echo chambers? David Robert Grimes at The Guardian suggests that while echo chambers can be comforting – it is nice, after all, to have our beliefs validated and not to have to challenge our convictions – such comfort hardly outweighs the potential harms. Instead, Grimes suggests that “we need to become more discerning at analysing our sources” and that “we must learn not to cling to something solely because it chimes with our beliefs, and be willing to jettison any notion when it is contradicted by evidence.”

Grimes’ advice is reminiscent of the American philosopher Charles Sanders Peirce, who considered the type of person who forms beliefs using what he calls a “method of tenacity,” namely someone who sticks to their beliefs no matter what. As Peirce notes, such a path is comforting – “When an ostrich buries its head in the sand as danger approaches,” Peirce says, “it very likely takes the happiest course. It hides the danger, and then calmly says there is no danger; and, if it feels perfectly sure there is none, why should it raise its head to see?” – but nevertheless untenable, as no one can remain an ostrich for very long, and will thus be forced to come into contact with ideas that will ultimately force them to address challenges to their beliefs. Peirce insists that we instead approach our beliefs scientifically, for “the scientific spirit requires a man to be at all times ready to dump his whole cart-load of beliefs, the moment experience is against them.”

Hopefully 2019 will see more people taking the advice of Grimes and Peirce seriously, and the comfort of being surrounded by familiar beliefs and avoiding critical introspection will no longer win out over a concern for truth.