
YouTube and the Filter Bubble


If you were to get hold of my laptop and go to YouTube, you’d see a grid of videos that are “recommended” to me, based on videos I’ve watched in the past and channels I’ve subscribed to. To me, my recommendations are not surprising: clips from The Late Show, a few music videos, and a bunch of videos about chess (don’t judge me). There are also some that are less expected – one about lockpicking, for example, and something called “Bruce Lee Lightsabers Scene Recreation (Dual of Fates edit).” All of this is par for the course: YouTube populates your personalized homepage with videos from channels you’re familiar with, along with ones it thinks you might like. Sometimes this leads you down interesting paths to videos you’d like to see more of (that lockpicking one turned out to be pretty interesting), while other times the recommendations are total duds (I just cannot suspend my disbelief when it comes to lightsaber nunchucks).

A concern with YouTube making these recommendations, however, is that one will get stuck seeing the same kind of content over and over again. While this might not be a worry when it comes to videos that are just for entertainment, it can be a much bigger problem when it comes to videos that present false or misleading information, or promote generally hateful agendas. This phenomenon – where one tends to be presented with similar kinds of information and sources based on one’s search history and browsing habits – is well documented, and results in what some have called a “filter bubble.” The worry is that once you watch videos of a particular type, you risk getting stuck in a bubble where you’ll be presented with many more videos of the same kind, making it harder and harder to come across videos from more reputable sources.
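To make the mechanism concrete, here is a minimal sketch – a toy model, not YouTube’s actual recommendation system – of how a recommender weighted by watch history reinforces itself: every click makes similar videos more likely to be recommended next, and the bubble tightens.

```python
# Toy model of a filter bubble (illustrative only; nothing here
# reflects YouTube's real algorithm). Recommendations are sampled
# in proportion to past watches, so each view makes that topic
# more likely to be recommended again: a rich-get-richer loop.
import random
from collections import Counter

TOPICS = ["chess", "late night clips", "lockpicking", "conspiracy"]

def recommend(history: Counter, n: int = 5) -> list[str]:
    """Sample n recommendations weighted by watch counts.
    The +1 keeps never-watched topics faintly visible."""
    weights = [history[topic] + 1 for topic in TOPICS]
    return random.choices(TOPICS, weights=weights, k=n)

history = Counter({"chess": 1})  # a single chess video to start

for _ in range(100):
    recs = recommend(history)
    history[recs[0]] += 1  # the user clicks the top recommendation

print(history)  # typically ends up heavily skewed toward one topic
```

Even a tiny initial bias usually concentrates the watch history on one topic through this feedback loop, which is the structural worry behind the filter bubble.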

YouTube is well aware that its platform hosts all sorts of awful content, and has been attempting to combat it, although with mixed results. In a statement released in early June, YouTube said it was focused on removing a variety of types of hateful content, specifically by “prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.” It provided some examples of the content it was targeting, including “videos that promote or glorify Nazi ideology” and “content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.” It has not, however, been terribly successful in its efforts thus far: as Gizmodo reports, plenty of channels making videos about conspiracy theories, white nationalism, and anti-LGBTQ hate groups have not yet been removed from the site. So worries about filter bubbles full of hateful and misleading content persist.

There is another reason to be worried about the potential filter bubbles created by YouTube: if I am not in your bubble, then I will not know what kind of information you’re being exposed to. This is a problem for a couple of reasons. First, given my own YouTube history, it is extremely unlikely that a video about the “dangers” of vaccines, or a video glorifying white supremacy, will show up in my recommendations. Those parts of YouTube are essentially invisible to me, making it difficult to tell how prevalent and popular such videos are. Second, since I don’t know what’s being recommended to you, you may be exposed to a whole bunch of garbage that I don’t know exists, which makes it difficult for us to have a productive conversation: I won’t know, say, what you take to be a reputable source of information, or what information that source has conveyed to you.

There is, however, a way to see what’s going on outside of your bubble: simply create a new Google account, sign into YouTube, and its algorithms will quickly build a new profile of recommended videos. I ran this experiment, and within minutes had created a profile that would be very out of character for me, but would fit someone with very different political views. For example, the top videos recommended to my fake account were the following:

FACTS NOT FEELINGS: Shapiro demolishes & humiliates little socialist comrade

CEO creates ‘Snowflake Test’ to weed out job applicants

Tucker: Not everyone in 2020 Democratic field is a lunatic

What Young Men NEED To Understand About Relationships – Jordan Peterson

This is not to say that I want to be recommended videos that push a misleading or hateful agenda, nor would I recommend that anyone actively seek them out. But one of the problems filter bubbles create is that if I’m not in your bubble, then I’m not going to know what’s going on in there. YouTube, then, not only makes it much easier for someone to get caught up in a bubble of terrible recommended content, but also makes that content more difficult to combat.

Of course, this is also not to say that every alternative viewpoint has to be taken seriously: while it may be worth knowing what kinds of reasons anti-vaxxers are offering for their views, for example, I am under no obligation to take those views seriously. But with more and more people getting their news and political commentary from places like YouTube, next time you’re clicking through your recommendations it might be a good idea to consider what is not being shown to you. Creating a YouTube alter-ego is optional, but it is worth keeping in mind that communicating successfully and having productive discussions requires that we at least know where the other person is coming from – and that might require more active efforts to get outside of one’s filter bubble.

On Tumblr, Adult Content is Banned – For Good?


In early December, blogging platform Tumblr announced that it would be banning adult content from its site starting mid-month. In a post explaining the decision, CEO Jeff D’Onofrio stated that removing such content would better allow Tumblr to be a “safe place for creative expression [and] self-discovery” and would result in “a place where more people feel comfortable expressing themselves.” D’Onofrio further explained that while he recognized that many users sought out Tumblr as a source of adult content, “[t]here are no shortage of sites on the internet” that such users could turn to. The content to be removed includes “images, videos, or GIFs that show real-life human genitals or female-presenting nipples—this includes content that is so photorealistic that it could be mistaken for featuring real-life humans (nice try, though),” although “certain types of artistic, educational, newsworthy, or political content featuring nudity” will be allowed to stay.

While creating safe places for expression and self-discovery is a laudable goal, if a bit vague, recent problems with the platform perhaps better explain Tumblr’s decision. Notably, the Tumblr app was recently removed from the iOS App Store, ostensibly because Tumblr was not doing enough to ensure that illegal content – specifically child pornography – was being filtered out. Whether Tumblr’s decision to ban all adult content was based on genuine moral concern or simply concern for the bottom line (it would no doubt be a major blow to Tumblr to be removed from the iOS store permanently), many online have speculated that the ban will spell the death of the platform. As Motherboard discovered, a significant percentage of Tumblr blogs were based around providing adult content, with an even more significant percentage of users seeking that content out.

It is clear that Tumblr has a moral obligation to do as much as it can to prevent harmful material like child pornography from appearing on its platform. It also seems clear that Tumblr has, up until this point, failed to meet that obligation. The move to ban all adult content, then, might seem to be the straightforwardly right way to make amends for its past transgressions, as well as to prevent further such harms in the future. And indeed, it may very well be the case that, overall, Tumblr’s ban on adult content will prevent many further harms (especially given many of the problems inherent to the type of pornography that tends to be propagated online).

Nevertheless, there has been a good amount of negative response to Tumblr’s decision as well. In general, people appear to be expressing three main worries. The first is that Tumblr’s new filters are bad at distinguishing the kind of content Tumblr wants to ban from content it has deemed inoffensive, and thus that an attempted universal ban on adult content will potentially stifle legitimate creative expression; the second is that Tumblr is being hypocritical in banning adult content while not, for example, taking strides to address other problems on its platform, especially those involving hate speech; and the third is that the ban on adult content disproportionately affects users from marginalized groups.

With regard to the first worry, many Tumblr users have already noticed that the new filters are not great at separating inappropriate from appropriate content. As The Guardian reports, Tumblr has already flagged as inappropriate images of fully clothed historical figures, artists, and ballet dancers, as well as a painting of Jesus in a loincloth. If Tumblr is meant to be a site that encourages self-expression, stifling that expression as a result of bad programming does not seem to be the best way to achieve its goal.
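For a sense of why this kind of over-flagging happens, consider a deliberately crude sketch (the scores and threshold below are invented for illustration; Tumblr’s actual classifier is not public): any filter that reduces an image to a single “adult-ness” score and applies a hard cutoff will sweep up borderline-scoring but innocent images along with the genuinely explicit ones.

```python
# Crude illustration of threshold-based filtering (hypothetical
# scores; this is not Tumblr's classifier). A single "adult-ness"
# score with a hard cutoff cannot tell an explicit photo apart
# from flesh-toned tights or a loincloth in a painting.
ADULT_THRESHOLD = 0.35  # invented cutoff

def flag_as_adult(score: float) -> bool:
    """Flag any image whose estimated adult-content score
    exceeds the fixed threshold."""
    return score > ADULT_THRESHOLD

# Hypothetical classifier outputs for four images:
images = {
    "explicit photograph": 0.80,
    "ballet dancer in flesh-toned tights": 0.45,  # false positive
    "painting of Jesus in a loincloth": 0.40,     # false positive
    "fully clothed historical portrait": 0.10,
}

for name, score in images.items():
    verdict = "FLAGGED" if flag_as_adult(score) else "ok"
    print(f"{name}: {verdict}")
```

Lowering the threshold catches more genuinely explicit content but flags more innocent images; raising it does the opposite. That trade-off between over- and under-flagging is exactly what users are running into.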

The second worry pertains to Tumblr’s general policies about what kind of content is acceptable on the platform. As The Washington Post reports, while searches involving sexually explicit terms on Tumblr returned no results, “racist and white supremacist content, including Nazi propaganda, was easily surfaced”, despite the fact that such posts violate Tumblr’s policies surrounding hate speech. Just as Tumblr has an obligation to attempt to prevent illegal pornographic content, it similarly has an obligation to prevent users from promoting hate speech. It seems hypocritical, however, to focus only on one type of content and not the other, especially given the widespread harms that can result from the dissemination of the racist and white supremacist content that is easily searchable on the platform.

The final worry pertains to the way that Tumblr’s new ban, combined with ineffective filtering technology, could disproportionately impact members of marginalized communities. Writing for BBC News, David Lee argues:

“Unlike typical pornography sites, which overwhelmingly cater to men, and serve an often narrow definition of what is attractive, Tumblr has been a home for something else – content tailored at vibrant LGBT communities, or for those with tastes you might not necessarily share with all your friends.”

In a post on his blog, actor Wil Wheaton concurs:

“The reality is that for a lot of the LGTBQ+ community, particularly younger members still discovering themselves and members in extremely homophobic environments where most media sites were banned (but Tumblr wasn’t even considered important enough to be), this was a bastion of information and self-expression.”

Wheaton supported his view by performing an experiment in which he posted a series of images of “beautiful men kissing” on Tumblr. The post was flagged as inappropriate, despite the images not being pornographic or in any clear violation of Tumblr’s newly stated policies. Wheaton laments that “it’s ludicrous and insulting that – especially in 2018 – this is flagged, either by some sort of badly-designed algorithm, or by shitty homophobic people”. Finally, Kaila Hale-Stern, writing at The Mary Sue, argues that

“What D’Onofrio and his corporate overlords at Verizon’s Oath [who own Tumblr] don’t understand—or don’t care about—is that this sort of adult content is frequently generated by women, marginalized people, and all sorts of creatives struggling in our vicious ‘gig economy.’ They’re going to be hurt the most by the ban.”

Aside from stifling self-expression, then, Hale-Stern points to a more tangible harm in the form of creators of adult content losing their livelihoods as a result of Tumblr’s new ban.

What initially appeared to be a straightforward fulfillment of Tumblr’s obligation to prevent harmful content from appearing on its platform is, on closer inspection, more complicated. As many have argued, there is perhaps room for a middle ground: instead of issuing a universal ban on all adult content, Tumblr could have implemented more effective algorithms to detect and filter out offensive content without removing the means of self-expression – and the livelihoods – of so many users. Better filtering could also, hopefully, address the concerns about hate speech.

Should DePauw be Concerned about First-Year Students of Color?


DePauw’s community of students of color is remarkably diverse, with each individual hailing from a different background. That diversity, however, can demand major adaptation upon arriving at DePauw, a predominantly white institution (PWI). The process of adaptation can be made even more difficult when a student of color’s identity is tested through negative interactions with their white counterparts, as well as by negative outside forces that push onto DePauw’s campus.

Continue reading “Should DePauw be Concerned about First-Year Students of Color?”

Fighting Obscenity with Automation

When it comes to policing offensive content online, Facebook’s moderators have their work cut out for them. With billions of users, filtering out offensive content ranging from pornographic images to videos promoting graphic violence and extremism is a never-ending task. For the most part, this job falls on teams of staffers who spend their days sifting through offensive content manually. The decisions of these staffers – which posts get deleted, which posts stay up – would be controversial in any case. But the politically charged context of content moderation in the digital age has left some users feeling censored by Facebook’s policies, sparking a debate over automated alternatives.

Continue reading “Fighting Obscenity with Automation”

In Defense of the Unpalatable: Protection of Hate Speech

On Wednesday, a group of fundamentalist Christians picketed the DePauw University campus, holding signs decrying the sins of “masturbators”, “feminists”, “pot-heads”, and “baby-killers”, while shouting at women passing by to “stop being whores” and to accept that “your sins are your fault, not your boyfriend’s.”

Continue reading “In Defense of the Unpalatable: Protection of Hate Speech”