
On Ye and Edgelords

Vultures 1, released by ¥$ (a collaboration between Ye – né Kanye West – and Ty Dolla $ign), has been met with many different reactions, ranging from criticism to acclaim, and from cautious optimism that Ye is “back” in some way to skepticism that he will ever produce good music again. The release of the album, however, has been overshadowed by Ye’s behavior: prior to the release of Vultures 1, Ye had long been making antisemitic remarks in interviews and spouting hate speech on social media, comments which have cost him endorsements and more than a few fans. At the same time, Vultures 1 has done well commercially, if not critically, with one track reaching the top spot on the Billboard Hot 100, the first time for Ye in 13 years.

The critical reviews have focused both on the music and the surrounding controversies. One of the most notable came from popular YouTube music critic Anthony Fantano, who called the album “completely unreviewable trash.” “Unreviewable” because, according to Fantano, defenders of Ye’s music are not open to listening to any form of criticism, as they consist of those who agree with Ye’s hateful messages, “nihilistic teenagers” who are entertained by the controversy, and those who are too obsessed with Ye to be clear-headed.

While Fantano’s review is noteworthy for its bite, many other outlets have also been critical, and in similar ways:

“On the album closer, ‘King,’ he raps about being called ‘crazy, bipolar, antisemite,’ then puffs out his chest: ‘I’m still the king.’ King of who? Middle-school edgelords?” – The Washington Post

“Kanye West & Ty Dolla $ign: Vultures 1 review – weak lyrics from a witless edgelord” – The Guardian

“West is at best musically compelling, and at worst—more often than not—a wanton edgelord, intent on saying some of the foulest things imaginable.” – Variety

Much of the criticism has to do with the lyrics on Vultures 1, with many critics claiming that they contain the same kind of hateful rhetoric Ye has been spouting for years, delivered with seemingly no concern for the consequences. Hence the verdict: Ye is an edgelord.

But what is an “edgelord” anyway, and should we think of Ye as one?

According to the Merriam-Webster dictionary, an “edgelord” is “someone who makes wildly dark and exaggerated statements (as on an internet forum) with the intent of shocking others.” It is very much a concept born from the internet, a place where anonymity allows people to act in ways they otherwise might not. You have no doubt come across people who qualify as edgelords, at least to some degree: they can be found in abundance in some of the internet’s seedier corners, such as 4chan, as well as in its more mainstream areas, like comment sections, Reddit, or Twitter/X.

While Ye has made his share of exaggerated and shocking statements online, there’s no reason to restrict evaluations of edgelordiness to his social media presence. For example, on his latest #1 track from Vultures 1, Ye compares himself to a who’s who of problematic celebrities, including Bill Cosby, R. Kelly, and P. Diddy, and adds an extra shout-out to Chris Brown for good measure. Does this qualify as edgelord behavior? According to the dictionary definition, Ye does seem to be making “wildly dark and exaggerated statements” and, given the respective histories of the men he is comparing himself to, he does seem to be intending to shock his listeners.

But I think there is a missing component to the dictionary definition, which is that an edgelord’s speech need not be an expression of beliefs they actually hold. The primary intent of the edgelord is provocation, not the expression of something they legitimately think is true. While the expression of belief invites engagement and the sharing of reasons, edgelords merely want to cause a reaction, be it outrage, disgust, or even legitimate attempts to show why their views are wrong; any such reaction achieves their goal.

Since edgelords make statements solely with the intent to cause a reaction, their actions are not as bad as those of people who believe what they are saying. Put another way: an expression of a hateful statement is worse if it is an expression of a sincere belief than if it is made merely to provoke. This is not to relieve the edgelord of all moral responsibility: the disingenuous expression of intentionally extreme, exaggerated, hateful statements is clearly reprehensible. But whereas the appropriate response to the expression of a sincere and hateful belief is to challenge it, the appropriate response to the edgelord is to ignore them. Hence, I argue, we should not hold the edgelord as responsible for their actions as we would someone expressing a sincere belief with the same content.

Consider, for example, two tweets: both contain the same words, and both are generally considered to be hateful and extreme. In one case, the tweet is made by an edgelord, whose intent is to rile up others and see how much attention they can get. In the other, the tweet is made by a person who legitimately believes what they are tweeting, and attempts to engage others in discussion, however wrongheaded. Both are doing something wrong, but arguably the person who is sincere is worse. For instance, once the edgelord’s intentions are discovered, they are dismissed: they are rightly deemed immature, arrogant, and not worth anyone’s time. But the person who is expressing themselves sincerely is reprehensible, a holder of truly awful beliefs.

The concept of an edgelord, of course, does not have solid boundaries. For example, there are cases of edgelord behavior in which one will make an intentionally provocative statement that is an exaggerated version of a less-extreme belief they sincerely hold. In these cases, it can be difficult to determine how we should react. The extent to which one is morally responsible for their edgelord behavior is also dependent upon other variables: for example, the immature teenager who understands which words are provocative but is unable to appreciate the harms they cause is, perhaps, less responsible than the fully-grown adult who should really know better (although again, the teenager still deserves criticism).

Regardless of the murkiness of the concept, we can now return to our initial question: is Ye an edgelord? I think there is reason to think he isn’t. This is because, given his behavior, Ye’s statements, lyrics, and other actions do seem to be expressions of beliefs he actually holds. This is true both of his expressions of antisemitism and of his comparisons to controversial figures: he does legitimately see himself among these figures, either akin or superior to them in the sense of being “uncancellable.”

Ye is clearly trying to be shocking, and if that were the extent of his actions, one might well have reason enough to condemn his music as that of an edgelord. But Ye qualifies as something worse: someone who intends to shock, but also means a lot of what he says.

YouTube and the Filter Bubble


If you were to get a hold of my laptop and go to YouTube, you’d see a grid of videos that are “recommended” to me, based on videos I’ve watched in the past and channels I’ve subscribed to. To me, my recommendations are not surprising: clips from The Late Show, a few music videos, and a bunch of videos about chess (don’t judge me). There are also some that are less expected – one about lockpicking, for example, and something called “Bruce Lee Lightsabers Scene Recreation (Duel of Fates edit).” All of this is pretty par for the course: YouTube will generally populate your personalized homepage with videos from channels you’re familiar with, and ones that it thinks you might like. In some cases this leads you down interesting paths to videos you’d like to see more of (that lockpicking one turned out to be pretty interesting), while in other cases they’re total duds (I just cannot suspend my disbelief when it comes to lightsaber nunchucks).

A concern with YouTube making these recommendations, however, is that one will get stuck seeing the same kind of content over and over again. While this might not be a worry when it comes to videos that are just for entertainment, it can be a much bigger problem when it comes to videos that present false or misleading information, or promote generally hateful agendas. This phenomenon – where one tends to be presented with similar kinds of information and sources based on one’s search history and browsing habits – is well documented, and results in what some have called a “filter bubble.” The worry is that once you watch videos of a particular type, you risk getting stuck in a bubble of many similar videos, making it more and more difficult to come across content from other, perhaps more reputable, sources.

YouTube is well aware that all sorts of awful content exist on its platform, and it has been attempting to combat it, although with mixed results. In a statement released in early June, YouTube said that it was focused on removing a variety of types of hateful content, specifically by “prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.” It provided some examples of the content being targeted, including “videos that promote or glorify Nazi ideology” and “content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.” YouTube has not, however, been terribly successful in these efforts thus far: as Gizmodo reports, there are plenty of channels on YouTube making videos about conspiracy theories, white nationalism, and anti-LGBTQ hate groups that have not yet been removed from the site. So worries about filter bubbles full of hateful and misleading content persist.

There is another reason to be worried about the potential filter bubbles created by YouTube: if I am not in your bubble, then I will not know what kind of information you’re being exposed to. This is a problem for at least two reasons. First, given my own YouTube history, it is extremely unlikely that a video about the “dangers” of vaccines, or a video glorifying white supremacy, will show up in my recommendations. Those parts of YouTube are essentially invisible to me, meaning that it is difficult to tell how prevalent and popular such videos are. Second, since I don’t know what’s being recommended to you, you may be exposed to a whole bunch of garbage that I don’t know exists, which makes it difficult for us to have a productive conversation if I don’t know, say, what you take to be a reputable source of information, or what information that source conveys.

There is, however, a way to see what’s going on outside of your bubble: simply create a new Google account, sign into YouTube, and its algorithms will quickly build you a new profile of recommended videos. I ran this experiment, and within minutes had created a profile that would be very out of character for me, but would fit someone with very different political views. For example, the top videos recommended to me on my fake account are the following:

FACTS NOT FEELINGS: Shapiro demolishes & humiliates little socialist comrade

CEO creates ‘Snowflake Test’ to weed out job applicants

Tucker: Not everyone in 2020 Democratic field is a lunatic

What Young Men NEED To Understand About Relationships – Jordan Peterson

This is not to say that I want to be recommended videos that push a misleading or hateful agenda, nor would I recommend that anyone actively go and seek them out. But one of the problems in creating filter bubbles is that if I’m not in your bubble then I’m not going to know what’s going on in there. YouTube, then, not only makes it much easier for someone to get caught up in a bubble of terrible recommended content, but also makes it more difficult to combat it.

Of course, this is also not to say that every alternative viewpoint has to be taken seriously: while it may be worth knowing what kinds of reasons antivaxxers are providing for their views, for example, I am under no obligation to take those views seriously. But with more and more people getting their news and political commentary from places like YouTube, next time you’re clicking through your recommendations it might be a good idea to consider what is not being shown to you. While creating a YouTube alter-ego is optional, it is worth keeping in mind that communicating successfully and having productive discussions with each other requires that we at least know where the other person is coming from, and this might require making more active efforts to get out of one’s filter bubble.

On Tumblr, Adult Content is Banned – For Good?


In early December, blogging platform Tumblr announced that it would be banning adult content from its site starting mid-month. In a post explaining the decision, CEO Jeff D’Onofrio stated that removing such content would better allow Tumblr to be a “safe place for creative expression [and] self-discovery” and would result in “a place where more people feel comfortable expressing themselves.” D’Onofrio further explained that while he recognized that many users sought out Tumblr as a source of adult content, “[t]here are no shortage of sites on the internet” that users could turn to. The content to be removed includes “images, videos, or GIFs that show real-life human genitals or female-presenting nipples—this includes content that is so photorealistic that it could be mistaken for featuring real-life humans (nice try, though)”, although “certain types of artistic, educational, newsworthy, or political content featuring nudity” will be allowed to stay.

While creating safe places for expression and self-discovery is a laudable goal, if a bit vague, recent problems with the platform perhaps better explain Tumblr’s decision. Notably, the Tumblr app was recently removed from the iOS App Store, ostensibly because Tumblr was not doing enough to ensure that illegal content – specifically in the form of child pornography – was being filtered out. Whether Tumblr’s decision to ban all adult content was based on genuine moral concern or simply concern for the bottom line (it would no doubt be a major blow to Tumblr to be removed from the iOS store permanently), many online have speculated that the ban will spell the death of the platform. As Motherboard discovered, a significant percentage of Tumblr blogs were based around providing adult content, with an even more significant percentage of users seeking out that content.

It is clear that Tumblr has a moral obligation to do as much as they can to prevent harmful material like child pornography from appearing on their platform. It also seems clear that Tumblr has, up until this point, failed to meet that obligation. The move to ban all adult content, then, might seem to be the straightforwardly right way to make amends for their past transgressions, as well as to prevent further such harms in the future. And indeed, it may very well be the case that, overall, Tumblr’s ban on adult content will prevent many further harms (especially given many of the problems inherent to the type of pornography that tends to be propagated online).

Nevertheless, there has been a good amount of negative response to Tumblr’s decision as well. In general, people appear to be expressing three main types of worries. The first is that Tumblr’s new filters are bad at distinguishing the kind of content they want to ban from content that has been deemed inoffensive, and thus that an attempted universal ban on adult content will potentially stifle legitimate creative expression; second, that Tumblr is being hypocritical in banning adult content while not, for example, taking steps to address other problems on its platform, especially those involving hate speech; and third, that the ban on adult content disproportionately affects users from marginalized groups.

With regard to the first worry, many Tumblr users have already noticed that the new filters are not great at separating inappropriate from appropriate content. As The Guardian reports, Tumblr has already flagged as inappropriate images of fully-clothed historical figures, artists, and ballet dancers, as well as a painting of Jesus in a loincloth. If Tumblr is meant to be a site that encourages self-expression, stifling that expression as a result of bad programming does not seem to be the best way to achieve this goal.

The second concern pertains to Tumblr’s policies about what kind of content is generally deemed acceptable on the platform. As The Washington Post reports, while searches involving sexually explicit terms on Tumblr gave no results, “racist and white supremacist content, including Nazi propaganda, was easily surfaced”, despite the fact that such posts violate Tumblr’s policies surrounding hate speech. Just as Tumblr has an obligation to attempt to keep illegal pornographic content off its platform, it has a similar obligation with respect to preventing users from promoting hate speech. It seems hypocritical, then, to focus only on one type of content and not another, especially given the widespread harms that can result from the dissemination of the type of racist and white supremacist content that is easily searchable on the platform.

The final worry pertains to the way that Tumblr’s new ban, combined with inefficient filtering technology, could impact members of marginalized communities. Writing for BBC News, David Lee argues:

“Unlike typical pornography sites, which overwhelmingly cater to men, and serve an often narrow definition of what is attractive, Tumblr has been a home for something else – content tailored at vibrant LGBT communities, or for those with tastes you might not necessarily share with all your friends.”

In a post on his blog, actor Wil Wheaton concurs:

“The reality is that for a lot of the LGTBQ+ community, particularly younger members still discovering themselves and members in extremely homophobic environments where most media sites were banned (but Tumblr wasn’t even considered important enough to be), this was a bastion of information and self-expression.”

Wheaton supported his view by performing an experiment, in which he posted a series of images of “beautiful men kissing” on Tumblr. The post was flagged as inappropriate, despite the images not being pornographic or in any clear violation of Tumblr’s newly stated policies. Wheaton laments that “it’s ludicrous and insulting that – especially in 2018 – this is flagged, either by some sort of badly-designed algorithm, or by shitty homophobic people”. Finally, Kaila Hale-Stern, writing at The Mary Sue, argues that

“What D’Onofrio and his corporate overlords at Verizon’s Oath [who own Tumblr] don’t understand—or don’t care about—is that this sort of adult content is frequently generated by women, marginalized people, and all sorts of creatives struggling in our vicious ‘gig economy.’ They’re going to be hurt the most by the ban.”

Aside from stifling self-expression, then, Hale-Stern points to a more tangible harm in the form of creators of adult content losing their livelihoods as a result of Tumblr’s new ban.

What initially appeared to be a straightforward fulfillment of Tumblr’s obligations to prevent harmful content from appearing on their platform is, on closer inspection, more complicated. As many have argued, there is perhaps room for a middle ground: instead of issuing a universal ban on all adult content, Tumblr could have implemented more effective algorithms to detect and filter out offensive content without removing the means of self-expression and the livelihoods of many users (better filtering could also, hopefully, address the concerns about hate speech as well).

Should DePauw be Concerned about First-Year Students of Color?


DePauw’s community of students of color is remarkably diverse, with its members hailing from a myriad of backgrounds. That diversity, however, can call for major adaptation when coming to DePauw, a predominantly white institution (PWI). The process of adaptation can be made even more difficult if a student of color’s identity is tested through negative interactions with their white counterparts, as well as by negative outside forces that push onto DePauw’s campus.


Fighting Obscenity with Automation

When it comes to policing offensive content online, Facebook’s moderators often have their work cut out for them. With billions of users, filtering out offensive content ranging from pornographic images to videos promoting graphic violence and extremism is a never-ending task. For the most part, this job falls on teams of staffers who spend most of their days sifting through offensive content manually. The decisions of these staffers – which posts get deleted, which posts stay up – would be controversial in any case. Yet the politically charged context of content moderation in the digital age has left some users feeling censored by Facebook’s policies, sparking a debate over automated alternatives.


In Defense of the Unpalatable: Protection of Hate Speech

On Wednesday, a group of fundamentalist Christians picketed the DePauw University campus, holding signs decrying the sins of “masturbators”, “feminists”, “pot-heads”, and “baby-killers”, while shouting at women passing by to “stop being whores” and to accept that “your sins are your fault, not your boyfriend’s.”
