The Ethical and Epistemic Consequences of Hiding YouTube Dislikes


YouTube recently announced a major change to their platform: while the “like” and “dislike” buttons would remain, viewers would only be able to see how many likes a video had, with the total number of dislikes being viewable only by the creator. The motivation for the change is explained in a video released by YouTube:

Apparently, groups of viewers are targeting a video’s dislike button to drive up the count. Turning it into something like a game with a visible scoreboard. And it’s usually just because they don’t like the creator or what they stand for. That’s a big problem when half of YouTube’s mission is to give everyone a voice.

YouTube thus seems to be trying to protect its creators from certain kinds of harms: not only can it be demoralizing to see that a lot of people have disliked your video, but it can also be particularly distressing if those dislikes have resulted from targeted discrimination.

Some, however, have questioned YouTube’s motives. One potential motive, addressed in the video, is that YouTube is removing the public dislike count in response to some of their own videos being overwhelmingly disliked (namely, the “YouTube Rewind” videos and, ironically, the video announcing the change itself). Others have proposed that the move aims to increase viewership: after all, videos with many more dislikes than likes are probably going to be viewed less often, which means fewer clicks on the platform. Some creators have even posited that the move was made predominantly to protect large corporations, as opposed to small creators: many of the most disliked videos belong to corporations, and since YouTube has an interest in maintaining a good relationship with them, they would also have an interest in restricting people’s ability to see how disliked their content is.

Let’s say, however, that YouTube’s motivations are pure, and that they really are primarily intending to prevent harms by removing the public dislike count on videos. A second criticism concerns the loss of informational value: the number of dislikes on a video can give the viewer some indication of whether the information it contains is accurate. The dislike count is, of course, far from a perfect indicator of quality, because people can dislike a video for reasons that have nothing to do with the information it contains: again, in instances in which there have been targeted efforts to dislike a video, dislikes won’t tell you whether it’s really a good video or not. On the other hand, there do seem to be many cases in which the dislike count can let you know that you should stay away: videos that are clickbait, misleading, or generally poor quality can often be quickly and easily identified by an unfavorable ratio of likes to dislikes.

A worry, then, is that without this information one may be more likely not only to waste one’s time watching low-quality or inaccurate videos, but also to be exposed to misinformation. For instance, consider a class of clickbait videos prevalent on YouTube, in which people appear to make impressive-looking crafts or food through a series of improbable steps. Seeing that a video of this type has received a lot of dislikes helps the viewer contextualize it as something that is perhaps just for entertainment value and should not be taken seriously.

Should YouTube continue to hide dislike counts? In addressing this question, we are perhaps facing a conflict between different kinds of values: on the one hand, there is the moral value of protecting small or marginalized creators from targeted dislike campaigns; on the other, there is the epistemic disvalue of removing potentially useful information that can help viewers avoid believing misleading content (as well as the practical value of saving people the time and effort of watching unhelpful videos). Balancing different values is difficult: in the case of the removal of public dislike counts, the question becomes whether the moral benefit is strong enough to outweigh the epistemic detriment.

One might think that the epistemic detriments are not, in fact, too significant. The video released by YouTube addresses this issue, if only very briefly: referring to an experiment conducted earlier this year in which public dislike counts were briefly removed from the platform, the spokesperson states that they had considered how dislikes give viewers “a sense of a video’s worth.” He then states that,

[W]hen the teams looked at the data across millions of viewers and videos in the experiment they didn’t see a noticeable difference in viewership regardless of whether they could see the dislike count or not. In other words, it didn’t really matter if a video had a lot of dislikes or not, they still watched.

At the end of the video, they also stated, “Honestly, I think you’re gonna get used to it pretty quickly and keep in mind other platforms don’t even have a Dislike button.”

These responses, however, are non sequiturs: whether viewership increased or decreased says nothing about whether people are able to judge a video’s worth without a public dislike count. Indeed, if anything, the finding reinforces the concern that people will be more likely to consume content that is misleading or of low informational value. That other platforms do not have dislike buttons is also irrelevant: it may very well just mean that it is difficult to evaluate the quality of information present on those platforms. Furthermore, users on platforms such as Twitter have found other ways to signal that a given piece of information is of low value, for example by ensuring that a tweet has a high ratio of responses to likes, something that seems much less likely to be effective on a platform like YouTube.

Even if YouTube does, in fact, have the primary motivation of protecting some of its creators from certain kinds of harms, one might wonder whether there are better ways of addressing the issue, given the potential epistemic detriments.

The Dangers and Ethics of Social Media Censorship

"Alex Jones" by Sean P. Anderson licensed under CC BY 2.0 (via Flickr).

Alex Jones was removed from YouTube and other major social networks for repeatedly violating the site’s community guidelines. Among other things, YouTube’s community guidelines prohibit nudity or sexual content, harmful or dangerous content, violent or graphic content, and, most relevant to this situation, hateful content and harassment. While the site describes its products as “platforms for free expression,” it also states in the same policy section that it does not permit hate speech. How both can be true simultaneously is not entirely clear to me.


Disturbing Videos on YouTube Kids: Rethinking the Consequences of Automated Content Creation

"Youtube logo" by Andrew Perry liscensed under CC BY 2.0 (via Flickr)



The rise of automation and artificial intelligence (AI) in everyday life has been a defining feature of this decade. These technologies have gotten surprisingly powerful in a short span of time. Computers now not only give directions, but also drive cars by themselves; algorithms predict not only the weather, but the immediate future, too. Voice-activated virtual assistants like Apple’s Siri and Amazon Alexa can carry out countless daily tasks like turning lights on, playing music, making phone calls, and searching the internet for information.

Of particular interest in recent years has been the automation of content creation. Creative workers have long been thought immune to the sort of replacement by machines that has supplanted so many factory and manufacturing jobs, but developments in the last decade have changed that thinking. Computers have already been shown to be capable of covering sports, with other types of news likely to follow; other programs allow computers to compose original music and convincingly imitate the styles of famous composers.

While these AI advancements are bemoaned by creative professionals concerned about their continued employment — a valid concern, to be sure — other uses for AI hint at a more widespread kind of problem. Social media sites like Twitter and Facebook — ostensibly forums for human connection — are increasingly populated by “bots”: user accounts managed via artificial intelligence. Some are simple, searching their sites for certain keywords and delivering pre-written responses, while others read and attempt to learn from the material available on each respective site. In at least one well-publicized incident, malicious human users were able to take advantage of the learning ability of a bot to dramatically alter its mannerisms. This and other incidents have rekindled age-old fears about whether a robot, completely impressionable and reprogrammable, can have a sense of morality.

But there’s another question worth considering in an age when an ever-greater portion of our interactions is with computers instead of humans: will humans be buried by the sheer volume of content being created by computers? Early in November, an essay by writer James Bridle on Medium exposed a disturbing trend on YouTube. On a side of YouTube not often encountered by adults, there is a vast trove of content produced specifically for young children. These videos are both prolific and highly formulaic. Some of the common tropes include nursery rhymes, videos teaching colors and numbers, and compilations of popular children’s shows. As Bridle points out, the formulaic nature of these videos makes them especially susceptible to automated generation. The evidence of this automated content generation is somewhat circumstantial; Bridle points to “stock animations, audio tracks, and lists of keywords being assembled in their thousands to produce an endless stream of videos.”
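
To see why this kind of template-driven production can yield an “endless stream of videos,” here is a minimal, hypothetical sketch of the combinatorial logic involved; the keyword lists and title template below are invented for illustration and are not taken from Bridle’s essay or any actual channel.

    from itertools import product

    # Hypothetical keyword lists of the kind assembled "in their thousands";
    # these examples are invented for illustration only.
    characters = ["Peppa Pig", "Superhero", "Dinosaur", "Baby Panda"]
    activities = ["Learn Colors", "Finger Family", "Nursery Rhymes", "Counting"]
    hooks = ["for Kids", "Compilation", "New Episode", "Fun Song"]

    # Every combination becomes a separate video title (and, paired with stock
    # animation and audio, a separate video) with no human judgment in the loop.
    titles = [f"{c} {a} {h}" for c, a, h in product(characters, activities, hooks)]

    print(len(titles))   # 64 titles from three short lists of four items each
    print(titles[0])     # "Peppa Pig Learn Colors for Kids"

Three lists of four items already yield 64 videos; lists of a few dozen keywords each would yield tens of thousands, which is roughly the scale Bridle describes.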

One byproduct of this method of video production is that some of the videos take on a mildly disturbing quality. There is nothing overtly offensive or inappropriate about them, but there is a clear lack of human creative oversight, and the result is, to an adult, cold and senseless. The algorithm that produces these videos cannot discern this, but it is immediately apparent to a human viewer. While exposing children to strange, robotically generated videos is not by itself a great moral evil, there is little stopping such videos from becoming much darker and more disturbing. At the same time, they provide cover for genuinely malicious content made using the same formulas. These malicious videos take advantage of features in YouTube’s video search and recommendation algorithms to intentionally expose children to violence, profanity, and sexual themes, often featuring well-known children’s characters like Peppa Pig. Clearly, this kind of content presents a much more direct problem.

Should YouTube take steps to prevent children from seeing such videos? The company has already indicated its intent to improve the situation, but the problem might require more than just tweaks to YouTube’s programming. With 400 hours of content published every minute, hiring humans to personally watch every video is logistically impossible, so AI provides the only realistic means of vetting videos. Yet it doesn’t seem likely that an algorithm will be able to consistently differentiate between normal and disturbing content in the near future. YouTube’s algorithm-based response so far has not inspired confidence: content creators have complained of unwarranted demonetization by overzealous programming of videos that were later shown to contain no objectionable content. Perhaps it is better to play it safe, but it is clear that YouTube’s system is a long way from perfect at this time.
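
A rough back-of-envelope calculation makes the scale problem concrete. The 400-hours-per-minute figure comes from the paragraph above; the eight-hour reviewer shift is an assumption added purely for illustration.

    # Back-of-envelope only: the 400 hours/minute figure is cited above;
    # the 8-hour reviewer shift is an illustrative assumption.
    UPLOAD_HOURS_PER_MINUTE = 400
    SHIFT_HOURS = 8

    hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24   # 576,000 hours/day
    reviewers_needed = hours_uploaded_per_day / SHIFT_HOURS      # 72,000 shifts/day

    print(f"{hours_uploaded_per_day:,} hours of new video per day")
    print(f"~{reviewers_needed:,.0f} full-time reviewers needed just to keep pace")

Even ignoring breaks, re-watching, and judgment calls, that is a workforce on the order of seventy thousand people watching video every single day, which is why automated vetting is the only realistic option.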

Even if programmers could solve this problem, there is the potential for an endless arms race of ever more sophisticated algorithms generating and vetting content. Meanwhile, the comment sections of these videos, along with social media and news outlets, are increasingly operated and populated by other AI. The possible result is an internet in which users cannot distinguish humans from robots (one program has already succeeded in breaking Google’s reCAPTCHA, the most common test used to prove humanity on the internet), and in which the total sum of information is orders of magnitude greater than what any human or determined group of humans could ever understand or sort through, let alone manage and control.

Is it time for scientists and tech companies to reconsider the ways in which they use automation and AI? There doesn’t seem to be a way for YouTube to stem the flood of content, short of shutting down completely, which doesn’t really solve the wider problems. Attempting to halt the progress of technology has historically proven a fool’s errand — if 100 companies swear off the use of automation, the one company that does not will simply outpace and consume the rest. Parents can prevent their children from accessing YouTube, but that won’t completely eliminate the framework that created the problem in the first place. The issue requires a more fundamental response: as a society, we need to be more aware of the circumstances behind our daily interactions with AI, and carefully consider the long-term consequences before we turn over too much of our lives to systems that lie beyond our control.