
Disturbing Videos on YouTube Kids: Rethinking the Consequences of Automated Content Creation

By Andrew Bobker
8 Dec 2017

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


The rise of automation and artificial intelligence (AI) in everyday life has been a defining feature of this decade. These technologies have grown surprisingly powerful in a short span of time. Computers now not only give directions but also drive cars by themselves; algorithms predict not only the weather but also what we will buy, watch, and read next. Voice-activated virtual assistants like Apple’s Siri and Amazon Alexa carry out countless daily tasks: turning on lights, playing music, making phone calls, and searching the internet for information.

Of particular interest in recent years has been the automation of content creation. Creative workers have long been thought immune to the sort of replacement by machines that has supplanted so many factory and manufacturing jobs, but developments in the last decade have changed that thinking. Computers have already proven capable of writing sports coverage, with other types of news likely to follow; other programs compose original music and convincingly imitate the styles of famous composers.

While these AI advancements are bemoaned by creative professionals concerned about their continued employment — a valid concern, to be sure — other uses for AI hint at a more widespread kind of problem. Social media sites like Twitter and Facebook — ostensibly forums for human connection — are increasingly populated by “bots”: user accounts managed by artificial intelligence. Some are simple, searching their sites for certain keywords and delivering pre-written responses; others read and attempt to learn from the material available on each site. In at least one well-publicized incident, Microsoft’s Tay chatbot, malicious users exploited the bot’s learning ability to dramatically alter its mannerisms within a day of its launch. This and other incidents have rekindled age-old fears about whether a robot, completely impressionable and reprogrammable, can have a sense of morality.
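
To make the simpler kind of bot concrete, here is a minimal sketch of how a keyword-triggered account might operate. The trigger words, replies, and sample posts are invented for illustration; a real bot would read from and post to a platform’s API rather than a hard-coded list.

```python
# A minimal sketch of a keyword-triggered reply bot: scan each post for
# trigger words and respond with canned text. All data here is invented.
from typing import Optional

CANNED_REPLIES = {
    "giveaway": "Count me in!",
    "breaking": "Sharing this right away.",
    "sale": "What a deal, everyone should check this out.",
}

def choose_reply(post_text: str) -> Optional[str]:
    """Return a pre-written response if the post mentions a trigger word."""
    lowered = post_text.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in lowered:
            return reply
    return None  # stay silent when nothing matches

if __name__ == "__main__":
    for post in ["BREAKING: storm hits coast", "Lovely weather today"]:
        print(post, "->", choose_reply(post))
```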

But there’s another question worth considering in an age when an ever-greater portion of our interactions is with computers instead of humans: will humans be buried by the sheer volume of content being created by computers? Early in November, an essay by writer James Bridle on Medium exposed a disturbing trend on YouTube. On a side of YouTube not often encountered by adults, there is a vast trove of content produced specifically for young children. These videos are both prolific and highly formulaic. Some of the common tropes include nursery rhymes, videos teaching colors and numbers, and compilations of popular children’s shows. As Bridle points out, the formulaic nature of these videos makes them especially susceptible to automated generation. The evidence of this automated content generation is somewhat circumstantial; Bridle points to “stock animations, audio tracks, and lists of keywords being assembled in their thousands to produce an endless stream of videos.”
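
Bridle’s description suggests a simple combinatorial process. The following toy sketch, whose asset lists are invented examples rather than any producer’s actual pipeline, shows how a handful of stock elements can be assembled into over a thousand distinct video titles:

```python
# A toy illustration of the assembly process Bridle describes: short lists
# of stock elements, combined exhaustively, yield thousands of formulaic
# video titles. The specific lists below are invented for illustration.
from itertools import product

characters = ["Peppa Pig", "Spiderman", "Elsa"]
themes = ["Learn Colors", "Finger Family", "Nursery Rhymes"]
formats = ["Compilation", "Songs", "For Kids"]
episodes = range(1, 51)  # numbered re-uploads multiply the output again

titles = [
    f"{character} {theme} {fmt} #{n}"
    for character, theme, fmt, n in product(characters, themes, formats, episodes)
]

print(len(titles))  # 3 * 3 * 3 * 50 = 1,350 titles from four short lists
print(titles[0])    # "Peppa Pig Learn Colors Compilation #1"
```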

One byproduct of this method of video production is that some of the videos take on a mildly disturbing quality. There is nothing overtly offensive or inappropriate about them, but there is a clear lack of human creative oversight, and the result strikes an adult viewer as cold and senseless in a way the generating algorithm cannot discern. While exposing children to strange, robotically generated videos is not by itself a great moral evil, there is little stopping these videos from becoming much darker and more disturbing. At the same time, they provide cover for genuinely malicious content made using the same formulas. Such videos exploit YouTube’s search and recommendation algorithms to intentionally expose children to violence, profanity, and sexual themes, often featuring well-known children’s characters like Peppa Pig. Clearly, this kind of content presents a much more direct problem.

Should YouTube take steps to prevent children from seeing such videos? The company has already indicated its intent to improve the situation, but the problem may require more than tweaks to YouTube’s programming. With 400 hours of content uploaded every minute, hiring humans to personally watch every video is logistically impossible; AI is therefore the only viable means of vetting videos at scale. Yet it seems unlikely that an algorithm will be able to consistently distinguish normal from disturbing content in the near future. YouTube’s algorithm-based response so far has not inspired confidence: content creators have complained that overzealous automated flagging demonetized videos that were later shown to contain nothing objectionable. Perhaps it is better to err on the side of caution, but YouTube’s system is clearly a long way from perfect.
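
To see why automated vetting over-flags, consider a deliberately naive keyword filter. This is a toy assumption for illustration, not YouTube’s actual system, but it shows how a crude blocklist cannot tell harmless uses of a word from harmful ones:

```python
# A deliberately naive keyword filter that over-flags: it cannot tell
# violent "shooting" from basketball "shooting". Invented for illustration.
FLAGGED_TERMS = {"shooting", "knife", "blood", "attack"}

def is_suspicious(title: str) -> bool:
    """Flag any title that shares a word with the blocklist."""
    words = set(title.lower().split())
    return bool(words & FLAGGED_TERMS)

for title in [
    "Peppa Pig Visits the Dentist",
    "Basketball Drills: Shooting Practice for Kids",  # harmless, still flagged
    "Knife Safety in the Kitchen",                    # harmless, still flagged
]:
    print(f"{title!r} -> {'flagged' if is_suspicious(title) else 'ok'}")
```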

Even if programmers could solve this problem, there is the potential for an endless arms race between ever more sophisticated algorithms generating content and those vetting it. Meanwhile, the comment sections of these videos, along with social media and news outlets, are increasingly operated and populated by other AI. The result could be an internet in which users cannot distinguish humans from robots (one program has already succeeded in breaking Google’s reCAPTCHA, the most common test used to prove humanity online), and where the total sum of information is orders of magnitude greater than any human or determined group of humans could ever sort through, let alone manage and control.

Is it time for scientists and tech companies to reconsider the ways in which they use automation and AI? There doesn’t seem to be a way for YouTube to stem the flood of content short of shutting down completely, and even that wouldn’t solve the wider problem. Attempting to halt the progress of technology has historically proven a fool’s errand: if 100 companies swear off automation, the one company that does not will simply outpace and consume the rest. Parents can prevent their children from accessing YouTube, but that won’t eliminate the framework that created the problem in the first place. The issue requires a more fundamental societal response: as a society, we need to be more aware of the circumstances behind our daily interactions with AI, and to carefully consider the long-term consequences before we turn over too much of our lives to systems that lie beyond our control.

Andrew Bobker is a senior staff writer at DePauw University. He began writing for the Prindle Post in the fall of 2017. He is originally from the state of Maine.