
The Real Threat of AI


On Saturday, June 11th, Blake Lemoine, an employee at Google, was suspended for violating his confidentiality agreement with the company. He violated this agreement by publishing a transcript of his conversation with LaMDA, a company chatbot. He wanted this transcript made public because he believes it demonstrates that LaMDA is ‘sentient’ – by which Lemoine means that LaMDA “has feelings, emotions and subjective experiences.” Additionally, Lemoine states that LaMDA uses language “productively, creatively and dynamically.”

The notion of AI performing creative tasks is significant.

The trope in fiction is that AI and other machinery will take over repetitive daily tasks, freeing up our time for other pursuits.

And we’ve already begun to move towards this reality; we have robots that can clean for us, cars that are learning to drive themselves, and even household robots that serve as companions and personal assistants. The possibility of creative AI represents a significant advance from this.

Nonetheless, we are seeing creative AI emerge. Generative Pre-trained Transformer 3, or GPT-3, a program from OpenAI, is capable of writing prose: GPT-3 can produce an article in response to a prompt, summarize a body of text, and, if provided with an introduction, complete the essay in the same style as the first paragraph. Its creators claim it is difficult to distinguish between human-written text and GPT-3’s creations.
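To make that prompt-and-complete workflow concrete, here is a minimal sketch assuming OpenAI's legacy (pre-1.0) Python client; the model name, prompt, and parameters are illustrative placeholders, not anything reported in this article.

```python
# Sketch of GPT-3-style essay completion via OpenAI's legacy Python client.
# Assumes `pip install "openai<1.0"` and a valid API key; all values below
# are hypothetical.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

intro = (
    "The printing press did not merely copy books faster; "
    "it changed who could afford to read them."
)

# Hand the model an introduction and ask it to continue in the same style.
response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3-era model
    prompt=intro + "\n\nContinue the essay in the same style:\n",
    max_tokens=200,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```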

AI can also generate images – programs like DALL-E 2 and Imagen produce images in response to a description, images that may be photo-realistic or in particular artistic styles. The speed at which these programs create, especially when compared to humans, is noteworthy; DALL-E mini generated nine different images of an avocado in the style of impressionist paintings for me in about 90 seconds.
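For comparison, the same description-in, images-out loop looks roughly like this through OpenAI's legacy Image endpoint (DALL-E mini itself is a separate open-source project); the key, count, and size below are assumptions for illustration.

```python
# Sketch of text-to-image generation with OpenAI's legacy (pre-1.0) Image API.
# Stands in for the general workflow; DALL-E mini is a different, open-source tool.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Image.create(
    prompt="an avocado in the style of an impressionist painting",
    n=9,            # nine variations, mirroring the example above
    size="256x256",
)

for item in response["data"]:
    print(item["url"])  # each URL points at one generated image
```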

This technology is worrisome in many respects. Bad actors could certainly use these tools to spread false information, deceiving audiences and deepening divisions over what is true and false. Fears of AI and machine uprisings have been a fixture of pop culture for at least a century.

However, let us set those concerns aside.

Imagine a world where AI and other emergent technologies are incredibly powerful and safe, will never threaten humanity, and are utilized only by morally scrupulous individuals. There is still something quite unsettling to be found when we consider creative AI.

To demonstrate this, consider the following thought experiment. Call it Underwhelming Utopia.

Imagine a far, far distant future where technology has reached the heights imagined in sci-fi. We have machines like the replicators in Star Trek, capable of condensing energy into any material object, ending scarcity. In this future, humans have fully explored the universe, encountered all other forms of life, and achieved universal peace among intelligent beings. Medical technology has advanced to the point of curing all diseases and vastly increasing lifespans. This is partly due to a large army of robots, which are able to detect when a living being needs aid, and then provide that aid at a moment’s notice. Further, a unified theory of the sciences has been developed – we fully understand how the fundamental particles of the universe operate and can show how this relates to functioning on each successive level of organization.

In addition to these developments, the creative arts have also changed significantly. Thanks both to the amount of content created by sophisticated creative AI and to a rigorous archival system for historical works, people have been exposed to a massive library of art and literature. As a result, any new creation seems merely derivative of older works. Anything that would have been a novel development was previously created by an AI, given machines’ ability to create content much more rapidly than humans.

Underwhelming Utopia presents us with a very conflicted situation. In some sense, it is ideal. All material needs are met, and we have reached a state of minimal conflict and suffering. Indeed, it seems to be, at least in one respect, the kind of world we are trying to build. On the other hand, something about it seems incredibly undesirable.

Although the world at present is deeply flawed, life here seems to have something that Underwhelming Utopia lacks. But what?

In Anarchy, State, and Utopia, Robert Nozick presents what is perhaps the most famous thought experiment of the 20th century. He asks his readers to imagine that neuroscientists can connect you to a machine that produces experiences – the Experience Machine. In particular, it provides those connected to it with a stream of the most pleasurable experiences possible. However, once you connect to the machine, you cannot return to reality. While connected, the experiences you have will be indiscernible from reality, the only other beings you will encounter are simulations, and you will have no memory of connecting to the machine.

Most people say that they would not connect. As a result, many believe that the life offered to us by the Experience Machine must be lacking in some way. Many philosophers use this as a starting point to defend what they call an Objective List theory of well-being. Objective List theorists believe that certain things (e.g., love, friendship, knowledge, achievement) are objectively good for you and that other things are objectively bad. One is made better off by attaining the objectively good things, and worse off to the extent that one fails to attain them or that the bad things occur. Since life on the Experience Machine contains only pleasurable experiences, it lacks those objective goods that make us better off.

Among the goods that Objective List theorists point to is a sense of purpose. In order to live well, one must feel that one’s actions matter and are worth doing. And it is this that Underwhelming Utopia lacks.

It seems that everything worth doing has already been done, and every need that arises will be swiftly met without us having to lift a finger.

This is the world that we inch closer to as we empower machines to succeed at an ever greater number of tasks. The more we empower programs to do, the less there is left for us to do.

The worry here is not a concern about job loss, but rather, one about purpose. Perhaps we will hit a wall and fail to develop machines whose creative output is indistinguishable from our creations. But if advancements continue to come at an explosive rate, we may find ourselves in a world where machines are better and more efficient than humans at activities that were once thought to be distinctly human. In this world, it is unclear what projects, if any, would be worth pursuing. As we pursue emergent technologies, like machine learning, we should carefully consider what it is that makes our time in the world worthwhile. If we enable machines to perform these tasks better than we do, we may pull our own sense of purpose out from under our feet.

Disturbing Videos on YouTube Kids: Rethinking the Consequences of Automated Content Creation

"Youtube logo" by Andrew Perry liscensed under CC BY 2.0 (via Flickr)



The rise of automation and artificial intelligence (AI) in everyday life has been a defining feature of this decade. These technologies have gotten surprisingly powerful in a short span of time. Computers now not only give directions, but also drive cars by themselves; algorithms predict not only the weather, but the immediate future, too. Voice-activated virtual assistants like Apple’s Siri and Amazon Alexa can carry out countless daily tasks like turning lights on, playing music, making phone calls, and searching the internet for information.

Of particular interest in recent years has been the automation of content creation. Creative workers have long been thought immune to the sort of replacement by machines that has supplanted so many factory and manufacturing jobs, but developments in the last decade have changed that thinking. Computers have already been shown capable of writing sports coverage, with other types of news likely to follow; other programs allow computers to compose original music and convincingly imitate the styles of famous composers.

While these AI advancements are bemoaned by creative professionals concerned about their continued employment – a valid concern, to be sure – other uses for AI hint at a more widespread kind of problem. Social media sites like Twitter and Facebook – ostensibly forums for human connection – are increasingly populated by “bots”: user accounts managed via artificial intelligence. Some are simple, scanning their sites for certain keywords and delivering pre-written responses, while others read and attempt to learn from the material available on each site. In at least one well-publicized incident, malicious human users took advantage of a bot’s learning ability to dramatically alter its mannerisms. This and other incidents have rekindled age-old fears about whether a robot, completely impressionable and reprogrammable, can have a sense of morality.
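A minimal sketch of the "simple" kind of bot described above might look like the following; the keyword table, canned replies, and post feed are invented stand-ins, since a real bot would work through a platform's actual API.

```python
# Hypothetical keyword bot: scan posts for tracked terms, reply with canned text.
CANNED_REPLIES = {
    "election": "Have you seen the REAL story? Follow this link...",
    "vaccine": "Doctors don't want you to know this...",
}

def reply_to(post_text):
    """Return a pre-written reply if the post mentions a tracked keyword."""
    lowered = post_text.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in lowered:
            return reply
    return None  # stay silent on posts with no tracked keyword

# Stand-in for a feed of posts pulled from a platform API.
for post in ["Thoughts on the election tonight?", "Lunch was great!"]:
    response = reply_to(post)
    if response:
        print(f"Replying to {post!r}: {response}")
```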

But there’s another question worth considering in an age when an ever-greater portion of our interactions is with computers instead of humans: will humans be buried by the sheer volume of content being created by computers? Early in November, an essay by writer James Bridle on Medium exposed a disturbing trend on YouTube. On a side of YouTube not often encountered by adults, there is a vast trove of content produced specifically for young children. These videos are both prolific and highly formulaic. Some of the common tropes include nursery rhymes, videos teaching colors and numbers, and compilations of popular children’s shows. As Bridle points out, the formulaic nature of these videos makes them especially susceptible to automated generation. The evidence of this automated content generation is somewhat circumstantial; Bridle points to “stock animations, audio tracks, and lists of keywords being assembled in their thousands to produce an endless stream of videos.”
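To see why formulaic children's content invites automation, consider how quickly short keyword lists multiply into a catalogue of videos once each auto-generated title is paired with stock assets. The lists and file names below are invented for illustration, not taken from Bridle's essay.

```python
# Combinatorial assembly: a few keyword lists yield dozens of video "recipes."
from itertools import product

characters = ["Peppa Pig", "Spiderman", "Elsa"]
topics = ["Learn Colors", "Nursery Rhymes", "Finger Family"]
hooks = ["for Kids", "Compilation", "Fun Video"]

def make_video(character, topic, hook):
    """Pair an auto-generated title with hypothetical stock assets."""
    return {
        "title": f"{character} {topic} {hook}",
        "animation": f"stock/{character.lower().replace(' ', '_')}.mp4",
        "audio": f"tracks/{topic.lower().replace(' ', '_')}.mp3",
    }

videos = [make_video(c, t, h) for c, t, h in product(characters, topics, hooks)]
print(len(videos), "videos from only 9 keywords")  # 27 videos
print(videos[0]["title"])  # "Peppa Pig Learn Colors for Kids"
```

Scale the lists into the hundreds and the output becomes the "endless stream" Bridle describes.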

One byproduct of this method of video production is that some of the videos take on a mildly disturbing quality. There is nothing overtly offensive or inappropriate about them, but there is a clear lack of human creative oversight, and the result is, to an adult, cold and senseless. The algorithm that produces these videos cannot discern this, but it is immediately apparent to a human viewer. While exposing children to strange, robotically generated videos is not by itself a great moral evil, there is little stopping these videos from becoming much darker and more disturbing. At the same time, they provide cover for genuinely malicious content made using the same formulas. Such videos exploit features of YouTube’s search and recommendation algorithms to intentionally expose children to violence, profanity, and sexual themes, often featuring well-known children’s characters like Peppa Pig. Clearly, this kind of content presents a much more direct problem.

Should YouTube take steps to prevent children from seeing such videos? The company has already indicated its intent to improve the situation, but the problem might require more than tweaks to YouTube’s programming. With 400 hours of content published every minute – some 576,000 hours every day – hiring humans to personally watch every video is logistically impossible. AI therefore provides the only practical means of vetting videos. Yet it seems unlikely that an algorithm will be able to consistently differentiate between normal and disturbing content in the near future, and YouTube’s algorithm-based response so far has not inspired confidence: content creators have complained of unwarranted demonetization by overzealous filters, applied to videos later shown to contain no objectionable content. Perhaps it is better to play it safe here, but YouTube’s system is clearly a long way from perfect.

Even if programmers could solve this problem, there is the potential for an infinite arms race of ever more sophisticated algorithms generating and vetting content. Meanwhile, the comment sections of these videos, as well as social media and news outlets, are increasingly operated and populated by other AI. The possible result is an internet in which users cannot distinguish humans from robots – one program has already succeeded in breaking Google’s reCAPTCHA, the most common test used to prove humanity on the internet – and in which the total sum of information is orders of magnitude greater than what any human, or determined group of humans, could ever understand or sort through, let alone manage and control.

Is it time for scientists and tech companies to reconsider the ways in which they use automation and AI? There doesn’t seem to be a way for YouTube to stem the flood of content, short of shutting down completely, which wouldn’t solve the wider problem anyway. Attempting to halt the progress of technology has historically proven a fool’s errand – if 100 companies swear off automation, the one company that does not will simply outpace and consume the rest. Parents can keep their children away from YouTube, but that won’t eliminate the framework that created the problem in the first place. The issue demands a more fundamental response: as a society, we need to be more aware of the circumstances behind our daily interactions with AI, and to carefully consider the long-term consequences before we turn over too much of our lives to systems that lie beyond our control.