It must have been a wondrous thing to enter a cathedral in medieval Europe. The stained-glass images of saints and angels would have shone like kaleidoscopes in an otherwise earthy world. Their rainbow patterns, beaming through the haze of incense, would have echoed the cantor’s chants, plangent and unintelligible, reverberating through the nave, casting sacred stories in a rarefied light. By providing a space for extraordinary physical sensations, the cathedral would have drawn the mind from the ordinariness of the everyday toward life’s sacredness and significance.
Today, the cathedral is a mere curiosity. We encounter more images scrolling for an hour on Instagram than a medieval peasant would have encountered in their entire life. The enchantment of the stained-glass saint is hardly visible against the blooming, buzzing digital confusion that surrounds us today. Our relationship to images has changed. Pictures hit differently.
There may be no better place to look for a pointed illustration of this confusion than Sora, OpenAI’s new TikTok-style social media app populated exclusively by AI-generated videos. Like TikTok, Sora feels like a digital conveyor belt designed to jam as much mechanized content into our conscious minds as possible. Videos on the platform vary in style, tone, and content. Many are uncanny. All are fake. The most distinctive feature of Sora is that users can create videos featuring likenesses of real people. By recording three words and a turn of your head, you can depict yourself in nearly any kind of fictional situation and, with permission, you can do the same with the likenesses of other users. You can even create videos depicting long-dead cultural figures like Martin Luther King, Jr. and Bob Ross.
OpenAI markets Sora using the term hyperreal: “Turn your ideas into videos with hyperreal motion and sound.” OpenAI’s use of this term presumably refers to the visual quality of the videos, which are nearly indistinguishable from lens-based recordings. The concept of hyperreality, however, has a deeper cultural pedigree. Ironically, critical theorists have used the term to describe a worrisome shift under capitalist visual culture, one that tech firms like OpenAI have been exacerbating in recent decades.
One of its earliest uses appeared in a 1975 essay by the Italian philosopher Umberto Eco, titled “Travels in Hyperreality” — a critical travelogue documenting a road trip Eco took across the United States. Eco was fascinated by Americans’ obsession with simulations and replicas, such as wax museums, ghost town attractions, and, that ultimate “degenerate utopia,” Disneyland. People are drawn to such simulations, he suggests, because they package our messy realities into perfect, controllable images. These images are pleasant and easy to consume, and so they seem more understandable, and eventually more real, than their referents. This blurs the distinction between the fake and the authentic, which Eco describes as a culture of hyperreality.
The term gained wider traction, however, through the nearly contemporary writings of French philosopher Jean Baudrillard, who used it more radically to diagnose what he saw as the widespread and fundamentally alienating effect of image-saturated capitalist culture. For Baudrillard, hyperreality is a state in which cultural signs have detached from reality completely. Historically, images could be said to represent the real world either faithfully or falsely. Under capitalist culture, however, signs only reference themselves, pinging back and forth between advertisements, television, movies, magazines, newscasts, and now, social media, making the line between the fake and the authentic no longer relevant.
According to Baudrillard, the detachment of our visual culture from reality has massive consequences for human subjectivity. Immersed in this symbolic game of pinball, we gradually lose our connection to meaning as rooted in embodied human experience. Within the hyperreal, you are unable to interpret the world, or form a judgment about what matters, except by reference to the sign systems around you. You watch a sunset over the ocean, and you are awestruck because it looks almost as beautiful as a movie; you fall in love, and you are excited because you find yourself in your own Cinderella story. While it could be said that sign systems have always structured our experiences, the difference is that today’s signs are self-referential and self-generating, rooted in the attempt to dominate human consciousness for profit.
We thus live our lives in a simulation of meaning — what Baudrillard calls the simulacrum — yet we continue to think our beliefs, desires, and preferences are our own. As a simulation, hyperreality is an inescapable paradigm that covers its tracks. One effect of the hyperreal is a flattening out of our cultural signs. Internet culture has already deepened this effect. Taylor Swift, Zohran Mamdani, and Carrie Bradshaw inhabit the same plane of consciousness as we scroll through our seemingly personalized TikTok feeds. This flattening leads to a loss of historical consciousness, as everything becomes an image in the stream. Although it may be comfortable, the simulacrum draws us away from concerns and activities that are integral to human flourishing. There are no shadows in the grocery store; the television never stops flashing. There is no reason to think on death, or, for that matter, anything beyond the soothing and profitable signs in which we are immersed.
As others have suggested, generative AI seems to represent a culmination of Baudrillard’s hyperreality. Self-generating, detached from physical reality, yet sneakily naturalistic, images and videos produced by AI replicate many of the key features of the simulacrum. While making videos of yourself and your friends on Sora seems like meaningless fun on the surface, if there is any truth in Baudrillard’s observations, then we must consider the deeper impact these videos may have on our subjectivities as well as our relationships to others.
Disneyland, for all its fakery and capitalist mechanization, is still made and run by humans, as were the advertisements and media images when Baudrillard developed his analysis of the hyperreal. The near eradication of embodied humanity in generative AI’s processes, coupled with the increasingly heightened naturalism of these images, threatens to deepen the negative effects of the hyperreal by creating a media landscape that is nearly, or even completely, non-human. The Dead Internet Theory already claims that most of the internet, including social media, has been taken over by bots and AI-generated “slop.” Since the release of Sora 2 in September 2025, its videos have increasingly appeared on other social media platforms, often with the Sora watermark removed. Our media culture seems to be drifting further and further from embodied human experience.
However, it is also possible that the extreme mechanization of videos like those produced by Sora might pull the veil back and compel at least some of us to distrust media so deeply that we seek ways to reconnect with embodied experience. For Baudrillard, Disneyland played an important role in the maintenance of hyperreality. Its hyperbolic simulations made the environs of Los Angeles — the strip malls, billboards, Porsche dealerships, etc. — seem real by comparison. This comparative realness, Baudrillard argued, mollifies us by obfuscating the fact that our everyday life is completely and utterly shaped by the simulacrum. However, slop videos, and the technology behind those videos, render the artificiality of our sign systems more salient, at least for technologically savvy users. If you know you’re looking at slop, you know you’re looking at an image that is in some profound way detached from embodied human experience. For some people, this awareness might produce a tear in the veil. To the extent that humans still aim for an authentic engagement with reality, this tear could actually motivate an attempt to escape rather than deepen immersion in the hyperreal.
Yet escape is not as easy as stepping away from the computer. Our built environment is affected by people’s expectations about what reality should look like, and these expectations are in turn shaped by visual culture. Traveling to Rome in pursuit of an authentic experience, tourists seek out cafés that conform to a Hollywood image. The act of ordering and drinking a cappuccino becomes a performance of that image. When the children of today, raised on Italian brainrot, begin to travel, how will their associations and expectations of Rome change? What kind of images will shape the cities of tomorrow? A better escape route might involve seeking embodied experiences in non-built environments. “Touch grass,” as they say. Yet the fact that “touch grass” is itself a meme embodies the difficulties with this route. Capitalist sign systems digest their own critiques. There may be no escape.
Throughout Silicon Valley, tech companies preach that generative AI will usher in a new age of transcendence, pushing humanity beyond the limitations of our frail, embodied selves. While it’s true that massive increases in computing power will undoubtedly lead to biomedical breakthroughs and other scientific advances, we must ask ourselves: at what cost? It is telling that OpenAI, whose core mission is to develop “artificial general intelligence that benefits all of humanity,” should create a product that seemingly takes us in the opposite direction.