Fears for the Fear Machine
The image shows a white woman on a bus, chin resting on her hand, frowning. Crowded onto the seat behind her are several identical-looking dark-skinned men, looming over her with big smiles. The woman, who’s holding an item printed with the Union Jack, looks nervous and upset.
This image was shared widely on Twitter. (In fact, the title of this piece was inspired by a tweet commenting on it.) It was produced using generative AI technology. AI image generators such as DALL-E and Microsoft’s Image Creator are trained on large sets of images from human artists, using complex algorithms to represent various features of those images, from their subject matter to their artistic style. When a user inputs a prompt describing the image they want, the generator renders the elements of the prompt according to its training and outputs an image. The human user can then assess the output, refining the prompt and repeating the process until the generator produces a satisfactory image.
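To make that workflow concrete, the loop of prompting, assessing, and refining can be reduced to a few lines of code. The sketch below is illustrative only, assuming the OpenAI Python SDK and its DALL-E image endpoint; the model name and the starting prompt are my own placeholders, not details from the case discussed here.

```python
# A minimal sketch of the prompt-and-refine loop, assuming the OpenAI
# Python SDK (pip install openai) and an API key set in the
# OPENAI_API_KEY environment variable. Model name and prompt are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "a watercolor illustration of a crowded city bus at dusk"

while True:
    # The generator renders the current prompt as an image.
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    print("Generated image:", response.data[0].url)

    # The human assesses the output, then either accepts it
    # or revises the prompt and repeats the process.
    revision = input("Revised prompt (press Enter to accept): ")
    if not revision:
        break
    prompt = revision
```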
In the case of the image described above, the output is a xenophobic representation of an imagined future, shared to stoke fear and hatred in viewers. Some of the moral ground here is easily covered. It’s morally wrong to further unjust aims by stoking fear and hatred. It’s wrong to represent ethnic groups as dangerous or morally depraved. These actions treat their subjects with profound disrespect, and the people enacting them are arguably complicit in the violence they help incite by stoking others’ fear. That these images are often posted for monetary gain adds insult to moral injury. (For example, the account that posted the image discussed above earns money through Twitter engagement; I declined to link to the original for this reason.)
While spreading hateful images is not new, the generative technology used to produce this image has made its creation much easier than it would otherwise have been — and social media has done the same for its proliferation. In my day, racists had to spend a few hours in an illustration program to produce a xenophobic image like this. Further back, those who were ambitious in their bigotry had to get a job drawing political cartoons at a newspaper to achieve anything close to this sort of reach.
The questions about the ethical implications of such technology for society are important. (Who bears the moral responsibility when hateful images like these are widely shared, and who should bear the responsibility of curbing their proliferation?) My focus here, however, will be a bit closer to home: what moral dangers — and possibilities — are opened up for individuals as a result of access to this technology? More specifically, what does it do to us to input our fears into the fear machine and receive a fearsome image in return?
Let’s focus on the xenophobic fears represented in the illustration of the woman on the bus. Producing xenophobic images most clearly wrongs others, but we have reason to think that in producing these images one also wrongs oneself. The self-harm in these cases is both emotional and moral. The emotional harm lies in the further fear that the person producing the image calls up in themselves. Images make one’s fears vivid, communicating their point more richly and viscerally than the handful of words in the prompt could. When you’ve seen an image that stokes fear, you’ve encountered something fearsome, and the person who produces images representing their own fears ends up paying an emotional cost. Feeling afraid feels bad — though not all bad, if it’s accompanied by a thrill (perhaps unlikely in the case of creating images that stoke one’s own xenophobia). We can see this sort of self-harm with other feelings and attitudes, such as self-respect: a person who repeatedly listens to an abusive voicemail from their ex in order to feel bad about themselves is indulging in a kind of self-harm. In the same way that the voicemail listener is pressing an emotional bruise, the person who produces an AI-generated image of what they fear for society is deepening their own anxiety.
More importantly, the person who produces these images self-inflicts a moral injury insofar as the act reinforces their own prejudice. In producing the image, the person makes something new in the world that contributes to their own moral shortcomings. So much attention has been paid to the misinformation problems posed by photorealistic images and deepfake videos that it can be easy to miss what is actually produced when illustrations like these are generated. Illustration-style images are not misinformation in the way that, say, a faked photograph or deepfake video would be. The person who produces them is not tricked into believing that an event that never happened did happen. But illustrations like these do represent the world as being a certain way. The image discussed in the introduction represents Britain as currently or potentially in danger from brown-skinned men, who are represented visually as interchangeable (in their near-identical appearance) as well as foreign, in contrast to the white woman who (in holding the Union Jack) is presented as “really” British. Producing the image is a kind of reality-making. It results in something real — the image — that portrays the world in prejudicial terms, reinforcing the fears and prejudice of the person who produced it.
Fear often feels disempowering, but in many ways we can still exercise agency in the face of our fears. We can choose to feed or to starve them through the media we consume — or, in this case, the media we produce. Those of us who find xenophobia morally repugnant may still have internalized implicit biases that affect our understanding of the world. We would do well to think creatively about the ways generative technology can be used not to keep remaking the fractured and prejudiced world we inherited, but to imagine new ways of living without fear of each other.
Perhaps there is even some legitimate use in generating images that help one encounter one’s fears. A person with a phobia of grasshoppers, for example, could produce an image of someone who looks like them holding a grasshopper as part of exposure therapy. Given how strongly images shape our perceptions of the world, image-generating AI puts us behind the wheel of a powerful machine. What we do from there matters morally, not only for society but also for our own moral trajectory, as we consider the kind of people we want to be and the kind of world we want to create.