
Deepfake Porn and the Pervert’s Dilemma

By Evan Arnet
15 Apr 2024
[Image: blurred photo of a woman on a bed]

This past week Representative Alexandria Ocasio-Cortez spoke of an incident in which she was realistically depicted, via computer-generated imagery, engaged in a sexual act. She recounted the harm and difficulty of being depicted in this manner. The age of AI-generated pornography is upon us, and so-called deepfakes are becoming less visually distinguishable from real life every day. Emerging technology could allow people to generate true-to-life images and videos of their most forbidden fantasies.

What happened to Representative Ocasio-Cortez raises issues well beyond making pornography with AI, of course. Deepfake pornographic images are not just used for personal satisfaction; they are used to bully, harass, and demean. Clearly, these uses are problematic, but what about the actual creation of the customized pornography itself? Is that unethical?

To think this through, Carl Öhman articulates the “pervert’s dilemma”: we might think that any sexual fantasy conceived, but not enacted, in the privacy of our own home and our own head is permissible. If we do find this permissible, then why exactly do we find it objectionable when a computer generates those images, also in the privacy of one’s home? (For the record, Öhman believes there is a way out of this dilemma.)

The underlying case for letting a thousand AI-generated pornographic flowers bloom is rooted in the famous Harm Principle of John Stuart Mill. His thought was that in a society which values individual liberty, behaviors should generally not be restricted unless they cause harm to others. Following from this, as long as no one is harmed in the generation of the pornographic image, the action should be permissible. We might find it gross or indecent. We might even find the behaviors depicted unethical or abhorrent. But if nobody is being hurt, then creating the image in private via AI is not itself unethical, or at least not something that should be forbidden.

Moreover, for pornography in which some of the worst ethical harms occur in the production process (the most extreme example being child pornography), AI-generated alternatives would be far preferable. (If it turns out that being able to generate such images increases the likelihood of the corresponding real-world behaviors, then that’s a different matter entirely.) Even when no sexual abuse is involved in the production of pornography, there are longstanding worries about working conditions in the adult entertainment industry that AI-generated content could alleviate. Alternatively, as in other fields, we may worry that AI-generated pornography undermines jobs in adult entertainment, depressing wages and replacing actors and editors with computers.

None of this is to deny that AI-generated pornography can be put to bad ends, as the case of Representative Ocasio-Cortez clearly illustrates. And she is far from the only one to be targeted in this way (also see The Prindle Post discussion on revenge porn). The Harm Principle defender would argue that while this is obviously terrible, it is these uses of pornography that are the problem, not simply the existence of customizable AI-generated pornography. From this perspective, society should target the use of deepfakes as a form of bullying or harassment, not deepfakes themselves.

Crucially, though, this defense requires that AI-generated pornography be adequately contained. If we allow people to generate whatever images they want as long as they pinky-promise that they are over 18 and won’t use them for anything nefarious, we invite an enforcement nightmare. Placing more restrictions on what can be generated may be the only way to meaningfully prevent the images from being distributed or weaponized, even if, in theory, we believe that strictly private consumption squeaks by as ethically permissible.

Of course, pornography itself is far from uncontroversial, with longstanding concerns that it is demeaning, misogynistic, addictive, and encourages harmful attitudes and behaviors. Philosophers Jonathan Yang and Aaron Yarmel raise the worry that, by giving the consumer additional creative control, AI turns these problematic features of pornography up to 11. The argument, whether aimed at AI-generated pornography or at pornography generally, depends on a data-driven understanding of the actual behavioral and societal effects of pornography, something that has so far eluded a decisive answer. And while the Harm Principle is quite permissive about harm to oneself, as a society we may also find that the individual harms of endless customizable pornographic content are too much to bear, even if there is no broader societal impact.

Very broadly speaking, if the harms of pornography we are most worried about relate to its production, then AI pornography might be a godsend. If the harms we are most worried about relate to the images themselves and their consumption, then it’s a nightmare. Further complications will arise around labor, distribution, source images, copyright, real-world likenesses, and much else besides as pornography and AI collide. As with everything sexual, openness and communication will be key as society navigates the emergence of a transformative technology in an already fraught ethical space.

Evan Arnet received his Ph.D. in History and Philosophy of Science and Medicine from Indiana University. His overarching philosophical interest is in institutions and how they shape and constrain human behavior. This is variously represented in writings on science, law, and labor. Read more about him at www.evanarnet.com