
AI in Documentary Filmmaking: Blurring Reality in ‘What Jennifer Did’

image of camera lens and ring light in studio

Back in 2021, I wrote an article for The Prindle Post predicting the corrosive effect AI might have on documentary filmmaking. That piece centered around Roadrunner: A Film about Anthony Bourdain, in which an AI deepfake was used to read some of the celebrity chef’s emails posthumously. In that article, I raised three central concerns: (i) whether AI should be used to give voice and body to the dead, (ii) the potential for nefarious actors to use AI to deceive audiences, and (iii) whether AI could accurately communicate the facts of a situation or person.

Since that article’s publication, the danger AI poses to our ability to distinguish fact from fiction in all facets of life has only grown, with increasing numbers of people able to produce ever more convincing fakery. And, while apprehensions about this are justifiably focused on the democratic process, with Time noting that “the world is experiencing its first AI elections without adequate protections,” the risk to our faith in documentary filmmaking remains. This risk is currently being discussed thanks to one of Netflix’s most recent releases — What Jennifer Did.

The documentary focuses on Jennifer Pan, a 24-year-old who, in 2015, was convicted of hiring hitmen to kill her parents (her father survived the attack, but her mother did not) because they disapproved of who she was dating. Pan is now serving a life sentence with the chance of parole after 25 years.

The story itself, as well as the interviews and people featured in it, is true. However, around 28 minutes into the documentary, some photographs that feature prominently on-screen raise doubts about the film’s fidelity to the truth. During a section where a school friend describes Jennifer’s personality — calling her “happy,” “bubbly,” and “outgoing” — we see some pictures of Jennifer smiling and giving the peace sign. These images illustrate how full of life Jennifer could be and draw a contrast between the happy teen and the murderous adult.

But these pictures bear several hallmarks of being altered or just straight-up forgeries. Jennifer’s fingers are too long, and she doesn’t have the right number of them. She has misshapen facial features and an exceedingly long front tooth. There are strange shapes in the background and foreground, and her shoulder appears out of joint (you can see the images in question on Futurism, where the story broke). As far as I’m aware, the documentary’s makers have not responded to requests for comment, but it does appear that, much like in Roadrunner, AI has been used to embellish and create primary sources for storytelling.

Now, this might not strike you as particularly important. After all, the story that What Jennifer Did tells is real. She did pay people to break into her parents’ house to kill them. So what does it matter if, in an attempt to make a more engaging piece of entertainment, a little bit of AI is used to create some still (and rather innocuous) images? It’s not like these images show her handing over the money or doing things she might never have done; she’s smiling for the camera, something we all do. But I think it does matter, and not simply because it’s a form of deception. It’s an example of AI’s escalating and increasingly transgressive application in documentaries, particularly in documentaries where the interested parties are owed the truth of their lives being told.

In Roadrunner, AI is used to read Bourdain’s emails. This usage is deceptive, but the context in which it is done is not the most troubling that it could be. The chef sadly took his own life. But he was not murdered. He did not read the emails in question, but he did write them. And, while I suspect he would be furious that his voice had been replicated to read his writing, it is not like this recreation existed in isolation from other things he had written and said and did (but, to be clear, I still think it shouldn’t have been done).

In What Jennifer Did, however, we’re not talking about the recreation of a deceased person’s voice. Instead, we’re talking about fabricating images of a killer to portray a sense of humanity. The creative use of text, audio, and image shouldn’t, in itself, cause a massive backlash, as narrative and editing techniques always work towards this goal (indeed, no story is a totally faithful retelling of the facts). But, we must remember that the person to whom the documentary is trying to get us to relate – the person whom the images recreate and give a happy, bubbly, and outgoing demeanor – is someone who tried and, in one case, succeeded in killing her parents. Unlike in Roadrunner, What Jennifer Did uses AI not to give life to the lifeless but to give humanity to someone capable of the inhumane. And this difference matters.

Now, I’m not saying that Jennifer was or is some type of monster devoid of anything resembling humanity. People are capable of utter horrors. But by using AI to generate fake images at the point at which we’re supposed to identify with her, the filmmakers undermine the film’s integrity at a critical juncture. That’s when we’re supposed to think: “She looks like a normal person,” or even, “She looks like me.” But if I can’t trust the film when it says she was just like any other teen, how can I trust it when it makes more extreme claims? And if a documentary can’t maintain its viewers’ trust on the most basic of things, like “what you’re seeing is real,” what hope does it have of fulfilling its goal of educating and informing? In short, how can we trust any of this if we can’t trust what we’re being shown?

This is what makes the use of AI in What Jennifer Did so egregious. It invites doubt into a circumstance where doubt cannot, and should not, be introduced. Jennifer’s actions had real victims. Let’s not mince words; she’s a murderer. Because AI was used to generate images — pictures of a younger version of her as a happy teen — we have reason to doubt the authenticity of everything in the documentary. Her victims deserve better than that. If Netflix is going to make documentaries about what are the worst, and in some cases the final, days of someone’s life, it owes those people the courtesy of the truth, even if it thinks it doesn’t owe it to the viewers.

Will the Real Anthony Bourdain Please Stand Up?

headshot of Anthony Bourdain

Released earlier this month, Roadrunner: A Film About Anthony Bourdain (hereafter referred to as Roadrunner) documents the life of the globetrotting gastronome and author. Rocketing to fame in the 2000s thanks to his memoir Kitchen Confidential: Adventures in the Culinary Underbelly and subsequent appearances on series such as Top Chef and No Reservations, Bourdain was (in)famous for his raw, personable, and darkly funny outlook. Through his remarkable show Anthony Bourdain: Parts Unknown, the chef did more than introduce viewers to fascinating, delicious, and occasionally stomach-churning meals from around the globe. He used his gastronomic knowledge to connect with others, reminding viewers of our common humanity through genuine engagement, curiosity, and passion for the people he met and the cultures in which he fully immersed himself. Bourdain tragically died in 2018 while filming Parts Unknown’s twelfth season. Nevertheless, he still garners admiration for his brutal honesty, inquisitiveness regarding the culinary arts, and eagerness to know people, cultures, and himself better.

To craft Roadrunner’s narrative, director Morgan Neville draws from thousands of hours of video and audio footage of Bourdain. As a result, Bourdain’s distinctive accent and stylistic lashings of profanity can be heard throughout the movie as both dialogue and voice-over. It is the latter, specifically three voice-over lines totaling roughly 45 seconds, that is of particular interest. The audio for these three lines is not drawn from pre-existing footage; it is spoken by an AI-generated version of Bourdain’s voice. In other words, Bourdain never uttered these lines. Instead, he is being mimicked via artificial means.

It’s unclear which three lines these are, although Neville has confirmed that one of them, regarding Bourdain’s contemplation of success, appears in the film’s trailer. What is clear, however, is that Neville’s use of deepfakes to give Bourdain’s written words life should give us pause for multiple reasons, three of which we’ll touch on here.

Firstly, one cannot escape the feeling of unease regarding the replication and animation of the likeness of individuals who have died, especially when that likeness is so realistic as to be passable. Whether that is using Audrey Hepburn’s image to sell chocolate, generating a hologram of Tupac Shakur to perform onstage, or indeed, having a Bourdain sound-alike read his emails, the idea that we have less control over our likeness, our speech, and actions in death than we did in life feels ghoulish. It’s common to think that the dead should be left in peace, and it could be argued that this use of technology to replicate the deceased’s voice, face, body, or all of the above somehow disturbs that peace in an unseemly and unethical manner.

However, while such a stance may seem intuitive, we don’t often think in these sorts of terms for other artefacts. We typically have no qualms about giving voice to texts written by people who died hundreds or even thousands of years ago. After all, the vast majority of biographies and biographical movies feature dead people. There is very little concern about the representation of those persons on-screen or the page because they are dead. We may have concerns about how they are being represented or whether that representation is faithful (more on these in a bit). But the mere fact that they are no longer with us is typically not a barrier to their likeness being imitated by others.

Thus, while we may feel uneasy about Bourdain’s voice being a synthetic replication, it is not clear why we should have such a feeling merely because he’s deceased. Does his passing really alter the ethics of AI-facilitated vocal recreation, or are we simply injecting our squeamishness about death into a discussion where it doesn’t belong?

Secondly, even if we find no issue with representing the dead through AI-assisted means, we may have concerns about the honesty of such work. Or, to put it another way, the potential for deepfake-facilitated deception.

The impact of computer-generated images on social and political systems is well known. However, the use of deepfake techniques in Roadrunner represents something much more personal. The film does not attempt to destabilize governments or promote conspiracy theories. Rather, it tries to tell a story about a unique individual in his own voice. But how this is achieved feels underhanded.

Neville doesn’t make clear in the film which parts of the audio are genuine and which are deepfaked. As a result, our faith in the trustworthiness of the entire project is potentially undermined – if the audio’s authenticity is uncertain, can we safely assume the rest of the film is trustworthy?

Indeed, the fact that the use of this technique was concealed, or at least obfuscated, until Neville was challenged about it during an interview reinforces such skepticism. That’s not to say that the rest of the film must be called into doubt. However, the nature of the product, especially a documentary, requires a contract between viewer and filmmaker built upon honesty. We expect, rightly or wrongly, documentaries to be faithful representations of the things they’re documenting, and there’s a question of whether an AI-generated version of Bourdain’s voice is faithful or not.

Thirdly, even if we accept that the recreation of the voices of the dead is acceptable, and even if we accept that a lack of clarity about when vocal recreations are being used isn’t an issue, we may still want to ask whether what’s being conveyed is an accurate representation of Bourdain’s views and personality. In essence, would Bourdain have said these things in this way?

You may think this isn’t a particular issue for Roadrunner, as the AI-generated voice-over isn’t speaking sentences written by Neville; it speaks text that Bourdain himself wrote. The line regarding success featured in the film’s trailer, for example, was taken from emails written by Bourdain. On this view, Neville simply gives voice to Bourdain’s unspoken words.

However, such a stance overlooks how much information – how much meaning – is derivable not from the specific words we use but from how we say them. We may have the words Bourdain wrote on the page, but we have no idea how he would have delivered them. The AI algorithm in Roadrunner may be passable, and the technology will likely continue to develop to the point where distinguishing between ‘real’ voices and synthetic ones becomes all but impossible. But even a faithful re-creation would tell us little about how those lines would have been delivered.

Bourdain might have asked his friend the question about happiness in a tone that was playful, angry, melancholic, disgusted, or any of a myriad of other possibilities. We simply have no way of knowing, nor does Neville. By using an AI deepfake to voice Bourdain, Neville imbues the chef’s words with meaning – a meaning derived from Neville’s interpretation and the black box of AI-algorithmic functioning.

Roadrunner is a poignant example of an increasingly ubiquitous problem – how can we trust the world around us given technology’s increasingly convincing fabrications? If we cannot be sure that the words within a documentary, words that sound like they’re being said by one of the most famous chefs of the past twenty years, are genuine, then what else are we justified in doubting? If we can’t trust our own eyes and ears, what can we trust?

Should History be Kinder to George Patton?

A black-and-white photo of George Patton marching through a cemetery.

General George Patton has been an enigmatic figure for many years. Robert Orlando’s recently released documentary, Silence Patton, attempts to portray him as a Cassandra-like figure who warned against the threat posed by Stalin and communism while nobody would listen. Patton died under strange circumstances, and there has always been talk of a conspiracy to kill him. According to the film, there was an attempt to silence him because he had become a nuisance to the Allies’ post-war plans.


Mes Aynak’s Intrinsic Cultural Value

One of the many reasons that weighing ethical dilemmas is such a challenge is that we’re often faced with a conflict between measurable and immeasurable value. We see this often in relation to environmental issues. Because we can’t place an exact value on the intrinsic worth of nature, we struggle to cognitively compare environmental health with economic benefits. Thus, many companies pursue profit over environmental wellness without fully understanding the detrimental consequences. The inability to directly quantify something doesn’t entail that it is valueless or that its interests should be disregarded, but it can be awfully difficult to convince some groups of this.

Prindle and Conflict Studies to host ‘The Trials of Muhammad Ali’ outdoor screening on September 17

Come out to the Prindle Institute on Wednesday, September 17 at 8 PM for an outdoor screening of the 2014 documentary The Trials of Muhammad Ali. Movie snacks will be provided, including apple cider and DIY s’mores around Prindle’s fire pit. Bring blankets to enjoy this film on the Prindle Courtyard lawn as you get a closer look at the life of one of the most celebrated athletes and public figures of all time. The film, directed by Bill Siegel of Kartemquin Films, recounts the difficulties Ali faced as he converted to Islam and refused to serve in the Vietnam War, specifically addressing the conflict between personal values and public image.

Here’s an excerpt from Mick LaSalle’s review of the film in the San Francisco Chronicle:

“There’s history as it’s remembered, and then there’s history as it happened. This documentary gives us the latter, and it’s a true education…’The Trials of Muhammad Ali’…documents a crucial decade in Ali’s life, and over the course of the film you can see him changing his public strategies, or perhaps just changing as a person.”

And another from Bruce Ingram’s review for the Chicago Sun-Times:

“…As a testament to inner strength, or epic stubbornness, or both, “Trials” is a mind-blower. Especially when you see the way Ali stood firm — and seemed to grow and mature — as he became a highly visible lightning rod for all of the most hotly contested social and political issues of the late ’60s.”

Watch the trailer on the film’s website here. This screening is co-sponsored by Prindle and the Conflict Studies department.

Need a ride? The DePauw bus will leave the UB for Prindle at 7:45. We hope to see you there!