
Has AI Made Photos Untrustworthy?

By Kenneth Boyd
16 Sep 2024

Since the widescale introduction and adoption of generative AI, AI image generation and manipulation tools have always felt a step behind the more widely used chatbots. While publicly available apps have become increasingly impressive over time, whenever you came across a truly spectacular AI-generated image, it was likely created by a program that required a bit of technical know-how to use, or at least had a few hoops to jump through.

But these barriers have been disappearing. For example, Google’s Magic Editor, available on the latest version of its Pixel line of phones, provides users with free, powerful tools that can convincingly alter images, with no tech-savviness required. It’s not hard to see why these features would be attractive to users. But some have worried that giving everyone these powers undermines one of our most important sources of evidence.

If someone is unsure whether something happened, or people disagree about some relevant facts, a photograph can often provide conclusive evidence. Photographs serve this role not only in mundane cases of everyday disagreement but when the stakes are much higher, for example in reporting the news or in a court of law.

The worry, however, is that if photos can be so easily manipulated – and so convincingly, and by anyone, and at any time – then the assumption that they can be relied upon to provide conclusive evidence is no longer warranted. AI may then undermine the evidential value of photos in general, and with it a foundational way that we conduct inquiries and resolve disputes.

The potential implications are widespread: as vividly illustrated in a recent article from The Verge, one could easily manipulate images to fabricate events, alter news stories, and even implicate people in crimes. Furthermore, the existence of AI image-manipulating programs can cause people to doubt the veracity of genuine photos. Indeed, we have already seen this kind of doubt weaponized in high-profile cases, for example when Trump accused the Harris campaign of posting an AI-generated photo to exaggerate the crowd size at an event. If one can always “cry AI” when a photo doesn’t support one’s preferred narrative, then baseless claims that would otherwise have been definitively disproven can more easily survive scrutiny.

So have these new, easy-to-use image-manipulating tools completely undermined the evidential value of the photograph? Have we lost a pillar of our inquiries, to the point that photos should no longer be relied upon to resolve disputes?

Here’s a thought that may have come to mind: tools like Photoshop have been around for decades, and worries about photo manipulation have been around for even longer. Of course, a tool like Photoshop requires at least some know-how to use. But the mere fact that any photo we come across may have been digitally manipulated has not, it seems, undermined the evidential value of photographs in general. AI tools, then, really are nothing new.

Indeed, this response has been so common that The Verge decided to address it in a separate article, calling it a “sloppy, bad-faith argument.” The authors argue that new AI tools are importantly dissimilar to Photoshop: after all, it’s likely that only a small percentage of people will actually take the time to learn how to use Photoshop to manipulate images in a way that’s truly convincing, so giving everyone the power of a seasoned Photoshop veteran with no need for technical know-how represents not merely a different degree of an existing problem, but a new kind of problem altogether.

However, even granting that AI tools are accessible to everyone in a way that Photoshop isn’t, AI will still not undermine the evidential value of photographs.

To see why, let’s take a step back. What is a photo, anyway? We might think that a photo is an objective snapshot of the world, a frozen moment in time of the way things were, or at least the way they were from a certain point of view. In this sense, viewing a photo of something is akin to perceiving it, as if it were there in front of you, although separated in time and space.

If this is what photos are, then we can see how they could serve as a definitive and conclusive source of evidence. But they aren’t really like this: the information provided by a photo can’t be interpreted out of context. For instance, photos are taken by photographers, who choose what to focus on and what to ignore. Relying on photos for evidence requires that we ask not simply what’s in the photo, but who took it, what their intentions were, and whether they’re trustworthy.

Photos do not, then, provide evidence that is independent of our social practices: when we rely on photos we necessarily rely on other people. So if the worry is that new AI tools represent a fundamental change in the way that we treat photos as evidence because we can no longer treat photos as an objective pillar of truth, then it is misplaced. Instead, AI imposes a new requirement on us when drawing information from photos: determining the evidential value of a photo will now partly depend on whether we think its source would try to intentionally mislead us using AI.

The fact that we evaluate photographs not as independent touchpoints of truth but as sources of information in the context of our relationships with other people explains why few took seriously Trump’s claim that the photo of Harris’ supporters was AI-generated. This was not because the photo was in any sense “clearly” or “obviously” real: the content of the photo itself could very well have been generated by an AI program. But the fact that the accusations were made by Trump and that he has a history of lying about events depicted in photographs, as well as the fact that there were many corroborating witnesses to the actual event, means that the photo could be relied upon.

So new AI programs do, in a way, make our jobs as inquirers harder. But they do so by adding to problems we already have, not by creating a new type of problem never before seen.

But perhaps we’re missing the point. Is it not still a blow to the way we rely on photos that we now have a new, ever-present suspicion that any photo we see could have been manipulated by anyone? And isn’t this suspicion likely to have some effect on the way we rely on photographic evidence, settle disputes, and corroborate or disprove different people’s versions of events?

There may very well be an increasing number of attempts to appeal to AI to discredit photographic evidence, or to fabricate it. But compare our reliance on photographs to another form of evidence: the testimony of other people. Every person is capable of lying, and it is arguably easy to do so convincingly. But the mere possibility of deception does not undermine our general practices of relying on others, nor does it undermine the potential for the testimony of other people to be definitive evidence – for example, when an eyewitness provides evidence at a trial.

Of course, when the stakes are high, we might look for additional, corroborating evidence to support someone’s testimony. But the same is the case with photos, as the evidential value of a photograph cannot be evaluated separately from the person who took it. So as the ever-present possibility of lying has not undermined our reliance on other people, the ever-present possibility of AI manipulation will not undermine our reliance on photographs.

This is not to deny that new AI image-manipulating tools will cause problems. But the argument that they will cause brand new problems because they create doubts that undermine a pillar of inquiry relies, I argue, upon a misconception of the nature of photos and the way we rely on them as evidence. We have not lost a pillar of objective truth that was, until recently, distinct from the fallible practice of relying on others, since photographs never served this role. New AI tools may still create problems, but if they do, they can be overcome.

Ken Boyd holds a PhD in philosophy from the University of Toronto. His philosophical work concerns the ways that we can best make sure that we learn from one another, and what goes wrong when we don’t. You can read more about his work at kennethboyd.wordpress.com