
Can We Trust AI Chatbots?

While more and more people are using AI-powered chatbots like ChatGPT, that’s not to say that they trust the outputs. Despite ChatGPT being hailed as a potential replacement for Google and Wikipedia, and as a bona fide disruptor of education, a recent survey found that when it comes to information about important issues like the 2024 U.S. election, its users overwhelmingly distrust it.

A familiar refrain in contemporary AI discourse is that while the programs that exist now have significant flaws, what’s most exciting about AI is its potential. However, for chatbots and other AI programs to play the roles in our lives that techno-optimists foresee, people will have to start trusting them. Is such a thing even possible?

Addressing this question requires thinking about what it means to trust in general, and whether it is possible to trust a machine or an AI in particular. There is one sense in which it certainly does seem possible, namely the sense in which “trustworthy” means something like “reliable”: many of the machines that we rely on are, indeed, reliable, and thus ones that we at least describe as things that we trust. If many of chatbots’ current problems – such as their propensity to fabricate information – were fixed, then perhaps users would be more likely to trust them.

However, when we talk about trust we are often talking about something more robust than mere reliability. Instead, we tend to think about the kind of relationship that we have with another person, usually someone we know pretty well. One kind of trusting relationship we have with others is based on us having each other’s best interests in mind: in this sense, trust is an interpersonal relationship that exists because of familiarity, experience, and good intentions. Could we have this kind of relationship with artificial intelligence?

This perhaps depends on how artificial or intelligent we think some relevant AI is. Some are willing, even at this point, to ascribe many human or human-like characteristics to AI, including consciousness, intentionality, and understanding. There is reason to think, however, that these claims are hyperbolic. So let’s instead assume, for the sake of argument, that AI is, in fact, much closer to machine than human. Could we still trust it in a sense that goes beyond mere reliability?

One of the hallmarks of trust is that trusting leaves one open to the possibility of betrayal, where the object of our trust turns out to not have our interests in mind after all, or otherwise fails to live up to certain responsibilities. And we do often feel betrayed when machines let us down. For example, say I set my alarm clock so I can wake up early to get to the airport, but it doesn’t go off and I miss my flight. I may very well feel a sense of betrayal towards my alarm clock, and would likely never rely on it again.

However, if my sense of betrayal at my alarm clock is apt, it still does not indicate that I trust it in the sense of ascribing any kind of good will to it. Instead, we may have trusted it insofar as we have adopted what Thi Nguyen calls an “unquestioning attitude” towards it. In this sense, we trust the clock precisely because we have come to rely on it to the extent that we’ve stopped thinking about whether it’s reliable or not. Nguyen provides an illustrative example: a rock climber trusts their climbing equipment, not in the sense of thinking it has good intentions (since ropes and such are not the kinds of things that have intentions), but in the sense that they rely on it unquestioningly.

People may well one day incorporate chatbots into their lives to such a degree that they adopt unquestioning attitudes toward them. But our relationships with AI are, I think, fundamentally different from those that we have towards other machines.

Part of the reason why we form unquestioning attitudes towards pieces of technology is because they are predictable. When I trust my alarm clock to go off at the time I programmed it, I might trust in the sense that I can put it out of my mind as to whether it will do what it’s supposed to. But a reason I am able to put it out of my mind is because I have every reason to believe that it will do all and only that which I’ve told it to do. Other trusting relationships that we have towards technology work in the same way: most pieces of technology that we rely on, after all, are built to be predictable. Our sense of betrayal when technology breaks is based on it doing something surprising, namely when it does anything other than the thing that it has been programmed to do.

AI chatbots, on the other hand, are not predictable, since they can provide us with new and surprising information. In this sense, they are more akin to people: other people are unpredictable insofar as when we rely on them for information, we cannot predict what they are going to say (otherwise we probably wouldn’t be trying to get information from them).

So it seems that we do not trust AI chatbots in the way that we trust other machines. Their inability to have positive intentions and form interpersonal relationships prevents them from being trusted in the way that we trust other people. Where does that leave us?

I think there might be a different kind of trust we could place in AI chatbots. Instead of thinking about them as things that have good intentions, we might trust them precisely because they lack any intentions at all. For instance, if we find ourselves in an environment in which we think that others are consistently trying to mislead us, we might not look to someone or something that has our best interests in mind, but instead to that which simply lacks the intention to deceive us. In this sense, neutrality is the most trustworthy trait of all.

Generative AI may very well be seen as trustworthy in the sense of being a neutral voice among a sea of deceivers. Since it is not an individual agent with its own beliefs, agendas, or values, and has no good or ill intentions, if one finds oneself in an environment one takes to be untrustworthy, AI chatbots may seem a trustworthy alternative.

A recent study suggests that some people may trust chatbots in this way. It found that the strength of people’s beliefs in conspiracy theories dropped after having a conversation with an AI chatbot. While the authors of the study do not propose a single explanation as to why this happened, part of this explanation may lie in the user trusting the chatbot: since someone who believes in conspiracy theories is likely to also think that people are generally trying to mislead them, they may look to something that they perceive as neutral as being trustworthy.

While it may then be possible to trust an AI because of its perceived neutrality, it can only be as neutral as the content it draws from; no information comes from nowhere, despite appearances. So while it may be conceptually possible to trust AI, the question of whether one should do so at any point in the future remains open.

AI in Documentary Filmmaking: Blurring Reality in ‘What Jennifer Did’


Back in 2021, I wrote an article for The Prindle Post predicting the corrosive effect AI might have on documentary filmmaking. That piece centered around Roadrunner: A Film about Anthony Bourdain, in which an AI deepfake was used to read some of the celebrity chef’s emails posthumously. In that article, I raised three central concerns: (i) whether AI should be used to give voice and body to the dead, (ii) the potential for nefarious actors to use AI to deceive audiences, and (iii) whether AI could accurately communicate the facts of a situation or person.

Since that article’s publication, the danger AI poses to our ability to separate fact from fiction in all facets of life has only grown, with increasing numbers of people able to produce ever more convincing fakery. And, while apprehensions about this are justifiably focused on the democratic process, with Time noting that “the world is experiencing its first AI elections without adequate protections,” the risk to our faith in documentary filmmaking remains. This is currently being discussed thanks to one of Netflix’s most recent releases — What Jennifer Did.

The documentary focuses on Jennifer Pan, who was convicted of hiring hitmen to kill her parents in 2010, when she was 24 (her father survived the attack, but her mother did not), because they disapproved of who she was dating. Pan is now serving a life sentence with the chance of parole after 25 years.

The story itself, as well as the interviews and people featured in it, is true. However, around 28 minutes into the documentary, some photographs which feature prominently on-screen raise doubt about the film’s fidelity to the truth. During a section where a school friend describes Jennifer’s personality — calling her “happy,” “bubbly,” and “outgoing” — we see some pictures of Jennifer smiling and giving the peace sign. These images illustrate how full of life Jennifer could be and draw a contrast between the happy teen and the murderous adult.

But, these pictures have several hallmarks of being altered or just straight-up forgeries. Jennifer’s fingers are too long, and she doesn’t have the right number of them. She has misshapen facial features and an exceedingly long front tooth. There are weird shapes in the back- and foreground, and her shoulder appears out of joint (you can see the images in question on Futurism, where the story broke). As far as I’m aware, the documentary makers have not responded to requests for comment, but it does appear that, much like in Roadrunner, AI has been used to embellish and create primary sources for storytelling.

Now, this might not strike you as particularly important. After all, the story that What Jennifer Did tells is real. She did pay people to break into her parents’ house to kill them. So what does it matter if, in an attempt to make a more engaging piece of entertainment, a little bit of AI is used to create some still (and rather innocuous) images? It’s not like these images are of her handing over the money or doing things that she might never have done; she’s smiling for the camera, something we all do. But I think it does matter, and not simply because it’s a form of deception. It’s an example of AI’s escalating and increasingly transgressive application in documentaries, and particularly here, in documentaries where the interested parties are owed a truthful telling of their lives.

In Roadrunner, AI is used to read Bourdain’s emails. This usage is deceptive, but the context in which it is done is not the most troubling that it could be. The chef sadly took his own life. But he was not murdered. He did not read the emails in question, but he did write them. And, while I suspect he would be furious that his voice had been replicated to read his writing, it is not like this recreation existed in isolation from other things he had written and said and did (but, to be clear, I still think it shouldn’t have been done).

In What Jennifer Did, however, we’re not talking about the recreation of a deceased person’s voice. Instead, we’re talking about fabricating images of a killer to portray a sense of humanity. The creative use of text, audio, and image shouldn’t, in itself, cause a massive backlash, as narrative and editing techniques always work towards this goal (indeed, no story is a totally faithful retelling of the facts). But, we must remember that the person to whom the documentary is trying to get us to relate – the person whom the images recreate and give a happy, bubbly, and outgoing demeanor – is someone who tried and, in one case, succeeded in killing her parents. Unlike in Roadrunner, What Jennifer Did uses AI not to give life to the lifeless but to give humanity to someone capable of the inhumane. And this difference matters.

Now, I’m not saying that Jennifer was or is some type of monster devoid of anything resembling humanity. People are capable of utter horrors. But by using AI to generate fake images at the point at which we’re supposed to identify with her, the filmmakers undermine the film’s integrity at a critical juncture. That’s when we’re supposed to think: “She looks like a normal person,” or even, “She looks like me.” But, if I can’t trust the film when it says she was just like any other teen, how can I trust it when it makes more extreme claims? And if a documentary can’t hold its viewers’ trust on the most basic of things, like “what you’re seeing is real,” what hope does it have of fulfilling its goals of educating and informing? In short, how can we trust any of this if we can’t trust what we’re being shown?

This is what makes the use of AI in What Jennifer Did so egregious. It invites doubt into a circumstance where doubt cannot, should not, be introduced. Jennifer’s actions had real victims. Let’s not mince our words; she’s a murderer. Because AI was used to generate images — pictures of a younger version of her as a happy teen — we have reason to doubt the authenticity of everything in the documentary. Her victims deserve better than that. If Netflix is going to make documentaries about what are the worst, and in some cases the final, days of someone’s life, they owe those people the courtesy of the truth, even if they think they don’t owe it to the viewers.

What Should We Do About AI Identity Theft?


A recent George Carlin comedy special from Dudesy — an AI comedy podcast created by Will Sasso and Chad Kultgen — has sparked substantial controversy. In the special, a voice model emulating the signature delivery and social commentary of Carlin, one of America’s most prominent 20th-century comedians and social critics, discusses contemporary topics ranging from mass shootings to AI itself. The voice model, which was trained on five decades of the comic’s work, sounds eerily similar to Carlin, who died in 2008.

In response to controversy over the AI special, the late comedian’s estate filed a suit in January, accusing Sasso and Kultgen of copyright infringement. As a result, the podcast hosts agreed to take down the hour-long comedy special and refrain from using Carlin’s “image, voice or likeness on any platform without approval from the estate.” This kind of scenario, which is becoming increasingly common, generates more than just legal questions about copyright infringement. It also raises a variety of philosophical questions about the ethics of emerging technology connected to human autonomy and personal identity.

In particular, there are a range of ethical questions concerning what I’ve referred to elsewhere as single-agent models. Single-agent models are a subset of generative artificial intelligence that concentrates on modeling some identifying feature(s) of a single human agent through machine learning.

Most of the public conversation around single-agent models focuses on the impact on individuals’ privacy and property rights. These violations generally occur because the outputs of single-agent models neither credit nor compensate the individuals whose data was used in the training process, a process that often relies on the non-consensual scraping of data under fair use doctrine in the United States. Modeled individuals find themselves competing in a marketplace saturated with derivative works that fail to acknowledge their role in supplying the training data, all while being deprived of monetary compensation. Although this is a significant concern that jeopardizes the sustainability of creative careers in a capitalist economy, it is not the only one.

One particularly worrisome function of single-agent models is their unique capacity to generate outputs practically indistinguishable from those of the individuals whose intellectual and creative abilities or likeness are being modeled. When an audience with an average level of familiarity with an individual’s creative output cannot distinguish whether the digital media they engage with is authentic or synthetic, this presents numerous concerns. Perhaps most obviously, it raises concerns about which works and depictions of behavior become associated with a modeled individual’s reputation. If the average person can’t discern whether an output came from an AI or from the modeled individual themselves, unwanted associations between that individual and AI outputs may form.

Although these unwanted associations are most likely to cause harm when the individual generating the outputs does so in a deliberate effort to tarnish the modeled individual’s reputation (e.g., defamation), one need not have this sort of intent for harm to occur. Instead, one might use the modeled individual’s likeness to deceive others by spreading disinformation, especially if that individual is perceived as epistemically credible. Recently, scammers have begun incorporating single-agent models in the form of voice cloning to call families in a loved one’s voice and defraud them into transferring money. On a broader scale, a bad actor might flood social media with an emulation of the President of the United States relaying false information about the election. In both cases, the audience is deceived into adopting and acting on false beliefs.

Moreover, some philosophers, such as Regina Rini, have pointed to the disturbing implications of single-agent modeling for our ability to treat digital media and testimony as veridical. If one can never be sure whether the digital media one engages with is true, how might this negatively impact our ability to consider digital media a reliable source for transmitting knowledge? Put otherwise, how can we continue to trust testimony shared online?

Some, like Keith Raymond Harris, have pushed back against the notion that certain forms of single-agent modeling, especially those that fall under the category of deepfakes (e.g., digitally fabricated videos or audio recordings), pose a substantial risk to our epistemic practices. Skeptics argue that single-agent models like deepfakes do not differ radically from previous methods of media manipulation (e.g., photoshop, CGI). Furthermore, they contend that the evidential worth of digital media also stems from its source. In other words, audiences should exercise discretion when evaluating the source of the digital media rather than relying solely on the digital media itself when considering its credibility.

These attempts to allay the concerns about the harms of single-agent modeling overlook several critical differences between previous methods of media manipulation and single-agent modeling. Earlier methods of media manipulation were often costly, time-consuming, and, in many cases, distinguishable from their authentic counterparts. By contrast, single-agent modeling is accessible, affordable, and capable of producing outputs that bypass an audience’s ability to distinguish them from authentic media.

In addition, many individuals lack the media literacy to discern between trustworthy and untrustworthy media sources in the way Harris suggests. Moreover, individuals who primarily receive news from social media platforms generally tend to engage with the stories and perspectives that reach their feeds rather than content outside their digitally curated information stream. These concerns are exacerbated by social media algorithms prioritizing engagement, siloing users into polarized informational communities, and rewarding stimulating content by placing it at the top of users’ feeds, irrespective of its truth value. Social science research demonstrates that the more an individual is exposed to false information, the more willing they are to believe it, owing to familiarity (i.e., the illusory truth effect). Thus, it appears that single-agent models pose genuinely novel challenges that require new solutions.

Given the increasing accessibility, affordability, and indistinguishability of AI modeling, how might we begin to confront its potential for harm? Some have raised the possibility of digitally watermarking AI outputs. Proponents argue that this would allow individuals to recognize whether media was generated by AI, perhaps mitigating the concerns I’ve raised relating to credit and compensation. Consequently, these safeguards could reduce reputational harm by diminishing the potential for unwanted associations. This approach would integrate blockchain — the same technology used by cryptocurrency — allowing the public to access a shared digital trail of AI outputs. Unfortunately, as of now, this cross-platform AI metadata technology has yet to see widespread implementation. Even with cross-platform AI metadata, we would remain reliant on the goodwill of big tech to implement it. Moreover, this doesn’t address concerns about the non-consensual sourcing of training data under fair use doctrine.
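To make the proposal a little more concrete, here is a minimal sketch, in Python, of the general idea behind a shared provenance record for AI outputs: each generated artifact is hashed and logged alongside basic metadata, and anyone can later check whether a given piece of media appears in the log. This is only an illustration of the concept; the class and function names (ProvenanceLog, register, lookup) are hypothetical, and real proposals for cross-platform metadata or blockchain-backed registries are far more involved.

```python
# Hypothetical sketch of a shared provenance record for AI outputs.
# Each output is fingerprinted with a hash and stored with metadata;
# anyone can later query the log to see whether a piece of media
# was registered as machine-generated.
from __future__ import annotations

import hashlib
import json
import time
from dataclasses import dataclass, field


@dataclass
class ProvenanceLog:
    """Append-only record of registered AI-generated outputs."""
    entries: list = field(default_factory=list)

    def register(self, media_bytes: bytes, model_name: str) -> dict:
        # Identical bytes always produce the same SHA-256 digest.
        digest = hashlib.sha256(media_bytes).hexdigest()
        entry = {"sha256": digest, "model": model_name, "timestamp": time.time()}
        self.entries.append(entry)
        return entry

    def lookup(self, media_bytes: bytes) -> dict | None:
        # Return the matching entry if this exact output was registered, else None.
        digest = hashlib.sha256(media_bytes).hexdigest()
        return next((e for e in self.entries if e["sha256"] == digest), None)


if __name__ == "__main__":
    log = ProvenanceLog()
    fake_clip = b"synthetic audio bytes"  # stand-in for a generated voice clip
    log.register(fake_clip, model_name="voice-model-demo")

    print(json.dumps(log.lookup(fake_clip), indent=2))  # found: flagged as AI-generated
    print(log.lookup(b"a genuine recording"))           # None: no provenance record
```

Even this toy version makes the limitations visible: changing a single byte of the media changes its hash and breaks the match, and the log is only as trustworthy as the platforms willing to write to it, which is precisely why reliance on the goodwill of big tech remains a problem.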

Given the potential harms of single-agent modeling, it is imperative that we critically examine and reformulate our epistemic and legal frameworks to accommodate these novel technologies.

Deepfake Porn and the Pervert’s Dilemma


This past week, Representative Alexandria Ocasio-Cortez spoke of an incident in which a computer realistically depicted her engaged in a sexual act. She recounted the harm and difficulty of being depicted in this manner. The age of AI-generated pornography is upon us, and so-called deepfakes are becoming less visually distinguishable from real life every day. Emerging technology could allow people to generate true-to-life images and videos of their most forbidden fantasies.

What happened with Representative Ocasio-Cortez raises issues well beyond making pornography with AI of course. Deepfake pornographic images are not just used for personal satisfaction, they are used to bully, harass, and demean. Clearly, these uses are problematic, but what about the actual creation of the customized pornography itself? Is that unethical?

To think this through, Carl Öhman articulates the “pervert’s dilemma”: We might think that any sexual fantasy conceived — but not enacted — in the privacy of our home and our own head is permissible. If we do find this ethical, then why exactly do we find it objectionable if a computer generates those images, also in the privacy of one’s home? (For the record, Öhman believes there is a way out of this dilemma.)

The underlying case for letting a thousand AI-generated pornographic flowers bloom is rooted in the famous Harm Principle of John Stuart Mill. His thought was that in a society which values individual liberty, behaviors should generally not be restricted unless they cause harm to others. Following from this, as long as no one is harmed in the generation of the pornographic image, the action should be permissible. We might find it gross or indecent. We might even find the behaviors depicted unethical or abhorrent. But if nobody is being hurt, then creating the image in private via AI is not itself unethical, or at least not something that should be forbidden.

Moreover, for pornography in which some of the worst ethical harms occur in the production process (the most extreme example being child pornography), AI-generated alternatives would be far preferable. (If it turns out that being able to generate such images increases the likelihood of the corresponding real-world behaviors, then that’s a different matter entirely.) Even if no actual sexual abuse is involved in the production of pornography, there have been general worries about the working conditions within the adult entertainment industry that AI-generated content could alleviate. Although, alternatively, just like in other areas, we may worry that AI-generated pornography undermines jobs in adult entertainment, depressing wages and replacing actors and editors with computers.

None of this is to deny that AI-generated pornography can be put to bad ends, as the case of Representative Ocasio-Cortez clearly illustrates. And she is far from the only one to be targeted in this way (also see The Prindle Post discussion on revenge porn). The Harm Principle defender would argue that while this is obviously terrible, it is these uses of pornography that are the problem, and not simply the existence of customizable AI-generated pornography. From this perspective, society should target the use of deepfakes as a form of bullying or harassment, and not deepfakes themselves.

Crucially, though, this defense requires that AI-generated pornography be adequately contained. If we allow people to generate whatever images they want as long as they pinky-promise that they are over 18 and won’t use them to do anything nefarious, it could create an enforcement nightmare. Providing more restrictions on what can be generated may be the only way to meaningfully prevent the images from being distributed or weaponized even if, in theory, we believe that strictly private consumption squeaks by as ethically permissible.

Of course, pornography itself is far from uncontroversial, with longstanding concerns that it is demeaning, misogynistic, addictive, and encourages harmful attitudes and behaviors. Philosophers Jonathan Yang and Aaron Yarmel raise the worry that, by providing additional creative control to the pornography consumer, AI turns these problematic features of pornography up to 11. The argument, whether aimed at AI-generated pornography or at pornography generally, depends on a data-driven understanding of the actual behavioral and societal effects of pornography — something which has so far eluded a decisive answer. While the Harm Principle is quite permissive about harm to oneself, as a society we may also find that the individual harms of endless customizable pornographic content are too much to bear even if there is no systemic impact.

Very broadly speaking, if the harms of pornography we are most worried about relate to its production, then AI pornography might be a godsend. If the harms we are most worried about relate to the images themselves and their consumption, then it’s a nightmare. Additional particularities are going to arise about labor, distribution, source images, copyright, real-world likeness, and much else besides as pornography and AI collide. Like everything sexual, openness and communication will be key as society navigates the emergence of a transformative technology in an already fraught ethical space.

Military AI and the Illusion of Authority

Israel has recruited an AI program called Lavender into its ongoing assault against Palestinians. Lavender processes military intelligence that previously would have been processed by humans, producing a list of targets for the Israel Defense Forces (IDF) to kill. This novel use of AI, which has drawn swift condemnation from legal scholars and human rights advocates, represents a new role for technology in warfare. In what follows, I explore how the technological aspects of AI such as Lavender contribute to a false sense of its authority and credibility. (All details and quotations not otherwise attributed are sourced from this April 5 report on Lavender.)

While I will focus on the technological aspect of Lavender, let us be clear about the larger ethical picture. Israel’s extended campaign — with tactics like mass starvation, high-casualty bombing, dehumanizing language, and destroying health infrastructure — is increasingly being recognized as a genocide. The evil of genocide almost exceeds comprehension; and in the wake of tens of thousands of deaths, there is no point quibbling about methods. I offer the below analysis as a way to help us understand the role that AI actually plays — and does not play — not because its role is central in the overall ethical picture, but because it is a new element in the picture that bears explaining. It is my hope that identifying the role of technology in this instance will give us insight into AI’s ethical and epistemic dangers, as well as insight into how oppression will be mechanized in the coming years. As a political project, we must use every tool we have to resist the structures and acts of oppression that make these atrocities possible. Understanding may prove a helpful tool.

Let’s start with understanding how Lavender works. In its training phase, Lavender used data concerning known Hamas operatives to determine a set of characteristics, each of which indicates that an individual is likely to be a member of Hamas. Lavender scans data regarding every Gazan in the IDF’s database and, using this set of characteristics, generates a score from 1 to 100. The higher the number, the more likely that individual is to be a member of Hamas, according to the set of characteristics the AI produced. Lavender outputs these names onto a kill list. Then, after a brief check to confirm that a target is male, commanders turn the name over to additional tracking technologies, ordering the air force to bomb the target once their surveillance technology indicates that he is at home.

What role does this new technology play in apparently authorizing the military actions that are causally downstream of its output? I will highlight three aspects of its role. The use of AI such as Lavender alienates the people involved from their actions, inserting a non-agent into an apparent role of authority in a high-stakes process, while relying on its technological features to boost the credibility of ultimately human decisions.

This technology affords a degree of alienation for the human person who authorizes the subsequent violence. My main interest here is not whether we should pity the person pushing their lever in the war machine, alienated as they are from their work. The point, rather, is that alienation from the causes and consequences of our actions dulls the conscience, and in this case the oppressed suffer for it. As one source from the Israeli military puts it, “I have much more trust in a statistical mechanism than a soldier who lost a friend two days ago…. The machine did it coldly. And that made it easier.” Says another, “even if an attack is averted, you don’t care — you immediately move on to the next target. Because of the system, the targets never end.” The swiftness and ease of the technology separates people from the reality of what they are taking part in, paving the way for an immensely deadly campaign.

With Lavender in place, people are seemingly relieved of their decision-making. But the computer is not an agent, and its technology cannot properly bear moral responsibility for the human actions that it plays a causal role in. This is not to say that no one is morally responsible for Lavender’s output; those who put it in place knew what it would do. However, the AI’s programming does not determinately cause its output, giving the appearance that the creators have invented something independent that can make decisions on its own. Thus, Lavender offers a blank space in the midst of a causal chain of moral responsibility between genocidal intent and genocidal action, while paradoxically providing a veneer of authority for that action. (More on that authority below.) Israel’s use of Lavender offloads moral responsibility onto the one entity in the process that can’t actually bear it — in the process obscuring the amount of human decision-making that really goes into what Lavender produces and how it’s used.

The technological aspect of Lavender is not incidental to its authorizing role. In “The Seductions of Clarity,” philosopher C. Thi Nguyen argues that clarity, far from always being helpful to us as knowers, can sometimes obscure the truth. When a message seems clear — easily digested, neatly quantified — this ease can lull us into accepting it without further inquiry. Clarity can thus be used to manipulate, depriving us of the impetus to investigate further.

In a similar fashion, Lavender’s output offers a kind of ease and definiteness that plausibly acts as a cognitive balm. A computer told us to! It’s intelligent! This effect is internal to the decision-making process, reassuring the people who act on Lavender’s output that what they are doing is right, or perhaps that it is out of their hands. (This effect could also be used externally in the form of propaganda, though Israel’s current tactic is to downplay the role of AI in their decisions.)

Machines have long been the tools that settle disputes when people can’t agree. You wouldn’t argue with a calculator, because the numbers don’t lie. As one source internal to the IDF put it, “Everything was statistical, everything was neat — it was very dry.” But the cold clarity of technology cannot absolve us of our sins, whether moral or epistemic. Humans gave this technology the parameters in which to operate. Humans entrust it with producing its death list. And it is humans who press play on the process that kills the targets the AI churns out. The veneer of credibility and objectivity afforded by the technical process obscures a familiar reality: that the people who enact this violence choose to do so. That it is up to the local human agents, their commanders, and their government.

So in the end we find that this technology is aptly named. Lavender — the plant — has long been known to help people fall asleep. Lavender — the AI — can have an effect that is similarly lulling. When used to automate and accelerate genocidal intelligence, this technology alienates humans from their own actions. It lends the illusion of authority to an entity that can’t bear moral responsibility, easing the minds of those involved with the comforting authority of statistics. But it can only have this effect if we use it to do so — and we should rail against its use when so much is at stake.

Ashes to Ashes, Moondust to Moondust


On January 8 of this year, Astrobotic Technology launched the first ever commercial moon lander, Peregrine. While the mission marked a significant step in the growing commercialization of space exploration, it was Peregrine’s payload that saw the probe attain notoriety. On board – courtesy of U.S. companies Celestis and Elysium Space – were the remains of at least 70 people and one dog. Sold as “a truly extraordinary… memorial experience,” these companies provide the option of having one’s ashes deposited in “a new sacred place for remembrance” – that is, the moon’s surface. Such a memorial might seem a fitting way to honor a loved one (provided, of course, you can afford the hefty $12,000 price tag). But serious concerns have been raised regarding the morality of such an endeavor.

For one, the moon is considered sacred in many cultures. Writing in Nature, Alvin D. Harvey explains that for the people of the Navajo Nation, the moon is seen as an ancient relative (“Grandmother Moon”), and that we should be “careful, diligent, and respectful when visiting her.” It was for this very reason that Navajo Nation President Buu Nygren contacted NASA, protesting the Peregrine mission prior to launch. He noted that the moon was a part of his culture’s “spiritual heritage” and an “object of reverence and respect.” Depositing human remains upon it was, therefore, “tantamount to desecration of [that] sacred space.”

But the cultural significance of the moon doesn’t stop there. Hinduism links the moon with Shiva – the god of destruction and regeneration – while Chinese folklore tells the story of the goddess Chang’e who became immortal and flew to the sky. Ancient Greek mythology held the moon to be a creation of Zeus, while the ancient Egyptians associated the moon with Isis – the goddess of magic and healing. For New Zealand Māori, the moon – or marama – has important symbolic meaning, with the lunar cycle being likened to “the opening and closing of a portal through which departed spirits returned to the origin of life.”

What, then, should this cultural significance mean for our exploratory endeavors? Should we refrain from depositing human remains on the moon and other celestial bodies of cultural significance? It’s important to note that in raising their concerns, the Navajo Nation sought only to be consulted about such missions – not to ban them outright. Interestingly, this consultation is precisely what NASA had already promised the Navajo Nation back in 1998 after similar concerns were raised when the remains of planetary scientist Eugene Shoemaker were transported to the moon by Lunar Prospector. This promise, it seems, was soon forgotten – though the Biden administration has since made attempts to remedy this.

Space exploration necessarily involves our intimate interaction with celestial bodies that have long held cultural significance. If we are being purely consequentialist, we might argue that the scientific knowledge – and subsequent benefit to humanity – gained from these missions far outweighs the cultural offense such exploration might induce. The Parker Solar Probe, for example, will – in 2025 – be the first man-made artifact to “touch” the sun – an object of enormous cultural importance, and for many, a deity in its own right. The probe will, however, revolutionize our understanding of the solar wind, and how it affects life on Earth.

But the deposition of human remains can avail itself of no such arguments. There is no scientific understanding to be gained, nor “greater good” for humanity. It’s a vanity project – albeit, an understandable one – concerned solely with ensuring a legacy for the dead. We might argue that if these individuals expressed a strong desire to have their remains dealt with in this way, it would be wrong not to fulfill their wishes. It’s this sentiment that usually drives our insistence on respecting the funerary wishes of the dead, despite no legal obligation. It’s unclear, however, whether we can even wrong the dead. There are, of course, also the wishes of those who survive the dead. Elysium is careful to describe their memorial service in a way that appeals chiefly to those left behind, describing this “majestic memorial” as “a connective experience for families and friends” in which they can “remember a loved one throughout the night sky.”

But while such a memorial undoubtedly creates a good for families and friends of the dead, it’s unclear that this good is sufficient to outweigh the harms experienced by those for whom the moon has significant cultural importance. There’s also nothing to suggest that a “majestic memorial” to their loved ones can’t be sufficiently achieved via other means that don’t involve the desecration of celestial bodies of cultural value.

This leads, though, to an interesting implication for how we deal with our dead. Celestial bodies are not the only parts of our natural world with enormous cultural significance. There are many more down-to-earth examples. For some, it’s certain mountains; for others, it’s the sea. Yet these are also locations over which we routinely dispose of human remains. So what does this mean? Well, if we think there is a good argument to be made for refraining from depositing human remains in locations of cultural significance (or, at least, for consulting representatives of those groups for whom the location is important), then it seems that we must seriously reconsider simple – and, for many, widely accepted – practices like depositing a loved one’s ashes by the sea.

Several months ago, I briefly summarized some of the ways in which philosophy – and ethics more specifically – might help us better understand how we should conduct ourselves as we explore the cosmos. This case provides just one more example. For better or worse, the humans (and one dog) aboard Peregrine never made it to their lunar destination. A propellant leak scuttled any chance of the probe arriving on the moon, and at 20:59 GMT on January 18 – just ten days after launch – the lander burned up in the Earth’s atmosphere.

Originalism, Hypocrisy, and the Role of the Judiciary


David French’s brief column responding to a New York Times interview of retired Supreme Court Justice Stephen Breyer illustrates beautifully the bind that originalists – those who believe that constitutional provisions ought to be interpreted according to how they were understood by the people who enacted them – have put themselves in. French, a conservative pundit and former lawyer, admirably owns up to the fact that, because American history is “deeply confused,” originalism enables judges to pick and choose the “particular strand of history he or she prefers and then import personal preferences into what is supposed to be an objective analysis of meaning.” While he does not put it this way, French is in effect accusing originalists of intellectual dishonesty or hypocrisy: they profess to be interpreting the Constitution pursuant to “original meaning,” but in fact they often surreptitiously apply other, non-originalist criteria to select among multiple historical meanings.

But French stubbornly cleaves to the line that the “judiciary’s role is to interpret the law, not to change the law.” Thus, French seems to find himself without any tools for interpreting the Constitution: history is too complicated for judges and too susceptible to intellectual hypocrisy, but judges must not “make law” by interpreting the Constitution in accordance with their values and other “subjective determinations.”

As I will show, on the one hand, and contra French, there is a way for originalism to be intellectually respectable — but it probably is not palatable to most originalists. On the other hand, the originalists’ insistence that judges do not make law is simply untenable, and ironically, quite contrary to the history and tradition of American constitutional jurisprudence.

Before suggesting how originalism can avoid hypocrisy, I would like to expand briefly on French’s critique of originalism. Undoubtedly, when interpreting the law, the text matters a great deal. When the meaning of the text is clear and unambiguous, that is often the end of the inquiry. For example, there is no doubt that Article II of the Constitution sets an age limit of 35 for any U.S. President. The text is clear, and no “living constitutionalist” believes that “the Age of thirty five Years” ought to mean anything other than 35 years old. You might say that practically everyone agrees that the “plain meaning” criterion of interpretation gets lexical priority over other tools of constitutional construction: if a provision has a plain meaning, then the plain meaning controls.

However, there are many clauses of the Constitution that do not have a plain, unambiguous meaning. Moreover, these tend to be among the most important clauses. For example, what does the Fourteenth Amendment mean when it prohibits the States from depriving any person of “life, liberty, or property” without “due process of law”? The originalist answers that it means whatever people understood it to mean when it was ratified in 1868.

The trouble with this answer is that oftentimes the historical record either lacks suitably detailed discussions of a given constitutional provision, or it contains multiple, diametrically opposed interpretations, each of which had some currency among those who enacted the provision.

The former situation can happen when framers or ratifiers deliberately leave the meaning of a provision ambiguous to maximize the chances of finding favor with different constituencies. The latter situation is what French means when he says that American history is often “confused.” Since the originalist believes that the original meaning is the only legitimate criterion of constitutional interpretation, such situations leave originalists in the position of having to pick and choose among original meanings, or inventing them. At that point, the only criteria of interpretive choice available to them are criteria that have no place in their interpretive theory. And this leads to intellectual dishonesty where, for example, originalists purport to find that one strand of historical interpretation is the “mainstream view” while the others are “outliers” or “anomalies,” when in fact they have chosen that strand based on their own policy preferences, moral values, or political philosophy.

There is a way for originalists to avoid intellectual dishonesty, however: embrace radical judicial restraint. So, whenever the historical record contains multiple interpretations of the provision at issue or, alternatively, is too sparse to determine a unique meaning, the originalist should simply refrain from the task of interpretation and leave it to Congress to sort out the mess through the amendment process. This approach would certainly lighten the federal judiciary’s workload. I suspect it would also make the Constitution essentially a dead letter.

But surely, French is right that allowing judges to make “invariably subjective determinations” about the practical consequences of their rulings and the way that society’s values evolve in interpreting the Constitution would subvert democracy itself because “it is the democratically elected branches of government that are responsible for that evolution, not the judiciary.” Well, this would come as news to the Founders. Reading judicial opinions that interpret the Constitution from the Early Republic period, one is struck above all by their repeated and unabashed invocations of principles of natural law and political philosophy to guide their interpretation of America’s fundamental law. As far as I am aware, none of the Founders or ratifiers — categories which included, in some cases, the judges themselves — ever took issue with this interpretive method, although they may have disagreed violently about the particular principles at play.

But that argument is no more than an appeal to tradition. The better answer to French’s concern is that, if the judiciary’s “lawmaking” role has been sanctioned by a Constitution duly ratified by the people, then that role cannot be un- or anti-democratic — at least not by the lights of the polity created by that particular Constitution, which sets forth the conditions of democratic legitimacy in that polity. Ironically, as I indicated above, there is every reason to think that the Constitution’s drafters and ratifiers authorized the judiciary to use tools of constitutional interpretation other than, and in addition to, original meaning. In other words, we should believe that the Constitution as understood by those who enacted it gave judges the authority to make just the sorts of subjective determinations of which French disapproves. Of course, one might ask why what the “people” decided the judiciary should do in 1787 is entitled to such deference. But this is no more than an application of the so-called “dead hand problem,” and not a special problem for the jurisprudential approach I am proposing here.

French’s column is almost poignant in capturing something akin to the transition of a religious person from unclouded faith to reluctant and guarded skepticism. I would argue that his reasons for reluctance are misguided: there is nothing contrary to conservatism in the view that judges should make value judgments and other “subjective” determinations. Actually, that view is entirely consistent with history and tradition.