
The Trouble with Tasers

photograph of stun gun held up by hand

Recently a Sheriff’s Deputy in Florida was charged with culpable negligence for tasing a fleeing suspect who was soaked in gasoline, causing a fire that left second- and third-degree burns over seventy-five percent of the man’s body. Should the Deputy be criminally liable in such unusual circumstances? Hard cases make bad law, the saying goes. That is, good law is based on common, ordinary occurrences rather than on rare and extraordinary ones. But the case does raise issues about something quite common but, unfortunately, less and less discussed: the widespread use of Tasers by law enforcement in America. Civilian, non-federal police alone are involved in approximately 421,000 use-of-force incidents a year (cases where they resort to some sort of physical force) and deploy a Taser in 36% of these – making for over 150,000 tasings a year. How concerned about Tasers should we be?
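As a quick check on that last figure, here is the back-of-the-envelope calculation using only the numbers just cited:

$$421{,}000 \times 0.36 \approx 151{,}600$$

tasings per year, which is where the “over 150,000” estimate comes from.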

U.S. law enforcement agencies own nearly a million Tasers. “Taser” is a brand name, but it is often used generically to describe the many conducted-energy weapons that deliver painful and debilitating shocks to their victims. Originally marketed as “non-lethal,” Tasers are now supposed to be a “less lethal” alternative to the use of a firearm – although they can kill. Reporting practices make it difficult to say exactly how many people have died from being tased, but estimates suggest that as many as a thousand people were killed in the United States between 2000 and 2018. Furthermore, since 2001 at least ten people have been shot by police officers who later said that they were attempting to draw their Taser but mistakenly drew and fired their service weapon.

Still, might Tasers be a less lethal alternative to firearms? Actually, they don’t seem to be an alternative to firearms at all. The only comprehensive study of the question reviewed 36,112 use-of-force incidents by the Chicago police and found no evidence that carrying and deploying Tasers reduced the use of firearms, or that Tasers ever substituted for firearms. In fact, the study’s principal investigator, Professor Jeffrey Grogger, said unequivocally, “We find no substitutions between Tasers and firearms.” Hence, despite widespread acceptance of the practice of police carrying Tasers and using them as weapons of compliance, the reality is that tasing is often not a substitute for a firearm but a form of intentional, or unintentional, torture.

“Torture” is standardly defined as “the action or practice of inflicting severe pain or suffering on someone as a punishment or in order to force them to do or say something.” When Tasers are used by the police in the United States to assure immediate, unhesitating compliance with police orders, they are torture devices. Lower voltages are even referred to by police themselves as “pain-compliance” settings. At higher voltages, tasing renders the victim completely physically incapacitated via neuromuscular spasms. But higher voltages are also, of course, quite painful.

So, tasing inflicts severe pain and suffering on people in order to force them to do or say what the police want them to do or say without hesitation or negotiation. Given that the Bill of Rights in general, and the Eighth Amendment in particular, with its ban on “cruel and unusual punishments,” are generally taken to prohibit torture by the United States government and its representatives, why is torture via Taser so widely ignored?

Police would argue that this is a mischaracterization of the purpose and use of Tasers. Typical police policy statements (which follow the model statement of Axon Enterprises, the company that still manufactures the original Taser) reserve the use of Tasers for cases where the suspect is “violent or physically resisting” or “has demonstrated an intention to be violent or to physically resist and who reasonably appears to present the potential to harm officers, him/herself or others.” But the model statement then adds: or where the police have “a reasonable belief that an individual has committed or threatened to commit a serious offense.” In other words, the criteria start with violence but end up incredibly broad. Who doesn’t have the “potential to harm” or to commit an offense in the future?

Despite these issues, some experts, including, for example, the Stanford Criminal Justice Center, still advocate for the use of Tasers – if they are used in the right way. They say, for example, that Tasers should not be used on children, pregnant women, the elderly, the mentally ill, or those under the influence of drugs. As they admit, however, it’s not always easy to tell whether someone is pregnant, mentally ill, on drugs, or even a child. Still, they argue, “The purpose of Tasers and other weapons is to subdue violent and dangerous individuals…and [they should] never [be used] on individuals who are passively resisting arrest.” Perhaps, then, it is the misuse, rather than the use, of Tasers we should be worried about.

If we are not going to eliminate, or very strictly limit, police use of Tasers, how do we ensure that Tasers are used in the right way by law enforcement? What should our goal be?

Maybe we should end on the one unequivocally good thing about Tasers. They have, in fact, reduced the number of police injured in use-of-force events. “But,” as Professor Grogger puts it, it might “be better if the distribution of injury reduction was better split between [police] and suspects.”

Your Political Party Isn’t Right About Everything: Intellectual Humility and the Public Square

photograph of cluttered telephone switchboard

Our political affiliations affect us in many ways, influencing how we vote, where we live, and even who we connect with as friends. But let’s take a moment to consider not just how our political affiliations influence how we vote, but also what we believe. Before reading the rest of this article, pause and consider how you would answer each of the following questions:

Is abortion morally wrong in most circumstances?
Should homosexual couples be allowed to marry?
Is illegal immigration into the United States a serious problem?
Should the United States federal minimum wage be raised?
Is it okay to consider race when making college admissions decisions?

Chances are, if someone knows your answer to one of these questions, they could probably predict how you answered the others. If you said that the United States should not raise its federal minimum wage, then you likely also said that abortion is wrong in most circumstances, and if you thought that homosexual couples should be allowed to marry, you probably also think that race should be a factor in college admissions.

But this is rather surprising. After all, these issues are very complex. To decide whether we should raise the minimum wage, we would need to know a good deal of economics, and to settle whether abortion is wrong, we would need to know a fair bit of ethical theory. Furthermore, these questions appear to be unrelated. What does the morality of abortion have to do with the minimum wage? And what does college admissions have to do with illegal immigration? Having a particular position on whether we should raise the minimum wage seems to have very little, if anything, to do with deciding who gets into college.

Because these issues are both complex and unrelated, someone that is not familiar with United States politics would probably expect that people’s opinions would be all over the map. Just because a person thinks that homosexual couples should be allowed to marry does not mean that they will think that illegal immigration is not a serious problem. But this, of course, is not what we find. Instead, the answers that people give to these questions are highly correlated. If someone thinks that affirmative action is okay, then they are much more likely to think that the minimum wage should be raised.

The reason we can predict how people will answer is political affiliation. Republicans and Democrats tend to give diverging answers to the above questions, and so the party that you typically vote for can be used to predict what your answers will be. So just by knowing what party you vote for, we can anticipate what you think about issues ranging from abortion to affirmative action to minimum wage laws.

Perhaps this is not much of an issue. After all, maybe the political party you chose is the right one, and it gets things correct the vast majority of the time. But is this really plausible? Let’s examine a couple possibilities.

It could be, for example, that one political party is simply more intelligent than the other. There has been research suggesting that Republicans’ IQ scores might be 1-3 points higher than Democrats’, but other research has shown that, once we take socioeconomic status into account, those differences disappear. And both Republicans and Democrats make the same sorts of logical errors when evaluating arguments for and against their respective positions.

It could also be that one side has respect for experts and defers to their opinions while the other does not, enabling the former to be right far more often than the latter. But again, the evidence here is mixed at best. While Republicans may be less likely to listen to the experts when it comes to anthropogenic climate change, their views on economic policy align more closely with economists’ views than Democrats’ do.

So maybe there is not a good reason to think that one political party gets things mostly right, while the other side gets things mostly wrong. But if we do not have a strong reason to think that Democrats are consistently better at answering political questions than Republicans, or vice versa, then we are faced with a bit of a dilemma. If our opinions on many complex and unrelated political issues are best explained by our party affiliation, and we have no reason to think that one side is right more often than the other, then maybe we ought to be less confident in our political beliefs.

One way to put this plan into action would be to become more intellectually humble. Not only does intellectual humility reduce polarization and increase empathy, but most importantly for our purposes, intellectual humility increases how open we are to considering opposing points of view. Maybe if we were all a bit more intellectually humble, we would be less likely to simply parrot the beliefs of our chosen political party.

But even though it might seem obvious that we could all benefit from having some intellectual humility, some have argued that it comes with certain political drawbacks as well. In order to thrive, democracies need their citizens not only to take care when forming their political beliefs, but also to put those beliefs into action by becoming civically engaged. Yet at the same time that intellectual humility opens us up to reconsidering our political beliefs, it also makes us less politically involved. As our confidence that our way of seeing the world is correct decreases, we also lose some of our motivation to make sure our views are politically enforced.

Is there any middle ground here? Can we stay civically engaged while at the same time growing in intellectual humility? Perhaps there is a way to balance the two considerations, but it will likely transform our current forms of political participation. Instead of engaging in politics like overly confident activists, we might think long and hard about a single issue before taking a stand. Instead of dreaming up grand ways that the government should transform society, we might admit that any utopian vision likely has many unforeseen flaws. And instead of rushing to judgment, we might admit that we do not know the answers to many of the questions our society faces.

On Academic Freedom and Striking the Right Balance

photograph of campus gates shut

In a recent column, Eli Schantz argues that academic freedom is not absolute, and that “academic freedoms must be balanced against and limited by” academics’ other obligations, such as their duty not to engage in invidious discrimination. This is an important point. For example, while academic freedom plausibly requires some sort of commitment to permitting academics to speak freely, speech that constitutes verbal harassment should not be tolerated.

However, as Schantz recognizes, how the balance is struck is a matter of vital importance — really, the whole ball game. And many now seem to believe that the following standard strikes the right balance: academic speech can be legitimately proscribed when either (a) someone claims that the speech is demeaning or disrespectful or (b) there is some degree of likelihood that the speech will cause harm.

This standard is unworkable, and if applied consistently — as it must be, in deference to the moral equality of persons — it would undermine the academic enterprise.

Examples illustrating the broad sweep of this standard are easy to come by. Imagine a fervently Christian student, who prior to arriving on campus had never been exposed to atheist or anti-trinitarian arguments. Exposing the student to these arguments might very well be psychologically devastating for them, and might even make them feel disrespected. Or consider an academic who, based on her scholarship, makes a policy recommendation that is then implemented by a state government. Suppose the academic’s recommendation, while made in good faith, was mistaken, and the policy ends up causing serious harm. This outcome was surely foreseeable, given the ever-present possibility of error and the stakes involved; so, the standard would imply that the academic should have been restrained from making the recommendation.

The general point is that if a topic is of significance to human life, then speech about that topic likely can be harmful. Therefore, a standard that makes foreseeable harm sufficient for censorship would cripple any serious academic discussion of humanly significant topics.

This does not mean we should engage in such discussions in an insensitive manner or in inappropriate contexts. But such “time, place, and manner” restrictions are perfectly compatible with a robust commitment to academic free speech.

The Supreme Court’s First Amendment jurisprudence is instructive on the standards that should apply to potentially harmful speech. First Amendment doctrine recognizes that some categories of harmful speech do not warrant protection. This includes defamation, true threats, incitement, and speech integral to unlawful conduct, such as fraud or verbal harassment. But the Supreme Court — not the current Court, but mainly the liberal Warren Court — has held that the possibility, or even the likelihood, that speech will cause some form of harm down the line is not generally sufficient to justify government censorship. In Brandenburg v. Ohio (1969), for example, the Court held that speech advocating for the use of violence in service of political ends is protected by the First Amendment, so long as it is not intended and objectively likely to cause imminent violence. This ruling, of course, applies equally to left- and right-wing advocates of political violence. The Court’s rationale was not that such advocacy is harmless — if it were, the legitimate bounds of free speech would be an easy question — but that on balance, the costs of censorship outweigh the benefits.

Similarly, while First Amendment protection from civil liability does not extend to defamation, a plaintiff who seeks to recover from an alleged defamer nevertheless has the burden of proving that the statement was defamatory. Simply claiming that the statement injured their reputation is generally insufficient unless they can show that the statement falls under certain narrow categories of statements considered defamatory per se, such as an allegation that they were involved in criminal activity. The standard of proof is not the demanding proof beyond reasonable doubt, but rather proof by a preponderance of the evidence. Nevertheless, the burden lies with them to show that the statements were defamatory, and not with the speaker to show that the statements were not defamatory.

Some may argue that the standards which apply to government censorship are not relevant to the limits academic communities ought to impose upon the speech of their members. In my view, this is mistaken.

As I have argued previously, free speech is particularly important for academic communities because their fundamental purpose is to generate and transmit knowledge. Without a robust free speech regime on campus, academics and students cannot engage in the kind of probing, multi-perspective discussions most conducive to this goal. Such a regime requires not only that the institutional rules of the community not unduly burden speech, but that members not impose social and economic penalties on other members for their speech without a compelling justification. For this reason, there should be a high presumption in favor of free speech in academia. On most campuses today, that presumption is defeated, and properly so, only in the case of speech that harasses or discriminates. But “harassment” and “discrimination” should continue to be defined narrowly. They should not extend to good faith discussion of controversial topics, or to one-off remarks by thoughtless or immature students and professors.

Would a robust free speech regime on campus cause harm to its members or others? In some instances, yes. No speech regime, whether restrictive or libertarian, is without costs. The discussion we should be having about speech on campus is about the net benefits of different kinds of speech regime. Just as it is insufficient to invoke academic freedom to shield academics from institutional and social liability for their speech, it is also not enough to invoke the fact that academic freedom is not absolute to justify imposing such liabilities.

Why Academics Should Be Activists

photograph of impassioned teacher lecturing

In a recent, engaging Prindle Post piece, Ben Rossi comes down decisively against the idea that academics should be activists. I disagree. Or, at least, I think trying to avoid being labeled an “activist” is a waste of time.

I don’t think this statement is very controversial: “Academics have a right, and sometimes an obligation, to share their knowledge, expertise, and research with the public where it’s relevant – even on controversial and divisive political issues.” Compare that to this (which I take to be Rossi’s position): “Academics should not be activists, particularly in areas directly relevant to their specialty, because it will undermine their objectivity and credibility.” These can’t both be right, can they?

Think about the differences between these academics. Catharine MacKinnon was a professor who pioneered the claim that sexual harassment is a form of sex discrimination and, as a lawyer, argued cases that led to a lot of new law in that area. Many economists move back and forth between being professors, working at influential think-tanks, and holding powerful, agenda-shaping government positions. Conservative economists, like Ben Bernanke, work at conservative think-tanks and for Republican presidents. Liberal economists, like Janet Yellen, work for liberal think-tanks and Democratic presidents. Philosophy professors – from Jeremy Bentham to Jeff McMahan – have been at the forefront of a social movement, Animal Liberation, to use the title of Professor Peter Singer’s popular book (with half a million copies in print), bringing attention to the idea that killing and eating animals capable of experiencing pleasure and pain is morally problematic.

Which of these academics are “activists” and which are doing what Rossi endorses and calls “public outreach”? “The line between public outreach and campaigning is admittedly a blurry one,” he says, “but not to the extent of rendering the distinction meaningless.” I disagree. These examples suggest to me that the distinction is, in fact, meaningless. Or, at least, that it means something different from what Rossi implies. “Activists,” it seems to me, are people pursuing goals of which you do not approve, while people pursuing goals you commend are simply doing public outreach. The point of trying to draw this line between activism and outreach, I would argue, is to turn controversial moral and political disputes into (supposedly) less controversial professional or pedagogical ones.

However, pushing the claim that “activists” are not as objective as nonactivists is essentially a way of trying to get “activists” to not take their own side in an argument. Asking them to avoid active engagement and conceal their hard-earned knowledge is not only unfair, it’s unhelpful as a model of objectivity. Objectivity is about aspiring to have defensible views based on reasons and empirical evidence, not on having no views at all or concealing your views. Nor does objectivity have anything to do with how firmly you hold a particular belief. The undecided or waffling are not prima facie more objective than the firmly committed. Look at the evidence on undecided voters: they are the least well-informed and the least-interested part of the electorate.

Consider as another example the charge of judicial “activism.” Conservatives complained for a long time that the judges who made the civil rights revolution happen, by recognizing rights that had not previously been enshrined in law, were “activist judges” – that is, bad judges – with insufficient respect for previous legal findings. Now that conservatives have a majority, many liberals argue that conservatives are activists – that is, bad judges with no respect for previous legal findings. I think this suggests that the accusation of judicial “activism” is an empty rhetorical gesture. By labeling others “activists,” we’re really just saying “I am against what they are for.”

What about teaching? In introductory undergraduate courses, it’s certainly important to focus on presenting a balanced approach without excessively privileging your own views. But this only goes so far. First, because, as teachers, we must implicitly operate (for lack of a better phrase) in the realm of the reasonable – within the space of positions and reasons generally recognized by professionals in our fields. So, we are already not “objective” from the get-go about all kinds of things. When teaching political philosophy, for example, I never present slave-holding as a live option worth discussing the pros and cons of – even though there are more slaves in the world today than there were before the Civil War.

Second, in my experience with both law school courses and more advanced undergraduate philosophy classes, disguising your own views is nearly impossible – and pointless. In any high-level discussion in the fields I know, the views of the participants will emerge if the discussion is detailed enough or goes on long enough. I don’t know what to make of the suggestion that maybe this shouldn’t happen. If someone asks my expert opinion on a topic, why should I only present them with the most prominent positions that other people take and withhold my opinion of which position I believe is correct? That seems like intellectual malpractice to me. And in my experience, as both student and teacher, taking a position is just part of pedagogy. (I once supervised a Master’s thesis whose author used the following jokey subtitle right up to the final draft: “Why Tim Sommers is so Very, Very Wrong about Communitarianism.”)

Further, I worry that the suggestion that someone is not objective or credible because of the positions they take, or defend vigorously, on an issue is sometimes just a condescending way of disagreeing with them. There’s no neutral position from which to disagree with someone in a somehow more objective way than how they disagree with you. If you think that someone is too passionate or too loud in support of their positions, well, that’s just your opinion. You can express that opinion by calling them activists if you’d like, but that doesn’t earn the other side of the argument any extra points.

Rossi writes, “The defining purpose of academic institutions is to generate, and then to transmit, knowledge.” But we deprive ourselves of the knowledge and opinions of some of the best-informed people in our society when we insist that academics not advocate too forcefully for the positions they think they are most right about. Rossi thinks that the answer is that there’s a clear, principled line between activism and advocacy that we should avoid crossing. I don’t. I say transmit knowledge. Be active. Act on what you know.

Should AI Development Be Stopped?

photograph of Arnold Schwarzenegger's Terminator wax figure

It was a bit of a surprise this month when the so-called “Godfather of AI” Geoffrey Hinton announced that he was quitting Google after working there for more than a decade developing Google’s AI research division. With his newfound freedom to speak openly, Hinton has expressed ethical concerns about the use of the technology, citing its capacity to destabilize society and exacerbate income inequality. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told The New York Times this month. That such an authoritative figure within the AI field has now condemned the technology is a significant addition to a growing call for a halt on AI development. Last month more than 1,000 AI researchers published an open letter calling for a six-month pause on training AI systems more powerful than the newest ChatGPT. But does AI really pose such a risk that we ought to halt its development?

Hinton worries about humanity losing control of AI. He was surprised, for instance, when Google’s AI language model was able to explain to him why a joke he made up was funny. He is also concerned that despite AI models being far less complex than the human brain, they are quickly becoming able to do complex tasks on par with a human. Part of his concern is the idea of algorithms seeking ever-greater control, and his admission that he doesn’t know how to control the AI that Google and others are building. This concern is part of the reason for the call for a moratorium, as the recent letter explains: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? […] Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Eliezer Yudkowsky, a decision theorist, recently suggested that a 6-month moratorium is not sufficient, because he is concerned that AI will become smarter than humans. His concern is that building anything smarter than humans will definitely result in the death of everyone on Earth. Thus, he has called for completely ending the development of powerful AI and believes that an international treaty should ban its use, with its provisions enforceable by military action if necessary. “If intelligence says that a country outside the agreement is building a GPU cluster,” he warned, “be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.”

These fears aren’t new. In the 1920s and 1930s there were concerns that developments in science and technology were destabilizing society and would strip away jobs and exacerbate income inequality. In response, many called for moratoriums on further research – moratoriums that did not happen. In fact, Hinton does not seem to think this is practical since competitive markets and competitive nations are already involved in an arms race that will only compel further research.

There is also the fact that over 400 billion dollars was invested in AI in 2022 alone, meaning that it will be difficult to convince people to bring all of this research to a halt given the investment and potentially lucrative benefits. Artificial intelligence has the capability to make certain tasks far more efficient and productive, from medicine to communication. Even Hinton believes that development should continue because AI can do “wonderful things.” Given these considerations, one response to the proposed moratorium insists that “a pause on AI work is not only vague, but also unfeasible.” Its authors argue, instead, that we simply need to be especially clear about what we consider “safe” and “successful” AI development to avoid potential missteps.

Where does this leave us? Certainly we can applaud the researchers who take their moral responsibilities seriously and feel compelled to share their concerns about the risks of development. But these kinds of warnings are vague, and researchers need to do a better job of explaining the risks. What exactly does it mean to say that you are worried about losing control of AI? Saying something like this encourages the public to imagine fantastical sci-fi scenarios akin to 2001: A Space Odyssey or The Terminator. (Unhelpfully, Hinton has even agreed with the sentiment that our situation is like the movies.) Ultimately, people like Yudkowsky and Hinton don’t exactly draw a clear picture of how we get from ChatGPT to Skynet. The fact that deep neural networks are so successful despite their simplicity compared to a human brain might be a cause for concern, but why exactly? Hinton says: “What really worries me is that you have to create subgoals in order to be efficient, and a very sensible subgoal for more or less anything you want to do is get more power—get more control.” Yudkowsky suggests: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” He adds that “A sufficiently intelligent AI won’t stay confined to computers for long.” But how?

These are hypothetical worries about what AI might do, somehow, if it becomes more intelligent than us. These concepts remain hopelessly vague. In the meantime, there are already real problems that AI is causing, such as predictive policing and discriminatory bias. There’s also the fact that AI is incredibly environmentally unfriendly: training one AI model can emit five times as much carbon dioxide as a car does over its lifetime. Putting aside how advanced AI might become relative to humans, it is already proving to pose significant challenges that will require society to adapt. For example, there has been a surge in AI-generated music recently, and this presents problems for the music industry. Do artists own the rights to the sound of their own voice, or does a record company? A 2020 paper revealed that a malicious actor could deliberately create a biased algorithm and then conceal this fact from potential regulators, owing to such algorithms’ black-box nature. There are so many areas where AI is being developed and deployed that it might take years of legal reform before clear and understandable frameworks can be developed to govern its use. (Hinton points to the capacity for AI to negatively affect the electoral process as well.) Perhaps this is a reason to slow AI development until the rest of society can catch up.

If scientists are going to be taken seriously by the public, the nature of the threat will need to be made much clearer. Most of the more serious ethical issues involving AI – such as labor reform, policing, and bias – are significant not because of AI itself, but because AI will allow small groups to benefit without transparency and accountability. In other words, the ethical risks of AI are still mostly owing to the humans who control that AI, rather than the AI itself. Humans can make great advances in science, but often in advance of understanding how that knowledge is best used.

In the 1930s, the concern that science would destroy the labor market only subsided when a world war made mass production and full employment necessary. We never addressed the underlying problem. We still need to grapple with the question of what science is for. Should AI development be dictated by a relatively small group of financial interests who can benefit from the technology while it harms the rest of society? Are we, as a society, ready to collectively say “no” to certain kinds of scientific research until social progress catches up with scientific progress?

Putin and the Friend-Enemy Distinction

photograph of Putin walking with security detail

Each May 9th, Russia, alongside several former Soviet Union territories, celebrates Victory Day. This annual holiday, first held in 1945 in the Soviet Union’s then 16 republics, commemorates Nazi Germany’s defeat in WWII. Traditionally, the holiday has acted as a day to remember and give thanks to the 27 million people of the Soviet Union who lost their lives during the conflict.

Since Vladimir Putin’s rise to power, however, the holiday has morphed from a day of remembrance into a day devoted to projecting political, ideological, and military strength. Unsurprisingly for a strongman dictator, Putin takes center stage in the festivities. Since 2000, he has given a speech in Moscow’s Red Square; while the specifics change each time, the core remains eternal – patriotic duty requires that Russians remember their historical traditions and that they continuously fight for their way of life.

These themes of tradition, strength, and duty were boosted in 2014 after Russia illegally annexed Ukraine’s Crimean Peninsula in a supposed attempt to free it from Nazi control. Indeed, during that year’s celebrations, Sergei Aksyonov, the Kremlin-appointed leader of the annexed region, appeared alongside Putin, discussing rescuing the peninsula from Ukrainian fascists. Of course, rescuing people requires weapons, and in recent years, in scenes reflecting those in North Korea, China, or the U.S., Victory Day has come with displays of military might. During the celebrations, an endless parade of soldiers, tanks, missiles, and other military hardware rolls down the streets of Moscow, Saint Petersburg, Yekaterinburg, and multiple other cities.

One might wonder why all this is necessary. Why has Putin sought to isolate Russia from the West, invade its neighbors, and act as if it’s constantly under ideological, and recently literal, attack? It is unlikely that a single answer exists to this question as multiple economic, political, and personal factors have a causal impact. Nevertheless, through German jurist and political theorist Carl Schmitt’s work, we can likely uncover at least one answer – Putin needs an enemy.

Before getting to Schmitt’s work, we need to acknowledge his background. Schmitt was a member of Germany’s Nazi party, something for which he remained unrepentant. Despite this (perhaps because of it), however, Schmitt’s work has significantly influenced legal and political philosophy. Indeed, his theories are often seen as one of the most substantial and robust criticisms of liberalism as embodied in the West today. Moreover, as Western liberalism is something against which Putin claims to be rallying, even calling it obsolete during a 2019 interview, it makes Schmitt’s work the ideal tool to interpret the dictator’s actions.

According to Schmitt, liberalism emerged in response to the historical “threat” of unconstrained power as wielded by heads of state unencumbered by systems of checks and balances. In other words, liberal democracies have predesigned norms and rules (in things like constitutions) to prevent the politically powerful – kings, dictators, emperors – from wielding unlimited power. In such states, if a president or prime minister oversteps their authority, the people can wave the constitution in their face and tell them “no.” We saw an example of this in 2019 when then-Prime Minister Boris Johnson attempted to prorogue the U.K. parliament, only to be told by the Supreme Court that his actions were “null and of no legal effect.” Essentially, Johnson tried to exercise power beyond his station, and the U.K.’s system of checks and balances prevented him from doing so.

For Schmitt, however, this system of enforceable restraint on a sovereign’s power is ineffectual because those in political control can declare states of emergency – what Schmitt terms “states of exception” – in which rulers can suspend constitutional rules and reclaim their power. The COVID pandemic provides a useful example: multiple nations curtailed seemingly uncurtailable freedoms, like the freedom of movement and assembly. While in “normal” times such restrictions would go against the foundations of what so many liberal nations purportedly hold dear, during the pandemic, those in power reclaimed such all-encompassing controls. This is not to say that doing so was unnecessary or wrong. It is simply to note that liberalism’s foundational rules and norms are easily discarded when things become challenging, and the discarded protections may not be reinstated afterwards. For Schmitt, constitutions only prevent sovereigns from acting as they want in non-emergency situations, when they wouldn’t behave like that anyway.

According to Schmitt, however, liberalism isn’t bad just because it hides the ruling class’s power under the guise of procedures and rules. Instead, it’s bad because the commonly held goal of liberalism – welcoming differences to the political sphere – runs counter to politics’ very nature. As Schmitt summarizes in his book The Concept of the Political, “the specific political distinction to which political actions and motives can be reduced is that between friend and enemy.” Schmitt argues that every aspect of politics is, at its core, about being with those you relate to and defending against those who are different. This confrontational philosophy even extends to one’s sense of identity, as we define ourselves not by who we are or what we do or believe, but in opposition to the actions and beliefs of others. Or, as Schmitt writes, “tell me who your enemy is and I will tell you who you are.” As such, liberalism, with its envisioned goal of debating political differences and living alongside those with whom you may vehemently disagree, will ultimately fail because it misunderstands the very nature of the political.

We can see a similar ideology play out in Putin’s Russia. Those things deemed different or non-Russian are being systematically erased from mainstream Russian culture. A prime example of this has been the growing effort to wipe out the LGBTQ+ community within the nation’s boundaries. Since coming to power, Putin has reversed much of the progress Russia made on LGBTQ+ rights after the Soviet Union’s fall. In Putin’s Russia, if you don’t conform to the stereotypical gender roles of masculinity and femininity, you are not a friend but an enemy. And this mentality of conformity applies beyond one’s sexuality.

Ultimately, those who embrace diversity and difference are, according to Schmitt, weak. He argues that these societies lack the cohesion to form established collective identities. Without this collective identity, the nation has no sense of self and cannot effectively manage its internal or external affairs; it’s like trying to herd cats, dogs, dolphins, ants, oak trees, and sparrows simultaneously. This ineffectualness means that liberal states, according to Schmitt, are sitting ducks, just waiting for those who can enforce a collective identity to do so, whether through military power or ideological inception. His answer is to reject liberalism and embrace the friend-enemy distinction. It is to unify the populace over which one rules by giving them an enemy against which they can collectively define themselves, strengthening the state and cementing one’s position of power.

In the case of Putin’s invasion of Ukraine, it is to position his nation as the saviors of the Donbas and its associated regions. The Russian military is not invading another country to expand its territory but to fight an enemy the whole nation can unite against – Ukrainian Nazis. It is to rescue those people living in eastern Ukraine who are, according to Putin, truly Russian and bring them back into the fold of others like them. Indeed, the more one explores the actions of Putin and his government over the past two decades, the more one sees an embodiment of the political and legal philosophy Carl Schmitt thought made for a strong nation.

None of this is to say that Schmitt or Putin is in the right. On the contrary, both seem to be simply terrible people with truly disturbing capacities for inflicting harm upon others. Nevertheless, the ideas of the former seem to describe the latter’s actions rather well. And understanding Putin’s actions is essential to making sense of the world in which he operates.

The Perils of Perfectionism in Energy Policy

nuclear power plant tucked in rolling green hills

Last month, Germany closed its three remaining nuclear power plants, eliciting an open letter of protest from two Nobel laureates, senior professors, and climate scientists. Nuclear energy is one of, perhaps the, least carbon-intensive power sources, additionally boasting a smaller environmental impact than some other low-carbon alternatives due to its compact footprint. However, Germany has struggled to replace its fossil fuel plants with greener options. Consequently, phasing out nuclear energy will require burning more coal and gas, increasing emissions of CO2 and deadly air pollutants.

Ironically, the political movement against German nuclear power was led by ecological activists and the Green Party. According to their election manifesto, nuclear energy is “a high-risk technology.” Steffi Lemke, Federal Minister for the Environment and Nuclear Safety, argued, “The phase-out of nuclear power makes our country safer; ultimately, the risks of nuclear power are uncontrollable.”

While there is some risk associated with nuclear energy, as evidenced by disasters like Chernobyl, the question remains: Are the German Greens justified in shutting down nuclear power plants due to these risks?

Risks, even very deadly ones, can be justified if the benefits are significant and the chance of a bad outcome is sufficiently low. The tradeoff with nuclear power is receiving energy at some level of associated risk, such as a nuclear meltdown or terrorist attack. Despite these risks, having access to energy is crucial for maintaining modern life and its conveniences – lights, computers, the internet. In fact, our lives might be more dangerous without energy, as our society would be much poorer and less capable of caring for its citizens.

It might be argued that another energy source could provide the same benefits without the risks of nuclear power. However, it is essential to gain perspective on the relative risks involved. Despite the fixation on nuclear meltdowns, nuclear power is significantly less risky than alternatives.

For every terawatt hour (TWh) produced, coal energy, still widely used in Germany, causes an estimated 25 deaths through accidents and air pollution. Natural gas, which is growing in German energy production, is safer, causing around three deaths per TWh. In contrast, nuclear power results in only 0.07 deaths/TWh, making it 467 times safer than brown coal and 40 times safer than natural gas. Accounting for deaths linked to climate change would further widen these disparities. A coal plant emits 273 times more CO2 (and 100 times more radiation) than a similar-sized nuclear plant. By eliminating the risks of nuclear energy, Germany inadvertently takes on even greater environmental and health risks.
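As a rough check on these multiples using the per-TWh figures above (and assuming a brown-coal rate of about 33 deaths per TWh, a commonly cited estimate that the paragraph does not state explicitly):

$$\frac{33}{0.07} \approx 470 \text{ (brown coal)}, \qquad \frac{3}{0.07} \approx 43 \text{ (natural gas)}$$

The quoted ratios of 467 and 40 correspond to slightly different underlying estimates, but the orders of magnitude match.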

Germany is in the process of transitioning to renewable energy sources, such as wind and solar. It may be justifiable to shut down nuclear power and eliminate the associated risks assuming that nuclear power is being entirely replaced with renewable sources. However, as of 2021, 75% of German energy came from fossil fuels. Had Germany maintained its nuclear power plants, its growing renewables could be replacing much more fossil fuel energy production. Replacing good with good is not as impactful as replacing bad with good.

The German Greens are correct that nuclear power has some associated environmental and health risks. They chose a strategy of moral perfectionism, doing whatever was necessary to eliminate those risks.

But pushing to eliminate nuclear energy, in the name of safety and environmentalism, has inadvertently led to increased reliance on fossil fuels and heightened environmental and health risks. This demonstrates the potential pitfalls of adhering to our principles and values without considering compromises and trade-offs.

We should, however, be cautious. Just as moral perfectionism can lead us astray, too easily abandoning our principles in the name of pragmatism risks ethical failures of other kinds.

Act consequentialism is probably the most “pragmatic” moral theory. It posits that the right action is whatever creates the best consequences. You should lie, steal, and kill whenever doing so produces the best outcome (although it rarely does).

Critics of consequentialism argue that it leaves little room for individuals to maintain their integrity or act on their personal values. The philosopher Bernard Williams provided an illustration: Jim, a tourist in a small South American town, finds himself with a terrible choice to either kill one innocent villager or let the local captain kill all twenty villagers. The utilitarian answer is clear: Jim should kill one villager to save the others, as it produces the best outcome. However, Williams argued that we could understand if Jim couldn’t bring himself to kill the innocent villager. If Jim failed to do so, we might not blame him, or at least not blame him harshly. Yet, utilitarianism suggests that Jim would be doing just as much wrong as if he personally killed all but one of the villagers: his refusal results in nineteen more deaths. This example demonstrates the extreme moral pragmatism of consequentialism, which seemingly overlooks the importance of personal integrity and living according to one’s beliefs and values. This is the danger of taking moral pragmatism too far.

But the anti-nuclear Greens may provide an example of moral perfectionism going too far. Morality is not solely about sticking to your principles. Balancing costs and benefits, compromising, and prioritizing are all equally important. We cannot afford to let the pursuit of perfection prevent us from doing the good we can. But neither can we entirely abandon our personal values and principles, as doing so risks devaluing the personal factors that allow us to make sense of our lives. Perhaps there is some room, in some cases, for acting on principle even if it doesn’t result in the best consequences.

Specious Content and the Need for Experts

photograph of freestanding faucet on lake

A recent tweet shows what looks to be a photo of a woman wearing a kimono. It looks authentic enough, although not knowing much about kimonos myself I couldn’t tell you much about it. Now that I’ve learned the image is AI-generated, my opinion hasn’t really changed: it looks fine to me, and if I ever needed to use a photo of someone wearing a kimono, I may very well choose something that looked the same.

However, reading further, we see that the image is full of flaws. According to the author of the tweet, who identifies themselves as a kimono consultant, the fabric doesn’t hang correctly, and there are pieces seemingly jutting out of nowhere. Folds are in the wrong place, the adornments are wrong, and nothing really matches. Perhaps most egregiously, it is styled in a way that is reserved only for the deceased, which would make the subject either someone committing a serious faux pas, or a zombie.

While mistakes like these would fly under the radar of the vast majority of viewers, they are indicative of the ability of AI-powered generative image and text programs to produce content that appears authoritative but is riddled with errors.

Let’s give this kind of content a name: specious content. It’s the kind of content – be it in the form of text, images, or video – that appears to be plausible or realistic on its surface, but is false or misleading in a way that can only be identified with some effort and relevant knowledge. While there was specious content before AI programs became ubiquitous, the ability of such programs to produce content on a massive scale for free significantly increases the likelihood of misleading users and causing harm.

Given the importance of identifying AI-generated text and images, what should our approach be when dealing with content that we suspect is specious? The most common advice seems to be that we should rely on our own powers of observation. However, this approach may very well do more harm than good.

A quick Googling of how to avoid being fooled by AI-generated images will turn up much of the same advice: look closely and see if anything looks weird. Media outlets have been quick to point out that AI image-generating tools often mess up hands and fingers, that sometimes glasses don’t quite fit right on someone’s face, or that body parts or clothes overlap in places where they shouldn’t. A recent New York Times article goes even further and suggests that people look for mismatched fashion accessories, eyes that are too symmetrically spaced, glasses with mismatching end pieces, indents in ears, weird stuff in someone’s hair, and a blurred background.

The problem with all these suggestions is that they’re either so obvious as to not be worth mentioning, or so subtle that they would escape noticing even under scrutiny.

If an image portrays someone with three arms you are probably confident enough already that the image isn’t real. But people blur their backgrounds on purpose all the time, sometimes they have weird stuff in their hair, and whether a face is “too symmetrical” is a judgment beyond the ability of most people.

A study recently discussed in Scientific American underscores how scrutinizing a picture for signs of imperfections is a strategy that’s doomed to fail. It found that while participants performed no better than chance at identifying AI-generated images without any instruction, their detection rate increased by a mere 10% after reading advice on how to look closely for imperfections. With AI technology getting better every day, it seems likely that even these meager improvements won’t last long.

We’re not only bad at analyzing specious content, but going through checklists of subtle indicators is just going to make things worse. The problem is that it’s easy to interpret the lack of noticeable mistakes as a mark of authenticity: if we are unable to locate any signs that an image is fake, then we may be more likely to think that it’s genuine, even though the problems may be too subtle for us to notice. Or we might simply not be knowledgeable or patient enough to find them. In the case of the kimono picture, for example, what might be glaringly obvious to someone who was familiar with kimonos goes straight over my head.

But these problems also guide us to better ways of dealing with specious content. Instead of relying on our own limited capacity to notice mistakes in AI-generated images, we should outsource these tasks.

One new approach to detecting these images comes from AI itself: as tools to produce images have improved, so have tools that have been designed to detect those images (although it seems as though the former is winning, for now).

The other place to look for help is from experts. Philosophers debate about what, exactly, makes an expert, but, in general, experts possess a lot of knowledge and understanding of a subject, make reliable judgments about matters within their domain of expertise, are often considered authoritative, and can explain concepts to others. While identifying experts is not always straightforward, what will perhaps become a salient marker of expertise in the current age of AI will be one’s ability to distinguish specious content from that which is trustworthy.

While we certainly can’t get expert advice for every piece of AI-generated content we might come across, increasing amounts of authoritative-looking nonsense should cause us to recognize our own limitations and attempt to look to those who possess expertise in a relevant area. While even experts are sometimes prone to being fooled by AI-generated content, the track record of non-experts should lead us to stop looking closely for weird hands and overly-symmetrical features and start looking for outside help.

Narrowly Defined: Corruption in the Court

photograph of empty suit at podium with money hanging out of jacket pocket

U.S. Supreme Court Justice Clarence Thomas was revealed by journalists at ProPublica to have received millions of dollars in undisclosed gifts from real estate billionaire and political megadonor Harlan Crow. Failing to disclose was almost certainly in violation of the Ethics in Government Act. Moreover, Harlan Crow had business before the Supreme Court, both indirectly as a board member of the American Enterprise Institute and directly via a business he partly owned. When invited before the Senate Judiciary Committee to testify on the Supreme Court’s ethical standards, Chief Justice John Roberts declined to appear. Unlike other federal judges, the Supreme Court has no formal code of ethics, and in a joint statement issued by all nine justices, the Court refused to implement an enforceable code of ethics.

This latest scandal follows in the wake of prior ethical concerns about Justice Thomas specifically, as well as declining trust in the Supreme Court generally (as discussed previously in the Prindle Post here and here). The Senate Judiciary Committee hearing on the matter was held May 2nd, and the story is ongoing. Nonetheless, it is worth stepping back to take a broader look at influence peddling and the Supreme Court. For beyond the details regarding this or that justice is a legacy of Supreme Court jurisprudence that has created a narrow understanding of malfeasance – one which merits greater ethical scrutiny: What counts as corruption? How broadly or narrowly should it be defined? Who gets to say?

One significant line of decisions concerns money in elections. In 1976, the Supreme Court ruled on Buckley v. Valeo. This controversial campaign finance decision had far-reaching implications, establishing first, that money is speech, and second, that money in the context of elections can still be regulated to prevent both corruption and the appearance of corruption. This left open the question of just what constitutes corruption and who gets to define it. Some, such as legal scholar Deborah Hellman, have argued that courts should largely defer to elected representatives as the appearance of corruption depends on the official’s role and the institution’s function.

The Supreme Court, however, has often taken a much more active role and tends to define corruption quite narrowly. In Citizens United v. Federal Election Commission (2010), for example, the Court held that corporations and nonprofits could spend unlimited money on political candidates as long as they were not formally coordinating with campaigns. In further campaign finance decisions, the Supreme Court shot down an Arizona law passed by public referendum which sought to limit the influence of money in state elections and loosened limits on individual campaign contributions. Beyond campaign finance, in a unanimous decision in McDonnell v. United States (2016), the Court held that Virginia Governor Bob McDonnell accepting $177,000 worth of gifts from business owner Jonnie Williams, Sr. was not corruption, because Governor McDonnell only returned the favor by facilitating meetings or hosting events, as opposed to, say, implementing a specific executive order in Williams’ interest.

Supreme Court rulings on corruption occur in several legal contexts and cannot be synthesized perfectly. But generally speaking, the current Court appears to consider quid pro quo (“something for something”) corruption as the primary concern. In other words, there needs to be a specific documentable instance in which a public official was offered money or other gifts in exchange for a particular executive, legislative, or judicial action. Moreover, per the Supreme Court, providing political access in exchange for money does not count as quid pro quo.

What concerns might this loose interpretation raise? Why should anyone object?

First, the Supreme Court has prioritized those with resources, namely corporations and the wealthy, by refusing to restrict their political access and spending ability. The Court has been less interested in ensuring that those with fewer resources have equal access to the political process. In short, its calculus for balancing the rights of the powerful against the rights of the less powerful may be off. The political theorist Mark Warren has called attention specifically to the exclusionary dimension of corruption, in which people are denied due influence over decisions whose outcomes affect them. The Supreme Court’s lack of concern with the selling of political access may exacerbate this form of corruption.

Second, the Supreme Court assumes that if a public official is not deliberately doing someone an explicit favor, then they have not been influenced, but this is out of step with current psychology. It is well established that we are influenced by our social networks and often engage in motivated reasoning, consciously or unconsciously altering our uptake and analysis of information on the basis of personal and community biases. Conflicts of interest appear to exert broad, and sometimes unconscious, influence – a problem that cannot be addressed simply by transparency measures. The Supreme Court has attempted to reassure Congress that its decisions are not impacted by gifts. But even if it is true that no justice has deliberately changed an opinion, this does not exclude more subtle forms of influence that come with the territory of being human.

Third, by focusing almost exclusively on quid pro quo corruption, the Supreme Court ignores a larger culture of moneyed influence and pay-to-play politics. It is not always a mystery what the wealthy and powerful want – especially those such as Harlan Crow who are extensively involved in politics. If political officials are allowed to be the beneficiaries of private largesse, they can read the room and see what actions would be well-received – no shady deals necessary. Moreover, in a context where the wealthy are specifically allowed to buy access to the politically powerful (as ruled in McDonnell), quid pro quo corruption is difficult to prevent and detect. Presumably public servants, such as Clarence Thomas, do not provide the courtesy of marking in red ink where they compromised their values.

All these concerns are heightened when it comes to the Supreme Court. Elected officials are supposed to be responsive to their electorate, so there is at least some question as to how precisely to draw the line between reasonable access and undue influence. Justices, however, are appointed and are supposed to be above the electoral fray. On this reasoning, the ethics codes covering appointed judges should be stricter than those for elected officials, not less so.

The overarching concern is that the Supreme Court has enabled an undemocratic system in which elected officials are not responsive to their voters and judges are not (reasonably) unbiased decision-makers; instead, both favor the interests of the select few who can afford access. There may be defensible reasons for the Supreme Court’s jurisprudence. However, in light of the Clarence Thomas scandal, one wonders whether the Court’s decades-long crusade against anti-corruption laws needs to be viewed with renewed suspicion.

The Garden of Simulated Delights

digitized image of woman relaxing on beach dock

There’s no easy way to say this, so I’ll be blunt. There is a significant chance that we are living in a computer simulation.

I know this sounds pretty silly. Unhinged, even. But I believe it to be true. And I’m not alone. This idea is becoming increasingly popular. Some people even believe that we are almost certainly living in a computer simulation.

It may turn out that this idea is mistaken. But it’s not a delusion. Unlike delusions, this idea is supported by coherent arguments. So, let’s talk about it. Why do people think we’re in a simulation? And why does it matter?

We should begin by unpacking the idea. Computer simulations are familiar phenomena. For example, popular programs like The Sims and Second Life contain simulated worlds filled with virtual things (trees, houses, people, etc.) interacting and relating in ways that resemble the outside world. The Simulation Hypothesis says that we are part of a virtual world like that. We have no reality outside of a computer. Rather, our minds, bodies, and environment are all parts of an advanced computer simulation. So, for example, when you look at the Moon, your visual experience is the product of the calculations of a computer simulating what would happen in the visual system of a biological human if they were to look at Earth’s satellite.

The Simulation Hypothesis is one member of a much-discussed class of hypotheses that invoke descriptions of the world that appear to be compatible with all our experience and evidence yet, if true, would contradict many of our fundamental beliefs or systematically undermine our knowledge. An archetype is the 17th-century philosopher René Descartes’s Evil Demon Hypothesis, according to which all of your sensations are being produced by a malicious demon who is tricking you into thinking you have a body, live on Earth, and so forth when in fact none of those things are true. Descartes did not think the Evil Demon Hypothesis was true. Rather, he used it to illustrate the elusiveness of certainty: Since all your sensations are compatible with the Evil Demon Hypothesis, you can’t rule it out with certainty, and consequently you can’t be certain you have a body, live on Earth, and so forth. What’s special about the Simulation Hypothesis relative to the Evil Demon Hypothesis is that there’s a case to be made for thinking that the former is true.

The basic idea behind this argument can be expressed in two key premises. The first is that it’s possible for conscious, human-like minds to exist in a computer. Note that the organ out of which consciousness arises in biological beings – the brain – is a complex physical system composed of simple parts interacting in law-governed ways. If a civilization’s technology and understanding of human-like brains becomes advanced enough, then they should be able to simulate human-like brains at a level of accuracy and detail that replicates their functioning, much like humans can now simulate a nematode brain. Simulated minds would have a nonorganic substrate. But substrate is probably less important than functioning. If the functional characteristics of a simulated brain were to replicate a biological human brain, then, the argument goes, the simulation would probably give rise to a conscious mind.

The second premise is that some non-negligible fraction of intelligent civilizations will eventually develop the capacity and motivation to run hugely many simulations of planets or universes that are populated with human-like minds, such that, across time, there are many simulated human-like minds for every non-simulated human-like mind. The exponential pace of technological development we observe within our own history lends plausibility to the claim that some intelligent civilizations will eventually develop this capacity. (In fact, it suggests that we may develop this capacity ourselves.) And there are many potential reasons why intelligent civilizations might be motivated to run hugely many simulations. For example, advanced civilizations could learn a great deal about the universe or, say, the spread of coronaviruses on Earth-like planets through simulations of universes or planets like ours, running in parallel and much faster than real time. Alternatively, spending time in a hyper-realistic simulation might be entertaining to people in advanced civilizations. After all, many humans are entertained by this sort of thing today.

If you accept these premises, you should think that the Simulation Hypothesis is probably true. This is because the premises suggest that across time there are many more simulated beings than biological beings with experiences like yours. And since all your experiences are compatible with both the possibility that you’re simulated and the possibility that you aren’t, you should follow the numbers and accept that you’re likely in a simulation. By analogy, if you purchase a lottery ticket and don’t have any special reason to think you’ve won, then you should follow the numbers and accept that you’ve likely lost the lottery.
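To make the “follow the numbers” step concrete, here is a minimal sketch of the arithmetic. Every number in it is invented purely for illustration; the simulation argument itself supplies no particular figures.

# A toy illustration of the "follow the numbers" step. All numbers
# below are invented for illustration; the argument does not fix them.

def credence_simulated(n_simulated: int, n_biological: int) -> float:
    """If your evidence is compatible with being either kind of mind,
    indifference suggests your credence that you are simulated should
    match the fraction of minds like yours that are simulated."""
    return n_simulated / (n_simulated + n_biological)

# Suppose 1,000 simulations are ever run, each containing 10 billion
# human-like minds, alongside 10 billion biological minds in history.
print(credence_simulated(1_000 * 10_000_000_000, 10_000_000_000))
# -> 0.999..., i.e., on these assumptions you are almost certainly simulated

The force of the argument comes entirely from the ratio: once simulated minds vastly outnumber biological ones, the exact counts barely matter.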

This argument is controversial. Interested readers can wiggle themselves down a rabbit hole of clarifications, refinements, extensions, empirical predictions, objections, replies (and more objections, and more replies). My own view is that this argument cannot be easily dismissed.

Suppose you agree with me. How do we live in the light of this argument? What are the personal and ethical implications of accepting that the Simulation Hypothesis is probably (or at least very possibly) true?

Well, we certainly have to accept that our world may be much weirder than we thought. But in some respects things aren’t as bad as they might seem. The Simulation Hypothesis needn’t lead to nihilism. The importance of most of what we care about – pleasure, happiness, love, achievement, pain, sadness, injustice, death, etc. – doesn’t hinge on whether we’re simulated. Moreover, it’s not clear that the Simulation Hypothesis systematically undermines our knowledge. Some philosophers have argued that most of our quotidian and scientific beliefs, like the beliefs that I am sitting in a chair and that chairs are made of atoms, are compatible with the Simulation Hypothesis because the hypothesis is best construed as a claim about what physical things are made of at a fundamental level. If we’re simulated, the thinking goes, there are still chairs and atoms. It’s just that atoms (and by extension chairs) are composed of patterns of bits in a computer.

On the other hand, the Simulation Hypothesis seems to significantly increase the likelihood of a range of unsettling possibilities.

Suppose a scientist today wants to simulate two galaxies colliding. There is no need to run a complete simulation of the universe. It may be sufficient to start the simulation a few billion years before the collision and end it a few billion years after. Moreover, it’s unnecessary and infeasible to simulate distant phenomena or every constituent of the colliding galaxies. A coarse-grained simulation containing only large celestial phenomena within the colliding galaxies may work just fine.

Similarly, the simulation we live in might be incomplete. If humans are the primary subjects of our simulation, then it might only be necessary to continuously simulate our immediate environment at the level of macroscopic objects, whereas subatomic and distant phenomena could be simulated on an ad hoc basis (just as hidden-surface removal is used to reduce computational costs in graphics programming).
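As a deliberately toy illustration of this cost-saving idea (a sketch of lazy, on-demand detail, not a claim about how any actual simulation would be built), fine-grained state could be computed only at the moment something is observed, much as a renderer skips surfaces the camera cannot see:

# A toy sketch of on-demand detail, analogous to hidden-surface
# removal: fine-grained state is computed only when observed.
# Purely illustrative; no real system is being modeled here.

class Region:
    def __init__(self, name: str):
        self.name = name
        self._fine_detail = None  # left unsimulated until needed

    def observe_closely(self) -> str:
        # Subatomic-level detail is filled in only on demand, then cached.
        if self._fine_detail is None:
            self._fine_detail = f"fine-grained state of {self.name}"
        return self._fine_detail

andromeda = Region("Andromeda")
print(andromeda._fine_detail)       # None: nobody has looked yet
print(andromeda.observe_closely())  # computed only at this moment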

More disturbingly, it’s possible that our universe or our lives are much shorter-lived than we think. For example, suppose our simulators are interested in a particular event, such as a nuclear war or an AI takeover, that will happen tomorrow. Alternatively, suppose our simulators are interested in figuring out whether a simulated being like you who encounters the Simulation Hypothesis can discover they are in a simulation. It would have made sense relative to these sorts of purposes to have started our simulation recently, perhaps just a few minutes ago. It would be important for simulated beings in this scenario to think they have a longer history, remember doing things yesterday, and so on, but that would be an illusion. Worse, it would make sense to end our simulation after the event in question has run its course. That would likely mean that we will all die much sooner than we expect. For example, if you finish this article and dismiss it as a frivolous distraction, resolving never again to think about the Simulation Hypothesis, our simulators might decide that they’ve gotten their answer – a person like you can’t figure out they’re in a simulation – and terminate the simulation, destroying our universe and you with it.

Yet another disturbing possibility is that our simulation contains fewer minds than it seems. It could be that you are the sole subject of the simulation and consequently yours is the only mind that is simulated in detail, while other “humans” are merely programmed to act in convincing ways when you’re around, like non-playable characters. Alternatively, it could be that humans are simulated in full detail, but animals aren’t.

Now, the Simulation Hypothesis doesn’t entail that any of these things are true. We could be in a complete simulation. Plus, these things could be true even if we aren’t simulated. Philosophers have long grappled with solipsism, and Bertrand Russell once discussed the possibility that the universe sprang into existence minutes ago. However, there doesn’t seem to be any available explanation as to why these things might be true if we aren’t simulated. But there is a readily available explanation as to why they might be true if we are in a simulation: Our simulators want to save time or reduce computational costs. This suggests that the Simulation Hypothesis should lead us to raise our credence in these possibilities. By analogy, a jury should be more confident that a defendant is guilty if the defendant had an identifiable motive than if not, all else being equal.

What are the ethical implications of these possibilities? The answer depends on how likely we take them to be. Since they are highly speculative, we shouldn’t assign them a high probability. But I don’t think we can assign them a negligible probability, either. My own view is that if you think the simulation argument is plausible, you should think there’s at least a .1% chance that we live in some sort of significantly incomplete simulation. That’s a small number. However, as some philosophers have noted, we routinely take seriously far less likely possibilities, like plane crashes (<.00003%). This kind of thing is sometimes irrational. But sometimes assigning a small probability to incompleteness possibilities should make a practical difference. For example, it should probably produce slight preferences for short-term benefits and egocentric actions. Perhaps it should even lead you to take your past commitments less seriously. If it’s Saturday night and you can’t decide between going to a bar or initiating a promised snail-mail correspondence with your lonely cousin, a small chance that the universe will end very soon, that yours is the only mind in the universe, or that you never actually promised your cousin you would write should perhaps tip the scales towards the bar. Compare: If you really believed there’s at least a 1/1000 chance that you and everyone you love will die tomorrow, wouldn’t that reasonably make a practical difference?
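One way to see how the tipping works is a back-of-the-envelope expected-value comparison. All the numbers below are invented; the only point the sketch makes is that a tiny end-of-the-world probability matters precisely when the options were nearly tied.

# A back-of-the-envelope sketch, with invented numbers, of how a tiny
# probability that the simulation ends tonight could tip a near-tie
# toward the option whose payoff arrives immediately.

p_end = 0.001  # assumed chance the simulation terminates very soon

# (value enjoyed tonight, value that only arrives later)
bar = (10.0, 0.0)       # fun now, nothing later
letter = (0.0, 10.005)  # slightly more total value, but all deferred

def expected_value(now: float, later: float) -> float:
    # Deferred value is realized only if the world survives.
    return now + (1 - p_end) * later

print(expected_value(*bar))     # 10.0
print(expected_value(*letter))  # ~9.995 -- the bar wins the near-tie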

The Simulation Hypothesis has other sorts of ethically relevant implications. Some people argue that it creates avenues for fresh approaches to old theological questions, like the question of why, if there is a God, we see so much evil in the world. And while the Simulation Hypothesis does not entail a traditional God, it strongly suggests that our universe has a creator (the transhumanist David Pearce once described the simulation argument as “perhaps the first interesting argument for the existence of a Creator in 2000 years”). Unfortunately, our simulator need not be benevolent. For all we know, our universe was created by “a sadistic adolescent gamer about to unleash Godzilla” or someone who just wants to escape into a virtual world for a few hours after work. To the extent this seems likely, some argue that it’s prudent to be as “funny, outrageous, violent, sexy, strange, pathetic, heroic, …in a word ‘dramatic’” as possible, so as to avoid boring a creator who could kill us with the click of a button.

It may be that the Simulation Hypothesis is, as one author puts it, “intrinsically untethering.” Personally, I find that even under the most favorable assumptions the Simulation Hypothesis can produce deliciously terrible feelings of giddiness and unsettlement. And yet, for all its power, I do not believe it inhibits my ability to flourish. For me, the key is to respond to these feelings by plunging head first into the pleasures of life. Speaking of his own escape from “the deepest darkness” of uncertainty brought on by philosophical reasoning, the 18th-century philosopher David Hume once wrote:

Most fortunately it happens, that since reason is incapable of dispelling these clouds, nature herself suffices to that purpose, and cures me of this philosophical melancholy and delirium, either by relaxing this bent of mind, or by some avocation, and lively impression of my senses, which obliterate all these chimeras. I dine, I play a game of backgammon, I converse, and am merry with my friends; and when after three or four hours’ amusement, I would return to these speculations, they appear so cold, and strained, and ridiculous, that I cannot find in my heart to enter into them any farther.

I suppose that, ideally, we should respond to the simulation argument by striving to create meaning that doesn’t rely on any particular cosmology or metaphysical theory, to laugh at the radical precarity of the human condition, to explore The Big Questions without expecting The Big Answers, to make peace with our pathetic lives, which might, in the final analysis, be wholly contained within some poor graduate student’s dusty computer. But that stuff is pretty hard. When all else fails, Hume’s strategy is available to us: Do fun stuff. Have stimulating conversations; eat tasty food; drink fine wine; play exciting games; read thoughtful books; watch entertaining movies; listen to great music; have pleasurable sex; create beautiful art. Climb trees. Pet dogs. The Simulation Hypothesis will probably start to feel ridiculous, at least for a while. And, fortunately, this is all worth doing regardless of whether we’re in a simulation.

Academic Work and Justice for AIs

drawing of robot in a library

As the U.S. academic year draws to a close, the specter of AI-generated essays and exam answers looms large for teachers. The increased use of AI “chatbots” has forced a rapid and fundamental shift in the way that many schools are conducting assessments, exacerbated by the fact that – in a number of cases – they have been able to pass all kinds of academic assessments. Some colleges are now going so far as to offer amnesty for students who confess to cheating with the assistance of AI.

The use of AI as a novel plagiarism tool has all kinds of ethical implications. Here at The Prindle Post, Richard Gibson previously discussed how this practice creates deception and negatively impacts education, while D’Arcy Blaxell instead looked at the repetitive and homogeneous nature of the content AIs will produce. I want to focus on a different question, however – one that, so far, has been largely neglected in discussions of the ethics of AI:

Does justice demand that AIs receive credit for the academic work they create?

The concept of “justice” is a tricky one. At its simplest, though, we might understand justice merely as fairness. And many of us already have an intuitive sense of what this looks like. Suppose, for example, that I am grading a pile of my students’ essays. One of my students, Alejandro, submits a fantastic essay showing a masterful understanding of the course content. I remember, however, that Alejandro has a penchant for wearing yellow t-shirts – a color I abhor. For this reason (and this reason alone) I decide to give him an “F.” Another student of mine, Fiona, instead writes a dismal essay that shows no understanding whatsoever of anything she’s been taught. I, however, am friends with Fiona’s father, and decide to give her an “A” on this basis.

There’s something terribly unfair – or unjust – about this outcome. The grade a student receives should depend solely on the quality of their work, not the color of their T-shirt or whether their parent is a friend of their teacher. Alejandro receives an F when he deserves an A, while Fiona receives an A when she deserves an F.

Consider, now, the case where a student uses an AI chatbot to write their essay. Clearly, it would be unjust for this student to receive a passing grade – they do not deserve to receive credit for work that is not their own. But, then, who should receive credit? If the essay is pass-worthy, then might justice demand that we award this grade to the AI itself? And if that AI passes enough assessments to be awarded a degree, then should it receive this very qualification?

It might seem a preposterous suggestion. But it turns out that it’s difficult to explain why justice would not claim as much.

One response might be to say that the concept of justice doesn’t apply to AIs because AIs aren’t human. But this relies on the very controversial assumption that justice only applies to Homo sapiens – and this is a difficult claim to make. There is, for example, a growing recognition of the interests of non-human animals. These interests make it appropriate to apply certain principles of justice to those animals – grounding the claim, for example, that it is unjust for an animal to suffer for the mere amusement of a human audience. Restricting our discussions of justice to humans would preclude us from making claims like this.

Perhaps, then, we might expand our considerations of justice to all beings that are sentient – that is, those that are able to feel pain and pleasure. This is precisely the basis of Peter Singer’s utilitarian approach to the ethical treatment of animals. According to Singer, if an animal can experience pleasure, then it has an interest in pursuing pleasure. Likewise, if something can experience pain, then it has an interest in avoiding pain. These interests then form the basis of ways in which it is just or unjust to treat not just humans, but non-human animals too. AIs are not sentient (at least, not yet) – they can experience neither pain nor pleasure. This, then, might be an apt basis on which to exclude them from our discussions of justice. But here’s the thing: we don’t want to make sentience a prerequisite for justice. Why not? Because there are many humans who also lack this feature. Consider, for example, a comatose patient or someone with congenital pain insensitivity. Despite the inability of these individuals to experience pain, it would seem unjust to, say, deprive them of medical treatment. Given this, then, sentience cannot be necessary for the application of justice.

Consider, then, a final alternative: We might argue that justice claims are inapplicable to AIs not because they aren’t human or sentient, but because they fail to understand what they write. This is a perennial problem for AIs, and is often explained in terms of the distinction between the syntax (structure) and semantics (meaning) of what we say. Computer programs – by their very nature – run on input/output algorithms. When, for example, a chatbot receives the input “who is your favorite band?” it is programmed to respond with an appropriate output such as “my favorite band is Rage Against the Machine.” Yet, while the structure (i.e., syntax) of this response is correct, there’s no meaning (i.e., semantics) behind the words. It doesn’t understand what a “band” or a “favorite” is. And when it answers with “Rage Against the Machine,” it is not doing so on the basis of its love for the anarchistic lyrics of Zack de la Rocha, or the surreal sonifications of guitarist Tom Morello. Instead, “Rage Against the Machine” is merely a string of words that it knows to be an appropriate output when given the input “who is your favorite band?” This is fundamentally different from what happens when a human answers the very same question.
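A deliberately simplistic sketch makes the point vivid. The toy program below is nothing like a real chatbot, which is vastly more sophisticated, but it isolates the syntax/semantics gap at issue: it produces the “right” outputs while understanding nothing.

# A toy "chatbot" that manipulates syntax with no semantics: it maps
# input strings to output strings and understands neither. Real
# chatbots work very differently; this only isolates the distinction.

RESPONSES = {
    "who is your favorite band?": "My favorite band is Rage Against the Machine.",
    "what ethical theory is thomas hobbes famous for?": "Contractarianism.",
}

def reply(prompt: str) -> str:
    # A purely structural lookup: the program has no concept of a
    # "band," a "favorite," or Hobbes -- only matching strings.
    return RESPONSES.get(prompt.strip().lower(), "I don't know.")

print(reply("Who is your favorite band?"))
print(reply("What ethical theory is Thomas Hobbes famous for?"))

Both answers come out “correct,” yet nothing in the program answers on the basis of anything – a possibility worth keeping in mind when we turn to students below.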

But here’s the thing: There are many cases where a student’s understanding of a concept is precisely the same as an AI’s understanding of Rage Against the Machine.

When asked what ethical theory Thomas Hobbes was famous for, many students can (correctly) answer “Contractarianism” without any understanding of what that term means. They have merely learned that this is an appropriate output for the given input. What an AI does when answering an essay or exam question, then, might not be so different to what many students have done for as long as educational institutions have existed.

If a human would deserve to receive a passing grade for a particular piece of academic work, then it remains unclear why justice would not require us to award the same grade to an AI for the very same work. We cannot exclude AIs from our considerations of justice merely on the basis that they lack humanity or sentience, as this would also require the (unacceptable) exclusion of many other beings such as animals and coma patients. Similarly, excluding AIs on the basis that they do not understand what they are writing would create a standard that even many students would fall short of. If we wish to deny AIs credit for their work, we need to look elsewhere for a justification.

Are Safeguards Enough for Canada’s Medical Assistance in Dying Law?

photograph of empty hospital bed

Just last month the Canadian government announced that it was seeking to delay an expansion to Canada’s medical assistance in dying (MAID) program. Since prohibitions on assisted suicide were declared to be in violation of Canada’s Charter of Rights and Freedoms, the program has expanded to include those without terminal illness. Now, MAID is set to expand further to include not only those with physical illness, but also those with mental illness. While some groups were disappointed by the delay, others welcomed the opportunity to further consider the lack of appropriate safeguards in place. Given that Canadian policy is much more permissive than that of other nations in allowing patients with non-terminal mental illness to be eligible, it is worth considering the moral merit of this expansion.

There are a great many reasons both supporting and opposing medically assisted suicide in general. Those who favor the practice tend to emphasize cases where unbearable pain and suffering is present and where the patient’s prognosis is terminal. It seems reasonable to limit or prevent suffering when death is assured. But it is much more complicated to consider cases outside of these narrow limits. What if the patient has some hope of recovery? What if a mental condition undermines their ability to voluntarily request death? What if the patient is suffering, not from a physical illness, but from severe clinical depression, or post-traumatic stress disorder, or dementia?

Those who defend the idea of expanding the medical assistance in dying program emphasize the suffering that exists even when the condition is neither physical nor terminal. For example, the advocacy group Dying with Dignity responded to the government’s move to delay by noting, “For those who have been denied compassion, autonomy and personal choice, this is not a short delay but yet another barrier.” Mental illness can be difficult to treat, and it seems arbitrary to treat physical suffering so markedly differently from mental suffering.

A similar argument goes for those with dementia. Individuals with dementia or Alzheimer’s undoubtedly suffer from their afflictions – many report feeling that the condition has robbed them of their identity. Yet dementia can undermine the notion that one can, with a sound mind, voluntarily choose euthanasia for oneself. This is why many have called for the ability to use advance directives. But what if there is a conflict with what the patient comes to desire later, once dementia sets in?

Even those who agree, in principle, that people suffering from these conditions deserve equal access to medical assistance in ending their life might still worry that there are insufficient safeguards in place. As an article from the Canadian Medical Association Journal reports, arguments for the inclusion of mental illness tend to focus only on severe depression, but in Belgium and the Netherlands eligibility has also included chronic schizophrenia, posttraumatic stress disorder, severe eating disorders, autism, and even prolonged grief. As the article notes, “Discussions, much less evidence-based guidance, of how to evaluate people who request assisted dying because of prolonged grief, autism, schizophrenia or personality disorders are lacking.” The health care system is simply not prepared to provide adequate support for these patients.

In Canada, the standard for receiving assistance in dying is that the condition must be “grievous and irremediable,” indicating that a patient is in an advanced state of decline which cannot be reversed. Various legal safeguards are supposed to be in place, including independent witnesses, the agreement of two medical opinions, and a signed written request. Yet many are concerned about those who might be pressured into receiving assisted death due to a lack of alternatives. For example, there were recent reports of Canadian Armed Forces members being offered assistance in dying when they couldn’t get a wheelchair ramp installed.

There was also a report last year of a 51-year-old woman named Sophia, who received medical assistance in dying due to her chemical sensitivity. Sophia was allergic to chemical cleaners and cigarette smoke but was unable to find affordable housing and was instead forced to live in a charity-run residential apartment. When COVID-19 forced her to be at home full-time, it only exacerbated the problem, until she finally ended her life. The fact that it was easier to receive death than accessible housing is obviously a problem, as Sophia herself remarked: “The government sees me as expendable trash.” Cases like these have led the United Nations to criticize Canada’s proposed law for violating the UN Convention on the Rights of Persons with Disabilities. Canada’s Minister of Disability Inclusion has expressed shock at the number of disabled people seeking death due to a lack of social supports.

As a recent article points out, “most would be hard-pressed to argue it reflects true autonomy within a range of choices when the marginalized poor are enticed towards ‘painless’ death to escape a painful life of poverty.” This undermines the idea that expansions to medically assisted dying are only being done for the sake of compassion and to preserve dignity. If the concern truly was the preservation of dignity, for example, there would be additional measures put in place to ensure that marginalized people don’t feel like death is their only real choice.

Those who support medically assisted dying for these cases might have good intentions, but good intentions can also lead to horrific outcomes. For example, the road to the opioid epidemic was paved with good intentions in the form of letters in the 1980s calling for the use of the drugs on the basis of compassion, and those who resisted were labeled “opiophobic” for their hesitancy. Compassion without critical scrutiny is dangerous.

Some might argue that even if the system isn’t perfect and people fall through the cracks, it is still ultimately better that the system is available for those who need it. The thinking here holds that while some might receive assisted death when they shouldn’t, it is still better overall that those who are eligible can receive it. However, it’s important to remember that this is often not considered a good argument in the case of the death penalty. One might respond that assisted suicide is done for the sake of compassion rather than punishment, and that this marks a significant moral difference. But one need only claim that the death penalty is administered out of compassion for the families of victims, and the response no longer holds water. Good intentions are not sufficient without a practical concern for the real-world consequences that will follow.