
Trust, Trouble, and Generative Slip-Ups in Academic Philosophy

Philosophy can happen anywhere and at any time. That’s part of its appeal. Unlike many disciplines, it doesn’t require specialized equipment or exclusive access to rare materials. It can just be. While some of the richest philosophical work emerges in dialogue with others (indeed, I think dialogue is a prerequisite for such work), anyone can reflect on existence, knowledge, or value with nothing more than time and curiosity. Philosophy is a truly democratic exercise.

And yet, there is an undeniable hierarchy within the discipline. While anyone can philosophize, only some can turn those reflections into published work which subsequently gets read by others. And while the advent of technologies like the printing press, the internet, and social media has expanded the boundaries of who writes and gets read, it is true that some writing, merely by virtue of its format and locale, gets more eyes on it than others. This is not necessarily how things ought to be, but it is how they are. When philosophers want to understand the current state of debate — what arguments are live, which questions remain unresolved, how the boundaries of knowledge might be pushed — they almost always turn to written texts, especially those that have passed through academic channels.

Nowhere is this more noticeable than in peer-reviewed journals, where much (but not all) of the discipline’s cutting-edge work is produced and shared. Of course, the reliability of this body of work has always been up for debate. Questions about honesty, interpretation, and scholarly integrity are not new. Yet, with the rise of (you guessed it) generative AI, concerns about academic publishing have taken on a new urgency.

I’ve written about the problems facing academic publishing before for the Prindle Post (see The Boldt Scandal and Academic Fraud). What I’d like to look at here is something a little different, or at least something more specific. It concerns a paper by Ognjen Arandjelović, published in the journal Bioethics, titled Against Moral Panic and Citation Fiction: A Critique of “Panem, Corticoids and Circenses” and a Proposal for Editorial Gatekeeping on Reference Integrity. What makes this piece particularly interesting is that it scrutinizes the publication process behind another article: Panem, Corticoids and Circenses: The Ethical Fallout of Enhanced Games, by Alexis Demas, published in the Journal of Medical Ethics.

At first glance, this might not sound especially interesting. Authors criticize each other’s work all the time. Indeed, it can be said to be the essence of academia. But taken together, these two articles, and an accompanying editorial, suggest something more significant: a moment of tension between two of the biggest journals within the same intellectual space. If one were inclined to speculate (which I am), Arandjelović’s piece reads almost like a direct challenge; a shot fired across the bow of another major publication. I think it may signal a shift toward more explicit forms of inter-journal critique, particularly as academia grapples with the implications of generative AI.

Perhaps a bit of context would help. Let me set the scene.

In August 2025, the Journal of Medical Ethics published Demas’s article, which argues that the Enhanced Games — an Olympic-style competition permitting (even encouraging) performance-enhancing drugs — represent a dangerous transformation of sport. According to Demas, such games undermine ethical and health standards while harmfully redefining athletic excellence. This is a familiar line of argument, and not an especially controversial one. It’s easy to imagine, for instance, how ancient Greek virtue ethicists might respond to the idea of pharmacologically enhanced achievement instead of putting in the hard graft.

For a time, nothing seemed amiss. That changed in March 2026, when Arandjelović’s critique appeared. His response operates on two levels. First, he challenges the substance of Demas’s argument, claiming it relies on weak reasoning, exaggerated claims, and inconsistent views about risk and autonomy. That, in itself, is unremarkable: academics, and most certainly philosophers, disagree all the time.

The second line of critique, however, is far more interesting. Arandjelović argues that the article is not just flawed but unreliable, pointing to what he sees as clear signs of AI involvement: fabricated citations and references that do not support the claims they are invoked to justify. This is not a minor scholarly misstep. Passing off unsupported or entirely invented sources as legitimate evidence undermines the basic trust on which academic publishing depends. And Arandjelović does more than simply note this. He goes on to highlight why this is an important problem for the field of medical ethics (and I think this applies beyond there). As he writes:

A journal publishing a commentary with multiple false citations, and seemingly nobody noticing this — not the reviewers, not the editors, not the readers — highlights a serious problem. This is especially worrying in medical ethics, where commentary can shape public and professional perceptions quickly, and where ethics language can launder factual unreliability.

That, ultimately, is the crux of the issue. This is not merely a case of AI-assisted writing slipping through the cracks. It suggests a more systemic breakdown in the quality-control mechanisms meant to safeguard scholarly standards. The question, then, is whether this is indeed a systemic problem and, if so, how much of the system has been compromised. If Udo Schuklenk’s editorial in the issue of Bioethics in which the Arandjelović article appears is to be believed, the problem, while perhaps not limited to the Journal of Medical Ethics, is not a factor at Bioethics itself. He writes:

I’m pleased to say that, apparently unlike the BMJ group of journals, Wiley, the publisher of this journal, has in place a highly sophisticated automated reference check that is available to the editorial team. This manuscript would not have gone out for peer review if it had been submitted to Bioethics, because it would have been eliminated after the reference (and possibly the AI generated content) screening. Surprisingly, the BMJ group of journals doesn’t seem to possess this sort of capacity, or it hasn’t been deployed in this instance.

To me, that reads very much as a “shots fired” moment. Schuklenk states, in no uncertain terms, that this paper wouldn’t have been published in Bioethics; that the journal’s processes, be they automated or human, are too good to allow such a piece of work onto its pages; and that Bioethics is doing more than the Journal of Medical Ethics to protect the published body of philosophical work.

This is what I find notable. It isn’t just a case of criticizing the work of another academic. It’s one publication pointing directly at another and saying, “when it comes to AI, you’re falling short.” That’s new. Or at least, it’s not something I’ve seen done with such clarity before. That’s not to say it hasn’t happened; I’m unable to read everything, and this may not be entirely new. But, nevertheless, I think it’s something to be remarked upon. Journals are coming out swinging.

Now, what I find most interesting about all of this is that it brings into focus something philosophy tends to take for granted: trust. Not just trust in individual authors, but in the entire system that produces the work we read. Philosophy runs on a good-faith agreement: we assume that citations exist, that arguments are made sincerely, and that peer review and editorial processes are doing their jobs. These assumptions sit in the background, but they make the whole thing work. When generative AI makes it easier to produce work that looks convincing without meeting those standards, then what’s at stake isn’t just a handful of questionable papers, but the conditions that allow philosophical inquiry to function at all.

We should also note the pace, or lack thereof, when it comes to philosophy’s ability to scrutinize its own institutions. We’re supposed to be good at critical thinking and scrutiny. That’s meant to be our thing. So, then, why have we been so slow to respond to the advent of generative AI? We’re quick to analyze arguments, but less inclined to question the systems that give those arguments legitimacy. What this moment does, I think, is force that question. It highlights that peer review, citation practices, and editorial judgment are not infallible and may need to adapt. That doesn’t mean abandoning new technologies or giving in to panic, but it does mean taking standards seriously; collectively, not just individually. And I know the practicalities of this are hard. I’m an editor at a journal (not one involved here), and finding reviewers is a nightmare. But it still needs to be done, and the checks still need to be made.

Ultimately, philosophy may indeed be open to anyone who considers the big questions in life, and we all do that at some point or another. But if it’s going to remain meaningful, the work that solidifies it, the work that we recommend people go and read, must be something we can trust. Without that, we’re all in trouble.

What’s Wrong with Betting on Death and Destruction?

“Prediction markets” describe themselves as platforms where people can buy and sell “shares” that represent possible outcomes of events, actions that most people would call “placing bets.” The two major players – Kalshi and Polymarket – have been lauded by their supporters as ways to gather important information about the future: by making everything available to bet on, the reasoning goes, prediction markets can harvest the wisdom of crowds in real time. On Polymarket’s and Kalshi’s respective homepages you might find people collectively predicting the value of Bitcoin, winners and losers of basketball games, Oscar award winners, and outcomes of elections in countries around the world, among an endless stream of other things that may or may not happen.
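It’s worth spelling out how that harvesting is supposed to work; what follows is the standard structure of a binary prediction contract, not a description of either platform’s exact terms. A typical contract pays out a fixed amount, say $1, if the event occurs and nothing if it doesn’t. So if shares on “Candidate X wins the election” (a hypothetical market) trade at 30 cents, the crowd is collectively pricing the event at roughly a 30% chance: a buyer stands to gain 70 cents if the event occurs and to lose 30 cents if it doesn’t, a trade that only makes sense if they think the probability is above 30%. On this view, the live price is a continuously updated, crowd-sourced probability estimate.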

According to their many, many critics, however, prediction markets are thinly-veiled online casinos that are able to remain in operation solely through the exploitation of regulatory loopholes. Since prediction markets entered the mainstream late in 2025, there have already been several calls for greater regulation of them, alongside some calls to shut them down entirely.

A recent wave of criticism came after Kalshi allowed users to bet on two events: whether the former leader of Iran, Ayatollah Ali Khamenei, would retain power, and whether nuclear weapons would be launched by a certain date. While there are certainly reasons to be concerned about the existence of platforms that allow anyone to bet on anything, there seems to be something particularly disturbing about allowing people to bet on whether someone will die, or, in the case of a nuclear weapons launch, whether a lot of us will die.

In the US, Connecticut Senator Chris Murphy called the situation “dystopian,” stating that:

“Once events that involve good and evil simply become a financial product, I don’t know how right and wrong matters any longer… People shouldn’t be rooting for people to die because they placed a bet.”

This feels like a perfectly natural reaction. But why? What’s so bad about betting on death and destruction?

We get one answer from Murphy, who says that there is something wrong with rooting for people to suffer or die. We might wonder, though, whether the act of placing a bet is the same thing as “rooting” or hoping for it to happen. There are definitely times when this is the case: when I bet on my favorite team to win a game, my betting and my desire for that outcome to occur go hand-in-hand. There are other times, though, when this isn’t necessarily the case: I might bet on something that I don’t particularly want to happen, but just think is inevitable.

For example, maybe I’ve been tracking the increased frequency of hurricanes as a result of climate change, and am confident that there will be more hurricanes than usual this year. By wagering in Kalshi’s “natural disasters” section I am not necessarily rooting for hurricanes in the sense that I actively desire there to be more of them, but am instead just confident that they will occur. The gambler might then argue that by wagering on death and destruction they do not actively hope that people will suffer or die; instead, they are just hoping to make a buck.

We might also distinguish “hoping that an event that we bet on occurs” from “hoping that all the consequences of that event occur.” For example, when I bet on my favorite team to win a game, I hope that they win. But I also know that a consequence of my team winning is that fans of the other team will feel sad. In hoping that my team wins, though, I don’t hope that other people suffer: that their suffering is a consequence of my desired outcome does not mean that it’s something I also desire. So the prediction market user might argue that even though they bet on events that have death and destruction as consequences, that does not mean that they are rooting for death and destruction.

Of course, this kind of reasoning is harder to justify when someone is betting that a specific person dies. We might then distinguish between the direct and indirect consequences of the events we bet on; for example, “fans of a rival team feeling sad” is an indirect consequence of my team winning, but “many people dying” is a direct consequence of nuclear war. It might not always be obvious how direct the consequences are when it comes to events we bet on, but the connection between mass death and nuclear bombs at least seems pretty clear. So perhaps we can say that just as it’s wrong to root for death and destruction, it is also wrong to bet on death and destruction, since even though you might not technically be rooting for it, it is a direct consequence of what you are rooting for.

However, not everyone betting on death and destruction is betting for it to happen. What about the people who bet against someone dying, or bet that there will be fewer natural disasters, or that nuclear Armageddon won’t happen any time soon?

There still seems to be something unsavory about wagering on these events, even if you’re betting that they won’t happen. Nothing we’ve said so far, though, can explain why. If I bet that there won’t be nuclear war in the near future, then it hardly seems right to say that I am rooting for anyone to die, and the direct consequences of what I am betting on also seem to be perfectly fine (e.g., no one dies from a nuclear bomb). So what’s wrong with betting against death and destruction?

Here we might look for an explanation to our old friend Immanuel Kant, who argued that it is never morally permissible to treat people merely as means to an end. Sometimes this idea is spelled out in terms of a requirement to respect the dignity of other people: when we bet on whether one or more people will die, for instance, we are arguably treating their lives as a means to make money, and not as lives worth valuing for their own sake. Note that this is the case even if we are betting on people not dying: the fact that we are treating their lives as something to be bet on at all fails to ascribe them the dignity they deserve.

We might worry that this kind of reasoning would lead us to the conclusion that every bet involving other people is morally wrong: the minute I treat you not as a person but as something that can make me money, Kant (or someone Kant-adjacent) might say that I am failing to treat you with the dignity you deserve. We might think that it’s significantly worse, though, to bet on an event that potentially involves suffering and death than to bet on whether the Raptors will win their upcoming game, even though in both cases we are using people as means to an end.

Perhaps there is something in the vicinity of Kant’s way of thinking that can explain why it feels wrong to take any bet on events involving death and destruction: these bets treat human life with a kind of disregard that is not present when we are betting on things like sports or some of the goofier prediction market options. In betting on death and destruction, the problem is not so much that one possesses the active desire to see people get hurt or die; it’s that one is exemplifying a callousness and cynicism that fails to respect one’s fellow human beings. We might then agree with Kant that there is a failure to respect the dignity of other human beings, not simply because we are using them to make a buck, but because the way we are looking to make a buck off of them treats their lives as something that is only valuable to us insofar as we can profit from them.

There’s good reason to remove the ability to bet on whether someone will die or whether nuclear weapons will be launched from prediction markets, as Kalshi has already done. But we also seem to have good reason to call for the removal of other categories involving death and destruction, such as tornados, hurricanes, earthquakes, volcanoes, and pandemics (all available on Polymarket) as well as bets on individual events like “measles cases this year?”, “new pandemic?”, and “Which vaccines will RFK end recommendations for in 2026?” (all available on Kalshi). If there is something morally repugnant about betting on death and destruction then it seems that prediction markets still have a lot of work to do.

Incendiary Insincerity: On the Ethics of Trolling

Florida, deep red and with a large population, provides a valuable window into American conservative politics. What, then, should we make of a bevy of leaked, slur-filled text messages? A group chat started by Abel Carvajal, the secretary of the Miami-Dade County Republican Party, for students at Florida International University quickly became a swamp of violent language, n-word utterances, and Nazi jokes. This is hardly a lone finding. James Fishback’s campaign, currently fighting an uphill battle for the Republican nomination in the primary race for Florida governor, employs similarly incendiary language (especially among staffers). At the national level, polling by the free-market-oriented Manhattan Institute found just over 30% of GOP members under 50 claimed they held racist views. Trump himself recently posted, then withdrew, a controversial image of Barack and Michelle Obama as apes.

Yet all these events share an ambiguity in interpretation. Are racism and other extreme views rampant, especially among younger Republicans … or are people simply “trolling”?

Trolling refers to a range of behaviors, from malicious bullying to harmless pranks. One of these is what media studies scholar Whitney Phillips calls subcultural trolling. The intent is to provoke a strong reaction from someone who (gullible fool that they are) assumes the troll is sincere. Boldly asserting something widely condemned, like racism, is perfect for subcultural trolling.

Philosopher Ralph DiFranco argues such trolling is typically ethically suspect. It violates general norms of good conversation such as respect, honesty, and treating one’s interlocutor as an equal. The intent of trolling is often to shame, embarrass, and belittle, or at least waste time and attention. But this does not mean trolling can never be a force for good. DiFranco provides the example of trolling the trolls – countertrolling those who abandon good-faith conversation. Additionally, like satire, trolling can be used to deflate the haughty and the hypocritical.

From a certain perspective, proclaiming racism or using racist language could be a way to push back against a presumed overweening political correctness or cancel culture, rather than a sincere expression of racist belief. (To be clear, one can still consider the trolls’ language and behavior harmful and worthy of condemnation, even if the proclaimed belief is not sincere.) But trolling is not automatically satire. For someone to understand something as satire, they need to know the intended message, what it’s really saying (even if the victim is not necessarily in on the joke). Trolls, however, often hold their true beliefs close to their chests, making the underlying intent ambiguous. This may even be part of the emotional appeal of trolling. The slipperiness of belief involved in trolling leads to further ethical challenges.

One of those challenges involves the identification of bad actors. From the outside, a sincere racist and an insincere racist look the same. The sincere racist (the bad actor) can use the cover of irony or trolling to advocate for and desensitize people to sincerely held, deeply racist beliefs. But it is not merely onlookers who can get confused by trolling. The troll themself can use trolling to avoid fully committing to a moral stance. In this sense, it is perfect for social media, facilitating the escape into irony to avoid the pain of having one’s views the subject of constant scrutiny and judgment. One can engage in a behavior, say mock racism, and if it receives approval from surrounding individuals, one can stick with it. If it receives condemnation, one can turn the tables on the condemner: “I can’t believe you fell for it. You idiot.”

This can harm moral self-development. While philosophers disagree about the details, it is generally accepted that virtuous behavior requires a process of cultivation. Putting one’s beliefs out there and receiving feedback can be a way to grow morally. But that growth requires sincerity. By holding beliefs in a perpetually half-joking way, the troll can avoid having to actually wrestle with their implications. What’s more, the troll never has to be honest with themselves. They can reassure themselves that they don’t really hold a belief, even while acting as if they do. In this way, harms associated with pernicious beliefs such as racism or antisemitism can occur, even without people being ideologically committed to the viewpoints.

Moral issues associated with trolling become especially complex as this trolling style achieves political prominence. Like satire and mockery, trolling is a form of discursive offense. To the extent trolling can result in good, it is likely as pushback against a stuffy, self-serious status quo. But to what end? Policies cannot be enacted ironically. Worse yet, by being inherently ambiguous in its sincerity, trolling masks candidates’ real positions. From a campaign perspective, it performs the same function as politicians lying. Only post-election can one tell trolling from truth.

Still, we should not be too quick to blame the trolls alone. Trolling thrives in a media ecosystem in which “rage-baiting” drives clicks and sincerity is for suckers. It is worth asking how politics became such a good habitat for trolls.

Dodging Blame: Iran, AI, and International Law

On Saturday, February 28th, the United States and Israel launched a new military campaign against Iran. The campaign has consisted of a series of air strikes, the first wave of which killed numerous Iranian officials, including the supreme leader Ayatollah Ali Khamenei. In subsequent days, Iran launched attacks against Israel and U.S.-aligned Gulf states in the region, as the U.S. and Israel continued their campaign.

The U.S. has not been transparent about its motivating reasons. Reasons cited to justify the campaign include (but may not be limited to) the discredited claim that this was a preemptive strike to thwart an imminent Iranian attack, the claim that the attacks are meant to prompt regime change in Iran (which the National Intelligence Council concluded was unlikely), and the claim that the bombings are meant to dismantle Iran’s nuclear program, even though the White House claimed it “completely and totally obliterated” Iran’s nuclear enrichment facilities in June 2025.

By the time this column is published, the situation will have almost certainly evolved. Despite President Donald Trump’s comments that the conflict is “very complete, pretty much” and that the U.S. and Israel have “already won in many ways,” Secretary of Defense Pete Hegseth declared on March 10th that this would be the U.S.’s most intense day of strikes against Iran yet. Intelligence now suggests that Iran has begun laying mines in the Strait of Hormuz and striking oil tankers traversing the strait, creating the potential for catastrophic ripple effects on the global economy; between 20 and 30% of the world’s crude oil supply is shipped through this waterway.

Rather than discussing the broad details, I want to focus on a particular incident in this conflict and its moral ramifications. In the initial wave of strikes against Iran, a missile struck an all-girls elementary school outside an Islamic Revolutionary Guard Corps base. Iranian officials claim that the attack killed 168 people, about 110 of whom were children. Investigations have revealed that a Tomahawk missile struck the school. Tomahawks, manufactured by Raytheon, have only been sold to the governments of the U.S., Australia, the United Kingdom, Japan and the Netherlands. Thus, it appears that the U.S. struck the school.

If the U.S. military knowingly targeted the school then this, by violating international humanitarian law, is almost certainly a war crime. The Fourth Geneva Convention, to which the U.S. is a signatory, prohibits any intentional attacks on civilians. Even if a school is on a military base, the children (and likely most of the staff) are civilians – I personally was a student at such a school. Yet some sources in the intelligence community suggest that the school was struck due to outdated intelligence. So this may have been unintentional.

But even tragic accidents may be the product of negligence, and thus blameworthy. It is worth noting that new military leadership in the U.S. appears critical of restraint and oversight. Hegseth has repeatedly emphasized lethality as the “calling card” of the U.S. military. Offices responsible for preventing and investigating civilian casualties have been gutted during his tenure. While criticizing allies who are “hemming and hawing about the use of force,” he emphasized that U.S. military actions in Iran will have “no stupid rules of engagement.” In a recent interview with 60 Minutes, he declared that the only people who should be worried about the conflict are “Iranians that think they’re gonna live,” although he went on to state that the U.S. military does not target civilians. Thus, there is reason to believe that even if accidental, the strike may have been the result of the willful dismantling of protective measures.

This occurs against a backdrop of increasing AI use in military operations. Reportedly, the U.S. is utilizing AI systems from Palantir in order to select targets for the strikes. If true, one can imagine the bombing of the elementary school having plausibly progressed as follows. (Admittedly, this is speculation on my part – do not take this as a report.) Perhaps a program designed to identify potential military targets presented the building containing the school as such, due to faulty intelligence. The targets were then insufficiently vetted against current intelligence, leading to a missile strike against an elementary school. Multiple small errors compounding into tragedy. Due to the number of errors, it may be difficult to determine who is responsible for this specific incident.

This links to what just war theorists refer to as the accountability gap. Commonly discussed in the context of autonomous weapon systems, the accountability gap emerges when it is unclear who is responsible for a particular outcome. With such systems, the gap arises because activated autonomous weapons select and engage targets without human input. Even when such a machine makes a grave error, it is unclear who is responsible; there is no specific person who chose this.

The bombing of the Iranian elementary school did not involve an autonomous weapon system. Yet a gap is not binary; it can be large or small. Any introduction of nonhuman decision-makers opens an accountability gap, and in this case the gap stems from the use of AI systems to present potential strike targets.

So, upon whom should responsibility for striking the school fall? One answer may be those who approved the strike. AI systems are “black box” technology – although we can see that a system outputs some result, we can never access the reasons why it reached that conclusion. Further, AI systems may reach biased conclusions. Thus, the appropriate standard of care, especially in the life-and-death context of military decision-making, is to carefully scrutinize the results of AI systems rather than accept them uncritically. Perhaps that standard of care was not met here.

However, there are practical problems with this view. First, although we may cognitively recognize the faults in AI decision-making, these systems are utilized because they appear to be informed and objective. Even if one knows the results of these systems ought to be scrutinized, those results may appear weightier than the conclusions of human thinkers. Second, AI systems can make data-driven decisions more quickly and perhaps more efficiently than human decision-makers. Decisions made in a military context are often incredibly time-sensitive and may command only limited resources. There may be only so much scrutinizing decision-makers can do before acting.

Alternatively, one could hold responsible those who created and/or authorized the use of AI programs in military decision-making. However, as D’arcy Blackwell describes, this may be practically very difficult. Finding the culprit for unacceptable decisions will require searching through paper trails, spatially and temporally far away from the incident itself. Further still, some who at first glance appear responsible may not be. Consider the recent public conflict between the Pentagon and the AI firm Anthropic over how Anthropic’s Claude could be utilized. Despite labeling Anthropic a supply chain risk, and thus prohibiting the use of Claude in fulfilling government contracts, the Pentagon utilized systems which rely upon Claude in target selection during recent strikes in Iran. We may see cases where military decision-makers utilize AI systems in ways not foreseen or endorsed by their creators. Thus, allocating responsibility will face yet another hurdle in these cases.

Thus, the potential role of AI in the decision to strike the elementary school appears to muddy the waters. It seems clear in principle that some combination of those who approved the strike, those who decided to utilize AI in target selection, and those who developed the program is responsible for the resultant horrific outcome. Yet once we consider the practical realities facing these decision-makers, coming to a real judgment about who precisely is responsible becomes far more difficult. Even more so if we hope to determine whether anyone ought to be punished for the strike.

Ultimately, I worry that these difficulties with allocating responsibility will serve as a significant detriment to human rights practice and international law. The more AI systems are integrated into military decision-making, the harder it will be to determine who bears specific responsibility for the violation of international law. Part of the function of international law, and the law in general, is to create deterrence. We punish people for offenses, in part, to deter others from engaging in the same behavior. But as it becomes more difficult to cleanly allocate responsibility, the more difficult it is to dole out punishment and thus the less the prospect of punishment can serve as a deterrent. So the integration of AI into military decision-making may reduce the status of international humanitarian law to mere norms – the global community has decided that we should not engage in this behavior but punishment will not be forthcoming if you do. This technology may serve as a shield, one that protects those willing to callously throw innocents into the line of fire from facing consequences for their actions.

Why You’re Not Entitled to Fight the Police

You’re sitting in an emergency room waiting area when a commotion breaks out near the doors. Two staff members and a paramedic are pushing a gurney down the hall. The patient is strapped in. He is awake, coherent, and furious. “Let me go. I’m fine. You can’t do this. You’re violating my rights.” A concerned bystander follows, insisting the same thing. A few strangers in the waiting room start to chime in. Someone tells the staff to stop.

Should you do anything? You could join the protest. You could even try to physically block the staff. But you do not know the patient’s vitals or medical history. You do not know what he initially said. You do not know what the paramedic saw ten minutes ago. Sure, the staff might be wrong. Errors and abuse do happen. But even so, the scene is not yours to take over. Doing so would be wrong.

The reason why it would be wrong for you to intervene has to do with what’s called epistemic asymmetry. It is the simple fact that in some situations the parties do not stand in anything like equal relation to the relevant information. One party knows things that the other parties cannot know on the spot. The asymmetry is not a matter of one party being morally superior, but of access to knowledge. In high-stakes situations, access to knowledge matters because it determines who can act responsibly in the moment.

The Limits of the Bystander

Now consider a scene that plays out routinely in the real world. You are walking down the street and you come across a police officer and a man on the ground, grappling. The officer is trying to get the man’s hands behind his back. In the distance, you hear sirens. The man is yelling that he did nothing wrong and that the officer is oppressing him. A small crowd forms. Someone shouts, “He didn’t do anything wrong!” The arrestee begins to scream for others to help. Meanwhile, the officer keeps saying, “Stop resisting!”

You have no idea what happened two minutes earlier. You do not know whether there is a warrant. You do not know whether the officer saw a weapon. You do not know whether the man just punched someone around the corner. You do not know whether the man is wanted for something serious. You simply have a struggle in front of you and competing claims.

Should you do anything? Who is more reasonable to trust in that moment?

The answer is not “the officer, because he wears a uniform.” It is “the officer, because of the structure of the situation.” The officer is likely acting on information you do not have. He is also acting under a set of constraints and responsibilities that you do not share. The man being arrested is an interested party. He might be innocent. He might be guilty. Either way, he has an incentive to say whatever will get him out of handcuffs. The bystanders have even less information than he does.

If we take epistemic asymmetry seriously, then intervening on an arrestee’s behalf is morally wrong in the typical case. It is not your job to adjudicate the legality of the arrest by force on the basis of a few shouted sentences and a chaotic struggle. The same is true for the arrestee. Even if he believes the arrest is unjust, he is not entitled to settle that dispute by escalating violence against an officer.

The Presumption of Reliability

What justifies this restraint is not blind obedience, but rather a presumption of reliability. Now, a presumption is not a conclusion. It is a rational starting point, adopted in the absence of defeating evidence. We rely on such presumptions constantly. When a pilot gives instructions during turbulence, passengers comply even though pilots sometimes make mistakes. When a lifeguard orders swimmers out of the water, most people do not demand proof that a riptide has formed. In each case, authority is provisional and defeasible, but still owed deference because the person issuing commands is positioned closer to the relevant facts.

Police officers occupy a similar role. They receive dispatch calls. They are given descriptions. They observe behavior before bystanders arrive. They are trained to notice things that mean little to spectators. They also operate under rules that matter even when no one else knows what those rules are. That combination generates a prima facie reason to trust their judgment in the moment.

The word trust might seem to provoke resistance, as if trusting the police required moral submission. It does not. Trust here means something modest. It means treating an officer’s actions as more likely to be grounded in relevant information than the claims of the person being arrested or the guesses of the crowd. That is a comparative judgment, not an endorsement of infallibility.

Some may object that this gives the state too much power. But it does not. It gives the moment less power. It insists that disputes over justification be resolved where reasons can be assessed, evidence weighed, and incentives constrained.

Another objection is that police abuse is real, and in some communities persistent. That is true, but also beside the point. A presumption can be defeated. If an officer is clearly acting outside the bounds of law and restraint, self-defense principles apply. But most cases that people label unjust arrests are not cases of obvious criminal violence by police. They are cases of disputed authority under conditions of uncertainty. Those are precisely the cases where epistemic humility is required.

Just and Unjust Arrests

Most jurisdictions criminalize resisting arrest, or using force against a law enforcement officer, even if the person being arrested believes the arrest is unjust. For example, in Kansas (where I work), KSA 21-5229 states that “a person is not authorized to use force to resist an arrest which such person knows is being made either by a law enforcement officer… even if the person arrested believes that the arrest is unlawful.”

That can strike some people as odd. If you sincerely believe that the state is acting unjustly, why can’t you resist? If you believe that an officer is violating your rights, why must you submit? The intuition is understandable. We are used to thinking that perceived injustice licenses immediate opposition, and that compliance amounts to complicity. But that way of thinking ignores the moral significance of not knowing what you do not know.

Laws against resisting seemingly unjust arrests are not premised on the absurd idea that police officers are infallible. They are premised on the reality that arrests occur under conditions of severe informational asymmetry. The officer is acting on facts that are typically unavailable to the person being arrested and entirely unavailable to bystanders. The law treats that asymmetry as morally relevant. It builds in a presumption that the officer’s use of force is more likely than not to be justified in the moment, even when that presumption later turns out to be false.

Belief is cheap and knowledge is hard. People routinely believe they are being wronged when they are not, and just as routinely claim injustice when it is convenient to do so. The law cannot treat sincerity as a moral trump card in situations where bad faith is common and information is unevenly distributed.

What matters is not whether an arrestee believes the arrest is unjust, but whether it is unjust. And that is not something the arrestee, much less the crowd, is typically in a position to determine in the moment. Epistemic asymmetry makes belief an unreliable guide to reasonableness, which is precisely why the use of force cannot be conditioned on one’s own assessment of injustice.

Trust, Then Verify

The point, then, is not that the police are always right. It is that force is a moral instrument with demanding preconditions, and one of those preconditions is appropriate knowledge. In the context of an arrest, that knowledge is almost never available to the crowd watching from the sidewalk.

If an arrest is truly unjust, what matters is that it be shown to be so under standards designed to distinguish error from excuse and authority from abuse. That work cannot be done in the middle of a struggle. It can only be done once the moment has passed and reasons can be given their due. The place for that is the courtroom.

Do Digital Age-Gates Threaten Our Privacy?

In a press release in February 2026, Discord announced that it would be introducing “teen safety features” to create a “safer and more inclusive experience for users over the age of 13.” In practice, this means that users will be required to verify their age to access certain content, either by uploading a piece of ID or a picture that is then scanned using “facial age estimation” technology.

Discord is one of many apps that are beginning to require age checks to use their platforms in different parts of the world, especially in the UK and Australia. It is also not alone in receiving backlash for its decision: a common complaint among users is that using age-verification tools risks violating their privacy, since tech companies have not had the best track record in keeping personal data safe from bad actors. Indeed, Discord itself delayed the implementation of its age-checking tools in response to concerns over one of its vendors being hacked.

It’s perfectly reasonable to be wary of how tech companies handle personal data. But when it comes to needing to prove one’s identity online, privacy concerns can also represent something more than just worries about hacks. To see why, we should ask: what do we mean when we say we’re concerned about the privacy of our information?

We should have realistic expectations about the privacy of our information online. People ought to work under the assumption that any information they post publicly will be available to everyone and in perpetuity, and that companies gather information about them whenever they interact with anything online. Discord (and other apps) don’t typically need to count the number of wrinkles on your face to determine your age: your online habits paint a picture, enough to estimate whether you are or aren’t a teenager.

While users are often content to trade access to some of their information for free services online, few would be willing to make their lives an open book for tech companies. We value control over the information communicated to others about us. When we worry about companies violating our privacy we worry about losing this kind of control: even if we have a sense of how a company might use our information – e.g., to try to sell us things or present us with content that will keep us on the platform – we might worry that information could be used in ways that we don’t endorse.

Discord’s latest policies attempt to address these concerns by promising that not all users will need to verify their age, that facial scans are not uploaded to Discord and “IDs are used to get your age only and then deleted,” and that “your identity is never associated with your account.” If your personal information is never actually received by Discord (or if it is, but is then deleted immediately), then our worries over control seem to have less pull.

But even if we are guaranteed no data breaches, being asked to provide personal information can still feel like a violation of privacy. This is because privacy isn’t only about keeping secrets, it also involves choosing which information we reveal and how it is interpreted. Philosopher Daniel Susser, for instance, argues that one of the reasons we value online privacy is because the way that we conceal and reveal personal information helps to “shape the way others perceive and understand who we are,” what he calls “social self-authorship.”

Say, for example, that in my professional life I like to maintain an air of seriousness and competence. At work, I value my privacy insofar as I am able to both conceal and reveal information about myself, and do so to create the persona of someone who gets down to business. In my personal life, however, I might be quite different: around my friends, I take myself much less seriously and am forthcoming about my beliefs and feelings. I still choose to conceal some information and reveal other information, but it is different from what I reveal and conceal at work.

This likely sounds familiar: we take on different personas depending on the context we’re in and the people we’re around. One way your privacy can be violated is if information about yourself from one sphere of your life is made known to those in another in a way that threatens your ability to determine how others see you. If, for example, I like singing the works of Celine Dion at karaoke in my personal life but do not want my work persona to be that of an appreciator of a French-Canadian chanteuse, then that information becoming known to my colleagues can constitute a violation of my privacy insofar as it undermines my ability to be the author of my work-self.

Why is the notion of privacy as social self-authorship important when it comes to providing information about ourselves to tech companies? It reminds us that concerns about the privacy of information are not simply about keeping things secret, but about choosing how we present ourselves – which information to present to whom and when. Even if a tech company ensures that our information is kept secret, the act of sharing that information can constitute crossing a boundary that we don’t want to cross. For instance, if part of the appeal of using certain apps or websites is the ability to do so anonymously, or to explore niche interests, or express part of your personality that you wouldn’t be able to express in other areas of your life, then attaching your identity to something can feel like an intrusion – you must introduce aspects of your life that you have created for yourself in one sphere into another.

Consider the difference between being in favor of something as a fact about what you believe and signing your name on a petition: the act of attaching your identity to something incorporates that thing into the person you are. Likewise, being required to provide information about your identity to use a product can then feel like a violation of privacy insofar as it does not respect your ability to separate your online persona from that which you author in other spheres of your life.

One reason we might not want to give tech companies our information is that we do not trust them and want to take a stand against overreach. By continuing to use their products, one thus brands oneself as someone who capitulates to the demands of those companies: using Discord says that you are willing to provide information about your identity and are at least somewhat trusting of the company behind it. Whether this is something that we want to be part of our identities is something that users must now grapple with.

Mengzi, Xunzi, and Punch the Monkey

Punch, a baby macaque monkey at Ichikawa City Zoo in Japan, has tugged on millions of heartstrings in his short seven months of life. He has become a worldwide sensation due to his inseparable relationship with an unlikely companion: an orange stuffed orangutan. Zookeepers gave the stuffed orangutan to Punch after his macaque mother rejected him shortly after birth. The zoo saw nearly 5,000 visitors a day in late February owing to the phenomenon; posts and videos of Punch have attracted millions of views and likes.

How does little Punch connect with two ancient philosophers and a millennia-old philosophical question? It turns out our spontaneous reactions to watching videos and viewing images of Punch may just be a modern-day viral equivalent of a 2,000-year-old philosophical thought experiment.

Mengzi (c. 372-289 BCE) was an ancient Confucian philosopher who is famous for advancing the view that human nature is good. In making this claim, Mengzi was responding to a vibrant debate in ancient Confucian philosophy on the question of whether human nature was fundamentally good, bad, or neither. Contemporary philosophers are still asking the same questions.

Mengzi wasn’t a rosy-eyed, naïve optimist about human beings — he didn’t think all of us are fully virtuous or good people. Rather, he thought all human beings had the potential to become good. He thought this predisposition toward goodness helped to explain why we could achieve a flourishing human life, exhibited through the practice of certain key virtues. He compares our innate moral dispositions toward goodness to “sprouts” which can be nurtured into full virtues. According to an ancient collection of sayings, Mengzi says:

As for what they are inherently, [human beings] can become good. This is what I mean by calling their natures good. As for their becoming not good, this is not the fault of their potential. Humans all have the feeling of compassion. Humans all have the feeling of disdain. Humans all have the feeling of respect. Humans all have the feeling of approval and disapproval… Benevolence, righteousness, propriety, and wisdom are not welded to us externally. We inherently have them. It is simply that we do not reflect upon them [to develop them into full virtues]. (Mengzi 2A6.5-7; Bryan Van Norden trans.)

To support this claim, Mengzi deploys a thought experiment to show that human beings naturally have the “sprout” of benevolence within them. He considers what is called the “child and the well” case. In it, Mengzi asks us to imagine a small child toddling toward a well, about to fall in. Mengzi thinks that nearly all of us would show spontaneous and unthinking care for this child, stepping in to save it from falling into the depths of the well. Mengzi says that we would do this even if we thought there would be no reward money, even if we knew nothing of the child’s parents, and regardless of whether the child’s cries would annoy us.

This is where Punch comes in. The widespread Internet engagement with Punch may suggest that (at the very least) millions upon millions of people have the “sprout” or starting-point of benevolence within them, providing us a contemporary viral version of Mengzi’s “child and the well” case. When we see little Punch missing a mother and needing love and care, the thinking goes, we feel a spontaneous surge of benevolence, hoping to help this little child-like animal. Morton Kringelbach, Professor of Neuroscience at the University of Oxford, commenting on the underlying moral neurocircuitry at work as we watch Punch, says that witnessing Punch’s plight “reminds us of what it is really to be human.” He says that these experiences are the gateway to empathy and compassion.

Xunzi (313-238 BCE), another important ancient Confucian philosopher, disagrees. He criticizes Mengzi, saying that following our human natures is a sure path to destruction. Human nature, he argues, is bad. He says:

People’s nature is bad. Their goodness is a matter of deliberate effort. Now people’s nature is such that they are born with [a fondness for profit, feelings of hate and dislike, desires of the eyes and ears…] [If] people follow along with their inborn dispositions and obey their nature, they are sure to come to struggle and contention, turn to disrupting social divisions and order, and end up becoming violent. So, it is necessary to await the transforming influence of teachers and models and the guidance of ritual and yi (righteousness), and only then will they come to yielding and deference, turn to proper form and order, and end up becoming controlled. (Xunzi 23.1-17; Eric Hutton trans.)

Xunzi points to war, cruelty, greed, and desires for pleasure as the roots of human evil. He thinks our natures can’t be trusted and questions whether we all really have the sprouts of virtue Mengzi describes.

But Xunzi faces challenges in arguing for this view. He still wants to think we can develop and grow into better people with the transforming influence of moral exemplars he calls sages. But, if we’re all that bad, how can we ever become good? Interestingly, Xunzi eventually argues that we may be something of a mixed bag — our natures exhibiting some positive, prosocial motivations common to all social animals and exhibiting destructive, selfish human motivations that pit us against one another.

The overwhelmingly loving and caring reaction to Punch’s all-too-relatable loneliness and exclusion (and primal need for love and connection), we might think, could be seen to tip the scales in favor of Mengzi in this ancient (and modern) debate about humanity’s starting points. It doesn’t evidence fully formed virtues in viewers, to be sure, but perhaps it does show something like sprouts of benevolence, caring, and compassion in millions upon millions of viewers around the world.