On Saturday, February 28th, the United States and Israel launched a new military campaign against Iran. The campaign has consisted of a series of air strikes, the first wave of which killed numerous Iranian officials, including Supreme Leader Ayatollah Ali Khamenei. In subsequent days, Iran launched attacks against Israel and U.S.-aligned Gulf states as the U.S. and Israel continued their campaign.
The U.S. has not been transparent about its motivating reasons. Justifications offered for the campaign include (but may not be limited to) the discredited claim that this was a preemptive strike to thwart an imminent Iranian attack, that the attacks are meant to prompt regime change in Iran (which the National Intelligence Council concluded was unlikely), and that the bombings are meant to dismantle Iran’s nuclear program – even though the White House claimed it “completely and totally obliterated” Iran’s nuclear enrichment facilities in June 2025.
By the time this column is published, the situation will almost certainly have evolved. Despite President Donald Trump’s comments that the conflict is “very complete, pretty much” and that the U.S. and Israel have “already won in many ways,” Secretary of Defense Pete Hegseth declared on March 10th that this would be the U.S.’ most intense day of strikes against Iran yet. Intelligence now suggests that Iran has begun laying mines in the Strait of Hormuz and striking oil tankers traversing it, creating the potential for catastrophic ripple effects on the global economy; between 20 and 30% of the world’s crude oil supply is shipped through this waterway.
Rather than discussing the broad details, I want to focus on a particular incident in this conflict and its moral ramifications. In the initial wave of strikes against Iran, a missile struck an all-girls elementary school outside an Islamic Revolutionary Guard Corps base. Iranian officials claim that the attack killed 168 people, about 110 of whom were children. Investigations have revealed that a Tomahawk missile struck the school. Tomahawks, manufactured by Raytheon, have only been sold to the governments of the U.S., Australia, the United Kingdom, Japan, and the Netherlands – and of these, only the U.S. is a party to this conflict. Thus, it appears that the U.S. struck the school.
If the U.S. military knowingly targeted the school, then the strike violates international humanitarian law and is almost certainly a war crime. The Fourth Geneva Convention, to which the U.S. is a signatory, prohibits any intentional attack on civilians. Even if a school is on a military base, the children (and likely most of the staff) are civilians – I was myself a student at such a school. Yet some sources in the intelligence community suggest that the school was struck due to outdated intelligence, so the strike may have been unintentional.
But even tragic accidents may be the product of negligence, and thus blameworthy. It is worth noting that the new military leadership in the U.S. appears critical of restraint and oversight. Hegseth has repeatedly emphasized lethality as the “calling card” of the U.S. military. Offices responsible for preventing and investigating civilian casualties have been gutted during his tenure. While criticizing allies who are “hemming and hawing about the use of force,” he emphasized that U.S. military actions in Iran will have “no stupid rules of engagement.” In a recent interview with 60 Minutes, he declared that the only people who should be worried about the conflict are “Iranians that think they’re gonna live,” although he went on to state that the U.S. military does not target civilians. Thus, there is reason to believe that, even if accidental, the strike may have resulted from the willful dismantling of protective measures.
This occurs against a backdrop of increasing AI use in military operations. Reportedly, the U.S. is utilizing AI systems from Palantir to select targets for the strikes. If true, one can imagine the bombing of the elementary school plausibly progressing as follows. (Admittedly, this is speculation on my part – do not take it as a report.) Perhaps a program designed to identify potential military targets flagged the building containing the school as such, due to faulty intelligence. The target was then insufficiently vetted against current intelligence, leading to a missile strike against an elementary school. Multiple small errors compounded into tragedy. Given the number of errors, it may be difficult to determine who is responsible for this specific incident.
This links to what just war theorists refer to as the accountability gap. Commonly discussed in the context of autonomous weapon systems, an accountability gap emerges when it is unclear who is responsible for a particular outcome. With autonomous weapons, the gap arises because, once activated, such systems select and strike targets without human input. Even when such a machine makes a grave error, it is unclear who is responsible; no specific person chose that target.
The bombing of the Iranian elementary school did not involve an autonomous weapon system. Yet a gap is not binary – it can be large or small. Any insertion of nonhuman decision-makers into the chain introduces an accountability gap; in this case, the use of AI systems to propose potential strike targets.
So, upon whom should responsibility for striking the school fall? One answer is those who approved the strike. AI systems are “black box” technology – although we can see that a system outputs some result, we can never access the reasons why it reached that conclusion. Further, AI systems may reach biased conclusions. Thus, the appropriate standard of care, especially in the life-and-death context of military decision-making, is to carefully scrutinize the results of AI systems rather than accept them uncritically. Perhaps that standard of care was not met here.
However, there are practical problems with this view. First, although we may cognitively recognize the faults in AI decision-making, these systems are utilized precisely because they appear informed and objective. Even if one knows that their results ought to be scrutinized, those results may carry more apparent weight than the conclusions of human thinkers. Second, AI systems can make data-driven decisions more quickly, and perhaps more efficiently, than human decision-makers. Decisions made in a military context are often incredibly time-sensitive and may command only limited resources. There may be only so much scrutinizing decision-makers can do before acting.
Alternatively, one could hold responsible those who created and/or authorized the use of AI programs in military decision-making. However, as D’arcy Blackwell describes, this may be very difficult in practice. Finding the culprit for unacceptable decisions will require searching through paper trails spatially and temporally distant from the incident itself. Further still, some who at first glance appear responsible may not be. Consider the recent public conflict between the Pentagon and the AI firm Anthropic over how Anthropic’s Claude could be utilized. Despite labeling Anthropic a supply chain risk, and thus prohibiting the use of Claude in fulfilling government contracts, the Pentagon utilized systems that rely upon Claude for target selection during recent strikes in Iran. We may see cases where military decision-makers utilize AI systems in ways not foreseen or endorsed by their creators. Allocating responsibility will face yet another hurdle in such cases.
Thus, the potential role of AI in the decision to strike the elementary school muddies the waters. It seems clear in principle that responsibility for the resulting horrific outcome lies with some combination of those who approved the strike, those who decided to utilize AI in target selection, and those who developed the program. Yet once we consider the practical realities facing these decision-makers, reaching a real judgment about who precisely is responsible becomes far more difficult – even more so if we hope to determine whether anyone ought to be punished for the strike.
Ultimately, I worry that these difficulties in allocating responsibility will significantly damage human rights practice and international law. The more AI systems are integrated into military decision-making, the harder it will be to determine who bears specific responsibility for violations of international law. Part of the function of international law, and of law in general, is deterrence: we punish people for offenses, in part, to deter others from engaging in the same behavior. But the harder it becomes to cleanly allocate responsibility, the harder it is to dole out punishment, and thus the less the prospect of punishment can serve as a deterrent. The integration of AI into military decision-making may therefore reduce international humanitarian law to mere norms – the global community has agreed that states should not engage in this behavior, but punishment will not be forthcoming if they do. This technology may serve as a shield, one that protects those willing to callously throw innocents into the line of fire from facing the consequences of their actions.