
Dodging Blame: Iran, AI, and International Law

On Saturday, February 28th, the United States and Israel launched a new military campaign against Iran. The campaign has consisted of a series of air strikes, the first wave of which killed numerous Iranian officials, including the supreme leader, Ayatollah Ali Khamenei. In subsequent days, Iran launched attacks against Israel and U.S.-aligned Gulf states as the U.S. and Israel continued their campaign.

The U.S. has not been transparent about its motivating reasons. Justifications cited for the campaign include (but may not be limited to) discredited claims that this was a preemptive strike to thwart an imminent Iranian attack, that the attacks are meant to prompt regime change in Iran (an outcome the National Intelligence Council concluded was unlikely), and that the bombings are meant to dismantle Iran’s nuclear program – even though the White House claimed it had “completely and totally obliterated” Iran’s nuclear enrichment facilities in June 2025.

By the time this column is published, the situation will almost certainly have evolved. Despite President Donald Trump’s comments that the conflict is “very complete, pretty much” and that the U.S. and Israel have “already won in many ways,” Secretary of Defense Pete Hegseth declared on March 10th that this would be the U.S.’s most intense day of strikes against Iran yet. Intelligence now suggests that Iran has begun laying mines in the Strait of Hormuz and striking oil tankers traversing it, creating the potential for catastrophic ripple effects on the global economy; between 20 and 30% of the world’s crude oil supply is shipped through this waterway.

Rather than discussing the broad details, I want to focus on a particular incident in this conflict and its moral ramifications. In the initial wave of strikes against Iran, a missile struck an all-girls elementary school outside an Islamic Revolutionary Guard Corps base. Iranian officials claim that the attack killed 168 people, about 110 of whom were children. Investigations have revealed that a Tomahawk missile struck the school. Tomahawks, manufactured by Raytheon, have only been sold to the governments of the U.S., Australia, the United Kingdom, Japan, and the Netherlands. Thus, it appears that the U.S. struck the school.

If the U.S. military knowingly targeted the school, then this is almost certainly a war crime, as it violates international humanitarian law. The Fourth Geneva Convention, to which the U.S. is a signatory, prohibits intentional attacks on civilians. Even if a school sits on a military base, the children (and likely most of the staff) are civilians – I personally was a student at such a school. Yet some sources in the intelligence community suggest that the school was struck due to outdated intelligence, so the strike may have been unintentional.

But even tragic accidents may be the product of negligence, and thus blameworthy. It is worth noting that new military leadership in the U.S. appears critical of restraint and oversight. Hegseth has repeatedly emphasized lethality as the “calling card” of the U.S. military. Offices responsible for preventing and investigating civilian casualties have been gutted during his tenure. While criticizing allies who are “hemming and hawing about the use of force,” he emphasized that U.S. military actions in Iran will have “no stupid rules of engagement.” In a recent interview with 60 Minutes, he declared that the only people who should be worried about the conflict are “Iranians that think they’re gonna live,” although he went on to state that the U.S. military does not target civilians. Thus, there is reason to believe that even if accidental, the strike may have been the result of the willful dismantling of protective measures.

This occurs against a backdrop of increasing AI use in military operations. Reportedly, the U.S. is utilizing AI systems from Palantir to select targets for the strikes. If true, one can imagine the bombing of the elementary school plausibly progressing as follows. (Admittedly, this is speculation on my part – do not take it as a report.) Perhaps a program designed to identify potential military targets flagged the building containing the school due to faulty intelligence. The target was then insufficiently vetted against current intelligence, leading to a missile strike against an elementary school: multiple small errors compounding into tragedy. Given the number of errors, it may be difficult to determine who is responsible for this specific incident.

This links to what just war theorists refer to as the accountability gap. Commonly discussed in the context of autonomous weapon systems, the accountability gap emerges when it is unclear who is responsible for a particular outcome. The gap arises because activated autonomous weapon systems select and strike targets without human input. Even when such a machine makes a grave error, it is unclear who is responsible; no specific person chose that outcome.

The bombing of the Iranian elementary school did not involve an autonomous weapon system. Yet a gap is not binary – it can be large or small. Any introduction of nonhuman decision-makers introduces some accountability gap; in this case, the gap arises from the use of AI systems to propose potential strike targets.

So upon whom should responsibility for striking the school fall? One answer may be those who approved the strike. AI systems are “black box” technology – although we can see that a system outputs some result, we cannot access the reasons why it reached that conclusion. Further, AI systems may reach biased conclusions. Thus, the appropriate standard of care, especially in the life-and-death context of military decision-making, is to carefully scrutinize the results of AI systems rather than accept them uncritically. Perhaps that standard of care was not met here.

However, there are practical problems with this view. First, although we may cognitively recognize the faults in AI decision-making, these systems are utilized precisely because they appear informed and objective. Even if one knows the results of these systems ought to be scrutinized, those results may carry more weight than the conclusions of human thinkers. Second, AI systems can make data-driven decisions more quickly, and perhaps more efficiently, than human decision-makers. Decisions made in a military context are often incredibly time-sensitive and may command only limited resources. There may be only so much scrutinizing decision-makers can do before acting.

Alternatively, one could hold responsible those who created and/or authorized the use of AI programs in military decision-making. However, as D’arcy Blackwell describes, this may be very difficult in practice. Finding the culprit for unacceptable decisions will require searching through paper trails spatially and temporally far removed from the incident itself. Further still, some who at first glance appear responsible may not be. Consider the recent public conflict between the Pentagon and the AI firm Anthropic over how Anthropic’s Claude could be utilized. Despite labeling Anthropic a supply-chain risk, and thus prohibiting the use of Claude in fulfilling government contracts, the Pentagon utilized systems that rely upon Claude for target selection during recent strikes in Iran. We may see cases where military decision-makers utilize AI systems in ways not foreseen or endorsed by their creators. Allocating responsibility will face yet another hurdle in such cases.

Thus, the potential role of AI in the decision to strike the elementary school appears to muddy the waters. It seems clear in principle that those who approved the strike, those who decided to utilize AI in target selection, and/or those who developed the program are responsible for the resulting horrific outcome. Yet once we consider the practical realities facing these decision-makers, coming to a real judgment about who precisely is responsible becomes far more difficult – even more so if we hope to determine whether anyone ought to be punished for the strike.

Ultimately, I worry that these difficulties with allocating responsibility will prove a significant detriment to human rights practice and international law. The more AI systems are integrated into military decision-making, the harder it will be to determine who bears specific responsibility for a violation of international law. Part of the function of international law, and the law in general, is to create deterrence. We punish people for offenses, in part, to deter others from engaging in the same behavior. But the more difficult it becomes to cleanly allocate responsibility, the more difficult it is to dole out punishment, and thus the less the prospect of punishment can serve as a deterrent. So the integration of AI into military decision-making may reduce international humanitarian law to mere norms – the global community has decided that we should not engage in this behavior, but punishment will not be forthcoming if you do. This technology may serve as a shield, one that protects those willing to callously throw innocents into the line of fire from facing consequences for their actions.

Should Canada Join the Golden Dome?

Canada, despite a rash of democratic, economic, and separatist crises, must make a difficult choice with regard to its neighbor to the south. While the United States has, on the one hand, threatened Canada with annexation and is currently attempting to push its industrial heartland into oblivion, it has also recently offered a new deal on continental defense in the form of the “Golden Dome” proposal. As new technology brings new military threats, concerns about missile and drone defense loom larger than ever before. But what (and who) should Canada be defending itself against? Does it make sense to further integrate Canadian military capabilities with a country that prefers it didn’t exist?

On May 20th, Donald Trump announced plans for the space-based missile defense system known as the “Golden Dome” to protect America from long-range and hypersonic missile threats and from drones. The desire for such a system follows increasing concerns in recent years about the threat of hypersonic missiles, ballistic missiles, and drones, which have proved very effective on the battlefields of Ukraine. Similar in some ways to Israel’s Iron Dome, the system would include a huge network of sensors, satellites, and ground-based (and possibly space-based) interceptors to eliminate aerial threats to the North American continent. Following the announcement, Trump said that Canada has been asked to join and that the Canadian government has expressed interest. Given Canada’s reluctance to sign on to similar projects in the past and its rocky relationship with the second Trump administration, one wonders whether Canada should once again reject the proposal or break with tradition.

On the one hand, Canada has good reasons to refuse. As mentioned, Canada has been reluctant to join major missile defense projects in the past. In the early 1960s, the Kennedy administration attempted to get Canada to host nuclear missiles. In the 1980s, the Reagan administration proposed the Strategic Defense Initiative (aka “Star Wars”), which Canada turned down. In 2004, the Bush administration proposed another missile defense system, which Prime Minister Paul Martin rejected. The reasons for these rejections are complex, but each proposal ran counter to Canadian skepticism of military procurement in general and heightened Canadian fears of getting too close militarily to the United States.

Many may not realize that modern Canada is the result of the fact that the American Revolution was more of a civil war than a revolution within a single nation. Modern English Canada is culturally tied to the losing side of that civil war, the loyalists who fled the United States and wished to remain British. While Canadians like Americans, they have always been wary of getting too close or becoming too American. Meanwhile, the Trump administration’s threats to annex Canada as the 51st state have pushed these sentiments into overdrive. If Canada must think of the United States as a potential threat rather than an ally, the prospect of military integration looks problematic.

There is also the fact that Canadians are skeptical of large military spending. While the administration has said that the Golden Dome will cost under $200 billion, the Space Force has said the costs could be closer to one trillion dollars. Meanwhile, Canada is facing budgetary issues owing to the Trudeau administration’s spending and lack of economic growth, as well as the United States’ recent trade war.

Given this, Canada is in no hurry to invest billions of dollars in American defense contractors. Tariffs have already led Canada to consider pausing its purchase of the F-35 jet, after a decade of dragging its feet over the decision to buy them. There are concerns not only about giving money to American companies, but also about the fact that Canada will not control spare parts or maintenance for the jets. Any support for the Golden Dome project may well haunt Canada should it benefit only the American economy and limit Canada’s ability to make independent defense decisions.

On the other hand, missile and drone threats are increasing, and we are living in an increasingly perilous time. Missile defense would be beneficial for Canada, which is also looking to bring defense spending to 2% of GDP in line with NATO targets. Canada is likewise looking to modernize NORAD, the continental air defense system to which it already belongs. Investing in the Golden Dome could not only mean better NORAD integration, but would also presumably give Canada a larger voice at the table. Currently, for example, Canadian sensors and radar provide early warning of aerial attack, but Canada is more limited in how it can respond to threats without the United States.

There are also political reasons to suggest Canada might be interested. Given Trump’s unhappiness with Canada’s lack of military defense spending and his continued threats to our economy, there’s reason to give the appearance that we are willing to play ball. Despite administration projections, it’s unlikely that the Golden Dome will come to fruition by the time Trump’s term in office ends. Verbal commitments now may not lead to any more concrete investments in the near future.

Not only could joining the Golden Dome project yield some economic and industrial benefits, it may also offer leverage when it comes to other international issues such as Canadian sovereignty in the Arctic and the opening of the Northwest Passage. If the United States wants Canada to host and maintain a bunch of equipment, this may provide greater strategic influence for Canada than if we were to refuse to participate from the start.

No doubt many Canadians would be happy for a US missile to intercept an attack on a Canadian city, and it isn’t as if there are great options for Canada to develop its own missile defense system – particularly given our skepticism towards military spending. Still, it’s hard to jump into bed with someone who has expressed a desire to annex your country. Not only is it a difficult policy decision, but it is also a difficult political decision as the announcement comes just as Canadians’ views on America have soured. For a Prime Minister who just ran on a campaign of defending Canadian sovereignty (“elbows up” being the popular slogan), it sends a contradictory message to voters to join the Golden Dome initiative. It’s said that moral decisions are not about making easy choices between good and bad, right and wrong, but instead hard choices between competing values and uncertain outcomes. Canada has a difficult decision to make.

Real Life Terminators: The Inevitable Rise of Autonomous Weapons


Slaughterbots, a YouTube video by the Future of Life Institute, has racked up nearly three and a half million views for its dystopic nightmare where automated killing machines use facial recognition to track down and murder dissident students. Meanwhile, New Zealand and Austria have called for a ban on autonomous weapons, citing ethical and equity concerns, while a group of parliamentarians from thirty countries have also advocated for a treaty banning the development and use of so-called “killer-robots.” In the U.S., however, a bipartisan committee found that a ban on autonomous weapons “is not currently in the interest of U.S. or international security.”

Despite the sci-fi futurism of slaughterbots, autonomous weapons are not far off. Loitering munitions, which can hover over an area before self-selecting and destroying a target (and themselves), have proliferated since the first reports of their use by Turkish-backed forces in Libya last year. They were used on both sides of the conflict between Armenia and Azerbaijan, while U.S.-made Switchblade and Russian Zala KYB kamikaze drones have recently been employed in Ukraine. China has even revealed a ship which can not only operate and navigate autonomously, but deploy drones of its own (although the ship is, mercifully, unarmed).

Proponents of autonomous weapons hope that they will reduce casualties overall, as they replace front-line soldiers on the battlefield.

As well as getting humans out of harm’s way, autonomous weapons might be more precise than their human counterparts, reducing collateral damage and risk to civilians.

A survey of Australian Defence Force officers found that the possibility of risk reduction was a significant factor in troops’ attitudes to autonomous weapons, although many retained strong misgivings about operating alongside them. Yet detractors of autonomous weapons, like the group Stop Killer Robots, worry about the ethics of turning life-or-death decisions over to machines. Apart from the dehumanizing nature of the whole endeavor, there are concerns about a lack of accountability and the potential for algorithms to entrench discrimination – with deadly results.

If autonomous weapons can reduce casualties, the concerns over dehumanization and algorithmic discrimination might fade away. What could be a better affirmation of humanity than saving human lives? At this stage, however, data on precision is hard to come by. And there is little reason to think that truly autonomous weapons will be more precise than ‘human-in-the-loop’ systems, which require a flesh-and-blood human to sign off on any aggressive action (although arguments for removing the human from the loop do exist).

There is also the risk that the development of autonomous weapons will lower the barrier of entry to war: if we only have to worry about losing machines, and not people, we might lose sight of the true horrors of armed conflict.

So should we trust robots with life-or-death decisions? Peter Maurer, President of the International Committee of the Red Cross, worries that abdicating responsibility for killing – even in the heat of battle – will decrease the value of human life. Moreover, the outsourcing of such significant decisions might lead to an accountability gap, where we are left with no recourse when things go wrong. We can hold soldiers to account for killing innocent civilians, but how can we hold a robot to account – especially one which destroys itself on impact?

Technological ethicist Steven Umbrello dismisses the accountability gap, arguing that autonomous weapons are no more troubling than traditional ones. By focusing on the broader system, accountability can be conferred upon decision-makers in the military chain of command and the designers and engineers of the weapons themselves. There is never a case where the robot is solely at fault: if something goes wrong, we will still be able to find out who is accountable. This response can also apply to the dehumanization problem: it isn’t truly robots who are making life-or-death decisions, but the people who create and deploy them.

The issue with this approach is that knowing who is accountable isn’t the only factor in accountability: it will, undoubtedly, be far harder to hold those responsible to account.

They won’t be soldiers on the battlefield, but programmers in offices and on campuses thousands of kilometers away. So although the accountability gap may not be an insurmountable philosophical problem, it will still be a difficult practical one.

Although currently confined to the battlefield, we also ought to consider the inevitable spread of autonomous weapons into the domestic sphere. As of last year, over $15 billion in surplus military technology had found its way into the hands of American police. There are already concerns that the proliferation of autonomous systems in Southeast Asia could lead to increases in “repression and internal surveillance.” And Human Rights Watch worries that “Fully autonomous weapons would lack human qualities that help law enforcement officials assess the seriousness of a threat and the need for a response.”

But how widespread are these ‘human qualities’ in humans? Police kill over a thousand people each year in the U.S. Robots might be worse – but they could be better. They are unlikely to reflect the fear, short tempers, poor self-control, or lack of training of their human counterparts.

Indeed, an optimist might hope that autonomous systems can increase the effectiveness of policing while reducing danger to both police and civilians.

There is a catch, however: not even AI is free of bias. Studies have found racial bias in algorithms used in risk assessments and facial recognition, and a Microsoft chatbot had to be shut down after it started tweeting offensive statements. Autonomous weapons with biases against particular ethnicities, genders, or societal groups would be a truly frightening prospect.

Finally, we can return to science fiction. What if one of our favorite space-traveling billionaires decides that a private human army isn’t enough, and they’d rather a private robot army? In 2017, a group of billionaires, AI researchers, and academics – including Elon Musk – signed an open letter warning about the dangers of autonomous weapons. That warning wasn’t heeded, and development has continued unabated. With the widespread military adoption of autonomous weapons already occurring, it is only a matter of time before they wind up in private hands. If dehumanization and algorithmic discrimination are serious concerns, then we’re running out of time to address them.


Thanks to my friend CAPT Andrew Pham for his input.

On Drones: Helpful versus Harmful

During the Super Bowl halftime show this past month, Lady Gaga masterfully demonstrated one of the most striking mass uses of drones to date. At the conclusion of her show, drones powered by Intel formed the American flag and then rearranged to spell out one of the show’s main sponsors, Pepsi. This demonstration represented the artistic side of drones and one of their more positive images.


What Does Ant-Man Say about our Morals?

If you have not yet viewed Marvel’s latest production, Ant-Man, take this as the obligatory spoiler alert. Those who have viewed this perplexing film about an ant-sized superhero who saves the world, however, probably have several questions running through their minds: How can such a small superhero be so powerful? Will Ant-Man join other Marvel heroes in future films? But the most important question, one that has yet to be asked by the masses, is what the very idea of Ant-Man and the plot of Marvel’s film say about our morals, and whether the ideas in this film allude to a bigger problem in terms of warfare.
