
Should Canada Join the Golden Dome?

Canada, despite a rash of democratic, economic, and separation crises, must make a difficult choice with regard to its neighbor to the south. While the United States has, on the one hand, threatened Canada with annexation and is currently attempting to push its industrialized heartland into oblivion, it has also recently offered a new deal on continental defense in the form of the “Golden Dome” proposal. As new technology brings new military threats, concerns about missile and drone defense loom larger than ever before. But what (and who) should Canada be defending itself against? Does it make sense to further integrate Canadian military capabilities with a country that prefers it didn’t exist?

On May 20th, Donald Trump announced plans for the space-based missile defense system known as the “Golden Dome” to protect America from long-range and hypersonic missile threats and from drones. The desire for a missile defense system comes after increasing concerns in recent years about the threat of hypersonic missiles, ballistic missiles, and drones, which have proved very effective on the battlefields of Ukraine. Similar in some ways to Israel’s Iron Dome concept, the system would include a huge network of sensors, satellites, and ground-based (and possibly space-based) interceptors to eliminate aerial threats to the North American continent. Following the announcement, Trump said that Canada has been asked to join and that the Canadian government has expressed interest. Given Canada’s reluctance to sign on to similar projects in the past and its rocky relationship with the second Trump administration, one wonders whether Canada should once again reject the proposal or decide to break with tradition.

On the one hand, Canada has good reasons to refuse. As mentioned, Canada has been reluctant to join major missile defense projects in the past. In the early 1960s, the Kennedy administration attempted to get Canada to host nuclear missiles. In the 1980s, the Reagan administration proposed the Strategic Defense Initiative (aka “Star Wars”), which Canada turned down. In 2004, the Bush administration proposed another missile defense system, which was rejected by Prime Minister Paul Martin. The reasons for these rejections were complex, but each time the proposal ran up against Canadian skepticism of military procurement in general and heightened fears about getting too close militarily to the United States.

Many may not realize that modern Canada exists because the American Revolution was less a revolution within a single nation than a civil war. Modern English Canada is culturally tied to the losing side of that civil war, the loyalists who fled the United States and wished to remain British. While Canadians like Americans, they have always been wary of getting too close or becoming too American. Meanwhile, the Trump administration’s threats to annex Canada as the 51st state have pushed these sentiments into overdrive. If Canada must think of the United States as a potential threat rather than an ally, the prospect of military integration looks problematic.

There is also the fact that Canadians are skeptical of large military spending. While the administration has said that the Golden Dome will cost under $200 billion, the Space Force has said the costs could be closer to one trillion dollars. Meanwhile, Canada is facing budgetary issues owing to the Trudeau administration’s spending and a lack of economic growth, as well as the United States’ recent trade war.

Given this, Canada is in no hurry to invest billions of dollars in American defense contractors. Tariffs have already made Canada consider pausing its purchase of the F-35 jet after a decade of dragging its feet over the decision to buy them. There are concerns not only about handing money to American companies, but also about the fact that Canada will not control spare parts or maintenance for the jets. Any support of the Golden Dome project will no doubt haunt Canada should it only benefit the American economy and limit Canada’s ability to make independent defense decisions.

On the other hand, missile and drone threats are increasing, and we are living in an increasingly perilous time. Missile defense would be beneficial for Canada, which is also looking to raise defense spending to 2% of GDP in line with NATO targets. Canada is likewise looking to modernize NORAD, the continental air defense system it already shares with the United States. Investing in the Golden Dome could not only mean better NORAD integration, but it would also presumably mean that Canada would have a larger voice at the table. Currently, for example, Canadian sensors and radar provide early warning of aerial attack, but Canada has limited means of responding to threats without the United States.

There are also political reasons to suggest we might be interested. Given Trump’s unhappiness with Canada’s lack of military spending and his continued threats to our economy, there’s reason to give the appearance that we are willing to play ball. Despite administration projections, it’s unlikely that the Golden Dome proposal will come to fruition before Trump’s term in office ends. Verbal commitments now may not lead to any more concrete investments in the near future.

Not only could joining the Golden Dome project yield some economic and industrial benefits, it may also offer leverage when it comes to other international issues such as Canadian sovereignty in the Arctic and the opening of the Northwest Passage. If the United States wants Canada to host and maintain a bunch of equipment, this may provide greater strategic influence for Canada than if we were to refuse to participate from the start.

No doubt many Canadians would be happy for a US missile to intercept an attack on a Canadian city, and it isn’t as if there are great options for Canada to develop its own missile defense system – particularly given our skepticism towards military spending. Still, it’s hard to jump into bed with someone who has expressed a desire to annex your country. Not only is it a difficult policy decision, but it is also a difficult political decision, as the announcement comes just as Canadians’ views on America have soured. For a Prime Minister who just ran on a campaign of defending Canadian sovereignty (“elbows up” being the popular slogan), joining the Golden Dome initiative sends a contradictory message to voters. It’s said that moral decisions are not about making easy choices between good and bad, right and wrong, but instead hard choices between competing values and uncertain outcomes. Canada has a difficult decision to make.

Real Life Terminators: The Inevitable Rise of Autonomous Weapons


Slaughterbots, a YouTube video by the Future of Life Institute, has racked up nearly three and a half million views for its dystopic nightmare where automated killing machines use facial recognition to track down and murder dissident students. Meanwhile, New Zealand and Austria have called for a ban on autonomous weapons, citing ethical and equity concerns, while a group of parliamentarians from thirty countries have also advocated for a treaty banning the development and use of so-called “killer-robots.” In the U.S., however, a bipartisan committee found that a ban on autonomous weapons “is not currently in the interest of U.S. or international security.”

Despite the sci-fi futurism of Slaughterbots, autonomous weapons are not far off. Loitering munitions, which can hover over an area before self-selecting and destroying a target (and themselves), have proliferated since the first reports of their use by Turkish-backed forces in Libya last year. They were used on both sides of the conflict between Armenia and Azerbaijan, while U.S.-made Switchblade and Russian ZALA KYB kamikaze drones have recently been employed in Ukraine. China has even revealed a ship which can not only operate and navigate autonomously, but also deploy drones of its own (although the ship is, mercifully, unarmed).

Proponents of autonomous weapons hope that they will reduce casualties overall, as they replace front-line soldiers on the battlefield.

As well as getting humans out of harm’s way, autonomous weapons might be more precise than their human counterparts, reducing collateral damage and risk to civilians.

A survey of Australian Defence Force officers found that the possibility of risk reduction was a significant factor in troops’ attitudes to autonomous weapons, although many retained strong misgivings about operating alongside them. Yet detractors of autonomous weapons, like the group Stop Killer Robots, worry about the ethics of turning life-or-death decisions over to machines. Apart from the dehumanizing nature of the whole endeavor, there are concerns about a lack of accountability and the potential for algorithms to entrench discrimination – with deadly results.

If autonomous weapons can reduce casualties, the concerns over dehumanization and algorithmic discrimination might fade away. What could be a better affirmation of humanity than saving human lives? At this stage, however, data on precision is hard to come by. And there is little reason to think that truly autonomous weapons will be more precise than ‘human-in-the-loop’ systems, which require a flesh-and-blood human to sign off on any aggressive action (although arguments for removing the human from the loop do exist).

There is also the risk that the development of autonomous weapons will lower the barrier to entry for war: if we only have to worry about losing machines, and not people, we might lose sight of the true horrors of armed conflict.

So should we trust robots with life-or-death decisions? Peter Maurer, President of the International Committee of the Red Cross, worries that abdicating responsibility for killing – even in the heat of battle – will decrease the value of human life. Moreover, the outsourcing of such significant decisions might lead to an accountability gap, where we are left with no recourse when things go wrong. We can hold soldiers to account for killing innocent civilians, but how can we hold a robot to account – especially one which destroys itself on impact?

Technological ethicist Steven Umbrello dismisses the accountability gap, arguing that autonomous weapons are no more troubling than traditional ones. By focusing on the broader system, accountability can be conferred upon decisionmakers in the military chain of command and the designers and engineers of the weapons themselves. There is never a case where the robot is solely at fault: if something goes wrong, we will still be able to find out who is accountable. This response can also apply to the dehumanization problem: it isn’t truly robots who are making life or death decisions, but the people who create and deploy them.

The issue with this approach is that knowing who is accountable isn’t the only factor in accountability: it will, undoubtedly, be far harder to hold those responsible to account.

They won’t be soldiers on the battlefield, but programmers in offices and on campuses thousands of kilometers away. So although the accountability gap may not be an insurmountable philosophical problem, it will still be a difficult practical one.

Although autonomous weapons are currently confined to the battlefield, we also ought to consider their inevitable spread into the domestic sphere. As of last year, over $15 billion in surplus military technology had found its way into the hands of American police. There are already concerns that the proliferation of autonomous systems in Southeast Asia could lead to increases in “repression and internal surveillance.” And Human Rights Watch worries that “Fully autonomous weapons would lack human qualities that help law enforcement officials assess the seriousness of a threat and the need for a response.”

But how widespread are these ‘human qualities’ in humans? Police kill over a thousand people each year in the U.S. Robots might be worse – but they could be better. They are unlikely to reflect the fear, short tempers, poor self-control, or lack of training of their human counterparts.

Indeed, an optimist might hope that autonomous systems can increase the effectiveness of policing while reducing danger to both police and civilians.

There is a catch, however: not even AI is free of bias. Studies have found racial bias in algorithms used in risk assessments and facial recognition, and a Microsoft chatbot had to be shut down after it started tweeting offensive statements. Autonomous weapons with biases against particular ethnicities, genders, or societal groups would be a truly frightening prospect.

Finally, we can return to science fiction. What if one of our favorite space-traveling billionaires decides that a private human army isn’t enough, and they’d rather have a private robot army? In 2017, a group of billionaires, AI researchers, and academics – including Elon Musk – signed an open letter warning about the dangers of autonomous weapons. That warning wasn’t heeded, and development has continued unabated. With the widespread military adoption of autonomous weapons already occurring, it is only a matter of time before they wind up in private hands. If dehumanization and algorithmic discrimination are serious concerns, then we’re running out of time to address them.

 

Thanks to my friend CAPT Andrew Pham for his input.

On Drones: Helpful versus Harmful

During the Super Bowl halftime show this past month, Lady Gaga masterfully demonstrated one of the most striking mass uses of drones to date. At the conclusion of her show, drones powered by Intel were used to form the American flag and were then rearranged to identify one of the main sponsors of the show, Pepsi. This demonstration represented the artistic side of drones and one of the more positive images of them.


What Does Ant-Man Say about our Morals?

If you have not yet viewed Marvel’s latest production, Ant-Man, take this as the obligatory spoiler alert. Those who have viewed this perplexing film about an ant-sized superhero who saves the world, however, probably have several questions running through their minds: How can such a small superhero be so powerful? Will Ant-Man join other Marvel heroes in future films? But the most important question, one that has yet to be asked by the masses, is what the very idea of Ant-Man and the plot of Marvel’s film say about our morals, and whether the ideas in this film allude to a bigger problem in terms of warfare.
