Opinion | Technology

Should AI Development Be Stopped?

By Matthew S.W. Silk
17 May 2023
photograph of Arnold Schwarzenegger's Terminator wax figure

It came as something of a surprise this month when the so-called “Godfather of AI,” Geoffrey Hinton, announced that he was quitting Google after working there for more than a decade helping to develop Google’s AI research division. With his newfound freedom to speak openly, Hinton has expressed ethical concerns about the technology’s capacity to destabilize society and exacerbate income inequality. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told The New York Times this month. That such an authoritative figure within the AI field has now condemned the technology is a significant addition to a growing call for a halt on AI development. Last month more than 1,000 AI researchers published an open letter calling for a six-month pause on training AI systems more powerful than the newest version of ChatGPT. But does AI really pose such a risk that we ought to halt its development?

Hinton worries about humanity losing control of AI. He was surprised, for instance, when Google’s AI language model was able to explain to him why a joke he had made up was funny. He is also concerned that, despite AI models being far less complex than the human brain, they are quickly becoming able to perform complex tasks on par with humans. Part of his concern is the prospect of algorithms seeking ever greater control, and the fact that he does not know how to control the AI that Google and others are building. This concern is part of the reason for the call for a moratorium, as the recent letter explains: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? […] Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Eliezer Yudkowsky, a decision theorist, recently suggested that a six-month moratorium is not sufficient because he is concerned that AI will become smarter than humans. His worry is that building anything smarter than humans will result in the death of everyone on Earth. Thus, he has called for completely ending the development of powerful AI and believes that an international treaty should ban it, with its provisions enforceable by military action if necessary. “If intelligence says that a country outside the agreement is building a GPU cluster,” he warned, “be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.”

These fears aren’t new. In the 1920s and 1930s there were concerns that developments in science and technology were destabilizing society, stripping away jobs, and exacerbating income inequality. In response, many called for moratoriums on further research, moratoriums that never happened. Indeed, Hinton does not seem to think a moratorium is practical, since competitive markets and competitive nations are already engaged in an arms race that will only compel further research.

There is also the fact that over 400 billion dollars was invested in AI in 2022 alone, meaning that it will be difficult to convince people to bring all of this research to a halt given the investment and the potentially lucrative benefits. Artificial intelligence has the capability to make certain tasks far more efficient and productive, from medicine to communication. Even Hinton believes that development should continue because AI can do “wonderful things.” Given these realities, one response to the proposed moratorium insists that “a pause on AI work is not only vague, but also unfeasible.” Its authors argue, instead, that we simply need to be especially clear about what we consider “safe” and “successful” AI development in order to avoid potential missteps.

Where does this leave us? Certainly we can applaud the researchers who take their moral responsibilities seriously and feel compelled to share their concerns about the risks of development. But these kinds of warnings are vague, and researchers need to do a better job of explaining the risks. What exactly does it mean to say that you are worried about losing control of AI? Saying something like this encourages the public to imagine fantastical sci-fi scenarios akin to 2001: A Space Odyssey or The Terminator. (Unhelpfully, Hinton has even agreed with the sentiment that our situation is like the movies.) Ultimately, people like Yudkowsky and Hinton don’t draw a clear picture of how we get from ChatGPT to Skynet. The fact that deep neural networks are so successful despite their simplicity compared to a human brain might be a cause for concern, but why exactly? Hinton says: “What really worries me is that you have to create subgoals in order to be efficient, and a very sensible subgoal for more or less anything you want to do is get more power—get more control.” Yudkowsky suggests: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” He adds that “A sufficiently intelligent AI won’t stay confined to computers for long.” But how?

These are hypothetical worries about what AI might do, somehow, if it becomes more intelligent than us, and they remain hopelessly vague. In the meantime, there are real problems that AI is already causing, such as predictive policing and discriminatory bias. There’s also the fact that AI is remarkably environmentally unfriendly: training a single AI model can emit roughly five times the lifetime carbon emissions of a car. Putting aside how advanced AI might become relative to humans, it is already proving to pose significant challenges that will require society to adapt. For example, the recent surge in AI-generated music presents problems for the music industry: do artists own the rights to the sound of their own voice, or does a record company? A 2020 paper revealed that a malicious actor could deliberately create a biased algorithm and then conceal this fact from potential regulators owing to the algorithm’s black-box nature. AI is being developed and deployed in so many areas that it might take years of legal reform before clear and understandable frameworks can be developed to govern its use. (Hinton also points to AI’s capacity to negatively affect the electoral process.) Perhaps this is a reason to slow AI development until the rest of society can catch up.

If scientists are going to be taken seriously by the public, the nature of the threat will need to be made much clearer. Most of the serious ethical issues involving AI, such as labor displacement, policing, and bias, are significant not because of the AI itself, but because AI will allow small groups to benefit from it without transparency and accountability. In other words, the ethical risks of AI still mostly lie with the humans who control it, rather than with the AI itself. Humans can make great advances in science, but often in advance of understanding how that knowledge is best used.

In the 1930s, the concern that science would destroy the labor market only subsided when a world war made mass production and full employment necessary. We never addressed the underlying problem. We still need to grapple with the question of what science is for. Should AI development be dictated by a relatively small group of financial interests who can benefit from the technology while it harms the rest of society? Are we, as a society, ready to collectively say “no” to certain kinds of scientific research until social progress catches up with scientific progress?

Matt has a PhD in philosophy from the University of Waterloo. His research specializes in philosophy of science and the nature of values. He has also published on the history of pragmatism and the work of John Dewey.