
AI and Pure Science

By Matthew S.W. Silk
19 Apr 2022

In September 2019, four researchers wrote to the academic publisher Wiley to request that it retract a scientific paper relating to facial recognition technology. The request was made not because the research was wrong or reflected bad methodology, but rather because of how the technology was likely to be used. The paper discussed the process by which algorithms were trained to detect the faces of Uyghur people, a Muslim minority group in China. While the researchers believed publishing the paper presented an ethical problem, Wiley defended the article, noting that it was about a specific technology, not about the application of that technology. This event raises a number of important questions; in particular, it demands that we consider whether there is an ethical boundary between pure science and applied science when it comes to AI development – that is, whether we can so cleanly separate knowledge from use as Wiley suggested.

The 2019 article, published in the journal WIREs Data Mining and Knowledge Discovery, discusses discoveries made by the research team in its work on ethnic-group facial recognition, which drew on datasets of Chinese Uyghur, Tibetan, and Korean students at Dalian University. In response, a number of researchers, disturbed that academics had tried to build such algorithms, called for the article to be retracted. China has been condemned for its heavy surveillance and mass detention of Uyghurs, and some scientists claim that this study, along with a number of others, is helping to facilitate the development of technology that makes this surveillance and oppression more effective. As Richard Van Noorden reports, there has been a growing push by some scientists to get the scientific community to take a firmer stance against unethical facial-recognition research, denouncing not only controversial uses of the technology but its research foundations as well. They call on researchers to avoid working with firms or universities linked to unethical projects.

For its part, Wiley has defended the article, noting: “We are aware of the persecution of the Uyghur communities … However, this article is about a specific technology and not an application of that technology.” In other words, Wiley seems to be adopting an ethical position based on the long-held distinction between pure and applied science. This distinction is old, tracing back to the time of Francis Bacon in the late 16th and early 17th centuries as part of a compromise between the state and scientists. As Robert Proctor reports, “the founders of the first scientific societies promised to ignore moral concerns” in return for funding and freedom of inquiry, on the condition that science keep out of political and religious matters. In keeping with Bacon’s urging that we pursue science “for its own sake,” many began to distinguish “pure” science, interested in knowledge and truth for their own sake, from applied science, which uses engineering to apply scientific findings in order to secure various social goods.

In the 20th century the division between pure and applied science was used as a rallying cry for scientific freedom and to avoid “politicizing science.” This took place against a historical backdrop of chemists facilitating great suffering in World War I followed by physicists facilitating much more suffering in World War II. Maintaining the political neutrality of science was thought to make it more objective by ensuring value-freedom. The notion that science requires freedom was touted by well-known physicists like Percy Bridgman who argued,

The challenge to the understanding of nature is a challenge to the utmost capacity in us. In accepting the challenge, man can dare to accept no handicaps. That is the reason that scientific freedom is essential and that artificial limitations of tools or subject matter are unthinkable.

For Bridgman, science just wasn’t science unless it was pure. He explains, “Popular usage lumps under the single word ‘science’ all the technological activities of engineering and industrial development, together with those of so-called ‘pure science.’ It would clarify matters to reserve the word science for ‘pure’ science.” For Bridgman, it is society that must decide how to use a discovery rather than the discoverer, and thus it is society’s responsibility, not the scientists’, to determine how pure science is used. Wiley’s argument seems to echo Bridgman’s: there is nothing wrong with developing the technology of facial recognition in and of itself; if China wishes to use that technology to oppress people, that’s China’s problem.

On the other hand, many have argued that the supposed distinction between pure and applied science is not ethically sustainable. Indeed, many such arguments were driven by the reaction to the proliferation of science during the war. Janet Kourany, for example, has argued that science and scientists have moral responsibilities because of the harms that science has caused, because science is supported through taxes and consumer spending, and because society is shaped by science. Heather Douglas has argued that scientists shoulder the same moral responsibilities as the rest of us not to engage in reckless or negligent research, and that, due to the highly technical nature of the field, it is not reasonable for the rest of society to carry those responsibilities for scientists. While the kind of pure knowledge that Bridgman or Bacon favored has value, that value needs to be weighed against other goods like basic human rights, quality of life, and environmental health.

In other words, the distinction between pure and applied science is ethically problematic. As John Dewey argues, the distinction is a sham because science is always connected to human concerns. He notes,

It is an incident of human history, and a rather appalling incident, that applied science has been so largely made equivalent for use for private and economic class purposes and privileges. When inquiry is narrowed by such motivation or interest, the consequence is in so far disastrous both to science and to human life.

Perhaps this is why many scientists do not accept Wiley’s argument for refusing retraction; discovery doesn’t happen in a vacuum. It isn’t as if we don’t know why the Chinese government has an interest in this technology. So, at what point does such research become morally reckless given the very likely consequences?

This is also why debate around this case has centered on the issue of informed consent. Critics charge that the Uyghur students who participated in the study were likely not fully informed of its purposes and thus could not provide truly informed consent. The fact that informed consent is relevant at all, which Wiley admits, seems to undermine its entire argument, since informed consent in this case appears explicitly tied to how the technology will be used. If informed consent is ethically required, this is not a case where we can simply consider pure research with no regard to its application. These considerations have prompted scientists like Yves Moreau to argue that all unethical biometric research should be retracted.

But regardless of how we think about these specifics, this case serves to highlight a much larger issue: given the many ethical issues associated with AI and its potential uses, we need to dedicate much more of our time and attention to the question of whether certain forms of research should be considered forbidden knowledge. Do AI scientists and developers have moral responsibilities for their work? Is it more important to develop this research for its own sake, or are there other ethical goods that should take precedence?

Matt has a PhD in philosophy from the University of Waterloo. His research specializes in philosophy of science and the nature of values. He has also published on the history of pragmatism and the work of John Dewey.