AI and Pure Science


In September 2019, four researchers wrote to the academic publisher Wiley to request that it retract a scientific paper relating to facial recognition technology. The request was made not because the research was wrong or reflected bad methodology, but rather because of how the technology was likely to be used. The paper discussed the process by which algorithms were trained to detect faces of Uyghur people, a Muslim minority group in China. While researchers believed publishing the paper presented an ethical problem, Wiley defended the article noting that it was about a specific technology, not about the application of that technology. This event raises a number of important questions, but, in particular, it demands that we consider whether there is an ethical boundary between pure science and applied science when it comes to AI development – that is, whether we can so cleanly separate knowledge from use as Wiley suggested.

The 2019 article, published in the journal WIREs Data Mining and Knowledge Discovery, discusses discoveries made by the research team in its work on ethnic-group facial recognition, which drew on datasets of Chinese Uyghur, Tibetan, and Korean students at Dalian University. In response, a number of researchers, disturbed that academics had tried to build such algorithms, called for the article to be retracted. China has been condemned for its heavy surveillance and mass detention of Uyghurs, and some scientists claim that this study, along with a number of others, is helping to facilitate the development of technology that can make this surveillance and oppression more effective. As Richard Van Noorden reports, there has been a growing push by some scientists to get the scientific community to take a firmer stance against unethical facial-recognition research, denouncing not only controversial uses of the technology but its research foundations as well. They call on researchers to avoid working with firms or universities linked to unethical projects.

For its part, Wiley has defended the article, noting, “We are aware of the persecution of the Uyghur communities … However, this article is about a specific technology and not an application of that technology.” In other words, Wiley seems to be adopting an ethical position based on the long-held distinction between pure and applied science. This distinction is old, tracing back to Francis Bacon and the 16th century, and emerged as part of a compromise between the state and scientists. As Robert Proctor reports, “the founders of the first scientific societies promised to ignore moral concerns” in return for funding and freedom of inquiry, with science agreeing in turn to keep out of political and religious matters. In keeping with Bacon’s urging that we pursue science “for its own sake,” many began to distinguish “pure” science, interested in knowledge and truth for their own sake, from applied science, which uses engineering to apply scientific findings in order to secure various social goods.

In the 20th century the division between pure and applied science was used as a rallying cry for scientific freedom and to avoid “politicizing science.” This took place against a historical backdrop of chemists facilitating great suffering in World War I followed by physicists facilitating much more suffering in World War II. Maintaining the political neutrality of science was thought to make it more objective by ensuring value-freedom. The notion that science requires freedom was touted by well-known physicists like Percy Bridgman who argued,

The challenge to the understanding of nature is a challenge to the utmost capacity in us. In accepting the challenge, man can dare to accept no handicaps. That is the reason that scientific freedom is essential and that artificial limitations of tools or subject matter are unthinkable.

For Bridgman, science just wasn’t science unless it was pure. He explains, “Popular usage lumps under the single word ‘science’ all the technological activities of engineering and industrial development, together with those of so-called ‘pure science.’ It would clarify matters to reserve the word science for ‘pure’ science.” For Bridgman, it is society that must decide how to use a discovery rather than the discoverer, and thus it is society’s responsibility to determine how to use pure science, not the scientists’. Wiley’s argument seems to echo Bridgman’s: there is nothing wrong with developing facial-recognition technology in and of itself; if China wishes to use that technology to oppress people, that’s China’s problem.

On the other hand, many have argued that the supposed distinction between pure and applied science is not ethically sustainable. Indeed, many such arguments were driven by the reaction to the proliferation of science during the war. Janet Kourany, for example, has argued that science and scientists have moral responsibilities because of the harms that science has caused, because science is supported through taxes and consumer spending, and because society is shaped by science. Heather Douglas has argued that scientists shoulder the same moral responsibilities as the rest of us not to engage in reckless or negligent research, and that due to the highly technical nature of the field, it is not reasonable for the rest of society to carry those responsibilities for scientists. While the kind of pure knowledge that Bridgman or Bacon favored has value, that value needs to be weighed against other goods like basic human rights, quality of life, and environmental health.

In other words, the distinction between pure and applied science is ethically problematic. As John Dewey argues, the distinction is a sham because science is always connected to human concerns. He notes,

It is an incident of human history, and a rather appalling incident, that applied science has been so largely made equivalent for use for private and economic class purposes and privileges. When inquiry is narrowed by such motivation or interest, the consequence is in so far disastrous both to science and to human life.

Perhaps this is why many scientists do not accept Wiley’s argument for refusing retraction; discovery doesn’t happen in a vacuum. It isn’t as if we don’t know why the Chinese government has an interest in this technology. So, at what point does such research become morally reckless given the very likely consequences?

This is also why debate around this case has centered on the issue of informed consent. Critics charge that the Uyghur students who participated in the study were likely not fully informed of its purposes and thus could not provide truly informed consent. The fact that informed consent is relevant at all, which Wiley admits, seems to undermine its entire argument, as informed consent in this case appears explicitly tied to how the technology will be used. If informed consent is ethically required, this is not a case where we can simply consider pure research with no regard to its application. These considerations prompted scientists like Yves Moreau to argue that all unethical biometric research should be retracted.

But regardless of how we think about these specifics, this case serves to highlight a much larger issue: given the large number of ethical issues associated with AI and its potential uses, we need to dedicate much more of our time and attention to the question of whether certain forms of research should be considered forbidden knowledge. Do AI scientists and developers have moral responsibilities for their work? Is it more important to develop this research for its own sake, or are there other ethical goods that should take precedence?

Search Engines and Data Voids


If you’re like me, going home over the holidays means dealing with a host of computer problems from well-meaning but not very tech-savvy family members. While I’m no expert myself, it is nevertheless jarring to see the family computer desktop covered in icons for long-abandoned programs, browser tabs that read “Hotmail” and “how do I log into my Hotmail” side-by-side, and the use of default programs like Edge (or, if the computer is ancient enough, Internet Explorer) and search engines like Bing.

And while it’s perhaps a bit of a pain to have to fix the same computer problems every year, and it’s annoying to use programs that you’re not used to, there might be more substantial problems afoot. This is because according to a recent study from Stanford’s Internet Observatory, Bing search results “contain an alarming amount of disinformation.” That default search engine that your parents never bothered changing, then, could actually be doing some harm.

While no search engine is perfect, the study suggests that, at least in comparison to Google, Bing lists known disinformation sites in its top results much more frequently (including searches for important issues like vaccine safety, where a search for “vaccines autism” returns “six anti-vax sites in its top 50 results”). It also presents results from known Russian propaganda sites much more frequently than Google, places student-essay writing sites in its top 50 results for some search terms, and is much more likely to “dredge up gratuitous white-supremacist content in response to unrelated queries.” In general, then, while Bing will not necessarily present one only with disinformation – the site will still return results for trustworthy sites most of the time – it seems worthwhile to be extra vigilant when using the search engine.

But even if one commits to simply avoiding Bing (at least for the kinds of searches that are most likely to be connected to disinformation sites), problems can arise when Edge, which uses Bing as its default search engine, is made the default browser, and when those who are not terribly tech-savvy don’t know how to use a different browser or aren’t aware of the alternatives. After all, the average user has no particular reason to think that results from different search engines will differ, and given that Microsoft is a household name, one might not be inclined to question the kinds of results its search engine provides.

How can we combat these problems? Certainly a good amount of responsibility falls on Microsoft itself to make more of an effort to keep disinformation sites out of its search results. And while we might not want to say that one should never use Bing (Google knows enough about me as it is), there is perhaps some general advice that we could give in order to try to make sure that we encounter as little disinformation as possible when searching.

For example, the Internet Observatory report posits that one of the reasons why there is so much more disinformation in search results from Bing as opposed to Google is how the two engines deal with “data voids.” The idea is the following: for some search terms, you’re going to get tons of results because there’s tons of information out there, and it’s a lot easier to weed out possible disinformation sites from these kinds of results because so many more well-established and trusted sites already exist. But there are also lots of search terms that have very few results, possibly because they are about idiosyncratic topics, or because the search terms are unusual, or just because the thing you’re looking for is brand new. It’s when there are these relative voids of data about a term that results become ripe for manipulation by sites looking to spread disinformation.

For example, Michael Golebiewski and danah boyd write that there are five major types of data voids that can be most easily manipulated: breaking news, strategic new terms (e.g. when the term “crisis actor” was introduced by Sandy Hook conspiracy theorists), outdated terms, fragmented concepts (e.g. when the same event is referred to by different terms, for example “undocumented” and “illegal aliens”), and problematic queries (e.g. when instead of searching for information about the “Holocaust” someone searches for “did the Holocaust happen?”). Since there tends to be comparatively little information about these topics online, those looking to spread disinformation can create sites that exploit these data voids.

Golebiewski and boyd provide an example in which the term “Sutherland Springs, Texas” became a much more popular search than it had ever previously been in response to news reports of an active shooting in November of 2017. However, since there was so little information online about Sutherland Springs prior to the event, it was more difficult for search engines to determine which of the new sites and posts should rise to the top of the search results and which should sink to the bottom of the pile. This is the kind of data void that can be exploited by those looking to spread disinformation, especially when it comes to search engines like Bing that seem to struggle with distinguishing trustworthy sites from untrustworthy ones.

We’ve seen that some responsibility clearly falls on Bing itself to help stem the flow of disinformation, but we perhaps also need to be more vigilant about trusting the sites that turn up for the kinds of terms Golebiewski and boyd describe. And, of course, we could try our best to convince the less computer-literate people in our lives to change some of their browsing habits.

Questioning the Morality of Raising Neanderthal


In 2013, Harvard professor George Church raised the possibility that an “extremely adventurous female human” might one day serve as a surrogate mother to a cloned Neanderthal child. In the recent bestseller Sapiens, author and historian Yuval Noah Harari discusses the possibility of resurrecting Neanderthal, reintroducing the idea into popular discussion. Bringing Neanderthal back to life would be an instance of what many scientists call de-extinction. The idea that something like this may be possible is certainly both intriguing and entertaining. But, to paraphrase Jurassic Park, when it comes to using emerging technology in this way, are people spending so much time thinking about whether they could that they haven’t stopped to consider whether they should?

One argument in favor of using the technology is an argument for a kind of reparations paid to other species for our history of destructive, parasitic behavior. Many historians believe that homo sapiens were likely largely responsible for the extinction of Neanderthal. The question of whether we ought to resurrect our long-lost cousins is a species of a larger question pertaining to human obligation. It is undeniable that homo sapiens have been among the planet’s most destructive animals—we are responsible for the extinction of a startling number of species. As genetic technology progresses, do humans have an obligation to revive the species they’ve wiped out? How many species ought we to resurrect? All of them? Only some of them? How should we decide? Should we only revive the species that continue to exist in collective memory? Do we have an obligation to revive long-lost species?

This motivates questions about the nature of our moral obligations. Do we have such obligations to species, or only to concrete entities? If we have obligations to species, then perhaps we do have an obligation to create new members of long lost species. If we have obligations only to individual sentient beings, then we aren’t really atoning for our sins by bringing to life those new individual beings; after all, those aren’t the beings that we harmed.

Is biodiversity good in itself, regardless of what it brings about? If so, perhaps the more species we revive the better? Should we instead focus on the potential consequences of bringing a species back to life? If so, consequences for whom? If it was anthropocentrism that took the species out in the first place, and if we take ourselves to have moral obligations to bring species back into existence, we probably shouldn’t appeal to anthropocentric considerations in our arguments for which species, if any, to bring back.

Many of those interested in reviving extinct species are not motivated by ethical considerations. Instead, they want to test the limits of what is scientifically possible. This raises a distinct set of ethical questions. Many argue that knowledge is intrinsically valuable. If this is the case, perhaps we ought to maximize the things that we know. This would include knowledge of how to revive lost species and of what would happen if we did. It may well be, however, that not everything that is, in principle, knowable, is something that should be known. Perhaps we need to put some checks on what we do with emerging technologies.

Some raise ethical concerns with the cloning of Neanderthal in particular. The history of western thought is replete with examples of philosophers extolling the virtues of human reason. Neanderthals, if revived, would share many of the same reasoning capabilities as humans. Is it morally defensible to intentionally create creatures that share features in common with humans, but whom we know will be significantly less intelligent?

As it is, homo sapiens aren’t great at reasoning about species membership.  We tend to treat the suffering of animals like cats and dogs very seriously, while ignoring the suffering of others, especially those that we are accustomed to using for food. In what ways should we expect humans to treat Neanderthals? Would homo sapiens treat differences in species membership as if it justified differences in moral treatment? Neanderthals would surely be intelligent enough to include in the category of persons. Would homo sapiens be disinclined to treat them with the dignity that membership in the moral community carries?  

Objections to reviving Neanderthals tend to rest on the idea that the species is very similar to our own, but less intelligent. Interestingly, intelligence doesn’t seem to be a factor in the arguments for or against bringing other species back from extinction. When scientists consider reviving, for example, a woolly mammoth, they aren’t worried about whether the resulting creatures will be too smart or not smart enough. Is it simply that we are concerned about creating intelligent beings that resemble us too closely? Could that possibly be a morally relevant consideration?

Another objection to the whole enterprise has to do with the fact that the planet faces overpopulation and resource depletion already. Do we really want to introduce new beings into a situation like that? Would doing so make matters worse for the beings that already exist?

Other arguments have to do with the scope of the project. Arguably, it is cruel to create a member of a species without creating others with which that being can interact. We know that, like humans, Neanderthals were social animals. A crucial part of living a flourishing life for a Neanderthal would be the ability to interact freely with other Neanderthals. If our experiment doesn’t allow for that possibility, it is probably best to refrain from engaging in it at all.

Finally, were this experiment to ever take place, researchers would need to be very careful. Specific family relationships would likely matter to Neanderthals in very similar ways to the ways in which they matter to humans. These beings wouldn’t be reducible to an experiment.

Admittedly, this scenario is far-fetched, but it is more than just sci-fi. In 2003, scientists brought back the previously extinct Pyrenean ibex, only to watch it die shortly after birth.

Genetic Research in the Navajo Nation


In 2002, the Navajo Nation placed a moratorium on genetic research within its territorial jurisdiction. Among the motivations were concerns about the misuse of data and the potential for privacy violations. Many members of the Navajo Nation were opposed to the moratorium, primarily because of the medical benefits of genetic testing. This month, the Navajo Nation announced that it is considering lifting the moratorium.

Continue reading “Genetic Research in the Navajo Nation”

The Moral Dimensions of the Research Reproducibility Crisis


The labor of scientists has benefited society tremendously. Advancements in medicine and technology have improved both the length and the quality of human lives. Scientific studies have been and continue to be a crucial part of that process. Science, when done well, is indispensable to a healthy, happy, curious human race. Unfortunately, science isn’t always done well. When done poorly, studies can have disastrous effects. People tend to trust claims made by scientists, and that trust turns out to be unwarranted if something has gone wrong with the research.

Continue reading “The Moral Dimensions of the Research Reproducibility Crisis”

Remembering the Underground Schools of Kosovo

An abandoned school lies among the neighborhoods dotting the outskirts of Prishtina, Kosovo’s capital city. Forgotten by some and removed from the public eye, the school is unimposing, yet instantly recognizable. Jutting out amongst the winding alleyways, the building’s unmistakable silhouette rises above the surrounding homes. Its fire-blackened walls are crumbling into the hallways, and there are few indications that it used to be a place of learning; from the outside, it appears to be simply a burned-out shell of a building. Yet the piles of desks and the faded chalkboards at the head of several rooms make the building’s former purpose clear.

Continue reading “Remembering the Underground Schools of Kosovo”

Autonomy and Non-Domination in International Clinical Research

In February, Dr. Danielle Wenner presented her talk “Autonomy and Non-Domination in International Clinical Research” as a part of the Young Philosophers Lecture Series hosted by the Prindle Institute and the DePauw Philosophy Department. Next week, we’ll post Dr. Michael Hannon’s talk, “Does Knowledge Matter?”

Throughout May and June, we’ll continue to post videos of each talk (also available on YouTube).

Enjoy, and be sure to let us know what you think in the comments!

Optimizing the IRB

For the average person, the notion of medical research may conjure dramatic images of lab-coated scientists handling test tubes and analyzing data. What hardly ever comes up, though, is a process some researchers dread: approval by an institutional review board (IRB). Notoriously lengthy and sometimes difficult to navigate, the process is an oft-unseen yet critical piece of conducting research. And, as CNN contributor Robert Klitzman argues, the demands it places on researchers in its current form may have become more of a burden on research than a benefit.

Continue reading “Optimizing the IRB”