
The Wrong of Explicit Simulated Depictions

photograph of Taylor Swift performing on stage with image on screen in background

In late January, images began circulating on social media that appeared to be sexually explicit images of pop star Taylor Swift. One post on X (formerly Twitter) reached 47 million views before it was deleted. The images were, in fact, fake: the products of generative AI. Recent reporting traces their origin to a thread on the online forum 4chan, where users “played a game” that involved using generative AI to create violent and/or sexual images of female celebrities.

This incident has drawn renewed public attention to another potentially harmful use of AI and has prompted action from legislators. H.R. 6943, the “No AI Fraud Act,” introduced in January (before the Swift incident), would, if passed, hold individuals who create or distribute simulated likenesses of a person or their voice liable for damages. EU negotiators agreed on a bill that would criminalize sharing explicit simulated content. The Utah State Legislature has introduced a bill, expanding previous legislation, to outlaw sharing AI-generated sexually explicit images.

There is certainly much to find disturbing about the ability of AI to create these fakes. However, it is worth carefully considering how and why explicit fakes harm those depicted. Developing a clear explanation for why such images are harmful (and what makes some images more harmful than others) goes some way toward determining how we ought to respond to their creators and distributors. Intuitively, the more significant the harm, the more appropriate a greater punishment would be.

For the purposes of this discussion, I will refer to content that is created by an AI as a “fake” or a “simulation.” AI-generated content that depicts the subject in a sexualized manner will be referred to as an “explicit fake” or “explicit simulation.”

Often, the worry about simulated likenesses of people deals with the potential for deception. Recently, in New Hampshire, a series of robocalls using an AI-generated voice mimicking President Joe Biden instructed Democrats not to vote in the upcoming primary election. An employee of a multinational corporation transferred $26 million to a scammer after a video call featuring AI-generated likenesses of their co-workers. The examples go on. Each of these cases is morally troubling because it involves using AI deceptively for personal or political gain.

However, it is unclear that we can apply the same rationale to explicit fakes. They may be generated purely for the sake of sexual gratification rather than material or competitive gain. As a result, the potential for ill-gotten personal or political gain is not as high. Further, they may not require deception or trickery to achieve their end (more on this later). So, what precisely is morally wrong with creating and sharing explicit simulations?

In an earlier analysis, Kiara Goodwine notes that one ethical objection to explicit simulations is that they depict a person’s likeness without their consent. Goodwine is right. However, it seems that there is more wrong here than this. If it were merely a matter of depicting someone’s likeness, particularly their unclothed likeness, without their consent, then imagining someone naked for the purposes of sexual gratification would be as wrong as creating an explicit fake. I am uncertain of the morality of imagining others in sexual situations for the sake of personal gratification. Having never reflected seriously on the morality of the practice, I am open to being convinced that it is wrong. Nonetheless, even if imagining another sexually without their consent is wrong, it is surely less wrong than creating or distributing an explicit fake. Thus, we must find further factors that differentiate AI creations from private mental images.

Perhaps the word “private” does significant work here. When one imagines another in a sexualized way without their consent, one cannot share that image with others. Yet, as we saw with the depiction of Swift, images posted on the internet may be easily and widely shared. Thus, a crucial component of what makes explicit fakes harmful is their publicity or at least their potential for publicity. Of course, simulations are not the only potentially public forms of content. Compare an explicit fake to, say, a painting that depicts the subject nude. Both may violate the subject’s consent and both have the potential for publicity. Nonetheless, even if both are wrong, the explicit deepfake seems in some way worse than the painting. So, there must be an additional factor contributing to the wrongs of explicit simulations.

What makes a painting different from the AI-created image is believability. When one observes a painting or other human-created work, one recognizes that it depicts something which may or may not have occurred. Perhaps the subject sat for the creator and allowed them to depict the event. Or perhaps it was purely fabricated by the artist. Yet what appear to be videos, photos, or recorded audio seem different. They strike us with an air of authenticity or believability. Pics or it didn’t happen. When explicit content is presented in these forms, it is much easier for viewers to believe that it depicts real events. Note that viewers are not required to believe the depictions are real for them to achieve their purpose, unlike in the deception cases above. Nonetheless, the likelihood that viewers believe in the veracity of an explicit simulation is significantly higher than with other explicit depictions like paintings.

So, explicit fakes seem to generate harm through the combination of three factors. First, those depicted did not consent. Second, explicit fakes are often shared publicly, or at least may easily be shared. Third and finally, they seem worse than other false sexualized depictions because they are more believable. These are the reasons why explicit fakes are harmful, but what precisely is the nature of the harm?

The harms may come in two forms. First, explicit simulations may create material harms. As we see with Swift, those depicted in explicit fakes are often celebrities. A significant portion of a celebrity’s appeal depends on their brand; they cultivate a particular audience based on the content they produce and their public behavior, among other factors. Explicit fakes threaten a celebrity’s career by damaging that brand. For instance, someone who makes a career by creating content that derives its appeal, in part, from its inoffensive nature may see their career suffer as a result of public, believable simulations depicting them in a sexualized fashion. Indeed, the No AI Fraud Act stipulates that victims ought to be compensated for the material harms that fakes have caused to their career earnings. Furthermore, even for a non-celebrity, explicit fakes can be damaging. They could place one in a position of having to explain away fraudulent sexualized images to an employer, a partner, or a family member. Even if those people understand that the images are not real, the images may nonetheless bias their judgment against the person depicted.

However, explicit fakes still produce harms even if material consequences do not come to bear. The harm takes the form of disrespect. Ultimately, by ignoring the consent of the parties depicted, those who create and distribute explicit fakes are failing to acknowledge the depicted as agents whose decisions about their body ought to be respected. To generate and distribute these images seems to reduce the person depicted to a sexual object whose purpose is strictly to gratify the desires of those viewing the image. Even if no larger harms are produced, the mere willingness to engage in the practice speaks volumes about one’s attitudes towards the subjects of explicit fakes.

Ethics of Science

Science as an institution holds an influential role in our society. A 2016 Pew Research Center survey, for instance, found that the scientific community is the second most trusted institution in the U.S. But there are a host of ethical questions that arise both in the practice of science and in the study of its history.


Bias in Tech with Meredith Broussard

Meredith Broussard is a data journalist working in the field of algorithmic accountability. She writes about the ways in which race, gender and ability bias seep into the technology we use every day.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Meredith Broussard, “More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech”

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Funk and Flash” by Blue Dot Sessions

“Rambling” by Blue Dot Sessions

The Ethics of AI Behavior Manipulation

photograph of server room

Recently, news came from California that police were playing loud, copyrighted music when responding to criminal activity. During the investigation of a stolen vehicle report, video was taken of the police blasting Disney songs, like those from the movie Toy Story. The reason the police were doing this was to make it easier to take down footage of their activities: if the footage contains copyrighted music, then a streaming service like YouTube will flag it and remove it, or so the reasoning goes.

A case like this presents several ethical problems, but in particular it highlights an issue of how AI can change the way that people behave.

The police were taking advantage of what they knew about the algorithm to manipulate events in their favor. This raises obvious questions: Does the way AI affects our behavior present unique ethical concerns? Should we be worried about how our behavior is adapting to suit an algorithm? When is it wrong to use one’s understanding of an algorithm as leverage for one’s own benefit? And, if there are ethical concerns about algorithms having this effect on our behavior, should they be designed in ways that encourage us to act ethically?

It is already well known that algorithms can affect your behavior by creating addictive impulses. Not long ago, I noted how the attention economy incentivizes companies to make their recommendation algorithms as addictive as possible, but there are other ways in which AI is altering our behavior. Plastic surgeons, for example, have noted a rise in what is being called “Snapchat dysmorphia,” where patients desperately want to look like their Snapchat filter. The rise of deepfakes is also encouraging manipulation and deception, making it more difficult to tell reality apart from fiction. Recently, philosophers John Symons and Ramón Alvarado have even argued that such technologies undermine our capacity as knowers and diminish our epistemic standing.

Algorithms can also manipulate people’s behavior by creating measurable proxies for otherwise immeasurable concepts. Once the proxy is known, people begin to strategically manipulate the algorithm to their advantage. It’s like knowing in advance what a test will include and then simply teaching to the test. YouTubers chase whatever feature, function, length, or title they believe the algorithm will pick up and turn their video into a viral hit. It’s been reported that music artists like Halsey are frustrated by record labels who want a “fake viral moment on TikTok” before they will release a song.

This is problematic not only because viral TikTok success may be a poor proxy for musical success, but also because the proxies in the video that the algorithm is looking for also may have nothing to do with musical success.

This looks like a clear example of someone adapting their behavior to suit an algorithm for bad reasons. On top of that, the lack of transparency creates a market for those who know more about the algorithm and can manipulate it to take advantage of those that do not.

Should greater attention be paid to how algorithms generated by AI affect the way we behave? Some may argue that these kinds of cases are nothing new. The rise of the internet and new technologies may have changed the means of promotion, but trying anything to drum up publicity is something artists and labels have always done. Arguments about airbrushing and body image also predate the debate about deepfakes. However, if there is one aspect of this issue that appears unique, it is the scale at which algorithms can operate – a scale which dramatically affects their ability to alter the behavior of great swaths of people. As philosopher Thomas Christiano notes (and many others have echoed), “the distinctive character of algorithmic communications is the sheer scale of the data.”

If this is true, and one of the most distinctive aspects of AI’s ability to change our behavior is the scale at which it is capable of operating, do we have an obligation to design them so as to make people act more ethically?

For example, in the book The Ethical Algorithm, the authors present the case of an app that gives directions. When an algorithm is choosing the directions to give you, it could try to ensure that your route is the most efficient one for you. However, doing the same for everyone could lead to a great deal of congestion on some roads while other roads go under-used, making for an inefficient use of infrastructure. Alternatively, the algorithm could be designed to coordinate traffic, making for a more efficient overall solution, but at the cost of potentially giving you personally less efficient directions. Should an app cater to your self-interest or the city’s overall best interest?
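As a rough sketch of the trade-off being described here (the road model, numbers, and code below are illustrative assumptions, not an example taken from the book), consider the classic two-road congestion case in Python:

```python
# Illustrative sketch of selfish vs. coordinated routing (assumed toy model).
# 1,000 drivers travel from A to B. The highway's travel time grows with its
# load; the side road always takes 1 hour regardless of how many use it.

NUM_DRIVERS = 1000

def highway_time(load):
    # Congestion-sensitive: travel time equals the fraction of drivers using it.
    return load / NUM_DRIVERS

SIDE_ROAD_TIME = 1.0  # constant, congestion-free

def average_time(highway_load):
    # Average trip time across all drivers for a given split.
    side_load = NUM_DRIVERS - highway_load
    total = highway_load * highway_time(highway_load) + side_load * SIDE_ROAD_TIME
    return total / NUM_DRIVERS

# "Selfish" routing: each driver picks whichever road is fastest for them.
# Since the highway is never slower than the side road, everyone ends up on it.
selfish_load = NUM_DRIVERS

# "Coordinated" routing: the app searches for the split that minimizes the
# average trip time across all drivers.
coordinated_load = min(range(NUM_DRIVERS + 1), key=average_time)

print(f"Selfish:     everyone on the highway, average trip = {average_time(selfish_load):.2f} h")
print(f"Coordinated: {coordinated_load} on the highway, average trip = {average_time(coordinated_load):.2f} h")
```

With these assumed numbers, routing everyone selfishly yields an average trip of 1.0 hours, while the coordinated split brings the average down to 0.75 hours, even though the drivers assigned to the side road gain nothing personally from the coordination.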

These issues have already led to real-world changes in behavior as people attempt to cheat the algorithm to their benefit. In 2015, there were reports of people submitting false reports of traffic accidents or traffic jams to the app Waze in order to deliberately re-route traffic elsewhere. Cases like this highlight the ethical issues involved. An algorithm can systematically change behavior, and just like trying to ease congestion, it can attempt to achieve better overall outcomes for a group without everyone having to deliberately coordinate. However, anyone who becomes aware of the system of rules and how they operate will have the opportunity to try to leverage those rules to their advantage, just like the YouTube algorithm expert who knows how to make your next video go viral.

This in turn raises issues about transparency and trust. The fact that it is known that algorithms can be biased and discriminatory weakens trust that people may have in an algorithm. To resolve this, the urge is to make algorithms more transparent. If the algorithm is transparent, then everyone can understand how it works, what it is looking for, and why certain things get recommended. It also prevents those who would otherwise understand or reverse engineer the algorithm from leveraging insider knowledge for their own benefit. However, as Andrew Burt of the Harvard Business Review notes, this introduces a paradox.

The more transparent you make the algorithm, the greater the chances that it can be manipulated and the larger the security risks that you incur.

This trade-off between security, accountability, and manipulation is only going to become more important the more that algorithms are used and the more they begin to affect people’s behavior. Some outline of the specific purposes and intentions of an algorithm, as they pertain to its potential large-scale effect on human behavior, should be a matter of record if there is going to be public trust. Particularly when we look to cases like climate change or even the pandemic, we see the benefit of coordinated action, but there is clearly a growing need to address whether algorithms should be designed to support these collective efforts. There also needs to be greater focus on how proxies are selected when measuring something, and on whether those approximations continue to make sense once it is known that there are deliberate efforts to manipulate them and turn them to an individual’s advantage.

Phantom Patterns and Online Misinformation with Megan Fritts

We take in massive amounts of information on a daily basis. Our brains use something called pattern recognition to try to sort through and make sense of this information. My guest today, the philosopher Megan Fritts, argues that in many cases, the stories we tell ourselves about the patterns we see aren’t actually all that meaningful. And worse, these so-called phantom patterns can amplify the problem of misinformation.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com.

Links to people and ideas mentioned in the show

  1. “Online Misinformation and ‘Phantom Patterns’: Epistemic Exploitation in the Era of Big Data” by Megan Fritts and Frank Cabrera
  2. The Right to Know by Lani Watson
  3. Definition of the term “epistemic”
  4. Section 230 of the Communications Decency Act

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Golden Grass” by Blue Dot Sessions

“Pintle 1 Min” by Blue Dot Sessions

 

Informed Consent and the Joe Rogan Experience

photograph of microphone and headphones in recording studio

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


The Joe Rogan Experience (JRE) podcast was again the subject of controversy when a recent episode was criticized by scientific experts for spreading misinformation about COVID-19 vaccinations. This was not the first time: Rogan has frequently been on the hot seat for espousing views on COVID-19 that contradict the advice of scientific experts, and for entertaining guests who provided similar views. The most recent incident involved Dr. Robert Malone, who relied on his medical credentials to make views that have been widely rejected seem more reliable. Malone has himself recently been at the center of a few controversies: he was kicked off of YouTube and Twitter for violating their respective policies regarding the spread of misinformation, and his appearance on the JRE podcast has prompted some to call for Spotify (where the podcast is hosted) to employ a more rigorous misinformation policy.

While Malone made many dubious claims during his talk with Rogan – including that the public has been “hypnotized,” and that policies that have been enforced by governments are comparable to policies enforced during the Holocaust – there was a specific, ethical argument that perhaps passed under the radar. Malone made the case that it was, in fact, the moral duty of himself (and presumably other doctors and healthcare workers) to tell those considering the COVID-19 vaccine about a wide range of potential detrimental effects. For instance, in the podcast he stated:

So, you know my position all the way through this comes off of the platform of bioethics and the importance of informed consent, so my position is that people should have the freedom of choice particularly for their children… so I’ve tried really hard to make sure that people have access to the information about those risks and potential benefits, the true unfiltered academic papers and raw data, etc., … People like me that do clinical research for a living, we get drummed into our head bioethics on a regular basis, it’s obligatory training, and we have to be retrained all the time… because there’s a long history of physicians doing bad stuff.

Here, then, is an argument that someone like Malone may be making, and that you’ve potentially heard at some point over the past two years: Doctors and healthcare workers have a moral obligation to provide patients who are receiving any kind of health care with adequate information in order for them to make an informed decision. Failing to provide the full extent of information about possible side-effects of the COVID-19 vaccine represents a failure to provide the full extent of information needed for patients to make informed decisions. It is therefore morally impermissible to refrain from informing patients about the full extent of possible consequences of receiving the COVID-19 vaccine.

Is this a good argument? Let’s think about how it might work.

The first thing to consider is the notion of informed consent. The general idea is that providing patients with adequate information is required for them to have agency in their decisions: patients should understand the nature of a procedure and its potential risks so that the decision they make really is their decision. Withholding relevant information would thus constitute a failure to respect the agency of the patient.

The extent and nature of information that patients need to be informed of, however, is open for debate. Of course, there’s no obligation for doctors and healthcare workers to provide false or misleading information to patients: being adequately informed means receiving the best possible information at the doctor’s disposal. Many of the worries surrounding the advice given by Malone, and others like him, pertain to just this worry: the concerns that they have are overblown, or have been debunked, or are generally not accepted by the scientific community, and thus there is no obligation to provide information that falls under those categories to patients.

Regardless, one might still think that in order to have fully informed consent, one should be presented with the widest range of possible information, after which the patient can make up their own mind. Of course, Malone’s thinking is much closer to the realm of the conspiratorial – for example, he stated during his interview with Rogan that scientists manipulate data in order to appease drug companies, as well as his aforementioned claims to mass hypnosis. Even so, if these views are genuinely held by a healthcare practitioner, should they present them to their patients?

While informed consent is important, there is also debate about how fully informed, exactly, one ought to be, or can be. For instance, while an ideal situation would be one in which patients had a complete, comprehensive understanding of the nature of a relevant procedure, treatment, etc., there is reason to think that many patients fail to achieve that degree of understanding even after being informed. This isn’t really surprising: most patients aren’t doctors, and so will be at a disadvantage when it comes to having a complete medical understanding, especially if the issue is complex. A consequence, then, may be that patients who are not experts could end up in a worse position when it comes to understanding the nature of a medical procedure when presented with too much information, or else information that could lead them astray.

Malone’s charge that doctors are failing to adhere to their moral duties by not fully informing patients of a full range of all possible consequences of the COVID-19 vaccination therefore seems misplaced. While people may disagree about what constitutes relevant information, a failure to disclose all possible information is not a violation of a patient’s right to be informed.

Thinking about Trust with C. Thi Nguyen

Many of us rely heavily on our smartphones and computers. But does it make sense to say we “trust” them? On today’s episode of Examining Ethics, the philosopher C. Thi Nguyen explores the relationship of trust we form with the technology we use. Not only can we trust non-human objects like smartphones, we tend to trust those objects in an unquestioning way; we’re not thinking about it all that much. While this unquestioning trust makes our everyday lives easier, we don’t recognize just how vulnerable we’re making ourselves to large and increasingly powerful corporations.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“The Big Ten” by Blue Dot Sessions

“Lemon and Melon” by Blue Dot Sessions

Vaccine Equity with Govind Persad

Many of us have vaccines on the brain recently–whether because we’ve just received a shot, or because we are trying to access one. Who gets vaccinated and when they get their doses is a decision largely in the hands of state public health officials. Many states use age as the primary factor in determining who gets priority. On this episode of Examining Ethics, Dr. Govind Persad–an expert in bioethics and health care law–argues that legislators should think through more equitable options for distributing vaccines.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Dr. Govind Persad
    1. “Setting Priorities Fairly in Response to Covid-19…”
    2. Recorded talk, “Implementing COVID-19 Vaccine Distribution: Legal and Equity Dimensions”
  2. CDC’s COVID-19 Vaccine Rollout Recommendations
  3. Myths and Facts about COVID-19 Vaccines

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Partly Sage” by Blue Dot Sessions

“Colrain” by Blue Dot Sessions

The Kindness of Strangers with Michael McCullough

How did humans turn from animals who were only inclined to help their offspring to the creatures we are today–who regularly send precious resources to total strangers? With me on the show today is Michael McCullough, who explores this difficult question in his book, The Kindness of Strangers: How a Selfish Ape Invented a New Moral Code.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Michael McCullough, The Kindness of Strangers: How a Selfish Ape Invented a New Moral Code
  2. W.D. Hamilton and the gene for altruism
  3. Robert Trivers and reciprocal altruism
  4. Ancient Mesopotamia
  5. Humanity’s turn to agriculture (the Neolithic Revolution)
  6. The Code of Hammurabi
  7. The Axial Age
  8. The Golden Rule
  9. Peter Singer

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“The Zeppelin” by Blue Dot Sessions from sessions.blue (CC BY-NC 4.0)

“Silk and Silver” by Blue Dot Sessions from sessions.blue (CC BY-NC 4.0)

The Quandary of Contact-Tracing Tech

image of iphone indicating nearby infections

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


All over the country, states are re-opening their economies. This is happening in defiance of recommendations from experts in infectious disease, which suggest that states only re-open after they have seen a fourteen-day decline in cases, have capacities to contact trace, have sufficient personal protective equipment for healthcare workers, and have sufficient testing capabilities to identify hotspots and deal with problems when they arise.

Experts do not insist that things need to be shut down until the virus disappears. Instead, we need to change our practices; we need to open only when it is safe to do so and we need to employ common sense practices like social distancing, mask-wearing, and hand-washing and sanitizing when we take that step. The ability to identify people who either have or might have coronavirus and to contact those with whom they might have come into contact could play a significant role in this process. Instead of isolating everyone, we could isolate those we have good reason to believe may have become infected.

Different countries have approached this challenge differently. Many have made use of technology to track outbreaks of the virus. Without a doubt, these approaches involve balancing the value of public safety against concerns about personal privacy and undue governmental intrusion into the lives of private citizens.

Many in the West were surprised to hear that Shanghai Disney was scheduled to re-open, which it did on May 11th. Visitors to the park won’t have the Disney experience that they would have had last summer. First, unsurprisingly, Disney is restricting the number of people it will allow into the park to 24,000 a day, down from its typical 80,000 daily guests. When guests arrive, they must have their temperatures taken, must use hand sanitizer, and must wear masks. Crucially, they must open an app on their phone at the gate that demonstrates to the attendant that their risk level is green.

Since the COVID-19 outbreak, people in China have been required to participate in a system that they call the “Alipay Health Code.” To participate, people download an app on their phones which makes use of geolocation to track the whereabouts of everyone who has it. People are not required to have a COVID-19 test in order to comply with the demands of the app. Instead, the app tracks how close people have come to others who have confirmed cases of the virus. The app assigns a person a QR code depending on their risk level. People with a green designation are low risk and can travel through the country and can go to places like restaurants, shopping malls, and amusement parks with no restrictions. Those with a yellow designation must self-quarantine for nine days. If a person has a red designation, they must enter mandatory government quarantine.
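To make the mechanics of this kind of rule concrete, here is a purely hypothetical sketch of a color-code classifier. The real Alipay Health Code is proprietary, so the data fields, thresholds, and quarantine rules below are illustrative assumptions only:

```python
# Hypothetical sketch of a color-code risk classifier like the one described
# above. The actual Alipay Health Code is proprietary; the fields and rules
# here are illustrative assumptions, not the real system's logic.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    confirmed_case: bool             # has this person tested positive?
    close_contacts_with_cases: int   # recent proximity events to confirmed cases

def risk_code(p: Person) -> str:
    """Return 'green', 'yellow', or 'red' following the designations in the article."""
    if p.confirmed_case:
        return "red"      # mandatory government quarantine
    if p.close_contacts_with_cases > 0:
        return "yellow"   # nine-day self-quarantine
    return "green"        # free movement

visitor = Person("park visitor", confirmed_case=False, close_contacts_with_cases=0)
print(risk_code(visitor))  # -> "green", so the gate attendant lets them in
```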

At first glance, this app appears to be a reasonable way of finding balance between preventing the spread of disease on one hand, and opening up the economy and freeing people from isolation on the other. China isn’t simply accepting the inevitable—opening up the economy and disregarding its obligation to vulnerable populations. Instead, it is trying to maximize the well-being of society at large.

Things are more complicated than they might originally appear. First, the process is not transparent to citizens. The standards for reassignment from one color designation to another are not made public. Some people are stuck in mandatory government quarantine without knowing why they are there or how long they might expect to be detained.

There are also concerns about regional discrimination. It appears that a person can be designated a particular threat level simply because they are from or have recently visited a particular region. Citizens have no control over how this process is implemented, and the concern is that decision-making metrics might be discriminatory and might serve to reinforce oppressive social conditions that existed before COVID-19 was an issue. We know that COVID-19 disproportionately affects people living in poverty who are forced to work in unsafe conditions. This kind of tracking may make life for these populations even worse.

There are also significant concerns about the introduction of a heightened degree of governmental surveillance. Before COVID-19 hit, the Chinese government had already slowly begun to implement a social credit system that assigns points to people based on their social behaviors. These points then dictate the quality of services for which the people might be eligible. The Alipay Health Code increases governmental surveillance and encroachment. When people download the Alipay app, the program that is launched includes a command labeled “reportInfoAndLocationToPolice” that sends information about that person to a secure server. It is unclear for what purpose that information will be used in the future. It is also unclear how long it will be mandatory for people in China to have this app on their phones.

But China is not the only country that is using tracking technology to manage the spread of COVID-19. Other countries doing this include South Korea, Singapore, Taiwan, Austria, Poland, the U.K., and the United States. There are advantages and disadvantages to each system. Each system reflects a different balance of important societal values.

South Korea’s system keeps its residents informed of the movement of people who have tested positive for COVID-19. The government sends out texts informing people of places these individuals have been so that others who have also been to those places know whether they might be at risk. This information also lets people know which places might be hotspots so they know to avoid those places. All of this information is useful to prevent the spread of the virus. That said, there are serious challenges here too. Information about the location of individuals at particular times leads to speculation about their behaviors that might lead to discrimination and harassment. The information is anonymous in principle; COVID-19 patients are assigned numbers that are used in reports. In practice, however, it is often fairly easy to deduce who the people are.

Some countries, like the U.K., Singapore, and the United States, have “opt-in” tracking programs. Participation in these programs is voluntary, and there tend to be regional differences in what they do and how they operate. Singapore uses a system called “TraceTogether.” Users of the app turn on Bluetooth capabilities for their devices. Each device is associated with an anonymous code. Devices communicate with one another and store each other’s anonymous codes. Then, if a person has interacted with someone who later tests positive, they are informed that they are at risk. They can then take action; they may be tested or may self-quarantine. This system appears to have established a comfortable balance between competing interests.
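A minimal sketch of this kind of decentralized contact logging might look like the following. This is not TraceTogether’s actual protocol; the token scheme, local storage, and matching step are simplified assumptions:

```python
# Simplified, assumed sketch of Bluetooth-based contact logging and exposure
# notification. Not TraceTogether's real protocol.
import secrets

class Device:
    def __init__(self, owner: str):
        self.owner = owner
        self.my_code = secrets.token_hex(8)   # anonymous identifier for this device
        self.seen_codes = set()               # codes of nearby devices, logged locally

    def encounter(self, other: "Device") -> None:
        # When two phones come within Bluetooth range, each stores the other's code.
        self.seen_codes.add(other.my_code)
        other.seen_codes.add(self.my_code)

    def check_exposure(self, infected_codes: set) -> bool:
        # After someone tests positive, their code is shared; each device checks
        # its own local log and alerts its owner on a match.
        return bool(self.seen_codes & infected_codes)

# Usage: Alice and Bob cross paths; Bob later tests positive.
alice, bob = Device("Alice"), Device("Bob")
alice.encounter(bob)
print(alice.check_exposure({bob.my_code}))  # True -> Alice is told she is at risk
```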

One problem, however, is that its voluntary nature results in low participation numbers—only about 1.5 million of Singapore’s 5.7 million people are using the app. What follows is that a person has the peace of mind of knowing that if they have been in contact with another app user who contracts COVID-19, they’ll know about it. However, this kind of system doesn’t achieve that much-desired balance between concerns for public safety and concerns for a healthy, functioning economy. If a person knows only about some, but not all, of the people they’ve encountered who have tested positive for COVID-19, they’re no safer out in the world as a consumer in a newly-opened economy. This app also does nothing to prevent the spread of the virus by asymptomatic people who may never feel the need to get tested because they feel fine.

There are other, less straightforward ways of collecting and using data about the spread of the virus. Government agencies are obtaining geo-tracking information from corporations like Google and Facebook. Most users don’t pay much attention when an app asks if it can track the user’s location. People tend to provide a morally meaningless level of consent—they click “okay” without even glancing at the terms and conditions. Corporations use this information for all sorts of purposes. For example, police agencies have accessed this information to help them solve crimes through a process of “digital dragnet.” Because these apps track people’s movements, they can help the government see who was present at sites later identified as hotspots and identify where the people at those sites at the time in question went next. This can help governments direct their attention to where it might do the most good.

Again, in many ways, this seems like a good thing. We don’t want to waste valuable time searching for information where there isn’t any to be found. It’s best instead to find the clues and follow them. On the other hand, this method of obtaining information highlights something troubling about trust and privacy in the United States. A Pew poll from November 2019 suggests that citizens view themselves as having very little control over who is collecting data about them and very little knowledge about what data is being collected or the purposes for which it is being used. Even so, people tend to pay very little attention to the fact that they are being tracked. They simply accept the notion that, if they want to use an app, they have to accept the terms and conditions.

People concerned about personal liberties are front and center on the public stage right now as their protests make for attention-catching headlines. People are unlikely to want to be forced by the government to use a tracking app. Their fears are not entirely unfounded—China’s program seems to open the door for human rights violations and a troubling amount of governmental surveillance of private citizens. Ironically, though, these people give that same information without any fuss to corporations through the use of apps. This may be even worse. At least in principle, governments exist for the good of the people, while the raison d’être of corporations is to make a profit.

The case of tracking poses a genuine moral dilemma. There are very good public health reasons to use technology to track and control the spread of the virus. There are also very good reasons to be concerned about privacy and human rights violations. Around 3,000 people died in the tragic terrorist attacks of September 11th, 2001. In response, Congress passed the Patriot Act, which significantly limited the privacy rights of the people. The way that respect for individual privacy changed at airports is also noteworthy. How much privacy should we be willing to give up in exchange for safety? And if we were willing to give up privacy for safety in response to 9/11, how much more willing should we be to do so when the death count is so much higher?

Moral Luck, Universalization, and COVID-19

photograph of toast and swank gathering

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


All over the country, people are making headlines for violating shelter-in-place and stay-at-home orders. Motivations for this behavior are diverse: some fail to recognize the gravity of the situation; some acknowledge that COVID-19 is bad but doubt that it is a threat to them personally; others, despite a lack of expertise in infectious disease, trust their gut instincts more than they trust the opinions of experts. Some people who defiantly resist orders insist that they are doing so to protect their constitutional rights. People are hosting parties, attending church services, and engaging in life-as-usual activity. Those who have been sheltering in place for over a month look on with incredulity and, often, anger. Why do these people behave as if rules, created in emergency circumstances for the health and safety of the community at large, don’t apply to them?

Some people who choose to go out and spend time near others live in states in which doing so is currently against the law. Others live in Arkansas, Iowa, Nebraska, North Dakota, South Dakota, Utah, or Wyoming — states in which staying at home has been recommended, but not required by their respective governors. An answer to the question of whether going out in these conditions is legal doesn’t settle the question of whether it is ethical.

Plenty of people appear to be comfortable gambling with general health and well-being. In one case that made headlines, notorious libertarian Ammon Bundy defied Idaho’s stay-at-home order, routinely hosting in-person meetings about the order, which he views as a restriction of civil liberties. Bundy announced his intention to host a massive Easter get-together of 1,000 people or more. In reality, 60 people attended the event, none of whom took any social distancing precautions. They did so in defiance of what they viewed as a governmental infringement on their right to choose.

What is it to make a choice? One plausible way of looking at it is that a choice is an endorsement—it is a recommendation. When I choose a course of action, I affirm that the action is, on some description, valuable. I affirm that it would be acceptable for another person to make the choice that I make under similar circumstances. In performing an action, I express that I view the action not only as an action that can be performed, but as an action that ought to be performed. After all, if I didn’t think it ought to be performed, what on earth possessed me to perform it? If that is the implication of choice, then we should be very selective in our choices. In his 1946 lecture Existentialism is a Humanism, philosopher Jean-Paul Sartre emphasizes the responsibility each person bears for their own choice. He said,

“When a man commits himself to anything, fully realizing that he is not only choosing what he will be, but is thereby at the same time a legislator deciding for the whole of mankind – in such a moment a man cannot escape from the sense of complete and profound responsibility.”

Our choices then, even when they seem to us to be somewhat narrow in scope, are not entirely private or personal matters.

A number of things follow from the idea that our choices are endorsements. First, our choices are no small matter because they define who we are as people. People may want to conceive of themselves as kind, empathetic, and caring, but the question of whether a person has those traits is determined by what they actually do, rather than by what they claim to value. In pandemic conditions, a choice to attend a party or to go into a crowded place when doing so is not necessary may seem to be of little consequence if, ultimately, no one gets hurt. On the other hand, those choices say something about the kinds of risks a person is willing to take on and the kind of danger to which that person is willing to expose others.

Second, if choices are recommendations, then there is a good chance that people will follow them—that’s what happens with recommendations. If, for instance, college students observe that some of their peers are gathering together with no apparent consequences, there is some chance that they might conclude that doing so is, after all, no big deal. Others their age are making themselves exceptions to shelter-in-place rules, why can’t they do so as well?

Many philosophers have had much to say about the morality of making an exception of oneself. The eighteenth-century philosopher Immanuel Kant urges us to think about whether our actions can be universalized—roughly, would it be acceptable if everyone performed the action we are considering performing? If not, then we are treating a principle that is morally binding on everyone else as if it doesn’t apply to us.

Decision-making in a pandemic demonstrates the moral importance of universalization powerfully. People who violate stay-at-home and shelter-in place-orders are counting on the fact that they are behaving as exceptions to the rules. If everyone followed the recommendations suggested by their actions, the disease would spread like wildfire, even faster than the rate at which it is now spreading. “But,” they might argue, “what is the real harm? If I don’t get sick, and if I don’t spread the disease, does it really matter if I saw some friends one Friday night in April?”

A person who makes this argument fails to recognize themselves as the recipient of what philosophers often refer to as moral luck. In his 1877 essay “The Ethics of Belief,” the philosopher W.K. Clifford describes a ship owner who sends his ship out to sea despite the fact that he had reason to believe it might not be seaworthy. The ship sinks and the passengers die. What if, instead, the ship didn’t sink? What if all of the passengers survived? Would this diminish the guilt of the ship owner? Clifford answers, “Not one jot. When an action is once done, it is right or wrong forever; no accidental failure of its good or evil fruits can possibly alter that. The man would not have been innocent, he would only have been not found out.” The ship owner got lucky in this case—no one discovered that he did something irresponsible. This doesn’t change how we should view his decision to send the ship off to sea; whatever the consequences turned out to be, his action was reckless.

Consider the following two cases. Tom and Mary both go out to a bar and become equally intoxicated. They both make the decision to drive their respective cars home while too impaired to operate a vehicle safely. They both live roughly the same distance from the bar. On the way home, Tom encounters a pedestrian whom he hits and kills. A pedestrian does not cross Mary’s path, and she arrives home safely. The fact that a pedestrian was present in one case but not the other was a matter of moral luck—neither Tom nor Mary had any control over that. That said, they both behaved equally recklessly and that is the decision for which they are morally responsible.

The same thing can be said about the decision to ignore critical recommendations during the COVID-19 pandemic. Such actions are reckless. Some people who disregard orders may not get the virus and they may not spread it to others. Nevertheless, their actions are not universalizable. They can’t be reasonably recommended to others. When these people take themselves to be defending their own liberties, they are really behaving selfishly and diminishing the liberty and well-being of others.

Expertise in the Time of COVID

photograph of child with mask hugging her mother

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


Admitting that someone has special knowledge that we don’t, or can do a job that we aren’t trained for, is not very controversial. We rarely hesitate to hire a car mechanic, accountant, carpenter, and so on, when we need them. Even if some of us could do parts of their jobs passably well, these experts have specialized training that gives them an important advantage over us: they can do it faster, and they are less likely to get it wrong. In these everyday cases, figuring out who is an expert and how much we can trust them is straightforward. They have a sign out front, a degree on the wall, a robustly positive Google review, and so on. If we happen to pick the wrong person—someone who turns out to be incompetent or a fraud—we haven’t lost much. We try harder next time.

But as our needs get more complicated (for example, when we need information about a pandemic disease and how best to fight it), and as our need for that kind of scientific information becomes politicized, figuring out who the experts are and how much to trust them is less clear.

Consider a question as seemingly simple as whether surgical masks help contain COVID-19. At first, experts said everyone should wear masks. Then other experts said masks won’t help against airborne viruses because the masks do not seal well enough to stop the tiny viral particles. Some said that surgical masks won’t help, but N95 masks will. Then some experts said that surgical masks could at least help keep you from getting the disease from others’ spittle, as they talk, cough, and sneeze. Still other experts said that even this won’t do because we touch the masks too often, undermining their protective capacity. Yet still others say that while the masks cannot protect you from the virus, they can protect others from you if you happen to be infected, “contradicting,” as one physician told me, “years of dogma.”

What are we to believe from this cacophony of authorities?

To be sure, some of the confusion stems from the novelty of the novel coronavirus. Months into the global spread, we still don’t know much about it. But a large part of the burden of addressing the public health implications lies not just in expert analysis but how expert judgments are disseminated. And yet, I have questions: If surgical masks won’t keep me from getting the infection because they don’t seal well enough, then how could they keep me from giving it to others? Is the virus airborne or isn’t it? What does “airborne” mean in this context? How do we pick the experts out of this crowd of voices?

Most experts are happy to admit that the world is messier than they would prefer, that they are often beset by the fickleness of nature. And after decades of research on error and bias, we know that experts, just like the rest of us, struggle with biased assumptions and cognitive limitations, the biases inherent in how those before them framed questions in their fields, and by the influence of competing interests—even if from the purest motives—for personal or financial ends. People who are skeptical of expertise point to these deficiencies as reasons to dismiss experts.

But if expertise exists, really exists, not merely as a political buzzword or as an ideal in the minds of ivory tower elitists, then, it demands something from us.

Experts understand their fields better than novices. They are better at their jobs than people who have not spent years or decades doing their work. And thus, when they speak about what they do, they deserve some degree of trust.

Happily, general skepticism about expertise is not widely championed. Few of us — even in the full throes of, for example, the Dunning-Kruger Effect — would hazard jumping into the cockpit of an airplane without special training. Few of us would refuse medical help for a severe burn or a broken limb. Unfortunately, much of the skepticism worth taking seriously attaches to topics that are likely to do more harm to others than to the skeptic: skepticism about vaccinations, climate change, and the Holocaust. If you happen to fall into one of these groups at some point in your life — I grew up a six-day creationist and evolution-denier — you know how hard it is to break free from that sort of echo chamber.

But even if you have extricated yourself from one distorted worldview, how do you know you’re not trapped in another? That you aren’t inadvertently filtering out or dismissing voices worth listening to? This is a challenge we all face when up against a high degree of risk in a short amount of time from a threat that is new and largely unknown and that is now heavily politicized.

Part of what makes identifying and trusting experts so hard is that not all expertise is alike. Different experts have differing degrees of authority.

Consider someone working in an internship in the first year out of medical school. They are an MD, and thus, an expert of sorts. Unfortunately, they have very little clinical experience. They have technical knowledge but little competence applying it to complex medical situations.

Modern medicine has figured out how to compensate for this lack of experience. New doctors have to train for several years under a licensed physician before they can practice on their own. To acquire sufficient expertise, they have to be immersed into the domain of their medical specialty. The point is that not every doctor has the same authority as every other, and this is true for other expert domains, as well.

A further complication is that types of expertise differ in how much background information and training is required to do their jobs well. Some types of expertise are closer to what philosopher Thi Nguyen calls our “cognitive mainland.” This mainland refers to the world that novices are familiar with, the language they can make sense of. For example, most novices understand enough about what landscape designers do to assess their competence. They can usually find reviews of their work online. They can even go look at some of their work for themselves. Even if they don’t know much about horticulture, they know whether a yard looks nice.

But expertise varies in how close to us it is. For example, what mortgage brokers do is not as close to us as what landscapers do. It is further from our cognitive mainland, out at sea, as it were. First-time home buyers need a lot of time to learn the language associated with the mortgage industry and what it means for them. The farther an expert domain is from a novice’s mainland, the more likely the expert is to be on what Nguyen calls a “cognitive island,” isolated from the resources that would let novices make sense of their abilities and authority.

Under normal circumstances, novices have some tools for deciding who is an expert and who is not, and for deciding which experts to trust and which to ignore. This is not easy, but it can be done. Looking up someone’s credentials, certifications, years of experience, recommendations, track records, and so on, can give novices a sense of someone’s competence.

As the expertise gets farther from novices’ cognitive mainland, they can turn to other experts in closely related fields to help them make sense of it. In the case of mortgages, for example, a novice might have a friend who works in real estate, or someone in banking, to help translate the relevant bits in a way that meets their needs. In other words, novices can use “meta-experts”: experts in a closely related domain who understand enough of the domain in question to help them choose experts in that domain wisely.

Unfortunately, during a public health emergency, uncertainty, time constraints, and politicization mean that all of these typical strategies can easily go awry. Experts who feel pressured by society or threatened by politicians can — even if inadvertently — manufacture a type of consensus. They can double-down on a way of thinking about a problem for the sake of maintaining the authority of their testimony. In some cases, this is a simple matter of groupthink. In other cases, it can seem more intentional, even if it isn’t.

Psychologist Philip Tetlock, in his book with Dan Gardner, Superforecasting: The Art and Science of Prediction (2015), explains how to prevent this sort of consensus problem by bringing together diverse experts on the same problem and suspending any hierarchical relationships among them. If everyone feels free to comment and if honest critique is welcomed, better decisions are made. In Are We All Scientific Experts Now? (2014), sociologist Harry Collins contends that this is also how peer review works in academic settings. Not everyone who reviews a scientific paper for publication is an expert in the narrow specialization of the researcher. Rather, they understand how scientific research works, the basic terminology used in that domain, and how new information in domains like it is generated. Not only can experts in related domains help us challenge groupthink and spur more creative solutions, they can help identify errors in research and reasoning because they understand how expertise works.

These findings are helpful for novices, too. They suggest that our best tool for identifying and evaluating expertise is, rather than pure consensus, consensus among a mix of voices close to the domain in question.

We might call this meta-expert consensus. Novices need not be especially close to a specialized domain to know whether someone working in it is trustworthy. They only have to be close enough to people close to that domain to recognize broad consensus among those who understand the basics in a domain.

Of course, how we spend our energy on experts matters. There are many questions that political and institutional leaders face that the average citizen will not. The average person need not invest energy on highly specialized questions like:

  • How should hospitals fairly allocate scarce resources?
  • How do health care facilities protect health care workers and vulnerable populations from unnecessary risks?
  • How can we stabilize volatile markets?
  • How do we identify people who are immune from the virus quickly so they can return to the workforce?

The payoff is too low and the investment too significant.

On the other hand, there are questions worth everyone’s time and effort:

  • Should I sanitize my groceries before or when I bring them into my living space?
  • How often can I reasonably go out to get groceries and supplies?
  • How can I safely care for my aging parent if I still have to go to work?
  • Should I reallocate my investment portfolio?
  • Can I still exercise outdoors?

Where are we on the mask question? It turns out that experts at the CDC are still debating their usefulness under different conditions. But here's an article that helps make sense of what experts are thinking about when they make recommendations about mask-wearing.

The work required to find and assess experts is not elegant. But neither is the world this pandemic is creating. And understanding how expertise works can help us cultivate a set of beliefs that, if not elegant, is at least more responsible.

Owning a Monopoly on Knowledge Production

photograph of Monopoly game board

With Elizabeth Warren's call to break up companies like Facebook, Google, and Amazon, there has been increasing attention to the role that large corporations play on the internet. The matter of limited competition within different markets has become an important area of focus; however, much of the debate centers on the economic and legal factors involved (such as whether there should be greater antitrust enforcement). The philosophical and moral issues have not received as much attention. If a select few corporations are responsible for the kinds of information we get to see, they are capable of exerting a significant influence on our epistemic standards, practices, and conclusions. This also makes the issue a moral one.

Last year Facebook co-founder Chris Hughes surprised many with his call for Facebook to be broken up. Referencing America's history of breaking up monopolies such as Standard Oil and AT&T, Hughes charged that Facebook dominates social networking and faces no market-based accountability. Earlier, Elizabeth Warren had also called for large companies such as Facebook, Google, and Amazon to be broken apart, claiming that they have bulldozed competition and are using private information for profit. Much of the focus on the issue has been on the mergers of companies like Facebook and Instagram or Google and Nest. The argument holds that these mergers are anti-competitive and are creating economic problems. According to lawyer and professor Tim Wu, "If you took a hard look at the acquisition of WhatsApp and Instagram, the argument that the effect of those acquisitions have been anticompetitive would be easy to prove for a number of reasons." For one, he cites the significant effect that such mergers have had on innovation.

Still, others have argued that breaking up such companies would be a bad idea. They note that a concept like social networking is not clearly defined, and thus it is difficult to say that a company like Facebook constitutes a monopoly in its market. Also, unlike Standard Oil, companies like Facebook or Instagram are not essential services for the economy, which undermines potential legal justifications for breaking these companies up. Most of these corporations also offer their services for free, which means that the typical concerns about monopolies and anticompetitive practices regarding prices and rising costs of services do not apply. Those who argue this tend to suggest that the problem lies with the capitalist system or that there is a lack of proper regulation of these industries.

Most of the proponents and opponents focus on the legal and economic factors involved. However, there are epistemic factors at stake as well. Social epistemologists study matters relating to questions like "how do groups come to know things?" or "how can communities of inquirers affect what individuals come to accept as knowledge?" In recent years, philosophers like Kevin Zollman have provided accounts of how individual knowers are affected by communication within their network of fellow knowers. Some of these studies demonstrate that the structure of communication within an epistemic network, that is, which beliefs, evidence, and testimony get shared and with whom, can affect both the conclusions an epistemic community settles on and what its members take to be rational.
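
To make the network idea concrete, here is a minimal sketch (in Python) of the kind of simulation social epistemologists run. It is only an illustration, not Zollman's actual model: agents repeatedly test one of two options, one of which is slightly better, and share their evidence only with their network neighbors. The agent counts, trial sizes, and update rule are all invented for the example.

```python
import random

def run(network, n_agents=10, rounds=300, trials=10, p_good=0.55, p_bad=0.5):
    # Each agent tracks (successes, failures) for two options, A and B.
    # Option B is genuinely better (p_good), but no agent knows that in advance.
    counts = [{"A": [1, 1], "B": [1, 1]} for _ in range(n_agents)]
    for _ in range(rounds):
        results = []
        for i in range(n_agents):
            # Test whichever option currently looks better to this agent.
            est = {k: s / (s + f) for k, (s, f) in counts[i].items()}
            choice = "B" if est["B"] >= est["A"] else "A"
            p = p_good if choice == "B" else p_bad
            successes = sum(random.random() < p for _ in range(trials))
            results.append((choice, successes, trials - successes))
        # Each agent updates on its own result and its neighbors' results.
        for i in range(n_agents):
            for j in network[i] | {i}:
                choice, s, f = results[j]
                counts[i][choice][0] += s
                counts[i][choice][1] += f
    # Share of agents who end up favoring the genuinely better option.
    return sum(
        1 for c in counts
        if c["B"][0] / sum(c["B"]) > c["A"][0] / sum(c["A"])
    ) / n_agents

def complete(n):  # everyone shares evidence with everyone
    return [set(range(n)) - {i} for i in range(n)]

def ring(n):      # each agent shares only with two neighbors
    return [{(i - 1) % n, (i + 1) % n} for i in range(n)]

if __name__ == "__main__":
    random.seed(1)
    for name, net in [("complete", complete(10)), ("ring", ring(10))]:
        avg = sum(run(net) for _ in range(20)) / 20
        print(f"{name} network: average share favoring the better option = {avg:.2f}")
```

Varying the network (complete versus ring) changes how quickly, and how uniformly, the group converges on an answer, which is the kind of structural effect described above.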

Once we factor in the ways that a handful of corporations are able to influence the communication of information in epistemic communities on the internet, a real concern emerges. Google and Facebook are responsible for roughly 70% of referral traffic on the internet, and the numbers vary by content category. Facebook is responsible for referring 87% of "lifestyle" content. Google is responsible for 84% of referrals of job postings. Facebook and Google together are responsible for 79% of referral traffic regarding the world economy. Internet searching is a common way of getting knowledge and information, and Google controls almost 90% of this field.

What this means is that a few companies are responsible for the communication of the incredibly large amounts of information, beliefs, and testimony shared by knowers all over the world. If we think about a global epistemic community, or even smaller sub-communities, learning and eventually knowing things through the referrals of services like Google or Facebook, then a few large corporations are capable of affecting what we are capable of knowing and what we will call knowledge. As Hughes noted in his criticism of Facebook, Mark Zuckerberg alone can decide how to configure Facebook's algorithms to determine what people see in their News Feed, what messages get delivered, and what constitutes violent and incendiary speech. If a person comes to adopt many or most of their beliefs because of what they are exposed to on Facebook, then Zuckerberg alone can significantly determine what that person can know.

A specific example of this kind of dominance is YouTube. When it comes to the online video hosting marketplace, YouTube holds a significantly larger share than competitors like Vimeo or Dailymotion. Content creators know this all too well: YouTube's policies on content and monetization have led many on the platform to lament the lack of competition. YouTube creators are often confused about why certain videos get demonetized, what is and is not acceptable content, and what standards should be followed. In recent weeks, the demonetization of history-focused channels has been particularly notable. For example, a channel devoted to the history of the First World War had over 200 videos demonetized. Many of these channels have had to begin censoring themselves based on what they think is not allowed. So, history channels have started censoring words that would be totally acceptable on network television.

The problem isn't merely one of monetization, either. If a video is demonetized, it will no longer be promoted and recommended by YouTube's algorithm. Thus, if you wish to learn something about history on YouTube, Google is going to play a large role in determining who gets to learn what. This can affect the ways that people evaluate information on these (sometimes controversial) topics and thus what epistemic communities will call knowledge. Some of these content creators have begun looking for alternatives to YouTube because of these issues; however, it remains to be seen whether those alternatives will offer a real source of competition. In the meantime, much of the information that gets referred to us comes from a select few companies. These voices have significant influence (intentionally or not) over what we as an epistemic community come to know or believe.

This makes the issue of competition an epistemic issue, but it is also inherently a moral one. This is because, as a global society, we are capable of regulating, in one way or another, the ways in which corporations can impact our lives. This raises an important moral question: is it morally acceptable for a select few companies to determine what constitutes knowledge? Having information referred to us by corporations provides the opportunity for some to benefit over others, and we as a global society will have to determine whether we are okay with the significant influence they wield.

Forbidden Knowledge in Scientific Research

closeup photograph of lock on gate with iron chain

It is no secret that science has the potential to have a profound effect on society. This is often why scientific results can be so ethically controversial. For instance, researchers have recently warned of the ethical problems associated with scientists growing lumps of human brain in the laboratory. The blobs of brain tissue grown from stem cells developed spontaneous brain waves like those found in premature babies. The hope is that the study offers the potential to better understand neurological disorders like Alzheimer's, but it also raises a host of ethical worries concerning the possibility that this brain tissue could reach sentience. In other news, this week a publication in the journal JAMA Pediatrics ignited controversy by reporting a supposed link between fluoride exposure and IQ scores in young children. In addition to several experts questioning the results of the study itself, there is also concern about the potential effect this could have on the debate over the use of fluoride in the water supply; anti-fluoride activists have already jumped on the study to defend their cause. Scientific findings have an enormous potential to dramatically affect our lives. This raises an ethical issue: should certain topics, owing to their ethical implications, be off-limits for scientific study?

This question is studied in both science and philosophy, and is sometimes referred to as the problem of forbidden knowledge. The problem can include issues of experimental methods and whether they follow proper ethical protocols (certain knowledge may be forbidden if obtaining it requires unethical human experimentation), but it can also include the impact that the discovery or dissemination of certain kinds of knowledge could have on society. For example, a recent study found that girls and boys are equally good at mathematics and that children's brains function similarly regardless of gender. However, there have been several studies going back decades which tried to explain differences between mathematical abilities in boys and girls in terms of biological differences. Such studies have the possibility of reinforcing gender roles and potentially justifying them as biologically determined. This has the potential to spill over into social interactions. For instance, Helen Longino notes that such findings could lead to lower priority being placed on encouraging women to enter math and science.

So, such studies have the potential to impact society, which is an ethical concern, but is this reason enough to make them forbidden? Not necessarily. The bigger problem involves how adequate these findings are, the concern that they could be incorrect, and what society is to do about that until corrected findings are published. For example, in the case of math testing, it is not that difficult to find significant correlations between variables, but the limits of those correlations and the study's ability to identify causal factors are often lost on the public. There are also methodological problems; some standardized tests rely on male-centric questions that can skew results, and different kinds of tests and different strategies for preparing for them can also distort findings. So even when correlations are found and the study's assumptions are not deeply flawed, the results may not be very generalizable. In the meantime, such findings, even if they are corrected over time, can create stereotypes in the public that are hard to get rid of.

Because of these concerns, some philosophers argue either that certain kinds of questions should be banned from study, or that studies should avoid trying to explain differences in abilities and outcomes according to race or sex. For instance, Janet Kourany argues that scientists have moral responsibilities to the public and should thus conduct themselves according to egalitarian standards. If a scientist wants to investigate differences between racial or gender groups, they should seek to explain these differences without assuming that they are biologically determined.

In one of her examples, she discusses studying differences in the incidence of domestic violence between white and black communities. A scientist should highlight similarities of domestic violence within white and black communities and seek to explain dissimilarities in terms of social issues like racism or poverty. With a stance like this, research into racial differences explaining differences in rates of domestic violence would thus constitute forbidden knowledge. Only if these alternative egalitarian explanations empirically fail can a scientist then choose to explore race as a possible explanation of differences between communities. By doing so, the scientist avoids perpetuating a possibly empirically flawed account suggesting that black people might be more violent than other ethnic groups.

She points out that the alternative risks keeping stereotypes alive even while scientists slowly prove them wrong. Just as in the case of studying mathematical differences, the slow settlement of opinion within the scientific community leaves society free to entertain stereotypes as "scientifically plausible" and adopt potentially harmful policies in the meantime. In his research on the matter, Philip Kitcher notes that we are susceptible to instances of cognitive asymmetry in which it takes far less empirical evidence to maintain stereotypical beliefs than it takes to get rid of them. This is why studying the truth of such stereotypes can be so problematic.

These types of cases seem to offer significant support to labeling particular lines of scientific inquiry forbidden. But the issue is more complicated. First, telling scientists what they should and should not study raises concerns over freedom of speech and freedom of research. We already acknowledge limits on research on the basis of ethical concerns, but this represents a different kind of restriction. One might claim that so long as science is publicly funded, there are reasonable democratically justified limits of research, but the precise boundaries of this restriction will prove difficult to identify.

Secondly, and perhaps more importantly, such a policy has the potential to exacerbate the problem. According to Kitcher,

“In a world where (for example) research into race differences in I.Q. is banned, the residues of belief in the inferiority of the members of certain races are reinforced by the idea that official ideology has stepped in to conceal an uncomfortable truth. Prejudice can be buttressed as those who opposed the ban proclaim themselves to be the gallant heirs of Galileo.”

In other words, one reaction to such bans on forbidden knowledge, so long as our own cognitive asymmetries are unknown to us, will be to object that this is an undue limitation on free speech for the sake of politics. In the meantime, those who push for such research can become martyrs, and censoring them may only serve to draw more attention to their cause.

This obviously presents us with an ethical dilemma. Given that there are scientific research projects that could have a potentially harmful effect on society, whether the science involved is adequate or not, is it wise to ban such projects as forbidden knowledge? There are reasons to say yes, but implementing such bans may cause more harm or draw more public attention to such issues. Banning research on the development of brain tissue from stem cells, for example, may be wise, but it may also cause such research to move to another country with more relaxed ethical standards, meaning that the potential harms could be much worse. These issues surrounding how science and society relate are likely only going to be resolved with greater public education and open discussion about what ethical responsibilities we think scientists should have.

When Your Will Is Not Enough: Ethical Restrictions on Entering into Agreements

CRISPR image

A 43-year-old with a deadly skin cancer is asking doctors to use the recent developments in CRISPR to experiment with treatments that may help him as well as advance medical understanding. Malakkar Vohryzek is offering to be a test subject, contacting a number of researchers and doctors asking if they would be interested in modifying his genetic code. Such treatment falls well outside approved parameters for human exposure to risk with the gene-editing technology, but the potential patient seems to be providing straightforward consent. In medicine and law, however, consent is often not enough. Currently the international scientific community remains critical of the researchers in China who edited the genes of twin children last year, saying that such interference was premature and that the understanding of CRISPR and its impact on human subjects was not advanced enough for such research (for discussion see A.G. Holdier's "Lulu and Nana: The Surprise of Genetically-Modified Humans"). Vohryzek's case is interesting, though, because with a terminal illness and a clearly expressed desire, why stick to standards that aim to promote and protect a subject's welfare? If Vohryzek is willing to risk his health (what is left of it given his illness), why should doctors and researchers hesitate to proceed?

The ethics surrounding agreements or contracts incorporate a number of dimensions of our agency and the way we relate to one another. These standards attempt to take seriously the import of being able to direct one’s own life and the significance of the harm of manipulating the lives of others.

Paternalism is the term used to describe efforts to promote others' best interests when those actions run counter to their expressed wishes. In such cases, someone believes that if a person's will were effective, it wouldn't promote what is in their best interests, and therefore interference is justified. The standard case of paternalism is that of a parent who overrules the will of a child. Say, for example, a 5-year-old wants ice cream for dinner but a parent disregards this preference and instead makes the child eat a nutritious meal, believing that this will be better for the child. Typically, we think parents are morally justified in disregarding the child's expressed preferences in circumstances like these. But when, and under what circumstances, paternalism can be justified outside of these clear-cut parent-child cases is much less clear. In Vohryzek's case, there is something paternalistic about not prioritizing the autonomous choice he is communicating. In general, regulatory standards are meant to promote subjects' welfare and interests, but Vohryzek isn't a child, so what countervailing reasons apply here?

One class of cases where paternalistic interference is typically considered justified is where there isn't a clear expression of an agent's will to interfere with in the first place. We may interpret the parent-child case in this way: a child hasn't developed their full autonomous capabilities, therefore superseding their expressions of will when they run counter to their best interests doesn't seem as problematic as thwarting the will of a fully autonomous, mature adult. Vohryzek, and other patients facing terminal prognoses who knowingly choose to expose themselves to risk, seem to be in a different class than those whose illness or condition of life diminishes their autonomy.

One barrier to truly just agreements is an unethical power dynamic founded on asymmetric information. For instance, if one party uses legal understanding and jargon to obscure the stakes and conditions of an agreement so that the other party can’t fully weigh the possible outcomes that they are agreeing to, this is intuitively not a fair case of agreement. These concerns are relevant in many legal contracts, for instance in end-user license agreements that consumers accept in order to use apps and software.

Another arena where there is often an asymmetry of technical understanding is in physician-patient exchanges (for discussion see Tucker Sechrest’s “The Inherent Conflict in Informed Consent”). In order to get informed consent from patients, physicians must communicate effectively about diagnoses, potential treatment options, as well as their outcomes and likely effects to patients who frequently do not have the breadth of understanding that the physician possesses. If a doctor does not ensure that the patient comprehends the stakes of the treatment choices, the patient may enter into agreements that do not reflect their priorities, preferences, and values. This asymmetric understanding is also the ethically problematic dimension of predatory lending, “the practice of a lender deceptively convincing borrowers to agree to unfair and abusive loan terms, or systematically violating those terms in ways that make it difficult for the borrower to defend against.”

But there remain further ethical considerations even when mutual understanding can be assured. It's true that only when both parties to an agreement have a full grasp of the stakes and possible outcomes of the agreement is there the potential for each to weigh this information against their preferences, priorities, and values in order to determine whether the agreement is right for them. However, this doesn't exhaust all ethical dimensions of making agreements. We could imagine the 43-year-old patient seeking unapproved CRISPR treatments to be in such a position: he might understand the risks and not be mistaken about how the facts of the matter relate to his particular values, preferences, and priorities. What ethical reservations are left?

Exploitation refers to a type of advantage-taking that is ethically problematic. Consider a case where an individual with little money is offered $500 in exchange for taking part in medical research. It could be the case that this is the "right" choice for them: the $500 is sorely needed, say to maintain access to shelter and food, the risk involved in the medical research is processed and understood clearly, and the person determines that shelter and food outweigh the risk. In such cases, the ethical issue isn't that a person may be entering agreements without understanding or against their best interests. Indeed, this individual is making the best choice in their circumstances. However, the structure of the choice itself may be problematic. The financial incentive for taking on unknown risk of bodily harm is a thorny question in bioethics because of the potentially exploitative relationship it sets up. When financial incentives are in place, the disadvantaged portion of a population will bear the brunt of the risk of medical research.

In order to avoid exploitation, there are regulatory standards for the kinds of exchanges that are permissible when exposing one's body to risk of unknown harm, as in medical research. There are high standards for such research in terms of likelihood of scientific validity; the hypothesized outcome can't just be an informed "guess," for instance. Vohryzek likely won't find a researcher who will agree to run experiments on him, for fear that terminal patients, in general, will become vulnerable to experimentation. As a practice, this may be ethically problematic because patients are a vulnerable population and this vulnerability may be exploited; the ethical constraint on agreements can be a concern even when making the agreement may be both in the individual's best interest and satisfying their will.

This, of course, leads to tensions and controversy. Should Vohryzek and others in similar positions be able to use their tenuous prognosis for scientific gain? "If I die of melanoma, it won't help anyone," he said. "If I die because of an experimental treatment, it will at least help science."

Sparking Joy: The Ethics of Medically-Induced Happiness

Photograph of a sunflower in sunshine with blue sky behind

Happiness is often viewed as an ephemeral thing. Finding happiness is an individual and ever-developing process. Biologically speaking, however, all emotions are the simple result of hormones and electrical impulses. In a recent medical breakthrough, a team of scientists has found a way to tap into these electrical impulses and induce joy directly in the brain. This kind of procedure has long been the stuff of speculation, but now it has become a reality. While the technique shows a good deal of promise in treating disorders such as depression and post-traumatic stress, it also presents an ethical conundrum worth considering.

On initial examination, it is difficult to point out anything particularly wrong with causing "artificial" joy. Ethical hedonism would prioritize happiness over all other values, regardless of the manner in which happiness is arrived at. However, many people would experience a knee-jerk rejection of the procedure. It bears some similarity to drug-induced euphoria, but unlike illicit drugs, this electrical procedure seems to have no harmful side effects, according to the published study. Of course, with a small sample size and a relatively short-term trial, addiction and other harmful aspects of the procedure may be yet undiscovered. If, as this initial study suggests, the procedure is risk-free, should it be ethically accepted? Or is there cause for hesitation beyond what is overtly harmful?

The possibility of instantaneous, over-the-counter happiness has been a frequent subject of science-fiction. Notable examples include Aldous Huxley’s Brave New World, which featured a happiness-inducing drug called “soma”; and Philip K. Dick’s Do Androids Dream of Electric Sheep? (later adapted into the film Blade Runner), which included a mood-altering device called a “mood organ.” Both novels treat these inventions as key elements in a dystopian future. Because the emotions produced by these devices are “false”—the direct result of chemical alteration, rather than a “natural” response to external conditions—the society which revolves around them is empty and void of meaning. What is the validity of this viewpoint? Our bias towards what we perceive as “natural” may be simply a matter of maintaining the status quo–we’re more comfortable with whatever we’re used to. This is similar to the preference for foods containing “natural” over “artificial” flavoring despite nearly identical chemical compositions. While we are instinctively wary of the “artificial” emotions, there may be no substantive difference to the unbiased feeler.

Of course, emotions exist for more than just the experience of feeling. The connection between emotions and the outside world was addressed by Kelly Bijanki, one of the scientists involved in the electrically-induced happiness study, in her interview with Discover Magazine: “Our emotions exist for a very specific purpose, to help us understand our world, and they’ve evolved to help us have a cognitive shortcut for what’s good for us and what’s bad for us.” Just as pain helps us avoid dangerous hazards and our ability to taste bitterness helps us avoid poisonous things, negative emotions help drive us away from harmful situations and towards beneficial ones. However, living in a modern society to which the human body is not biologically adapted, our normally helpful sensory responses like pain and fear can sometimes backfire. Some people experience chronic pain connected to a bodily condition that cannot be immediately resolved; in these cases, the pain itself becomes the problem, rather than a useful signal. As such, we seek medical solutions to the pain itself. Chronic unhappiness, such as in cases of anxiety and depression, could be considered the same way: as a normally useful sensory feedback which has “gone wrong” and itself become a problem requiring medical treatment.

What if the use of electrically-induced happiness extended beyond temporary medical treatments? Why shouldn’t we opt to live our lives in a state of perpetual euphoria, or at least have the option to control our emotions directly? As was previously mentioned, artificial happiness may be indistinguishable from the real thing, at least as far as our bodies are concerned. Human beings already use a wide variety of chemicals and actions to “induce” happiness–that is, to make ourselves happy. If eating chocolate or exercising are “natural” paths to happiness, why would an electrical jolt be “unnatural”? Of course, the question of meaning still bears on the issue. Robert Nozick argues that humans make a qualitative distinction between the experience of doing something and actually doing it. We want our happiness to be tied to real accomplishments; the emotion alone isn’t enough. More concretely, we would probably become desensitized to happiness if it were all we experienced. In the right doses, sadness helps us value happiness more; occasional pain makes our pleasure more precious.

If happiness in the absence of meaning is truly “empty,” our ethical outlook toward happiness should reflect this view. Rather than viewing pleasure or happiness itself as the ultimate good, we might instead see happiness as a component of a well-lived life. Whether something is good would depend not on whether it brings happiness, but whether it fulfills some wider sense of meaning. Of course, exactly what constitutes this wider meaning would continue to be the subject of endless philosophical debate.

Facing the Synthetic Age with Christopher Preston

We're in an age known as the Anthropocene, an era in which humans have been the dominant force on earth. We've impacted the climate, we've shaped the land, and in recent years we've made changes at the atomic and genetic levels. On this podcast, the philosopher Christopher Preston shares insights from his book The Synthetic Age, which explores the ethics of technologies that have the potential to radically reshape the world. We're attempting to cool the surface of the earth by brightening clouds. We can introduce traits into wild species through gene drives and create entirely new organisms in the lab. While these new technologies are interesting and, in many cases, potentially helpful, Christopher writes that we need to see them for what they are: a "deliberate shaping" of the earth and the organisms in it. He wants us to think carefully about what it might mean for humans to live in a world that they have intentionally manipulated.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

 

  1. Christopher Preston, The Synthetic Age
  2. Explaining the Anthropocene
  3. “We have 12 years to limit climate change catastrophe”
  4. Synthetic biology at the Venter Institute
  5. Malaria is a public health crisis
  6. Gene drives, mosquitoes and malaria
  7. More on living in a post-wild world
  8. 2015 fatality at Yellowstone National Park
  9. “Just 90 companies caused two-thirds of man-made global warming emissions”
  10. Joel Reynolds

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

"Zeppelin" by Blue Dot Sessions

"Soothe" by Blue Dot Sessions

"A Certain Lightness" by Blue Dot Sessions

"Heliotrope" by Blue Dot Sessions

On Gene Editing, Disease, and Disability

Photo of a piece of paper showing base pairs

On November 29, 2018, MIT Tech Review reported that at Harvard University's Stem Cell Institute, "IVF doctor and scientist Werner Neuhausser says he plans to begin using CRISPR, the gene-editing tool, to change the DNA code inside sperm cells." This is the first stage toward gene-editing embryos, which is itself a controversial goal, given the debates that arose in response to scientists in China making edits at more advanced stages of fetal development.

Frequently the concern over editing human genes involves issues of justice, such as developing the unchecked power to produce humans who would exist solely to serve some population (for example, organ farming). The moral standing of clones and worries over the dignity of humanity when such power is developed get worked over whenever a new advancement in gene editing is announced.

The less controversial use of our growing control over genetic offspring is its potential to cure diseases and improve the quality of life for a number of people. However, this use of genetic intervention may not be as morally unambiguous as it seems at first glance.

Since advanced genetic testing was developed, the debate about the moral status of selective abortion has been fraught. Setting aside the ethics of abortion itself, would choosing to bring into the world a child who does not have a particular illness, syndrome, or condition, rather than one who does, be an ethical thing for a parent to do? Ethicists are divided.

Some are concerned with the expressive power of such a decision – does making this selection express prejudice against those with the condition or a judgment about the quality of the life that individuals living with the condition experience?

Others are concerned with the practical implications of many people making selections for children without some conditions. It is impractical to imagine that widespread use of such selection would completely eradicate the conditions, and therefore one worry is that individuals with those conditions, in a hypothetical society where widespread selection takes place, would be further stigmatized, rendered invisible, or left with fewer resources. Also, prejudice against conditions that involve disability might lead to selections, based on misunderstandings of quality of life, that reduce the diversity of the human population.

Of course, on the other side of these discussions is the intuitive preference or obligation for parents or those in charge of raising people in society to promote health and well-being. Medicine is traditionally thought to aim at treating and preventing conditions that deviate from health and wellness; both are complex concepts, to be sure, but preventing disease or creating a society that suffers less from disease seems to fall within the domain of appropriate medical intervention.

How does this advancement in gene editing relate to the debate over selective birth? The Harvard example seeks to prevent Alzheimer's disease, taking sperm and intervening to prevent disease. Lack of human diversity, pernicious ableist expressive power, and negative impact on those who suffer from the disease are the main concerns with intervening for the purported sake of health.

The Persistent Problem of the Fair Algorithm

photograph of a keyboard and screen displaying code

At first glance, it might appear that the mechanical procedures we use to accomplish such mundane tasks as loan approval, medical triage, actuarial assessment, and employment screening are innocuous. Designing algorithms to process large chunks of data and transform various individual data points into a single output offers great power in streamlining necessary but burdensome work. Algorithms advise us about how we should read the data and how we should respond. In some cases, they even decide the matter for us.

It isn’t simply that these automated processes are more efficient than humans at performing these computations (emphasizing the relevant data points, removing statistical outliers and anomalies, and weighing competing factors). Algorithms also hold the promise of removing human error from the equation. A recent study, for example, has identified a tendency for judges on parole boards to become less and less lenient in their sentencing as the day wears on. By removing extraneous elements like these from the decision-making process, an algorithm might be better positioned to deliver true justice.

Similarly, another study established the general superiority of mechanical prediction to clinical prediction in various settings from medicine to mental health to education. Humans were most notably outperformed when a one-on-one interview was conducted. These findings reinforce the position that algorithms should augment (or perhaps even replace) human decision-making, which is often plagued by prejudice and swayed by sentiment.

But despite their great promise, algorithms carry a number of concerns. Chief among these are problems of bias and transparency. Often seen as free from bias, algorithms stand as neutral arbiters, capable of combating long-standing inequalities such as the gender pay-gap or unequal sentencing for minority offenders. But automated tools can just as easily preserve and fortify existing inequalities when introduced to an already discriminatory system. Algorithms used in assigning bond amounts and sentencing underestimated the risk of white defendants while overestimating that of Black defendants. Popular image-recognition software reflects significant gender bias. Such processes mirror and thus reinforce extant social bias. The algorithm simply tracks, learns, and then reproduces the patterns that it sees.

Bias can be the result of a non-representative sample size that is too small or too homogenous. But bias can also be the consequence of the kind of data that the algorithm draws on to make its inferences. While discrimination laws are designed to restrict the use of protected categories like age, race, or sex, an algorithm might learn to use a proxy, like zip codes, that produces equally skewed outcomes.
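
As a hypothetical illustration of how a proxy works, the short Python sketch below uses entirely made-up numbers: the decision rule never touches the protected attribute, approving applicants purely on their zip code's historical default rate, yet because zip code correlates with group membership in the synthetic data, approval rates still diverge by group.

```python
import random

random.seed(0)

# Synthetic population: zip code correlates with group membership, and
# historical default rates differ by zip code for reasons the rule never sees.
ZIPS = {
    "10001": {"group_a_share": 0.8, "default_rate": 0.05},
    "10002": {"group_a_share": 0.2, "default_rate": 0.15},
}

applicants = []
for zip_code, info in ZIPS.items():
    for _ in range(5000):
        group = "A" if random.random() < info["group_a_share"] else "B"
        applicants.append({"zip": zip_code, "group": group})

def approve(applicant):
    # "Neutral-looking" rule: approve anyone from a low-default zip code.
    # The protected attribute is never consulted; zip code acts as its proxy.
    return ZIPS[applicant["zip"]]["default_rate"] < 0.10

for group in ("A", "B"):
    members = [a for a in applicants if a["group"] == group]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"approval rate, group {group}: {rate:.2f}")
```

Nothing in the rule mentions group membership; the skew in approval rates comes entirely from what zip code happens to track.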

Similarly, predictive policing — which uses algorithms to predict where a crime is likely to occur and determine how to best deploy police resources — has been criticized as “enabl[ing], or even justify[ing], a high-tech version of racial profiling.” Predictive policing creates risk profiles for individuals on the basis of age, employment history, and social affiliations, but it also creates risk profiles for locations. Feeding the algorithm information which is itself race- and class-based creates a self-fulfilling prophecy whereby continued investigation of Black citizens in urban areas leads to a disproportionate number of arrests. A related worry is that tying police patrol to areas with the highest incidence of reported crime grants less police protection to neighborhoods with large immigrant populations, as foreign-born citizens and non-US citizens are less likely to report crimes.

These concerns of discrimination and bias are further complicated by issues of transparency. The very function the algorithm was meant to serve — computing multiple variables in a way that surpasses human ability — inhibits oversight. It is the algorithm itself which determines how best to model the data and what weights to attach to which factors. The complexity of the computation as well as the use of unsupervised learning — where the algorithm processes data autonomously, as opposed to receiving labelled inputs from a designer — may mean that the human operator cannot parse the algorithm’s rationale and that it will always remain opaque. Given the impenetrable nature of the decision-mechanism, it will be difficult to determine when predictions objectionably rely on group affiliation to render verdicts and who should be accountable when they do.

Related to these concerns of oversight are questions of justification: What are we owed in terms of an explanation when we are denied bail, declined for a loan, refused admission to a university, or passed over for a job interview? How much should an algorithm’s owner need to be able to say to justify the algorithm’s decision and what do we have a right to know? One suggestion is that individuals are owed “counterfactual explanations” which highlight the relevant data points that led to the determination and offer ways in which one might change the decision. While this justification would offer recourse, it would not reveal the relative weights the algorithm places on the data nor would a justification be offered for which data points an algorithm considers relevant.
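
As a sketch of what a counterfactual explanation does and does not reveal, the Python snippet below uses an invented linear credit score standing in for a lender's opaque model; the feature names, weights, and threshold are all hypothetical. It reports the smallest change to a single feature that would have flipped a denial into an approval, without disclosing the model's internal weights.

```python
# An invented linear scoring rule standing in for an opaque lending model.
WEIGHTS = {"income": 0.004, "debt": -0.01, "years_employed": 1.5}
THRESHOLD = 100.0

def score(applicant):
    return sum(WEIGHTS[name] * applicant[name] for name in WEIGHTS)

def counterfactual(applicant, feature):
    """Smallest change to one feature that would push the score over the threshold."""
    gap = THRESHOLD - score(applicant)
    if gap <= 0:
        return 0.0  # already approved, no change needed
    return gap / WEIGHTS[feature]

applicant = {"income": 20000, "debt": 500, "years_employed": 4}
decision = "approved" if score(applicant) >= THRESHOLD else "denied"
print(f"Decision: {decision}")

delta = counterfactual(applicant, "income")
print(f"Counterfactual: an income higher by about ${delta:,.0f} would have led to approval.")
```

The applicant learns one concrete path to a different outcome, but, as noted above, nothing about how income trades off against debt inside the model or why those features were deemed relevant in the first place.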

These problems concerning discrimination and transparency share a common root. At bottom, there is no mechanical procedure which would generate an objective standard of fairness. Invariably, the determination of that standard will require the deliberate assignment of different weights to competing moral values: What does it mean to treat like cases alike? To what extent should group membership determine one's treatment? How should we balance public good and individual privacy? Public safety and discrimination? Utility and individual rights?

In the end, our use of algorithms cannot sidestep the task of defining fairness. It cannot resolve these difficult questions, and is not a surrogate for public discourse and debate.

Exploring Intellectual Property Rights with Adam Moore

We all interact with intellectual property on a daily basis, whether consciously or not. On this episode, we talk to intellectual property expert and philosopher Adam Moore to learn about some of the most important ethical issues related to intellectual property. Then, independent producer Sandra Bertin brings us the fascinating story of a fight for collective intellectual property rights in Guatemala.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Adam Moore’s body of work on intellectual property
  2. Intellectual property
  3. You only have legal protection over intellectual property that is fixed in physical form
  4. Some justifications for intellectual property:
  5. Common objections to intellectual property:
  6. Some more objections to intellectual property
  7. Copyright Act of 1976 (term of protection)
  8. Independent producer Sandra Bertin
  9. More on the National Movement of Mayan Weavers
  10. More on Angelina Aspuac

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

"The Zeppelin" by Blue Dot Sessions

Clips from "A Comic's Life Radio" (originally aired on KCAA in Loma Linda, CA Friday, January 22, 2016.)

"Are We Loose Yet" by Blue Dot Sessions (sections of this song have been looped)

"Lakeside Path" by Blue Dot Sessions

"Great Great Lengths" by Blue Dot Sessions

 

“Minibrains” and the Future of Drug Testing

Image of a scientist swabbing a petri dish.

 NPR recently reported on the efforts of scientists who are growing small and “extremely rudimentary versions of an actual human brain” by transforming human skin cells into neural stem cells and letting them grow into structures like those found in the human brain. These tissues are called cerebral organoids but are more popularly known as “minibrains.” While this may all sound like science fiction, their use has already led to new discoveries in the medical sciences.

The impetus for developing cerebral organoids comes from the difficult situation imposed on research into brain diseases. It is difficult to model complex conditions like autism and schizophrenia using the brains of mice and other animals. Yet, there are also obvious ethical obstacles to experimenting on live human subjects. Cerebral organoids provide a way out of this trap because they present models more akin to the human brain. Already, they have led to notable advances. Cerebral organoids were used in research into how the Zika virus disrupts normal brain development. The potential to use cerebral organoids to test future therapies for such conditions as schizophrenia, autism, and Alzheimer’s Disease seems quite promising.

The experimental use of cerebral organoids is still quite new; the first ones were successfully developed in 2013. As such, it is the right time to begin serious reflection on the potential ethical hurdles for research conducted on cerebral organoids. To that end, a group of ethicists, law professors, biologists, and neuroscientists recently published a commentary in Nature on the ethics of minibrains.

The commentary raises many interesting issues. Let us consider just three:

The prospect of conscious cerebral organoids

Thus far, the cerebral organoids experimented upon have been roughly the size of peas. According to the Nature commentary, they lack certain cell types, receive sensory input only in primitive form, and have limited connection between brain regions. Yet, there do not appear to be insurmountable hurdles to advances that will allow us to scale these organoids up into larger and more complex neural structures. As the brain is the seat of consciousness, scaled-up organoids may rise to the level of such sensitivity to external stimuli that it may be proper to ascribe consciousness to them. Conscious organisms sensitive to external stimuli can likely experience negative and positive sensations. Such beings have welfare interests. Whether we had ethical obligations to these organoids prior to the onset of feelings, it would be difficult to deny such obligations to them once they achieve this state. Bioethicists and medical researchers ought to develop principles to govern these obligations. They may be able to model them after our current approaches to research obligations regarding animal test subjects. However, it is likely the biological affinity between cerebral organoids and human beings will require significant departure from the animal test subject model.

Additionally, research into consciousness has not yet nailed down the neural correlates of consciousness. As such, we may not know whether a particularly advanced cerebral organoid is likely to be conscious. Either we ought to purposefully slow progress in developing complex cerebral organoids until we understand consciousness better, or we ought to pre-emptively treat organoids as beings deserving moral consideration so that we don't accidentally mistreat an organoid we incorrectly identify as non-conscious.

Human-animal blurring

Cerebral organoids have also been developed in the brains of other animals. This gives the brain cells a more “physiologically natural” environment. According to the Nature commentary, cerebral organoids have been transplanted into mice and have become vascularized in the process. Such vascularization is an important step in the further development in size and complexity of cerebral organoids.

There appears to be a general aversion to the prospect of transplanting human minibrains into mice. Many perceive the creation of such human-animal hybrids (chimeras) as crossing an inviolable boundary between species. The transplantation of any cells of one animal into another, especially those of a human (and even more so the brain cells of a human), may violate this sacred boundary.

An earlier entry on The Prindle Post approached the vexing issues of the creation of human-animal chimeras. It appeared that much of the opposition to chimeras was based in part on an objection to “playing God.” Though some have ridiculed the “playing God” argument as based on “a meaningless, dangerous cliché,” people’s strong intuitions against the blurring of species boundaries ought to influence policies put in place to govern such research. If anything, this will help tamp down a strong public backlash.

Changing definitions of death

Cerebral organoids may also threaten the scientific and legal consensus that defines death as the permanent cessation of organismic functioning and takes the criterion for this in humans to be the cessation of functioning in the whole brain. This consensus itself developed in response to emerging technologies in the 1950s and 1960s that enabled doctors to maintain the functioning of a person's cardio-pulmonary system after their brain had ceased functioning. Because of this technological change, the criterion of death could no longer be the stopping of the heart. What if research into cerebral organoids and stem cell biology enables us to restore some functions of the brain to a person already declared brain dead? This would undercut the notion that brain death is permanent and may force us to revisit the consensus on death once again.

Minibrains raise many other ethical issues not considered in this brief post. How should medical researchers obtain consent from the human beings who donate cells that are eventually turned into cerebral organoids? Will cerebral organoids who develop feelings need to be appointed legally empowered guardians to look after their interests? Who is the rightful owner of these minibrains? Let us get in front of these ethical questions before science sets its own path.