Phantom Patterns and Online Misinformation with Megan Fritts

We take in massive amounts of information on a daily basis. Our brains use something called pattern-recognition to try and sort through and make sense of this information. My guest today, the philosopher Megan Fritts, argues that in many cases, the stories we tell ourselves about the patterns we see aren’t actually all that meaningful. And worse, these so-called phantom patterns can amplify the problem of misinformation.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com.

Links to people and ideas mentioned in the show

  1. “Online Misinformation and ‘Phantom Patterns’: Epistemic Exploitation in the Era of Big Data” by Megan Fritts and Frank Cabrera
  2. The Right to Know by Lani Watson
  3. Definition of the term “epistemic”
  4. Section 230 of the Communications Decency Act

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Golden Grass” by Blue Dot Sessions

“Pintle 1 Min” by Blue Dot Sessions

 

AI and Pure Science

Pixelated image of a man's head and shoulders made up of pink and purple squares

In September 2019, four researchers wrote to the academic publisher Wiley to request that it retract a scientific paper relating to facial recognition technology. The request was made not because the research was wrong or reflected bad methodology, but rather because of how the technology was likely to be used. The paper discussed the process by which algorithms were trained to detect faces of Uyghur people, a Muslim minority group in China. While researchers believed publishing the paper presented an ethical problem, Wiley defended the article noting that it was about a specific technology, not about the application of that technology. This event raises a number of important questions, but, in particular, it demands that we consider whether there is an ethical boundary between pure science and applied science when it comes to AI development – that is, whether we can so cleanly separate knowledge from use as Wiley suggested.

The 2019 article, published in the journal WIREs Data Mining and Knowledge Discovery, reported the research team’s work on ethnic-group facial recognition, which drew on datasets of Chinese Uyghur, Tibetan, and Korean students at Dalian University. In response, a number of researchers, disturbed that academics would try to build such algorithms, called for the article to be retracted. China has been condemned for its heavy surveillance and mass detention of Uyghurs, and some scientists claim that this study, along with a number of others, is helping to facilitate the development of technology that makes such surveillance and oppression more effective. As Richard Van Noorden reports, there has been a growing push by some scientists for the scientific community to take a firmer stance against unethical facial-recognition research, denouncing not only controversial uses of the technology but also its research foundations. They call on researchers to avoid working with firms or universities linked to unethical projects.

For its part, Wiley has defended the article, noting, “We are aware of the persecution of the Uyghur communities … However, this article is about a specific technology and not an application of that technology.” In other words, Wiley seems to be adopting an ethical position based on the long-held distinction between pure and applied science. The distinction is old, tracing back to the time of Francis Bacon, when it formed part of a compromise between the state and scientists. As Robert Proctor reports, “the founders of the first scientific societies promised to ignore moral concerns” in return for funding and freedom of inquiry, on the condition that science keep out of political and religious matters. In keeping with Bacon’s urging that we pursue science “for its own sake,” many began to distinguish “pure” science, interested in knowledge and truth for their own sake, from applied science, which uses engineering to put scientific knowledge to work in securing various social goods.

In the 20th century the division between pure and applied science was used as a rallying cry for scientific freedom and to avoid “politicizing science.” This took place against a historical backdrop of chemists facilitating great suffering in World War I, followed by physicists facilitating much more suffering in World War II. Maintaining the political neutrality of science was thought to make it more objective by ensuring value-freedom. The notion that science requires freedom was touted by well-known physicists like Percy Bridgman, who argued:

The challenge to the understanding of nature is a challenge to the utmost capacity in us. In accepting the challenge, man can dare to accept no handicaps. That is the reason that scientific freedom is essential and that artificial limitations of tools or subject matter are unthinkable.

For Bridgman, science just wasn’t science unless it was pure. He explains, “Popular usage lumps under the single word ‘science’ all the technological activities of engineering and industrial development, together with those of so-called ‘pure science.’ It would clarify matters to reserve the word science for ‘pure’ science.” For Bridgman it is society that must decide how to use a discovery rather than the discoverer, and thus it is society’s responsibility to determine how to use pure science rather than the scientists’. Wiley’s argument seems to echo Bridgman’s: there is nothing wrong with developing facial recognition technology in and of itself; if China wishes to use that technology to oppress people, that’s China’s problem.

On the other hand, many have argued that the supposed distinction between pure and applied science is not ethically sustainable. Indeed, many such arguments were driven by the reaction to the uses of science during the world wars. Janet Kourany, for example, has argued that science and scientists have moral responsibilities because of the harms that science has caused, because science is supported through taxes and consumer spending, and because society is shaped by science. Heather Douglas has argued that scientists shoulder the same moral responsibilities as the rest of us not to engage in reckless or negligent research, and that, due to the highly technical nature of the field, it is not reasonable for the rest of society to carry those responsibilities for scientists. While the kind of pure knowledge that Bridgman or Bacon favored has value, that value needs to be weighed against other goods like basic human rights, quality of life, and environmental health.

In other words, the distinction between pure and applied science is ethically problematic. As John Dewey argues, the distinction is a sham because science is always connected to human concerns. He notes:

It is an incident of human history, and a rather appalling incident, that applied science has been so largely made equivalent for use for private and economic class purposes and privileges. When inquiry is narrowed by such motivation or interest, the consequence is in so far disastrous both to science and to human life.

Perhaps this is why many scientists do not accept Wiley’s argument for refusing retraction; discovery doesn’t happen in a vacuum. It isn’t as if we don’t know why the Chinese government has an interest in this technology. So, at what point does such research become morally reckless given the very likely consequences?

This is also why debate around this case has centered on the issue of informed consent. Critics charge that the Uyghur students who participated in the study were likely not fully informed of its purposes and thus could not provide truly informed consent. The fact that informed consent is relevant at all, which Wiley admits, seems to undermine its entire argument, since informed consent in this case is explicitly tied to how the technology will be used. If informed consent is ethically required, this is not a case where we can simply consider pure research with no regard to its application. Considerations like these have prompted scientists such as Yves Moreau to argue that all unethical biometric research should be retracted.

But regardless of how we think about these specifics, this case serves to highlight a much larger issue: given the large number of ethical issues associated with AI and its potential uses, we need to dedicate much more of our time and attention to the question of whether certain forms of research should be considered forbidden knowledge. Do AI scientists and developers have moral responsibilities for their work? Is it more important to develop this research for its own sake, or are there other ethical goods that should take precedence?

Informed Consent and the Joe Rogan Experience

photograph of microphone and headphones in recording studio

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


The Joe Rogan Experience (JRE) podcast was again the subject of controversy when a recent episode was criticized by scientific experts for spreading misinformation about COVID-19 vaccinations. It was not the first time this has happened: Rogan has frequently been on the hot seat for espousing views on COVID-19 that contradict the advice of scientific experts, and for entertaining guests who promote similar views. The most recent incident involved Dr. Robert Malone, who relied on his medical credentials to make views that have been widely rejected seem more reliable. Malone has himself been at the center of a few recent controversies: he was kicked off of YouTube and Twitter for violating their respective policies regarding the spread of misinformation, and his appearance on the JRE podcast has prompted some to call for Spotify (where the podcast is hosted) to employ a more rigorous misinformation policy.

While Malone made many dubious claims during his talk with Rogan – including that the public has been “hypnotized,” and that policies that have been enforced by governments are comparable to policies enforced during the Holocaust – there was a specific, ethical argument that perhaps passed under the radar. Malone made the case that he (and presumably other doctors and healthcare workers) has a moral duty to tell those considering the COVID-19 vaccine about a wide range of potentially detrimental effects. For instance, in the podcast he stated:

So, you know my position all the way through this comes off of the platform of bioethics and the importance of informed consent, so my position is that people should have the freedom of choice particularly for their children… so I’ve tried really hard to make sure that people have access to the information about those risks and potential benefits, the true unfiltered academic papers and raw data, etc., … People like me that do clinical research for a living, we get drummed into our head bioethics on a regular basis, it’s obligatory training, and we have to be retrained all the time… because there’s a long history of physicians doing bad stuff.

Here, then, is an argument that someone like Malone may be making, and that you’ve potentially heard at some point over the past two years: Doctors and healthcare workers have a moral obligation to provide patients who are receiving any kind of health care with adequate information in order for them to make an informed decision. Failing to provide the full extent of information about possible side-effects of the COVID-19 vaccine represents a failure to provide the full extent of information needed for patients to make informed decisions. It is therefore morally impermissible to refrain from informing patients about the full extent of possible consequences of receiving the COVID-19 vaccine.

Is this a good argument? Let’s think about how it might work.

The first thing to consider is the notion of informed consent. The general idea is that providing patients with adequate information is required for them to have agency in their decisions: patients should understand the nature of a procedure and its potential risks so that the decision they make really is their decision. Withholding relevant information would thus constitute a failure to respect the agency of the patient.

The extent and nature of the information that patients need to be informed of, however, is open for debate. Of course, there’s no obligation for doctors and healthcare workers to provide false or misleading information to patients: being adequately informed means receiving the best possible information at the doctor’s disposal. Many of the worries surrounding the advice given by Malone, and others like him, pertain to just this point: the concerns they raise are overblown, have been debunked, or are generally not accepted by the scientific community, and thus there is no obligation to pass that kind of information along to patients.

Regardless, one might still think that in order to have fully informed consent, one should be presented with the widest range of possible information, after which the patient can make up their own mind. Of course, Malone’s thinking is much closer to the realm of the conspiratorial – for example, he stated during his interview with Rogan that scientists manipulate data in order to appease drug companies, in addition to his aforementioned claims about mass hypnosis. Even so, if these views are genuinely held by a healthcare practitioner, should they present them to their patients?

While informed consent is important, there is also debate about how fully informed, exactly, one ought to be, or can be. For instance, while an ideal situation would be one in which patients had a complete, comprehensive understanding of the nature of a relevant procedure, treatment, etc., there is reason to think that many patients fail to achieve that degree of understanding even after being informed. This isn’t really surprising: most patients aren’t doctors, and so will be at a disadvantage when it comes to having a complete medical understanding, especially if the issue is complex. A consequence, then, may be that patients who are not experts could end up in a worse position when it comes to understanding the nature of a medical procedure when presented with too much information, or else information that could lead them astray.

Malone’s charge that doctors are failing to adhere to their moral duties by not fully informing patients of a full range of all possible consequences of the COVID-19 vaccination therefore seems misplaced. While people may disagree about what constitutes relevant information, a failure to disclose all possible information is not a violation of a patient’s right to be informed.

Thinking about Trust with C. Thi Nguyen

Many of us rely heavily on our smartphones and computers. But does it make sense to say we “trust” them? On today’s episode of Examining Ethics, the philosopher C. Thi Nguyen explores the relationship of trust we form with the technology we use. We not only can trust non-human objects like smartphones, we tend to trust those objects in an unquestioning way; we’re not thinking about it all that much. While this unquestioning trust makes our everyday lives easier, we don’t recognize just how vulnerable we’re making ourselves to large and increasingly powerful corporations.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“The Big Ten” by Blue Dot Sessions

“Lemon and Melon” by Blue Dot Sessions

Vaccine Equity with Govind Persad

Many of us have vaccines on the brain recently–whether because we’ve just received a shot, or because we are trying to access one. Who gets vaccinated and when they get their doses is a decision largely in the hands of state public health officials. Many states use age as the primary factor in determining who gets priority. On this episode of Examining Ethics, Dr. Govind Persad–an expert in bioethics and health care law–argues that legislators should think through more equitable options for distributing vaccines.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Dr. Govind Persad
    1. “Setting Priorities Fairly in Response to Covid-19…”
    2. Recorded talk, “Implementing COVID-19 Vaccine Distribution: Legal and Equity Dimensions”
  2. CDC’s COVID-19 Vaccine Rollout Recommendations
  3. Myths and Facts about COVID-19 Vaccines

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Partly Sage” by Blue Dot Sessions

“Colrain” by Blue Dot Sessions

The Kindness of Strangers with Michael McCullough

How did humans turn from animals who were only inclined to help their offspring to the creatures we are today–who regularly send precious resources to total strangers? With me on the show today is Michael McCullough, who explores this difficult question in his book, The Kindness of Strangers: How a Selfish Ape Invented a New Moral Code.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Michael McCullough, The Kindness of Strangers: How a Selfish Ape Invented a New Moral Code
  2. W.D. Hamilton and the gene for altruism
  3. Robert Trivers and reciprocal altruism
  4. Ancient Mesopotamia
  5. Humanity’s turn to agriculture (the Neolithic Revolution)
  6. The Code of Hammurabi
  7. The Axial Age
  8. The Golden Rule
  9. Peter Singer

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“The Zeppelin” by Blue Dot Sessions from sessions.blue (CC BY-NC 4.0)

“Silk and Silver” by Blue Dot Sessions from sessions.blue (CC BY-NC 4.0)

The Quandary of Contact-Tracing Tech

image of iPhone indicating nearby infections

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


All over the country, states are re-opening their economies. This is happening in defiance of recommendations from experts in infectious disease, which suggest that states only re-open after they have seen a fourteen-day decline in cases, have capacities to contact trace, have sufficient personal protective equipment for healthcare workers, and have sufficient testing capabilities to identify hotspots and deal with problems when they arise.

Experts do not insist that things need to be shut down until the virus disappears. Instead, we need to change our practices; we need to open only when it is safe to do so and we need to employ common sense practices like social distancing, mask-wearing, and hand-washing and sanitizing when we take that step. The ability to identify people who either have or might have coronavirus and to contact those with whom they might have come into contact could play a significant role in this process. Instead of isolating everyone, we could isolate those we have good reason to believe may have become infected.

Different countries have approached this challenge differently. Many have made use of technology to track outbreaks of the virus. Without a doubt, these approaches involve balancing the value of public safety against concerns about personal privacy and undue governmental intrusion into the lives of private citizens.

Many in the West were surprised to hear that Shanghai Disney was scheduled to re-open, which it did on May 11th. Visitors to the park won’t have the Disney experience that they would have had last summer. First, unsurprisingly, Disney is restricting the number of people it will allow into the park at any one time to 24,000 people a day. This is down from its typical 80,000 daily guests. When guests arrive, they must have their temperatures taken, must use hand sanitizer, and must wear masks. Crucially, they must open an app on their phone at the gate that demonstrates to the attendant that their risk level is green.

Since the COVID-19 outbreak, people in China have been required to participate in a system that they call the “Alipay Health Code.” To participate, people download an app on their phones which makes use of geolocation to track the whereabouts of everyone who has it. People are not required to have a COVID-19 test in order to comply with the demands of the app. Instead, the app tracks how close people have come to others who have confirmed cases of the virus. The app assigns a person a QR code depending on their risk level. People with a green designation are low risk and can travel through the country and can go to places like restaurants, shopping malls, and amusement parks with no restrictions. Those with a yellow designation must self-quarantine for nine days. If a person has a red designation, they must enter mandatory government quarantine.
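
The precise rules behind the color assignments have not been made public. Purely as an illustration of the reported behavior, here is a minimal sketch in Python; the thresholds, function names, and inputs are hypothetical, not Alipay’s actual criteria:

```python
from enum import Enum

class RiskLevel(Enum):
    GREEN = "green"    # no restrictions on movement
    YELLOW = "yellow"  # nine-day self-quarantine
    RED = "red"        # mandatory government quarantine

def assign_risk_level(contacts_with_confirmed_cases: int,
                      visited_flagged_region: bool) -> RiskLevel:
    """Hypothetical reconstruction of the app's reported behavior:
    proximity to confirmed cases (and, reportedly, travel history)
    is mapped to a colored QR code. The real criteria are unknown."""
    if contacts_with_confirmed_cases > 0:
        return RiskLevel.RED
    if visited_flagged_region:
        return RiskLevel.YELLOW
    return RiskLevel.GREEN

# A user with no known contacts and no travel to a flagged region
# would be shown a green QR code at the park gate.
print(assign_risk_level(0, False))  # RiskLevel.GREEN
```

The opacity described below is exactly what a sketch like this cannot capture: citizens do not know what the real version of these rules looks like or when their designation will change.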

At first glance, this app appears to be a reasonable way of finding balance between preventing the spread of disease on one hand, and opening up the economy and freeing people from isolation on the other. China isn’t simply accepting the inevitable—opening up the economy and disregarding its obligation to vulnerable populations. Instead, it is trying to maximize the well-being of society at large.

Things are more complicated than they might originally appear. First, the process is not transparent to citizens. The standards for reassignment from one color designation to another are not made public. Some people are stuck in mandatory government quarantine without knowing why they are there or how long they might expect to be detained.

There are also concerns about regional discrimination. It appears that a person can be designated a particular threat level simply because they are from or have recently visited a particular region. Citizens have no control over how this process is implemented, and the concern is that decision-making metrics might be discriminatory and might serve to reinforce oppressive social conditions that existed before COVID-19 was an issue. We know that COVID-19 disproportionately affects people living in poverty who are forced to work in unsafe conditions. This kind of tracking may make life for these populations even worse.

There are also significant concerns about the introduction of a heightened degree of governmental surveillance. Before COVID-19 hit, the Chinese government had already slowly begun to implement a social credit system that assigns points to people based on their social behaviors. These points then dictate the quality of services for which the people might be eligible. The Alipay Health Code increases governmental surveillance and encroachment. When people download the Alipay app, the program that is launched includes a command labeled “reportInfoAndLocationToPolice” that sends information about that person to a secure server. It is unclear for what purpose that information will be used in the future. It is also unclear how long it will be mandatory for people in China to have this app on their phones.

But China is not the only country that is using tracking technology to manage the spread of COVID-19. Other countries doing this include South Korea, Singapore, Taiwan, Austria, Poland, the U.K., and the United States. There are advantages and disadvantages to each system. Each system reflects a different balance of important societal values.

South Korea’s system keeps its residents informed of the movement of people who have tested positive for COVID-19. The government sends out texts informing people of places these individuals have been so that others who have also been to those places know whether they might be at risk. This information also lets people know which places might be hotspots so they know to avoid those places. All of this information is useful to prevent the spread of the virus. That said, there are serious challenges here too. Information about the location of individuals at particular times leads to speculation about their behaviors that might lead to discrimination and harassment. The information is anonymous in principle; COVID-19 patients are assigned numbers that are used in reports. In practice, however, it is often fairly easy to deduce who the people are.

Some countries, like the U.K., Singapore, and the United States have “opt-in” tracking programs. Participation in these programs is voluntary and there tend to be regional differences in what they do and how they operate. Singapore uses a system called “TraceTogether.” Users of the app turn on Bluetooth capabilities for their devices. Each device is associated with an anonymous code. Devices communicate with one another and store each other’s anonymous codes. Then, if a person has interacted with someone who later tests positive, they are informed that they are at risk. They can then take action; they may be tested or may self-quarantine. This system appears to have established a comfortable balance between competing interests.
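
A minimal sketch of this exchange-and-notify idea can make it concrete. The class names, identifier format, and notification step below are illustrative assumptions, not Singapore’s actual implementation:

```python
import secrets

class Device:
    """Toy model of a phone running a TraceTogether-style app."""
    def __init__(self):
        self.anonymous_code = secrets.token_hex(8)  # random ID, not tied to identity
        self.seen_codes = []                        # codes received from nearby devices

    def encounter(self, other: "Device"):
        # In the real system the exchange happens over Bluetooth when two
        # phones are near each other; here we simply record the codes.
        self.seen_codes.append(other.anonymous_code)
        other.seen_codes.append(self.anonymous_code)

def notify_exposed(devices, infected):
    """The health authority flags the infected user's code; any device
    that has stored that code is told it may be at risk."""
    return [d for d in devices if infected.anonymous_code in d.seen_codes]

alice, bob, carol = Device(), Device(), Device()
alice.encounter(bob)  # Alice and Bob spent time near each other
at_risk = notify_exposed([alice, bob, carol], infected=bob)
print(alice in at_risk, carol in at_risk)  # True False
```

Because only random codes are exchanged, the phones themselves never learn whom their owners encountered; in this sketch, identities matter only when a positive test is reported to the health authority.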

One problem, however, is that its voluntary nature results in low participation numbers—only 1.5 million of Singapore’s 5.7 million people are using the app. A participating user does have the peace of mind of knowing that if they have been in contact with another app user who contracts COVID-19, they’ll be told about it. However, this kind of system doesn’t achieve that much-desired balance between concerns for public safety and concerns for a healthy functioning economy. If a person knows only about some, but not all, of the people they’ve encountered who have tested positive for COVID-19, they’re no safer out in the world as a consumer in a newly-opened economy. This app also does nothing to prevent the spread of the virus by asymptomatic people who may never feel the need to get tested because they feel fine.

There are other, less straightforward ways of collecting and using data about the spread of the virus. Government agencies are obtaining geo-tracking information from corporations like Google and Facebook. Most users don’t pay much attention when an app asks if it can track the user’s location. People tend to provide a morally meaningless level of consent—they click “okay” without even glancing at terms and conditions. Corporations use this information for all sorts of purposes. For example, police agencies have accessed this information to help them solve crimes through a “digital dragnet” process. Because these apps track people’s movements, they can help the government see who was present at sites later identified as hotspots and where people at those sites at the time in question went next. This can help governments direct their attention to where it might do the most good.

Again, in many ways, this seems like a good thing. We don’t want to waste valuable time searching for information where there isn’t any to be found. It’s best instead to find the clues and follow them. On the other hand, this method of obtaining information highlights something troubling about trust and privacy in the United States. A Pew poll from November 2019 suggests that citizens view themselves as having very little control over who is collecting data about them and very little knowledge about what data is being collected or the purposes for which it is being used. Even so, people tend to pay very little attention to the fact that they are being tracked. They simply accept the notion that, if they want to use an app, they have to accept the terms and conditions.

People concerned about personal liberties are front and center on the public stage right now as their protests make for attention-catching headlines. People are unlikely to want to be forced by the government to use a tracking app. Their fears are not entirely unfounded—China’s program seems to open the door for human rights violations and a troubling amount of governmental surveillance of private citizens. Ironically, though, these people give that same information without any fuss to corporations through the use of apps. This may be even worse. At least in principle, governments exist for the good of the people, while the raison d’être of corporations is to make a profit.

The case of tracking poses a genuine moral dilemma. There are very good public health reasons to use technology to track and control the spread of the virus. There are also very good reasons to be concerned about privacy and human rights violations. Around 3,000 people died in the tragic terrorist attacks that took place on September 11th, 2001. As a result, Congress passed the Patriot Act, which significantly limited the privacy rights of the people. Its effect on the way respect for individual privacy changed at airports is also noteworthy. How much privacy should we be willing to give up in exchange for safety? If we were willing to give up privacy for safety in response to 9/11, how much more willing should we be to do so when the death count is so much higher?

Moral Luck, Universalization, and COVID-19

photograph of a toast at a swank gathering

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


All over the country, people are making headlines for violating shelter-in-place and stay-at-home orders. Motivations for this behavior are diverse: some fail to recognize the gravity of the situation; some acknowledge that COVID-19 is bad but doubt that it is a threat to them personally; others, despite a lack of expertise in infectious disease, trust their gut instincts more than the opinions of experts. Some people who defiantly resist orders insist that they are doing so to protect their constitutional rights. People are hosting parties, attending church services, and engaging in life-as-usual activity. Those who have been sheltering in place for over a month look on with incredulity and, often, anger. Why do these people behave as if rules, created in emergency circumstances for the health and safety of the community at large, don’t apply to them?

Some people who choose to go out and spend time near others live in states in which doing so is currently against the law. Others live in Arkansas, Iowa, Nebraska, North Dakota, South Dakota, Utah, or Wyoming — states in which staying at home has been recommended, but not required by their respective governors. An answer to the question of whether going out in these conditions is legal doesn’t settle the question of whether it is ethical.

Plenty of people appear to be comfortable gambling with general health and well-being. In one case that made headlines, notorious libertarian Ammon Bundy defied Idaho’s stay-at-home order, routinely hosting in-person meetings on the topic of the order as a restriction of civil liberties. Bundy announced his intention to host a massive Easter get-together of 1,000 people or more. In reality, 60 people attended the event, none of whom took any social distancing precautions. They did so in defiance of what they viewed as a governmental infringement on their right to choose.

What is it to make a choice? One plausible way of looking at it is that a choice is an endorsement—it is a recommendation. When I choose a course of action, I affirm that the action is, on some description, valuable. I affirm that it would be acceptable for another person to make the choice that I make under similar circumstances. In performing an action, I express that I view the action not only as an action that can be performed, but as an action that ought to be performed. After all, if I didn’t think it ought to be performed, what on earth possessed me to perform it? If that is the implication of choice, then we should be very selective in our choices. In his 1946 lecture Existentialism is a Humanism, philosopher Jean-Paul Sartre emphasizes the responsibility each person bears for their own choice. He said,

“When a man commits himself to anything, fully realizing that he is not only choosing what he will be, but is thereby at the same time a legislator deciding for the whole of mankind – in such a moment a man cannot escape from the sense of complete and profound responsibility.”

Our choices then, even when they seem to us to be somewhat narrow in scope, are not entirely private or personal matters.

A number of things follow from the idea that our choices are endorsements. First, our choices are no small matter because they define who we are as people. People may want to conceive of themselves as kind, empathetic, and caring, but the question of whether a person has those traits is determined by what they actually do, rather than by what they claim to value. In pandemic conditions, a choice to attend a party or to go into a crowded place when doing so is not necessary may seem to be of little consequence if, ultimately, no one gets hurt. On the other hand, those choices say something about the kinds of risks a person is willing to take on and the kind of danger to which that person is willing to expose others.

Second, if choices are recommendations, then there is a good chance that people will follow them—that’s what happens with recommendations. If, for instance, college students observe that some of their peers are gathering together with no apparent consequences, there is some chance that they might conclude that doing so is, after all, no big deal. Others their age are making themselves exceptions to shelter-in-place rules, so why can’t they do so as well?

Many philosophers have had much to say about the morality of making an exception of oneself. Eighteenth-century philosopher Immanuel Kant urges us to think about whether our actions can be universalized—roughly, would it be acceptable if everyone performed the action we are considering performing? If not, then we are treating a principle, morally binding on everyone else, as if it doesn’t apply to us.

Decision-making in a pandemic demonstrates the moral importance of universalization powerfully. People who violate stay-at-home and shelter-in-place orders are counting on the fact that they are behaving as exceptions to the rules. If everyone followed the recommendations suggested by their actions, the disease would spread like wildfire, even faster than the rate at which it is now spreading. “But,” they might argue, “what is the real harm? If I don’t get sick, and if I don’t spread the disease, does it really matter if I saw some friends one Friday night in April?”

A person who makes this argument fails to recognize themselves as the recipient of what philosophers often refer to as moral luck. In his 1877 essay The Ethics of Belief, philosopher W.K. Clifford describes a ship owner who sends his ship out to sea despite the fact that he had reason to believe it might not be seaworthy. The ship sinks and the passengers die. What if, instead, the ship didn’t sink? What if all of the passengers survived? Would this diminish the guilt of the ship owner? Clifford answers, “Not one jot. When an action is once done, it is right or wrong forever; no accidental failure of its good or evil fruits can possibly alter that. The man would not have been innocent, he would only have been not found out.” The ship owner got lucky in this case—no one discovered that he did something irresponsible. This doesn’t change how we should view his decision to send the ship off to sea; whatever the consequences turned out to be, his action was reckless.

Consider the following two cases. Tom and Mary both go out to a bar and become equally intoxicated. They both make the decision to drive their respective cars home while too impaired to operate a vehicle safely. They both live roughly the same distance from the bar. On the way home, Tom encounters a pedestrian whom he hits and kills. A pedestrian does not cross Mary’s path, and she arrives home safely. The fact that a pedestrian was present in one case but not the other was a matter of moral luck—neither Tom nor Mary had any control over that. That said, they both behaved equally recklessly and that is the decision for which they are morally responsible.

The same thing can be said about the decision to ignore critical recommendations during the COVID-19 pandemic. Such actions are reckless. Some people who disregard orders may not get the virus and they may not spread it to others. Nevertheless, their actions are not universalizable. They can’t be reasonably recommended to others. When these people take themselves to be defending their own liberties, they are really behaving selfishly and diminishing the liberty and well-being of others.

Expertise in the Time of COVID

photograph of child with mask hugging her mother

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


Admitting that someone has special knowledge that we don’t or can do a job that we aren’t trained for is not very controversial. We rarely hesitate to hire a car mechanic, accountant, carpenter, and so on, when we need them. Even if some of us could do parts of their jobs passably well, these experts have specialized training that gives them an important advantage over us: they can do it faster, and they are less likely to get it wrong. In these everyday cases, figuring out who is an expert and how much we can trust them is straightforward. They have a sign out front, a degree on the wall, a robustly positive Google review, and so on. If we happen to pick the wrong person—someone who happens to be incompetent or a fraud—we haven’t lost much. We try harder next time.

But as our needs get more complicated (for example, when we need information about a pandemic disease and how best to fight it), and as that kind of scientific information becomes politicized, figuring out who the experts are and how much to trust them is less clear.

Consider a question as seemingly simple as whether surgical masks help contain COVID-19. At first, experts said everyone should wear masks. Then other experts said masks won’t help against airborne viruses because the masks do not seal well enough to stop the tiny viral particles. Some said that surgical masks won’t help, but N95 masks will. Then some experts said that surgical masks could at least help keep you from getting the disease from others’ spittle, as they talk, cough, and sneeze. Still other experts said that even this won’t do because we touch the masks too often, undermining their protective capacity. Yet still others say that while the masks cannot protect you from the virus, they can protect others from you if you happen to be infected, “contradicting,” as one physician told me, “years of dogma.”

What are we to believe from this cacophony of authorities?

To be sure, some of the confusion stems from the novelty of the novel coronavirus. Months into the global spread, we still don’t know much about it. But a large part of the burden of addressing the public health implications lies not just in expert analysis but in how expert judgments are disseminated. And yet, I have questions: If surgical masks won’t keep me from getting the infection because they don’t seal well enough, then how could they keep me from giving it to others? Is the virus airborne or isn’t it? What does “airborne” mean in this context? How do we pick the experts out of this crowd of voices?

Most experts are happy to admit that the world is messier than they would prefer, that they are often beset by the fickleness of nature. And after decades of research on error and bias, we know that experts, just like the rest of us, struggle with biased assumptions and cognitive limitations, with the biases inherent in how those before them framed questions in their fields, and with the influence of competing interests—even if from the purest motives—for personal or financial ends. People who are skeptical of expertise point to these deficiencies as reasons to dismiss experts.

But if expertise exists, really exists, not merely as a political buzzword or as an ideal in the minds of ivory tower elitists, then, it demands something from us.

Experts understand their fields better than novices. They are better at their jobs than people who have not spent years or decades doing their work. And thus, when they speak about what they do, they deserve some degree of trust.

Happily, general skepticism about expertise is not widely championed. Few of us — even in the full throes of, for example, the Dunning-Kruger Effect — would hazard jumping into the cockpit of an airplane without special training. Few of us would refuse medical help for a severe burn or a broken limb. Unfortunately, much of the skepticism worth taking seriously attaches to topics that are likely to do more harm to others than to the skeptic: skepticism about vaccinations, climate change, and the Holocaust. If you happen to fall into one of these groups at some point in your life — I grew up a six-day creationist and evolution-denier — you know how hard it is to break free from that sort of echo chamber.

But even if you have extricated yourself from one distorted worldview, how do you know you’re not trapped in another? That you aren’t inadvertently filtering out or dismissing voices worth listening to? This is a challenge we all face when up against a high degree of risk in a short amount of time from a threat that is new and largely unknown and that is now heavily politicized.

Part of what makes identifying and trusting experts so hard is that not all expertise is alike. Different experts have differing degrees of authority.

Consider someone working in an internship in the first year out of medical school. They are an MD, and thus, an expert of sorts. Unfortunately, they have very little clinical experience. They have technical knowledge but little competence applying it to complex medical situations.

Modern medicine has figured out how to compensate for this lack of experience. New doctors have to train for several years under a licensed physician before they can practice on their own. To acquire sufficient expertise, they have to be immersed into the domain of their medical specialty. The point is that not every doctor has the same authority as every other, and this is true for other expert domains, as well.

A further complication is that types of expertise differ in how much background information and training is required to do their jobs well. Some types of expertise are closer to what philosopher Thi Nguyen calls our “cognitive mainland.” This mainland refers to the world that novices are familiar with, the language they can make sense of. For example, most novices understand enough about what landscape designers do to assess their competence. They can usually find reviews of their work online. They can even go look at some of their work for themselves. Even if they don’t know much about horticulture, they know whether a yard looks nice.

But expertise varies in how close to us it is. For example, what mortgage brokers do is not as close to us as what landscapers do. It is further away from our cognitive mainland, out at sea, as it were. First-time home buyers need a lot of time to learn the language associated with the mortgage industry and what it means for them. The farther an expert domain is from a novice’s mainland, the more likely its experts are to be on what Nguyen calls a “cognitive island,” isolated from resources that would let novices make sense of their abilities and authority.

Under normal circumstances, novices have some tools for deciding who is an expert and who is not, and for deciding which experts to trust and which to ignore. This is not easy, but it can be done. Looking up someone’s credentials, certifications, years of experience, recommendations, track records, and so on, can give novices a sense of someone’s competence.

As the expertise gets farther from novices’ cognitive mainland, they can turn to other experts in closely related fields to help them make sense of it. In the case of mortgages, for example, they might have a friend who works in real estate or someone in banking who can translate the relevant bits in a way that meets their needs. In other words, they can use “meta-experts,” experts in a closely related domain who understand enough of the domain in question to help them choose experts in that domain wisely.

Unfortunately, during a public health emergency, uncertainty, time constraints, and politicization mean that all of these typical strategies can easily go awry. Experts who feel pressured by society or threatened by politicians can — even if inadvertently — manufacture a type of consensus. They can double-down on a way of thinking about a problem for the sake of maintaining the authority of their testimony. In some cases, this is a simple matter of groupthink. In other cases, it can seem more intentional, even if it isn’t.

Psychologist Philip Tetlock, in Superforecasting: The Art and Science of Prediction (2015), his book with Dan Gardner, explains how to prevent this sort of consensus problem by bringing together diverse experts on the same problem and suspending any hierarchical relationships among them. If everyone feels free to comment and if honest critique is welcomed, better decisions are made. In Are We All Scientific Experts Now? (2014), sociologist Harry Collins contends that this is also how peer review works in academic settings. Not everyone who reviews a scientific paper for publication is an expert in the narrow specialization of the researcher. Rather, they understand how scientific research works, the basic terminology used in that domain, and how new information in domains like it is generated. Not only can experts in related domains challenge groupthink and spur more creative solutions, they can help identify errors in research and reasoning because they understand how expertise works.

These findings are helpful for novices, too. They suggest that our best tool for identifying and evaluating expertise is, rather than pure consensus, consensus among a mix of voices close to the domain in question.

We might call this meta-expert consensus. Novices need not be especially close to a specialized domain to know whether someone working in it is trustworthy. They only have to be close enough to people close to that domain to recognize broad consensus among those who understand the basics in a domain.

Of course, how we spend our energy on experts matters. There are many questions that political and institutional leaders face that the average citizen will not. The average person need not invest energy on highly specialized questions like:

  • How should hospitals fairly allocate scarce resources?
  • How do health care facilities protect health care workers and vulnerable populations from unnecessary risks?
  • How can we stabilize volatile markets?
  • How do we identify people who are immune from the virus quickly so they can return to the workforce?

The payoff is too low and the investment too significant.

On the other hand, there are questions worth everyone’s time and effort:

  • Should I sanitize my groceries before or when I bring them into my living space?
  • How often can I reasonably go out to get groceries and supplies?
  • How can I safely care for my aging parent if I still have to go to work?
  • Should I reallocate my investment portfolio?
  • Can I still exercise outdoors?

Where are we on the mask thing? It turns out, experts at the CDC are still debating their usefulness under different conditions. But here’s an article that helps make sense of what experts are thinking about when they are making recommendations about mask-wearing.

The work required to find and assess experts is not elegant. But neither is the world this pandemic is creating. And understanding how expertise works can help us cultivate a set of beliefs that, if not elegant, is at least more responsible.

Owning a Monopoly on Knowledge Production

photograph of Monopoly game board

With Elizabeth Warren’s call to break up companies like Facebook, Google, and Amazon, there has been increasing attention to the role that large corporations play on the internet. The matter of limited competition within different markets has become an important area of focus, but much of the debate centers on the economic and legal factors involved (such as whether there should be greater antitrust enforcement). The philosophical and moral issues have not received as much attention. If a select few corporations are responsible for the kinds of information we get to see, they are capable of exerting a significant influence on our epistemic standards, practices, and conclusions. This also makes the issue a moral one.

Last year Facebook co-founder Chris Hughes surprised many with his call for Facebook to be broken up. Referencing America’s history of breaking up monopolies such as Standard Oil and AT&T, Hughes charged that Facebook dominates social networking and faces no market-based accountability. Earlier, Elizabeth Warren had also called for large companies such as Facebook, Google, and Amazon to be broken apart, claiming that they have bulldozed competition and are using private information for profit. Much of the focus has been on the mergers of companies like Facebook and Instagram or Google and Nest. The argument holds that these mergers are anti-competitive and are creating economic problems. According to lawyer and professor Tim Wu, “If you took a hard look at the acquisition of WhatsApp and Instagram, the argument that the effect of those acquisitions have been anticompetitive would be easy to prove for a number of reasons.” For one, he cites the significant effect that such mergers have had on innovation.

Still, others have argued that breaking up such companies would be a bad idea. They note that a concept like social networking is not clearly defined, and thus it is difficult to say that a company like Facebook constitutes a monopoly in its market. Also, unlike Standard Oil, companies like Facebook or Instagram are not essential services for the economy, which undermines potential legal justifications for breaking these companies up. Most of these corporations also offer their services for free, which means that the typical concerns about monopolies and anticompetitive practices regarding prices and rising costs of services do not apply. Those who argue this tend to suggest that the problem lies with the capitalist system or with a lack of proper regulation of these industries.

Most of the proponents and opponents focus on the legal and economic factors involved. However, there are epistemic factors at stake as well. Social epistemologists study questions like “how do groups come to know things?” or “how can communities of inquirers affect what individuals come to accept as knowledge?” In recent years, philosophers like Kevin Zollman have provided accounts of how individual knowers are affected by communication within their network of fellow knowers. Some of these studies have demonstrated that the communication structure of an epistemic network, that is, how beliefs, evidence, and testimony are shared within it, can affect which conclusions the community settles on and what individual members of the network come to regard as rational.
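
A toy simulation can give a feel for what such models show. The setup below is a simplified sketch in the spirit of network epistemology, not Zollman’s actual model: agents repeatedly gather noisy evidence about the same question, but each agent pools evidence only with its network neighbors, so the wiring of the community shapes how close each member’s conclusion lands to the truth:

```python
import random

def simulate(neighbors, rounds=50, true_rate=0.7, seed=0):
    """Each agent draws a noisy binary observation of a true success rate
    each round, pools it with observations shared by its neighbors, and
    estimates the rate from everything it has seen."""
    random.seed(seed)
    n = len(neighbors)
    seen = [[] for _ in range(n)]  # evidence available to each agent
    for _ in range(rounds):
        draws = [1 if random.random() < true_rate else 0 for _ in range(n)]
        for agent in range(n):
            seen[agent].append(draws[agent])       # the agent's own evidence
            for nb in neighbors[agent]:            # evidence shared by neighbors
                seen[agent].append(draws[nb])
    return [sum(obs) / len(obs) for obs in seen]   # each agent's estimate

n = 10
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}           # sparse communication
complete = {i: [j for j in range(n) if j != i] for i in range(n)}   # everyone hears everyone

print("ring:    ", [round(e, 2) for e in simulate(ring)])
print("complete:", [round(e, 2) for e in simulate(complete)])
# With dense communication every agent ends up with the same estimate;
# with sparse communication individual estimates scatter more widely.
```

The philosophical point carries over: which platforms connect whom, and what they choose to pass along, partly determines what each member of the community ends up believing.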

Once we factor in the ways that a handful of corporations are able to influence the communication of information in epistemic communities on the internet, a real concern emerges. Google and Facebook are responsible for roughly 70% of referral traffic on the internet, though the number changes for different categories of articles. Facebook is responsible for referring 87% of “lifestyle” content. Google is responsible for 84% of referrals of job postings. Facebook and Google together are responsible for 79% of referral traffic regarding the world economy. Internet searching is a common way of getting knowledge and information, and Google controls almost 90% of this field.

What this means is that a few companies are responsible for communicating the incredibly large amounts of information, belief, and testimony shared by knowers all over the world. If we think about a global epistemic community, or even smaller sub-communities, learning and eventually coming to know things through the referrals of services like Google or Facebook, then a few large corporations are capable of affecting what we can know and what we will call knowledge. As Hughes noted in his criticism of Facebook, Mark Zuckerberg alone can decide how to configure Facebook’s algorithms to determine what people see in their News Feed, what messages get delivered, and what constitutes violent and incendiary speech. If a person comes to adopt many or most of their beliefs because of what they are exposed to on Facebook, then Zuckerberg alone can significantly determine what that person can know.

A specific example of this kind of dominance is YouTube. When it comes to the online video hosting marketplace, YouTube holds a significantly larger share than competitors like Vimeo or Dailymotion. Content creators know this all too well: YouTube’s policies on content and monetization have led many on the platform to lament the lack of competition. YouTube creators are often confused about why certain videos get demonetized, what is and is not acceptable content, and what standards should be followed. In recent weeks, the demonetization of history-focused channels has been particularly notable. For example, a channel devoted to the history of the First World War had over 200 videos demonetized. Many of these channels have had to begin censoring themselves based on what they think is not allowed. So, history channels have started censoring words that would be totally acceptable on network television.

The problem isn’t merely one of monetization either. If a video is demonetized, it will no longer be promoted and recommended by YouTube’s algorithm. Thus, if you wish to learn something about history on YouTube, Google is going to play a large role in determining who gets to learn what. This can affect the ways that people evaluate information on these (sometimes controversial) topics and thus what epistemic communities will call knowledge. Some content creators have begun looking for alternatives to YouTube because of these issues; however, it remains to be seen whether those alternatives will offer a real source of competition. In the meantime, much of the information that gets referred to us comes from a select few companies, and those companies have significant influence (intentionally or not) over what we as an epistemic community come to know or believe.

This makes the issue of competition an epistemic one, but it is also inherently a moral one. As a global society, we are capable of regulating, in one way or another, the ways in which corporations impact our lives. This raises an important moral question: is it morally acceptable for a select few companies to determine what constitutes knowledge? Having information referred to us by corporations gives some the opportunity to benefit over others, and we as a global society will have to determine whether we are okay with the significant influence these companies wield.

Forbidden Knowledge in Scientific Research

closeup photograph of lock on gate with iron chain

It is no secret that science has the potential to have a profound effect on society. This is often why scientific results can be so ethically controversial. For instance, researchers have recently warned of the ethical problems associated with scientists growing lumps of human brain in the laboratory. The blobs of brain tissue grown from stem cells developed spontaneous brain waves like those found in premature babies. The hope is that the study offers the potential to better understand neurological disorders like Alzheimer’s, but it also raises a host of ethical worries concerning the possibility that this brain tissue could reach sentience. In other news, this week a publication in the journal JAMA Pediatrics ignited controversy by reporting a supposed link between fluoride exposure and IQ scores in young children. In addition to several experts questioning the results of the study itself, there is also concern about the potential effect this could have on the debate over the use of fluoride in the water supply; anti-fluoride activists have already jumped on the study to defend their cause. Scientific findings have an enormous potential to dramatically affect our lives. This raises an ethical issue: should certain topics, owing to ethical concerns, be off-limits for scientific study?

This question is studied in both science and philosophy, and is sometimes referred to as the problem of forbidden knowledge. The problem can include issues of experimental methods and whether they follow proper ethical protocols (certain knowledge may be forbidden if it requires human experimentation), but it can also include the impact that the discovery or dissemination of certain kinds of knowledge could have on society. For example, a recent study found that girls and boys are equally good at mathematics and that children’s brains function similarly regardless of gender. However, there have been several studies going back decades that tried to explain differences in mathematical ability between boys and girls in terms of biological differences. Such studies risk reinforcing gender roles and potentially justifying them as biologically determined. This has the potential to spill over into social interactions. For instance, Helen Longino notes that such findings could lead to a lower priority being placed on encouraging women to enter math and science.

So, such studies have the potential to impact society, which is an ethical concern, but is this reason enough to make them forbidden? Not necessarily. The bigger problem involves how adequate these findings are, the concern that they could be incorrect, and what society is to do about that until corrected findings are published. For example, in the case of math testing, it is not that difficult to find statistically significant correlations between variables, but the limits of those correlations and a study’s ability to identify causal factors are often lost on the public. There are also methodological problems: some standardized tests rely on male-centric questions that can skew results, and different kinds of tests and different strategies for preparing for them can also distort findings. So even where correlations are found, and there are no major flaws in the study’s assumptions, the results may not be very generalizable. In the meantime, such findings, even if they are corrected over time, can create stereotypes in the public that are hard to get rid of.
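
To see why a nominally “significant” correlation is such weak evidence on its own, consider a quick simulation (illustrative only, not drawn from any of the studies discussed here): when enough unrelated variables are compared, a predictable share of comparisons clears the conventional significance threshold by chance.

```python
# Illustrative only: 20 variables with no real relationship, compared pairwise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_samples, n_vars = 100, 20
data = rng.normal(size=(n_samples, n_vars))

significant, tests = 0, 0
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        tests += 1
        if p < 0.05:  # nominally "significant" at the conventional threshold
            significant += 1

print(f"{significant} of {tests} comparisons look significant by chance alone")
# Roughly 5% of the 190 comparisons will clear p < 0.05 despite there being nothing to find.
```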

Because of these concerns, some philosophers argue either that certain kinds of questions should be banned from study, or that studies should avoid trying to explain differences in abilities and outcomes in terms of race or sex. For instance, Janet Kourany argues that scientists have moral responsibilities to the public and should therefore conduct themselves according to egalitarian standards. If a scientist wants to investigate differences between racial or gender groups, they should seek to explain those differences without assuming that they are biologically determined.

In one of her examples, she discusses studying differences in the incidence of domestic violence between white and Black communities. A scientist should highlight the similarities in domestic violence across white and Black communities and seek to explain dissimilarities in terms of social factors like racism or poverty. On a stance like this, research that appeals to racial differences to explain differences in rates of domestic violence would constitute forbidden knowledge. Only if these alternative, egalitarian explanations empirically fail can a scientist then choose to explore race as a possible explanation of differences between communities. Proceeding this way avoids perpetuating a possibly empirically flawed account suggesting that Black people might be more violent than other ethnic groups.

She points out that the alternative risks keeping stereotypes alive even while scientists slowly prove them wrong. Just as in the case of studying mathematical differences, the slow settling of opinion within the scientific community leaves society free to entertain stereotypes as “scientifically plausible” and to adopt potentially harmful policies in the meantime. In his research on the matter, Philip Kitcher notes that we are susceptible to cognitive asymmetry: it takes far less empirical evidence to maintain stereotypical beliefs than it takes to get rid of them. This is why studying the truth of such stereotypes can be so problematic.

These types of cases seem to offer significant support for labeling particular lines of scientific inquiry forbidden. But the issue is more complicated. First, telling scientists what they should and should not study raises concerns over freedom of speech and freedom of research. We already acknowledge limits on research on the basis of ethical concerns, but this represents a different kind of restriction. One might claim that so long as science is publicly funded, there are reasonable, democratically justified limits on research, but the precise boundaries of such restrictions will prove difficult to identify.

Secondly, and perhaps more importantly, such a policy has the potential to exacerbate the problem. According to Kitcher,

“In a world where (for example) research into race differences in I.Q. is banned, the residues of belief in the inferiority of the members of certain races are reinforced by the idea that official ideology has stepped in to conceal an uncomfortable truth. Prejudice can be buttressed as those who opposed the ban proclaim themselves to be the gallant heirs of Galileo.”

In other words, one reaction to bans on forbidden knowledge, so long as our own cognitive asymmetries are unknown to us, will be to object that the ban is an undue limitation on free speech for the sake of politics. In the meantime, those who push for such research can become martyrs, and censoring them may only serve to draw more attention to their cause.

This presents us with an ethical dilemma. Given that there are scientific research projects that could have a harmful effect on society, whether or not the science involved is adequate, is it wise to ban such projects as forbidden knowledge? There are reasons to say yes, but implementing such bans may cause more harm or drive more public attention to the issues in question. Even a ban on research into growing brain tissue from stem cells, however wise, might simply push that research to another country with more relaxed ethical standards, where the potential harms could be much worse. Issues like these, concerning how science and society relate, are only likely to be resolved through greater public education and open discussion about what ethical responsibilities we think scientists should have.

When Your Will Is Not Enough: Ethical Restrictions on Entering into Agreements

CRISPR image

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


A 43-year-old with a deadly skin cancer is asking doctors to use recent developments in CRISPR to experiment with treatments that may help him as well as advance medical understanding. Malakkar Vohryzek is offering to be a test subject, contacting a number of researchers and doctors to ask whether they would be interested in modifying his genetic code. Such treatment falls well outside approved parameters for human exposure to risk with the gene-editing technology, but the potential patient seems to be providing straightforward consent. In medicine and law, however, consent is often not enough. The international scientific community remains critical of the researchers in China who edited the genes of twin children last year, saying that such interference was premature and that our understanding of CRISPR and its impact on human subjects was not advanced enough for such research (for discussion see A.G. Holdier’s “Lulu and Nana: The Surprise of Genetically-Modified Humans”). Vohryzek’s case is interesting, though: given a terminal illness and a clearly expressed desire, why stick to standards that aim to promote and protect a subject’s welfare? If Vohryzek is willing to risk his health (what is left of it given his illness), why should doctors and researchers hesitate to proceed?

The ethics surrounding agreements or contracts incorporate a number of dimensions of our agency and the way we relate to one another. These standards attempt to take seriously the import of being able to direct one’s own life and the significance of the harm of manipulating the lives of others.

Paternalism is the term used to describe efforts to promote others’ best interests when those efforts run counter to their expressed wishes. In such cases, someone believes that if a person’s will were effective, it wouldn’t promote what is in their best interests, and that interference is therefore justified. The standard case of paternalism is that of a parent who overrules the will of a child. Say, for example, a 5-year-old wants ice cream for dinner, but a parent disregards this preference and instead makes the child eat a nutritious meal, believing that this will be better for the child. Typically, we think parents are morally justified in disregarding the child’s expressed preferences in circumstances like these. But when, and under what circumstances, paternalism can be justified outside of these clear-cut parent-child cases is much less clear. In Vohryzek’s case, there is something paternalistic about not prioritizing the autonomous choice he is communicating. In general, regulatory standards are meant to promote subjects’ welfare and interests, but Vohryzek isn’t a child, so what countervailing reasons apply here?

One class of cases where paternalistic interference is typically considered justified is where there isn’t a clear expression of an agent’s will to interfere with in the first place. We may interpret the parent-child case in this way: a child hasn’t developed their full autonomous capabilities, so superseding their expressions of will when those run counter to their best interests doesn’t seem as problematic as thwarting the will of a fully autonomous, mature adult. Vohryzek, and other patients facing terminal prognoses who knowingly choose to expose themselves to risk, seem to be in a different class than those whose illness or condition of life diminishes their autonomy.

One barrier to truly just agreements is an unethical power dynamic founded on asymmetric information. For instance, if one party uses legal understanding and jargon to obscure the stakes and conditions of an agreement so that the other party can’t fully weigh the possible outcomes that they are agreeing to, this is intuitively not a fair case of agreement. These concerns are relevant in many legal contracts, for instance in end-user license agreements that consumers accept in order to use apps and software.

Another arena where there is often an asymmetry of technical understanding is in physician-patient exchanges (for discussion see Tucker Sechrest’s “The Inherent Conflict in Informed Consent”). In order to get informed consent from patients, physicians must communicate effectively about diagnoses, potential treatment options, as well as their outcomes and likely effects to patients who frequently do not have the breadth of understanding that the physician possesses. If a doctor does not ensure that the patient comprehends the stakes of the treatment choices, the patient may enter into agreements that do not reflect their priorities, preferences, and values. This asymmetric understanding is also the ethically problematic dimension of predatory lending, “the practice of a lender deceptively convincing borrowers to agree to unfair and abusive loan terms, or systematically violating those terms in ways that make it difficult for the borrower to defend against.”

But further ethical considerations remain even when mutual understanding can be assured. It’s true that only when both parties to an agreement have a full grasp of its stakes and possible outcomes can each weigh this information against their preferences, priorities, and values in order to determine whether the agreement is right for them. However, this doesn’t exhaust the ethical dimensions of making agreements. We could imagine the 43-year-old patient seeking unapproved CRISPR treatments to be in just such a position: he might understand the risks and not be mistaken about how the facts of the matter relate to his particular values, preferences, and priorities. What ethical reservations are left?

Exploitation refers to a type of advantage-taking that is ethically problematic. Consider a case where an individual with little money is offered $500 in exchange for taking part in medical research. It could be that this is the “right” choice for them: the $500 is sorely needed, say to maintain access to shelter and food, the risk involved in the medical research is processed and understood clearly, and the person determines that shelter and food outweigh the risk. In such cases, the ethical issue isn’t that a person may be entering agreements without understanding or against their best interests. Indeed, this individual is making the best choice available in their circumstances. However, the structure of the choice itself may be problematic. Offering financial incentives for taking on unknown risk of bodily harm is a thorny question in bioethics because of the potentially exploitative relationship it sets up: when financial incentives are in place, the disadvantaged portion of a population will bear the brunt of the risk of medical research.

In order to avoid exploitation, there are regulatory standards governing the kinds of exchanges that are permissible when exposing one’s body to risk of unknown harm, as in medical research. There are high standards for such research in terms of likelihood of scientific validity – the hypothesized outcome can’t just be an informed “guess,” for instance. Vohryzek likely won’t find a researcher willing to run experiments on him, for fear that terminal patients, in general, will become vulnerable to experimentation. As a practice, this may be ethically problematic because patients are a vulnerable population and this vulnerability may be exploited; the ethical constraint on agreements can be a concern even when making the agreement may be both in the individual’s best interest and in accord with their will.

This, of course, leads to tensions and controversy. Should Vohryzek and others in similar positions be able to use their tenuous prognoses for scientific gain? “If I die of melanoma, it won’t help anyone,” he said. “If I die because of an experimental treatment, it will at least help science.”

Sparking Joy: The Ethics of Medically-Induced Happiness

Photograph of a sunflower in sunshine with blue sky behind

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


Happiness is often viewed as an ephemeral thing. Finding happiness is an individual and ever-developing process. Biologically speaking, however, all emotions are the simple result of hormones and electrical impulses. In a recent medical breakthrough, a team of scientists has found a way to tap into these electrical impulses and induce joy directly in the brain. This kind of procedure has long been the stuff of speculation, but now it has become a reality. While the technique shows a good deal of promise in treating disorders such as depression and post-traumatic stress, it also presents an ethical conundrum worth considering.

On initial examination, it is difficult to point out anything particularly wrong with causing “artificial” joy. Ethical hedonism would prioritize happiness over all other values, regardless of the manner in which happiness is arrived at. However, many people would have a knee-jerk rejection of the procedure. It bears some similarity to drug-induced euphoria, but unlike illicit drugs, this electrical procedure seems to have no harmful side effects, according to the published study. Of course, with a small sample size and a relatively short-term trial, addiction and other harmful aspects of the procedure may be yet undiscovered. If, as this initial study suggests, the procedure is risk-free, should it be ethically accepted? Or is there cause for hesitation beyond what is overtly harmful?

The possibility of instantaneous, over-the-counter happiness has been a frequent subject of science-fiction. Notable examples include Aldous Huxley’s Brave New World, which featured a happiness-inducing drug called “soma”; and Philip K. Dick’s Do Androids Dream of Electric Sheep? (later adapted into the film Blade Runner), which included a mood-altering device called a “mood organ.” Both novels treat these inventions as key elements in a dystopian future. Because the emotions produced by these devices are “false”—the direct result of chemical alteration, rather than a “natural” response to external conditions—the society which revolves around them is empty and void of meaning. What is the validity of this viewpoint? Our bias towards what we perceive as “natural” may be simply a matter of maintaining the status quo–we’re more comfortable with whatever we’re used to. This is similar to the preference for foods containing “natural” over “artificial” flavoring despite nearly identical chemical compositions. While we are instinctively wary of the “artificial” emotions, there may be no substantive difference to the unbiased feeler.

Of course, emotions exist for more than just the experience of feeling. The connection between emotions and the outside world was addressed by Kelly Bijanki, one of the scientists involved in the electrically-induced happiness study, in her interview with Discover Magazine: “Our emotions exist for a very specific purpose, to help us understand our world, and they’ve evolved to help us have a cognitive shortcut for what’s good for us and what’s bad for us.” Just as pain helps us avoid dangerous hazards and our ability to taste bitterness helps us avoid poisonous things, negative emotions help drive us away from harmful situations and towards beneficial ones. However, living in a modern society to which the human body is not biologically adapted, our normally helpful sensory responses like pain and fear can sometimes backfire. Some people experience chronic pain connected to a bodily condition that cannot be immediately resolved; in these cases, the pain itself becomes the problem, rather than a useful signal. As such, we seek medical solutions to the pain itself. Chronic unhappiness, such as in cases of anxiety and depression, could be considered the same way: as a normally useful sensory feedback which has “gone wrong” and itself become a problem requiring medical treatment.

What if the use of electrically-induced happiness extended beyond temporary medical treatments? Why shouldn’t we opt to live our lives in a state of perpetual euphoria, or at least have the option to control our emotions directly? As was previously mentioned, artificial happiness may be indistinguishable from the real thing, at least as far as our bodies are concerned. Human beings already use a wide variety of chemicals and actions to “induce” happiness–that is, to make ourselves happy. If eating chocolate or exercising are “natural” paths to happiness, why would an electrical jolt be “unnatural”? Of course, the question of meaning still bears on the issue. Robert Nozick argues that humans make a qualitative distinction between the experience of doing something and actually doing it. We want our happiness to be tied to real accomplishments; the emotion alone isn’t enough. More concretely, we would probably become desensitized to happiness if it were all we experienced. In the right doses, sadness helps us value happiness more; occasional pain makes our pleasure more precious.

If happiness in the absence of meaning is truly “empty,” our ethical outlook toward happiness should reflect this view. Rather than viewing pleasure or happiness itself as the ultimate good, we might instead see happiness as a component of a well-lived life. Whether something is good would depend not on whether it brings happiness, but whether it fulfills some wider sense of meaning. Of course, exactly what constitutes this wider meaning would continue to be the subject of endless philosophical debate.

Facing the Synthetic Age with Christopher Preston

We’re in an age known as the Anthropocene, an era in which humans have been the dominant force on earth. We’ve impacted the climate, we’ve shaped the land and in recent years, we’ve made changes on the atomic and genetic levels. On this podcast, the philosopher Christopher Preston shares insights from his book The Synthetic Age, which explores the ethics of technologies that have the potential to radically reshape the world. We’re attempting to cool the surface of the earth by brightening clouds. We can introduce traits into wild species through gene drives and create entirely new organisms in the lab. While these new technologies are interesting and in many cases, potentially helpful, Christopher writes that we need to see them for what they are: a “deliberate shaping” of the earth and the organisms in it. He wants us to think carefully about what it might mean for humans to live in a world that they have intentionally manipulated.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

 

  1. Christopher Preston, The Synthetic Age
  2. Explaining the Anthropocene
  3. “We have 12 years to limit climate change catastrophe”
  4. Synthetic biology at the Venter Institute
  5. Malaria is a public health crisis
  6. Gene drives, mosquitoes and malaria
  7. More on living in a post-wild world
  8. 2015 fatality at Yellowstone National Park
  9. “Just 90 companies caused two-thirds of man-made global warming emissions”
  10. Joel Reynolds

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

Zeppelin” by Blue Dot Sessions

Soothe” by Blue Dot Sessions

A Certain Lightness” by Blue Dot Sessions

Heliotrope” by Blue Dot Sessions

On Gene Editing, Disease, and Disability

Photo of a piece of paper showing base pairs

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


On November 29, 2018, MIT Tech Review reported that at Harvard University’s Stem Cell Institute, “IVF doctor and scientist Werner Neuhausser says he plans to begin using CRISPR, the gene-editing tool, to change the DNA code inside sperm cells.” This is the first stage towards gene editing embryos, which is itself a controversial goal, given the debates that arose in response to scientists in China making edits at more advanced stages of fetal development.

Frequently, concern over editing human genes involves issues of justice, such as the prospect of developing the unchecked power to produce human beings who would exist solely to serve some population – organ farming, for example. The moral standing of clones and worries over the dignity of humanity in the face of such power get reexamined whenever a new advance in gene editing is announced.

The counterpoint, and the less controversial use of our growing control over our genetic offspring, is the potential to cure diseases and improve quality of life for many people. However, this use of genetic intervention may not be as morally unambiguous as it seems at first glance.

Since advanced prenatal testing was developed, the debate about the moral status of selective abortion has been fraught. Setting aside the ethics of abortion itself, would it be ethical for a parent to choose to bring into the world a child who does not have a particular illness, syndrome, or condition rather than one who does? Ethicists are divided.

Some are concerned with the expressive power of such a decision – does making this selection express prejudice against those with the condition or a judgment about the quality of the life that individuals living with the condition experience?

Others are concerned with the practical implications of many people selecting for children without certain conditions. It is implausible that even widespread use of such selection would completely eradicate these conditions, so one worry is that, in a hypothetical society where selection is widespread, individuals with the conditions would be further stigmatized, rendered invisible, or left with fewer resources. Prejudice against conditions that involve disability might also lead to selections that reduce the diversity of the human population on the basis of misunderstandings about quality of life.

Of course, on the other side of these discussions is the intuitive preference or obligation for parents or those in charge of raising people in society to promote health and well-being. Medicine is traditionally thought to aim at treating and preventing conditions that deviate from health and wellness; both are complex concepts, to be sure, but preventing disease or creating a society that suffers less from disease seems to fall within the domain of appropriate medical intervention.

How does this advance in gene editing relate to the debate over selective birth? The Harvard project seeks to prevent Alzheimer’s disease by intervening on sperm cells before conception. Loss of human diversity, pernicious ableist expressive power, and negative impact on those who suffer from the disease remain the main concerns with intervening for the purported sake of health.

The Persistent Problem of the Fair Algorithm

photograph of a keyboard and screen displaying code

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


At first glance, it might appear that the mechanical procedures we use to accomplish such mundane tasks as loan approval, medical triage, actuarial assessment, and employment screening are innocuous. Designing algorithms to process large chunks of data and transform various individual data points into a single output offers great power in streamlining necessary but burdensome work. Algorithms advise us about how we should read the data and how we should respond. In some cases, they even decide the matter for us.

It isn’t simply that these automated processes are more efficient than humans at performing these computations (emphasizing the relevant data points, removing statistical outliers, and weighing competing factors). Algorithms also hold the promise of removing human error from the equation. A recent study, for example, has identified a tendency for judges on parole boards to become less and less lenient in their sentencing as the day wears on. By removing extraneous elements like these from the decision-making process, an algorithm might be better positioned to deliver true justice.

Similarly, another study established the general superiority of mechanical prediction to clinical prediction in various settings from medicine to mental health to education. Humans were most notably outperformed when a one-on-one interview was conducted. These findings reinforce the position that algorithms should augment (or perhaps even replace) human decision-making, which is often plagued by prejudice and swayed by sentiment.

But despite their great promise, algorithms carry a number of concerns. Chief among these are problems of bias and transparency. Often seen as free from bias, algorithms stand as neutral arbiters, capable of combating long-standing inequalities such as the gender pay-gap or unequal sentencing for minority offenders. But automated tools can just as easily preserve and fortify existing inequalities when introduced to an already discriminatory system. Algorithms used in assigning bond amounts and sentencing underestimated the risk of white defendants while overestimating that of Black defendants. Popular image-recognition software reflects significant gender bias. Such processes mirror and thus reinforce extant social bias. The algorithm simply tracks, learns, and then reproduces the patterns that it sees.

Bias can be the result of a non-representative sample that is too small or too homogeneous. But bias can also be a consequence of the kind of data the algorithm draws on to make its inferences. While discrimination laws are designed to restrict the use of protected categories like age, race, or sex, an algorithm might learn to use a proxy, like zip codes, that produces equally skewed outcomes.
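
A minimal sketch (with synthetic data and an invented lending scenario, not any real system) shows how this proxy effect works: a model that is never shown the protected attribute can still reproduce a historical disparity through a correlated feature standing in for a zip code.

```python
# Synthetic illustration of proxy bias: the protected attribute is withheld from the
# model, but a correlated feature (a stand-in for zip code) carries it anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                               # protected attribute (never a feature)
zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)  # proxy: agrees with group 80% of the time
income = rng.normal(50, 10, n)                              # an unrelated, legitimate-looking feature

# Historical decisions were biased: group 0 approved 70% of the time, group 1 only 40%.
past_approval = (rng.random(n) < np.where(group == 0, 0.7, 0.4)).astype(int)

X = np.column_stack([zip_code, income])                     # note: 'group' is excluded
model = LogisticRegression(max_iter=1000).fit(X, past_approval)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The model leans on zip_code and so reproduces the disparity it was never "told" about.
```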

Similarly, predictive policing — which uses algorithms to predict where a crime is likely to occur and determine how to best deploy police resources — has been criticized as “enabl[ing], or even justify[ing], a high-tech version of racial profiling.” Predictive policing creates risk profiles for individuals on the basis of age, employment history, and social affiliations, but it also creates risk profiles for locations. Feeding the algorithm information which is itself race- and class-based creates a self-fulfilling prophecy whereby continued investigation of Black citizens in urban areas leads to a disproportionate number of arrests. A related worry is that tying police patrol to areas with the highest incidence of reported crime grants less police protection to neighborhoods with large immigrant populations, as foreign-born citizens and non-US citizens are less likely to report crimes.

These concerns of discrimination and bias are further complicated by issues of transparency. The very function the algorithm was meant to serve — computing multiple variables in a way that surpasses human ability — inhibits oversight. It is the algorithm itself which determines how best to model the data and what weights to attach to which factors. The complexity of the computation as well as the use of unsupervised learning — where the algorithm processes data autonomously, as opposed to receiving labelled inputs from a designer — may mean that the human operator cannot parse the algorithm’s rationale and that it will always remain opaque. Given the impenetrable nature of the decision-mechanism, it will be difficult to determine when predictions objectionably rely on group affiliation to render verdicts and who should be accountable when they do.

Related to these concerns of oversight are questions of justification: What are we owed in terms of an explanation when we are denied bail, declined for a loan, refused admission to a university, or passed over for a job interview? How much should an algorithm’s owner have to say to justify the algorithm’s decision, and what do we have a right to know? One suggestion is that individuals are owed “counterfactual explanations” which highlight the relevant data points that led to the determination and offer ways in which one might change the decision. While this justification would offer recourse, it would not reveal the relative weights the algorithm places on the data, nor would it justify which data points the algorithm considers relevant.
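
To make the idea of a counterfactual explanation concrete, here is a toy sketch. The scoring rule, threshold, and applicant are all invented for illustration; real systems are far more complex, but the shape of the explanation is the same: what would have to change for the decision to flip?

```python
# Toy counterfactual explanation: walk one feature until a (hypothetical) decision flips.
def score(a):
    # invented weights; the applicant never sees these
    return 0.3 * (a["income"] / 10_000) + 0.4 * a["years_employed"] - 0.5 * (a["debt"] / 10_000)

APPROVE_AT = 1.0

def counterfactual(applicant, feature, step, max_steps=1_000):
    """Return the smallest change to `feature` (in units of `step`) that flips the decision."""
    changed = dict(applicant)
    for _ in range(max_steps):
        if score(changed) >= APPROVE_AT:
            return changed[feature] - applicant[feature]
        changed[feature] += step
    return None  # no flip found within the search budget

applicant = {"income": 40_000, "years_employed": 1, "debt": 20_000}
print("approved?", score(applicant) >= APPROVE_AT)               # denied
print("raise income by:", counterfactual(applicant, "income", 1_000))
print("cut debt by:", -counterfactual(applicant, "debt", -1_000))
# The output tells the applicant what would change the outcome ("raise income by X" or
# "cut debt by Y") without disclosing the model's weights or how the factors trade off.
```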

These problems concerning discrimination and transparency share a common root. At bottom, there is no mechanical procedure which would generate an objective standard of fairness. Invariably, the determination of that standard will require the deliberate assignment of different weights to competing moral values: What does it mean to treat like cases alike? To what extent should group membership determine one’s treatment? How should we balance public good and individual privacy? Public safety and discrimination? Utility and individual right?

In the end, our use of algorithms cannot sidestep the task of defining fairness. It cannot resolve these difficult questions, and is not a surrogate for public discourse and debate.

Exploring Intellectual Property Rights with Adam Moore

We all interact with intellectual property on a daily basis, whether consciously or not. On this episode, we talk to intellectual property expert and philosopher Adam Moore to learn about some of the most important ethical issues related to intellectual property. Then, independent producer Sandra Bertin brings us the fascinating story of a fight for collective intellectual property rights in Guatemala.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Adam Moore’s body of work on intellectual property
  2. Intellectual property
  3. You only have legal protection over intellectual property that is fixed in physical form
  4. Some justifications for intellectual property:
  5. Common objections to intellectual property:
  6. Some more objections to intellectual property
  7. Copyright Act of 1976 (term of protection)
  8. Independent producer Sandra Bertin
  9. More on the National Movement of Mayan Weavers
  10. More on Angelina Aspuac

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

The Zeppelin” by Blue Dot Sessions

Clips from “A Comic’s Life Radio” (originally aired on KCAA in Loma Linda, CA Friday, January 22, 2016.)

Are We Loose Yet” by Blue Dot Sessions (sections of this song have been looped)

Lakeside Path” by Blue Dot Sessions

Great Great Lengths” by Blue Dot Sessions

 

“Minibrains” and the Future of Drug Testing

Image of a scientist swabbing a petri dish.

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


 NPR recently reported on the efforts of scientists who are growing small and “extremely rudimentary versions of an actual human brain” by transforming human skin cells into neural stem cells and letting them grow into structures like those found in the human brain. These tissues are called cerebral organoids but are more popularly known as “minibrains.” While this may all sound like science fiction, their use has already led to new discoveries in the medical sciences.

The impetus for developing cerebral organoids comes from the difficult situation imposed on research into brain diseases. It is difficult to model complex conditions like autism and schizophrenia using the brains of mice and other animals. Yet, there are also obvious ethical obstacles to experimenting on live human subjects. Cerebral organoids provide a way out of this trap because they present models more akin to the human brain. Already, they have led to notable advances. Cerebral organoids were used in research into how the Zika virus disrupts normal brain development. The potential to use cerebral organoids to test future therapies for such conditions as schizophrenia, autism, and Alzheimer’s Disease seems quite promising.

The experimental use of cerebral organoids is still quite new; the first ones were successfully developed in 2013. As such, it is the right time to begin serious reflection on the potential ethical hurdles for research conducted on cerebral organoids. To that end, a group of ethicists, law professors, biologists, and neuroscientists recently published a commentary in Nature on the ethics of minibrains.

The commentary raises many interesting issues. Let us consider just three:

The prospect of conscious cerebral organoids

Thus far, the cerebral organoids experimented upon have been roughly the size of peas. According to the Nature commentary, they lack certain cell types, receive sensory input only in primitive form, and have limited connection between brain regions. Yet, there do not appear to be insurmountable hurdles to advances that will allow us to scale these organoids up into larger and more complex neural structures. As the brain is the seat of consciousness, scaled-up organoids may rise to the level of such sensitivity to external stimuli that it may be proper to ascribe consciousness to them. Conscious organisms sensitive to external stimuli can likely experience negative and positive sensations. Such beings have welfare interests. Whether we had ethical obligations to these organoids prior to the onset of feelings, it would be difficult to deny such obligations to them once they achieve this state. Bioethicists and medical researchers ought to develop principles to govern these obligations. They may be able to model them after our current approaches to research obligations regarding animal test subjects. However, it is likely the biological affinity between cerebral organoids and human beings will require significant departure from the animal test subject model.

Additionally, research into consciousness has not yet nailed down the neural correlates of consciousness. As such, we may not know whether a particularly advanced cerebral organoid is likely to be conscious. Either we ought to deliberately slow the development of complex cerebral organoids until we understand consciousness better, or we should pre-emptively treat organoids as beings deserving moral consideration so that we don’t accidentally mistreat an organoid we incorrectly identify as non-conscious.

Human-animal blurring

Cerebral organoids have also been developed in the brains of other animals. This gives the brain cells a more “physiologically natural” environment. According to the Nature commentary, cerebral organoids have been transplanted into mice and have become vascularized in the process. Such vascularization is an important step in the further development in size and complexity of cerebral organoids.

There appears to be a general aversion to the prospect of transplanting human minibrains into mice. Many perceive the creation of such human-animal hybrids (chimeras) as crossing an inviolable boundary between species. The transplantation of any cells from one animal into another, especially human cells (and even more especially human brain cells), may violate this sacred boundary.

An earlier entry on The Prindle Post approached the vexing issues of the creation of human-animal chimeras. It appeared that much of the opposition to chimeras was based in part on an objection to “playing God.” Though some have ridiculed the “playing God” argument as based on “a meaningless, dangerous cliché,” people’s strong intuitions against the blurring of species boundaries ought to influence policies put in place to govern such research. If anything, this will help tamp down a strong public backlash.

Changing definitions of death

Cerebral organoids may also threaten the scientific and legal consensus that defines death as the permanent cessation of organismic functioning, with the criterion for this in humans being the cessation of functioning in the whole brain. This consensus itself developed in response to technologies emerging in the 1950s and 1960s that enabled doctors to maintain the functioning of a person’s cardio-pulmonary system after their brain had ceased functioning. Because of this technological change, the criterion of death could no longer be the stopping of the heart. What if research into cerebral organoids and stem cell biology enables us to restore some functions of the brain to a person already declared brain dead? This would undercut the notion that brain death is permanent and may force us to revisit the consensus on death once again.

Minibrains raise many other ethical issues not considered in this brief post. How should medical researchers obtain consent from the human beings who donate cells that are eventually turned into cerebral organoids? Will cerebral organoids who develop feelings need to be appointed legally empowered guardians to look after their interests? Who is the rightful owner of these minibrains? Let us get in front of these ethical questions before science sets its own path.

Questions on the Ethics of Triage, Posed by a Sub-Saharan Ant

an image of an anthill

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


In a new study published in Proceedings of the Royal Society B, behavioral ecologist Erik Frank at the University of Lausanne in Switzerland and his colleagues discuss their findings that a species of sub-Saharan ants bring their wounded hive-mates back to the colony after a termite hunt. This practice of not leaving wounded ants behind is noteworthy on its own, but Frank and fellow behavioral ecologists note that the Matabele ants (Megaponera analis) engage in triage judgments to determine which injured ants are worth or possible to save–not all living wounded are brought back to the nest for treatment.

Continue reading “Questions on the Ethics of Triage, Posed by a Sub-Saharan Ant”

Do Terminally Ill Patients Have a “Right to Try” Experimental Drugs?

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


In his recent State of the Union speech, President Trump urged Congress to pass legislation to give Americans a “right to try” potentially life-saving experimental drugs. He said, “People who are terminally ill should not have to go from country to country to seek a cure — I want to give them a chance right here at home.  It is time for the Congress to give these wonderful Americans the ‘right to try.’” Though only a brief line in a long speech, the ethical implications of the push to expand access to experimental drugs are worth much more attention.

First, let us be clear on what federal “right to try” legislation would entail. Generally, a new drug must go through several phases of clinical research trials before a pharmaceutical company can successfully apply for approval from the Food and Drug Administration to market the drug for use. Advocates of “right to try” legislation want some terminally ill patients to have access to drugs before they go through this rigorous and often protracted process. Recent legislation in California, for example, protects doctors and hospitals from legal action if they prescribe medicine that has passed phase I of clinical trials, but not yet phase II and phase III. Phase I trials test a drug for its safety on human subjects. Phase II tests drugs for effectiveness. Phase III tests drugs to see if they are better than any available alternative treatments.

Thus, “right to try” is a misnomer. First, these experimental drugs are still expected to meet some safety standards before patients can access them. Second, such legislation would not likely mandate that a pharmaceutical company provide access to its experimental drugs; the company can always deny the patient’s request. Third, these laws do not address cost: insurance plans are unlikely to cover any portion of the expense, and pharmaceutical companies are likely to expect the patient to foot the entire bill.

Ethical debate over “right to try” legislation recapitulates a conflict that regularly occurs in American political debate: to what extent does government intervention to protect public welfare by ensuring that drugs are both safe and effective impede the rightful exercise of a patient’s autonomy to choose for herself what risks she is willing to take? Advocates of expanded “right to try” laws view regulatory obstacles set up by the FDA as patronizing hindrances. Lina Clark, the founder of the patient advocacy group HopeNowforALS, put it this way: “The patient community is saying: ‘We are smart, we’re informed, we feel it is our right to try some of these therapies, because we’re going to die anyway.’” While safety and efficacy regulations for new pharmaceuticals generally protect the public from an industry in which some bad actors might otherwise push untested and unsafe drugs on an uninformed populace, those same regulations can also deny well-informed patients access to drugs that might help them, keeping them from taking reasonable risks to save their own lives. On this view, it is reasonable to carve out certain exceptions from these regulations for terminally ill patients.

On the other hand, medical ethicists worry that terminally ill patients are uniquely vulnerable to the allure of “miracle cures.” Dr. R. Adams Dudley, director of UCSF’s Center for Healthcare Value, argues that “we know some people try to take advantage of our desperation when we’re ill.” Terminally ill patients may be vulnerable to exploitation of their desire to find hope in any possible avenue. Their intense desire to find a miracle cure may prevent them from rationally weighing the costs and benefits of trying an unproven drug. A terminal patient may place too much emphasis on the small possibility that an experimental drug will extend his or her life while ignoring greater possibilities that side effects from these drugs will worsen the quality of the life he or she has left. Unscrupulous pharmaceutical companies who see a market in providing terminally ill patients “miracle cures” may exploit this desire to circumvent the regular FDA process.

The Food and Drug Administration already has “compassionate use” regulations that allow patients with no other treatment options to gain access to experimental drugs that have not yet been approved. The pharmaceutical company still must agree to supply the experimental drug, and the FDA still must approve the patient’s application. According to a recent opinion piece in the San Francisco Chronicle, nearly 99 percent of these requests are granted already. “Right to try” legislation at the federal level would not likely mandate that pharmaceutical companies provide the treatment. Such legislation would likely only remove the FDA review step from the process described above.

Proponents of the current system at the FDA view it as a reasonable compromise between respect for patient autonomy and protections for the public welfare. Terminally ill patients have an avenue to apply for and obtain potentially life-saving drugs, but the FDA review process helps safeguard patients from being exploited due to their vulnerable status. The FDA serves as an outside party that can more dispassionately weigh the costs and benefits of pursuing an experimental treatment, thus providing that important step in the rational decision-making process that might otherwise be unduly influenced by the patient’s hope for a miracle cure.

Disturbing Videos on YouTube Kids: Rethinking the Consequences of Automated Content Creation

"Youtube logo" by Andrew Perry liscensed under CC BY 2.0 (via Flickr)

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


The rise of automation and artificial intelligence (AI) in everyday life has been a defining feature of this decade. These technologies have gotten surprisingly powerful in a short span of time. Computers now not only give directions, but also drive cars by themselves; algorithms predict not only the weather, but the immediate future, too. Voice-activated virtual assistants like Apple’s Siri and Amazon Alexa can carry out countless daily tasks like turning lights on, playing music, making phone calls, and searching the internet for information.

Of particular interest in recent years has been the automation of content creation. Creative workers have long been thought immune to the sort of replacement by machines that has supplanted so many factory and manufacturing jobs, but developments in the last decade have changed that thinking. Computers have already been shown capable of writing sports coverage, with other types of news likely to follow; other programs allow computers to compose original music and convincingly imitate the styles of famous composers.

While these AI advancements are bemoaned by creative professionals concerned about their continued employment — a valid concern, to be sure — other uses of AI hint at a more widespread kind of problem. Social media sites like Twitter and Facebook — ostensibly forums for human connection — are increasingly populated by “bots”: user accounts managed via artificial intelligence. Some are simple, searching their sites for certain keywords and delivering pre-written responses, while others read and attempt to learn from the material available on each respective site. In at least one well-publicized incident, malicious human users were able to take advantage of the learning ability of a bot to dramatically alter its mannerisms. This and other incidents have rekindled age-old fears about whether a robot, completely impressionable and reprogrammable, can have a sense of morality.

But there’s another question worth considering in an age when an ever-greater portion of our interactions is with computers instead of humans: will humans be buried by the sheer volume of content being created by computers? Early in November, an essay by writer James Bridle on Medium exposed a disturbing trend on YouTube. On a side of YouTube not often encountered by adults, there is a vast trove of content produced specifically for young children. These videos are both prolific and highly formulaic. Some of the common tropes include nursery rhymes, videos teaching colors and numbers, and compilations of popular children’s shows. As Bridle points out, the formulaic nature of these videos makes them especially susceptible to automated generation. The evidence of this automated content generation is somewhat circumstantial; Bridle points to “stock animations, audio tracks, and lists of keywords being assembled in their thousands to produce an endless stream of videos.”
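
As a rough, purely illustrative sketch of how little effort that kind of assembly takes (the templates and most keywords below are invented, apart from the tropes the essay mentions), a few short keyword lists already multiply into a sizable catalogue of near-identical video titles:

```python
# Keyword-driven title assembly: small lists, combinatorially large output.
from itertools import product

characters = ["Peppa Pig", "Superhero Dad", "Princess", "Dino Friend"]   # mostly invented
activities = ["Nursery Rhymes", "Learn Colors", "Learn Numbers", "Compilation"]
hooks = ["for Kids", "Funny", "Educational Video", "New Episode"]        # invented

titles = [f"{c} {a} {h}" for c, a, h in product(characters, activities, hooks)]
print(len(titles), "titles from", len(characters) + len(activities) + len(hooks), "keywords")
print(titles[:3])
# 4 x 4 x 4 = 64 titles from 12 keywords; grow the lists and the catalogue explodes,
# with stock animations and audio tracks slotted in the same mechanical way.
```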

One byproduct of this method of video production is that some of the videos take on a mildly disturbing quality. There is nothing overtly offensive or inappropriate about these videos, but there is a clear lack of human creative oversight, and the result is, to an adult, cold and senseless. While the algorithm that produces these videos is unable to discern this, it is immediately apparent to a human viewer. While exposing children to strange, robotically generated videos is not by itself a great moral evil, there is little stopping these videos from becoming much more dark and disturbing. At the same time, they provide a cover for genuinely malicious content to be made using the same formulas. These videos take advantage of features in YouTube’s video search and recommendation algorithms to intentionally expose children to violence, profanity, and sexual themes. Often, they feature well-known children’s characters like Peppa Pig. Clearly, this kind of content presents a much more direct problem.

Should YouTube take steps to prevent children from seeing such videos? The company has already indicated its intent to improve the situation, but the problem might require more than just tweaks to YouTube’s programming. With 400 hours of content published every minute, hiring humans to personally watch every video is logistically impossible. Therefore, AI provides the only realistic means of vetting videos. It does not seem likely that an algorithm will be able to consistently differentiate between normal and disturbing content in the near future. YouTube’s algorithm-based response so far has not inspired confidence: content creators have complained of unwarranted demonetization of videos by overzealous programming, when those videos were later shown to contain no objectionable content. Perhaps it is better to play it safe, but it is clear that YouTube’s system is a long way from perfect at this time.
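
To put the 400-hours-per-minute figure in perspective, a quick back-of-the-envelope calculation (the eight-hour reviewer shift is an assumption added here for illustration) shows the scale a human-only vetting effort would require:

```python
# Scale of the moderation problem, using the upload figure cited above.
upload_hours_per_minute = 400
hours_per_day = upload_hours_per_minute * 60 * 24   # 576,000 hours of new video per day
reviewer_shift_hours = 8                            # assumed full-time shift
reviewers_needed = hours_per_day / reviewer_shift_hours

print(f"{hours_per_day:,} hours of new video per day")
print(f"~{reviewers_needed:,.0f} full-time reviewers just to watch each video once")
# 400 * 60 * 24 = 576,000 hours/day, i.e. roughly 72,000 reviewers at 8 hours each.
```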

Even if programmers could solve this problem, there is a potential here for an infinite arms race of ever more sophisticated algorithms generating and vetting content. Meanwhile, the comment sections of these videos, as well as social media and news outlets, are increasingly operated and populated by other AI, possibly resulting in an internet in which it is impossible for users to distinguish humans from robots (one piece of software has already succeeded in breaking Google’s reCAPTCHA, the most common test used to prove humanity on the internet), and where the total sum of information is orders of magnitude greater than what any human or determined group of humans could ever understand or sort through, let alone manage and control.

Is it time for scientists and tech companies to reconsider the ways in which they use automation and AI? There doesn’t seem to be a way for YouTube to stem the flood of content, short of shutting down completely, which doesn’t really solve the wider problems. Attempting to halt the progress of technology has historically proven a fool’s errand — if 100 companies swear off the use of automation, the one company that does not will simply outpace and consume the rest. Parents can prevent their children from accessing YouTube, but that won’t completely eliminate the framework that created the problem in the first place. The issue requires a more fundamental societal response: as a society, we need to be more aware of the circumstances behind our daily interactions with AI, and carefully consider the long-term consequences before we turn over too much of our lives to systems that lie beyond our control.  

Frankenstein and His Creation: Who’s the Real Monster?

Mary Shelley’s 1818 novel Frankenstein introduced the world to archetypes we’re still familiar with: the mad scientist and his terrifying creation. But the novel is more than just a horror classic. It also asks questions about the ethics of scientific and technological innovation–questions that we still struggle with today.

On this episode, we explore one of these questions: is it wrong for scientists and innovators to work or create in isolation? First, we introduce you to “sociability,” an important, behavior-shaping idea in the scientific community of the nineteenth century. Then, we discuss whether scientists and innovators working today have similar ethical obligations. We cover things like the importance of transparency in the ethics of scientific and technological innovation. We also explore the value of democratic oversight to the world of science and technology.

For this show, we partnered with Indiana Humanities, whose One State, One Story: Frankenstein programming invites Hoosiers to consider how Mary Shelley’s classic novel can help us think about the hard questions at the heart of scientific investigation. One State/One Story: Frankenstein is made possible by a generous grant from the National Endowment for the Humanities. (Any views, findings, conclusions, or recommendations expressed in this program do not necessarily represent those of the National Endowment for the Humanities.)

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Mary Shelley’s Frankenstein
  2. Jason Kelly
  3. Monique Morgan
  4. Mary Shelley’s interest in Luigi Galvani
  5. Grave robbery and body snatching in the nineteenth century
  6. Jean-Jacques Rousseau
  7. John Basl
  8. Bioethics International
  9. WalMart begins selling organic food

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

Partly Sage” by Blue Dot Sessions from the Free Music Archive. CC BY-NC 4.0

As the Creatures Unravel From Within/Vampyr” by thisquietarmy from the Free Music Archive. CC BY-NC-ND 3.0 US

The Three Witches” by tara vanflower from the Free Music Archive. CC BY-NC-ND 3.0 US

Hickory Interlude” by Blue Dot Sessions from the Free Music Archive. CC BY-NC 4.0

Tuck and Point” by Blue Dot Sessions from the Free Music Archive. CC BY-NC 4.0

Beautocracy” by Podington Bear from the Free Music Archive. CC BY-NC 3.0
