
Rights of the Unborn: Personhood After Dobbs


Last month a jailed pregnant woman claimed that her fetus was being illegally detained after she had been charged with second-degree murder during her pregnancy. A writ of habeas corpus was filed on behalf of the unborn child in Florida’s Third District Court of Appeal, arguing that the “unlawful and illegal detention” of the unborn child violated its constitutional rights because the “unborn child had not been charged with any crime of the state.” It was argued that the “draconian confinement” was harmful because the unborn child was held in such proximity to violent criminal offenders. It was further claimed that the unborn child’s rights were violated by the lack of prenatal and medical care within the jail, as well as by its transportation to the jail without air conditioning for an extended period. The petition was dismissed, and the unborn child was not afforded “personhood.”

Since Dobbs v. Jackson Women’s Health Organization, which removed the constitutional right to an abortion, anti-abortion groups have advocated for federal fetal personhood laws. These protections would grant the fetus legal rights and immunities similar to those available to citizens. In doing so, however, the fetus and the mother are treated as separate legal entities. In the most extreme case, an unborn child could possess rights identical to those of a pregnant person.

Georgia’s Living Infants Fairness and Equality (LIFE) Act is the first glimpse into such a world. Under this law, aborting a pregnancy after the sixth week could result in criminal charges, as the state considers a fetus a person at that point. (Georgia recognizes the fetus in census data, allows a fetus to qualify for tax credits, and permits mothers to apply for child support during their pregnancy.) While the Act provides an exception for miscarriages, rulings are sure to be contentious as abortions and miscarriages are treated as medically comparable.

The role of personhood within both ethics and law has been widely debated. Within ethical discussions, personhood is often described as having moral status, which guarantees a right to life and inherent dignity. In the legal system, personhood confers rights, duties, and protections.

The landmark Supreme Court case Roe v. Wade in 1973 provided a constitutional right to abortion. The lawyers representing the state of Texas in Roe argued that a fetus should be entitled to all the protections guaranteed under the Fourteenth Amendment, including a right to “life.” However, the court ruled that the word “person” does not include the unborn. Instead, it was held that personhood could not be granted to a fetus before viability — the point at which a fetus can survive outside the womb. In Dobbs, which overturned Roe, the court declined to disclose its views on fetal personhood: “Our opinion is not based on any view about if and when prenatal life is entitled to any of the rights enjoyed after birth.” This ambiguity, coupled with the new power of states to determine abortion rights, has granted license for all manner of personhood claims to be made regarding unborn children.

What would it mean for pregnant persons if extensive personhood laws were applied to both embryos and fetuses?

Firstly, pregnant persons could be prosecuted for the death or injury of their unborn child. If personhood laws were applied to unborn children and federally recognized, the termination of pregnancies would inevitably result in criminal prosecutions. An individual may be charged with murder or manslaughter for seeking an abortion, or conspiracy to commit murder if they travel to obtain an abortion in another state.

This opens up further questions beyond the scope of abortion. For example, what would happen if an individual had a miscarriage? Would they be charged with attempted murder? Maternal health disparities could allow for certain groups to be unfairly targeted over others. For example, research has shown that Black patients are at a much higher risk of suffering a miscarriage than white patients. Native Americans, some Asians, and some Latinas also experience higher maternal mortality rates compared to white patients. Should extensive unborn personhood laws be recognized, there is a risk that minority groups could suffer discrimination and unfair criminal targeting.

Further, a report from the Centers for Disease Control and Prevention found that a large number of pregnancy-related deaths could be attributed to provider factors, including misdiagnosis and ineffective treatments, as well as system-of-care factors such as lack of coordination between providers. This poses questions about who would be responsible for maternal policing, and how narratives about pregnancy loss would be analyzed.

What about a pregnant person’s day-to-day life? Unborn personhood laws could affect pregnant people who engage in behavior that is seen as harmful, significantly impacting everyday decisions. Failing to provide sufficient prenatal care could expose pregnant persons to prosecution, and child endangerment laws could apply, putting parents at risk of criminal charges. A pregnant individual’s diet, sleep, and exercise, as well as their work and home environment, could be scrutinized for putting unborn life at risk. Undergoing certain medical treatments such as chemotherapy, or taking medications that doctors advise against during pregnancy such as aspirin or antidepressants, may also become illegal. This would make it impossible for pregnant individuals and their healthcare providers to identify all the actions that might threaten criminal prosecution. Ultimately, recognizing robust unborn personhood would significantly curtail the rights of a pregnant person.

Additionally, extensive unborn personhood laws would have broader implications for reproductive health and assisted reproductive technologies. These laws could impact the use of in vitro fertilization or preimplantation genetic diagnosis, as the destruction of some embryos is an inevitable side effect of both fertility processes. Disposing of unused fertilized eggs or discarding embryos that fail to implant could result in criminal charges should the embryo be seen as a person from the moment of conception.

Living in a world where the unborn’s right to life outweighs a pregnant person’s right to decide what happens to their body places considerable limits on a pregnant person’s autonomy. The liability implications of proposed laws and increased surveillance threaten to further foster a blame culture regarding pregnancy — judgments especially unfair towards pregnant persons who suffer poor maternal outcomes through no fault of their own. Abortion care is now heavily regulated in many U.S. states, and the prospect of affording the unborn full personhood status adds another terrifying dimension to the way reproductive healthcare is controlled in the United States.

Another Way: Moral Reasoning and Selective Focus


Imagine, with me, a scenario.

You and I are medical students working in an intensive care unit. Over the course of the last few weeks, we’ve been caring for Mrs. Rhodes: a kind woman who, despite her advanced years, has retained her sharp humor and optimism through an incredibly severe bout of pneumonia. In the last few days, however, her condition has rapidly deteriorated, and the decision was made to place Mrs. Rhodes on a form of heart-lung bypass known as ECMO: though her chances are slim, we hope that, by decreasing the burden on her heart and lungs, we will buy her time for the infection to pass and her lungs to heal.

The problem, though, is that Mrs. Rhodes is not the only patient with this pneumonia: since Mrs. Rhodes’ admission to our unit, the causative infection has spread throughout the nursing home where she was being cared for. Over the past few weeks, therefore, our service has become overrun with ill seniors: many of whom just require supportive care, and some of whom, like Mrs. Rhodes, require invasive life support. As of this morning, Mrs. Rhodes has continued to deteriorate, and all five of our ECMO machines are now occupied.

Later that night, as we hope that none of our patients will deteriorate further, we receive a call from a resident obstetrician. A young pregnant woman has just been seen in the emergency room. She’s eclamptic, and is showing early signs of cardiopulmonary failure: without some form of life support, both she and her fetus are at risk. The obstetrician asks if any ECMO machines are available.

We, then, are faced with a dilemma: do we allow this young woman and her fetus to die, or do we reallocate one of our ECMO machines, and allow one of our elderly patients to die in the process? We look down our patient panel, and see Mrs. Rhodes’ vital signs and laboratory results: it’s clear that her chances of recovery are dim to pitch dark. But is that grounds to sentence her to almost certain death? The ethical quicksand begins to surround us. Does it matter that we can save the lives of the mother and her fetus if we reallocate the machine? Does it matter that Mrs. Rhodes is older? Or is it ageist for us to even think so? Is Mrs. Rhodes entitled to the machine, given that she’s currently attached to it? Does it matter if Mrs. Rhodes is a leader in our community? If the mother is an alcoholic? If Mrs. Rhodes smokes?

Who gets a chance to live, and who doesn’t?

.  .  .

Ethical dilemmas are frequently framed by limitations; in fact, it is often the limitations themselves which give a dilemma moral weight. We are finite beings with finite options, capabilities, and resources, and this simple fact undergirds many of the dilemmas which we might encounter. In the case of Mrs. Rhodes, for example, our dilemma arises from a limited number of lifesaving ECMO machines: and appreciating this limitation, we turn our moral attention to the question of how to fairly and equitably allocate our limited resources.

Real-life ethical dilemmas, however, do not exist in a vacuum: they are inexorably dependent on the context in which they occur, on the socioeconomic, political, and philosophical systems which surround us.

These systems are causally connected to what happens, and often determine who it happens to; and dilemmas, in all of their alluring simplicity, can lead us to believe that these limitations are simply facts, and not the result of the unnamed systems which brought the dilemma about.

We may debate, for example, the merits of reallocating the ECMO machine or not; we may spill a significant amount of ink over what the moral thing to do is in such a case. Such a focus on the dilemma at hand, however, obscures the role of the systems which brought the dilemma about: we forget that the choice to buy five ECMO machines, and not six, was a choice. People gathered in a (likely exquisitely furnished) conference room, and decided that the procurement of five ECMO machines provided the greatest benefit at the least cost. When we debate and decide if Mrs. Rhodes or the young mother should get the ECMO machine, then, we are truly debating who must die to maintain the financial status quo — a reality which the dilemma, as presented above, wholly obscures. With this perspective, we see that a valid ethical analysis of the case of Mrs. Rhodes is that the time for moral action was in that conference room, and that a hospital which cannot care for a nursing home of sick seniors is negligently underprepared.

One could very well argue that when we get to the point where we are deciding who dies a preventable death for the maintenance of an unjust status quo, we can no longer claim that what we’re doing is truly moral in any meaningful sense. But when we only ask the question posed by a dilemma — in this case, who should get the ECMO machine — we fail to ask these broader questions, and we fail to see the full scope of the problem before us.

The same goes for similar dilemmas which we might encounter. Rare diseases go unresearched and untreated due to economic calculus, but what if non-economic values could guide how research priorities are set? We might ask questions about how to best allocate our individual resources to various charities, but what if individual philanthropy itself, in its modern form, frequently propagates injustice? We can ask questions about the dilemmas which are posed to us, but we must also ask questions of the systems which impose the limitations that inform these dilemmas.

Some dilemmas arise from limitations that are inherent to our nature: we cannot be in two places at once, and we cannot live forever. But, with disturbing frequency, the limitations which inform real-life ethical dilemmas are artificial: they are the contingent result of the decisions which we, as individuals and as societies, make, and though their role might be obscured, these decisions are relevant to how we choose to ethically resolve the dilemmas we face. Understanding ethical decision-making requires attention to these choices and the systems and beliefs which arise from them — attention to the things which, in the dilemmas we face, may lead us another way.

Media Criticism and Healthy Skepticism


In a recent article in The Conversation, Professor Michael Socolow argues that distrust in the media is, in fact, valuable for a democracy. To make his argument, he presents historical cases of politicians criticizing media outlets, along with examples of journalists and their publishers damaging their own credibility by knowingly putting out materials that were manipulated, fabricated, or outright false. Socolow’s point seems to be two-fold: that political figures encourage citizens to distrust the media, and that journalists may invite this by engaging in unscrupulous behavior. He then notes that only in authoritarian regimes would we see citizens unwilling to express skepticism towards the media. As a result, Socolow concludes, “measured skepticism can be healthy and media criticism comprises an essential component of media literacy – and a vibrant democracy.”

Socolow is correct, but in an uninteresting way. Frankly, I am unsure who he is arguing against. Few, if any, think we ought to trust every story in every outlet. But, simultaneously, we should not think there is a monolithic, perpetually untrustworthy “media.” Socolow gestures towards this middle ground when he mentions “measured skepticism” in his conclusion. Yet he fails to give any account of what this looks like.

Further, I worry that Socolow’s discussion implicitly sends the message that any criticism is legitimate and healthy. The article opens by noting that being “anti-media” has become part of the Republican political identity, and mentions media criticism by politicians like Donald Trump. But surely some of the criticisms are irresponsible. Socolow also discusses Lyndon Johnson challenging accurate reporting on the Vietnam war. He follows these clearly truth-indifferent and politically-motivated media criticisms with cases of fraudulent behavior by media outlets, such as Dateline rigging GM trucks with explosives during a story on potential safety hazards.

However, there is no differentiation between the bad-faith criticisms and criticisms driven by legitimate misdeeds by members of the media. Socolow treats both as explaining why people might distrust the media, without any explanation of whether we ought to accept both sorts of critique as legitimate.

I think it is worthwhile to spend time considering what measured or healthy skepticism looks like. I cannot give a full account here; that’s a philosophical project on its own. Nonetheless, I hope that some preliminary reflection will help us determine what does and does not contribute to democratic society.

Aristotle famously argued that the virtues – admirable character and intellectual traits, the possession of which makes for an ideal person – are a middle ground or mean between an extreme of excess and an extreme of deficiency. For instance, most would say bravery is a virtue. Suppose that, after initially hearing of Russia’s invasion of Ukraine, I, with no military training or combat experience, bought an AR-15 and booked a flight to Europe to travel to the front lines. We would not call this behavior brave. I am showing an excess of what bravery requires, being too willing to risk my safety to fight against injustice, which crosses the line into recklessness. Conversely, one might fall short of bravery through cowardice. Standing by as an old woman’s purse is stolen, because I was afraid of what might happen to me, shows a deficiency in my willingness to face danger. We might apply the same analysis to skepticism. One may be too skeptical, or not skeptical enough. The virtue of healthy skepticism lies in the middle of these extremes.

We might start our discussion of healthy skepticism by asking: what does it mean to be skeptical? To be skeptical of something is to doubt it. But what of being skeptical in general? A skeptical person tends to doubt and scrutinize something before accepting it as true.

With Aristotle’s view in hand, we can then say that a healthy skeptic submits claims to an appropriate level of doubt before accepting them. And to determine what an “appropriate” level of doubt is, we may need to first consider what an inappropriate amount looks like.

In Meditations on First Philosophy, René Descartes engaged in a kind of skepticism some now call methodological doubt. Descartes attempted to systematically question each of his beliefs, and rejected all those which he was capable of doubting. Indeed, Descartes goes so far as to (temporarily) reject the belief that he had hands or even a body. This is because he could doubt these things – perhaps he was a sleeping spirit who was only dreaming that he had a body. In Descartes’ view, the fact that he could doubt a belief undermined his justification for it.

Philosophers, at least until Gettier, viewed knowledge as justified true belief. Justified means that the belief has good support – there’s strong evidence behind it, like data or a logical proof. Belief is accepting something as true. Further, something is true when it obtains in reality.

Of course, Descartes’ skepticism seems extreme. The mere fact that something could possibly be wrong does not mean that belief in it is unjustified. As a result, his skepticism appears exaggerated. This would be like refusing to trust any story in any media outlet simply because members of the media have at some point lied. It is true that any given story could be fabricated; but that does not mean we should treat all of them as fabricated.

What is the appropriate level of scrutiny to apply to stories in the news if Cartesian doubt goes too far?

Ultimately, we have to consider which factors could cause or motivate a media outlet to run a false or inaccurate story (or even refuse to cover a particular story), and weigh those against considerations that support the veracity of the reporting.

When criticizing media in the U.S., we have to keep in mind that, with a few exceptions, media outlets are privately owned. Their goal is to attract viewers, listeners, and/or readers willing to pay a subscription or view an ad in order to make money. This may sometimes affect their coverage. They may be less inclined to report on the misdeeds of their advertisers. Further, to attract a specific demographic, a news outlet may adapt its coverage and tone to cater to a particular kind of audience. It may also pursue a “scoop” – breaking a unique story first might increase viewership in the future. (Hence Dateline’s willingness to explode GM trucks, despite angering a potential advertiser.) Each of these factors may shape what outlets are willing to report and the slant of their coverage.

Further still, reports are often created by individuals or a small team. These individuals have private interests – regularly writing reports which drive audience engagement will advance their careers. They may have personal connections to the subject matter which bias their reporting in some way. A healthy skeptic understands that the news is, ultimately, produced by people, not published out of the ether. We must keep in mind what both individuals and organizations stand to gain from our acceptance of a particular story before we place our trust in their reports.

So, what reasons would weigh in favor of trusting a report in the media?

I cannot give a comprehensive list here; instead, I can offer a few criteria. First, a consensus in reporting on an event provides further justification for accepting a story. The more outlets covering the same story, and deriving similar conclusions about it, the more justified we are in accepting it. Second, the extent to which reporting is consistent with other facts and accounts affects the justification of our believing it. The more easily all the information fits together, the more likely it all is to be true. An aberrant report which claims other commonly reported stories are false is itself likely to be false. Third, reports which are falsifiable are more trustworthy than those which are not. If a media outlet claims that something which could be proven wrong is true, then it is putting its credibility on the line if the report is false. This risk indicates a certain confidence in the judgment. Further, claims which are not falsifiable are typically not worthy of acceptance; the fact that you cannot prove with certainty there isn’t a secret shadow government does not show that we should believe that such a government does indeed exist.

A healthy skepticism towards media outlets, overall, involves a complex set of attitudes and behaviors. To be a healthy skeptic, one should regularly ask who benefits and how. Who stands to gain from presenting this particular story in this particular way? Whose interests are served by remaining silent about a particular event?

Further, a healthy skeptic remembers that all private media outlets are for-profit organizations that rely on advertising, and that even public media companies are often funded by governments. These interests shape their coverage. Someone who adopts an attitude of skepticism – an attitude indeed vital to a well-functioning democracy – does not view “the media” as a monolithic entity, nor do they view the same few outlets as unerringly trustworthy. Instead, they consider each story for what it is: an act of discretion – a specific report published for an intentional reason from a particular point of view. And perhaps most importantly, a healthy skeptic will submit criticisms of the media by public officials and authority figures to the same demanding level of scrutiny.

Moral Education in an Age of Ideological Polarization: Teaching Virtue in the Classroom


The Program for Character and Leadership at Wake Forest University was recently awarded $30.7 million by Lilly Endowment Inc. to create a national higher education network focused on virtue formation. Approximately $7 million will go towards further strengthening the program at Wake Forest, while $23 million will be earmarked for funding initiatives on character at other colleges and universities.

While this project is a big win for Lilly, which supports “the causes of community development, education and religion,” it also raises pressing questions about the role of the moral virtues within higher education. In the wake of the Unite the Right Rally in Charlottesville, Virginia, professor Chad Wellmon wrote in The Chronicle of Higher Education that the University of Virginia could not unambiguously condemn the demonstrations. This is because universities, Wellmon wrote, “cannot impart comprehensive visions of the good,” making them “institutionally incapable of moral clarity.” On Wellmon’s view, universities should focus solely on the life of the mind, leaving profound moral questions to churches, political affiliations, and other civic organizations.

Supporting this vision of the university, many conservatives have complained that higher education is insufficiently neutral when it comes to moral and political values. In rejecting courses on Black history deemed to lean too far left, Florida Governor Ron DeSantis claimed that citizens “want education, not indoctrination.”

If higher education ought to remain neutral and eschew a deep moral vision, however, then how is it possible for universities to stay true to their mission while, like Wake Forest, simultaneously engaging in character education?

One thing that can be said is that institutions of higher education already do engage in virtue education. Due to their commitment to help their students think well, colleges and universities encourage their students to be curious, open-minded, and intellectually humble. As even Wellmon acknowledges, forming the life of the mind requires robust intellectual virtues, including “an openness to debate, a commitment to critical inquiry, attention to detail, and a respect for argument.”

Along with these intellectual virtues, higher education also supports a number of civic virtues as well. Because colleges and universities are tasked with preparing students to be responsible citizens, they often aim at promoting civility, tolerance, and civic engagement. These virtues equip graduates to contribute within liberal democracies, coupling their intellectual development with civic preparation.

The obvious objection to these examples is that the virtues in question are not moral virtues. Intellectual and civic virtues may be well within the purview of higher education, but should professors really take it upon themselves to teach compassion, courage, generosity, integrity, and self-control?

While these virtues might seem strange in the context of the modern university, it is interesting to note that higher education does emphasize at least one moral virtue – the virtue of honesty. Regardless of the institution, academic honesty policies are ubiquitous, forbidding cheating, plagiarism, and other forms of academic dishonesty. We have, then, at least one obvious example of a moral virtue being promoted at the university level. If the moral virtues generally seem so out of place at colleges and universities, then why does honesty get a pass?

The intellectual virtues find their place within the academic world because of the ways they promote the mission of higher education. The flourishing life of the mind requires the intellectual virtues, and so there are no complaints when professors help students form their intellectual characters.

But honesty also plays an important role in thinking well. If, every time a student encounters an intellectual challenge, they turn to cheating or plagiarism, they are missing out on an opportunity to do the difficult work of developing the intellectual virtues. Academic dishonesty short-circuits their ability to grow in the life of the mind, making it important for instructors to not only encourage the intellectual virtues, but to guide students towards honesty as well.

From this we can see that, while universities do not typically engage in moral education, this is not because they must always remain neutral on moral issues. Instead, universities simply do not see the other moral virtues as necessary for their mission.

But such an omission is not always well-motivated, as there are many moral virtues that are integral to the goals that universities have for their students. Consider, for example, the goal of helping students prepare for careers post-graduation. While employers might be looking for candidates that are open-minded and intellectually curious, they likely also hope to hire professionals with honesty, integrity, and self-control. Employers want doctors who are compassionate, professors who are humble, and lawyers who are just.

If college presidents, deans, and provosts see it as part of their mission to prepare students for the working world, then there is a place for character formation on campus. While some may contend that job training is not the most important mission of the university, it is nevertheless a significant one, making the task of developing morally virtuous teachers, nurses, and engineers a central mission of higher education.

This emphasis on moral virtue, of course, still allows universities to leave space for students to develop their own visions of what a good and meaningful life might look like. Emphasizing the moral virtues does not require compromising the ideological neutrality necessary for a diverse and challenging university experience. Instead, emphasizing character can only deepen and strengthen what higher education has to offer, teaching students to not only be good thinkers, but to be good people as well.

Ethical Dilemmas in Lunar Mining and Colonization


This week the European Space Agency made a proposal that the Moon get its own time zone. Currently the Moon has no specific time, with the recorded time coinciding with that of the nation that launched the mission. However, there has been a steady increase in interest in the Moon, with Japan, India, China, the United Arab Emirates, and the United States all sending probes of various sorts. With plans by the U.S. to send a crew to the Moon by 2025 and China by 2030, it has been argued that there is a growing need to create a standardized Moon time in order to make coordination and cooperation amongst various nations (and corporations) easier. As we take yet another step closer to some sort of occupation, sticky questions and daunting concerns regarding lunar colonization abound.

Clearly, a space race will only hasten human efforts to colonize the Moon. It was recently reported that NASA is increasing its efforts to mine metals and locate fuel on the Moon — a response, in part, to China’s lunar mining efforts. NASA administrator Bill Nelson recently warned that China could establish a foothold on the Moon and attempt to dominate the most resource-rich locations and exclude other nations. (There are, for instance, only a few areas near the south pole of the Moon that are thought to be adequate for harvesting water.) Also, there are concerns that Chinese lunar infrastructure could be used to interfere with communication. Given this, it may not be long before a permanent occupation of the lunar surface – with equipment and infrastructure – begins.

A number of scholars have made the argument that we have an ethical obligation to begin colonizing space. They note issues like overpopulation and the destruction of natural resources on Earth as reasons we need to begin looking elsewhere to live. Gonzalo Munevar has argued that we have an obligation to colonize space as a means of preventing the extinction of life on Earth and to make deflecting asteroids easier. I’ve also previously mentioned Michael Mautner’s argument that we are obligated to “plan for the propagation of life.” Nevertheless, there are a host of concerns with these propositions.

Many of these arguments are environmental in nature. For example, in the discussion about the colonization of Mars, scholars like Linda Billings have argued that it would be wrong to contaminate a potentially habitable planet and to transport life to it. Of course, the Moon does not have life, nor is it thought to be capable of supporting it. Still, some argue that celestial objects such as the Moon or asteroids do constitute an environment to which we may have certain ethical obligations. A paper by Daniel Pilchman outlines an argument drawing on W. Murray Hunt’s “Are Mere Things Morally Considerable?” Pilchman questions whether valuing only life is morally arbitrary and argues that we ought instead to value existence itself. If this is correct, it’s possible we should consider the Moon’s right to exist as it is. Further, Pilchman considers the possibility that our desire to mine asteroids (and by extension the Moon) constitutes a failure of virtue on our part. The best people, so the argument goes, are those who live with a sense of awe, reverence, and care towards celestial objects rather than seeing them merely as means to our ends.

Unfortunately, these arguments may very well come to naught. Vast investments have been (and are being) made, and with competition forming between the private sector and rival nation-states, some form of lunar colonization seems inevitable.

The most practical question might not be whether we should colonize the Moon, but how we should go about it – what is the most ethical way we can begin to plan, and what are the worst outcomes to avoid?

For example, what justifications should we require if someone stakes a claim to a resource-rich region of the Moon? The Outer Space Treaty of 1967 declares that no nation may claim ownership of the Moon, and the Moon Treaty of 1979 forbids harvesting resources on the Moon. But these treaties may be well past their due date.

Pilchman notes that territorial claims to regions of space could follow the “original appropriation argument” which takes its lead from John Locke. According to Locke, to claim ownership of “the commons” someone must mix their labor with the land, leaving enough for others and only appropriating what can be reasonably used without it going to waste. So long as a nation or corporation invests time and effort and doesn’t go overboard in appropriation, they would be morally justified in claiming parts of the Moon as their private dominion.

We should, however, be concerned about what a system like this incentivizes space actors to do.

Despite the “no spoilage” proviso, it would incentivize a “first dibs” situation where the first groups to the Moon claim the best spots. As a recent Bloomberg article puts it, “The advantage extends beyond first dibs on what there is to be dug up. There’s also a role in establishing norms and precedents for how space operations should be conducted.” Not only would this create an inequitable situation for nations that can’t afford lunar excursions, it would also incentivize a space race that could see groups aggressively protecting their stakes.

While international treaties ban the use of weapons of mass destruction in space, there are already serious concerns about the effects that warfare would have on space, including the destruction of GPS networks. “Outer space is not a wrestling ground,” said a spokesperson for the Chinese Embassy, “The exploration and peaceful uses of outer space is humanity’s common endeavor and should benefit all.” Nevertheless, Chinese satellites have already been designed with grappling arms capable of moving other satellites in orbit. The more infrastructure exists in space, and the greater the investment it represents, the greater the incentive will be to weaponize space to protect it. This could quickly escalate in ways that we might not want if we don’t get out ahead of these issues of ownership now. Similarly, the mining of resources from outer space has the potential to profoundly affect the economy of Earth. For example, one asteroid has been valued at $700 quintillion. It isn’t hard to imagine the economic upheaval that large-scale mining could bring about.

If we don’t take steps to forestall the worst possible consequences now, we may find that geopolitics shifts in ways we can no longer anticipate or control. Many will point out the troubling ethical similarities between colonialism on Earth and colonialism in outer space, and with that in mind it’s worth considering the colonial era’s own efforts to avoid warfare, like the Berlin Conference. This is not only because many of the imperial ambitions of the time were primarily focused on resource extraction, but also because of the grave risk of geopolitical conflict. If we don’t start making decisions about these issues now, we may find that our ethical choices are far more limited later.

ChatGPT and Emotional Outsourcing

Plenty of ink has been spilled concerning AI’s potential to plagiarize a college essay or automate people’s jobs. But what about writing that’s meant to be more personal?

Take for example the letter Vanderbilt sent to their students after the shooting at Michigan State University. This letter expresses the administration’s desire for the community to “reflect on the impact of such an event and take steps to ensure that we are doing our best to create a safe and inclusive environment.” It was not written by a human being.

The letter was written by an AI tool called ChatGPT, which is a user-friendly large language model (LLM). Similar to predictive text on your phone, ChatGPT is trained on a large body of text to produce sentences by selecting words that are likely to come next.
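The “predict the likely next word” idea can be made concrete with a toy sketch. This is purely illustrative (the corpus, names, and method here are all hypothetical); real LLMs like ChatGPT use neural networks trained on enormous bodies of text, not simple word-pair counts, but the core mechanism is the same:

```python
from collections import defaultdict, Counter

# Toy "predictive text": count which word tends to follow which
# in a tiny corpus, then emit the most likely next word.
corpus = "we offer our deepest sympathies and we offer our support".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # tally each observed word pair

def next_word(prev: str) -> str:
    """Return the word most likely to follow `prev` in the corpus."""
    return following[prev].most_common(1)[0][0]

print(next_word("we"))     # "offer" always follows "we" here
print(next_word("offer"))  # "our"
```

Chaining such predictions from a seed word produces fluent-looking but entirely derivative text, which is the sense in which a model can “write” a condolence letter without understanding or feeling anything.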

Many people were upset to learn that Vanderbilt’s letter was written using ChatGPT — so much so that the administration issued an apology. But it’s not clear what exactly was worth apologizing for. The content expressed in the original letter was not insincere, nor was it produced illegally. Nothing about the wording was objectionable.

This case raises questions about tasking AI with what I’ll call emotional writing: writing that is normally accompanied by certain emotions.

Examples include an apology, an offer of support, a thank you note, a love letter. What exactly is the source of unease when a human being off-loads emotional writing to an AI model? And does that unease point to something morally wrong? When we consider a few related cases, I think we’ll find that the lack of a human author is not the main concern.

Let’s start by noting that the normal writing process for a university letter is similar to the process ChatGPT uses. Normally, someone within the administration might be asked to write the first draft. That person researches similar letters, using them as a guide. This draft is then vetted, edited lightly as necessary, and sent to the campus community. It’s natural to think that the main difference is that there’s a human at one end of the process in the normal case, and not (or not really) in the ChatGPT case.

Will any human do? Consider other cases where emotional writing is done by someone outside the situation. A high schooler gets their mom to write an apology for them. A university pays a freelancer to express sympathy for its students. A man with no game hires Will Smith to tell him what to say to his crush. In these cases as well, the recipient of the speech might reasonably be disappointed to discover the source of the words.

These considerations suggest that what’s objectionable in the AI case is not specifically the lack of a human author. The problem is that the author is not bound up in the relationship for which the words are written.

What all these cases have in common is that they involve emotional outsourcing: someone avoiding an emotional task by giving it to someone (or something) else. In these cases, the deeply personal writing becomes a kind of mercenary task.

Surprisingly, even having the right person write the text may not be enough to avoid this problem! Suppose someone writes a love letter to their romantic partner, and after their breakup reuses the letter by sending it to someone new. I would be peeved. Wouldn’t you? The emotional work has been done by the right person, but not with the right aim; not with the current recipient in mind. The work has been outsourced to the writer’s prior self.

There are a couple of aspects of emotional outsourcing that might seem problematic. First, outsourcing emotional writing draws attention to the fact that much of our communication is socially scripted. If even a well-trained computer model can perform the task, then that task is shown to be formulaic. In a society that prizes individuality and spontaneity as signs of authenticity, relying on a formula can seem subpar. (Consider how you might react if a person used a template for a letter of condolences: “Dear [recipient], We offer our [sincerest / most heartfelt / deepest] [condolences / sympathies] in the wake of the [tragedy / tragic event / tragic events / atrocity] of [month, day].”)

I think objecting to this feature of emotional outsourcing is a mistake. Social scripts are to some extent unavoidable, and in fact they make possible many of the actions we perform with our speech. The rule not to draw attention to the script is also ableist, insofar as it disadvantages neurodivergent people for whom explicitly acknowledged social scripts can be more hospitable. While drawing attention to the formulaic nature of the communication is a taboo — and that partly explains people’s disapproval of emotional outsourcing — that’s not enough to make emotional outsourcing morally objectionable.

The second issue is more problematic: emotional outsourcing misses some of the action behind the speech that gives the speech its meaning. Language not only means things; it also does things. A promise binds. A statement asserts. An apology repairs. (Often the action speech performs is limited by what is taken up by the audience. I can say “I do” as often as I’d like, but I haven’t married someone unless that person accepts it.)

Emotional writing performs specific actions — consoling, thanking, wooing — not only through the words it uses. It also performs those actions in part through the act that produces those words.

Writing out a thank you note is itself an act of appreciation. Thinking through how to express care for your community is itself an act of care. Putting words to your love is itself an act of love.

Part of what makes the words meaningful is lost when those prior actions are absent — that is, when someone (or something) else produces them. People often say with respect to gestures of kindness, “it’s the thought that counts.” When ChatGPT is used for emotional writing, at least some of that thought is missing.

Keeping these issues in mind, it’s worth asking whether outsourcing emotional writing to AI is entirely bad. Thinking deeply about grief can put people in a challenging place emotionally. It could trigger past trauma, for example. Could it be a mercy to the person who would otherwise be tasked with writing a sympathy letter to leave the first draft to an LLM that feels nothing? Or is it appropriate to insist that a human feel the difficult emotions involved in putting words to sympathy?

There may also be cases where a person feels that they are simply unable to express themselves in a way that the other person deserves. Seeking outside help in such a case is understandable — perhaps even an act of care for the recipient.

I have argued that emotional outsourcing is an important part of what people find objectionable about tasking AI with emotional writing. Emotional outsourcing draws attention to the formulaic nature of communication, and it can mean missing out on what counts. However, much remains to be explored about the moral dimensions of emotional outsourcing, including what features of a case, if any, could make emotional outsourcing the best choice.

Nueva Pescanova and the Ethics of Octopus Farming

photograph of octopus underwater

The more intelligent an organism is, the more issues come with its captivity and, specifically, with its farming. Few lament carrot cultivation because vegetables are unintelligent and cannot suffer. Insects are more challenging, as they respond to stimuli; still, there is doubt whether they possess the biological mechanisms required to feel pain in any meaningful sense. Domestic mammals are an even more significant challenge, as their biology resembles ours enough to cast doubt on whether breeding and slaughtering them for food is permissible. Even more problematic are great apes, which, while not commonly bred for consumption, present severe challenges regarding humane treatment and enrichment in captivity. Finally (and hopefully only theoretically), farming humans is strictly prohibited both morally and legally (putting aside the fact that we shouldn’t harm one another in the first place). As Jeremy Bentham put it more succinctly in An Introduction to the Principles of Morals and Legislation, when considering how to treat others, be they human or otherwise, “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?”

This link between intelligence and farming underpins the outrage by animal rights groups at Nueva Pescanova’s plans for the world’s first octopus farm in Spain’s Canary Islands. Demand for such a facility certainly exists; many around the globe consider octopus a delicacy. But, until recently, successful octopus breeding had proven impossible to achieve commercially. Wild-caught octopuses were the only source. However, the company announced in 2019 that it managed to overcome the traditional hurdles that had prevented octopus breeding and was ready to proceed. As animal breeding is generally more straightforward and profitable than hunting and fishing, the venture stands to make Nueva Pescanova a lot of money.

But, unlike carrots, octopuses are not passive organisms unable to suffer. In fact, scientists typically consider octopuses highly intelligent, demonstrating a remarkable capacity for problem-solving and deep curiosity. For example, in 2009, workers at the Santa Monica Pier Aquarium arrived to find that one of their octopuses had redirected 200 gallons of seawater from its tank to the floor outside. They’re even capable of unscrewing jars to gain access to food within, which they complete faster each time scientists present them with the challenge. Indeed, as the 2020 hit film My Octopus Teacher revealed to many, octopuses are vastly complex organisms that play, mimic, and learn.

It is this capacity for intelligence and curiosity – one so remarkable that octopuses are the only invertebrate protected by the U.K.’s Animal (Scientific Procedures) Act of 1986 – that has led to the outrage about the planned octopus farm. According to a report by Eurogroup For Animals and Compassion in World Farming, the proposed facility will house and kill over a million octopuses yearly. This is a pretty horrifying statistic (although it pales compared to the roughly 890,000 cattle slaughtered daily in 2019). But, the conditions in which the octopuses will live and die give that number an even grimmer context.

First, according to the report, workers will kill the octopuses by submerging them into an ice slurry of around -3°C. Unfortunately, when used to kill fish, this method results in a slow and painful death (for more on the ethics of fish consumption, see The Prindle Post article on The Feelings of Fish). There is little reason to think this would be any different for octopuses. Indeed, given the octopuses’ remarkable cognitive capabilities, which exceed that of most fish, there’s reason to believe such a death would be even more agonizing.

Second, octopuses are, for the most part, solitary creatures. They prefer to live alone and only interact with others of their species at specific moments (like when mating). However, housing each octopus separately would be logistically and financially impossible at a commercial farm. So, Nueva Pescanova plans to keep its stock grouped in multiple tanks, with roughly ten to fifteen octopuses per cubic meter. For a solitary species, this is a recipe for a poor quality of life, and it runs the risk of leading to cannibalism. So, not only will they be housed amongst others of their species – conditions they did not evolve for – but they’ll also have to contend with the risk of predation.

Third, Nueva Pescanova plans to keep the octopuses under 24-hour light to enhance captive females’ breeding capacity. Of course, this would be uncomfortable and likely traumatic for any number of creatures. Still, the prospect is practically hellish for octopuses, which spend much of their time in the dark and can sense light via receptors in their many arms.

These are just a sample of the concerns the farm raises, but they paint a pretty unpleasant picture: a solitary, intelligent species forced into intimate proximity with others of its kind for its entire life, under the gaze of 24-hour lights, until it reaches harvestable size and is dunked into sub-zero water to die. The company has acknowledged these worries and claims it will work to mitigate them. However, it is hard to see how Nueva Pescanova can accomplish this when the welfare concerns stand in such stark contrast with the company’s proposed operating practices. And, if traditional agriculture and farming practices are any example, we can expect animal welfare to take a backseat to monetary interests.

In determining our obligations and responsibilities to others, Bentham asks us to consider whether an organism can suffer. If so, then we owe that creature the rights traditionally reserved for humans. So, would we feel comfortable treating humans in this way? The answer (hopefully) is no. If octopuses can suffer in a way that, while not identical to us, is at least comparable, then we have to ask whether such farming should be allowed.

While overwhelmingly dark, this story has a thin sliver of light. There are already bills progressing through Washington state’s House to prevent similar farms from being established there. While House Bill 1153 focuses on the environmental impacts (which is a good thing to focus on), it does make some allusion to the horrors that await farmed octopuses. Sadly, however, while this does offer some hope, it will come as cold comfort to those octopuses that could eventually be farmed in inhumane conditions around the rest of the world.

Ultimately, in the face of today’s all-consuming capitalistic practices, the question isn’t whether animals can suffer but whether their suffering can be made profitable.

What’s Wrong with AI Therapy Bots?

image of human and chatbot dialog

I have a distinct memory from my childhood: I was on a school trip, at what I think was the Ontario Science Centre, and my classmates and I were messing around with a computer terminal. As this was the early-to-mid 90s the computer itself was a beige slab with a faded keyboard, letters dulled from the hunt-and-pecking of hundreds of previous children on school trips of their own. There were no graphics, just white text on a black screen, and a flashing rectangle indicating where you were supposed to type.

The program was meant to be an “electronic psychotherapist,” either some version of ELIZA – one of the earliest attempts at what we would now classify as a chatbot – or some equivalent Canadian substitute (“Eh-LIZA”?). After starting up the program there was a welcome message, after which it would ask questions – something like “How are you feeling today?” or “What seems to be bothering you?” The rectangle would flash expectantly, store the value of the user’s input in a variable, and then spit it back out, often inelegantly, in a way that was meant to mimic the conversation of a therapist and patient. I remember my classmate typing “I think I’m Napoleon” (the best expression of our understanding of mental illness at the time) and the computer replying: “How long have you had the problem I think I’m Napoleon?”
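The mechanism described above — capture the input in a variable, then splice it into a canned frame — can be sketched in a few lines. This is a hypothetical reconstruction for illustration, not ELIZA’s actual source code, which used more sophisticated keyword and pattern matching:

```python
# Hypothetical reconstruction of the "store the input, spit it back"
# behavior described above -- not ELIZA's actual code.
def electronic_therapist(user_input: str) -> str:
    # Naively splicing the raw input into a fixed template produces
    # exactly the kind of awkward reply quoted above.
    return f"How long have you had the problem {user_input}?"

print(electronic_therapist("I think I'm Napoleon"))
# -> How long have you had the problem I think I'm Napoleon?
```

The real ELIZA at least transformed its input with keyword rules (swapping “I” for “you,” for instance), which is why its replies could occasionally sound uncannily natural — and why its failures, like the one above, were so memorable.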

30-ish years later, I receive a notification on my phone: “Hey Ken, do you want to see something adorable?” It’s from an app called WoeBot, and I’ve been ignoring it. WoeBot is one of several new chatbot therapists that tout that they are “driven by AI”: this particular app claims to sit at the intersections of several different types of therapy – cognitive behavioral therapy, interpersonal psychotherapy, and dialectical behavior therapy, according to their website – and AI that is powered by natural language processing. At the moment, it’s trying to cheer me up by showing me a gif of a kitten.

Inspired by (or worried they’ll get left behind by) programs like ChatGPT, tech companies have been chomping at the bit to create their own AI programs that produce natural-sounding text. The lucrative world of self-help and mental well-being seems like a natural fit for such products, and many claim to solve a longstanding problem in the world of mental healthcare: namely, that while human therapists are expensive and busy, AI therapists are cheap and available whenever you need them. In addition to WoeBot, there’s Wysa – also installed on my phone, and also trying to get my attention – Youper, Fingerprint for Success, and Koko, which recently got into hot water by failing to disclose to its userbase that they were not, in fact, chatting with a human therapist.

Despite having read reports that people have found AI therapy bots to be genuinely helpful, I was skeptical. But I attempted to keep an open mind, and downloaded both WoeBot and Wysa to see what all the fuss was about. After using them for a month, I’ve found them to be very similar: they both “check in” at prescribed times throughout the day, attempt to start up a conversation about any issues that I’ve previously said I wanted to address, and recommend various exercises that will be familiar to those who have ever done any cognitive behavioral therapy. They both offer the option to connect to real therapists (for a price, of course), and perhaps in response to the Koko debacle, neither hides the fact that they are programs (often annoyingly so: WoeBot is constantly talking about how its friends are other electronics, a schtick that got tired almost immediately).

It’s been an odd experience. The apps send me messages saying that they’re proud of me for doing good work, that they’re sorry if I didn’t find a session to be particularly useful, and that they know that keeping up with therapy can be difficult. But, of course, they’re not proud of me, or sorry, and they don’t know anything. At times their messages are difficult to distinguish from those of a real therapist; at others, they don’t properly parse my input, and respond with messages not unlike “How long have you had the problem I think I’m Napoleon?” If there is any therapeutic value in the suspension of disbelief then it often does not last long.

But apart from a sense of weirdness and the occasional annoyances, are there any ethical concerns surrounding the use of AI therapy chatbots?

There is clearly potential for them to be beneficial: your stock model AI therapist is free, and the therapies that they draw their exercises from are often well-tested in the offline world. A little program that reminds you to take deep breaths when you’re feeling stressed out seems all well and good, so long as it’s obvious that it’s not a real person on the other side.

Whether you think the hype about new AI technology is warranted or not will likely impact your feelings about the new therapy chatbots. Techno-optimists will emphasize the benefit of expanding care to many more people than could be reached through other means. Those who are skeptical of the hype, however, are likely to think that spending so much money on unproven tech is a poor use of resources: instead of sinking billions into competing chatbots, maybe that money could be spent on helping a wider range of people access traditional mental health resources.

There are also concerns about the ability of AI-driven text generators to go off the rails. Microsoft’s recent experiment with their new AI-powered Bing search had an inauspicious debut, occasionally spouting nonsense and even threatening users. It’s not hard to imagine the harm such unpredictable outputs could cause for someone who relied heavily on their AI therapy bot. Of course, true believers in the new AI revolution will dismiss these worries as growing pains that inevitably come along with the use of any new tech.

What is perhaps troubling is that the apps themselves walk a tightrope between trying to be a sympathetic ear, and reminding you that they’re just bots. The makers of WoeBot recently released research results that suggest that users feel a “bond” with the app, similar to the kind of bond they might feel with a human therapist. This is clearly an intentional choice on the part of the creators, but it brings with it some potential pitfalls.

For example, although the apps I’ve tried have never threatened me, they have occasionally come off as cold and uninterested. During a recent check-in, Wysa asked me to tell it what was bothering me that morning. It turned out to be a lot (the previous few days hadn’t been great). But after typing it all out and sending it along, Wysa quickly cut the conversation short, saying that it seemed like I didn’t want to engage at the moment. I felt rejected. And then I felt stupid that I felt rejected, because there was nothing that was actually rejecting me. Instead of feeling better by letting it all out, I felt worse.

In using the apps I’m reminded of a thought experiment from philosopher Hilary Putnam. He asks us to consider an ant on a beach who, through its search for food and random wanderings, happens to trace out what looks to be a line drawing of Winston Churchill. It is not, however, a picture of Churchill, and the ant did not draw it, at least in the way that you or I might. However, at the end of the day a portrait of Winston Churchill consists of a series of marks on a page (or on a beach), so what, asks Putnam, is the relevant difference between those made by the ant and those made by a person?

His answer is that only the latter are made intentionally, and it is the underlying intention which gives the marks their meaning. WoeBot and Wysa and other AI-powered programs often string together words in ways that look indistinguishable from those that might be written down by a human being on the other side. But there is no intentionality, and without intentionality there is no genuine empathy or concern or encouragement behind the words. They are just marks on a screen that happen to have the same shape as something meaningful.

There is, of course, a necessary kind of disingenuousness that must exist for these bots to have any effect at all. No one is going to feel encouraged to engage with a program that explicitly reminds you that it does not care about you because it does not have the capacity to care. AI therapy requires that you play along. But I quickly got tired of playing make believe with my therapy bots, and it’s overall become increasingly difficult for me to find the value in this kind of ersatz therapy.

I can report one concrete instance in which using an AI therapy bot did seem genuinely helpful. It was guiding me through an exercise, the culmination of which was to get me to pretend as though I were evaluating my own situation as that of a friend, and to consider what I would say to them. It’s an exercise that is frequently used in cognitive behavioral therapy, but one that’s easy to forget to do. In this way, the app’s check-in did, in fact, help: I wouldn’t have been as sympathetic to myself had it not reminded me to be. But I can’t help but think that if that’s where the benefits of these apps lie – in presenting tried-and-tested exercises from various therapies and reminding you to do them – then the whole thing is over-engineered. If it can’t talk or understand or empathize like a human, then there seems to be little point in there being any artificial intelligence in there at all.

AI therapy bots are still new, and so it remains to be seen whether they will have a lasting impact or just be a flash in the pan. Whatever does end up happening, though, it’s worth considering whether we would even want the promise of AI-powered therapy to come true.

Due Attention: Addictive Tech, the Stunted Self, and Our Shrinking World

photograph of crowd in subway station

In his recent article, Aaron Schultz asks whether we have a right to attentional freedom. The consequences of a life lived with our heads buried in our phones – consequences not only for individuals but for society at large – are only becoming more and more visible. At least partly to blame are tech’s (intentionally) addictive qualities, and Schultz documents the way AI attempts to maximize our engagement by taking an internal X-ray of our preferences while we surf different platforms. Schultz’s concern is that as better and better mousetraps get built, we see more and more of our agency erode each day. Someday, we’ll come to see the importance of attentional freedom – freedom from being reduced to prey for these technological wonders. Hopefully, that occurs before it’s too late.

Attention is a crucial concept to consider when thinking about ourselves as moral beings. Simone Weil, for instance, claims that attention is what distinguishes us from animals: when we pay attention to our body, we aim at bringing consciousness to our actions and behaviors; when we pay attention to our mind, we strive to shut out intrusive thoughts. Attention is what allows us, from a theoretical perspective, to avoid errors, and from a moral, practical perspective, to avoid wrong-doing.

Technological media captures our attention in almost an involuntary manner. What often starts as a simple distraction – TikTok, Instagram, video games – may quickly lead to addiction, triggering compulsive behaviors with severe implications.

That’s why China, in 2019, imposed limits on gaming and social media use. Then, in 2021, in an attempt to further control and reduce the mental and physical health problems of the young population, stricter limits on online gaming on school days were enforced, and children and teenagers’ use was limited to one hour a day on weekends and holidays.

In Portugal, meanwhile, there is a crisis among children who, from a very young age, are being diagnosed with addiction to online gaming and gambling – an addiction which compromises their living habits and routines, such as going to school, being with others, or taking a shower. In Brazil, a recent study showed that 28% of adolescents show signs of hyperactivity and mental disorder from tech use, to the point that they forget to eat or sleep.

The situation is no different in the U.S., where a significant part of the population uses social media and young people spend most of their time in front of a screen, developing a series of mental conditions inhibiting social interaction. Between online gaming and social media use, we are witnessing a new kind of epidemic that attacks the very foundations of what it is to be human, to be able to relate to the world and to others.

The inevitable question is: should Western countries follow the Chinese example of controlling tech use? Should it be the government’s job to determine how many hours per day are acceptable for a child to remain in the online space?

For some, the prospect of Big Brother’s protection might look appealing. But let us remember Tocqueville’s warning of the despotism and tutelage inherent in this temptation – of making the State the steward of our interests. Not only is the strategy paternalistic, in curbing one’s autonomy and the freedom to make one’s own choices, but it is also totalitarian in its predisposition, permitting the State control of one more sphere of our lives.

This may seem an exaggeration. Some may think that the situation’s urgency demands the strong hand of the State. However, while an unrestrained use of social media and online gaming may have severe implications for one’s constitution, we should recognize the problem for what it is. Our fears concerning technology and addiction are merely a symptom of another more profound problem: the difficulty one has in relating to others and finding one’s place in the world.

What authors like Hannah Arendt, Simone Weil, Tocqueville, and even Foucault teach us is that the construction of our moral personality requires laying roots in the world. Limiting online access will not, by itself, resolve the underlying problem. We may actually end up throwing children into an abyss of solitude and despair by exacerbating the difficulties they have in communicating. We must ask: how might we rescue the experience of association, of togetherness, of sharing physical spaces and projects?

Here is where we go back to the concept of attention. James used to say that attention is the

taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness is of its essence. It implies withdrawal from some things in order to deal effectively with others. 

That is something that social media, despite catching our (in)voluntary attention, cannot give us. So, our withdrawal into social media must be compensated with a positive proposal of attentive activity: to (re)learn how to look, interpret, think, and reflect upon things, and most of all to (re)learn how to listen and be with others. More than 20 years ago, Robert Putnam documented the loss of social capital in Bowling Alone. Simone Weil detailed our sense of “uprootedness” fifty years prior to that. Unfortunately, today we’re still looking for a cure that will have us trading in our screens for something that we can actually do attentively together. Legislation alone is unlikely to fill that void.

Right-to-Work Laws and Workers’ Rights

photograph of worker's tools arranged with US flag background

Once a bastion of organized labor, Michigan has had a controversial right-to-work law on the books since 2012. On Tuesday, March 14th, the Michigan Senate approved a bill that would repeal it. Democratic Governor Gretchen Whitmer has already stated her intent to sign. With surging union approval ratings, some labor supporters cautiously hope this could signal broader pushback against the decades-long right-to-work initiative.

But what exactly are right-to-work laws, what case can be made for them, and why are they opposed by unions which generally support workers’ rights?

Right-to-work laws bear little relationship to a more colloquial understanding of the right to work as the right to seek and engage in productive employment. The term comes from a 1941 editorial by William Ruggles, an editor of The Dallas Morning News. Ruggles’s “right to work” was the right not to have to join a union as a condition of employment. His ideal was spun into a multi-state campaign by the corporate lobbyist turned right-wing political activist Vance Muse. This is still generally what right-to-work means in the United States.

In the words of the National Right To Work Legal Defense Foundation:

The Right to Work principle–the guiding concept of the National Right to Work Legal Defense Foundation–affirms the right of every American to work for a living without being compelled to belong to a union. Compulsory unionism in any form–“union,” “closed,” or “agency” shop–is a contradiction of the Right to Work principle and the fundamental human right that the principle represents. 

More precisely, right-to-work laws regulate the kinds of agreements that can be made between unions and employers known as union security agreements. These security agreements require certain measures of union support as a condition of employment. The typical ones are the closed shop, where only members of a certain union will be hired; the union shop, where employees must join the union as a condition of employment; the agency shop, where employees who choose not to join the union have to pay a fee to cover those union activities that they benefit from; and the open shop which imposes no conditions. (Closed shop agreements were made illegal by the 1947 Taft-Hartley Act.)

Most contemporary right-to-work laws – currently implemented in 27 states – forbid union shops and agency shops. Union membership cannot be a condition of employment, and non-union members cannot be required to pay agency fees.

The ban on agency fees has generated especially strident opposition from unions. Under the American policy of exclusive representation a union is still required to protect and negotiate on behalf of those employees who choose not to join it. Unions charge non-members agency fees, also known as fair share fees, to defray the cost of representing them. Banning agency fees creates an incentive for workers not to join the union, as they can still reap many of the benefits.

Numbers matter for unions. Employers may be more responsive to concerns about pay, benefits, and safety when many workers come together and voice them. It is uncontroversial that right-to-work laws harm unions, and labor organizers argue this is the true purpose of such laws.

According to their advocates, right-to-work laws have two major selling points. The first is that they secure the rights of association/contract of the individual worker in contrast to “compulsory unionism.” The second is that right-to-work laws help the broader economy by attracting businesses to states. These very arguments were made by advocates of the Michigan right-to-work law, such as Republican State Senator Thomas Alberts.

Ostensibly, on freedom of association grounds, workers should have the right to join or not to join unions. On freedom of contract grounds, the state should not be interfering with agreements between workers and employers. However, these defenses are incoherent on their face. As multiple scholars have pointed out — including Peter Vallentyne in these very pages — union membership or agency fees are simply a condition of employment, and all sorts of conditions of employment are allowed, provided both parties agree to them — from drug tests to uniforms. If an employee does not like the particular conditions on offer, the freedom of contract/association narrative goes, then they can choose a different job.

One can coherently argue the additional options provided by right-to-work laws are good — it is good for employees to have the option to join companies with or without union membership and with or without agency fees.

But right-to-work laws are not protecting the right to association or contract. Nor does so-called “compulsory unionism” appear obviously more compulsory than other work requirements, even if union membership is perhaps a more substantial requirement than uniforms.

What about the economic argument for right-to-work laws? Are they simply good policy, either for workers or for the state economy? Here the story is more complicated, and it is challenging to isolate the effects of right-to-work laws from the general political and economic background of states. On the one hand, it is often found that right-to-work laws negatively impact wages. On the other, some studies find that by making states more attractive for businesses, overall state economic benefits compensate for potential lost wages.

The economic argument is treacherous ground though. For the essential claim is that by decreasing the power of workers and unions, states can lure businesses away from other states with more robust labor protections — a race to the bottom. An equally effective response would be to simply ban right-to-work laws at the national level, as some legislation proposes to do.

These arguments notwithstanding, the debate at the heart of right-to-work is really a larger question concerning organized labor. There is a compelling historical case that the right-to-work movement in the United States has predominantly been about limiting union power and only nominally about rights or ethics. Similarly, for union supporters the main argument against right-to-work laws has always been that they hurt organized labor.

While requiring union membership as a condition of employment need not violate workers’ rights, most organizers would agree that it is preferable for workers to join and form unions independently. The agency shop, in which employees do not have to join but have to pay for some services, is especially antithetical to the historical intent of unionization. A union, after all, is an organization of and for workers; it is not simply a paid negotiator. Some problems of American labor, such as the tensions caused by exclusive representation, do not occur in many European countries which operate under a very different model. Perhaps there is room for a deeper rethink of what legal landscape does best by the American worker.

Do You Have a Right to Own a Gas Stove?

photograph of gas stove burning

In January, Representative Ronny Jackson, a Republican from Texas’ 13th district, a doctor, and a former rear admiral, tweeted a defiant objection to any government move against gas stoves.

This was in response to a statement by the commissioner of the U.S. Consumer Product Safety Commission that the agency might pursue new regulatory action with regard to gas stoves, given recent research showing that gas stoves contribute to environmental and health issues like asthma. (For a more detailed, and different, take on the issue, see Andrew Conarty’s “Gas Stoves: A Kitchen Culture Clash,” also here at The Post.)

It wasn’t just Jackson who had a negative reaction to this, but his response – echoing a popular National Rifle Association bumper sticker – seems especially notable.

Was Jackson saying that we have a right to own a gas stove in the same way as we have a right to own a gun? It’s a good question.

I think the answer, however, is that from an ethical point of view, we do not have a fundamental right to own any kind of physical object – whether it’s a gas stove or a gun.

This is an especially important issue right now. In the last few years, federal court decisions have been an unmitigated triumph for people who sell guns and an unmitigated disaster for people who don’t want to get shot. In 2022, two hundred and thirty-one years after it was ratified, the Supreme Court ruled for the very first time that the Second Amendment to the U.S. Constitution protects an individual’s right to carry a gun in public for self-defense. Most recently, in United States v. Rahimi, the 5th Circuit Court of Appeals ruled that laws forbidding domestic abusers from possessing firearms violate the Second Amendment by prohibiting specific individuals from owning weapons. In doing so, the court not only ignored the fact that a woman is five times more likely to be murdered by an abuser with a gun and that four and a half million women have been threatened with, shot at, or murdered by an abuser with a gun; it specifically said that these facts are irrelevant. Why?

The theory behind the “liberal” in “liberal democracy” is that everyone has, in addition to a right to participate in the democratic process, certain inalienable (that is, you can’t give them up even if you wanted to), fundamental rights, liberties, and freedoms. A fundamental right demands more than a balancing of harms and benefits.

You have a legal right to do whatever you are not specifically legally prohibited from doing, but you have a smaller number of fundamental rights. These, as I’ve said, by their very nature trump, or overrule, ordinary rights – and even override consideration of bad consequences. In theory, the only thing that limits a fundamental right is another fundamental right. These fundamental rights include freedom of religion, the press, speech, association, and the right to own private property. But the fundamental rights are not a random list. They are meant to protect, as John Stuart Mill wrote, “The only freedom which deserves the name, that of pursuing our own good in our own way…” Or, as John Rawls would have it, the fundamental rights are those that allow us to form, revise, and pursue our own idea of the good.

Notice how odd the right to own a gun – the right to own a particular kind of manufactured artifact – seems in this context. Guns and gas stoves seem to be simply the wrong kinds of things, metaphysically, to be the objects of a fundamental right.

The right to own property doesn’t help here, since it does not mean a right to own just anything (e.g., you can’t own biological weapons, designated historical sites, or a car that you drive with no emission system).

Helpfully, the courts themselves have specified that the right to own a gun is actually “the right to own and carry a gun for self-defense.” Let’s just stipulate that self-defense is a basic right. (It may not be.) But if self-defense is a fundamental right, then, by this very definition, gun ownership is a derivative, and not a fundamental, right. For example, free speech might imply that I have a non-fundamental right to my own Facebook page. But having a Facebook page – or a gun – is not a fundamental right since they are explicitly derived from the more fundamental rights of speech or self-defense.

Why does this matter? Because if the fundamental right is a right to self-defense, and gun ownership is justified by that, then it matters what the empirical facts are.

If, for example, guns turn out to be poor tools for self-defense, then they can be regulated on that basis. If the evidence supports the claim that abusers are overwhelmingly more likely to harm an innocent person with a gun than justifiably defend themselves with it, then the right to self-defense itself undercuts gun ownership in these circumstances.

The most obvious objection is this: Guns, unlike gas stoves, are specifically stipulated in the Bill of Rights as a particular kind of thing you have a fundamental right to own. However, there are still good reasons to reject this view. For if there are certain kinds of particular physical objects, the ownership of which is protected via stipulation by the Constitution, those objects must be, in this case, muskets and flintlock pistols (especially if you are an “originalist”). It’s probably not a big public health problem if people own these; it’s handguns and AR-15s that are worth worrying about. If the argument is that people have a right to own muskets, so be it.

If gun advocates say, “but you can’t reasonably defend yourself with a flintlock pistol,” then we are back where we started. Even if people have a fundamental right to defend themselves, answering the question of how, and by what means, will depend on a bunch of empirical claims about harms and benefits. The point is that the derivative right to own a gun does not trump more fundamental rights, nor does it make weighing harms irrelevant. Overwhelming evidence that allowing abusers to own firearms will cost many women their lives can’t be set aside by simply asserting a right to own certain kinds of machines.

I think we are so used to hearing people talk about a “right to own a gun” that we no longer realize what a strange idea it is that we could have a basic right to possess a certain kind of manufactured machinery, or how different that is from the idea that we have a right to say what we want without government interference. This is why it helps to think about whether there could be a fundamental right to own a gas stove. If you did have fundamental rights to own guns and gas stoves, and could keep adding objects you would like to have to the list of things you have a fundamental right to, this would quickly undermine the very idea that there is any set of fundamental rights at all. What would be left is not a coherent list of what is necessary for shared and ordered liberty, but simply a laundry list of stuff one wishes to possess. And, so, if gas stoves do cause serious harms, it is perfectly reasonable to regulate – or even ban – them. Guns do cause serious harms – in 2021, 45,222 Americans died of gun-related injuries. Perhaps they should be regulated.

Ultimately, where no fundamental right is at stake, it’s all about harms and benefits which should be weighed through the ordinary democratic process. No one has a right of control over particular kinds of physical objects that is on par with other basic rights. Contrary to Representative Jackson’s tweet, no one is entitled to a handgun or a gas stove.

The Hunting with Dogs Bill: Dominance Hierarchies and Animal Rights

photograph of rider on horseback with hunting hounds

The practice of hunting with dogs traces back to ancient Egypt. It became particularly popular in Britain in the 16th century when social clubs began using dogs with highly developed senses of scent to track woodland animals, predominantly foxes. Participants engaged in the activity primarily for sport. For the most part, the practice in this region of the world has now come to an end. On January 23, 2023, the Hunting with Dogs Bill was passed in Scotland. The ban does not outlaw hunting outright, nor does it even ban the practice of hunting with dogs. What the bill does do is make it illegal to chase and hunt animals with a pack of more than two dogs for sport. But farmers and ranchers can apply for exemptions for the purposes of “wildlife management.”

The bill has received mixed responses from communities of animal advocates and serves to highlight key differences between approaches to thinking about our obligations to non-human animals.

One dominant line of reasoning in animal ethics is that we ought to focus on animal welfare. Animal welfare approaches frequently direct their attention not toward banning human use of animals outright, but toward making such practices less cruel or harmful. So, for example, the advocate of an animal welfare approach might focus not on eliminating factory farming, but on making the practices used as part of factory farming more humane. This is the kind of strategy that has been successful when it comes to legislation mandating that egg-laying hens be raised cage-free.

The argument against hunting with dogs takes a similar approach. The argument is that being chased by a large pack of dogs causes animals such as deer, hares, and foxes extreme distress. The animals who end up dying directly in the hunt do not die quickly and painlessly; they are ripped to death by a large pack of dogs against whom they never stood much of a chance. The animals who aren’t ultimately caught by the dogs and don’t die directly as a result of the hunt nevertheless experience severe psychological and physiological problems as a result of the trauma. Some of them suffer injuries that they must deal with for the rest of their lives. Some animal welfare theorists argue that it may not be possible to end hunting entirely, but we ought to ban this form of hunting because it is cruel and unusual.

Other animal advocates do not support the Hunting with Dogs Bill in its current form. Those who adopt this philosophy take on the perspective articulated by philosopher Tom Regan that “the truth of animal ethics requires empty cages, not larger cages.” Thinkers like Regan who believe that we should be focusing on rights rather than simply on welfare are likely to think of the Hunting with Dogs Bill as incoherent. After all, if we acknowledge that for sentient beings who can experience pain, being ripped apart while still alive is a bad thing, merely preventing these animals from being ripped apart alive by large packs of dogs doesn’t go far enough. We should outlaw hunting with dogs in any form, with any number of dogs.

If what bothers society is the purpose for which animals are being hunted, then we should go beyond banning hunting for sport using dogs. We should ban hunting for sport altogether. Anything less is not just inconsistent, but inconsistent in ways that have life-or-death implications for countless animals.

Once one acknowledges that we have moral obligations to non-human animals in light of the kinds of beings that they are and the relationship in which we stand to them, it becomes difficult (or perhaps impossible) to effectively defend the position that it is acceptable to torture and kill them, for sport or otherwise.

Of course, animal advocates are not the only parties in Scotland or in Britain who disagree over laws of this type. There is strong pressure from some groups to overturn the legislation. Many of the arguments rest on familiar attitudes about the nature of non-human animals and their relationship to humans. This may have something to do with the fact that attitudes about species hierarchy have been dominant in the Western thought tradition since Aristotle, who famously argued in Politics that,

after the birth of animals, plants exist for their sake, and that the other animals exist for the sake of man, the tame for use and food, the wild, if not all at least the greater part of them, for food, and for the provision of clothing and various instruments. Now if nature makes nothing incomplete, and nothing in vain, the inference must be that she has made all animals for the sake of man.

Following Aristotle, those who argue that humans have a right to hunt with animals, using dogs or otherwise, claim that the universe is purposeful and that humans, the only rational animals, were placed by nature at the top of a dominance hierarchy. The Bible seemingly lends the authority of God to this position in Genesis,

And God said, let us make man in our image, after our likeness: and let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth.

In addition to arguments in support of dominance and hierarchy, advocates of dog hunting argue that they have a right to their cultural traditions. For instance, this month, Scotland’s Lanarkshire and Renfrewshire Hunt, which met for the first time in the 1700s, announced that, as a result of the new bill, they could no longer conduct their hunt and that their 300-year tradition was now coming to an end.

Friends of the hunt might point to the social importance of pluralism about values and attitudes about the nature of the good.

Liberal societies can only function well if we both recognize and accept that people do not share the same ethical convictions. People will simply have to agree to disagree about whether animals have rights, and animal advocates shouldn’t force their attitudes on others.

In response, defenders of animals argue that pluralism is laudable as it relates to liberties such as free exercise of religion, freedom of thought and expression, and freedom of association, but there are limits. We shouldn’t be value pluralists when it comes to the exploitation, oppression, and death of sentient creatures with lives and relationships of their own.

They might argue further that Darwin effectively demonstrated that the universe is not teleological — it had no particular hierarchy in mind and did not have the intention (nor could it) to enthrone human reason. We should be willing to critically analyze the ways in which appeal to reasoning capacities has been weaponized through the years to justify the oppression of women, children, and racial and ethnic minorities. Western thought has denigrated the body while glorifying the mind, while at the same time associating targets of oppression more closely with the body. As Cathryn Bailey powerfully articulates in her contribution to The Feminist Care Tradition in Animal Ethics,

Against this socially constructed background of clumsy brutes, sometimes childlike, sometimes dangerous, animals, women, and people of color have been made to serve as a kind of foil to the purity and controlled exercise of rationality.

In light of these observations, we ought to proceed with caution when we feel inclined to make dominance claims or to force sentient beings into value hierarchies. Humans have done this poorly for all of recorded history with disastrous consequences. We may one day come to see our treatment of non-human animals in the same way.

The Feelings of Fish

photograph of bass fish underwater

“It’s okay to eat fish, ‘cause they don’t have any feelings.”

So sang Nirvana frontman Kurt Cobain in the haunting “Something in the Way.” Here’s the problem, however: according to recent research, fish do have feelings – and this stands to create all sorts of concerns for how humans treat these animals.

A study published just last month shows that fish are able to pass the Mirror Test – being capable of recognizing themselves in mirror reflections and photographs. This test is an important indicator of an advanced level of cognitive capability in both human and non-human animals, and sees fish join an elite club previously occupied only by humans, dolphins, elephants, and some great apes. This latest study joins a mounting body of research showing that fish do, in fact, have a much higher level of cognition than we previously thought. For one, they don’t have a three-second memory. In fact, they tend to remember things for a period of about five months. Further, a 2019 study showed that fish experience pain in much the same way as humans, exhibiting accelerated ventilation rates and showing an ability to avoid pain-inducing stimuli. They even rub areas of their bodies that have suffered pain – much as we do a stubbed toe.

So what do these latest developments mean for the ethical treatment of fish? Why might features such as self-recognition and the ability to feel pain be relevant in considering what we can permissibly do to fish?

A few months ago, I considered a similar problem relating to our treatment of insects. Recent research had suggested that – contrary to our traditional understanding – insects might be capable of experiencing pain. I explained how the ability to feel pain (and its corollary, pleasure) is, for some ethicists, all that is required for a living being to have interests. How so? Well, if something can experience pain, then it has an interest in avoiding pain. In fact, the fish in the study cited above showed precisely this behavior – avoiding areas of their tanks where they received electric shocks, even where those areas were previously associated with feeding.

Suppose, then, that I want to go for a spot of recreational “catch-and-release” fishing. Is it wrong for me to do so?

One starting point might be to note that even if fish can experience pain (and have a corresponding interest in avoiding pain) the amount of suffering caused by a single barbless hook through the mouth is relatively small. Suppose that, for a fish, this experience is roughly about as painful as it would be for a human to be pierced in the back of the hand with a sharp needle. This might seem acceptable. But would we think it morally permissible for someone to go around stabbing others in this way for purely recreational purposes? If our answer is “no,” then we have a problem.

The reason why this is problematic comes down to the principle of equality.

Previously, I discussed how when we talk of the equality of humans, we aren’t generally claiming that all humans are equal, nor that they should be equal. Rather, equality is taken as a prescription that the interests of all humans should be given equal consideration. This is the principle that underpins the wrongness of sexism, racism, and other forms of bigotry. It’s why it’s morally impermissible to prioritize one person’s interests in, say, being given a promotion based merely on their gender or skin color. Instead, the interests of these individuals need to be considered equally.

So, if two individuals have an interest in not experiencing pain, then those interests have to be treated equally. And if we believe that inflicting X amount of pain on one individual is morally impermissible, then we must believe that inflicting that same amount of pain on another individual is just as wrong. Further, we’re not permitted to make a distinction based purely on the species of the individual. Why? Because doing so would violate the principle of equality in the same way that sexism or racism does. In this case, however, it would be speciesist.

As with the ethical treatment of insects, we might consider discounting – or disqualifying altogether – the suffering of fish on the basis that they aren’t as intelligent as humans. But this contravenes the very same principle of equality.

Consider how we think about humans: Do we believe it is more morally permissible to cause pain and suffering to those who are less intelligent? Do we allow harm to very young children on the basis that they do not have the same cognitive faculties as fully developed adults?

Clearly not. So we must take the same approach to animals like fish.

Given all of this, it seems that if fish do truly experience pain in a way that is similar to humans, then something like recreational catch-and-release fishing becomes morally impermissible (assuming, of course, that we’re not willing to endorse harming humans in the same way for fun). But what if the harm caused to these fish wasn’t merely recreational, but was instead done for survival? Would it be permissible for us to catch-and-kill fish for this purpose?

This is where the waters become a little murky. Peter Singer – the proponent of the approach taken above – is a Utilitarian, meaning that he believes the morally right thing to do is that which maximizes pleasure (or minimizes pain). Thus, if a family was in desperate need of sustenance, the pain and suffering inflicted on a fish by catching and killing it for dinner might be outweighed by the good of the family’s continued survival. But here’s the thing: for many of us, this will never be the case. Most of us in developed nations have ample sources of sustenance that do not require the suffering of fish – or any animal for that matter. This is precisely why Singer argues so forcefully in favor of veganism.

Ultimately, however, all of this comes back to the question of whether fish do truly experience pain in a morally relevant way. And while some will take these latest studies as clear evidence that they do, others will remain skeptical, maintaining that the kind of thing being experienced by fish (and insects and other animals) is fundamentally different from that experienced by humans. And that might turn out to be the case. I have, however, previously noted our very poor track record of understanding pain in other living beings (even infant humans). Given this, it would seem that caution is in order – and that the best approach might be to refrain from recreationally harming an animal that may turn out to experience pain in a way similar to humans.

On Our Collective Empathy Fatigue

image of shadowy figure with dark background

A Vanderbilt University employee was recently lambasted for callousness after utilizing ChatGPT to write a letter to students in the wake of the Michigan State tragedy. While such an action clearly displays poor professional judgment, I think many of us (if we’re honest) can relate to a kind of emotional numbness in the face of yet another instance of suffering and loss in our society. While it is impossible to know if such feelings are what prompted that particular employee’s actions, it is clear enough that many of us currently face a type of emotional fatigue in the face of numerous, seemingly insurmountable problems.

These negative feelings are further amplified if we zoom out of our national landscape and consider the state of the human race at large. The worst earthquake to strike the Middle East in a century further devastated an already impoverished, war-torn population, and there is a constant drumbeat of deep injustices such as modern slavery, food and water insecurity, and regional violence in faraway places. Human beings, even the most compassionate amongst us, do not have the emotional or cognitive bandwidth to carry the weight of tragedies that fill the world.

This inability can lead to a particular kind of condition, sometimes referred to as empathy fatigue. The condition is a product of being continually subjected to upsetting news or circumstances, paired with an inability to fix the issues at hand. Symptoms of the condition include feeling lackluster about the future, disconnected from those around us, and numb to our emotions. Given we live in a unique historical moment where news from across the world is nearly instantaneously available to us, we have access to an endless stream of information that can serve as fodder for this condition. There is evidence that frequent news consumption stirs up anxiety and raises cortisol levels, but we’re nevertheless consistently inundated with news updates via our phones, computers, and social media feeds. Even if you’ve never experienced the full force of empathy fatigue, you’ve most likely felt at some point a type of powerlessness when it comes to making a meaningful, global impact.

So what is the remedy for these feelings of apathy and detachment? I propose a certain group of liberal skeptics are well-equipped to answer this question.

The communitarian critique of liberalism, while arguably failing in its more ambitious project of offering an alternative to the liberal political order, succeeds in highlighting certain pitfalls of modern life. The movement, primarily comprised of moral and political philosophers, pushes for an increased focus on human nature, contending that our social and political structures must be responsive to this nature if they are to promote human flourishing. Political liberalism is doomed to failure because it fails to accurately account for the conditions of human flourishing, or so the charge goes. Communitarians particularly emphasize the inadequacy of the “autonomous self” conception of the person, arguing that the existence of such an entity is a damaging fiction at the heart of liberalism. As human beings, we are not able to thrive as autonomous entities, but rather we thrive when embedded in networks of meaningful social relationships that help guide and constrain our actions.

Our social networks tend to serve as primary sources of meaning, purpose, and identity in our lives, and thus we also plausibly bear certain duties of service to these communities in turn. Communitarians borrow from Aristotelian thought by contending that personal flourishing is bound up in communal flourishing. Communal and individual health are woven together in a way often rejected or, at least, downplayed in modern society. This emphasis on holistic flourishing places Communitarians at odds with the current political binary that characterizes the United States. Communitarianism is opposed to the rise of the large-scale bureaucratic state associated with the political left, as well as the downstream impacts of consumer capitalism, which tend to erode the cultural particularities of local communities. The preservation and cultivation of rich local cultures is of great importance to Communitarians, and thus their social and political prescriptions are aimed at this end.

So what are the practical takeaways these Communitarian insights offer us in regards to our collective empathy fatigue?

Importantly, a takeaway is not that we should decrease charitable activity in places outside our local community or stop involving ourselves in social and political efforts that seek to enact large-scale change. Rather, Communitarian thought encourages us to embrace our local communities as a grounding anchor and to reflect on what we might morally owe to the individuals and groups in our immediate vicinity. For the vast majority of us, our family, friends, and local communities constitute our primary domains of influence. Despite this reality, a quick look at the relevant statistics suggests many of us are failing to fully invest in these communities. For example, local elections tend to elicit significantly lower voter turnout rates than presidential elections. Grassroots community organizations are oftentimes doing life-changing, transformational work, but less than a third of Americans regularly volunteer, and rates of charitable giving have sharply declined in recent years. Additionally, data shows that almost one-quarter of people under the age of thirty do not know any of their neighbors, a marked increase from older generations.

Don’t let the outcome of empathy fatigue be complacency and detachment but rather local investment. Insofar as the condition is exacerbated by an excess of information about tragedies we can do nothing or very little to change, an increased focus on our local communities can help alleviate feelings of powerlessness that threaten to render us apathetic. The empirical data seems to back this up, as there are demonstrated personal benefits associated with serving one’s community. Thus, both yourself and your community will be better off if you choose to implement some of the insights of the Communitarian movement in your own backyard.

A Brave New Pod? The EctoLife Fantasy

ultrasound image

If you’re wondering what an entire facility of artificial wombs may look like, then look no further. On December 9, 2022, Hashem Al-Ghaili released his concept, “EctoLife,” via YouTube. The video offers a glimpse into the “world’s first artificial womb facility,” or rather what looks like a sci-fi dystopia movie – rows of fetuses in pods being “produced” in a factory-like environment. While only a idea for now, what is essentially on offer is the ability for parents to produce customized children that bypasses the process of pregnancy altogether.

Renewable energy powers the fantasy facility’s 75 labs, all equipped with up to 400 growth pods designed to provide a similar environment to inside a mother’s womb. Designed to “grow” 30,000 babies a year, EctoLife promises non-gestational pregnancy, control of the gestational environment, and the chance for parents to witness their baby’s growth and development using an app on their phone and via a VR headset. The hope, the narrator in the video claims, is that such an apparatus could be used to combat infertility by allowing parents to have biologically related offspring without experiencing pregnancy or conception. Additionally, parents could “customize” their embryo and choose their child’s level of intelligence, height, hair and eye color, skin tone, and physical strength.

Unsurprisingly, this video has attracted much attention and currently has 2.1 million views on YouTube. Described in the comments section as “insanely scary” and “truly alarming,” some have likened the idea to a modern-day Matrix. USA Today has refuted false claims that such a facility exists, and Dr. Guid Oei, a gynecologist and professor at the Eindhoven University of Technology, has confirmed that the technology in the video is not currently possible.

Although EctoLife is pure fantasy, it does provide a rather interesting thought experiment for exploring our moral intuitions concerning the creation of future children.

Concerns over “designer babies” are not new; similar fears circulated when in vitro fertilization (IVF) was first developed. However, the possibility of child customization – using a “genetic menu” that offers the choice of over 300 genes – goes much further than reproductive technology like IVF, which has a more limited capability of recognizing and removing genetic abnormalities in an embryo. Today, this is done using a technique called pre-implantation genetic diagnosis (PGD). PGD is not currently regulated in the United States, and whilst it is used to screen for genetic diseases, it's also commonly offered for sex selection. One study found that in the United States, 72.7% of fertility clinics offer sex selection and 83.5% of those clinics offer sex selection for couples without infertility problems. Meanwhile, CRISPR-Cas9 represents the most advanced gene editing science has yet achieved and likely the closest we will get to “customizing” genes. This gene-editing technology can remove, add, or alter precise sections of our DNA sequence. However, some argue that the high risk of collateral damage makes CRISPR-Cas9 gene editing ultimately unethical.

The EctoLife video promises parents the power to design “the most viable and genetically superior embryo” – labeling undesirable qualities and traits as genetic defects to be removed and placing negative connotations on (perceived) disabilities. But surely which attributes represent the “best qualities” in society is a subjective question; qualities that one individual may find desirable, another may not. Who can really say?

There have been examples of parents who have selected for qualities that others might select against. For example, a deaf couple who deliberately chose to have a deaf child did so because they did not consider their deafness a disability. Instead, they saw it as a cultural identity – a quality that enabled (and structured) communication with others within the deaf community. In an interview with The Washington Post, the parents asserted that they would make better parents to a deaf child. Many, however, criticized the parents for the “harm” they were believed to have intentionally foisted on that child. Others were not convinced. For instance, Julian Savulescu, in “Deaf Lesbians, Designer Disability, and the Future of Medicine,” insisted that a child “is harmed by being selected to exist only if his or her life is so bad it is not worth living. Deafness is not that bad.” Instead, Savulescu argued that parents should have the freedom to choose the “best” child for them, and not only that, he argued that all parents have a moral responsibility to do so.

So, should we have the freedom to extend our reproductive choices so far as to “fully customize” our children as EctoLife advertises? Do we have a moral obligation not only to prevent “suffering” in future children but also to “enhance” them?

The proposed ability to choose specific characteristics in EctoLife's “elite package” poses a variety of issues. The liberalization of these choices may risk the creation of “designer babies,” which are unlikely to come without a hefty price tag. There is already a lack of insurance coverage for IVF in the United States, and it seems unlikely that insurance companies would pay for parents to “embryo shop” in this capacity. The average cost of a single IVF cycle in 2023 is projected to range from $15,000 to $30,000. It is therefore likely that if the EctoLife fantasy did exist, the costs for parents would be far higher. This may mean that the “elite” have a monopoly over the technology, as well as an outsized influence over which qualities make a child the “best.” Sure, the financially secure already have this capacity via social advantages, but allowing for “fully customizable” embryos and the creation of preference-matched children may encourage eugenic practices.

Still, we’re very far away from tailor-made humans “hatched” on a production line, and, in reality, it is highly unlikely EctoLife will ever come to be – firstly, due to it being outside of the realms of scientific possibility, and, secondly, because of the current restrictions in the U.S. attached to the editing of human embryos. At the moment, EctoLife is simply no more than a fanciful thought experiment. However, it certainly encourages us to revisit the ethical debate over where to draw the line between prevention and enhancement, as well as whether the benefits of gene editing can outweigh the potential societal harms.

From Conscience to Constitution: Should the Government Mandate Virtue?

photograph of cards, dice, chips, cigarettes, and booze

You have probably heard it said that you can't legislate morality, that making laws that require people to do the right thing is both ineffective and authoritarian. Nevertheless, in his recent Atlantic article entitled “America Has Gone Too Far in Legalizing Vice,” Matthew Loftus encourages politicians to do just that. Loftus argues that, by legalizing sports betting and recreational marijuana, states are neglecting to consider the countless addicts that will result, and that lawmakers should do more to outlaw these harmful vices.

On Loftus’s view, public policy plays a role in the habits that we form, and creating an environment where more people succumb to their vices is neither good for addicts nor the political communities that will be left picking up the pieces. A substantial portion of gambling revenue comes from those who struggle with addiction, and legalizing marijuana is linked to higher rates of drug abuse. If these activities remained illegal, then fewer people would get hooked.

On this score, it seems that Loftus is obviously correct. Our environments play a significant role in the habits we adopt. If I am surrounded by responsible peers, I will be more likely to study for my next exam, while if many of my friends are cutting class, I will be more likely to skip out as well. These choices then form my habits. In the good case, my habits will be virtues like temperance, honesty, and diligence. In the bad case, my habits will lead me into all sorts of vice, including destructive addictions like gambling and drug use.

But even if it is true that our environments form our habits, the question still remains whether it’s the government’s place to guide us towards virtue instead of vice.

As a democracy founded on the rights to “life, liberty, and the pursuit of happiness,” it may be too heavy-handed for political leaders to require us, or even nudge us, to live a certain way.

This concern is amplified by the fact that many of the philosophers who have been the staunchest advocates of state-sanctioned virtue have not been very enthusiastic about democracy. According to Plato, a well-functioning political community should mirror the way that virtuous individuals conduct their lives, while for Aristotle, the purpose of government is to help citizens to live flourishing lives of virtue. But Plato also held that we should all be ruled by philosopher kings, a class of highly educated rulers, and that the freedoms granted within democracies would inevitably lead to anarchy. Likewise, Aristotle thought that monarchy and aristocracy are superior to democracy. An emphasis on character formation through the law might also lead to rejecting democracy as a promising form of government rather than embracing important constitutional freedoms.

These considerations reveal that there is some tension between allowing citizens the freedom to conduct their own lives and passing laws that promote virtue. Part of this tension arises because we often disagree about what is morally best, a fact that the political philosopher John Rawls called reasonable pluralism. Intelligent, well-intentioned citizens can find themselves at odds over many key moral questions.

Is gambling a harmless pastime or a serious moral vice? Is access to abortion a central human right, or the murder of an innocent human being? By enforcing policies that promote particular virtues, lawmakers may have to come down on one side or the other of these ongoing debates.

Furthermore, even in cases where we can agree on what is morally best, it is not clear that the law should prevent us from doing things that we know are to our detriment. Certainly the law should prevent us from interfering with how others choose to pursue happiness, but if we are only hurting ourselves, then why is that anyone’s business besides our own? Part of making room for the pursuit of happiness is allowing citizens to decide for themselves what they pursue, not limiting them to only a menu of government-approved options.

All of this, however, overlooks the fact that promoting certain virtues might be an unavoidable aim even for democratic governments. If it is true that political institutions should enable their citizens to freely pursue their vision of the good life, this goal cannot be accomplished by being completely hands off.

To form and pursue their understanding of the good, citizens need wisdom, discernment, courage, and perseverance, amongst other virtues. These virtues are necessary, not because the government wants to control our lives, but because without them we would be incapable of controlling our own lives.

We would instead be left to the dictates of momentary desires or, in the worst case scenario, crippling addictions from which we cannot recover.

This insight opens up a potential middle road between fully laissez-faire public squares and domineering, authoritarian governments. According to the philosopher Martha Nussbaum, political institutions should cultivate the capabilities necessary for their citizens to pursue self-directed lives. By promoting these capabilities, or virtues, governments ensure that their citizens are able to pursue their own unique visions of the good.

This approach allows that the law can encourage citizens in virtue in a way that creates and supports their ability to choose the life that they want to lead. On this model, the rule of law would not be completely value neutral, but it would make space for people to be able to choose many of their own values.

Forbidding certain kinds of vice, like preventing adults from gambling or using addictive substances, would for the most part be off the table. Unless the government wants to endorse a more robust picture of what a good life is like, the default position would be to let those who can choose their own informed goals pursue those ends. Recreational activities, like football or freediving, come with substantial dangers, but it is typically left up to individuals whether they want to take on those risks. In contrast, protecting those who are still forming the ability to choose their own life paths, like forbidding Juul from marketing to children, would be well within the purview of government officials.

Of course, just having laws that promote virtue does not ensure that anyone will become particularly moral. While they may succeed in outlawing vice, laws simply compel behavior, and those who begrudgingly comply out of fear of punishment would not for that reason become deeply good. The law, rather, would act as a guide for what kinds of values might be worth adopting, and citizens can then decide whether or not they want to choose these ideals for themselves. Policies like sin taxes, for instance, allow states to discourage vice without outright banning it.

Thus, even a view like Nussbaum’s leaves plenty of room for people to develop their own distinctive moral characters. Democracies can lay the groundwork for citizens to live meaningful and fulfilling lives, but at the end of the day, it is up to them to decide what values their lives will ultimately serve.

A Right To Attentional Freedom?

collage of various people on their phones

The White House recently posted a proposal for an AI Bill of Rights. In California, there is a bill that aims to hold social media companies accountable for getting young children addicted to their platforms. Several of these companies also face a federal lawsuit for emotionally and physically harming their users.

For those who use technology on a day-to-day basis, these developments are likely unsurprising. There is an intuition, backed by countless examples, that our technology harms us and that those who have created the technology are somehow responsible. Many of us find ourselves doomscrolling or stuck on YouTube for hours because of infinite scrolling.

Less settled is precisely how these technologies are bad for us and how exactly these companies wrong us.

The California bill and the lawsuit both argue that one notable form of harm can be understood through the lens of addiction. They argue that social media companies are harming a particularly vulnerable group, namely young adults and children, by producing an addicting product.

While this way of understanding the problem certainly has plausibility, one might favor other ways of explaining the problem. The way that we frame the moral relationship users have with technology will shape legal argumentation and future regulation. If our aim is to forge a morally sound relationship between users, technology, and producers, it is important to get the moral story right.

What makes social media addicting is the fact that it has become especially adept at producing content that users want to engage with. Complex algorithms learn their users' predilections and can accurately predict the kinds of things people want to see. The ability of AI to manipulate us so effectively highlights our failure to recognize the importance of attention – a valuable good that has gone underappreciated for far too long.

First, our attention is limited. We cannot attend to everything before us and so each moment of attention is accompanied with non-attention. If I am paying attention to a film, then I am not paying attention to the cars outside, or the rain falling, or the phone in my pocket.

Second, attention is susceptible to outside influence. If someone is talking loudly while a film plays, I may become distracted. I may want to watch the film closely, but the noise pulls my attention away.

Third, attention is related to many foundational moral rights. Take, for instance, freedom of thought. We might think that a society with no laws restricting what you are allowed to think, read, or say thereby guarantees freedom of thought. However, unless your attention is respected, freedom of thought cannot be secured.

We need only think of Kurt Vonnegut’s story “Harrison Bergeron” to show what this claim misses. In it, Harrison Bergeron lives in a society that goes to great lengths to ensure equality. In order to make sure everyone remains equal, those who are born with natural talents are given artificial burdens. For Harrison, who is exceptional both physically and mentally, one particularly clever tactic is used to ensure he does not think too much. Periodically, a loud, harsh sound is played through an earpiece. This makes it impossible for Harrison to focus.

The relevant point here is that even if no law exists that prohibits you from thinking whatever you please, reading what you want, or discussing what you wish, your freedom of thought can be indirectly overridden.

By utilizing the fact that your attention is limited and not fully voluntary, another party can prevent you from thinking freely. Thus, although our rights may be respected on paper, assaults on our attention may inhibit us from utilizing the capacities these rights are supposed to protect in practice.

When we interact with technology, we must give our attention over to it. Furthermore, much of the technology we interact with on a day-to-day basis is designed specifically to maintain and increase user engagement. As a result of these design choices, we have developed technology that is highly effective at capturing our attention.

As predictive technology improves, machines will also improve their ability to distract us. The result of this will mean that more people will spend more time using the technology (e.g., watching videos, reading news pieces, viewing content produced by other users). The more time people spend using this technology, the less they can spend attending to other things.

If our attention is limited, can be controlled from the outside, and is vital for utilizing other morally important capacities, it seems clear that it is something that should be treated with respect.

Consider how we tend to think that it is rude to distract someone while they are trying to concentrate. It rarely feels satisfying if the person causing the distraction simply replies “Just ignore me.” This response denies a crucial reality of the nature of attention, viz., it is often non-voluntary.

Furthermore, it would be even worse if the distracting person tried to mask their presence and distract someone secretly, and yet this is precisely what a great deal of our technology does. It exploits the non-voluntary nature of our attention, overrides attentional freedom, and does so in the most discreet way possible. Technology could be designed in a way that respects our attentional freedom instead of covertly undermining it: for example, by periodically prompting the user to consider doing something else rather than endlessly presenting more content to engage with.

Rather than focusing on technology’s tendency to encourage addictive behavior in young people, I would like us to think about the effects technology has on all users’ attentional freedom.

Technology that is designed to distract you is harmful because it overrides your attentional freedom. When you use this technology, you are less free. This analysis must overcome at least two challenges, both centered around consent.

The first is that we consent to use these products. To argue that my phone wrongfully harms me because it is distracting seems like arguing that a book wrongfully harms me if it is so gripping that I cannot put it down.

However, while a book may be enticing and may even be created with the hope that it captures attention, the book does not learn about what captures attention. There is a difference between something capturing your attention because it is interesting and something that learns your preferences and sets about satisfying them. What makes AI-driven technology unique is that it can fine-tune the kinds of things it offers you in real time. It knows what you click on, what you watch, and how long you engage. It also relies on the involuntary part of attention to keep you engaged.

The second argument is about general human interaction. If it is wrong to affect someone’s attention, then daily interactions must be wrong. For instance, if someone walks down the street and asks me to take a flier for a show, do they wrong me by distracting me? Do all interactions require explicit consent lest they be moral violations? If our moral analysis of attention forces us to conclude that even something as trivial as a stranger saying hello to you constitutes a moral wrong because it momentarily distracts you, we will have either gone wrong somewhere along the way, or else produced a moral demand that is impossible to respect.

To answer this second objection, one thing we can say is this: when someone distracts you, they do not necessarily wrong you. Someone who tries to hand you a flier in the street effectively asks for your attention, and you have the opportunity to deny this request with fairly little effort. Notably, if the person who asks for your attention continues to pester you, and follows you down the road as you walk, their behavior no longer seems blameless and quickly turns into a form of harassment. The moral problem emerges when someone intentionally tries to override your attentional freedom. Because attentional freedom is connected to a set of important freedoms (e.g., freedom of thought, freedom of choice, etc.), if one can override another's attentional freedom, they can override those other important freedoms indirectly.

If technology harms us because we become addicted to it, then we have reason to protect children from it. We may even have reason to provide more warnings for adults, like we do with addictive substances. However, if we stop our analysis at addiction, we miss something important about how this technology operates and how it harms us. When we see that technology harms us because it overrides our attentional freedom, we will need to do more than simply protect children and warn adults. Several new questions emerge: Can we design technology to preserve attentional freedom, and if so, what changes should we make to existing technology? How can we ensure that technology does not exploit the non-voluntary part of our attention? Are some technologies too effective at capturing our attention, such that they should not be on the market? Is there a right to attentional freedom?

Trigger Warnings and the Perils of Mission Creep

In a recent column, Eli Schantz presents a compelling case for trigger warnings. He argues that such warnings are communicative acts that express respect for students grappling with trauma and the desire to support them in that struggle. They are, in short, a “basic act of kindness,” akin to showing solicitude for a friend’s feelings before broaching a difficult subject with them.

All of this sounds very reasonable, and indeed, were trigger warnings only used to show care and concern for those suffering from traumatic events, there might be little to object to. The trouble is that, in the real world, the use of trigger warnings has undergone mission creep: it has expanded beyond anything remotely related to trauma. Most worryingly, trigger warnings are now often used to signal that the curricular material for which the warning is given contains morally problematic or offensive content. In what follows, I will address two kinds of trigger warning mission creep and offer a suggestion for what we should do about them.

Schantz’s argument itself illustrates one kind of mission creep. The term “trigger warning” suggests, of course, that the purpose of the warning has something to do with preventing triggering — the onset of traumatic stress. This was, indeed, the original purpose of such warnings when they first emerged in online spaces where sexual violence and abuse were discussed. However, Schantz insists that “trigger warnings are not about minimizing emotional distress or intrusive thoughts,” but about expressing respect. Of course, if that is their purpose, then there is no reason why trigger warnings should be limited only to curricular material or activities likely to trigger traumatic stress; any reference to potentially traumatic events, ranging from parental divorce to sexual abuse, warrants a trigger warning. Furthermore, there is then no longer much justification for limiting trigger warnings to expressing respect for those who suffered traumatic experiences — surely those who have suffered in any way deserve respect and compassion at least as much.

Schantz might point out that showing kindness is an imperfect duty: it does not follow from his argument that an instructor is obligated to provide trigger warnings prior to every reference to potentially traumatic events, or in deference to every kind of suffering. Again, though, the real-world application of his conclusion must be our guide in evaluating the argument. If students came to believe that trigger warnings are acts of compassion and kindness, it is not hard to imagine how the reputation of professors who fail to provide them would fare.

One factor enabling trigger warning mission creep of this kind is the increasing prevalence of the idea that potentially traumatic experiences invariably lead to trauma, with the corollary that most college students are traumatized. Schantz appears to endorse this view when he identifies the group consisting of students who have had potentially traumatic experiences, such as adverse childhood events, with the group of those who are “deeply and unforgettably traumatized.” In fact, while numerous studies confirm that a majority of incoming college students have experienced potentially traumatic events — most commonly, the sudden unexpected death or serious illness of a loved one — there is little evidence that most of these students suffer traumatic stress or functional impairment from these experiences. Interestingly, these findings call into question both the Amna Khalid/Jonathan Haidt line that college students are somehow more fragile than they once were, and the argument that trigger warnings are a reasonable response to widespread trauma among college students.

The kind of mission creep that truly worries me, however, is different from what I have so far discussed. It is undeniable that trigger warnings are now often utilized not with a view to addressing trauma, but to signal that the curricular content at issue contains something morally problematic or offensive. How else to explain putting a trigger warning on Peter Pan because of its “odd perspectives on gender,” or trigger warnings for blasphemy? A quick Google search unearthed two nearly identical documents from the University of Michigan and University of Connecticut providing guidance to faculty for the use of trigger warnings. They both state that the terms “trigger warning” and “content warning” can be used “interchangeably,” with the latter defined as a warning of content that “may offend or upset some people.” In other words, the idea that trigger warnings are meant only to address trauma is passé in the academy; as the University of Michigan document boldly states, “content and trigger warnings are intended to serve all students . . .” (emphasis added).

What is the effect in the classroom of expanding the scope of trigger warnings to “sensitive” content? I’m not entirely sure, but one indication comes from an incident at Hamline University I’ve written about previously. There, an adjunct art professor provided a trigger warning before showing a fourteenth-century painting of the Prophet Muhammad in class. After one student complained, the university administration rescinded its offer to the adjunct to teach the following semester and publicly decried her classroom activity as Islamophobic. When The New York Times reached out to the complainant, she stated that the adjunct provided a trigger warning “precisely because she knew such images were offensive to many Muslims.” In other words, for the complainant, the fact that the adjunct believed that the image warranted a trigger warning proves that she should not have shown it in the first place. For professors, this suggests that to put a trigger warning on offensive content is to put a target on one’s back.

Professors have good reason to include trigger warnings if their class materials or activities feature graphic content likely to elicit traumatic stress. This was the original purpose of trigger warnings, and it remains a good purpose. Unfortunately, however, the scope of their use has expanded far beyond this relatively narrow context. Our aim, then, should not be to wholly eliminate trigger warnings, but to restore them to their original and best use.

Contra Khalid: A Defense of Trigger Warnings

Amna Khalid, writing at Persuasion, argues that trigger warnings are futile. The research, she says, shows that trigger warnings do not minimize emotional distress or intrusive thoughts; she references, for example, a meta-analysis which found “that people felt more anxious after receiving the warning.” But beyond these empirical critiques, Khalid also asserts that trigger warnings “pander to student sensitivities—to the extent that it starts undermining the mission of the university.” When trigger warnings are used, she says, “we fail to equip our students with the skills and sensibilities necessary to cope with life” and “[do] them a great disservice.” “Instead of coddling our students,” she writes, “we should be asking why they feel so emotionally brittle. Might it be that their fragility is the result of limited exposure to what constitutes the human condition and the range of human experience?” She concludes: “perhaps, in the end, what [students] need is unmediated, warning-free immersion in more literature, not less.”

Khalid’s argument is heavy on generalization — and lacking in rigor. It’s worth noting at the outset that the article she cites as evidence that trigger warnings don’t minimize intrusive thoughts doesn’t mention trigger warnings, and the meta-analysis she cites is a pre-print. But the problems in Khalid’s argument extend well beyond the data she cites; and no matter the pedigree of those who support it, we have good reason to reject it. Khalid doesn’t just advance a misinformed argument; she fundamentally misunderstands the point.

I will not bury the lede: I argue here that trigger warnings represent a basic act of kindness which demonstrates our respect for the trauma others have endured.

In what follows, I discuss adverse childhood experiences; violent crime and physical violence; severe illness; PTSD; and, finally, sexual assault and rape. Whether or not you choose to engage with this work, you have my thanks.

. . .

The banality of trauma is difficult to overstate. Adverse childhood experiences, as defined in the Journal of the American Medical Association, include

experiencing physical, emotional, or sexual abuse; witnessing violence in the home; having a family member attempt or die by suicide; and growing up in a household with substance use, mental health problems, or instability due to parental separation, divorce, or incarceration.

60.9% of adults have had at least one adverse childhood experience; 15.6% have had four or more. 82.7% of Americans have been exposed to a traumatic event. 2.6 million Americans over the age of 12 have been the victim of a violent crime.

In the context of higher education, the pattern persists. 35% of matriculating undergraduates have seen a loved one experience a life-threatening illness or had such an illness themselves; 24% have personally seen or been the victim of physical violence, and 7% have been sexually assaulted. The same study found that 9% of matriculating students met criteria for PTSD. 20.4% of women at American universities reported experiencing non-consensual penetration, attempted penetration, sexual touching by force, or assault via inability to consent — since they have been enrolled at their institution.

Khalid’s suggestion that students are fragile because of “limited exposure to what constitutes the human condition,” then, is either ignorant or dishonest: students come to the classroom bearing the full weight of the trauma which has been inflicted upon them.

But the problems for Khalid’s argument run deeper. The banality of trauma paints a picture which is difficult to ignore: every day, you interact with people who have been deeply and unforgettably traumatized. And contained in this truth is a question: how will this change how you interact with others?

On one hand, you may choose compassion. To broach a difficult conversation with a friend, for example, you may say to them: “There’s something difficult that we need to talk about soon, but I understand it if you’re not ready right now. Let me know when we can meet, and in what setting you’d be most comfortable having this conversation.”

Similar conversations occur in the professional context. Medicine and social work, for example, have recently begun a shift towards trauma-informed practice. Prior to discussing sexual health or other sensitive topics, a physician may say: “I have some questions which can be uncomfortable. I ask them because I want to provide the best care that I can, but I also understand if you’re not in a place to talk about them right now.” A social worker, when onboarding a new client, may say “I understand that things have been challenging for you lately, but I want to meet you where you are. Tell me when you’re ready to talk about what’s been bothering you, and I’ll do my best to support you in whatever ways you need.”

Or, analogously, a professor may say to their students: “As part of our next class, you may be exposed to topics and material that may bring about complex emotions. I want you to know that, for all of my concern for you as a student, I care for you as a human being more. I will do my best to ensure that our conversation is respectful and affirming; but if you need to not participate in this conversation, or not attend this particular discussion, I completely understand. And if you need support before or after, I am here to listen and help however I can.”

Each of these statements, spanning personal and professional interactions, represents a “trigger warning” of a kind: critics frequently ignore that portending a difficult conversation is a normal part of both personal and professional life.

But these critics also misunderstand the purpose of such statements. Trigger warnings are not about minimizing emotional distress or intrusive thoughts.

Furthermore, it should be taken as an obvious truth that trigger warnings increase anxiety: anyone who is told that a difficult conversation lies ahead will be understandably anxious. When Khalid and the researchers she cites argue through reference to data on these outcomes, they fundamentally miss the point.

Trigger warnings, as represented in all of the examples above, are a communicative act: they communicate a speaker’s understanding that traumatic experiences are ubiquitous, their desire to support others, and their respect for how challenging a conversation can be. They portend what is to come, but vitally, communicate that you are not alone in your struggle. Trigger warnings, then, show a respect for the trauma which others have endured, and solidarity with them as they navigate life after; they represent a basic act of kindness through which we, as individuals and as professionals, can express our respect for others. When understood in this light, Khalid’s argument against trigger warnings is made all the more cruel. To “equip our students with the skills and sensibilities necessary to cope with life,” should we withhold our respect and kindness from them? Should we ensure that they experience “unmediated, warning-free immersion” in the content of their trauma, and extol our virtue for doing so?

I answer no — but the choice remains yours. Compassion is not the only option, and you may choose its alternative; and as I have written in these pages before, the choices you make represent who you really are. If trigger warnings represent coddling or pandering, count me among the coddlers and panderers; if respecting the trauma of others conflicts with the mission of the university, I reject the university and all it stands for.

I, for one, will choose compassion.

ChatGPT and the Challenge of Critical (Un)Thinking

photograph of statue of thinking man

For the past few weeks there has been growing interest in ChatGPT, the new artificial intelligence language model that was “programmed to communicate with people and provide helpful responses.” I was one of the curious who had to try it and figure out why everyone was talking about it.

Artificial intelligence is not a new thing; as an idea it has been around for decades, first introduced in 1950 by Alan Turing, the British mathematician generally considered to be the father of computer science. Later, in 1956, John McCarthy coined the term “artificial intelligence” at a conference, giving birth to a new field of study. Today it is everywhere; we use it without even knowing, and advancements in the area are creating entirely new fields of inquiry, bringing with them new ethical dilemmas that range from the question of what (if any) moral rights to attribute to A.I. to the design of new digital rights that span different milieus and carry political and legal consequences – see, for instance, the European Union’s efforts since 2021 to create a legal framework for the rights and regulation of AI on the continent.

ChatGPT is something unique – at least for now. While a recent development, it seems almost too familiar – as if it had always been there, just waiting to be invented. It is a Google search on steroids, with much more complexity in its answers and a “human” touch. Once you read the answers to your questions, what grabs your attention is not only how fast the answer is provided, but also how detailed it seems to be. It mimics our ways of thinking and communicating with others remarkably well. See, for instance, what happened when staff members at Vanderbilt University used it to write an email responding to the shooting at Michigan State – a well-written 297-word missive which might otherwise have been well received. At the bottom of the email, however, was a line that read as follows: “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023” – and it outraged the community. The Associate Dean of the institution soon apologized, saying that the use of the AI-written email contradicted the values of the institution. This is one (of no doubt many) examples of how the use of this technology may disrupt our social and cultural grids. This new tool brings new challenges, not only for education – how students and professors incorporate this technology into their practices – but also for ethics.

Contemporary models of education still rely heavily on regular evaluation – a common mission across educational institutions is to foster critical thinking and contribute to the development of active and responsible citizens. Why is critical thinking so valued? Because being reflective – thinking about the reasons why you act and think the way you do – is necessary for fully participating in our social world. Learning is a process through which we form our judgment and, in doing so, build our moral identities – who we are and what we value. To judge something is not as easy as it may initially seem, for it forces each of us to confront our prejudices, compare them to reality – the set of facts common to all of us, what the world is made up of – and take a stand. This process also moves us from an inner monologue with ourselves to a dialogue with others.

What happens when students rely more and more on ChatGPT to do their homework, to write their essays and to construct their papers? What happens when professors use it to write their papers or books or when deans of universities, like the example mentioned above, use it to write their correspondence? One could say that ChatGPT does not change, in essence, the practices already in place today, given the internet and all the search engines. But insofar as ChatGPT is superior in mimicking the human voice, might its greatest danger lie in fostering laziness? And shouldn’t we consider this laziness a moral vice?

In the Vanderbilt case, what shocked the community was the lack of empathy. After all, delegating this task to AI could be interpreted as “pretending to care” while fooling the audience. To many it seemed a careless shortcut taken for time’s sake. Surely it shows poor judgment; it just feels wrong. It seems to betray a lack of commitment to the purpose of education – the dedication to examine and think critically. In this particular context, technological innovation appears as nothing more than a privileged means to erode the very thing it was supposed to contribute to, namely, thoughtful reflection.

While technologies tend to make our lives much more comfortable and easier, it’s worth remembering that technologies are a means to something. As Heidegger pointed out in his emblematic text “The Question Concerning Technology” (1954), we tend to let ourselves be charmed and hypnotized by technology’s power, while forgetting the vital question of purpose – not the purpose of technology but the purpose of our lives, as humans. And while ChatGPT may be great for providing context and references on virtually any topic of research, we cannot forget that the experience of conscious thinking is what makes us uniquely human. Despite all appearances of coherent and well-ordered prose, ChatGPT is only mirroring what we, humans, think. There remains one thing it neither has nor can mimic: our emotions and our ability to respond in a singular manner to specific situations.

If we generalize and naturalize the use of these kinds of technologies, incorporating them into our daily lives, aren’t we making a choice of non-thinking, settling for an instantaneous response that serves a strictly utilitarian purpose? Heidegger says that “technology is a mode of revealing,” insofar as what we choose (or do not choose) reveals the ways in which we frame our world. And if we choose not to think – believing that something else can “mirror” our possible thought – aren’t we abdicating our moral autonomy, suspending the human task of reflecting, comparing, and judging, and instead embracing a “dogmatic” product of a technological medium?

Is Abandoning Objective Journalism a Moral Failure?

photograph of newsstand in subway filled with celebrity magazines

One of the by-products (or perhaps causes) of folks’ belief that we live in a “post-truth” world is the conviction that there is no such thing as objectivity – an account of facts free from the distortion of personal bias. If there is no such thing as “Truth” with a capital “T,” there are no mind-independent facts; the truth of everything we say depends on some perspective, and therefore everything is subjective.

Certainly, the idea that we can adopt an objective “view from nowhere” is questionable, but is it worth abandoning the concept of objectivity altogether?

Apparently, journalists are increasingly adopting this mode of thinking, with some even claiming that throwing aside a commitment to objectivity can build public trust. But is this true? Also, if journalists wish to give up on objectivity, should the public rethink the social place of journalism?

In the 1990s academics including historians, sociologists, scientists, and philosophers engaged in what were called the “science wars.” The central issue of these debates was the authority of science in making claims about the world. One side defended the ability of science to make authoritative claims about the world rooted in the idea of objective science, while the other side preferred to study the claims of science in terms of the social, political, and economic forces that act on the sciences. Scientific objectivity, it was claimed, is a myth, and instead the conclusions of science are not rooted in objective evidence and logic, but in social power structures. Scientific conclusions reflect the social influences at work that lead to its production.

Even scientists have, at times, claimed that objectivity is a myth. For example, computer scientist Timnit Gebru has argued, “scientists must understand that their science cannot be divorced from the world’s geopolitical landscape, and that there are no such things as meritocracy and objectivity.” She cites Sarah Marie Stitzlein who rejects “the view from nowhere” as first articulated by Thomas Nagel. Whereas Nagel understood objectivity as an aperspectival Archimedean point, Stitzlein rejects this ideal and the corresponding notion of truth it carries. Instead, knowledge is always socially and historically located. Our inquiries are driven by what interests us and what we value.

But all this means is that aperspectival objectivity is a myth, not that objectivity in general is mythical.

To many scholars, it isn’t a surprise that mechanistic physics came to prominence during the industrial revolution, or that the lucrative nature of pharmaceuticals makes scientists pay more attention to mental illness problems as biochemical in nature. But it is a false dichotomy to say that without aperspectival objectivity, there is no such thing as objectivity at all. Helen Longino’s “The Fate of Knowledge,” written in response to the science wars, argues for objectivity through intersubjectivity. If perspective affects what we know, then let’s subject our assumptions to as much criticism as possible to weed out those idiosyncrasies that are indefensible.

Heather Douglas has articulated eight distinct kinds of objectivity that do not reduce to each other and do not depend on a view from nowhere. For example, there is manipulative objectivity where we find success at manipulating the world. There is also convergent objectivity, which considers whether people pursuing questions in different ways might come to the same answer. There is also procedural objectivity, where we eliminate individual judgment in favor of protocols and procedures for reaching conclusions. Different forms of objectivity might be sought in different contexts, but they do not depend on adopting a view from nowhere.

The scientific worldview – the idea that we can discover some truth about the world using experimentation and evidence to reach our conclusions – is not exclusive to science. Journalism is founded on the idea of reporting on the events of the world using journalistic investigation. Despite this, Leonard Downie Jr., a former executive editor of The Washington Post, writes,

increasingly, reporters, editors, and media critics argue that the concept of journalistic objectivity is a distortion of reality… they believe pursuing objectivity can lead to false balance or misleading “bothsidesism” in coverage of stories about race, the treatment of women, LGBTQ+ rights, income inequality, climate change, and many other subjects. And in today’s diversifying newsrooms, they feel it negates many of their own identities, life experiences, and cultural contexts, keeping them from pursuing truth in their work.

It should be noted how inherently contradictory this view is. If we abandon the concept of objectivity, then what does “false balance” even mean? If it isn’t an objective consideration of what is balanced, then it is a subjective view of what is balanced (which is to say it isn’t balanced at all). What does it mean to say that you won’t cover a misleading “side” of the story if you explicitly state you aren’t even trying to be objective? The fallacy here is thinking that objectivity must include a view from nowhere. But, Nagel didn’t believe such a notion of objectivity would obligate reporting pseudo-science and debunked claims. A view from nowhere would not require teaching both relativistic and Aristotelian physics as two sides of the story.

Bothsidesism has nothing to do with “being objective” – such a claim betrays a fundamental misunderstanding of what objectivity demands. Being objective means asking questions: What kind of declarations should count as “facts”? What are the epistemic justificatory limits of the claim being made? No one is asking the press to report falsehoods for the sake of false equivalency. There is, however, a flipside to consider. By definition, every scientific discovery is a break from the previously established consensus. The more we begin to think that we are not obligated to consider “the other side” because we think we know what is right and true, the easier it becomes for us to become more entrenched and dogmatic in our views – to resist discovery. Refusing to reconsider one’s views in light of new evidence means we’re the ones peddling misinformation.

Of course, knowing where to draw the boundaries between legitimate breaks from consensus and mere pseudo-science and misinformation is difficult. But if we are going to draw such a boundary, it must be using a standard outside of ourselves; that’s the point of objectivity.

If the standard merely represents whatever a journalist happens to believe is right, it makes journalism less transparent and less accountable for the stories that are told and the harm they cause.

It is worth considering what abandoning objectivity in journalism would mean for society. Young journalists are increasingly tempted to take on the role of activists. As The Washington Post article reports, “many journalists want to make a difference on such issues as climate change, immigration and education.” This includes journalists sharing their political views on social media or attending protests or writing first-person essays about their experiences. Meanwhile, journalists are increasingly complaining about being harassed and emphasizing the threat to democracy this poses. But journalists cannot have it both ways: they cannot simultaneously insist that their role is to serve as the guardians of democracy and contend that theirs is just another voice in the crowd, no different than protestors, lobbyists, think-tank analysts, or podcast hosts. We don’t have the same social reverence for these groups; being a protector of democracy means standing above the fray.

All these reflections suggest that there’s some basic confusion at the heart of the debate about who and what journalism is for. Downie Jr.’s conclusions on the prospects of objectivity, for example, are based on a survey of more than 75 news leaders, journalists, and other experts in print and broadcast media. Suspiciously absent from this survey is anyone from the broader public who depends on that news. In the end, journalism isn’t about indulging in journeys of self-discovery for the individual journalist, it’s about providing a public service. The Washington Post reminds us that “Democracy Dies in Darkness,” but what happens to public trust if the Post becomes the place objectivity goes to die?