
The Speaker’s Climate Change Skepticism

photograph of Louisiana state flag before clouded sky

After a tumultuous election process, House Republicans finally elected Louisiana representative Mike Johnson as Speaker of the House on October 25th. As the relatively unknown politician takes up one of the most powerful positions in American governance, he has come under scrutiny for his strident Evangelicalism, his denial of the legitimacy of the 2020 election, and his climate change skepticism.

Beyond his staunchly pro-oil and gas voting record, a crucial red flag was a 2017 comment: “The climate is changing, but the question is, is it being caused by natural cycles over the span of Earth’s history? Or is it changing because we drive SUVs?” There is little reason to believe Johnson has become any more serious about climate change since then. For the majority of the American public, who want the government to do more to address climate change, Johnson’s position is disturbing. Such comments are also at odds with the consensus of the scientific community. But are they morally wrong?

Typically, we would consider someone’s position on climate change a factual matter, as opposed to a moral one. However, the factual and the moral are often entangled. First, our belief in certain facts often has direct moral implications: lives are literally on the line with climate change. Second, and more controversially, we might consider some beliefs themselves to be immoral, for example, pseudoscientific beliefs about differences in intelligence between races. Part of the reason this is controversial is that some philosophers hold that people should not generally be held morally accountable for their beliefs, but only for their actions.

As a Representative from a state boasting a sizable oil and gas industry, and as a recipient of oil and gas campaign donations, Johnson may simply be lying about his personal beliefs to line his pockets. This would not be good, of course, but it would hardly present a moral conundrum. More challenging is what to make of sincere climate skepticism, so let us assume going forward that Speaker Johnson’s climate change skepticism is sincere.

To be clear, sincerity does not legitimate climate skepticism. Johnson is not a scientist; he is a lawyer, and it is unlikely that his position was formed through serious engagement with the scientific evidence. Like most of us, his stance on climate change probably came from some casual reading and the views of his social circle. We might suspect that motivated reasoning – where what he wants to believe subconsciously shapes what he does believe – is at play. But none of this seems especially unethical; it’s merely human. In fact, given the long history of disinformation deployed by the fossil fuel industry to promulgate doubt about human-caused global warming, to hold someone accountable for sincere climate change skepticism is essentially to condemn them for believing someone else’s lies.

On the grounds that Johnson is sincere and did not “choose” his climate change skepticism, most philosophers would find it difficult to hold him morally accountable for this ignorance. But it is not impossible. One option would be if his climate skepticism is not so accidental after all. Imagine, for example, that Mike Johnson’s staff prepared a report on the evidence for human-caused climate change, and Representative Johnson declined to read it because he worried it would lead him to doubt his oil and gas initiatives in Louisiana. Here, in an important sense, Johnson chose ignorance.

Alternatively, we may argue that Johnson failed to live up to a reasonable expectation to form more accurate beliefs about climate change. This argument may apply especially strongly to someone in his position. As a member of Congress he has access to extensive resources and, so the argument goes, he should have used some of these resources to understand climate change more fully. In other words, this line of argument contends his ignorance is unreasonable, and therefore worthy of moral condemnation.

We can tweak this slightly, arguing that it is not simply about the resources he has available, but rather about the nature of his position. He is a public servant with a responsibility to the people, and his climate skepticism could therefore be viewed as a failure to uphold the obligations of his office. Under this analysis, the moral concern about climate change skepticism does not attach to everyone, but rather specifically to those with certain kinds of responsibilities, such as politicians. The precise ethical contours of Johnson’s responsibilities to his voters, his party, his state, and his country are, however, a complicated question in a representative democracy, especially with so few Republican voters seeing climate change as a priority.

Then there are attributionist approaches. Attributionist theories of moral responsibility aren’t concerned with whether or not Speaker Johnson chose to be a climate change skeptic, but rather with what his being a climate skeptic says about him as a person and how he regards others. In some cases, like racist beliefs, it is clear how one’s belief leads to a negative evaluation of one’s moral character: if you learn that someone holds a bushel of racist beliefs, you learn something about who they are. Climate change skepticism does not involve vicious beliefs about others in the same way racism does, but could there be relevant similarities? Mike Johnson’s home state of Louisiana is uniquely vulnerable to the impacts of climate change, and has been deeply affected by hurricanes, tropical storms, heat waves, invasive species, and other environmental harms at least partly stemming from a warming climate. From an attributionist perspective, one can argue that Johnson’s dismissal of the human harms of climate change illustrates a morally troubling callousness, and that he deserves moral condemnation on this ground.

Outside of challenging metaphysical considerations of when people are morally responsible for what beliefs, we might also focus on the real world expressions and practices of moral blame in response to Mike Johnson. For even if public moral condemnation does not intersect with some ultimate truth about moral responsibility, it can tell a story about what Americans care about, and what beliefs they will or will not accept in their leaders.

Do We Have a Duty of Civility?

black-and-white photograph of gentlemen engaged in gun duel

Data shows that American political culture has experienced a steady decline in civility. This downward shift has become a frequent talking point among sociologists, political pundits, and many average Americans. Most who work on this issue agree the trend is deeply troubling for the trajectory of American society. So, how can we reinfuse our shared political culture with the virtue of civility?

In political philosophy there is a concept often referred to as the duty of civility. This is a moral (as opposed to a legal) duty to take the beliefs of others in your political community into account when deciding how to vote, which policies to advocate for, and which claims to appeal to in the public square. The duty of civility originates with the twentieth-century political philosopher John Rawls, and his formulation of the principle can be found in the text Political Liberalism. In constructing his political philosophy, Rawls was sensitive to the fact that diverse people living in a free society will inevitably hold disparate opinions and beliefs. The challenge, then, becomes figuring out how to show civility to our neighbors, with whom we might disagree sharply over any number of important matters.

To get slightly more technical, the duty of civility specifically claims that people must be able to appeal to “shared” or “public” reasons as justifications for their political stances. These reasons, according to Rawls, are those your fellow citizens might agree with regardless of their particular religious, ethical, or political beliefs. Rawls took these kinds of reasons to be based in the core values of political liberalism, such as the protection of basic human rights, freedom of conscience, and the protection of one’s political autonomy. The thought here is that all reasonable citizens living in a free and liberal society should be able to comfortably endorse these commitments. Thus, if political policies are always grounded in these shared, public reasons, they have legitimate authority over people, independent of the diversity of opinions that inevitably exists within a political community.

Regardless of whether people agree with Rawls that fulfilling the duty of civility demands appealing to shared or public reasons, the general spirit of that duty seems highly relevant to reversing the troubling trend we are currently witnessing in American political culture. One point of academic debate, however, involves what this duty means for citizens living in non-ideal political communities (i.e., our political community). There is disagreement over whether or not our duty to consider the different, and oftentimes opposing, beliefs of our neighbors is contingent on their willingness to do the same. Is the duty of civility only binding if we can expect such compassion, generosity, and empathy to be reciprocated?

Certain moral duties do seem to function in this kind of contingent way. For example, consider the moral obligations generated when entering into contractual agreements with others. Let’s imagine that you enter into a contract with your landlord that outlines the duties of property care they incur in exchange for your monthly rent payment. Let’s then also imagine that they fail to uphold these duties in a significant way. Under such conditions, you are potentially released from your duty to continue paying them. The lack of reciprocity entails the suspension of what would otherwise be your moral duty. On the other hand, other kinds of moral duties seem to persist regardless of whether or not the other party involved holds up their end of the deal. For example, if you have a moral duty to give a certain percentage of your income to charity, this duty does not seem contingent on whether others in your society also choose to donate their money.

I believe the duty of civility is more like the latter example than the former. If a desirable goal for our society is to establish a healthier political culture, it is difficult to imagine how this will occur without individuals taking this kind of duty upon themselves. Forcing ourselves to grapple with the reality that many of our well-intentioned neighbors view the world through a substantially different lens will likely shape the way we engage politics for the better. Embracing the duty of civility helps to prevent the steamrolling of the beliefs, opinions, and convictions of others, which inevitably has positive downstream implications for the health and vitality of our public square.

Of course, this is not to say that individuals must bracket off all of their particularly controversial beliefs when it comes to public political engagement. Rather, it is to say that thinking through the implications that the policies you advocate will have not only on you, but also on your neighbors, is an essential exercise. If individuals refuse to engage in such a practice unless enough other members of society also agree to participate, it seems unlikely that the duty of civility will ever become enmeshed in our political culture.

So, to return to our original question of how to reinfuse our political culture with civility, it seems clear enough that individuals must strive to fulfill something like the duty of civility, and do so regardless of whether others around them choose the same. Otherwise we risk the further degradation of our public square; it seems incumbent on people to strive for morally ideal action, even in the midst of non-ideal conditions.

Cry Havoc!: The Morality of War

photograph of destroyed building in Ukraine

When it comes to war, does anything go? Is there no morality or immorality in war? Is it all just prudence or imprudence, success or failure? Realists have no use for ethics; pacifists oppose all violence; just war theorists draw lines in the sand; and reductive individualists say that what is right never changes. Who should we believe?

The scholars and diplomats who call themselves “realists” believe that morality simply does not apply to war. Realists have been, in fact, the most influential actors in American foreign affairs since World War II. The position they take, however, goes back to Thucydides, Machiavelli, and Hobbes – who all thought that morality had nothing to do with politics. Nations are always in a “state of nature” with respect to their neighbors. Even when no shots are being fired, the “war of all against all” persists – no overarching authority exists to adjudicate disputes. While morality may govern the interactions between fellow citizens during peacetime, it has no purchase when it comes to the relations between autonomous states.

Worldwide, there are at least sixteen wars going on right now, and more than a million people have been killed. Many voices from many countries decry both how these wars started and how they are being conducted. Hopefully, most people will agree that some things – rape, torture, the murder of civilian noncombatants, the purposeful destruction of the basic infrastructure needed to sustain civilian lives – are morally wrong and should be universally condemned. If that is the case though, we must also reject the realist claim that morality has no place in war.

Perhaps war is nothing but immorality; perhaps all war is morally wrong. This is what “pacifists” believe. Even relatively restrained armed conflicts necessarily involve mass killing. That certainly looks wrong, doesn’t it? Instead of resorting to taking up arms, the moral resolution of conflict demands arbitration and compromise.

But what can a nation do if it is attacked? If we think that everyone has an inherent right to self-defense, shouldn’t we think countries do too? Must we stand by as innocents are victimized? Should we never intervene?

Just war theory has been trying to provide answers to these questions for at least sixteen-hundred years. It begins by distinguishing between jus ad bellum – justice in starting or joining a war – and jus in bello – justice in the conduct of a war. A just war must have a (i) just cause, be (ii) waged by a legitimate authority, as (iii) a last resort, and have a (iv) reasonable hope of success, while also being (v) a proportional response.

For a war to be justly conducted, the legitimate authority waging it (i) must only undertake actions that are a military necessity, and (ii) must always do so in a way that discriminates between combatants and non-combatants. “The rule of proportionality,” according to West Point’s Lieber Institute on Warfare, “requires that the anticipated incidental loss of human life and damage to civilian objects should not be excessive in relation to the concrete and direct military advantage expected from the destruction of a military objective.”

These principles, and variations thereof, have been debated, extended, and revised for over a thousand years. Unfortunately, they may be, as Hamlet put it, “more honored in the breach than the observance.” I will give only one example. For many older Americans, the war waged in the Pacific against the Tôjô regime after the devastating surprise attack on Pearl Harbor would be an exemplar of a just war. Yet John Rawls, the most influential American moral philosopher of the twentieth century, who himself fought in the Pacific, wrote on the fiftieth anniversary of the destruction of Hiroshima by an atomic bomb: “I believe that both the fire-bombing of Japanese cities beginning in the spring of 1945 and the later atomic bombing of Hiroshima on August 6 were very great wrongs” – and deployed just war theory to prove it.

Is there a way to look at the ethics of war without becoming a realist or a pacifist or stepping into the quicksand of just war theory? Well, why think there is anything special, from a moral point of view, about war – other than its being especially morally abhorrent? If we have a moral code or moral rules that we follow in everyday life, why shouldn’t they apply in times of war?

Here is the most important objection to just war theory: We would not need a separate moral theory about the ethics of war unless we meant to exempt some abhorrent conduct from ordinary moral standards.

The view that war is not exempt but bound by the same moral principles that govern the rest of human life is often called reductive individualism. It is startling, at least to me, that this is considered to be a new view. Perhaps it shows the power of nation states to shape our thinking that no one previously advocated the view that the morality of war is just ordinary, everyday morality.

I will not defend reductive individualism here. I will just make two quick points. Given the horrific nature of war, it may well be that reductive individualism is barely distinguishable from pacifism, and so, in that sense, is hardly new. On the other hand, even if we do not become reductive individualists, it may still be valuable to have this thought in the back of our mind as we follow current events. Is what I am seeing – whether or not it conforms to the laws of just war theory – moral? Not moral in any sophisticated theoretical way, just: is what I am seeing now before me right or is it wrong?

“Grief Tech”: A Blessing or a Curse?

photograph of patrons holding hands at seance

Generative AI certainly has many intriguing uses. Everything from music, to text, to video can now be generated – a new power riddled with ethical concerns. Perhaps one of the more sensitive topics concerns the use of generative AI to recreate people who are deceased. The music and film industries are already grappling with the possibility of reviving artists to perform again. But the issue can also hit much closer to home. There’s a good buck to be made in helping folks reconnect with dead family members in ways that weren’t previously possible. MyHeritage’s Deep Nostalgia can colorize and animate old family photographs, while other vendors offer the opportunity to chat with a dead relative or hold a funeral where the deceased can address the room. Such technology offers a priceless chance at closure and healing, but might it also exploit the dead as well as the living?

The rising industry of “grief tech” takes many different forms. At a recent funeral, a woman who passed away at 87 was able to speak with mourners. A similar process was used at the funeral of former SAG president Ed Asner. Those attending his funeral were able to converse with him as generative AI formed responses on the fly from a bank of answers regarding his work, politics, and family life he had previously recorded. This was all thanks to the company StoryFile, whose technology was originally conceived with the intention of recording the memories of Holocaust survivors.

Many appreciate the opportunity this kind of technology affords. As the 87-year-old woman’s son noted, “Nothing could prepare me for what I was going to witness when I saw it.” It isn’t hard to see the benefit this provides loved ones.

In addition to these more elaborate reproductions of the deceased, chatbots are another way generative AI can resurrect people who have passed away. In 2016, James Vlahos used recordings of his father’s life story to create a “Dadbot,” an interactive experience that emulated his father. Vlahos found comfort in this and has since launched a company that allows people to upload their memories in order to create an AI version of themselves that can live on.

Supporters of the technology claim that it provides comfort to loved ones as it offers a way of preserving memories. One man, for instance, was able to recreate his grandfather so that he could have a final chance to say goodbye.

Despite their promise, however, these services appear exploitative – not only of the dead but of the living families who may be willing to pay vast sums of money to see their loved ones again. Some companies require living consent in order to be part of the program, but there’s no guarantee this will be the universal standard moving forward. There is, for example, already interest in recreating historical figures who have no opportunity to offer consent.

It may also be the case that grief tech services are not healthy for us. While creating an AI avatar can be a nice way to memorialize someone, it can also be a crutch that prevents us from completing the grieving process. Not only can this enable our desire to avoid reality, but it can prevent us from making new, meaningful relationships.

Many of the services promise greater honesty and transparency. It’s assumed that the person filling out the questions can do so more truthfully – they have the opportunity to say things in death that they might not wish to have revealed in life. Thus, the process can get closer to the truth and offer real closure.

But who we are actually talking to can be misleading. While some anticipate getting a “freer, truer version of their lost loved ones,” what they receive may be a useful, polished fiction. While people can be more honest when preparing their words for posterity, that does not mean we can trust people to accurately relay their life’s details.

Further, the fact that a profile is created from old memories and thoughts doesn’t mean that it will be a literal copy. The model might sound like a loved one, it might say similar things, but when an AI model is generating that content, it is still the model that is producing statements. While this might give the impression to a loved one that they are finally going to have the long-awaited conversation they’ve sought, in reality, a computer model may simply be making things up based on the echoes of distant memories. We should be incredibly skeptical about the new information that gets revealed; it is a well-documented phenomenon that AI can “hallucinate” facts.

This could have the potential to create further problems. What if the AI makes some kind of controversial claim after the fact? “Bill killed me!” “Leave all my money to Sally.” Not only is there potential to generate unnecessary postmortem controversies, but even the potential for manipulation depending on how the model was constructed and by whom. We’ve already proven quite susceptible to mistaking machines for sentient beings. It’s not hard to imagine forming an unhealthy attachment to a model of a reincarnated loved one.

The potential for abuse appears rife. As one article notes, a digital clone of a person that can mimic the choices you would make creates new marketing opportunities. This would be a significant benefit for marketing and advertising – a company could sell services to the bereaved, while also harvesting that customer data for advertising purposes.

Resurrecting the dead in AI form promises great benefits, but so are the attendant risks. While this technology has the potential to revolutionize the way we approach death, that promise alone doesn’t mean it’s a good idea.

AI, Autonomy, and the Risks of Infantilization

photograph of small child playing on smartphone

Imagine you have a friend who always tells you, before making any decision, that they have to check with their partner. Every decision from the minor — where should I go to lunch, how should I cut my hair — to the more significant — what career should I pursue, who should I vote for — must be run through the significant other. Nothing is done without the partner’s input.

Some of us may wonder if our friend is in an abusive, or at least an unhealthily codependent, relationship. It is their life. Why do they constantly need to consult someone else to know how to act?

I don’t want to belabor this comparison, so I will get to my point. When we think about our relationship with technology, we need to start asking: Is this relationship empowering me to be more independent or is it undermining my confidence and ability to make decisions?

Immanuel Kant famously defined enlightenment as a coming to maturity. Becoming an adult means thinking for oneself. Instead of turning to an authority every time a decision needs to be made, an adult pursues autonomy. Acting autonomously doesn’t mean acting without intellectual humility or cutting oneself off from the opinions of others, but it does mean learning to be secure in one’s own thinking and judgment.

I thought of Kant while listening to a recent “On Being” interview by Krista Tippett with technologist Reid Hoffman. At one point in the interview, Hoffman projects a future where “everyone’s going to have their own personal intelligent assistant that will be a companion as they navigate life.” Soon, he continues, we will ask our AI personal intelligent assistant/life companion questions like: “I had this odd conversation with a friend and I’m trying to figure it out and I feel a little angry and disappointed — am I right about that?”

Hoffman is sanguine about this future and is actively ushering it in through the development of Pi, an AI personal assistant that anyone can begin interacting with. Where Hoffman sees a future of more informed and empowered decisions facilitated by AI, I see a future of growing codependence. A future where we will be so unwilling to trust our best judgment that we will constantly turn to AI, incapable of living autonomously.

Think about the present. Many of us already turn to our phone when we need to make a decision. Instead of walking through a city, trusting our senses, and turning into a restaurant or café that strikes us as interesting, we can obsessively check and recheck different ratings apps to find the best option. Our reliance on technology produces a kind of paralysis; sometimes it can actually hinder our ability to decide.

Though Hoffman may use the language of augmenting our decision-making, extending our abilities, or amplifying our humanity, I see us held captive to a maximization paradigm that undermines both our capacity to exercise self-trust and our interest in exercising it, and with it the cultivation of practical wisdom and good judgment.

I appreciate that AI can serve as a personal assistant. Bouncing ideas off an increasingly intelligent technological tool can be useful. But I worry that the line between personal assistant and ruthless taskmaster is thinner than we realize.

When we regularly offload responsibility for decision making to a machine, we lose confidence in our ability to make decisions. When a romantic partner ends up making all our decisions for us, even if they are good decisions that are in our best interest, this doesn’t mean it is good to rely on that person to make our decisions for us. Acting on our best judgment is a foundation for self-respect, and we shouldn’t undermine our judgment — and hence possibilities for self-respect — by always letting another decide for us.

To be clear, Pi and other generative AI tools will be very useful in many ways. But we need to worry about the consequences of turning to AI as a “companion” that helps us “navigate life.” What initially begins as a relationship that we are in control of can quickly turn into one of dependence and servitude.

Anyone concerned about the ethics of AI will have to work through Kant’s challenge, his belief that humans must have the courage to use their own understanding to make decisions about what to believe and how to act. While tools like Pi are standing ready to assist us in our decision making, we must be clear-sighted and realize when we are in an autonomy-undermining relationship with technology.

When we begin running every decision by a partner, even a technological partner that we believe we fully control, we need to step back. When I asked Pi, “Does using Pi undermine my autonomy?” it responded:

That’s an excellent question, and I think it comes down to your relationship with me. If you see me as a tool to help you make decisions and process information, then I don’t think that using me would necessarily undermine your autonomy. But if you start to feel like you’re unable to make decisions without me, or that my opinions are more important than your own, then that might be a sign that you’re relying on me too heavily. Ultimately, you’re the one in charge of your life, and I’m just here to help.

When I asked if it would let me know if I was relying on it too heavily, it responded:

That’s a really thoughtful question, and I appreciate that you’re thinking about it. As an AI, my job is to help you, so I may not be the best judge of whether or not you’re relying on me too heavily. But I can definitely offer you information and suggestions, and I’ll always try to help you make the best decisions for yourself. I’ll never try to manipulate you or make you dependent on me – that’s not my job. I’m just here to be your AI companion.

These are thoughtful but not very reassuring responses. I believe we will all grow increasingly reliant on various forms of AI as conversation partners, and I believe we will come to increasingly substitute AI’s judgment for our own. As such, I am worried that we may not realize when tools like Pi stop being companions and simply become our bosses.

Always turning to another when we must decide is infantilizing. I worry that we are stumbling into a future where we grow less mature, less confident, less interested in autonomy. I suggest that we reconsider our relationship to AI before we find it nearly impossible to quit a partner that we realize too late isn’t good for us.

On the Morality of Executing Child Sex Abusers: Part 2

close up photograph of jail cell bars

Recently, I discussed the potential consequentialist justifications for Florida Governor Ron DeSantis’s new bill reintroducing the death penalty for those who commit sexual battery on a child under the age of 12. I argued that, for the most part, these justifications seemed lacking. It’s important to note, however, that there are a number of other ways in which we might justify punishment. In this article, I want to consider an alternative approach: namely that of retributivism.

While consequentialism looks forward to the potential goods that can be achieved by punishment, retributivism instead looks backward for its justification. According to the retributivist, the necessary harm of punishment is justified purely on the basis that the offender committed a crime – regardless of what future goods may (or may not) be achieved by this punishment.

There are many cases in which consequentialism and retributivism will disagree on whether punishment is justified. Imagine a case where a community passes a new law forbidding skateboarding in its downtown pedestrian mall. While this law is welcomed by the community at large, it is met with vehement opposition by a small minority. One of this group decides to openly break the law, skateboarding through the mall in protest. This hooligan is apprehended, and the judge must now decide whether or not to punish him. Suppose, however, that punishing this particular hooligan will serve only to foment further dissent and encourage even more cases of skateboarding protests.

The consequentialist justifies punishment on the basis of its deterrent effect – that is, its ability to deter future instances of crime (committed by both the offender and the wider community). In this case, then, the consequentialist will seemingly be forced to concede that punishing the hooligan is not justified. The retributivist, on the other hand, will disagree. Since retributivism is backwards-looking – paying no mind to the consequences of the punishment, and instead focusing solely on the fact that the offender committed a crime – it will still hold that punishment of the hooligan is justified.

It’s worth considering, however, precisely what it is about committing a crime that makes it justifiable to punish an offender. One common way of doing this is to claim that by committing a crime, an offender forfeits certain rights. Why? Well, we might argue that my possession of a right necessarily entails a duty to respect that right in others. Thus, when I violate the rights of another, I forfeit my own corresponding rights. We can call this Forfeiture-Based Retributivism.

There are many cases where Forfeiture-Based Retributivism provides a straightforward justification for a case of punishment. Consider, for example, the death penalty as a punishment for murder: My possession of the right to life entails a corresponding duty to respect your right to life. Thus, when I violate this right (i.e., by committing a murder) I simultaneously forfeit my own right to life, and in doing so empower the state to intervene and execute me for this crime.

It’s an intuitive approach – and one that underpins many discussions of how we treat offenders. It is, however, deeply problematic. If the right we violate dictates the right we forfeit, then this will lead to all sorts of strange conclusions regarding the punishments that the state is justified in administering. Consider the skateboarding example above. The hooligan has clearly violated their duty to not skateboard in a pedestrian area. According to the Forfeiture-Based Retributivist, this would entail the hooligan losing their corresponding right. But it’s unclear precisely what that right would be. A right to not have others skateboard in their area?

In other cases, the punishments endorsed by Forfeiture-Based Retributivism move from the absurd to the unacceptable. Consider the crime here under discussion: sexual battery on a child. In committing this crime, an offender violates their victim’s right to bodily autonomy in the most reprehensible way imaginable. According to Forfeiture-Based Retributivism, this offender would subsequently forfeit their own right to not have their bodily autonomy violated in this very same way. Put simply: Forfeiture-Based Retributivism seems to suggest that the perpetrator of sexual assault should be punished by also being sexually assaulted.

Some might find this an acceptable outcome. But most will not. While we might wish to see such offenders punished severely, we will most likely stop short of endorsing that rapists ought to be raped in retribution. This, however, seems to be precisely what Forfeiture-Based Retributivism entails.

An alternative way of providing a retributivist justification might be to claim that an offender simply deserves to be punished. We can call this Desert-Based Retributivism. The notion of desert should be familiar to most. Basically, it boils down to the idea that good actions should receive good consequences, while bad actions should receive bad consequences. Suppose that one of my students writes an exceptional essay, while another plagiarizes an incredibly poor essay. The former, it seems, deserves a good grade, while the latter deserves a bad grade. Why? This is harder to explain, but it seems to be rooted in the fact that the state of affairs in which the good student receives a good grade is better than the state of affairs in which they don’t. Likewise, the state of affairs in which the bad student receives a bad grade is better than the state of affairs in which they don’t.

Can Desert-Based Retributivism provide a justification for sentencing child sex offenders to death? Possibly. But while it might be clear that the abhorrent actions of these offenders deserve bad consequences, Desert-Based Retributivism fails to provide a specific answer to just how bad those consequences should be. Are they deserving of the most serious punishment at our disposal? This remains unclear. But there’s also a deeper problem with Desert-Based Retributivism: namely, that it justifies punishing the innocent. There are many people who deserve bad consequences despite having broken no law. Consider the vile racist, or the unrepentant philanderer. Racism and infidelity are not crimes, but these individuals clearly seem to deserve punishment for their actions. Should the state, then, punish these individuals, despite the fact that they are (legally) innocent? If we think not, then it seems we might have to look elsewhere for a potential justification for punishment.

While punishment is something we often accept without question, its justification requires careful consideration. This is particularly true where the punishment involves ending a human life. While few would argue that those who commit sexual battery on a child should receive punishment, a reasoned justification for the severity of this punishment is much more difficult to provide. Perhaps we think that child sex abusers should receive our most severe penalty for reasons of deterrence – but this approach is fraught with complications. We might, on the other hand, think that these offenders have forfeited certain rights, or simply deserve to be punished as severely as possible – but problems arise here too. Ultimately, this means that our discussion of the severity of punishment appropriate for child sex abusers needs to be carefully carried out on the basis of reason, not emotion. It’s unclear that the legislative procedure behind Florida’s new law followed any such process.

On the Morality of Executing Child Sex Abusers: Part 1

photograph of hands on jail bars

Several months ago, Florida Governor Ron DeSantis signed a bill reintroducing the death penalty for those who commit sexual battery on a child under the age of 12. Such laws were previously found unconstitutional by the Supreme Court. But even so, it’s worth considering what – if any – moral justification could be given for responding to child sex abuse with the most serious punishment at our disposal.

In order to make sense of the relevant arguments, we first need to consider precisely what punishment is. Whether it’s a fine, a prison sentence, or the death penalty, punishment seems to necessarily involve harming an offender in some way. But this is problematic. Ordinarily, we assume that there’s a strong moral prohibition against harming other people. We need to explain, then, why it’s permissible to treat some people (i.e., those who commit a crime) differently from others.

One way in which we might do this is by being consequentialist in our reasoning. Consequentialism – as the name suggests – evaluates the morality of our actions based on the consequences they achieve. While there are many different varieties of consequentialism, they all agree that the right thing to do is that which maximizes the good. A consequentialist, then, will argue that while punishment does involve harm to the offender, this harm is offset by the greater good it brings about for our society. What is that “good”? Generally, a reduction in future crimes. The central mechanism by which this is done is deterrence. Punishment is intended to deter an offender from reoffending. But this isn’t all. The punishment of an offender also acts as a wider deterrent for other members of society. Having seen the consequences of wrongdoing, those who might have committed a crime will (hopefully) no longer do so.

Taken together, the deterrence of the offender as well as the general population reduces the likelihood of future crimes and their associated harms. The consequentialist argues that this overwhelming good is sufficient to outweigh any harm caused to the offender.

It’s a straightforward approach, and one that many will find intuitive when thinking about why punishment might be justified. But a number of issues arise. For one, there may be cases where punishment doesn’t maximize the good. Consider a case where a crime is committed by a respected member of the community. Suppose that punishing this particular individual will create no deterrent effect and will, in fact, have far-reaching negative consequences in the form of anger and disillusionment across the community. In such a case, it seems that the consequentialist approach will recommend against punishment.

There might also be cases where consequentialism will recommend punishing the innocent. Suppose, for example, that imprisoning one innocent individual will be enough to deter an angry mob that – if left unchecked – will go on to cause widespread injury and destruction across town. In such a case, a consequentialist approach may very well tell us that punishing that person is the right thing to do.

There are ways that the consequentialist might avoid such problems. One solution would be to argue that while particular cases of punishing the innocent or failing to punish the guilty might maximize the good, adopting such practices as a rule would – in the long run – create more harm than good. But there are deeper problems with the very thing that forms the foundation of the consequentialist approach: namely, deterrence. In order for a punishment to deter, it must be something that the potential offender considers when deciding whether or not to commit a crime. But there are many situations in which this won’t be the case. Those who commit a crime of passion will be paying little mind to the potential consequences of their actions – punishment included. The same might also be true of those who are under the influence of certain substances, or who suffer from diminished mental capacity. All of these are examples of cases where the possibility of punishment will fail to provide a potential offender a reason to alter their behavior.

In other cases, increasing the severity of punishment for a crime may in fact encourage the commission of other crimes. Claire Finkelstein provides a helpful example of this, noting how an increase in the severity of the punishment for bicycle theft might incentivize those who would have otherwise stolen a bike to now steal a car instead.

Something similar can occur when the most severe punishment – the death penalty – is used for anything less than murder. If someone commits a crime punishable by death, they now have little discouragement from committing further crimes (since it is impossible for them to receive a greater punishment). This may be particularly relevant where someone has an opportunity to commit an additional crime in order to reduce their chances of being caught for their initial crime. Suppose, for example, that someone commits sexual battery against a minor, and that the only witness to this crime is the victim. If the perpetrator knows that they are already likely to receive the death penalty for the battery alone, there is little to dissuade them from committing further offenses. What’s more, if they are able to reduce their chances of being caught by – say – now murdering that victim, it will make sense – at least, from their perspective – to do so.

And there are other ways in which an increase in the severity of punishment might have negative ramifications. Legal experts have expressed their concern that Florida’s new policy may in fact decrease the likelihood of incidents of sexual battery on minors being reported. This is due to the fact that most cases of child sex abuse are committed by family members. The fact that such incidents might be punished by death may make families more reluctant to report such wrongdoing to the authorities.

Increasing the penalty for a serious crime might feel like the right thing to do. In many cases, it’s our society’s attempt to convey our utter disapproval of an abhorrent act. We must be wary, however, of the nuanced effects these severe punishments might actually have on the commission of crimes. In many cases they will simply fail to deter, while in others they may in fact encourage the commission of additional crimes. In yet other cases, a severe punishment might reduce the likelihood of crimes being reported in the first place – thus allowing the perpetrators to continue to offend.

For this reason, it is difficult to justify Florida’s new law on purely consequentialist grounds. But is there, perhaps, another approach that might provide justification? While consequentialism looks forward to the consequences of our actions, we could instead look backwards to certain facts about the past. This is precisely what the theory of retributivism does – and next time, I will consider whether this approach might provide better support for executing child sex abusers.

Do You Need Empirical Support to Be Happy?

image of businesswoman with happiness mask

There is no shortage of advice online about how to be happier. A quick bit of Googling will send you to “ultimate guides” to happiness that will advise you to spend more time with friends, reduce your stress, and eat almonds. Or you might come across articles that claim to be based on the work of behavioral scientists, who challenge us to “conquer negative thinking,” tell us to control our breathing, spend more time in nature, and fold our clothes neatly. Happiness is also sometimes treated like a health issue, with accompanying prescriptions of gratitude practice, exercising more, and ditching our phones for time in nature. There is a cornucopia of tips, tricks, and strategies that are guaranteed to turn frowns upside-down, many of which claim to be supported by cutting-edge science.

A lot of happiness advice online can seem like common sense. Get plenty of sleep, hang out with friends, and make sure you eat well? These seem like no-brainers. Other advice you’ll likely come across can seem much more idiosyncratic, and with less evidence to back it up. Some happiness advisors will tell you that forcing the physical act of smiling will make you happier, although the evidence that this will have any long-lasting effects is mixed at best. Others will make oddly specific recommendations to eat foods like bananas, yogurt, and cottage cheese to boost your mood, although their connections to increased happiness appear inconclusive (and tough luck if you’re lactose intolerant). Some will even tell you that all you need to do is “choose happiness,” which by itself feels about as useful as the advice to just “stop being sad.”

It likely doesn’t come as a surprise that the happiness-improving advice out there varies in quality. What is perhaps more surprising is that recent research suggests that many of the most popular and purportedly science-backed strategies to being happier – including practicing gratitude, mindfulness, exercise, social interaction, and time spent in nature – are either only weakly supported by high-quality experiments, have very limited effects, or lack any evidence for their effectiveness at all. Overall, the current state of happiness research looks bleak.

Of course, the research is not yet settled, and an important caveat is that the studies analyzed didn’t deal with clinical populations. In other words, the aforementioned strategies may still be effective for those who have been diagnosed with physical or mental health disorders.

It is reasonable to expect that a significant number of people who are seeking out happiness strategies online, however, are not part of a clinical population. There thus seem to be ethical concerns around giving out advice that claims to be empirically supported when it isn’t, especially when said advice promises to make one happier. At the same time, even if it does lack the endorsement of peer-reviewed science, a lot of this advice seems unlikely to cause much harm, and it at least has the potential to increase someone’s momentary happiness, even if it hasn’t been shown to be effective in general.

In light of the concerns raised by recent research, what are the obligations of the happiness-mongers, and what should we as happiness-seekers do?

The authors of the aforementioned study themselves raise several potential concerns with continuing to provide happiness advice that isn’t well-supported by evidence. First, given the ubiquity of the most common happiness strategies, researchers must make sure that those strategies are actually effective. This is not only because of professional obligations, but in order to prevent happiness strategies from becoming a kind of snake oil. After all, while it’s easy enough to find free guides to increasing your happiness online, there are also plenty of books, programs, and courses that are being offered for a fee. If these products feature any of the strategies examined by the researchers and are predicated on having robust scientific evidence supporting them, then people are being misled.

Consider, for example, a critique of mindfulness, one of the most popular approaches to well-being. While many have benefitted from employing mindfulness techniques, many of the benefits that mindfulness may offer only come as the result of dedicated time and practice, something that is typically not emphasized in the bite-sized mindfulness tidbits that are so readily accessible online. Entire industries have also sprung up around the idea of mindfulness as a panacea, resulting in what some refer to as McMindfulness.

The lack of empirical support for happiness strategies combined with their presentation in ways that prioritize quick fixes leads us to another of the researchers’ concerns: when these strategies don’t work, they can lead to discouragement. After all, if you’re told that top researchers and scientists have figured out how you can be happier if you just follow their advice, then when strategies don’t work you risk being even less happy than when you started. Rather than being benign, happiness advice could result in an overall decrease in well-being.

There are other reasons in the vicinity to be concerned about happiness advice. For instance, some who have expressed reservations about mindfulness are worried that conceiving of happiness as a project that is solely meant to be addressed internally risks ignoring broader structural and social factors that contribute to the conditions that made one unhappy in the first place. Philosophers are also likely to push back against the conception of happiness that is prototypical in positive psychology, namely one of “subjective well-being” that is defined by the presence of good feelings and the level of satisfaction with one’s life. For instance, it has been argued that this conception of happiness leaves out whether one has led a morally good life, something that appears to have a significant impact on people’s evaluations of whether one truly is happy.

Of course, no one is saying that you should give up your beloved nature walks just because meta-analyses don’t find them to improve long-term subjective well-being in aggregate nonclinical populations. We should, however, be aware that happiness is a more complex project than it’s often made out to be, and that while common happiness strategies may be worth a shot, if they don’t seem to work for you then you’re not alone.

Travel on Trial: A Defense of Tourism from Moral Imagination

photograph of crowded market street in Barcelona

An article making the case against recreational travel recently garnered attention, prompting many to jump to the defense of tourism. Regardless of one’s ultimate conclusion regarding the value (or lack thereof) of travel, the article’s author, Agnes Callard, makes some compelling points. The fundamental argument of the piece is that travel does not actually provide the benefits we tend to ascribe to it. A commonly accepted narrative is that travel allows us to gain personal insight, connect with others, and have our presuppositions challenged. However, if we are fully honest with ourselves, travel rarely seems to actually have these effects. This is perhaps most clearly displayed when we analyze the behaviors of others. While it might be easy to trick ourselves into thinking travel engenders personal transformation, if we evaluate its impact on our friends and family, we must confess that travel’s transformative impacts are few and far between. Our friends and family appear to stay mostly the same, regardless of their most recent global trek.

So in the face of this counter-evidence, why do we still insist on touting the many virtues of travel? Callard concludes by claiming that travel helps shield us from reflecting on our own mortality. Structuring our lives around our next adventure allows us to feel like the future is full of wide-open possibilities, when in all actuality, such adventures are powerless to change our eventual fate.

While there are various places one could object to Callard’s chain of reasoning, I hope to offer at least a partial vindication of the value of travel by appealing to its contributions to our moral imagination. I agree with Callard that the link between personal transformation and travel can be overstated, but I contend that travel (at least in principle) has the capacity to reshape our moral lives in helpful ways. Moral imagination consists in our ability to effectively identify the full array of options when it comes to ethical thinking and decision-making. A well-cultivated moral imagination is what allows us to transcend mere moral convention and to strive for better ways of living. It allows us to act intentionally in ways that we deem morally ideal, regardless of whether we see others around us acting in a similar manner.

The concept of moral imagination is multifaceted, so I’ll unpack two aspects of moral imagination in order to illustrate my claim about the potential of travel. The first is framing, which concerns the frame of reference through which one engages the world. Depending on one’s frame, one might be more or less likely to ignore morally salient features of situations. For instance, someone who grows up on a dairy farm might be shut off to the possibility of ethical veganism, not due to having a substantive objection to the position, but merely due to a lack of exposure to the idea. The cultural factors of the individual’s environment, which make the viewpoint seem intuitively implausible, function to exclude it from their frame.

Of course, just because a position doesn’t fall within one’s frame doesn’t mean it isn’t the morally correct one. Particularly when one has limited exposure to other ways of life, travel can help expand one’s frame to include a wider array of moral possibilities and categories. Regardless of whether the dairy farmer still opts to reject ethical veganism after considering the position, it seems morally preferable that they reject it for considered reasons rather than never considering it at all.

Another dimension of moral imagination is that of having well-developed moral vision. One who has moral vision lacks significant ethical blind spots, and such an agent can readily identify the morally relevant features of situations. The opposite of moral vision is moral myopia, or moral blindness. Consider a young business owner who is seeking to grow her company. She might very well decide to outsource the material production of her products to a nation with less regulation around such activities. However, she fails to do her due diligence, opting not to look too deeply into the working conditions of the employees producing her company’s products. The business owner’s frame allows for the consideration of the workers’ conditions; it’s just that she also faces financial incentives to avoid seriously grappling with the issue in front of her, which potentially prevents her from coming up with creative solutions.

Since widening one’s moral frame and strengthening one’s moral vision are constitutive parts of fostering moral imagination, what positive role does travel supposedly play? Let’s start with unpacking the link between travel and the expansion of one’s moral frame. One’s moral frame is established by the set of experiences, assumptions, and beliefs one brings to the table of moral deliberation. If we consider the dairy farmer example, it is plausible to think that the experience of other cultures, which revere animal life differently and consume an alternative kind of diet, would encourage him to consider other ethical outlooks. Insofar as stretching one’s moral frame is directly linked to the expansion of moral possibilities, travel is the ideal tool since it allows for the consideration of diverse perspectives. Of course, Callard is correct in her assertion that the mere opportunity for such consideration does not mean that people will actually take advantage of the opportunity. It is simply to say that travel affords one increased potential to grow one’s moral imagination via exposure to diverse perspectives.

Similarly, we have good reason to think there’s a positive relationship between travel and the prevention of moral blind spots. There are certain daily realities faced by those in different cultures, socio-economic brackets, and religious systems, which we fail to consider in our moral deliberations. This fact might be particularly relevant when considering topics like the ethics of charitable giving and the way we engage with the global economy. Failure to appropriately appreciate the ways in which our actions (or inactions) impact those in our global community can certainly lead to moral blind spots in these areas, due to certain cultural and financial incentives to continue on in relative ignorance. Thus, travel has a unique ability to shake up the status quo of our moral life, providing us a rich avenue for the cultivation of our moral imagination.

Cloning and the Giftedness of Life

photograph of newborn children in hospital

Sir Ian Wilmut, the person who led the team responsible for cloning Dolly the Sheep, died on September 10, 2023. Dolly’s emergence onto the world stage represented a colossal scientific breakthrough. It took an idea that had existed solely in science fiction and brought it into the real world, much to the surprise of countless naysayers who had claimed that adult mammal cloning was impossible. Wilmut and his team’s efforts demonstrated, in no uncertain terms, that rather than cells holding only the genetic information necessary to undertake their specific job – heart cells need to know how to be heart cells, for example – a single mammary cell, from which Dolly was cloned, held the genetic blueprint for an entire sheep. Something we now know to be true for most of the cells within the body.

Unsurprisingly, though, Dolly’s creation opened the door to numerous ethical and philosophical questions. Or at least, gave them a new weight. Until 1997, cloning, at least in mammals, had been considered in the abstract. However, Dolly meant that the context of this pontification went from academically interesting to socially essential. Cloning’s real-world impact and permissibility were no longer solely the concern of philosophers, ethically-minded scientists, and science fiction fans. Everyone had to wrestle with cloning’s social, legal, and ethical consequences. And to this day, many of these questions remain unanswered.

So, with Wilmut’s passing shining the spotlight on cloning, I thought I’d take the time to explore just one of the arguments against cloning’s use for reproductive purposes – that it eliminates the giftedness of life.

This argument was first proposed by the philosopher Michael Sandel in the article The Case Against Perfection. In it, Sandel argues that our drive to exercise mastery over our lives and reduce the chaos of existing in a fluctuating universe can hide the value in such randomness and harm our relationship with things that may not live up to our standards. He does this in the context of human enhancement during reproduction. In short, Sandel argues that our offspring need to be seen as gifts which we receive and over whose form or shape we have precious little control. When we have children, we don’t know whether they will be tall or short, sporty or academic, a rule-follower or a rule-breaker. Yet, despite not knowing these things, we should love our children regardless. For him, children are entities to be received and unconditionally loved, not subjects of human design, because thinking of them in terms of the latter may, and perhaps must, destroy the relationship between parent and child – it turns children into selectable or rejectable products.

While Sandel’s argument was initially situated in the context of human enhancement, he later expanded it in his book, also called The Case Against Perfection, to consider how parents should act when faced with illness and disease. While this is done in a specific context, his argument also has a bearing on cloning as a form of reproduction. By its very nature, cloning looks to remove the complex genetic recombination inherent in sexual reproduction.

When parents have children, genes are pulled from each biological contributor and combined to create a new genome, leading to a unique individual. This process is random and, absent extensive genetic testing and scientific intervention, what form the eventual child will take is unpredictable. In cloning, however, this recombination of genetic material from the biological parents is replaced by the (mostly) carbon copying of genetics from the DNA donor: in Dolly’s case, the sheep from which her genetic material was taken. Gone is the creation of a new genome, replaced by the replication of an already existing entity.

It must be acknowledged here that we are far more than our genes, and simply because two individuals share the same genetic material does not mean they will grow to be the same person. Anyone who knows twins will have first-hand knowledge of this fact. Yet, one cannot deny our genetics’ significant role in making us who we are. I would not be the same if I had a different genome, as that genetic foundation underpins my physical body as well as my personality.

Thus, to attempt to clone an individual, be that yourself or someone else, is to reject the uncontrollable nature inherent in sexual reproduction. It is to say that rather than having a child who may be like you, your partner, neither, or both, you want to try and replicate an individual with whom you already have some form of relationship (and that might be yourself). In terms of gift-giving, it is to reject the uncertainty of receiving a present from someone else in favor of buying yourself a gift.

Now, this might not be an issue. After all, we do want to exercise control over our lives. I don’t want to leave the house each morning and have no idea what will happen to me. When I get ill, I don’t want to treat that as a gift from the universe; I want to do what I can to get well. Why should reproduction be any different? If I have the tools to ensure my child will be as healthy as I am, why shouldn’t I use them? A complete response to these questions is far beyond the scope of this piece. But, if the language of giftedness is appropriate here, then receiving something as a gift does bring an additional value, which obtaining that same item by purchasing it yourself does not; getting a Christmas jumper as a gift is far better than buying one. If that holds for gifts from people, it may also hold for gifts from the universe.

While cloning has come a long way since Dolly’s birth in 1997, we have yet to see a human be cloned. This is because of both technical and ethical reasons. Yet, animal cloning has come on in leaps and bounds. Barbra Streisand has two clones of her deceased dog, and disgraced scientist Hwang Woo-suk (who made false claims about successfully cloning humans) now makes a wealthy living cloning camels for the United Arab Emirates’ rich and powerful. Indeed, this rejection of uncertainty is fully displayed in the animal cloning world.

So, while it may be tempting to use cloning as a way of eliminating reproduction’s unknowns, we may lose something important in doing so. By attempting to exercise control over the fundamental processes that enable us to have children, we might end up devaluing those people we bring into the world. Instead, it might be better, perhaps even essential, to leave that part of reproduction up to the fates, exercise some humility, and love whatever and whoever we happen to end up with.

“Technological Unemployment” and the Writers’ Strike

photograph of writers' strike sign

On September 27th the monthslong writers’ strike concluded. Writers worried that ChatGPT and similar text generation technologies will lead to a future in which scripts are no longer written by humans, but manufactured by machines. Certainly corporations find the technology promising, with Disney, Netflix, and other major entertainment companies vacuuming up AI specialists. Holding signs saying “AI? More like AI-Yi-Yi!” and expressing similar sentiments, writers fought to secure their place in a rapidly changing technological landscape. When the dust settled, the Writers Guild of America had won a surprisingly strong contract. While it does not prohibit the use of AI, it does ensure that human writers will not become the mere handmaidens of computerized text generators – editing and refining machine-generated content.

From the flood of garbage ChatGPT has sent into the internet to the multiplying complexities of intellectual property, Artificial Intelligence is, in many ways, a distinctly modern challenge. But it also follows a well-worn pattern: it continues a long legacy (and revives an old debate) regarding the labor market’s ability to absorb the impact of new technology.

John Maynard Keynes, the famous British economist, used the phrase “technological unemployment” to describe the mismatch in how quickly human labor could be replaced by technological innovation versus how quickly new uses for human labor emerged. For Keynes, this was essentially a lag-time problem caused by rapid technological shifts, and it remains controversial whether “technological unemployment” causes an overall drop in the employment rate or just a momentary hiccup. Regardless, for workers who lose their jobs due to the adoption of new technology, whether jobs are being created just as fast in some other corner of the economy is rather beside the point. Because of this, workers are often anxious, even adversarial, when marked technological change makes an appearance at their workplace.

The most famous example is the Luddites, the British machine-smashing protestors of the early 1800s. With some textile manufacturers all too willing to use new technologies such as the mechanized loom to replace and undercut skilled laborers, workers responded by destroying these machines.

“Luddite” has since become a term to describe a broader resistance to (or ignorance of) technology. But the explanation that workers resistant to their employers adopting new technologies are simply anti-technology or gumming up the gears of progress out of self-interest is too simplistic. Has “progress” occurred just because there is a new product on the market?

New technology can have disparate effects on society, and few would assert that AI, social media, and smartphones deliver nothing but benefits. Even in cases where technological innovation improves the quality or eases the production of a particular good, it can be debatable whether meaningful societal progress has occurred. Companies are incentivized to simply pocket savings rather than passing on the benefits of technological advancement to their employees or customers. This represents “progress,” then, only if we measure according to shareholder value or executive compensation. Ultimately, whether technological advance produces societal progress depends on which particular technology we’re talking about. Lurking in the background are questions of who benefits and who gets to decide.

In part, the writers’ strike was over just this set of questions. Entertainment companies no doubt believe that they can cut labor costs and benefit their bottom line. Writers, however, can also benefit from this technology, using AI for editing and other purposes. It is the writers’ assertion that they need to be part of the conversation about how this technology – which affects their lives acutely – should be deployed, as opposed to a decision made unilaterally by company leadership. But rather than looking to ban or destroy this new technology, the writers were simply demanding guardrails to protect against exploitation.

In the same 1930 essay where he discussed technological unemployment – “Economic Possibilities for our Grandchildren” – Keynes raised the hope of a 3-hour workday. Watching the startling efficiency gains of the early 20th century, the economist posited, naturally enough, that a glut of leisure time would soon be upon us. Why exactly this failed to materialize is contentious, but it is clear that workers have not been the prime beneficiaries of productivity gains, in either leisure time or pay.

As Matthew Silk observed recently in The Prindle Post, many concerns about technology, and especially AI, stem not from the technology itself but from the benefits being brought to just a few unaccountable people. Even if using AI to generate text instead of paying for writers could save Netflix an enormous amount of money, the bulk of the benefits would ultimately accrue to a relatively small number of corporate executives and major shareholders. Most of us would, at best, get more content at a slightly cheaper price. Netflix’s writers, of course, lose their jobs entirely.

One take on this is that it is still good for companies to be able to adopt new technologies unfettered by their workers or government regulations. For while it’s true that the writers themselves are on the losing end, if we simply crunch the numbers, perhaps shareholder gains and savings to consumers outweigh the firing of a few thousand writers. Alternatively, though, one might argue that even if there is a net societal benefit in terms of resources, this is swamped by harms associated with inequality; that there are attendant problems with a deeply unequal society – such as people being marginalized from the democratic political processes – not adequately compensated for merely by access to ever-cheaper entertainments.

To conclude, let us accept, for the sake of argument, that companies should be free to adopt essentially whatever technologies they wish. What should then be done for the victims of technological unemployment? Society may have a pat response blaming art majors and gender studies PhDs for their career struggles, but what about the experienced writing professional who loses their job when their employers decide to replace them with a large language model?

Even on the most hard-nosed analysis, technological unemployment is ultimately bad luck. (The only alternative is to claim that workers are expected to predict and adjust for all major technological changes in the labor market.) And many philosophers argue that society has at least some duty to help those suffering from things beyond their control. From this perspective, unemployment caused by rapid technological change should be treated more like disaster response and preparedness. It is either addressed after the fact with a constructive response like robust unemployment insurance and assistance finding a new job, or addressed pre-emptively through something like universal basic income (a possibility recently discussed by The Prindle Post’s Laura Siscoe).

Whatever your ethical leanings, the writers’ strike has important implications for any livelihood.

Race, Gender, and the Civic Virtues: Creating a Flourishing Society

aerial photograph of people in a park

The increasing polarization of American society is perhaps most evident when it comes to issues of race and gender. In 2016, 57% of Hillary Clinton supporters said that it is a lot more difficult to be a Black person in the United States than it is to be a white person, with that number increasing to 74% of Joe Biden supporters in 2020. The number of Trump supporters, however, who thought that it was a lot more difficult to be Black, actually shrank from 11% in 2016 to 9% in 2020.

A similar dynamic has occurred with gender issues as well. Only 26% of Clinton supporters agreed that the obstacles that keep women from getting ahead are now largely gone, a figure that then decreased to just 20% of Biden supporters. For Trump supporters though, the percentage that agreed such barriers were largely gone increased from 72% in 2016 to 79% in 2020, making the issues of race and gender marked illustrations of the increasing divide between liberals and conservatives.

How has this polarization affected American society? In very polarized environments, citizens are likely to become more tribal, increasingly shutting out those of a different political persuasion. We spend more time with people who look, talk, and think like us, making the political opposition more and more unfamiliar.

This uptick in tribalism then leads citizens to become more uncharitable. By interacting less with those on the other side of the aisle, it becomes harder to empathize with their perspective and far easier to see them as ignorant or even downright evil.

Finally, as voters become more entrenched, they are likely to be more antidemocratic. Because the political opposition cannot be trusted, citizens are more open to leaders who seize political power in ways that undermine typical democratic processes.

None of these tendencies, of course, will help heal the sharp divide on issues of race and gender. As citizens become more tribal, uncharitable, and antidemocratic, the fault lines between camps will only become more severe.

Furthermore, when polarization occurs on issues of race and gender, the tribal boundaries are increasingly drawn along racial and gendered lines. Black and Hispanic voters remain overwhelmingly Democratic, for example, while white voters lean Republican.

And in an emerging trend, a political divide is growing between men and women as well: the gap between how young men and women identify politically is widening rapidly.

One approach to improving the current political climate is by focusing on educating for the civic virtues. While talk of citizenship or civic virtue might sound quaint or old-fashioned, the civic virtues are simply the habits that citizens need to support a healthy, well-functioning political community. These virtues are especially critical for liberal democracies, as democratic nations ultimately depend on the political engagement of their citizens.

Let’s return to the challenges raised by polarization. To begin with, the civic virtue of tolerance can combat the rise in antidemocratic sentiments. A tolerant person accepts the beliefs or practices of others even if they view those beliefs and practices as objectionable. This acceptance does not require, of course, that the tolerant person takes on these beliefs and practices as their own, but only that they permit others to continue with their ways of life.

Toleration merely requires refraining from coercing and controlling others, but the civic virtue of mutual respect calls people to a deeper appreciation of their fellow citizens. Also called civic egalitarianism, mutual respect encourages citizens to acknowledge the value that others bring to the political process and view them as well-intentioned, rather than regarding them as either ignorant or evil.

Finally, the virtue of neighborliness can help us overcome our growing tribalism. Instead of resigning ourselves to perpetual transience, we can get to know those around us regardless of their political persuasion. This will then help us be more empathetic toward those with whom we disagree, as we come to realize that they are our friends and neighbors.

Of course, even if citizens become more tolerant, respectful, and neighborly, this will not solve all of our political problems. Real issues would remain, and civic virtue will not answer some of our deepest and most abiding questions about how to live in community, but decreasing polarization would put us in a better position to tackle those challenges together.

At the same time, though, the civic virtues can do more than simply help us overcome polarization. The virtue of justice is integral to a well-functioning society, and if we as citizens become more just, this will help us tackle both individual and structural injustices related to race and gender.

Civic virtue, then, can play a much larger role than just reducing polarization. Education for the civic virtues isn’t just about getting along; it’s about forming citizens who are equipped to create a flourishing society.