
Race, Authorship, and ‘American Dirt’: Who Owns Migration Narratives?

photograph of border wall stretching into the distance



Fiction allows both readers and writers to inhabit perspectives wildly different from their own, which is perhaps one of its greatest attractions. However, this sense of fluidity has limitations, which are constantly being redrawn and contested within the literary community. For example, it’s hotly debated whether it’s possible, or even valuable, for a white author to inhabit the perspective of a person of color, or for an American to authentically reproduce the perspective of a Mexican migrant. What agendas do such appropriated narratives serve, and what do they tell us about what it means to be an author?

These questions can be explored through Jeanine Cummins’s novel American Dirt, and more broadly, through the storm of controversy surrounding it. American Dirt, published in January 2020 by Flatiron Books, tells the story of Lydia, a middle-class bookseller who flees Mexico with her young son after being targeted by the drug cartel that murdered her husband. Cummins, who is half Irish and half Puerto Rican, researched the novel for seven years, taking frequent trips to Mexico and conducting interviews with undocumented migrants to give her story a veneer of authenticity.

Almost immediately after the book was released, it inspired outrage among professional critics and general readers alike. The most galvanizing of these reactions was Myriam Gurba’s review of the novel, in which she accuses Cummins of

“1. Appropriating genius works by people of color

2. Slapping a coat of mayonesa on them to make palatable […] and

3. Repackaging them for mass racially ‘colorblind’ consumption.”

Like many critics, Gurba takes issue with American Dirt’s reliance on racist clichés, labeling it thinly-veiled trauma porn aimed at middle-class white readers rather than an authentic depiction of displacement and oppression. Many also took issue with the claim on the book’s jacket that Cummins’s husband immigrated to America illegally, a vague statement that purposefully lends more authority to her writing. However, the jacket fails to mention that her husband is a white man who immigrated to the States from Ireland, not Mexico.

Outraged by the commercial success of the novel, 124 writers signed a letter urging Oprah Winfrey to remove American Dirt from her book club list. In the letter, the writers explain that,

“Many of us are also fiction writers, and we believe in the right to write outside of our own experiences: writing fiction is essentially impossible to do without imagining people who are not ourselves. However, when writing about experiences that are not our own, especially when writing about the experiences of marginalized people, still more especially when these lived experiences are heavily politicized, oppressed, threatened, and disbelieved—when this is the case, the writer’s duty to imagine well, responsibly, and with complexity becomes even more critical.”

Cummins writes in the novel’s defensive afterword that “the conversation [surrounding immigration] always seemed to turn around policy issues, to the absolute exclusion of moral or humanitarian concerns,” and that she only “wished someone slightly browner than me would write it.” Her stated aim is to encourage readers to sympathize with migrants through Lydia, a character whose “respectable” middle-class values will remind them of their own. Some of the book’s defenders have cited that approach as a necessary evil. On an episode of NPR’s “Latino USA” podcast, Sandra Cisneros, one of the novel’s few vocal advocates, argued that American Dirt is

“going to be [for an audience] who maybe is undecided about issues at the border. It’s going to be [for] someone who wants to be entertained, and the story is going to enter like a Trojan horse and change minds. And it’s going to change the minds that, perhaps, I can’t change.”

In other words, Cisneros is arguing that white authors can reach audiences that non-white authors won’t have access to, and that it’s a worthwhile task to move these audiences emotionally, even if harmful tropes are employed to do so.

Bob Miller, the president of Flatiron Books, issued a statement to address the controversy surrounding Cummins’s novel. He claims that Flatiron

“made serious mistakes in the way we rolled out this book. We should never have claimed that it was a novel that defined the migrant experience; we should not have said that Jeanine’s husband was an undocumented immigrant while not specifying that he was from Ireland […] We can now see how insensitive those and other decisions were, and we regret them.”

Miller acknowledges the validity of Cummins’s critics and the myopia of the publishing industry, stating that,

“the fact that we were surprised [by the controversy] is indicative of a problem, which is that in positioning this novel, we failed to acknowledge our own limits. The discussion around this book has exposed deep inadequacies in how we at Flatiron Books address issues of representation, both in the books we publish and in the teams that work on them.”

At the same time, he laments that “a work of fiction that was well-intentioned has led to such vitriolic rancor. While there are valid criticisms around our promotion of this book, that is no excuse for the fact that in some cases there have been threats of physical violence [against Cummins].” In lieu of the planned book tour, the author will attend a series of “townhall meetings, where [Cummins] will be joined by some of the groups who have raised objections to the book.” Miller claims that this alternative “provides an opportunity to come together and unearth difficult truths to help us move forward as a community.”

The controversy surrounding American Dirt ties into a perennial debate on the relationship between identity and writing. In an article on the ethics of authorship for The New Yorker, Louis Menand explores two competing models of how identity impacts authorship. In the late 20th century, white literary theorists championed the “hybrid” author. In that model, the author is a nebulous being with no fixed racial or gender identity, as such things are considered extraneous to the meaning of the text. The author can and should inhabit any role, regardless of who they are. But because of our growing consciousness of racial and gender politics, according to Menand,

“hybridity is out; intersectionality is in. People are imagined as the sum of their race, gender, sexuality, ableness, and other identities. Individuals not only bear the entire history of these identities; they ‘own’ them. A person who is not defined by them cannot tell the world what it is like to be a person who is. If you were not born it, you should not perform it.”

Menand’s description of intersectional authorship (and “intersectional” may not be the most accurate word to describe this model) feels almost petulant. Those who criticize insensitive portrayals of race or gender are cast by Menand as greedy gatekeepers, and those who are forced to write in such a climate are fettered by their identity. In actuality, the hybrid model allows harmful stereotypes to be reproduced by even well-meaning authors under the guise of imaginative fluidity. Furthermore, the intersectional model does not exclude the hybrid one as completely as Menand assumes, as authors can both inhabit different perspectives and remain sensitive to issues of race.

This point is evident in the critical reaction to American Dirt. Parul Sehgal, reviewing American Dirt for the New York Times, writes,

“I’m of the persuasion that fiction necessarily, even rather beautifully, requires imagining an ‘other’ of some kind. As the novelist Hari Kunzru has argued, imagining ourselves into other lives and other subjectivities is an act of ethical urgency. The caveat is to do this work of representation responsibly, and well. […] Cummins has put in the research, as she describes in her afterword […] Still, the book feels conspicuously like the work of an outsider.”

The issue that Sehgal, and many other critics, have with the novel is not that Cummins made an attempt at inhabiting another perspective, but that the attempt was made without sensitivity to the political implications of the act. The letter addressed to Oprah further speaks to this criticism; the coalition of writers explicitly acknowledges that fiction is a place to explore identity, but explains that Cummins’s novel fails to give her subject the weight it deserves as a political issue. As Sehgal says,

“[American Dirt] is determinedly apolitical. The deep roots of these forced migrations are never interrogated; the American reader can read without fear of uncomfortable self-reproach. It asks only for us to accept that ‘these people are people,’ while giving us the saintly to root for and the barbarous to deplore—and then congratulating us for caring.”

In other words, such subject matter will always be political, and it is Cummins’s inability to acknowledge that which ultimately dooms her novel.

The publishing industry’s whiteness, as Miller acknowledges, plays a large role in what kinds of stories are considered worth telling, and writers should be allowed to take on different perspectives to broaden the horizons of the literary world. Writers are even morally obligated to acknowledge issues like immigration, to foster the growth of sympathy and connection between disparate groups. As British literary critic Frank Kermode said, “fictions are for finding things out, and they change as the needs of sense-making change. Myths are the agents of stability, fictions the agents of change.” But ultimately, we cannot pretend that an American author appropriating the experience of an undocumented migrant is somehow not fraught with political meaning, just because it’s happening in the pages of a book.

Why Isn’t Everybody Panicking? Scientific Reticence and the Duty to Scare People

photograph of gathering clouds

In 2017, journalist David Wallace-Wells published an article, The Uninhabitable Earth, which told a frightening tale of possible scenes from bad-to-worst-case outcomes of global warming, ecological degradation, and widespread pollution – effects ranging from extreme weather, sea-level rise, and wildfire to mass migration, food scarcity, and social collapse. “It is, I promise, worse than you think,” writes Wallace-Wells, “no matter how well-informed you are.”

The knowledge of how bad it could be has been around for a while. James Hansen first presented the case for the possible harms of global warming, caused by the burning of fossil fuels, to the US Congress in 1988. Given that the scientific evidence has always been out there for anyone to see (even though media reporting has usually been lean), why is it worse than we think?

There is an epistemic failure occurring: people in the affluent, industrialized world do not, in general, appear to know how bad the climate crisis is, and do not, in general, appear to appreciate how much worse things will get if we continue to burn fossil fuels and pollute the atmosphere.

There are two distinct but related knowledge gaps opening up – between previous scientific prediction and what is actually happening, and between what scientists know is really happening and what the public thinks.

The first problem arises from factors about the nature of climate science itself, like the inexactitude of its knowledge. We cannot be sure, for instance, what precise degree of warming will result from exactly what new concentration of greenhouse gasses in the atmosphere. The world appears to be warming faster than expected, as ice melt and other such indicators are accelerating much faster than was predicted only a decade ago. A year ago, scientists concluded that the Earth’s oceans were warming 40 per cent faster than previously believed.

The second problem, that the public does not really know what scientists know, is not simply a problem of dissemination. The possible ramifications, from physical changes to the environment to their social and humanitarian effects, do not come straight off the data – it takes interpretation, thought, and imagination.

Doubtless, part of the knowledge difficulty, the epistemic deficit, is a form of cognitive dissonance. It is hard to imagine the scale of the problem. “Climate induced societal collapse is now inevitable in the near term,” writes Professor of Sustainability Jem Bendell; “it may be too late to avert environmental catastrophe.” Part of the problem is that this does in fact sound like a crazy dystopian fiction.

This failure of the imagination is related to the problem of scientific reticence, which some have recently argued is having an adverse effect on policy action, and is even a dereliction of an ethical duty to seriously entertain possible (if extreme) scenarios. Scientific reticence arises both methodologically and stylistically. It takes the form of a tendency to understate the risks of global warming.

For instance, much of the scientific modelling, such as that used by the IPCC, has tended to largely underestimate the risks. IPCC climate modelling does not account for tipping points that result in non-linear, rapid, and irreversible chunks of damage, and trigger uncontrollable impacts. Melting sea ice and permafrost are some well-known tipping points. When sea ice melts, temperature rises are compounded by the reduction in reflective surfaces; and when permafrost thaws, large amounts of greenhouse gasses will be released and warming will leap.

Added to the difficulties of prediction and the blind spots in modelling capabilities is the generally conservative nature of science as a discipline. A great deal of the scientific literature to emerge over several decades has been conservative in its estimates of effects. That conservatism has meant that scientists are not conveying bad or worst-case scenarios to the public or policy makers.

When Wallace-Wells published his article, there was some pushback from climate scientists. Some felt that the science was not served by dramatizing the outcomes, and that really dire predictions might undermine scientific integrity with alarmism. There are some signs this attitude is beginning to change, but there are deeply embedded methodological, stylistic, and even ethical reasons for scientific caution.

Wallace-Wells says that he wrote The Uninhabitable Earth to address the fact that possible worse cases were not being talked about in scientific papers. (James Hansen is a notable exception to this, and he has written about the phenomenon of scientific reticence.)

Drawing attention to the dangers of global warming has at times caused cries of alarmism, and it has been suggested by Hansen that cautious or hesitant predictions are often perceived to carry more authority. The problem is that, now, it is looking like some of those worst-case scenarios are going to be much closer to the truth than the conservative underplay of catastrophe.

In any case, it is becoming clear that science has not been effective at communicating the worst risks of climate change; as a result, those who need to know these possibilities – the public and policy makers – have been ill-informed and ill-served.

In their paper What Lies Beneath, which explores the failures and blind spots in climate science’s understanding of the effects of global warming, Spratt and Dunlop write of this scientific caution: “It is now becoming dangerously misleading, given the acceleration of climate impacts globally. What were lower-probability, higher-impact events are now becoming more likely.”

Scientific reticence has hindered communication to the public of the true dangers of global warming. This may in turn have directly (and indirectly) hindered action, which in turn has worsened the problem. Given that the findings of climate science research have existential implications for us, it could be argued that not entertaining the worst potential outcomes is a dereliction of moral duty as well as our duty to science.

There is a view that it is dangerous to frighten people too much – that the relevant information and the worst risks worth considering are enough to scare the public into a sense of fatalism. Indeed, the news is bad, and at this critical time, resignation may be the last nail in the coffin (so to speak).

On ordinary scientific standards, incontrovertible confirmation will happen only when an effect has played out, or begun, and it will then be too late to abate it. The central ethical issue here is that ethics seems to be making an unusual demand on scientific communication, and on the translation of research data into conclusions needed by the public and policymakers – the demand to be a little more scary.

One could argue that man-made effects, which are likely to be harmful, should be treated differently from other types of observations and predictions, by virtue of what is at stake – and because caution could in this instance be a vice.

People aren’t scared enough about global warming. It is, as Wallace-Wells says, worse than people think – and though it may not be as bad as his picture, the trend so far points in that direction.

Having made that case, though, it must be acknowledged that, as a cause of the epistemic deficit our societies are in the (hopefully loosening) grip of, scientific reticence might be peanuts next to the obfuscations of fossil fuel corporate rapaciousness.

Twitter and Disinformation

black and white photograph of carnival barker

At the recent CES event, Twitter’s director of product management, Suzanne Xie, announced some proposed changes to Twitter that are planned to begin rolling out in a beta version this year. They represent fundamental and important changes to the ways that conversations are had on the platform, including the ability to send tweets to limited groups of users (as opposed to globally), and perhaps the biggest change, tweets that cannot be replied to (what Twitter is calling “statements”). Xie stated that the changes were meant to prevent what Twitter sees as unhealthy behaviors by its users, including “getting ratio’d” (when one’s tweet receives a very high ratio of replies to likes, which is taken to represent general disapproval), and “getting dunked on” (a phenomenon in which the replies to one’s tweet are very critical, often going into detail about why the original poster was wrong).

If you have spent any amount of time on Twitter you have no doubt come across the kind of toxic behavior that the platform has become infamous for: people being rude, insulting, and aggressive is commonplace. So one might think that any change that might reduce this toxicity should be welcomed.

The changes that Twitter is proposing, however, could have some seriously negative consequences, especially when it comes to the potential for spreading misinformation.

First things first: when people act aggressively and threateningly on Twitter, they are acting badly. While there are many parts of the internet that can seem like cesspools of vile opinions (various parts of YouTube, Facebook, and basically every comment section on any news website), Twitter has long had the reputation of being a place where nasty prejudices of any kind you can imagine run amok. Twitter itself has recognized that people who use the platform for the expression of racist, sexist, homophobic, and transphobic views (among others) are a problem, and it has in the past taken some measures to curb this kind of behavior. It would be a good thing, then, if Twitter could take further steps to actually deter this kind of behavior.

The problem with allowing users the ability to tweet in such a way that it cannot receive any feedback, though, is that the community can provide valuable information about the quality and trustworthiness of the content of a tweet. Consider first the phenomenon of “getting ratio’d”. While Twitter gives users the ability to endorse tweets – in the form of “hearts” – it does not have any explicit mechanism in place that allows users to show their disapproval – there is no non-heart equivalent. In the absence of a disapproval mechanism, Twitter users generally take a high ratio of replies-to-hearts to be an indication of disapproval (there are exceptions to this: when someone asks a question or seeks out advice, they may receive a lot of replies, thus resulting in a relatively high ratio that signals engagement as opposed to disapproval). Community signaling of disapproval can provide important information, especially when it comes from individuals in positions of power. For example, if a politician makes a false or spurious claim, their getting ratio’d can indicate to others that the information being presented should not be accepted uncritically. In the absence of such a mechanism it is much more difficult to determine the quality of information.

In addition to the quantity of responses that contribute to a ratio, the content of those responses can also help others determine whether the content of a tweet should be accepted. Consider, for example, the existence of a world leader who does not believe that global warming is occurring, and who tweets as such to their many followers. If this tweet were merely made as a statement without the possibility of a conversation occurring afterwards, those who believe the content of the tweet will not be exposed to arguments that correctly show it to be false.

A concern with limiting the kinds of conversations that can occur on Twitter, then, is that preventing replies can seriously limit the ability of the community to indicate that one is spreading misinformation. This is especially worrisome, given recent studies that suggest that so-called “fake news” can spread very quickly on Twitter, and in some cases much more quickly than the truth.

At this point, before the changes have been implemented, it is unclear whether the benefits will outweigh the costs. And while one should always be cautious when getting information from Twitter, in the absence of any possibility for community feedback it is perhaps worth employing an even healthier skepticism in the future.

University Divestment from Fossil Fuels

photograph of campus building at McGill University

This month, tenured McGill University philosophy professor Gregory Mikkelson resigned from his position. Mikkelson explained that he could no longer work for an institution that professes a commitment to reducing its carbon footprint while continuing to invest in fossil fuels. Mikkelson argued further that the university board’s continued refusal to divest from fossil fuels is in opposition to the democratic mandate in favor of divestment that has developed across the campus.

Mikkelson’s actions make a powerful statement in a general academic climate in which divestment from fossil fuels has strong support among faculty and students. Some universities have taken action in response. In September 2019, the University of California system announced that it would be cutting fossil fuels from its more than $80 billion investment portfolio, citing financial risk as a major motivating factor. The University of California system is the largest educational system in the country, so this move sets an important precedent for other universities under pressure to do the same thing.

Many prominent schools across the country are resisting pressure to divest. On January 3rd, students of Harvard and Yale Universities staged a protest of their respective universities’ continued support for the fossil fuel industry by storming the field of the annual football game between Harvard and Yale, delaying the game by almost an hour. This is only one such protest; there have been many others over a span of almost a decade. Students, faculty members, and staff have occupied the offices of administrators, held sit-ins, and conducted rallies.

Those who wish to defend continued investment in the fossil fuel industry make the argument that universities have a fiduciary obligation to students, faculty, and staff. As a result, they need to maintain the most promising investment portfolio possible. They need financial security in order to continue to provide a thriving learning environment. This involves investing in the market that actually exists rather than an idealized market that doesn’t. A portfolio that includes diversified investments in sustainable renewable sources of energy would be ideal, but many think that the current political climate provides little evidence that this approach would be a wise investment strategy. President Trump can be relied upon to thwart the advance of renewable energy at every turn. At this point, it is unclear how many more years universities will need to make investment decisions that take into account the political realities of living under this administration. Those who make this argument contend that the primary obligation of a university—first and foremost—is to provide education to students. Universities can fulfill this obligation if and only if they are financially secure.

Relatedly, some argue that, in keeping with universities’ general fiduciary responsibilities, institutions should avoid making investment decisions that are overly political. Investments that look like political statements could deter future donors, which would limit the potential services the university could provide. In response to this argument, critics are quick to point out that continued investment in fossil fuels is a political statement. Crucially, it is a political statement with which the heart and soul of the university—faculty, staff, and students—tend to strenuously disagree.

Those who want to defend continued investment in the fossil fuel industry argue further that investors are in a better position to change the behavior of fossil fuel companies because they have voting powers on crucial issues. Shareholders are in a position to vote directors and even entire boards out of their jobs if they do not acknowledge and take meaningful action on climate change. Shareholders are in a position to force transparency when it comes to publishing substantive emissions data. When fossil fuel industries are forced to acknowledge the threat that they pose, they may lead the transition to renewables from within.

Many critics are dubious about the authenticity of this proposal. Even if we take it at face value, we don’t have much reason to believe that this approach is motivating the fossil fuel industry at anything approaching the rate we would need to see in order to achieve the necessary change in the right timeframe. To ward off, or, at the very least, minimize, the threat posed by climate change, we need to take significant meaningful action now, rather than waiting the indeterminate amount of time it might take for the fossil fuel industry to make internal changes that seem to be decidedly against their own interests.

Many disagree with the claim that continued investment in fossil fuels provides a university with financial security. In fact, the entire University of California system disagrees. The reasons the UC system offered for their decision to divest were financial rather than ethical. Their argument is that abandoning investment in fossil fuels now in favor of developing a portfolio of sustainable renewable resources cuts their losses later and is consistent with the inevitable green path forward. It simply isn’t possible to continue in the direction we’re headed. We will inevitably change course.

When academic institutions refuse to divest, faculty and students are put in an uncomfortable position—it is difficult for a person who is concerned about climate change to continue in their role at such an institution while avoiding the charge of personal hypocrisy. Students work hard to earn their spots at universities, and they pay dearly for them. The academic job market is notoriously competitive, and professor positions are extremely hard to come by. Many find Mikkelson’s actions admirable, but recognize that they are not in a position to follow in his footsteps.

Divestment sends a powerful message—institutions of higher education will no longer provide financial support to industries that contribute to climate change. The very nature and mission of universities cast such institutions in pivotal roles to usher in a new, healthier, greener future. Far from shying away from this role, universities should embrace it as a natural fit—after all, they ideally prepare young citizens to design, and thrive in, a promising future. Mikkelson recognized that refusal on the part of higher education to divest from fossil fuels is hypocrisy on the part of the university itself—it is antithetical to the goals of excellence in innovation, empathy and compassion toward our fellow living beings and respect for the ecosystems in which we live, as well as clear, rigorous critical thinking that includes the ability to give appropriate weight to supporting evidence.

What’s more, fossil fuel companies have intentionally obfuscated the facts when it comes to the harms posed by climate change. This practice of putting significant roadblocks in the pathway to knowledge about critical issues is not consistent with the pursuit of knowledge that characterizes a college or university. If an academic institution is to act with integrity, it should not continue to support campaigns of misinformation, especially when the stakes are so high.

Rising Sun Flag: Symbol of Hate or Cultural Pride?

image of aged Rising Sun flag

Last fall, South Korea asked the International Olympic Committee to ban the Rising Sun flag from Olympic stands at the Tokyo 2020 Olympics. South Korea argues that the Rising Sun flag is representative of Imperial Japan, and thus of the human rights abuses and war crimes that occurred during the World War II era. Japan has pushed back by stating that the Rising Sun flag is not a political statement. The reasoning is that, in the Japanese government’s view, the Rising Sun is a cultural symbol of pride and patriotism, and does not merely represent Japan’s Imperial Empire. To understand this issue, consider a similar case regarding the symbolism of flags: in the United States, many states still display the Confederate flag at state houses and on public grounds. Critics of the Confederate flag see it as a symbol of slavery, human rights abuses, and racism, while supporters of the Confederate flag view it as a cultural symbol of the South. In the case of either the Rising Sun or the Confederate flag, it is important to recognize that the disagreement is not about whether the flags have connections to human rights abuses (it is quite obvious these flags do have connections to barbaric wrongdoings); instead, the disagreement concerns questions about a flag’s meaning, whose opinion is relevant in determining that meaning, and whether whatever original meaning a flag might have can be corrupted later on.

To understand the significance and sentiments surrounding the Confederate flag, it is important to understand its history. The Confederacy explicitly supported slavery in its constitution, which stated, “No bill of attainder, ex post facto law, or law denying or impairing the right of property in negro slaves shall be passed.” There were nearly 4 million slaves in 1860, mostly in the Confederate states, and they were subject to brutal conditions such as whipping, burning, hanging, mutilation, and rape. The Ku Klux Klan made use of the flag during cross burnings and lynchings, and many white supremacist groups still use it today.

Supporters of the Confederate flag, however, see it as a symbol of heritage and a recognition of the millions of Confederate soldiers who fought in the Civil War. In fact, most Confederate soldiers did not own slaves. Critics say the Civil War was mainly caused by slavery, while supporters often claim the cause was states’ rights. No matter the cause of the Civil War, slavery was an extremely common, atrocious infringement of human rights in the Confederate states, and many associate the flying of the Confederate flag with those abuses.

Like the Confederate flag, the Rising Sun flag carries negative historical associations. The flag was most notably used during Japan’s Imperial era, but its origins go back to its use by feudal warlords during the 1600s. Many Japanese people do not see its use as a political statement because it is widely used at seasonal festivals and on fishing boats and naval ships. Although this flag can be seen in everyday cultural events, it has also been used by Japanese nationalist groups. Many in Korea, China, and other parts of East Asia see it as a symbol of Japanese imperialism. During the pre-World War II and World War II era, Korea and, later on, parts of China suffered under the brutal, barbaric rule of the Japanese Empire. The Japanese military was known for kidnapping Korean, Chinese, and other Asian women and forcing them into sexual slavery, while Korean men were put into forced labor camps. Koreans were not allowed to speak, write, or learn Korean and were forced to use Japanese. In China, Japan massacred and raped hundreds of thousands of civilians in what is known as the Nanking Massacre. The Rising Sun flag was flown throughout this period, so many Koreans, Chinese, and others across Asia see the flag as a symbol of these barbaric actions by Japan. This historical divide can still be seen today in Japan’s reluctance to consistently apologize for war crimes, South Korea’s recent boycott of Japanese goods, and the regular rallies held by Japanese nationalist groups.

Given the atrocities committed under these two banners, the question remains what message the Rising Sun or Confederate flag might send that would overcome these connections. Supporters of the Confederate or Rising Sun flags claim that the original meaning and intention behind the cultural symbol should be taken as its sole meaning. But consider the Nazi swastika or the Soviet hammer and sickle, which are typically associated with the millions of deaths in concentration camps and gulags. If this logic were consistently applied, then the Nazi swastika would symbolize peace and the Soviet hammer and sickle would symbolize socioeconomic equality, because those were the flags’ original intentions. The fact is that a flag is embedded in the community it represents, so it cannot conveniently be detached from past wrongs while associating itself only with the good. Take the analogy of a sports team. Teams have wins and losses, but can a team associate itself only with the wins? Of course not. If a symbol represents the team, then the symbol represents the wins and losses too. The Confederacy participated in the brutal enslavement of humans, so the Confederate flag reflects slavery as well as Southern culture. Imperial Japan sponsored massacres across East Asia, so the Rising Sun flag reflects massacres as well as Japanese culture. The Rising Sun and Confederate flags represent both the good and the bad.

When cruel wrongdoings are of monumental magnitude, as in the case of the Confederacy and Imperial Japan, there is no removing the history of barbaric atrocities to see only the original intention. Therefore, when a state or national government openly supports flags with connections to widespread historical wrongdoings, it amplifies the historical wrongdoings the flag represents. The flying of the Confederate and Rising Sun flags fails to acknowledge historical injustices, some of which people alive today have lived through.

On Political Purity Tests

photograph of Trump at Catholic church

With the 2020 presidential election less than a year away, talk of “purity” tests for political candidates – so-called requirements, expectations, or “deal-breakers” for voters’ support – has become curiously common.

On the Democratic side, where more than a dozen contenders are still vying for their party’s nomination (and have begun to challenge each other more openly about their progressive bona fides), concerns about flexibility and eventual electability have led some figures to warn against holding impossibly high standards for the eventual Democratic standard-bearer. Just before Thanksgiving, at a question-and-answer session in California, Former President Barack Obama explained that “We will not win just by increasing the turnout of the people who already agree with us completely on everything – which is why I am always suspicious of purity tests during elections. Because, you know what, the country is complicated.” Instead of requiring a political candidate to perfectly match your ideals in every way, this position suggests a more pragmatic approach that allows for (at least some) ideological compromise.

Meanwhile, the Republicans have spent much of the last three years practicing precisely that sort of compromise, frequently (and sometimes even proudly) admitting that President Donald Trump is, in many ways, far from the conservative ideal in his personal life, but is, nevertheless, the most useful figure for accomplishing politically conservative goals. Despite long-popular rhetoric amongst Republicans about faith and family values, the children of the Moral Majority have committed themselves to defending a thrice-married philanderer because, for example, Trump’s ability to appoint conservative judges to federal positions outweighs his inability to name a book of the Bible. When Christianity Today, a leading magazine for Evangelical Christians, recently published an opinion piece arguing, in part, that Trump’s unapologetic immorality damages the credibility of his religious defenders, it was lambasted amongst the party faithful as proof that the periodical represents the “elitist liberal wing” of their denomination.

The question of purity indeed poses an interesting (potential) ethical dilemma: either you get your hands dirty to take what you want, or you find that your clean hands remain empty in the end – which is preferable?

In its crudest form, this dilemma is not unlike the classic “trolley problem,” where a person is tasked to choose whether it is better to act in a way that condemns one person to die or, instead, to refrain from action in a way that results in five deaths. Although the former requires bloodying your own hands by involving yourself in a causal chain resulting in the death of a person, it brings about a set of consequences which involves fewer deaths overall; the latter allows you to avoid direct responsibility, but results in a significantly less palatable end. Which of these options is the right one to choose?

Instead of circling debates around various ethical theories, Alexis Shotwell, professor of sociology, anthropology, and philosophy at Carleton University, offers a different solution altogether: rejecting the possibility of “purity” as an attainable quality, period. In her 2016 book Against Purity: Living Ethically in Compromised Times, Shotwell argues that the perception of moral purity as a genuine goal is, in principle, illusory, so the sort of clean-cut options supposed by trolley-style dilemmas are simply unrealistic. Instead, our embeddedness in social contexts requires an amount of interdependency with others that will always, as a general rule, require ideological compromise to at least some degree. Given that everyone has slightly different desires, interests, and goals, “an ethical approach aiming for personal purity is inadequate,” and, ultimately, “impossible and politically dangerous for shared projects of living on earth.”

This sort of approach neither draws lines in the sand across which certain people are not welcome, nor does it try to give some ends-based excuse for allowing deplorable people into one’s inner circle: instead, it recognizes that – like it or not – we’re already all in this together. As Shotwell explains, the idea is rather that

“I’m going to work on this thing and I’m definitely going to make a mistake. I’m already part of a really messed up situation, so I’m not going to be able to personally bend the arc of the universe toward justice. But I might be able to work with other people so that all together we can do that.”

Perhaps the main way that someone can ethically fail on such a model is to reject trying to work together at all.

So, importantly, Shotwell’s approach does not license an individual to behave however they choose: the emphasis on collective and relational approaches to problem-solving (as not only pragmatic requirements, but as the logically prior element of moral exchanges altogether) means that moral agents are inextricably bound to certain moral expectations based on the communities in which we find ourselves – these relationships (more so than our individual intentions or the direct consequences of our own actions) ground our moral judgments – as well as our political choices. So, candidates who transgress these sorts of communal expectations for cooperative and mutual care can indeed still be held accountable, but in a manner notably more ecumenical than either the myopic purity tests of the Democrats or the sycophantic apologetics of the Republicans.

Although the outcome of the 2020 election cycle is still far from determined, one thing seems clear: it’s messy already and the chaos will only get worse. Rather than pretending that “the Right candidate will be Good” or that “the best candidate doesn’t actually have to be good,” Shotwell’s “politics of imperfection” suggests that everyone needs to hold each other accountable to work together in the project of creating a world for us all.

The Harms of Reporting Political Insults

photograph of reporters' recording devices pushing for response from suited figure

This week I had the most amazing experience reading a news article. The article was discussing the preparations being made for the impeachment trial, and I came across this sentence: “Trump tweeted right before and after Pelosi’s appearance, in both instances using derisive nicknames.” What an idea: to avoid repeating what is essentially name-calling and to simply refer to the kind of statement that was made. After all, what is the journalistic value of reporting that a politician called someone else by derisive nicknames and then repeating those nicknames? Does it make us more informed? Does it make national political debates any better? Perhaps not, and this means that the question of whether journalists should repeat such insults is an ethical one.

After the 2016 presidential election there was much discussion about journalistic standards and the merits of covering a candidate like Donald Trump so much. Even before the election there were reports that Trump had essentially received over $2 billion in free media simply because he was so consistently covered in the news cycle. Later there were those in the media, such as CNN President Jeff Zucker, who acknowledged the mistake of airing campaign rallies in full, as doing so essentially acted as free advertising. According to communication studies professor Brian L. Ott, such free advertising did affect the electoral results. What this means is that the media is not always merely a bystander covering election campaigns, because that coverage can affect who wins or loses. This is relevant for several reasons when it comes to reporting and repeating political insults.

For starters, such insults can act like a form of fake news. Part of the problem with fake news is that the more it is repeated, even while being demonstrated to be false, the more people are likely to believe it. In fact, a study has demonstrated that even a single exposure to a piece of fake news can be enough to convince someone that its contents are true. Even when a report explicitly aims to refute some false claim, the claim itself is more likely to be remembered than the fact that it is false. Now, if we think about insults and nicknames as pieces of information, we are likely to make the same mistake. Every “Lyin’ Ted,” “Shifty Schiff,” or “Crooked Hillary” in some form offers information about that person. The more it is repeated, the harder it is to repudiate claims related to it. No matter how many fact checks are published, “Crooked so-and-so” remains crooked.

One may argue that if the media took measures to stop directly quoting such names and insults and simply noted the fact that an insult was made or is being popularized, then it would no longer be performing its journalistic function of informing the public. It might be wrong not to report direct quotes. However, if insults are more likely to stick in the minds of the public than the information repudiating the stories behind such insults, then the result may be a less informed public. As for the matter of reporting on quotes, this issue is already being discussed in terms of whether the media should repeat quotes that are factually incorrect. Derek Thompson argues that the media should put such quotes in “epistemic quarantine” by abstaining from direct reporting on the language being used, in the name of securing the original purpose of journalism: to report the truth.

There is another objection to consider. By not covering insults and instead replacing them with general descriptions of the comments, the media would no longer be reporting neutrally. However, the important thing to keep in mind is that politics is not just about information; it is about branding. According to Amit Kumar, Somesh Dhamija, and Aruna Dhamija, one outcome of political marketing is the political brand. If politicians are able to cultivate a personal brand, they can create a distinct style and image, and are thus able to target specific “consumer citizens” in a way that elicits an instantaneous reaction from the public.

For example, Canadian Prime Minister Justin Trudeau has spent years cultivating a brand, beginning with his boxing match against a Conservative senator, which established a sense of “toughness, strength, honour and courage.” One can only imagine that his recent beard growth, with its ability to project experience, is an attempt to change that brand. However, just as a company’s brand can be tarnished, so can a personal brand. When an insult like “Crooked Hillary” is used in a repeated and targeted way, it damages that brand in a way distinct from merely insulting someone. It acts less as an assertion of fact and more like a way to connect concepts on a subconscious level. Thus, when the press repeats insults, it is acting as a form of advertising that attacks a political brand. In other words, repeated reporting of such insults is already non-neutral in its effects.

Perhaps reporting insults still serves a journalistic purpose; however, it is difficult to see what purpose that is. Such insults are less about the merits of policy and are essentially ad hominem attacks. In fact, the reporting of such ad hominem attacks makes addressing empirical claims very difficult. According to one study, attacking an individual’s credibility may be just as effective as attacking the claims that the individual makes. For instance, attacking Clinton for being corrupt “could be just as effective as actual evidence of criminality, and no less influential.” In other words, once an ad hominem attack is made, the empirical facts of the case do not really matter. Dr. Elio Martino of Quillette notes, “If attacks on a person’s character are effective, and potentially irreversible even with the subsequent addition of facts, it becomes easy to discredit people wishing to tackle the difficult but important issues facing our society.”

So, reporting ad hominem attacks essentially does not aid in keeping the public informed. Others have further noted that reporting insults only serves to make politics “more trivial and stupid.” In a polling exercise in Australia, a group sought to get voter perceptions of political leaders and to form a word cloud of the responses. The responses mostly consisted of insults. As Terry Barnes notes, “While it may be a bit of fun—and it’s always fun for the rest of us to see political figures publicly humiliated—this tawdry exercise dumbed our politics down that little bit further, trivializing for the sake of titillation.” This isn’t an issue isolated to one politician or one nation; reporting on ad hominem attacks is trivial, and it damages our ability to carry on political conversations. It is hard to see what journalistic purpose the reporting of any political insult could have.

All of this brings me back to the article I began with. It was so pleasant to see a pointless insult not being directly quoted, but simply noted. My hopes were dashed, however, when I scrolled further to not only find the tweet containing the insult embedded in the article, but to also find the article itself later mentioning the “derisive nickname” in question: “Crazy Nancy.” Would I have been missing out to know that a politician insulted another without knowing what the insult was? I don’t think so.

Campaign Donations, Caveat Emptor, and #RefundPete

photograph of Mayor Pete at an event flipping pork chops in an Iowa Pork apron

The second week of December saw another unusual wrinkle in an already-complicated Democratic primary season: grassroots donors began demanding refunds for political contributions made to Pete Buttigieg’s presidential campaign. Citing concerns about Buttigieg’s pursuit of high-dollar donors, defenses of corporate interests, and dismissive attitudes towards questions regarding these tactics, as well as specific revelations regarding his work at the management consulting group McKinsey and Company, some voters who had once considered Buttigieg an interesting newcomer to the national stage are changing their minds. Although the Buttigieg campaign has declined to release data on the number of refunds requested, the movement appears to be growing, with the hashtag #RefundPete trending online.

As it stands, presidential campaigns are only legally obligated to refund a campaign donation if that donation somehow violates legal requirements (such as if it exceeds the FEC’s contribution limits) – no provision requires refunds simply because donors have had a change of heart. However, might Buttigieg’s campaign have a moral obligation to dispense refunds? Or does the Latin warning “Caveat Emptor” – “let the buyer beware” – apply to political donations just as much as it might to property sales?

On the one hand, you might think that a political donation is simply a non-binding show of support – a flat contribution demonstrating a thin sort of sponsorship that does not commit either a donor or a candidate to anything further. Put differently, this view sees a campaign donation as simply a gift with no strings attached. Even though a voter might give money to one (or even multiple) campaigns, that would in no way indicate how the donor would end up voting at the ballot box and, conversely, the candidate can use that money at-will.

On the other hand, it might be that making a campaign contribution thereby initiates the donor into the candidate’s group of supporters, creating a net of (at least some) obligations between the donor and the candidate – such as the expectation that the candidate represent the will of the donors/supporters. On this view, a donation is more like a contract or a promise that a candidate must perpetually merit. Presumably, on this second, thicker view, if the candidate breaks the contract (perhaps by initially misrepresenting themselves or by changing their positions), then the donor could have grounds to demand repayment.

If these choices are right, then it would seem like the #RefundPete movement is assuming the second option to ground their reimbursement expectations: although someone may have contributed to Buttigieg when he was presenting himself as a progressive, small-town mayor looking for grassroots support, that same contributor could easily feel deceived when Buttigieg later adopts a more openly centrist position, chases elitist funding, and cavalierly ignores questions regarding that shift. Because of that perceived deception, former Buttigieg donors might think they are entitled to a refund.

However, it is the first option which seems like the most natural understanding of how campaign donations actually function. Given that there is a clear difference between contributing to a campaign and actively campaigning for a candidate (via rallying, door-knocking, sign-posting, or a myriad of other approaches), it’s not clear that a simple financial transaction (often done impersonally through an online payment portal) is able to automatically create the thick sorts of relational obligations between a candidate and his supporters required to ground a reimbursement request. That is to say, although campaign donors and campaign workers are both supporters of a candidate, they are not identical political agents (someone can easily be one without being the other). If former Buttigieg-donors also put in the effort to build relational ties with the Buttigieg campaign (thereby becoming Buttigieg-campaign-workers), then they might indeed have standing to expect some form of recompense for their wasted efforts (given what they now know); if those former donors are now simply regretting their choice to toss some “pocket change” at a candidate that they now don’t like, then it’s much less clear that they deserve the refunds they’re requesting. Indeed, this second scenario seems fairly familiar to any voter who has ever ended up dissatisfied with the results of representative democracy.

To be fair, it seems like much of the #RefundPete hashtag is motivated by the opportunity to make a political statement about Buttigieg’s campaign tactics, policy positions, and general demeanor: for example, the hashtag was sparked by a campaign worker for Elizabeth Warren and one of the inspirations of the #RefundPete hashtag had only donated $1 to help Buttigieg qualify for an early debate. Particularly in a race where grassroots support has become a defining wedge issue among Democratic candidates (as Bernie Sanders joked about in the December debate), such statements might be perfectly legitimate – but that’s a far cry from saying that the concept of a campaign donation refund is, in principle, legitimate.

Strategic Nonviolence: An Alternative to Moral Pacifism

photograph of protest in front of police station

Protest and civil resistance are quickly becoming defining characteristics of the new century, from the early gains of the Arab Spring, the protest movements throughout Latin America, and the Hong Kong democracy movement to Greta Thunberg’s School Strikes for Climate and the Extinction Rebellion movement in response to the climate and ecological emergency.

Philosophers began to theorize about social change in terms of methods of nonviolent social intervention in the nineteenth century. Henry David Thoreau, in the essay Civil Disobedience, defends the validity of conscientious objection to unjust laws, which he claims ought to be transgressed. He writes: “all men recognize the right of revolution; that is, the right to refuse allegiance to and to resist the government, when its tyranny or its inefficiency are great and unendurable.”

Early social movements, such as the campaign led by Mohandas Gandhi against the British colonial occupiers of India, connected non-violence with pacifism and cemented it as a deontological moral principle. In the early years of the civil rights movement, Martin Luther King wrote:

“non-violence in the truest sense is not a strategy that one uses simply because it is expedient in the moment, [but something] men live by because of the sheer morality of its claim.”

For Gandhi and King the practice of nonviolence is grounded in the timeless and universal values of love, compassion, and cooperation. This view is closely related to the philosophy of pacifism, which holds that all violence is immoral. Pacifism is principled, moral opposition to war, militarism or violence. As such, it arises not out of a discipline or practice, so to speak, but out of a strongly held philosophical and spiritual belief.

Conditional pacifism, which is a version of pacifism with some possibility for compromise, is utilitarian in nature, such that the bad consequences are what make it wrong to resort to war or violence. However, based on utilitarian principles, there could be a situation where violence of some magnitude is morally permissible if it prevents violence of a greater magnitude. That is, according to conditional pacifism, there could be situations where violence is necessary to prevent worse outcomes.

The idea of pacifism, and of seeking non-violent solutions to disputes between and within nations, plays a significant part in international politics, particularly through the work of the United Nations. But there is, within this structure, a recognition that there is sometimes (in theory at least, though this has been notoriously difficult in practice) a need for ‘humanitarian intervention.’

An anti-pacifist view would not exactly advocate war as a good in itself, but would hold the view that sovereign states have a duty to protect their citizens, and that duty may in some circumstances extend to the waging of just war – and furthermore that in this case, citizens have a duty to carry out certain tasks. The critical, anti-pacifist view holds that pacifists’ refusal to participate in war means that they fail to carry out an important moral obligation, and that the respect for human life that motivates them is an idealistic but counterproductive position.

On the other hand, there is a different alternative to pacifism, which does not sanction violence but does differentiate itself from the pacifist’s principled, moral position. This is known variously as ‘strategic non-violence’ or ‘nonviolent direct action.’

Gene Sharp, theorist and author of seminal works on the dynamics of nonviolent conflict, sought to redefine it outside the context of pacifism and outside the sphere of the moral question of violence. Sharp contends that nonviolence can be employed strategically, as something that social movements can choose because it provides an effective avenue for leveraging change – for example, in overcoming a dictatorial or repressive regime, as in the popular uprisings which ended Milošević’s reign in Serbia in 2000, or in effecting social change within a broader social context, as in the civil rights movement in the United States in the 1960s.

Maintaining a strict nonviolent discipline has, according to Sharp, several important strategic advantages over armed civil resistance, as does treating nonviolence as a method of waging conflict rather than as a moral position. Strategic nonviolence is active as a form of conflict, and therefore much more likely to be effective in creating or forcing social and political change, and nonviolence maintained as a strict discipline makes a movement vastly more inclusive, allowing for widely participatory campaigns of direct action.

Maintaining nonviolent discipline is necessary against a state that has a well-developed arsenal. The state has a monopoly on violence: a group of citizens taking up arms against a regime is usually vastly outgunned. But, importantly, armed struggle legitimizes the state’s use of force against the citizens.

This is not to say that those using strategic nonviolence will not be harmed. In the conflicts mentioned above, and many more around the world, the state may turn on demonstrators or strikers. This often backfires as it creates negative public response and shows the state’s apparatus to be reacting disproportionately, which can create sympathy for a cause and can sometimes greatly strengthen it, but at a large cost.

Strategic nonviolence is therefore an effective alternative to armed struggle, conceived of as a form of resistance and, perhaps perversely, as an effective form of waging war.

A common feature of pacifism is the belief that winning adversaries over to one’s cause is necessary, effecting a change of heart, and being able to love one’s enemies. Sharp rejects this position, arguing that expecting people to love those who have wronged them or treated them cruelly is not only unreasonable, but unwise as it might lead people to turn towards violence.

Instead, our goals may need to be different. As civil rights leader James Farmer writes: “where we cannot influence the heart of the evildoer we can force an end to the evil practice.”

It is in this sense that strategic nonviolence has an overarching ethic – because King is right that there is ‘a sheer morality in its claim.’ I am not sure Sharp would make the argument this way, but you could say that the ethical rewards are the social and political gains in justice and freedom won by the more effective strategies of nonviolent resistance; that nonviolence is morally better is an effect, not a cause, of the principle of nonviolent resistance.

Smoking Legislation and the E-Cigarette Epidemic

photograph of Juul pods with strawberries, raspberries, a peach, and a cocktail

At the end of 2019, President Trump signed legislation raising the federal minimum age for tobacco and nicotine purchases from 18 to 21. This move to raise the federal smoking age was made in response to the popularity of e-cigarettes amongst teen users and the resulting e-cigarette epidemic. To combat this public health crisis, attempts have also been made to ban flavored e-cigarettes. For e-cigarettes to stay on the market, vape companies will need to prove that they cause more good than harm. This proposed legislation applies to all e-cigarette companies, threatening smaller vapor manufacturers as well as Juul Labs, which makes up seventy-five percent of the nine-billion-dollar industry.

Juul pods, marketed at millennials and teen users, contain twice the amount of nicotine found in traditional freebase nicotine e-cigarettes. These products are especially addictive and are sold in a variety of fruity flavors, making them very appealing to children. It’s unsurprising, then, that America’s youth are hooked. In fact, one can hardly walk across a college campus or use the bathroom of a high school without seeing a Juul user “fiending.”

But Juul Labs isn’t just selling their products to children; children are their targeted demographic. Although e-cigarette executives publicly claim nicotine vaporizing devices have always been about a safer smoking alternative to traditional combustible cigarettes, looking to social media advertising tactics from the company’s inception, as well as interviews with investors and employees, children have always represented a main marketing target. Using youthful brand ambassadors that fit the young demographic and advertisements featuring millennials at parties demonstrate the company’s clear attempts to market the sleek e-cigarette device to young people. A study conducted by the University of Michigan two years ago emphasized the dramatic rise in high school students – a generation with historically low tobacco use – in just a single year. And many blame Juul Labs for their irresponsible marketing tactics that created a generation of kids addicted to nicotine.

Even scarier than the addiction it causes are the health risks. Throughout the summer of 2019, thousands of teens were hospitalized and 39 e-cigarette-related deaths were reported to the CDC. Although the vaping illness was linked to vitamin E acetate, an ingredient in illicit THC vape cartridges, legislators have, since the outbreak, enjoyed full support from concerned parents across the nation for curbing teen vaping.

Another issue with Juul Labs is its association with Big Tobacco. While it may seem as though e-cigarette companies are the tobacco industry’s biggest competitor, for the most part the tobacco industry and the vaping industry are becoming more and more intertwined. Altria, maker of Marlboro cigarettes, recently bought a 35% stake in Juul Labs for $12.8 billion, and the e-cigarette company’s CEO was replaced by K.C. Crosthwaite, an Altria executive. These changes left employees concerned about and angered by the new relationship with Altria. How can a company whose mission is to provide a safer alternative to combustible cigarettes be associated so closely with Big Tobacco?

While the vaping epidemic presents real dangers, and the specific targeting of kids seems objectionable, many wonder whether the FDA should regulate e-cigarettes quite so heavily. The regulation of the vaping industry is a case of paternalism, where one’s choices are interfered with in order to promote one’s well-being and long-term interests. Some are concerned that raising the federal minimum smoking age is an overextension of the government’s authority, especially considering there is a lack of evidence that nicotine e-cigarettes cause significant health issues. Similarly, because there are fewer immediate consequences of teen nicotine use (compared to teen alcohol use, for example), such regulations may appear overcautious. There are more practical concerns at play as well; if vape products are banned, teens may be pushed to use combustible cigarettes or illicit vaping products that have been linked to respiratory disease. Although some are concerned about the restriction of personal choice, others view such laws as similar to mandatory seatbelt and compulsory child education laws.

Issues of classism and racism are rooted in the e-cigarette industry as they were in the tobacco industry. Because a large amount of stigma surrounds combustible cigarettes in the United States, smoking cigarettes is especially frowned upon by the middle class, and, according to British economist Roger Bate, the habit is associated with those of a lower socioeconomic class. Middle-class, adult vapers are conditioned to feel ashamed for smoking traditional combustible cigarettes. Similarly, many feel wronged that e-cigarettes are being regulated so heavily when menthol cigarettes, which are claimed to be more addictive and are most commonly used by African Americans, remain on the market. Tobacco companies’ racially targeted marketing of addictive menthol cigarettes is eerily similar to Juul’s early advertising blitzes; however, it seems that it is only when “young white people [are affected], then action is taken really quickly,” according to LaTroya Hester, spokeswoman for the National African American Tobacco Prevention Network.

Ultimately, any form of governmental intervention will cause debate about which personal liberties warrant being curbed, what our “best interests” are, and who is best positioned to know what those interests actually are. Juul and other e-cigarette companies might be blameworthy, but for many it’s not clear that the government should go to such great lengths to save us from ourselves.

The Witcher and the Lesser of Two Evils

photograph of The Witcher character from a gaming expo

In Netflix’s The Witcher, we are treated to swords, sorcery, sex, and a slightly confusing plotline. More surprisingly, we also get to see an interesting take on an issue from moral philosophy: getting your hands dirty and doing the lesser of two evils.

The protagonist, Geralt, hunts monsters; but he is no mere sword-for-hire, and he will not kill innocent people. In one scene, the malicious Stregobor asks Geralt to kill Renfri, a woman Stregobor believes is cursed and has the power to destroy everybody. (Not to mention, she wants to kill Stregobor.) Stregobor implores Geralt to kill Renfri, suggesting that it is “the lesser evil.” Geralt’s response is fascinating: “Evil is evil… lesser, greater, middling. It’s all the same. If I have to choose between one evil and another, then I prefer not to choose at all.”

Geralt doesn’t want to get his hands dirty. The problem of dirty hands is often presented as a political problem. To take Michael Walzer’s example, should a political leader order the torture of a terrorist in order to find out the location of a series of bombs that will harm innocent citizens? The political leader has to do something bad—something one would much prefer not to do, something with a moral cost—in order to secure a better state of affairs. But these cases need not be so grand, we might make minor moral sacrifices or do things that are a little grubby in order to achieve worthy political goals. And we can find these cases outside of the political sphere: you might have to lie to a friend to save their feelings or ignore somebody’s needs in order to help somebody else who is in a worse position.

One might think that there is no moral cost to doing the lesser of two evils. If you do the best thing, can it really be evil? And shouldn’t we be content to bring about the lesser of two evils, given that it avoids a greater evil?

Bernard Williams thought that it can still be evil and that there can be reasons why we might want to avoid bringing about that evil. Take one of Williams’s most famous examples: Jim, an explorer, stumbles into a scene where twenty people are condemned to be executed. Because Jim is an honored guest, the executioner offers to free all but one of the condemned, if Jim wants the honor of killing that one; if Jim refuses, all twenty will be killed. The condemned beg Jim to kill one of them. For utilitarians (the specific targets of Williams’s critique), it doesn’t matter that Jim has to kill someone—what matters is that either twenty people will die, or one will die, and it is far better that only one dies. Williams’s point was that it clearly does matter, especially to Jim, that to secure this optimal state of affairs Jim has to kill somebody.

What we do matters to us, and this is often very significant. In doing the lesser of two evils, perhaps we lose something, perhaps we harm someone, perhaps there is something “distressing or appalling”—such as in Jim’s case—or even just a little off about what we do, or perhaps it simply is not the sort of thing done by “honourable and scrupulous people.” The point is that even if it is the best option, the lesser of two evils can still be genuinely evil and we can be averse to doing it.

Ethical theory should leave some space for self-regard and the fact that actions can implicate us in ways that we may deeply wish to avoid. This might help to justify Geralt’s position: he would rather not choose, because if he chooses, he is forced to do evil and get his hands dirty. Still, in Jim’s case, Williams thinks that Jim should get his hands dirty; Williams’s point is that our involvement matters, it is not the stronger claim that we are always justified in keeping our hands clean.

But Geralt takes this to an extreme: he recognizes the lesser evil, but he’ll do all that he can to avoid doing it himself. Evil is evil, and he prefers not to choose at all. But this means that Geralt would allow a greater evil to take place, rather than commit a lesser or a middling evil himself. There is something noble about this, but there is also something distastefully self-regarding: in refusing to bring about evil in order to prevent greater evils, Geralt insulates himself from what happens in the world. He shows that, to some extent, he doesn’t care what happens to people, as long as he isn’t involved.

But Geralt’s position is not just self-regarding, it is unrealistic. Geralt doesn’t have the luxury of not choosing at all; the greater evil, if Stregobor is right, is not trying to kill Renfri. By not choosing, Geralt chooses the supposedly-greater evil. Williams was keen to emphasize this: sometimes whatever we do might be evil. If Jim turned down the chance to shoot one of the condemned, they would all die; if Walzer’s political leader refused to order the torture of the terrorist, innocent citizens would die. Even if there is something noble about Geralt’s desire to avoid getting his hands dirty, sometimes he simply might not have the luxury of choosing not to choose. And when he realizes that he must choose, he might be less committed to the idea that evil—lesser, greater, middling—is all the same.

U-Haul’s Anti-Smoking Workplace Wellness

photograph of overcrowded UHaul rental lot

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


U-Haul International recently announced that, beginning next month, the company will not hire anyone who uses nicotine products (including smoking cessation products like nicotine gum or patches). The new rule will take effect in the 21 states that do not have smoker protection laws. The terms of employment will require new hires to submit to nicotine screenings, placing limits on employees’ lawful, off-duty conduct.

The truck and trailer rental company has defended the new policy as nothing more than a wellness initiative. U-Haul executive Jessica Lopez has described the new policy as “a responsible step in fostering a culture of wellness at U-Haul, with the goal of helping our Team Members on their health journey.” But as the LA Times points out, “Simply barring people from working at the company doesn’t actually improve anyone’s health.”

U-Haul, however, is not alone, and employer bans on smoking are not new. Alaska Airlines has had a similar policy since 1985, and many hospitals have had nicotine-free hiring policies for over a decade. But there are important distinctions between these past policies and U-Haul’s new policy. Alaska Airlines’ ban was, at least in part, justified by the risk and difficulty of smoking on planes and in places surrounding airports; smoking simply isn’t conducive to that particular work environment. Meanwhile, hospitals’ change in hiring process was meant to support the healthy image they were trying to promote, and to demonstrate their commitment to patient health.

Interestingly (and importantly), U-Haul has not defended its new policy as a measure to improve customer experience or improve employees’ job performance. The (expressed) motivation has centered on corporate paternalism – U-Haul’s policy intends to protect their (prospective) employees’ best interests against their employees’ expressed preferences – and this has significant implications. This isn’t like screening for illicit drugs or forbidding drinking on the job. As Professor Harris Freeman notes, it “makes sense to make sure people are not intoxicated while working … there can be problems with safety, problems with productivity.” But in prohibiting nicotine use, U-Haul “seems like they’re making a decision that doesn’t directly affect someone’s work performance.” Unlike Alaska Airlines or Cleveland Clinic,

“This is employers exercising a wide latitude of discretion and control over workers’ lives that have nothing to do with their own business interests. Absent some kind of rationale by the employer that certain kind of drug use impacts job performance, the idea of telling people that they can’t take a job because they use nicotine is unduly intrusive into the personal affairs of workers.”

Similarly, the ACLU has argued that hiring policies like these amount to “lifestyle discrimination” and represent an invasion of privacy whereby “employers are using the power of the paycheck to tell their employees what they can and cannot do in the privacy of their own homes.” This worry is further compounded by the fact that,

“Virtually every lifestyle choice we make has some health-related consequence. Where do we draw the line as to what an employer can regulate? Should an employer be able to forbid an employee from going skiing? or riding a bicycle? or sunbathing on a Saturday afternoon? All of these activities entail a health risk. The real issue here is the right of individuals to lead the lives they choose. It is very important that we preserve the distinction between company time and the sanctity of an employee’s private life. Employers should not be permitted to regulate our lives 24 hours a day, seven days a week.”

Nicotine-free hiring policies or practices that levy surcharges on employees who smoke tend to rely heavily on the notion of individual responsibility: employees should be held accountable for the financial burden that their personal choices and behaviors place on their employers and fellow employees. But these convictions seem to ignore the fact that smoking is highly addictive, and 88% of smokers formed the habit before they were 18. Given this, the issue of accountability cannot be settled so cleanly.

Apart from concerns of privacy or questions about individual responsibility, smoking-based employment bans present a problem for equality of opportunity. According to the CDC, about 14 percent of adults in the U.S. smoke cigarettes. But smokers are not evenly distributed across socioeconomic and racial groups. For instance, half of unemployed people smoke; 42% of American Indian or Alaska Native adults smoke; 32% of adults with less than a high school education smoke; and 36% of Americans living below the federal poverty line are smokers. It’s not hard to see that nicotine-free hiring practices disproportionately burden vulnerable populations who are already greatly disadvantaged. U-Haul’s low-wage, physical labor jobs, from maintenance workers to truck drivers to janitors, are closed off to those who may need them most (on grounds that have nothing to do with a candidate’s ability to perform job-related tasks).

This is no small thing; the Phoenix-based moving-equipment and storage-unit company employs roughly 4,000 people in Arizona and 30,000 across the U.S. and Canada. Lopez has claimed that “Taking care of our team members is the primary focus and goal” and that decreasing healthcare costs is merely “a bonus,” but it’s hard to separate the two. A recent study by Ohio State University estimated the cost employees who smoke pose to employers. Added insurance costs as well as the productivity lost to smoke breaks and increased sick time amounted to nearly $6,000 annually. Clearly, employee health, insurance costs, and worker output are all linked, and all contribute directly to a company’s profitability. The question is who should have to pay the cost for the most preventable cause of cancer and lung disease: employers or employees?

It may be that the real villain here is employer-sponsored insurance. If one’s employment were decoupled from one’s healthcare, companies like U-Haul might be less invested in meddling with their employees’ off-duty choices. They have much less skin in the game if their employees’ behaviors aren’t so intimately tied to the company’s bottom line. Unless healthcare in the US changes, we may be destined to constantly police the line separating our private lives from our day jobs.

The Insufficiency of Black Box AI

image of black box spotlighted and on pedestal

Google and Imperial College London have collaborated in a trial of an AI system for diagnosing breast cancer. Their most recent results have shown that the AI system can outperform the uncorroborated diagnosis of a single trained doctor and perform on par with pairs of trained diagnosticians. The AI system was a deep learning model, meaning that it works by discovering patterns on its own by being trained on a huge database. In this case the database was thousands of mammogram images. Similar systems are used in the context of law enforcement and the justice system. In these cases the learning database is past police records. Despite the promise of this kind of system, there is a problem: there is not a readily available explanation of what pattern the systems are relying on to reach their conclusions. That is, the AI doesn’t provide reasons for its conclusions and so the experts relying on these systems can’t either.
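
To see what this lack of reasons looks like in practice, here is a minimal sketch of a generic machine-learned classifier. It is an illustration on synthetic data, not the actual mammogram system (whose internals are not described here): the model returns a verdict and a confidence score, but nothing that could serve as a reason.

```python
# Minimal sketch of a "black box" classifier: it yields predictions and
# probabilities, but no human-readable reasons for them. Synthetic data
# stands in for a real training database of imaging features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for thousands of labeled cases.
X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The model answers "positive or not?" with a label and a probability...
print(model.predict(X_test[:1]))          # e.g., [1]
print(model.predict_proba(X_test[:1]))    # e.g., [[0.18, 0.82]]
# ...but nothing in this output tells the patient or doctor *why*.
```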

AI systems that do not provide reasons in support of their conclusions are known as “black box” AI. In contrast to these are so-called “explainable AI.” This kind of AI system is under development and likely to be rapidly adopted within the healthcare field. Why is this so? Imagine visiting the doctor and receiving a cancer diagnosis. When you ask the doctor, “Why do you think I have cancer?” they reply only with a blank stare or reply, “I just know.” Would you find this satisfying or reassuring? Probably not, because you have been provided neither reason nor explanation. A diagnosis is not just a conclusion about a patient’s health but also the facts that lead up to that conclusion. There are certain reasons that the doctor might give you that you would reject as reasons that can support a cancer diagnosis.

For example, an AI system designed at Stanford University to help diagnose tuberculosis used non-medical evidence to generate its conclusions. Rather than just taking into account the images of patients’ lungs, the system used information about the type of X-ray scanning device when generating diagnoses. But why is this a problem? If the information about what type of X-ray machine was used has a strong correlation with whether a patient has tuberculosis, shouldn’t that information be put to use? That is, don’t doctors and patients want to maximize the number of correct diagnoses they make? Imagine your doctor telling you, “I am diagnosing you with tuberculosis because I scanned you with Machine X, and people who are scanned by Machine X are more likely to have tuberculosis.” You would not likely find this a satisfying reason for a diagnosis. So if an AI is making diagnoses based on such facts, this is a cause for concern.

A similar problem is discussed in philosophy of law when considering whether it is acceptable to convict people on the basis of statistical evidence. The thought experiment used to probe this problem involves a prison yard riot. There are 100 prisoners in the yard, and 99 of them riot by attacking the guard. One of the prisoners did not attack the guard and was not involved in planning the riot. However, there is no way of knowing of any specific prisoner whether they did, or did not, participate in the riot. All that is known is that 99 of the 100 prisoners participated. The question is whether it is acceptable to convict each prisoner based only on the fact that it is 99% likely that they participated in the riot.

Many who have addressed this problem answer in the negative — it is not appropriate to convict an inmate merely on the basis of statistical evidence. (However, David Papineau has recently argued that it is appropriate to convict on the basis of such strong statistical evidence.) One way to understand why it may be inappropriate to convict on the basis of statistical evidence alone, no matter how strong, is to consider the difference between circumstantial and direct evidence. Direct evidence is any evidence which immediately shows that someone committed a crime. For example, if you see Robert punch Willem in the face you have direct evidence that Robert committed battery (i.e., causing harm through touch that was not consented to). If you had instead walked into the room to see Willem holding his face in pain and Robert angrily rubbing his knuckles, you would only have circumstantial evidence that Robert committed battery. You must infer that battery occurred from what you actually witnessed.

Here’s the same point put another way. Given that you saw Robert punch Willem in the face, there is a 100% chance that Robert battered Willem — hence it is direct evidence. On the other hand, given that you saw Willem holding his face in pain and Robert angrily rubbing his knuckles, there is somewhere between a 0% and a 99% chance that Robert battered Willem. The same applies to any prisoner in the yard during the riot: given that they were in the yard during the riot, there is at best a 99% chance that the prisoner attacked the guard. The fact that a prisoner was in the yard at the time of the riot is a single piece of circumstantial evidence in favor of the conclusion that that prisoner attacked the guard. A single piece of circumstantial evidence is not usually taken to be sufficient to convict someone — further corroborating evidence is required.

The same point could be made about diagnoses. Even if 99% of people examined by Machine X have tuberculosis, simply being examined by Machine X is not a sufficient reason to conclude that someone has tuberculosis. No reasonable doctor would make a diagnosis on such a flimsy basis, and no reasonable court would convict someone on the similarly flimsy basis in the prison yard riot case above. Black box AI algorithms might not be basing diagnoses or decisions about law enforcement on such a flimsy basis. But because this sort of AI system doesn’t provide its reasons, there is no way to tell what makes its accurate conclusions correct, or its inaccurate conclusions incorrect. Any domain like law or medicine where the reasons that underlie a conclusion are crucially important is a domain in which explainable AI is a necessity, and in which black box AI must not be used.

Can Spiritual Needs Be Met by Robots?

photograph of zen garden at Kodaiji temple

Visitors to the 400-year-old Kodaiji Temple in Kyoto, Japan can now listen to a sermon from an unusual priest—Mindar—a robot designed to resemble Kannon, the Buddhist deity of mercy. In a country in which religious affiliation is on the decline, the hope is that this million-dollar robot will do some work toward reinvigorating the faith.

For some, the robot represents a new way of engaging with religion. Technology is now a regular part of life, so integrating it into faith tradition is a way of modernizing religious practice that also retains and respects its historical elements. Adherents may feel increasingly alienated from conventional, ancient ways of conveying religious messages. But perhaps it is the way that the message is being presented, and not the message itself, that is in need of reform. Robotic priests pose an intriguing solution to this problem.

One unique and potentially useful feature of the robot is that it will never die. It is currently not a machine that can learn, but its creators are hopeful that it can be tailored to become one. If this happens, the robot can share with its ministry all of the knowledge that comes with its varied interactions with the faithful over the course of many years. This is a knowledge base that no mortal priest could hope to obtain.

Mindar is unusual but not unique among priests. A robotic Hindu priest also exists, programmed to perform the Hindu aarti ritual. In the Christian tradition there is the German Protestant BlessU-2, a much less humanoid robot programmed to commemorate the passing of 500 years since Martin Luther wrote his Ninety-Five Theses by delivering 10,000 blessings to visiting faithful. For Catholics, there is SanTO, a robotic priest designed to provide spiritual comfort to disadvantaged populations such as the elderly or infirm, who may not be able to make it to church regularly, if at all.

To many, the notion of a robotic priest seems at best like a category mistake and at worst like an abomination. For instance, many religious people believe in the existence of a soul, and following a religious path is often perceived as a way of saving that soul. A robot that does not have an immortal soul is not well suited to offer guidance to beings that possess such souls.

Still others may think of the whole thing as a parlor trick—a science fiction recasting of medieval phenomena like fraudulent relics or the selling of indulgences. It is faith, love of God, or a commitment to living a particular kind of life that should bring a person to a place of worship, not the promise of blessings from a robot.

To still others, the practice may seem sacrilegious. Seeking the religious counsel of a robot, or venerating the wisdom of an entity constructed by a human being, may be impious in the same way that worshiping an idol is impious.

Others may argue that robotic ministry misses something fundamental about the value of priesthood. Historically, priests have been persons. As persons, they share certain traits in common with their parishioners. They are mortal and they recognize their own mortality. They take themselves to be free and they experience the anguish that comes with the weight of that freedom. They struggle to be the best versions of themselves, tempted regularly by the thrills in life that might divert them from that path. Persons are often the kinds of beings that are subject to weakness of will—they find themselves doing what they know is against their own long term interests. Robots don’t have these experiences.

Priests that are persons can experience awe in response to the beauty and magnitude of the universe and can also experience the existential dread that sometimes comes along with being a mortal, embodied being in a universe that sometimes feels incomprehensibly cold and unfair. For many, religion brings with it the promise of hope. Priests are the messengers of that hope, and they are effective because they deliver the message as participants in the human condition.

Relatedly, one might think that a priest is a special form of caregiver. In order to give effective care, the caregiver must be capable of experiencing empathy. Even if robots are programmed to perform tasks that satisfy the needs of parishioners, this assistance wouldn’t be conducted in an empathetic way, and the action wouldn’t be motivated by a genuine attitude of care for the parishioner.

One might think that human priests are in a good position to give sound advice. Though that may (in some cases) be true, there is no reason to think that robots can’t also give good advice if they are programmed with the right kind of advice to give. What’s more, they may be uncompromised by the cognitive bias and human frailty of a typical priest. As a result, they may be less likely to guide someone astray.

Of course, as is often the case in conversations about robotics and artificial intelligence, there are some metaphysical questions lingering behind the scenes that may challenge our initial response to the appropriateness of robotic priests. One argument against priests like Mindar may be that the actions that Mindar performs are, in some way, inauthentic because they come about, not as the result of the free choices that Mindar has made, but instead as a result of Mindar’s programming. If we think this is a significant problem for Mindar and that this consideration precludes Mindar from being a priest, we’ll have to do some careful reflection on the human condition. To what degree are human beings similarly programmed? Physical things are subject to causal laws and it seems that those causal laws, taken together with facts about the universe, necessitate what those physical things will do. Human beings are also physical things. Are our actions causally determined? If so, are the actions of a human priest really any more authentic than the actions of a robotic one? Even if facts about our physical nature are not enough to render our actions inauthentic, human beings are also strongly socially conditioned. Does this social conditioning count as programming?

In the end, these considerations may ultimately point to a single worry: technology like this threatens to further alienate us from ourselves, our situation, and our fellow human beings. For many, the ability to respond to vital human interests like love, care, sex, death, hope, suffering, empathy, and compassion must come from genuine, imperfect, spontaneous human interaction with another struggling in a similar predicament. Whatever response we receive may prove far less important than the comfort that comes from knowing we are heard.

DNA Dating

photograph of person looking at phone display of tinder profile

Recently, geneticist George Church attracted controversy for his involvement in Digid8, a startup company which proposes to use DNA comparisons in dating apps to help limit the probability that two people who share a genetic mutation expose their potential offspring to serious genetic disease. The idea has been met with charges of racism and transphobia, yet Church maintains that this is an important step towards the elimination of all genetic diseases. While charges of Nazi-like eugenics projects are premature at this stage, there are genuine moral dilemmas involved with projects like this.

The central idea behind the project is to let users know about their own and a potential partner’s genetic background. For instance, it will inform them if they carry recessive genes connected to genetic illnesses like sickle cell anemia or Tay-Sachs. It is possible for a person to carry such genes but not have the disease if they also carry a healthy dominant gene. However, if two potential parents both have the recessive gene, then their offspring has a twenty-five percent chance of suffering from the disease. Through a dating app, potential matches would be made aware of this and could be given the opportunity to plan accordingly.
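
The twenty-five percent figure is just the arithmetic of single-gene, autosomal recessive inheritance. The sketch below is a deliberately simplified illustration of that Punnett-square calculation; it is not anything specific to Digid8’s service.

```python
from itertools import product

# Each carrier parent has one healthy dominant allele ("A") and one
# disease-linked recessive allele ("a"); a child inherits one allele
# from each parent with equal probability.
parent_1 = ["A", "a"]
parent_2 = ["A", "a"]

outcomes = list(product(parent_1, parent_2))  # AA, Aa, aA, aa

# Only the "aa" combination produces the disease; "Aa"/"aA" children are
# unaffected carriers, like their parents.
affected = [pair for pair in outcomes if pair == ("a", "a")]
carriers = [pair for pair in outcomes if set(pair) == {"A", "a"}]

print(len(affected) / len(outcomes))  # 0.25 -- the 25% chance cited above
print(len(carriers) / len(outcomes))  # 0.5  -- half the children are carriers
```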

As genetic testing gets cheaper, projects like this become more possible. Ethical concerns are abundant, ranging from the issues involved with sharing genetic information with a corporation to the potential to share additional genetic markers that go beyond disease. We also need to distinguish between this specific proposal and Digid8’s implementation of it, and the more general idea which could be implemented in different ways. It is possible for specific proposals to be implemented poorly or unethically apart from the ethics of whether such DNA comparisons should be used at all. Since the specific proposals are still in planning and because other companies may follow suit, I will focus on the general moral concerns.

DNA comparison to help make informed pregnancy considerations is not new. Existing prenatal genetic screening already offers us the opportunity to make these types of decisions. So there is a good deal of overlap concerning the ethical dimensions at play. In addressing the issue of eugenics in prenatal screening, Tom Shakespeare distinguishes between strong eugenics where there is an effort at population-level control of reproduction at the state-level, and weak eugenics which promotes technologies of reproductive selection via non-coercive individual choices. Such a distinction allows us to “avoid rhetorical excesses” and “disingenuous dissociations with unfortunate historical precedents.”

But we might also be concerned about the kinds of conditions being screened for. As Shakespeare notes, not every form of genetic variation constitutes a disease. While social barriers can make life harder, people born with Down’s syndrome or congenital deafness may be healthy individuals presenting no medical problems. On the other hand, there are genetic conditions where, regardless of social context, one will experience suffering, pain, and premature death. Thus, one of the most important questions is about which conditions and genetic markers are appropriate to identify and screen for.

An important factor that also needs to be considered when addressing genetic illness is the fact that several thousand illnesses are triggered by environmental factors. For instance, while there is a genetic component to diabetes, developing the condition is heavily dependent on individual diet. With different conditions being a result of genetic/environmental interaction, determining the ethics of screening for certain conditions becomes a tricky matter.

There are also additional concerns regarding public information. Currently, prenatal screening practices done in hospitals and clinics afford the possibility of consultations and counseling with obstetricians, geneticists, or pediatricians, while DNA comparison through a social media app does not afford such consultation unless it is specifically sought. This means that there is a greater chance of potential couples making uninformed or badly informed decisions. If a dating app presents information about possible genetic conditions, the person needs to have the reliable resources necessary to make informed choices about who they choose to date.

Despite these concerns, there are good reasons to pursue services that allow for DNA comparison. Certain diseases are responsible for much suffering, and so there are ethical reasons to try to prevent a child from being born with them. As Shakespeare argues,

“conditions like Tay-Sachs disease or anencephaly causes major suffering, leading to a very premature death. It is important to argue that living as a disabled person is a viable and valuable form of existence, but that existing without any possibility of a real life is not living at all.”

Services like that proposed by Digid8 may help prevent significant suffering.

There is also value in knowing what one may be in for in a potential pregnancy, even if this information may not affect the decision to have a child. In prenatal testing, women sometimes seek prescreening even while knowing that the results will not affect their choices. The information can still be valuable to a candidate parent about what they may expect. Similarly, as Church notes, the Digid8 service is not intended to block dates for people carrying dominant disease genes. Instead, this kind of information can be helpful for potential couples in their plans for creating a family.

On the other hand, there are obvious ethical concerns with DNA comparison. While it may only constitute weak eugenics in the democratic world, where there is more choice and freedom and where laws and regulations will try to protect human rights, other areas of the world may not use the technology in the same way. For example, while democratic nations may limit the uses of the technology to informing users about serious and debilitating genetic conditions, authoritarian nations may seek to expand the DNA comparisons to support pseudo-scientific aims or may use the information to restrict and isolate groups of people and create greater social barriers. Thus, such technology could also be used for strong eugenics purposes. Without careful ethical oversight the technology could be used and tailored for more niche purposes like designer babies, to prevent the birth of gay and transgender people, or to help groups of people engage in self-segregation practices.

There is much potential for good or bad in DNA comparison services like the one Church proposes, and discovering and solving the ethical problems that arise will likely take time. Sorting out the ethical issues involved in situations like this will be aided by input from those who have genetic conditions. Their experiences and testimonials may vary, but we need to listen to people directly affected. Those who suffer from Huntington’s are far more likely to favor prenatal diagnosis than those who are merely at risk. Thus, working out what kinds of laws, policies, and codes of conduct may be required for DNA comparison must take these experiences into account.

Dry January

photograph of New Year's Resolutions list

The beginning of the year is often the time when people start thinking about the changes they want to make in their lives. While traditionally this has come in the form of New Year’s resolutions, it has become more popular recently to see January in particular as a time to abstain. Two of the more recent popular forms of January abstinence are “Dry January” – in which one does not drink any alcohol for the month – and “Veganuary” – in which one eats only a vegan diet for the month. There is, of course, nothing special about January that makes it a particularly good month in which to remove alcohol or animal products from one’s diet, but it is no doubt the feeling of a fresh beginning that comes with the new year, combined with potential regrets about overindulgence over the holidays, that explains how these trends have caught on.

There seems to be something virtuous about abstaining from things that you like but are bad for you. So should you be joining in and cutting out alcohol and animal products for the month? Is this really what the virtuous life requires?

First things first: there are clear health benefits to drinking less alcohol. While we have no doubt all come across articles of varying degrees of clickbait-ness proclaiming the health benefits of moderate drinking (perhaps “antioxidants” are mentioned), there is little reason to think that alcohol consumption can actively do any good, at least in terms of one’s physical health. That’s not to say that any level of alcohol consumption will necessarily cause a great amount of harm, but rather that it should probably be seen as an indulgence, as opposed to something that’s actually good for you. There is also reason to think that reducing meat consumption (especially red meat) can have many health benefits; additionally, there are persuasive ethical arguments – both in terms of preventing harm to animals and protection of the environment – that should incline us towards eating less meat. So it does seem that we have plenty of reason to cut down on both alcohol and meat.

That doesn’t necessarily mean that complete abstinence is what’s required, though. First, it is not certain that abstinence months will, in fact, accomplish the results they aim at. On the one hand, there are concerns that there is little empirical evidence about the effectiveness of abstinence campaigns like Dry January, and that as a result it’s not clear whether such campaigns might actually result in more harm in some portions of the population. Ian Hamilton at York University worries that,

“Although not the intention, people may view their 31 days of abstinence as permission to return to hazardous levels of consumption till next New Year’s day. ‘I’ve had a month off, so now I can drink as much as I did before, ignoring the need for regular breaks from alcohol.’”

On the other hand, however, there are some tentative studies that do suggest that participating in Dry January does, in fact, result in lower alcohol consumption 6 months later, although there are perhaps some limitations to such studies (self-reports of alcohol consumption generally need to be taken with a grain of salt, there are many variations within a population that are difficult to control for, etc.)

With regards to Veganuary, some have worried that the difficulty of changing one’s diet so drastically – especially if one is used to eating a lot of meat – could backfire, resulting in people being less likely to try to adopt such a diet long-term. Some research suggests that campaigns aimed at reducing, as opposed to completely eliminating, meat from one’s diet can be effective at reducing meat consumption long-term. There is also the worry that such diets are just not feasible for those who do not have access to good vegan food options, or who are unable to afford them.

So should you participate in Dry January and Veganuary? Well, that depends. As I mentioned above, there is nothing about the month of January that makes it a particularly important time to start being virtuous. If there are moral reasons to drink less and eat fewer animal products, then, these reasons apply just as equally at all other times of year, as well. And just because you’ve done what you ought to for a month does not mean that you’re off the hook until 2021. There is also reason to think that aiming towards moderation as opposed to immediate and complete elimination can be a good way of making long-term positive changes. If not drinking for all of January means that you’ll just drink twice as much in February to make up for it then you’re hardly doing any good.

One undoubtedly positive aspect of these January abstinence campaigns, however, is that they may encourage people to reflect upon habits that might require adjusting. And concerns about rebound effects aside, there seems to be very little harm that could result from taking the month off from drinking alcohol or eating meat. With regards to Dry January in particular, Ian Gilmore from Liverpool University suggests that, “Until we know of something better, let’s support growing grassroots movements like Dry January and Dry July in Australia and take a month off.”

Collective Action and Climate Change: Consumption, Defection, and Motivation

photograph of dry, cracked earth with grass growing on a few individual pieces

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


Last month, the United Nations held talks regarding international climate agreements in hopes of abating and adapting to the changes to our global environment caused by human industrialization and emissions. With the president of the United States vocally skeptical about the moral imperative to act, other countries have been considering defection from previous joint commitments. If the United States continues to consume resources at its current rate, the United Nations’ climate goals will not be achievable. This is a feature of climate change agreements: they impact nations differently, irrespective of any country’s particular contribution to the problem. Thus, tied up with the responsibility to commit to taking action is the question of whether there should be different burdens depending on the level of industrialization and development (i.e., consumption) of the nation in question.

Environmental ethicist Martino Traxler distinguishes between two approaches to assigning the duties and burdens of responding to climate change. The first approach would be “just”, in that it takes into consideration the historical context in which we now find ourselves, as well as the power/structure dynamics at play. For instance, placing identical duties on all countries to reduce energy consumption equally doesn’t attend to countries’ differing ability to pay, their varying causal contributions to the current state of things, or the extent to which various countries have unequally benefited from previous policies and activities that were environmentally damaging. For instance, some developing nations are improving economically by taking advantage of some less-clean technologies, and it is arguably hypocritical for developed nations with a history of colonialism, imperialism, or military interference/manipulation to intervene at this stage and charge the developing nations with the responsibility to reduce their pollution. Such a policy would slow the progress toward leveling the international playing field. Countries that are developing now, changing the shape of their economies in order to grow out of poverty, may have a greater claim to use environmentally damaging resources than the countries that put the globe in a state of climate crisis while also creating the conditions of global poverty that such resource use would go some distance toward alleviating.

Thus, some examples of just approaches to alleviating the impact of climate change would be to make:

  1. Benefiters pay (in proportion to benefits)
  2. Polluters pay (in proportion to responsibility)
  3. Richest pay (in proportion to their ability)

This would distribute burdens unevenly internationally, and typically nations that have more industrialized infrastructure would bear heavier burdens than those nations that are in the process of building their economies. This can be concerning for the most-developed nations and the nations that have experienced the most power and privilege historically. For nations that already have systems of infrastructure that involve emitting greenhouse gases, for instance, committing to being more environmentally responsible according to the just approach can amount to committing to a strikingly different way of life. Some still may recall George H. W. Bush’s declaration at the 1992 Earth Summit in Rio de Janeiro that “The American way of life is not up for negotiations. Period.” In 2012 in Rio de Janeiro, delegates noted that a child born in the developed world consumes 30 to 50 times as much water as one born in the developing world, emphasizing the different relations to resource consumption depending on the ways of life established in one’s society.

These features highlight how a “just” approach to burden sharing may disincentivize the richest, most internationally powerful nations from agreeing to combat climate change. The alternative is a “fair” model of burden allocation, which has each nation share evenly in the responsibility to remain in a safe zone of resource consumption and emission standards. The fair approach presents a clean slate and looks forward, rather than backward at the contextual perspective of the just approach. The approach is fair in the sense that there is an even distribution of responsibility, but most find the just approach more appropriate. Are there reasons why we might find the fair model appealing? The breakdown at the UN last month suggests several.
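
Before turning to those reasons, the difference between the two allocation styles can be made concrete with a toy example. The figures below are invented purely for illustration; they are not drawn from any actual agreement or dataset.

```python
# Toy contrast between a "just" (proportional) and a "fair" (even) division
# of an emissions-reduction burden. All figures are invented for illustration.
total_cut_needed = 120.0   # units of emissions to eliminate globally

# Hypothetical shares of historical emissions for three nations.
historical_share = {"Nation A": 0.60, "Nation B": 0.30, "Nation C": 0.10}

just_burden = {n: total_cut_needed * share for n, share in historical_share.items()}
fair_burden = {n: total_cut_needed / len(historical_share) for n in historical_share}

print(just_burden)   # {'Nation A': 72.0, 'Nation B': 36.0, 'Nation C': 12.0}
print(fair_burden)   # {'Nation A': 40.0, 'Nation B': 40.0, 'Nation C': 40.0}
# The biggest historical emitter faces a much heavier "just" burden, which is
# precisely what may tempt it to walk away from the agreement.
```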

When we face an issue like climate change, nations need to determine how to act together. It’s difficult enough to determine the path for a single country or nation, but collective action like this requires agreement and commitment that structurally resembles classic dilemmas from game theory and economics. Traxler argues that because of these structural similarities, we should opt for fair rather than just allocations. In short, because the cost of the rich countries defecting is so great, we need to construct agreements that don’t over-burden them. As we see this year with countries like China and India hesitating to make commitments to change if the U.S. isn’t on board, Traxler may be onto something. In modeling climate change agreements in terms of incentivizing the related parties not to defect, Traxler is suggesting that we face an international Prisoner’s Dilemma.

The Prisoner’s Dilemma is a classic problem in game theory. The puzzle arises when what is in the best interest of a collective diverges from what is in the interest of an individual. The Prisoner’s Dilemma is set up as follows: two burglars (or criminals of some sort) are arrested and separated so they cannot coordinate. The authorities are attempting to get a confession from at least one of the burglars to aid in their investigation and conviction. If both burglars remain silent, they will each receive a sentence of 1 year. The authorities are attempting to incentivize cooperating with them because they would get information and successful convictions. However, the information will be most useful if one burglar cooperates and turns on the other, allowing the authorities to “throw the book” at one of them. Therefore, if one of the burglars betrays their comrade and gives information to the authorities while the other remains silent, this would result in the betrayer going free and the silent conspirator getting a 3-year sentence. If both burglars attempt to achieve this end (confessing in hopes that the other remains silent), the authorities have information on both of them and neither can go free. Instead, both will serve 2 years. Each burglar thus faces the decision of whether to cooperate with their conspirator or betray them and confess to the authorities. The possible combinations and outcomes are summarized below.
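
The following sketch simply encodes the four outcomes described above and checks that each burglar does better by betraying no matter what the other does; the sentence lengths are the ones from the story, nothing more.

```python
# Payoff table for the Prisoner's Dilemma described above.
# Entries are (years for burglar 1, years for burglar 2); fewer years is better.
sentences = {
    ("silent", "silent"): (1, 1),   # both stay quiet
    ("silent", "betray"): (3, 0),   # burglar 1 stays quiet, burglar 2 confesses
    ("betray", "silent"): (0, 3),   # burglar 1 confesses, burglar 2 stays quiet
    ("betray", "betray"): (2, 2),   # both confess
}

# Whatever burglar 2 does, burglar 1 serves less time by betraying.
for other_choice in ("silent", "betray"):
    years_if_silent = sentences[("silent", other_choice)][0]
    years_if_betray = sentences[("betray", other_choice)][0]
    print(other_choice, years_if_betray < years_if_silent)   # True both times

# So both betray and serve 2 years each, even though mutual silence
# (1 year apiece) is better for the pair taken collectively.
```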

The reason this poses a game theoretic dilemma is that the ideal outcome for each burglar is to exploit the cooperation of the other (i.e., to betray while the other stays silent). This creates a scenario where what is in the individual’s best interest is at odds with what is in their collective interest; if we consider the burglars together as a group and evaluate what would get the best outcome overall, then they should both remain silent. If, however, the individual aims for his or her best individual outcome, then there is great pressure to defect so as to potentially receive the best individual outcome (go free) and always avoid the worst (3 years).

As individual nations, it is in our best interest to over-consume and hope that the rest of the international community behaves responsibly and attempts to save the planet. That way we enjoy the benefits of our consumption of resources AND the benefits of the rest of the international community’s conservation efforts. BUT! If everyone in the international community behaves this way (parallel to both burglars betraying their partnership), we end up in a bad scenario. What is in the interest of the collective is for everyone to cooperate, but individual interest encourages us to over-consume.

The structure of the Prisoner’s Dilemma arises in a few areas of public life, not just among police bargaining with alleged criminals. The key to the dilemma is that what is collectively rational comes apart at times from what is individually rational. This is also the tension in Tragedy of the Commons cases. The Tragedy of the Commons describes shared-resource scenarios in which the best thing to do at the individual level is to consume more than others, but this focus on immediate use undermines everyone’s shared ability to use the resource over time.

First described by William Forster Lloyd in the 19th century, the Tragedy can be illustrated with a river beside a village, sustaining an ecosystem that contains fish. If the village fishes the river at some reasonable rate over time, the river replenishes itself and feeds the village indefinitely. So, collectively speaking, it's rational for the village to keep fishing at this restrained rate. At the individual level, however, each villager has access to a resource that is potential income, and though it undermines the future of the river and the other villagers' use of it, overfishing to get more food and more resources is in each individual's best interest. Thus, just as the burglars face conflicting strategies in the Prisoner's Dilemma, shared-resource scenarios create tension between individual and collective interests.
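A toy simulation makes the dynamic vivid. In the sketch below, the growth rate, carrying capacity, and catch sizes are invented for illustration (they are not drawn from Lloyd or the article): restrained harvesting settles at a stable fish stock, while each villager taking a little more drives the fishery toward collapse.

```python
def simulate(stock, villagers, catch_each, growth_rate=0.3, capacity=2000, years=25):
    """Yearly fish stock under logistic regrowth and a fixed total harvest."""
    history = []
    for _ in range(years):
        stock += growth_rate * stock * (1 - stock / capacity)  # river replenishes
        stock = max(stock - villagers * catch_each, 0)          # villagers take their haul
        history.append(round(stock))
    return history

# Restrained harvesting is sustainable; each villager grabbing roughly twice as much
# exceeds what the river can replenish, and the shared stock collapses.
print("restrained:  ", simulate(stock=1000, villagers=10, catch_each=12))
print("over-harvest:", simulate(stock=1000, villagers=10, catch_each=25))
```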

In order to keep countries from defecting and abandoning their commitments to abate climate change, we may need to "sweeten the deal" a bit, according to Traxler, and not insist on the just approach that would perhaps be too burdensome to the richest, most industrialized nations. This would give them fewer reasons to over-consume and ignore the agreements. However, it would also perhaps produce less pressure for the richest and most able countries to aggressively research "cleaner" technological alternatives that would let them continue something like their current ways of life.

Military Operations and Questions of Collective Responsibility

photograph of soldiers in uniform saluting

On January 3, while at a ceremony for Evangelical Christians in Miami, Donald Trump announced the execution of Iranian General Qasem Soleimani, saying “…he was planning a very major attack and we got him.”

Two days later, multiple news outlets reported that the United States would be deploying roughly 3,000 soldiers to the Middle East in response to escalating tensions, with possibly several thousand more to follow.

On Tuesday, several hours before Iran attacked an Iraqi military base housing US troops, U.S. Defense Secretary Mark Esper told CNN that “We are not looking to start a war with Iran, but we are prepared to finish one.”

Although Trump and Esper are clearly referring to groups of people when they say “we” have done (or will do) such things, it is far from clear exactly who they understand to comprise those groups. Is “[the military] prepared to finish a war with Iran” or does Esper mean “[the American people]”?

Similarly, attributing responsibility to collective nouns like "the United States" is vague – what portion of US citizens, for example, made the decision to deploy troops overseas? Clearly, since citizens do not directly vote on federal policy or military operations, such a question is confused in several ways. So, perhaps, "the US" should be understood as an abstract concept along the lines of "a nation-state that is different from the sum of its parts," with some individual or sub-group (like "the government") responsible for making practical decisions.

This is a small example of what philosophers call "The Problem of Collective Responsibility." Many discussions of blameworthiness focus on questions of individual culpability – "what do I deserve as a consequence of my own actions?" Some philosophers, however, have suggested that collections of agents, such as hate groups or terrorist organizations, can be viewed as culpable (or innocent) – and this raises a host of questions. Moving between group-based blame and individual culpability is tricky (if one soldier commits a war crime, should his entire unit be held responsible for it?). Internal disagreement within a group seems problematic as well (is it right to hold a full group responsible for something if 62% of the less-powerful individuals in the group disagreed with the decision? What if only 43% protested?).

Nevertheless, collective-responsibility models are not without precedent. For centuries, the just war tradition has relied on distinctions between "combatants" and "non-combatants" to codify its rules for jus in bello; consider the statement released on Sunday by Hezbollah threatening all US military personnel while explicitly stating that US civilians should not be targeted.

So, consider the soundbite “We got him” – who is the “we” actually responsible for killing Soleimani? Multiple interpretations of Trump’s term “we” seem possible:

  1. The individual pilots of the drone that killed Soleimani,
  2. The military unit engaged in the attack,
  3. The military unit and its line of commanding officers (up to and including the Commander-in-Chief),
  4. The US military as a whole,
  5. The US military and the US government as wholes,
  6. The collective citizenry of the US,
  7. The nation-state of the US (as an abstract concept),
  8. The particular group of people in Miami where Trump was delivering his speech.

And this list could go on.

By saying “we” (as opposed to “they”), Trump includes himself in the responsible group, ruling out options 1 and 2. It seems like option 8 could also easily be rejected, but it also seems reasonable to think that Trump was attempting to include his audience in his celebration, at least in part, thereby ruling out options 3, 4, and 5.

If Trump means “[the United States as a collection of citizens] got him” (that is, if he means option 6), then he’s attributing responsibility for Soleimani’s death to millions of people (including children) who have never heard of Soleimani, have never voted, and – in many cases – would explicitly reject such an operation if they had the option to do so. Each of these outcomes seems, at best, odd.

So, at this point, option 7 – the "US as an abstract concept" choice – appears to be the least problematic. Admittedly, this is the sort of move we make in other contexts to explain how group identities remain constant over time, even as group membership fluctuates (the 1997 Colorado Rockies and the 2019 Colorado Rockies are, in some sense, the same baseball team, despite no player from the 90s remaining on the roster). But abstract constructs cannot be held morally responsible – only individuals can! If every member of the 1997 Rockies were found to have been using steroids throughout their season, it would be unjust to punish the 2019 Rockies, because the individuals are different. And if abstract groups cannot be blamed in this way, then it seems they also cannot be praised in this way, leaving Trump's "we" puzzling once more.

Collective responsibility problems are messy and far from intuitively obvious. This point is always useful to remember when listening to representatives of organizations or governments, but it is especially important when war drums are starting to beat.

WWIII?: Desensitization, Alarmism, and Anxiety

image of British WWII enlistment poster

With the recent American airstrike which killed Iranian General Qassem Soleimani, the internet has been abuzz with talk of a Third World War. This includes plenty of talk on Twitter about an imminent global war as well as countless memes which address everything from the potential widespread damage to the possibility of being drafted. This raises an important ethical issue: is it justifiable to raise the specter of a Third World War over this matter, and is it okay to joke about such things?

To recap the current situation: following attacks on the American embassy in Baghdad, American drones targeted Soleimani just after he got off a plane in Baghdad, killing him and nine others. In Iran, Soleimani was considered a "folk hero" for his long military history. Following the attack, Iranian officials called the move "a foolish escalation," while the Iranian President has called for revenge. Because of this situation, there are now worries that open conflict between the United States and Iran is a real possibility and that this could be the beginning of a new World War. Comparisons have been made on Twitter between the death of Soleimani and the death of Archduke Franz Ferdinand, whose assassination famously started the First World War.

While a war between the United States and Iran is possible, the outbreak of a world war is unlikely. Still, the use of the concept of "World War Three" is ethically salient. For starters, it is alarmist language that could cause some forms of panic. The website of the Selective Service System crashed on Friday because of the "spread of misinformation," as many apparently were concerned about the possibility of being drafted into military service. Secondly, the flippant use of the term "World War Three" can now be added to several other historical scares, and continued use of the term may desensitize us not only to the possibility of an actual future war but also to the First and Second World Wars. It is easy to label a conflict as a possible "sequel" to the two historical events, but it is much more difficult to review the historical record in order to understand why those two wars were world wars, why they came about, and how the world has since changed in comparison.

Beyond the rhetoric, there is also controversy over the jokes and memes which the incident has inspired. Several of these make reference to possible widespread death, being drafted, being imprisoned for refusing the draft, and becoming a prisoner of war. A recent article by Katherine Singh is critical of such posts, noting that "people on the internet have pretty much not taken the issue seriously." She argues that joking about the draft is rude to those who were drafted and had to serve in the armed forces prior to 1973. It is additionally rude to those who are currently serving in the Middle East and face a very real danger of potential harm; such jokes do not take into account the real effects of war and the threat to civilians in the region. She asks, "How horrible is it that we're so desensitized to warfare that we make memes and jokes about the prospect of airstrikes and combat?"

On the other hand, joking or making light of a world war is hardly new. Many early recruits of the First World War jumped at the chance simply to escape the boredom of life at home, as the possibility of going to France and meeting French girls seemed exciting. Recruitment stations often advertised a "free trip to Europe." Once at the front, "trench newspapers" would joke that the sport of hunting was "open season all year round" with "no permit required." Pilots in the Flying Corps, who faced the real possibility of burning to death in their cockpit, would joke about "joining the sizzle brigade." Such humor would not be out of place in the memes of today. In the Second World War, the slow start was dubbed "The Phoney War" and the "Sitzkrieg." During the Blitz of London, as civilians died or lost their homes, the BBC broadcast satirical pieces about fictional officials of the German Propaganda Ministry. The fact that people of the day could joke about such things does not mean that they were desensitized to war, but rather that humor can be a way of dealing with anxiety and stress.

The humor found in modern-day memes about a hypothetical World War Three is not necessarily any different from the humor derived from previous world wars. The memes may not reveal a desensitization to war; they may, in fact, be a symptom of real anxieties that younger people have about their future and their control over it. If such a war broke out, it would be younger people who would be expected to do most of the fighting and who would have to deal with the long-term consequences. Large global conflicts such as World War Two or even the Cold War are not part of the lived experiences of many young people, and so the possible threat of a breakdown in the global order may be a real source of anxiety over a future that they are not prepared for. This can be true of other cases like climate change or the rise of populist and nationalist movements as well.

One must also consider whether joking about avoiding the draft or about going to war is worse than mere indifference to these possibilities. If the reaction were a collective shrug, it might suggest that people are desensitized to the possibility of war. Everything requires perspective. In some cases, alarmist language and jokes about war can be ethically harmful and desensitizing. It's easy to euphemize the assassination of a general by stating that "Bullies Understand a Punch in the Nose," but the analogy breaks down if the bully then decides to set your neighbor's house on fire in response. On the other hand, making light of war can be a way of reminding us what is at stake and can help us deal with our anxieties about global conflicts that we may have little control over.

Jus ad Bellum: US, Iran, and Soleimani

photograph of General Qasem Soleimani in military uniform

On Thursday, January 2nd, the United States executed high-ranking Iranian military officials who were in Iraq at the time. The United States was not engaged in military conflict with Iran (i.e., the states were not at war), and so the justification for the US's deliberate killing of Iranian officials has been called into question and widely criticized.

Because the assassination took place on Iraqi soil, Iraq is requesting that US forces leave its territory. Iraq interprets the US action against Iran as a violation of its territory, and remaining in Iraq without its permission could now constitute a further act of aggression.

Though Soleimani was undisputedly no "friend" of the United States, targeting him with military force is incredibly controversial, if not seen by a growing consensus as the wrong thing to do. This is because what is at stake here is not the morality of Soleimani and his participation in a regime, but rather the appropriate reasons for bringing military force against another state. It can be widely accepted that Soleimani did not act in the United States' best interests, or in the best interest of the United States' allies, and it can even be granted that Soleimani's death may in fact BE in the best interest of the United States and its allies. In the realm of ethics and political theory, however, these considerations do not warrant killing someone.

We can see that there is a high bar of justification for these actions in two frameworks: a pragmatic framework and a "just war," or deontic, framework. From a practical standpoint, we weigh the harms and benefits of different plans of action and assess the potential fallout. Consider, for instance, that Soleimani's actions and attitudes had not changed over the course of decades, and yet previous administrations in the US had not targeted him for assassination. This has been attributed to the rationale that such an assassination "is not worth it" – the anticipated destabilization of relations in the region risked by targeting such a high-ranking Iranian official was too big a gamble.

Rather than an assessment of potential outcomes and consequences, the standards in “just war” theory center on the conventions governing the proper and responsible use of military-level harm (just cause, proportionality, etc.). One of the least controversial moral dictates is that harming one another is bad and that we ought not to do it without the strongest of justifications. Wars and military actions are systematic harms at a grand scale, so their justification should at least parallel the stakes involved in harming one another on the personal level.

Because Soleimani represented Iran in his role as leader of Iranian military forces, taking military action against him amounts to taking military action against Iran. Unless the international community has decided that his regime is illegitimate or that Soleimani can be considered an independent agent, force against him amounts to aggression against a legitimate state – Iran. On a Just War framework, initiating an act of aggression like this against a legitimate state gives that state justifiable cause for war on the grounds of self-defense.

An act of violence that constitutes a cause for war is judged by high standards indeed because of the stakes involved in war; wars bring untold violence and suffering not only to those who willingly participate but to bystanders and those caught up in the conflict. As such, the justifiability of states' use of military force is limited, according to Just War Theory, to a handful of reasons, the strongest of which is responding to aggression.

In casual terms outside discussions of war theory, aggression includes any hostile or violent behavior, but for political actors it means something very specific. In international relations, "aggression" is the term used for the crime of war itself: the violation by one state of the territorial integrity or political sovereignty of another legitimate state.

Just as each person has a right to the safety of their body from molestation, states have a right to territorial integrity: a right to control the land within their borders. The most straightforward way of violating this right on the personal level is to assault someone; on the political level, it is to invade. "Political sovereignty" denotes a legitimate state's right to self-determination, paralleling the right to autonomy that an individual has. States can set up their political organizations as democracies, monarchies, etc., and have the right to run their governments without interference from other states. State A violates the political sovereignty of B when A tries to change B's political structure. Such acts, along with territorial invasion, are the most straightforward instances of aggression by one state against another and thus count as acts of war.

When someone has attacked you, common moral principles hold that you would be justified in defending yourself. Therefore, in the political sphere, if a state commits an act of aggression towards your state, you may be justified in responding with force.

Soleimani’s assassination, however, fits uncomfortably in this framework. The justification currently provided by the administration is that there was an anticipation of aggression by Iran. Soleimani and the Iranian government had not yet committed acts of aggression and therefore we were not in a war scenario. Therefore, by engaging militarily in acts of aggression, the US initiated aggression by interfering with the Iranian state.

Because aggression against the US was anticipated, a self-defense justification can be attempted for the assassination. However, when claiming to defend yourself against an attack you only think will happen, that attack must be imminent and grave. This is a difficult set of circumstances to establish, and in the present case a great deal of doubt already exists.

One route to justifying the attack is to categorize it as a targeted killing of a terrorist rather than the assassination of a high-ranking state official. This is an important distinction, because employees of the United States, as well as anyone acting on behalf of the United States, are forbidden from conspiring in or carrying out assassinations, political or otherwise, according to executive orders issued by Presidents Ford and Reagan. President George W. Bush allowed for the "targeted killings" of terrorist funders and leaders in a manner consistent with those executive orders, and that is where executive policy has stood since 2001.

The US has targeted many individuals under the guise of the "War on Terror" by categorizing them as terrorist leaders and thus as not representing states. But attempts to characterize Soleimani's killing in such terms will not shield the US's behavior from international scrutiny or an Iranian response. (Iran has promised retribution and declared that it will no longer abide by the terms of the nuclear deal that Trump's administration quit in spring 2018.) This situation is markedly different: the killing of a political representative of a foreign state outside of wartime will seem to many a straightforward act of aggression, complicating the US's claim to self-defense.

Australia’s Apocalyptic Summer

photograph of smoke on horizon from Australian bushfire

Summers in Australia are always hot. During the break over Christmas and New Year, tens of thousands of people are on the road, traveling to holiday destinations up and down the coast. Mallacoota, on the east coast of Victoria, is one small seaside town among hundreds whose numbers swell with holidaymakers at this time of year.

In Mallacoota on New Year’s Eve of 2019 an escalation in Australia’s month-long bushfire crisis gave rise to truly horrifying scenes when thousands of tourists and locals were forced to flea and shelter on the beaches as bushfire ravaged the town. As the sun rose on December 31, the sky was glowing orange. Some observers described the scenes that followed as Armageddon. At around 9am the sky blackened, to the visibility of midnight, and for the next four hours those who had fled to the waterfront huddled as fire ripped through the town and burned forest virtually up to the water’s edge.

The devastation wrought on this small town was so severe that all roads in and out were, and remain at the time of writing, closed. Advice for those still trapped in Mallacoota is that roads may not be reopened for up to two weeks. Many thousands of people remain on the beachfront. The Australian Navy has sent a vessel to collect just under one thousand people, an operation currently underway at the time of writing. And this was just one town; similar scenes were repeated up and down the south-east corner of the country.

Tens of thousands of people are, at the time of writing, attempting to evacuate coastal towns in Victoria and New South Wales ahead of a weekend during which even more dangerous conditions loom, as temperatures are set to rise to up to 44C (112F) in places. Many are trying to exit areas already hit by fires, with highways closed, stores running low on food, and petrol supplies dwindling.

Emergency services are struggling to cope. Three volunteer fire-fighters have been killed in extraordinary fire conditions; one was killed when a cyclonic weather system created within a fire overturned an 8-ton truck. Previous fire seasons have seen large-scale disasters, but they were normally single events. Never has there been a situation like this, with multiple emergency-level fires burning simultaneously in every state.

Australia is the driest inhabited continent on Earth and has always been fire-prone, but this is different. After three years of severe drought, the air and the land are so dry that they are literally combusting. We are in no way prepared or equipped to deal with the scale of this event, which has overwhelmed emergency services. The descriptor we are hearing over and over again from emergency workers is "unprecedented."

This is the very outcome climate science has been warning about for at least two decades. And, more recently, this is the hellish scenario that a group of fire and emergency chiefs have been trying to warn the current government about. Back in April 2019, a group of former fire chiefs tried and failed to get the attention of the Prime Minister, Scott Morrison, in order to warn that the coming fire season would be by far the most severe the country had ever seen. The Prime Minister refused to meet with them. The reason is that the Australian government does not want to talk about climate change.

I wrote several articles for this publication in 2019 about the climate emergency, examining the issue from different ethical angles. I emphasized how dangerous the situation is becoming and how urgent the need to act has become. I discussed Australia's inadequate climate policy, the current government's refusal to acknowledge the problem, and its addiction to the coal industry. I wrote about the new generation of civil disobedience and community mobilization in the face of government inaction and the moral case for nonviolent disruptive tactics. I have argued there should not be a moral conundrum here, and in my most recent piece I noted that moral arguments seem simply not to be working.

Yet nothing prepared me for the severity, the shock, of what is happening here right now. Even knowing what I do about the Morrison Government's intransigence, I am still shocked by its lack of empathy and understanding in its response to the crisis.

The Prime Minister refuses to call the crisis 'unprecedented,' saying that we have always had fires and recalling smoke in Sydney when he was a child. This denies what everyone else acknowledges: that what is happening is totally outside our past experience of the bushfire season. Morrison cheerfully suggested that Australia is 'the best place to bring up kids' while picture after picture emerged of children fleeing holiday houses, or worse, picking through the wreckage of their own family homes; of melted bicycles, pools filled with ash, and kids playing on swings wearing masks to filter out the hazardous air. He has counseled people not to be anxious and doubled down on his blithe defense of Australia's climate policy. He continues to suggest that the cost to the economy of a more ambitious climate policy is greater than the cost of inaction:

“…our policies remain sensible, that they don’t move towards either extreme, and stay focused on what Australians need for a vibrant and viable economy, as well as a vibrant and sustainable environment. Getting the balance right is what Australia has always been able to achieve.”

I can offer no further analysis of these remarks. The country is burning and the whole world should take a look in our direction, because this is what the cost of inaction really looks like.

Eventually economics will catch up, and the economic costs of global warming will overtake those of the transition to clean energy and carbon abolition. If such considerations are the only factors that can motivate some leaders (like Morrison), then how long that takes will determine how much worse this gets.

What is certain is that it is going to get worse, and soon such events, in Australia and elsewhere, will indeed no longer be unprecedented.

Resurrecting James Dean: The Ethics of CGI Casting

A collage of four photographs of James Dean

James Dean, iconic star of Rebel Without a Cause, East of Eden, and Giant, died in a tragic car accident in 1955 at the age of 24. Nevertheless, Dean fans may soon see him in a new role—as a supporting character in the upcoming Vietnam-era film Finding Jack.

Many people came out against the casting decision. Among the most noteworthy were Chris Evans and Elijah Wood. Evans tweeted, “This is awful. Maybe we can get a computer to paint us a new Picasso, or write a couple new John Lennon tunes. The complete lack of understanding here is shameful.” Wood tweeted, “NOPE. this shouldn’t be a thing.”

The producers of the film explained their decision. Anton Ernst, who is co-directing the film, told The Hollywood Reporter they “searched high and low for the perfect character to portray the role of Rogan, which has some extreme complex character arcs, and after months of research, we decided on James Dean.”

Supporters of the casting decision argue that the use of Dean's image is a form of artistic expression. The filmmakers have the right to create the art that they want to create, and no one has a right to appear in any particular film. Artists can use whatever medium they like to create the work that they want to create. Though it is true that some people are upset about the decision, there are others who are thrilled. Even many years after his death, there are many James Dean fans, and this casting decision appeals to them. The filmmakers are making a film for this audience, and it is not reasonable to say that they can't do so.

Many think that the casting of a CGI Dean is a publicity stunt. That said, not all publicity stunts are morally wrong; some are perfectly acceptable, even clever. Those who are concerned about the tactic as a stunt may feel that the filmmakers are being inauthentic: the filmmakers claim that their motivation is to unpack the narrative in the most effective way possible, but they are really just trying to sell movie tickets. The filmmakers may rightly respond: what's wrong with trying to sell movie tickets? That's the business they are in. Some people might value authenticity for its own sake. Again, however, the filmmakers can make the art that they want to make. They aren't required to value authenticity.

Those opposed to the casting decision would be quick to point out that an ethical objection to the practice need not also be a legal objection. It may well be true that filmmakers should be free to express themselves through their art in whatever way they see fit. However, the fact that artists can express themselves in a particular way doesn't entail that they should engage in that kind of expression. CGI casting, and the casting of a deceased person in particular, poses a variety of ethical problems.

One metaethical question posed by this case has to do with whether it is possible to harm a person after they are dead.  One potential harm has to do with consent. If Dean were alive today, he could decide whether he wanted to appear in the film or not. His estate gave permission to the production company to use Dean’s likeness, but it is far from clear that they should be able to do so. It is one thing for an estate to retain ownership of the work that an artist made while living. It is reasonable to believe that the fruits of that artist’s labor can be used to benefit their family and loved ones after the artist is dead. The idea that an artist’s family is in a position to agree to new art to be created using the artist’s likeness requires further ethical defense.

A related argument has to do with artistic expression as a form of speech. Often, the choices that an actor makes when it comes to the projects they take on are expressions of their values. Dean may not have wanted to participate in a movie about the Vietnam War. Some claim that Dean was a pacifist, so the message conveyed by the film may not be one that Dean would endorse. Bringing back James Dean through the use of CGI forces Dean to express a message he may not have wanted to express. On the other hand, if Dean no longer exists, it may make little sense to say that he is being forced to express a message.

Another set of arguments has to do with harms to others. There are many talented actors in the world, and most of them can’t find work. Ernst’s claim that they simply couldn’t find a living actor with the range to play this character is extremely difficult to believe. Filmmaking as an art form is a social enterprise. It doesn’t happen in a vacuum—there are social and political consequences to making certain kinds of artistic choices. Some argue that if filmmakers can cast living actors, they should.

There is also reason for concern that this casting choice sets a dangerous precedent, one that threatens to destroy some of the things that are good about art. Among other things, art is a way for us to understand ourselves and to relate to one another. This happens at multiple levels, including the creation of the art and the interpretation of its message. Good stories about human beings should, arguably, be told by human beings. When a character is computer generated, it might sever that important human connection. Some argue that art is not art at all if the intentions of an artist do not drive it. Even if the person creating the CGI makes artistic decisions, an actor isn’t making those decisions. Some argue that acting requires actors.

The ethical questions posed here are just another set that falls under a more general ethical umbrella. As technology continues to improve in remarkable and unexpected ways, we need to ask ourselves: which jobs should continue to be performed by living human beings?