
When Moral Arguments Don’t Work

photograph of machines at a coal mine at dawn

In 2019, the climate emergency took center stage globally, with the School Strike for Climate movement, led by Swedish teen activist Greta Thunberg, mobilizing an estimated 4 million people on September 20 in the largest climate demonstration in history.

It is of course well understood, by scientists and by much of the public, that burning fossil fuels releases carbon dioxide (and other greenhouse gases) into the atmosphere, where they trap heat and cause the world to warm. Since pre-industrial times the world’s climate has warmed by an average of 1°C, and on the current trend of greenhouse gas emissions it will warm by more than 3°C by the end of the century.

Though it is becoming harder for climate change deniers to evade the existential implications of inaction, globally, governments are still prevaricating and fossil fuel companies are doubling down.

The climate and ecological emergency is clearly a grave moral issue, yet for many of those with the power to act, moral imperatives to do so go unrecognized or unheeded. This raises the question of whether moral arguments on this issue have lost their power to move the people in whose hands action lies.

Right now, Australia is a case in point: the country is in the grip of an unprecedented bushfire emergency.

More than 3 million hectares have burned in the state of New South Wales (NSW) alone this bushfire season; more than 20 percent of the national park area of the Blue Mountains, adjacent to Sydney, has been razed, and a ‘mega-fire’ that emergency crews say cannot be extinguished continues to rage. Sydney, the state capital and Australia’s largest city, has been blanketed in toxic bushfire smoke for several weeks, and the city’s already low water supply is in danger of being poisoned by toxic ash. Out-of-control fires are also burning in the states of Queensland and Western Australia.

This horrifying start to the summer has sparked a national conversation about the reality of global heating for an already drought- and bushfire-prone country, and about the escalating costs of government inaction. It has elicited pleas from large sections of the public, and from professional organizations representing front-line health and emergency services, for the government to own up to its moral responsibility – all of which appear to be falling on profoundly deaf ears.

Back in 2007 then Labor Prime Minister Kevin Rudd stated: “Climate change is the great moral challenge of our generation.”

Yet, here we are in 2019 with a Liberal-National government that is determined to continue subsidizing the coal industry and whose refusal to countenance climate action is scuppering hopes of an effective international agreement.

Just last week, as the country burned, the latest round of climate talks in Madrid ended in stalemate, and Australia was accused of cheating (by claiming ‘carryover credits’ to meet its Paris target) and of frustrating global efforts to secure meaningful action.

The moral argument for climate action is not registering at a political level here, and it is impossible to miss that this failure tracks the government’s support for Australia’s coal industry.

This week 22 medical groups called on the Australian government to phase out fossil fuels and close down the coal industry, in response to what they are calling a major public health crisis. Dr. Kate Charlesworth, a fellow of the Royal Australasian College of Physicians, said: “To protect health, we need to shift rapidly away from fossil fuels and towards cleaner, healthier and safer forms of energy.”

At the same time the Emergency Leaders for Climate Action are calling for a national summit to address climate change, and are criticizing the government for its failure to address the climate emergency.

Yet Michael McCormack, the Deputy Prime Minister and currently Acting Prime Minister, told a press conference, held in the incident control centre for a state-wide bushfire emergency, that “… We need more coal exports.”

Given that moral imperatives are traditionally thought to be some of the strongest motivations we have for action, why aren’t the moral arguments cutting through?

The obvious, though depressing, answer is that the rapacious demands of neoliberal capitalism have managed to drown out the principled stance of moral analysis.

There is a plethora of literature available on the relationship between capitalism, neoliberalism, overconsumption, and climate change. One need read no further, for example, than Naomi Klein’s 2014 book ‘This Changes Everything’ to understand the mechanisms by which neoliberal capitalism has caused the climate crisis and has systematically frustrated efforts to combat it.

If climate change is the great moral challenge of our generation, it is rapidly becoming its great moral failure. But since moral language is not working it is perhaps time, for pragmatic reasons, to deploy another set of concepts.

One suggestion is to recalibrate our analysis from the ethical to the clinical by thinking of the problem as one of addiction. Caution is obviously needed here – we do not want to make the mistake of assuming diminished responsibility. The point is, rather, that the concept of addiction allows the compulsive, subconscious elements to be taken into account as part of our understanding of the degree of difficulty we face in solving this problem.

Addiction is a psychological and physical inability to stop consuming a chemical, drug, or substance, or engaging in an activity, even though it is causing psychological and physical harm. A person with an addiction uses a substance, or engages in a behavior, whose rewarding effects provide a compelling incentive to repeat the activity despite detrimental consequences. Traditionally, at least in Western thought, ethics is a rational activity, but we seem to be facing a situation where the rational is struggling to break through the dark and self-destructive compulsions of the addiction.

The coal industry is killing us, and the difficulty of interrupting the Australian government’s commitment to coal mirrors the difficulty of interrupting any deeply entrenched pattern of addiction. Of course the issue is vastly larger than the coal industry as such, but the Australian government’s relationship to coal is emblematic of the entrenched patterns of consumption to which all of us in rich countries are similarly addicted.

As we try to free ourselves from the grip of what now threatens our very existence, moral arguments may be less effective than existential ones, and thinking in clinical terms may arm us with the practical understanding we need to appreciate the difficulty of the work that now has to be done.

State Neutrality and Public Holidays

photograph of sign at Korean Costco identifying holidays

Now that Christmas is over it is a good time to reflect on its role as an American public holiday and on the role of public holidays more generally. Christmas, a holiday originating as a blend of pagan solstice festivals and a Christian celebration of Christ’s birth, has become, in the United States at least, a fairly secular holiday. While claims of a War on Christmas are overblown, there certainly has been a decline in religious association with the holiday. This should come as no surprise since the rate of religious participation in the United States has been declining for many years. According to Pew, 44 percent of Millennials consider Christmas more of a “cultural holiday.”

What is a “cultural holiday” and what differentiates them from ordinary holidays? According to Etymonline, the word “holiday” derives from Old English “haligdæg,” itself derived from the words “halig” and “dæg” meaning “holy” and “day” respectively. “Haligdæg” though came with the particular meaning of “holy day, consecrated day, religious anniversary; Sabbath” with the sense of “day of exemption from labor and recreation” only coming centuries later.

So, holidays originally were days of religious observance. People would stop working to engage in festivities and religious rituals. The ethics of celebrating these holidays is clear-cut: they originate with the command of one’s god or gods, and the commands of the gods must be followed in obedience to the moral rule “one must obey the gods.” The moral imperative to observe so-called “cultural holidays” is less clear. Any good Christian celebrates Christmas, but what justifies the celebration of Christmas for an atheist, whose ethics preclude obedience to the commands of supposed gods, or a Hindu, who worships other gods? Further still, what is the moral imperative for governments and corporations to observe these holidays and thus give their employees a break from work at these times?

In a religiously homogeneous society, the justification for these bodies to collectively observe holidays is clear: if all their members must observe the holiday as per their shared deity’s commands, they will be unable to do anything else. But in a multicultural society, how do these bodies decide which religious holidays to observe and which to ignore? And how do they decide which secular holidays to endorse, such as Martin Luther King Jr. Day or Columbus Day? There are important consequences to these decisions. The observance of religious holidays by secular governments and corporations may create a public perception that those religions are endorsed or recognized as true, leading to increased marginalization of religious minorities. There may also be tension between the federal government’s observance of particular religious holidays and the Establishment Clause of the First Amendment. In addition, every holiday provides benefits for people, in the form of free time, as well as costs, in the form of lost economic productivity and potentially a lack of work-hours for those who need or want them.

In the United States, the government can only declare “federal holidays,” on which federal employees may not work. Unlike many other countries, there are no “national holidays” on which businesses are required to close. Thus, there ends up being a vague list of “public holidays” on which many businesses will close, though many will not, and the observance of holidays is entirely at the discretion of the particular business. Furthermore, businesses may provide either paid or unpaid time off for employees on these holidays. There are ten federal holidays, and six public holidays are “universally embraced,” being endorsed by 90 percent of businesses and organizations. These are New Year’s Day, Memorial Day, Independence Day, Labor Day, Thanksgiving, and Christmas. Among these, only Christmas is an originally religious holiday. All the others are purely cultural holidays.

What would happen if Christmas were suddenly no longer a federal holiday and businesses and organizations stopped recognizing it? For starters, many people would be very upset. According to Gallup, 93 percent of Americans celebrate Christmas, and having to work on that day would certainly be annoying. However, there is no reason to think there would be mass protests or revolution; it is just a holiday after all. But since it is so universally popular, people would try to make do. I see two ways people could do so: first, people could simply use their own paid vacation days on Christmas and/or Christmas Eve, and, second, people could observe Christmas on the weekend in years when it falls during the week, or whenever they have a regularly scheduled day off. However, there are problems with both of these approaches that may reveal just why people would be so upset if Christmas were to cease being a recognized holiday.

Americans do not get a lot of time off. Indeed, among other advanced economies, the United States is the only one which does not have a statutory minimum amount of paid time off. Legally, it would be possible for every working American outside of the federal government to work every weekday of the year. The United Kingdom guarantees 28 days of paid time off on top of nine national holidays; France has 25 days and eleven national holidays. Even our close neighbors Canada and Mexico beat us with 10 and 6 days guaranteed off and 9 and 7 national holidays, respectively. Few people could afford to use one of their precious few vacation days on Christmas, and many people do not even have paid time off that they could use to avoid working on Christmas. People’s rallying around these few public holidays has its source in this troubling lack of labor rights.

On the other hand, celebrating Christmas on a day other than December 25th would be an acknowledgment of the total secularization of the holiday. Those who complain about the supposed “War on Christmas” would have new ammunition. And, in reality, many people do consider Christmas a religious holiday. Those who would reject such moves to turn Christmas into a floating holiday like Thanksgiving would have to defend why only Christmas gets to be designated a public holiday, and thus why only Christianity gets a public holiday. Those who presently work on Eid or Diwali or Yom Kippur while devoutly following the corresponding non-Christian religion would have to be given good reason why only Christians get to have a holiday that does not count against their total paid time off.

The consideration of the idea that governments and businesses must observe Christmas thus reveals a number of problems including the limited labor rights in the United States and the problems of recognizing only one religion’s holidays. While any individual certainly has the right to celebrate whatever holidays he wishes in his free time, and no government or corporation tries to prevent this, there are a number of moral problems with the public endorsement of holidays. Holidays allow businesses to pacify their employees without guaranteeing them as much paid time off as do businesses in other nations. And, the fact that only certain religious holidays are publicly endorsed shows how Christianity-centric American society is, callous to those who follow Islam, Judaism, Hinduism, and other religions with holidays of their own.

Everyone likes to be able to stay home and get paid to do so. Few would complain about getting paid time off on every religious holiday that exists. But since such a world would probably have no working days at all, it is hardly realistic. So long as there exist public holidays such as Christmas, which privilege one religion over all others in a society as multicultural as the United States, there will be unjust inequity. So long as American workers depend on whatever holidays they can get just to have any paid time off, there will be oppression and control of people by corporations. People like Christmas; they would be a lot more miserable if they had to work on December 25th. But it is important to consider why people depend so much on having holidays off and how people following other religions are left behind when only Christmas is such a widely endorsed religious public holiday.

Search Engines and Data Voids

photograph of woman at computer, Christmas tree in background

If you’re like me, going home over the holidays means dealing with a host of computer problems from well-meaning but not very tech-savvy family members. While I’m no expert myself, it is nevertheless jarring to see the family computer desktop covered in icons for long-abandoned programs, browser tabs that read “Hotmail” and “how do I log into my Hotmail” side-by-side, and the use of default programs like Edge (or, if the computer is ancient enough, Internet Explorer) and search engines like Bing.

And while it’s perhaps a bit of a pain to have to fix the same computer problems every year, and it’s annoying to use programs that you’re not used to, there might be more substantial problems afoot. This is because according to a recent study from Stanford’s Internet Observatory, Bing search results “contain an alarming amount of disinformation.” That default search engine that your parents never bothered changing, then, could actually be doing some harm.

While no search engine is perfect, the study suggests that, at least in comparison to Google, Bing lists known disinformation sites in its top results much more frequently (including searches for important issues like vaccine safety, where a search for “vaccines autism” returns “six anti-vax sites in its top 50 results”). It also presents results from known Russian propaganda sites much more frequently than Google, places student-essay writing sites in its top 50 results for some search terms, and is much more likely to “dredge up gratuitous white-supremacist content in response to unrelated queries.” In general, then, while Bing will not necessarily present one only with disinformation – the site will still return results for trustworthy sites most of the time – it seems worthwhile to be extra vigilant when using the search engine.

But even if one commits to simply avoiding Bing (at least for the kinds of searches most likely to be connected to disinformation sites), problems can arise when Edge is made the default browser (which uses Bing as its default search engine), and when those who are not terribly tech-savvy don’t know how to use a different browser, or else aren’t aware of the alternatives. After all, the average user has no particular reason to think that results from different search engines should differ in quality, and given that Microsoft is a household name, one might not be inclined to question the kinds of results its search engine provides.

How can we combat these problems? Certainly a good amount of responsibility falls on Microsoft itself to make more of an effort to keep disinformation sites out of its search results. And while we might not want to say that one should never use Bing (Google knows enough about me as it is), there is perhaps some general advice that we could give in order to make sure that we encounter as little disinformation as possible when searching.

For example, the Internet Observatory report posits that one of the reasons why there is so much more disinformation in search results from Bing as opposed to Google is how the engines deal with “data voids.” The idea is the following: for some search terms, you’re going to get tons of results because there’s tons of information out there, and it’s a lot easier to weed out possible disinformation sites from these kinds of results because there are so many more well-established and trusted sites that already exist. But there are also lots of search terms that have very few results, possibly because they are about idiosyncratic topics, or because the search terms are unusual, or just because the thing you’re looking for is brand new. It’s when there are these relative voids of data about a term that results become ripe for manipulation by sites looking to spread disinformation.
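To make the mechanism concrete, here is a minimal toy sketch in Python of how a ranking that blends relevance with a site-trust signal might behave. The scoring formula, weights, and URLs are all invented for illustration – no real search engine works this simply – but it shows why a data void is dangerous: when plenty of trusted pages match a query, a disinformation page gets buried; when almost nothing else matches, it can win on relevance alone.

    # Toy model of search ranking (illustrative only; not any real engine's algorithm).
    from dataclasses import dataclass

    @dataclass
    class Page:
        url: str
        relevance: float  # how well the page matches the query, 0..1
        trust: float      # site reputation signal, 0..1

    def rank(pages):
        # Hypothetical scoring: a weighted blend of relevance and trust.
        return sorted(pages, key=lambda p: 0.6 * p.relevance + 0.4 * p.trust, reverse=True)

    # A common query: many established pages compete, so disinformation is buried.
    common_query = [
        Page("encyclopedia.example/topic", relevance=0.8, trust=0.9),
        Page("news.example/topic", relevance=0.7, trust=0.8),
        Page("disinfo.example/topic", relevance=0.9, trust=0.1),
    ]

    # A data-void query: the disinformation page is nearly the only close match.
    void_query = [
        Page("old-forum.example/post", relevance=0.2, trust=0.5),
        Page("disinfo.example/new-term", relevance=0.95, trust=0.1),
    ]

    print(rank(common_query)[0].url)  # encyclopedia.example/topic (a trusted page wins)
    print(rank(void_query)[0].url)    # disinfo.example/new-term (the void is exploited)

In the sparse case there simply aren’t enough trusted pages for the trust signal to do its work, which is exactly the opening that data-void manipulators exploit.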

For example, Michael Golebiewski and danah boyd write that there are five major types of data voids that can be most easily manipulated: breaking news, strategic new terms (e.g. when the term “crisis actor” was introduced by Sandy Hook conspiracy theorists), outdated terms, fragmented concepts (e.g. when the same event is referred to by different terms, for example “undocumented” and “illegal aliens”), and problematic queries (e.g. when instead of searching for information about the “Holocaust” someone searches for “did the Holocaust happen?”). Since there tends to be comparatively little information about these topics online, those looking to spread disinformation can create sites that exploit these data voids.

Golebiewski and boyd provide an example in which “Sutherland Springs, Texas” became a far more popular search term than ever before in response to news reports of an active shooting in November of 2017. However, since there was so little information online about Sutherland Springs prior to the event, it was difficult for search engines to determine which of the new sites and posts should be sent to the top of the search results and which should be sent to the bottom of the pile. This is the kind of data void that can be exploited by those looking to spread disinformation, especially on search engines like Bing that seem to struggle to distinguish trustworthy sites from untrustworthy ones.

We’ve seen that there is clearly some responsibility on Bing itself to help stem the flow of disinformation, but we perhaps also need to be more vigilant about trusting sites connected with the kinds of terms Golebiewski and boyd describe. And, of course, we could try our best to convince those who are less computer-literate in our lives to change some of their browsing habits.

Should You Eat Baby Yoda?

photograph of Star Wars shaped sugar cookies

Since it went live in November, Disney+ has cemented itself within the web of online entertainment streaming services, smashing already-high projections for first-wave subscribers by registering more than 10 million accounts in its first month. While there are many reasons behind such numbers, one strong factor is the tightly-controlled exclusive programming available only to Disney+ subscribers – in particular, stories based within various IPs swept up by the Disney machine in recent years; if you’re a fan of the Marvel Cinematic Universe, the Star Wars franchise, or The Simpsons (or just want to stream the highest-grossing film of all time), Disney+ is the only place to go. And with the breakout success of The Mandalorian, an eight-episode series set in the Star Wars universe about a bounty hunter protecting a mysterious child, this business model already appears to be paying off – particularly with the popularity (and profitability) of the adorable “Baby Yoda” character.

But what would Baby Yoda taste like?

This (hopefully) seems like an odd question: love for the powerful little Force-user is strong, both on- and offline – and I don’t mean to suggest that I’m hoping for the season’s penultimate episode’s cliffhanger to be resolved with Giancarlo Esposito’s villainous character picking his teeth with a furry green bone. But the simple fact that a creature is cute is not typically enough for everyone to think that said creature should not be eaten. I’m guessing that, if you count yourself as a fan of The Child, two things are true:

  1. You recoiled in disgust at the thought of eating Baby Yoda, and
  2. You clearly recognize that Baby Yoda is not human.

So, given that many people are perfectly content to consume adorable cows, pigs, and other nonhuman animals (which all meet that second qualification), why aren’t more people bothered by the fact that farm animals are cute?

T.J. Kasperbauer suggests that our feelings towards nonhuman animals are, at least in part, a result of generations of our forebears categorically downgrading different species into lower social groups; in his 2018 book Subhuman: The Moral Psychology of Human Attitudes to Animals, Kasperbauer explains how the process of dehumanization has led many societies to stratify along speciesist lines. Although dehumanization typically describes the process of unjustly discounting a person’s moral value entirely, Kasperbauer focuses on a more technical variety called infrahumanization, whose victims “are still attributed various key human qualities but are treated as inferior to some other group by comparison.” Adapting this concept to analyze cross-species relationships, Kasperbauer concludes that nonhuman animals have, as a category, been historically classified as a dehumanized outgroup, thereby (in theory) removing our felt moral obligations to care for nonhuman animals. However, because some animals (like dogs or horses) are more similar to humans than others (particularly in their apparent cognitive capacities), our psychological responses to different creatures have been similarly cultivated along infrahumanizing lines.

What does this thesis mean for sci-fi/fantasy stories where plenty of nonhuman species are involved in the narratives? Should humans (or apparent-humans) in a story necessarily treat Tolkien’s Hobbits or Roddenberry’s Klingons as an infrahumanized outgroup comprised of less-significant moral patients (or deny their status as moral agents altogether)? Certainly not: my simple psychological response to a creature (or the lack thereof) is not necessarily an indication of that creature’s moral status; whether we want to ground our obligations to nonhuman creatures (whether terrestrial or otherwise) in their experience of pain, their possession of rights, or something else, this is a fully separate matter from someone’s reflexive affective experience of a creature’s treatment. Kasperbauer’s point is neither a prescription nor a justification for the mistreatment of nonhumans – it is an explanation of our current cultural landscape designed, in part, to provoke arguments in animal ethics that can persuade people not already moved by purely logical anti-speciesist concerns. As he explains, “Ethicists who are interested in changing attitudes toward animals must consider whether the specific goals they promote meet minimal criteria for psychological plausibility.” Put differently, before we can argue convincingly about things relevant to Point 2, Kasperbauer thinks that philosophers should consider the role played by the psychological structures relevant to Point 1.

So, when it comes to The Mandalorian’s Child, despite the fact that the character has yet to speak, it – just as much as Chewbacca, Jabba the Hutt, R2D2, and, of course, Master Yoda – is clearly cognitively and emotionally similar to humans in many relevant ways. But we should take care not to confuse the ethical issue with the psychological one: if its similarity to humans is the only thing differentiating Baby Yoda from a baby cow, then that’s an argument for the Child’s less-infrahumanized status, not its actual moral status.

If, on the other hand, you think that it’s actually wrong to eat Baby Yoda, then you should probably work out why you think the Child deserves to live – and then keep thinking about what other sorts of nonhuman animals deserve the same considerations (even if you don’t feel the same disgust at the thought of eating them).

Christmas Music and Emotional Manipulation

blurry photograph of decorated Christmas trees

There is a predictable pattern of reactions to Christmas music every year. First, stores start to play it much too early – typically right after Thanksgiving, or maybe even right after Halloween – and people comment on how stores are playing it much too early. Then there’s that sweet spot, where for a few weeks the songs are fun and comforting to listen to, and Wham!’s “Last Christmas” is still tolerable. Inevitably, though, patience starts to run out as holiday stresses mount, and by the time the season’s over pretty much everyone is ready for another 10-month break from Christmas music.

There is, however, one class of song that is particularly difficult to tolerate no matter the time in December: the preachy Christmas song that doesn’t celebrate the spirit of giving so much as it seems to chastise you for having been a terrible person all year long. Two such songs stand out: John Lennon’s “Happy Xmas (War is Over)” (that’s the one with the “War is over/If you want it” chorus) and Band Aid’s “Do They Know It’s Christmas?” (that’s the one with the “Feed the world/let them know it’s Christmastime again” chorus).

While both songs come from a place of good intentions – Lennon’s song was written partly in protest against the Vietnam War, while Band Aid were attempting to help raise awareness for a famine in Ethiopia in the early-to-mid 1980s – I doubt they make many people’s holiday party playlists. And for good reason: I don’t want to feel bad about myself during the holidays. And although I didn’t really do anything this past year to try to put an end to war or famine, do I really have to be reminded about my many moral failings?

If you think that I’m being too hard on these types of songs, then you should know that I’m not alone. “Do They Know It’s Christmas?” has been criticized repeatedly, for many different reasons. Perhaps most damning of all is its ill-informed message about what was happening in “Africa” at the time. Consider, for example, the following lyrics, which describe Africa as a place:

Where the only water flowing
Is the bitter sting of tears
And the Christmas bells that ring there are the clanging chimes of doom
Well tonight thank God it’s them instead of you

And there won’t be snow in Africa this Christmastime
The greatest gift they’ll get this year is life
Where nothing ever grows
No rain nor rivers flow
Do they know it’s Christmastime at all?

The lyrics are undeniably schmaltzy, but it’s also bizarre to talk about all of Africa in a single breath. As Bim Adewunmi at The Guardian writes:

“There is a humourless danger in taking song lyrics too literally, but I can’t help it: yes, they do know it’s Christmas time in Africa because huge swaths of that vast continent are Christian; the greatest gift anyone can have is life; and actually, it is more likely to be water, not just “bitter tears”, flowing across Africa’s 54 nations.”

Adewunmi also argues that the song perpetuates a narrative in which the people of Africa need to be “saved” by those in the West, and ignores the efforts of those actually living in countries affected by some of the problems that “super groups” like Band Aid are meant to draw attention to.

So not only is it emotionally manipulative, but it’s patronizing as well. Is there any good reason to keep playing this song around Christmas?

Well, perhaps there’s one: the song, and the subsequent concerts put on by the related act Live Aid, have raised a substantial amount of money for charity. Although the original Band Aid song was released in 1984, subsequent re-releases – Band Aid II in 1989, Band Aid 20 in 2004, and Band Aid 30 in 2014, each with an updated roster of contemporary popular musicians – donated a portion of profits from sales of the single to various charities in Africa, approximately £40m in total. That said, there has been debate about the overall benefits or detriments of the original Live Aid efforts, with some arguing that unforeseen political consequences of Live Aid’s donations may have caused a significant amount of harm as well.

Whether the consequences were overall positive or negative, we can also ask the more theoretical question of whether it is appropriate to solicit charitable donations by means of emotional manipulation. Clearly the song is meant to make the listener question their relative position of privilege – especially when they are told to “thank God” it’s “them” instead of you who are suffering. We might then be motivated to donate to the Band Aid cause not out of legitimate concern for the suffering of others, but to assuage our own guilt. We might worry, though, that while it’s overall a good thing to donate to charity, one should be motivated by actually helping others, not merely by the desire to feel less bad about oneself.

That being said, if it does indeed help distribute some of the wealth and goods from those who have a lot to those who need it, it is hard to see how a little emotional manipulation in the form of cheesy Christmas songs could hurt. And while it might be close to another year before you hear “Do They Know It’s Christmas?” again, next time you do it’s worth thinking about the best way to assuage that year-end guilt.

No Country for Indigenous Men?

close-up photograph of Australia on globe

What does it take to be a natural citizen of a country? Does a person only have to be born within the borders; do they need only to have some sort of ancestral connection to it; or is there some other criterion that a person must satisfy? The High Court of Australia is poised to decide the answer to these questions as far as they pertain to Australian citizenship. The relevant cases both involve people who were born outside of the country to an Aboriginal parent, and who relocated to Australia as children. However, as neither man ever obtained Australian citizenship, and both are in the country on visas, Australian immigration authorities want to subject them to deportation. Both men have been convicted of crimes for which Australian law allows the government to revoke immigration visas. Those defending the two men, Daniel Love and Brendon Thoms, argue that it is absurd to claim that any Aboriginal person could count as an immigrant who needs a visa in the first place.

The defense of the Aboriginal men depends on the idea that they have some sort of automatic—or nearly automatic—citizenship or at least resident status. (For brevity, we’ll just use ‘citizenship’ to refer to both statuses.) Automatic citizenship is often referred to as birthright citizenship. To have birthright citizenship is to count as a citizen of a country simply by being born. But there are two ways to think of when someone has such a birthright. The first is jus soli (literally, “law of the soil”), which confers citizenship on any person born within the territorial borders of a country. This is the idea of birthright citizenship at issue in immigration debates in the United States, and the one President Donald Trump infamously claimed he could revoke. However, given that neither Love nor Thoms was born in Australia, this can’t be the sense of birthright citizenship at issue.

The second conception of birthright citizenship is jus sanguinis (literally, “law of blood”), which confers citizenship on any person born to one or more parents who were themselves citizens. Jus sanguinis is not commonly referred to as birthright citizenship, but it is helpful and appropriate to think of it as such because it is a mechanism of conferring citizenship status to someone automatically at birth. Moreover this is the mechanism by which it is possible to claim that Love and Thoms have birthright citizenship. Both of them have one parent who is an Australian Aboriginal.

However, Australian law no longer has any birthright citizenship mechanisms. Australian law has not contained a simple jus soli provision since the 1986 Australian Citizenship Amendment Act: children automatically receive citizenship when born within Australian territorial borders only if at least one parent had Australian citizenship or resident status at the time of the child’s birth. Neither does Australian law provide for automatic citizenship by a jus sanguinis mechanism. Instead, the law allows the parents of a child born outside of Australian borders to apply for the child’s citizenship, provided that the parents are themselves Australian citizens at the time of the child’s birth. Neither Love nor Thoms, nor their respective parents, ever applied for citizenship.

In lieu of a mechanism of birthright citizenship, Love’s and Thoms’s representatives are appealing to the idea that, as Aboriginal people, the two men have a significant connection to the Australian land that confers on them some sort of significant legal standing. This defense invokes an alternative to birthright citizenship called jus nexi (literally, “law of linkage”), which confers citizenship on any person who has an immediate stake in the laws and operations of a country. Jus nexi does not function automatically but confers citizenship on the basis of the contingent fact of a person’s being a “stakeholder” in a country. In the case of Love and Thoms it is specifically their Aboriginal heritage that is taken to make them stakeholders. (This makes the issue a bit confusing, but in general jus nexi is not an automatic mechanism: it does not confer citizenship on the basis of being born to certain parents, or in a certain place.)

Immigration is an increasingly complex problem for governments as refugees and those seeking new opportunities move from one country to another. The social tension and financial pressure that attend such migration have drawn anti-migration reactions not only from people, but from governments themselves. (The permanent furor over immigration in the United States attests to that.) The invention of jus nexi is one way in which theorists have tried to update conceptions of the mechanics of citizenship in response to increased migration, and to avoid travesties like the one Australia’s government aims to perpetrate.

Of Trump and Truth

photograph of empty US Capitol steps

Donald Trump, president of the United States of America, is a pathological liar. This is not a revelatory statement or controversial position. But it is occasionally worthwhile reminding ourselves, in case we become numb to it, how extraordinary a fact it is that continuous deceit is the widely acknowledged reality and defining characteristic of the 45th presidency.

Trump lies about big, important things and small, banal things. He tells transparent lies: “Between 3 million and 5 million illegal votes caused me to lose the popular vote.” He tells absurd lies: “Now, the audience was the biggest ever. But this crowd was massive. Look how far back it goes. This crowd was massive.” He tells mendacious lies: “We’ve taken in tens of thousands of people. We know nothing about them. They can say they vet them. They didn’t vet them. They have no papers. How can you vet somebody when you don’t know anything about them and you have no papers? How do you vet them? You can’t.”  He tells self-serving lies: “I am the least racist person ever.” He tells offhanded lies: “My father is German, Right? Was German. And born in a very wonderful place in Germany, and so I have a great feeling for Germany.”

The Washington Post keeps a lie tally. At the time of writing the count stands, after 993 days in office, at 13,435 false or misleading claims. That is an average of 13.5 per day – a lie roughly every two hours. And that’s just the flat average; the daily rate has been climbing steeply, from five a day during Trump’s first nine months in office to thirty a day in the seven weeks before the 2018 midterm elections.

But lying is not, it would seem, merely an idiosyncrasy of the president; it is official administration policy. Following the now infamous remarks about the inaugural crowd size, Kellyanne Conway first used the phrase ‘alternative facts.’ Rudy Giuliani, in August 2018, dropped his bombshell: “truth isn’t truth.” And remember the lot of erstwhile press secretary Sarah Huckabee Sanders, who may not have coined zingers like Giuliani’s or Conway’s, but who nevertheless gave a stolid performance night after night deflecting reality.

This is a deeply shocking situation, and yet, even as it (currently) culminates in the revelations of an impeachment inquiry that has outdone itself for shocking testimony, the fact that the US president lies continuously, relentlessly, daily, is settling in as the ‘new normal.’ But truth is so fundamental to our political and ethical lives that without it politics and ethics become all but impossible. So what can the effect of this ‘new normal’ be?

The epistemological problem, the problem of knowledge, is a deep philosophical question of how we acquire knowledge and what counts as knowledge. Epistemological skepticism casts doubt upon the possibility of knowing the world around us; upon the origins or the foundational soundness of knowledge. Notwithstanding problems of verification that are philosophically intricate, the idioms ‘alternative facts’ and ‘truth is not truth’ are not cases of epistemological skepticism. These statements are not questioning the nature or origin of truth; they are attempting to undermine its moral authority.

Truth has a fundamentally moral character. According to Immanuel Kant, truthfulness is demanded by the categorical imperative: we have a duty to tell the truth because we could not will a world in which others did not tell the truth. A world where truth was not a ‘moral law’ would cease to function. In this sense truth is a kind of cardinal moral category. It is not just moral in the special sense of being a binding duty unto itself, but in the practical and general sense of enabling the integrity of the very structures of morality to be possible at all. Morality needs truth; truth is a necessary condition for ethics.

Truth’s companion concept is trust, without which a moral system could similarly hardly function, since justice and compassion depend upon it. Without truth there cannot be trust, and without trust the political and moral order is degraded and social contracts are at risk of breaking down.

Can morality be irrevocably eroded by Trump’s litany of falsehoods? What effect will it have? How will the ‘moral fabric’ of American society be impacted?

In her book The Origins of Totalitarianism, Hannah Arendt makes a remark about a famous anti-Semitic conspiratorial forgery known as the Protocols of the Elders of Zion, which was widely believed and taken up. Arendt remarks that the circumstances of its production were not as significant as the forgery’s being believed by so many.

But surely, given that Donald Trump’s lies are mostly bald-faced, his supporters must know he is a liar? Certainly, George Conway thinks they do. In March he said in a tweet, “…Even his die-hard supporters… know he’s a liar. They just don’t care.”

Here, then, the situation may be worse: contrary to Arendt, the most important thing may not be the circumstances of the production of lies, or even that they might be believed, but that the lies are tolerated, brushed off, and factored in. This is not a moral failing, but a moral abrogation.

Life, Death, and Aging: Debating Radical Life Extension

photograph of grandmother and grandson under blankets with a book laid down

An article from The Atlantic has resurfaced in the last week, sparking new discussions about the impact of healthcare on our end-of-life desires and decision-making. In 2014, Ezekiel J. Emanuel articulated his reasons for wanting to die at 75 in a provocative op-ed. In 2019, he confirmed that his position has not changed. Emanuel’s worry is that,

It renders many of us, if not disabled, then faltering and declining, a state that may not be worse than death but is nonetheless deprived. It robs us of our creativity and ability to contribute to work, society, the world. It transforms how people experience us, relate to us, and, most important, remember us. We are no longer remembered as vibrant and engaged but as feeble, ineffectual, even pathetic.

When polled in 2016, over half of people in the U.S. said they would not want to adopt enhancements that would enable them to live longer, healthier lives. While 68% of those polled thought “most people” would “want medical treatments that slow the aging process and allow the average person to live decades longer, to at least 120 years,” only 38% of respondents said that they personally would want such treatments. In the same poll, 69% said their ideal lifespan would be 79-100 years, not far from Emanuel’s view (only 14% said 78 or younger – the small camp to which Emanuel actually belongs). There are many considerations that go into this preference.

One motivation against life extension is the thought that we deserve only some natural amount of time on this earth, perhaps in order to fulfill a religious or spiritual commitment to “move on.” Over half of the respondents in the Pew survey considered treatments that extend life to be “fundamentally unnatural.” The distinction in bioethics between “treatment” and “enhancement” could be playing a significant role here; it is easy to justify intervention to make someone whole, to restore or to ensure a state of health. Such interventions are deemed “treatment,” and are more easily covered by insurance in the U.S. “Enhancements,” on the other hand, make one better than well, or do not have wellness as an aim. Of course, there are gray areas in medical interventions that don’t fit neatly into one or the other of these categories. Obstetrics, for example, doesn’t aim to treat an illness, but nor does it seek to “enhance” the future parent.

For many, considering a life without an end point is disorienting in the extreme. Philosophers from Martin Heidegger to Bernard Williams were committed to the idea that death – a final conclusion – is necessary for bringing meaning to life. If life’s meaning is similar to the meaning that a story’s narrative has, then we may think of it as consisting of stages, with different stages shaping the import and significance of the events that came before. If a life were to go on indefinitely, it could undermine the ability to shape a narrative or derive purpose in each stage. Radically or indefinitely delaying the conclusion can be seen to thus diminish or undermine the meaning in one’s life.

For many, the considerations against life extension are grounded less in theory and more in practice. If lives are indefinitely extended, this will increase the elderly population. The potential additional strain on environmental and social resources could be cause for concern (à la Malthus). The impact on the economy, if living a longer life means staying in the workforce longer, could mean that young people have a harder time entering the workforce when competing with workers who have decades of experience. If those who extend their lives do not remain in the workforce, then different social pressures would arise – supporting a booming retired population, for instance. Regardless of the labor considerations, an extended lifespan could alter the shape and meaning of relationships. Marriages that previously entailed a commitment of less than 50 years may come to seem like unrealistic arrangements if people can anticipate living another 50 years past today’s average lifespan.

Further, the practical considerations for and against radical life extension are enmeshed in our current understandings of health care, aging, and dependence. Our worries about becoming a burden to our loved ones, should our health conditions require some degree of dependent living, are contingent on governmental structures not providing support, either directly to those living with conditions of dependence or to those who will care for them. The way we consider the connection between dependence and burdening is also wrapped up in the way we value independence.

In the end, the theoretical question regarding the morality of extending the average human lifespan is inextricably tied to the realities of the social and political systems in which we live.

In Search of an AI Research Code of Conduct

image of divided brain; fluid on one side, circuitry on the other

The evolution of an entire industry devoted to artificial intelligence has presented a need to develop ethical codes of conduct. Ethical concerns about privacy, transparency, and the political and social effects of AI abound. But a recent study from the University of Oxford suggests that borrowing from other fields like medical ethics to refine an AI code of conduct is problematic. The development of an AI ethics means that we must be prepared to address and predict ethical problems and concerns that are entirely new, and this makes it a significant ethical project. How we should proceed in this field is itself a dilemma. Should we proceed with a top-down, principled approach or a bottom-up, experimental approach?

AI ethics can concern itself with everything from the development of intelligent robots to machine learning, predictive analytics, and the algorithms behind social media websites. This is why it is such an expansive area, with some focusing on the ethics of how we should treat artificial intelligence, others on how we can protect privacy, and still others on how the AI behind social media platforms – including AI capable of generating and distributing ‘fake news’ – can influence the political process. In response, many have focused on generating a particular set of principles to guide AI researchers, in many cases borrowing from codes governing other fields, like medical ethics.

The four core principles of medical ethics are respect for patient autonomy, beneficence, non-maleficence, and justice. Essentially these principles hold that one should act in the best interests of a patient while avoiding harms and ensuring fair distribution of medical services. But the recent Oxford study by Brent Mittelstadt argues that the analogical reasoning relating the medical field to the AI field is flawed. There are significant differences between medicine and AI research which make these principles unhelpful or irrelevant.

The field of medicine is more centrally focused on promoting health and has a long history of attending to the fiduciary duties of those in the profession towards patients. AI research, by contrast, is less homogeneous, with researchers in both the public and private sectors working toward different goals and owing duties to different bodies. AI developers, for instance, do not commit to public service in the way that a doctor does; they may be responsible only to shareholders. As the study notes, “The fundamental aims of developers, users, and affected parties do not necessarily align.”

In her book Towards a Code of Ethics for Artificial Intelligence Paula Boddington highlights some of the challenges of establishing a code of ethics for the field. For instance, those working with AI are not required to receive accreditation from any professional body. In fact,

“some self-taught, technically competent person, or a few members of a small scale start up, could be sitting in their mother’s basement right now dreaming up all sorts of powerful AI…Combatting any ethical problems with such ‘wild’ AI is one of the major challenges.”

Additionally, there are mixed attitudes towards AI and its future potential. Boddington notes a divide in opinion: the West is more alarmist, compared to nations like Japan and Korea, which are more likely to be open and accepting.

Given these challenges, some have questioned whether an abstract ethical code is the best response. High-level principles abstract enough to cover the entire field will be too vague to be action-guiding, and given the variety of subfields and interests involved, oversight will be difficult. According to Edd Gent,

“AI systems are…created by large interdisciplinary teams in multiple stages of development and deployment, which makes tracking the ethical implications of an individual’s decisions almost impossible, hampering our ability to create standards to guide those choices.”

The situation is not that different from work done in the sciences. Philosopher of science Heather Douglas has argued, for instance, that while ethical codes and ethical review boards can be helpful, constant oversight is impractical, and only scientists can fully appreciate the potential implications of their work. The same could be true of AI researchers. A code of ethical principles will not replace ethical decision-making; in fact, such codes can be morally problematic. As Boddington argues, “The very idea of parceling ethics into a formal ‘code’ can be dangerous.” This is because many ethical problems are going to be new and unique, so ethical choice cannot be a matter of mere compliance. Following ethical codes can lead to complacency as one seeks to check certain boxes and avoid certain penalties without taking the time to critically examine what may be new and unprecedented ethical issues.

What this suggests is that any code of ethics can only be suggestive; they offer abstract principles that can guide AI researchers, but ultimately the researchers themselves will have to make individual ethical judgments. Thus, part of the moral project of developing an AI ethics is going to be the development of good moral judgment by those in the field. Philosopher John Dewey noted this relationship between principles and individual judgment, arguing:

“Principles exist as hypotheses with which to experiment…There is a long record of past experimentation in conduct, and there are cumulative, verifications which give many principles a well earned prestige…But social situations alter; and it is also foolish not to observe how old principles actually work under new conditions, and not to modify them so that they will be more effectual instruments in judging new cases.”

This may mirror the thinking of Brent Mittelstadt who argues for a bottom-up approach to AI ethics that focuses on sub-fields developing ethical principles as a response to resolving challenging novel cases. Boddington, for instance, notes the importance of equipping researchers and professionals with the ethical skills to make nuanced decisions in context; they must be able to make contextualized interpretations of rules, and to judge when rules are no longer appropriate. Still, such an approach has its challenges as researchers must be aware of the ethical implications of their work, and there still needs to be some oversight.

Part of the solution to this is public input. We as a public need to make sure that corporations, researchers, and governments are aware of the public’s ethical concerns. Boddington recommends that such input include a diversity of opinion, thinking style, and experience. This includes not only those who may be affected by AI, but also professional experts outside of the AI field like lawyers, economists, and social scientists, and even those who have no interest in the world of AI, in order to maintain an outside perspective.

Codes of ethics in AI research will continue to develop. The dilemma we face as a society is what such a code should mean, particularly whether or not it will be institutionalized and enforced. If we adopt a bottom-up approach, then such codes will likely serve only as guidance, or multiple codes will have to be adopted for different areas. If a more principled top-down approach is adopted, then there will be additional challenges in dealing with the novel and with oversight. Either way, the public will have a role to play in ensuring that its concerns are heard.

Rural Health Disparities and Telemedicine

photograph of surgery performed with help of telepresence robot

Rural America has been losing hospitals and physicians at an alarming rate. In the past decade, the number of ER patients in rural communities has increased by 60% while the number of hospitals in those locations has decreased by 15%. A potential solution to the shortage of health care providers is telemedicine, which gives hospitals, clinics, or even individuals direct access to a physician working from a remote care center. One such company that provides this service is Avera eCARE. At Avera eCARE, doctors work out of high-tech cubicles, dressed in scrubs to look the part, but never physically touching or seeing their patients in person. Instead, they use a high-resolution camera and microphone to work with their patients and with the nurses or healthcare professionals at remote locations.

Dr. Brian Skow is an example of a physician who works from one of the Avera eCARE centers that provide remote emergency care for 179 hospitals across the nation. Skow was called in when a comatose, unresponsive patient came into an emergency room in rural Montana with only nurses on staff. Skow remotely instructed a nurse in how to intubate the patient – inserting a tube into the patient’s throat in order to get her onto a ventilator. Without his help, this patient would most likely have died from lack of oxygen.

“If anything defines the growing health gap between rural and urban America,” The Washington Post claims, “it’s the rise of emergency telemedicine in the poorest, sickest, and most remote parts of the country, where the choice is increasingly to have a doctor on screen or no doctor at all.” And Dr. Skow’s situation is a perfect example. He watched as five people performed the procedure, all with careful instruction and encouragement from his remote location. By comparison, at his hospital in Sioux Falls, an emergency physician, trauma surgeon, cardiologist, anesthesiologist, a team of 20 residents, ER nurses, and paramedics would all be competing to be at the bedside. Each month, telemedicine now helps treat cardiac episodes, traumatic injuries, overdoses, and burns at a rate much higher than before.

There are a number of benefits generated by the move to such a system. Telemedicine helps hospitals recruit and retain doctors because it allows for time off and gives on-site staff remote support. Many critical-access hospitals are struggling to find even a single doctor, or can’t keep physicians for long. This technology offers nurses and physician assistants the option to call in for immediate health care guidance. Another benefit is that hospitals are able to treat more patients with more serious conditions than before, as the technology allows hospitals to treat patients without needing to immediately transfer them. Transfers prolong the time in which the patient suffers, and in most of these cases, every second counts. Apart from pain and outcome, transferring also greatly increases billing charges for patients. Hospitals themselves benefit by treating more cases and thus generating more revenue.

Despite these advantages, there are still many limitations. Telemedicine costs approximately $170,000 to install and $70,000 per month to operate. Hospitals face a difficult decision in choosing between installing this technology and investing in other life-saving machines like MRI and CAT scanners.

Critics also worry that telemedicine takes the humanity out of the patient-physician relationship. Instead of the physician physically being with the patient, that crucial interaction is separated by a screen and thousands of miles. This reality can affect treatment in unexpected ways. Especially in remote communities, it is very common for the nursing staff to know the patient personally, but for the virtual doctor, the patient can become “less human.” Dr. Kelly Rhone describes this phenomenon as she watched nurses from North Dakota perform CPR on a patient for over 10 minutes. One of the worst things a remote doctor can do, Rhone argues, is withdraw care too quickly. Even when a patient has passed, it’s important for the medical staff in the room to acknowledge the situation in their own time. This obligation may even extend to being present with grieving family members.

It is important to consider, then, whether remote care is an adequate substitute and can offer sufficient support for the human element of medicine. Perception can play a major role in diagnosis, and if doctors aren’t seeing their patients in the same way, they will treat them differently. Doctors may be more likely to withdraw care or conserve resources than they would be if they were with patients in person.

There are also challenges when it comes to telemedicine being used directly in people’s homes. There are apps which can connect patients with a doctor via FaceTime, text message, or phone call. This option has some benefits. For busy parents and working people, it is a quick and easy way to get care. Some people live an hour or more from the nearest health clinic, so being able to describe their symptoms over the phone and have their medicine prescribed within minutes is a great benefit. However, there is also an increased risk of misdiagnosis. It can be easy to miss symptoms of larger health problems – when chest discomfort isn’t just a strained muscle but an early sign of a heart attack, for example. In this way, reliance on telemedicine can increase risk to patients.

There is a clear injustice in the United States between the health care services available in rural and urban locations. Telemedicine is one option for those suffering from a lack of adequate healthcare: it adds virtual staff and gives existing staff direct access to help. With the rising trend toward virtual telemedicine, we must consider what cost to patient health we are willing to accept for increased efficiency.

Mindfulness, Capitalism, and the Ethics of Compassion

photograph of person meditating before dawn

Mindfulness, a meditation technique lifted from Buddhist practice, has gained popularity in recent years, especially in the corporate world, as a means to combat stress and improve personal performance. The practice promises to relieve anxiety associated with the pressures of modern life. Indeed, not only are our work lives often more demanding and less secure, but we also live in a 24-hour news cycle and amid frenetic social media activity from which many people find it increasingly difficult to retreat. All this activity has negative consequences for our concentration, mental acuity, and general well-being. We also live in a time of rising inequality, of epidemics of stress, anxiety, and other mental health issues, and in which politics is bitterly divided and people’s trust in politicians is at a very low ebb. We live in a time in which the problems caused by neoliberal capitalism’s rapacious activity are coming home to roost as we sit at the brink of ecological collapse. It is natural for people to seek succor.

Practicing mindfulness involves focusing one’s attention on one’s immediate surroundings, sounds, and sensations, with the purpose of drawing the mind out of its busy chatter, and its anxious worry, and focusing only on the immediate present and the immediate surroundings: “To live mindfully is to live in the moment and reawaken oneself to the present, rather than dwelling on the past or anticipating the future.”

But what are the ethics of something that promises succor without addressing the destructive injustices of capitalism that are causing the problems in the first place?

Take the climate emergency – we are at the beginning of, and falling increasingly into the grip of, a man-made catastrophe. Here in Australia the summer has barely begun and already a drought-ravaged area the size of Albania has been razed by fires, more than eighty of which are still burning as Sydney, the largest city, is blanketed in toxic smoke. If we aren’t feeling anxious, we should be – and if we aren’t focused on the future, we ought to be. Used as a method of easing the anxiety of climate catastrophe, mindfulness threatens to contribute to the problem by shifting the focus from action to management; from state responsibility for action to an individual burden of amelioration.

The origins of mindfulness are in Buddhist practice, where letting go of the ego’s desires and worldly attachments opens one to a greater connectedness with the world’s other beings. The form of this connection is compassion. Yet the popular Western-appropriated version of mindfulness being practiced increasingly in the corporate world appears to be moving in the opposite direction of compassion. Mindfulness is touted as a cure for modern ills like anxiety, yet rather than cure them, it is a technique of evasion; rather than being focused on connectedness, it reinforces ego by centering on self-improvement.

The use of mindfulness as stress relief in corporate institutions helps corporations avoid responsibility for creating environments detrimental to mental well-being, and tries to shift the burden away from the toxic system back onto the individual. It is therefore unsurprising that mindfulness has been appropriated by the corporate world.

Bhikkhu Bodhi, an outspoken Western Buddhist monk, has warned that “absent a sharp social critique, Buddhist practices could easily be used to justify and stabilize the status quo, becoming a reinforcement of consumer capitalism.”

Compassion is not central to post-Enlightenment Western philosophy in the same way it is in some religious ethics such as Buddhism. The Western tradition tends to distrust emotion in morals, because the moral life is taken to be centered on decision-making, and emotions are thought not to be a solid basis for rational action.

But compassion can be found in different kinds of appeals to the universal nature of ethics that most normative theories make. There is a form of the ‘golden rule’ – the moral rule that states one ought to treat others as one would want to be treated – present, for example, in both deontological and utilitarian styles of normative moral theory.

This indicates the presence of a general principle of ethics – that it is universal. In utilitarianism this principle dictates that each stakeholder’s preferences are considered equally. In deontological theories, such as rights-based ethics, it dictates that rights are universal and inalienable. These demands of ethics are other-centered, and require us to make decisions in the interests of, say, justice rather than in self-interest.

In terms of Western normative ethical theories, mindfulness in its original Buddhist form is akin to virtue ethics, in which the agent’s character is at issue and ethics is centered on the virtues of good character which enable and contribute to the good life.

As David Loy wrote in a famous article called “Beyond McMindfulness”:

“mindfulness is a distinct quality of attention that is dependent upon and influenced by many other factors: the nature of our thoughts, speech and actions; our way of making a living; and our efforts to avoid unwholesome and unskillful behaviors, while developing those that are conducive to wise action, social harmony, and compassion.”

Here we see that mindfulness is meant to be an ethical position, one taken up in order to develop – but to develop in an ethical, that is, other-centered, direction.

Mindfulness should be about a quality of awareness, a kind of attunement that is by its nature ethically in the world. Using it for self-improvement, without compassion or social conscience, distorts its nature.

Perhaps mindfulness creates a much-needed reflective space in life. But perhaps, rather than using that space for the avoidance of thought, we should use it for a more reflective kind of attention than everyday life permits.

Buddhist philosophy is about overcoming ego – and ego is at home in capitalism – or rather, capitalism is at home in the ego. Capitalism depends upon the restless ego seeking and finding momentary satisfaction of desire by consumption; and it depends upon that satisfaction being soon superseded by another desire that can in turn be satisfied by another consumption. But capitalism is in crisis as we reach the endgame in the climate and ecological emergency. Corporate mindfulness is a way of easing the anxiety without interfering with the capitalist machine. Then individuals can feel better, and business can carry on as usual. But it is a case of merely treating the symptom while allowing the disease to run rampant.

If, as I believe, the climate crisis is a crisis of capitalism, the role of alienation is central – alienation from each other and from the natural world. From this point of view, our alienation from these spheres is what caused the crisis in the first place. Compassion is a possible way back to the ethical dimensions of our interconnectedness. But it cannot be found in the Western-appropriated practice of mindfulness.

Cui Bono? Public Goods and College Education

photograph of campus building at the University of Tennessee

Pete Buttigieg recently caused a stir by arguing that students from wealthy families should not benefit from any scheme that makes public college tuition free. This distinguishes his position on free college tuition from that of Bernie Sanders and Elizabeth Warren, both of whom have plans that would extend the benefit of free tuition to all students, regardless of the amount of money their family makes. Buttigieg motivates his view by appealing to an idea of fairness. How is it fair, he asks, that middle- and lower-class families pay for the children of rich families to go to school? (Hillary Clinton expressed the same sentiment in 2016.) However, Sanders and Warren also motivate their plans by appealing to fairness. Sanders has argued since his 2016 presidential run that each person’s ability to secure their future livelihood should not be held hostage to how much money their family makes. With mutually incompatible plans each apparently appealing to the same moral concept, what can we make of the respective arguments?

Buttigieg’s plan is to offer free tuition at both two- and four-year public institutions of higher education to those students whose family income is less than $100,000. Buttigieg argues that Sanders’ and Warren’s plans are not properly targeted at those who most need assistance and would moreover be wasteful, paying for the education of students whose families could afford it. The sense of what is fair here is something like, “from each according to their ability, to each according to their need.” Wealthy families have the ability both to subsidize other families’ education and to pay for their own. Less wealthy families need more assistance securing the resources to pay for their education. On Twitter, Alexandria Ocasio-Cortez criticized Buttigieg’s plan for misconstruing what sort of good a college education is. She argued, “Everyone contributes & everyone enjoys. We don’t ban the rich from public schools, firefighters, or libraries bc [sic] they are public goods.” The sense of what is fair here is something like, “you pays your money and you takes your choice.” Everyone who pays into a fund secures the right to draw on that fund for themselves if they decide to do so.

But what does it mean for something to be a public good? One way of explaining this is via the idea of so-called “neighborhood effects.” This term is used in economics to describe situations in which there is a forced exchange between parties. The typical example is that of an upstream polluter. If an individual or institution dumps materials into the water that make it undrinkable or otherwise inconvenient for people downstream to use, those people are forced to accept the sullied water. This is true even if the upstream polluter offers to compensate the downstream people: they are forced to choose between dirty water without compensation and dirty water with compensation. But this is not a real choice. In the case of education (at least of the K-12 type) something similar happens. When a person is educated, everyone else in society receives a benefit without bearing any costs. Educated people are better neighbors, workers, and fellow citizens. In this situation the educated person is forced into an exchange: educate themselves at their own cost (of time and resources) and benefit everyone else, or forgo education at their own cost (of future benefit and improved quality of life). But this is not a real choice. Because each individual’s actions automatically affect the quality of the air and water supply of each other person (within a given area), air and water supplies are public goods. Likewise, because each person automatically benefits from each other person receiving at least a basic education, basic education is a public good.

In the case of neighborhood effects, even libertarian thinkers like Milton Friedman have argued for the acceptability of government administration and intervention. For education this takes the form of the government securing funds to create public schools by way of taxes or fees. This is how public K-12 education is universally provided to children in the United States. Moreover, Buttigieg agrees with this principle for K-12 education. So why doesn’t he extend it to college education? Again, in line with the thinking of people like Friedman, Buttigieg thinks of college education as mostly beneficial to the educated people themselves, rather than to society at large. Because it benefits them personally, he reasons, it is appropriate for them to bear at least part of the cost, which they can then repay by way of their increased post-graduation earnings. Moreover, Buttigieg argues that a college education is not necessary for everyone, whereas K-12 education is. To boot, the people who choose to go to college, according to Buttigieg, are largely those from wealthier households anyway. He says, “Americans who have a college degree earn more than Americans who don’t. As a progressive, I have a hard time getting my head around the idea of a majority who earn less because they didn’t go to college subsidizing a minority who earn more because they did.” Hence he sees entering college as a form of risk that a person chooses to take, betting that their future earnings will make the risk worthwhile. Society at large, however, ought not be forced to subsidize the risks of individuals if those risks will only pay off for the individuals themselves rather than for the whole of society.

What of Ocasio-Cortez’s critique, then? Is she wrong to draw an analogy between public services like firefighting and college education? The answer lies in how her claim that “everyone contributes and everyone enjoys” is interpreted. In the case of a true public good, the benefit everyone enjoys is a sort of generic, blanket benefit. If the mansion and estate grounds of a wealthy family catch fire, the fire could spread to the homes of the other citizens or at the very least pollute the air in the area with smoke. This is a neighborhood effect, which provides a basis for making everyone pay into funds that provide universal firefighting services. As she says, each person benefits from the provision of these services to every other individual. Likewise with K-12 education: each person benefits from every other person gaining basic literacy, numeracy, and civics knowledge. The question, then, is whether each person benefits from every other person gaining advanced skills in literary analysis, theoretical physics, philosophy, psychology, and a host of other disciplines college students can pursue. To understand Buttigieg’s “No” is to see the benefits stemming from college education as specific, non-blanket benefits that accrue primarily to each individual who chooses to partake, rather than as a generic, blanket benefit.

Because Buttigieg does not view college education as a public good, he does not think it is fair to make everyone pay so that everyone can enjoy it. Ocasio-Cortez explicitly views it as a public good, and so does think it is fair that everyone is able to enjoy it equally. Sanders and Warren also seem to implicitly view college education as a public good, given their policy proposals for free college tuition for all students. Because it is easier to quantify how individuals are benefited by their own college education, Buttigieg’s plan has a certain appeal. But without completing the harder task of quantifying how an individual’s college education benefits society as a whole, or thinking beyond quantitative evidence, it is not clear that he can stave off criticisms like that of Ocasio-Cortez.

Religious Liberty and Science Education

photograph of empty science classroom

In November, the Ohio House of Representatives passed “The Ohio Student Religious Liberty Act of 2019.” The law quickly garnered media attention because it seems to allow students to get answers wrong without penalty if the reason they get those answers wrong is because of their religious beliefs. The language of the new law is the following:

Sec. 3320.03. No school district board of education, governing authority of a community school […], or board of trustees of a college-preparatory boarding school […] shall prohibit a student from engaging in religious expression in the completion of homework, artwork, or other written or oral assignments. Assignment grades and scores shall be calculated using ordinary academic standards of substance and relevance, including any legitimate pedagogical concerns, and shall not penalize or reward a student based on the religious content of a student’s work.

Sponsors of the bill claim that students will be required to learn the material they are being taught, and to answer questions in the way that the curriculum supports regardless of whether they agree with it. Opponents of the law disagree. The language of the legislation prohibits teachers from penalizing the work of a student when that work is expressive of religious belief. This seems to entail that a teacher cannot give a student a bad grade if that student gets an answer wrong for religious reasons. In any event, the vagueness of the law may affect the actions of teachers. They might be reluctant to grade assignments correctly if they think doing so may put them at odds with the law.

Ohio is not the only state in which bills like this are being considered, though most have failed to pass for one reason or another. Some states, such as Arizona, Florida, Maine, and Virginia have attempted to pass “controversial issues” bills. The bills take various forms. Arizona Bill 202, for example, attempted to prohibit teachers from advocating any positions on issues that are mentioned in the platform of any major political party (a similar bill was proposed in Maine). This has implications for teaching evolution and anthropogenic climate change in science classes. Other controversial issue bills prohibit schools from punishing teachers who teach evolution or climate change as if they are scientifically controversial.

Much of the recent action is motivated by attitudes about Next Generation Science Standards, a science education program developed by 26 states in conjunction with the National Science Teachers Association, the American Association for the Advancement of Science, and the National Research Council. The program aims to teach science in active ways that emphasize the important role that scientific knowledge plays in innovation, the development of new technologies, and in responsible stewardship of the natural environment. NGSS has encountered some resistance in state legislatures because the curriculum includes education on the topics of evolution and anthropogenic climate change.

Advocates of these laws make a number of different arguments. First, all things being equal, there is value in freedom of conscience. We should set up our public spaces in such a way that respects the fact that people can believe what they want to believe. The U.S. Constitution was intentionally written in a way that provides protections for citizens to form beliefs independently of the will of governments. In response, an opponent of this legislation might say that imposing a set of standards for curriculum based on the best available evidence is not the same thing as forcing citizens to endorse a particular set of beliefs. A student can learn about evolution or anthropogenic climate change, all the while disagreeing with what they are learning.

A second, related argument might be that school curriculum and grading policies should respect the role that religion plays in people’s lives. For many, religion provides life with meaning, peace, and hope. Given the importance of these values, our public institutions shouldn’t be taking steps that might undermine religion.

A third argument concerns parental rights to raise children in the way that they see fit. This concern is content-neutral. It might be a principle that everyone should respect. Parents have significant interests in the way that their children turn out, and as a result they have interests in avoiding what they might view as indoctrination of their children by the government. Attendance at school is mandatory for children. If the government is going to force them to attend, they shouldn’t be forced to “learn” things that their parents might not want them to hear.

A fourth argument has to do with the value of free speech and the expression of alternative positions. It is always valuable to hear opposing positions, even those that are in opposition to received scientific knowledge, so that science doesn’t just become another form of dogma. In response, opponents would likely argue that we get closer to the truth when we assess the validity of opposing viewpoints, but not all opposing viewpoints are created equal. Students only have so much time dedicated to learning science in school, so if opposing positions are considered in the classroom, perhaps it is best if they are positions advocated by scientists. Moreover, if a particular view reflects only the opinion of a small segment of the scientific community, perhaps it is a waste of valuable time to discuss those positions at all.

Opponents of this kind of legislation would insist that those in charge of the education of our children must value best epistemic practices. Some belief-forming practices contribute to the formation of true beliefs more reliably than others. The scientific method and the peer review process are examples of these kinds of reliable practices. It is irresponsible to treat positions that are not supported by evidence as if they are equally deserving of acceptance as beliefs that are supported by evidence. Legislation of this type presents tribalism and various forms of pernicious cognitive bias as adequate evidence for belief.

Furthermore, opponents argue, the passage of these bills is nothing more than political grandstanding—attempts to solve non-existent problems. The United States Constitution already protects the religious liberty of students. Additional legislation is not necessary.

Education, in part, is the creation of responsible, productive, autonomous citizens. What’s more, the issues at stake are crucially important. Denying the existence of anthropogenic climate change has powerful, and even deadly, consequences for millions of currently living beings, as well as for future generations. Our best hope is to create citizens who are well-informed on this issue and who are therefore in a good position to mitigate the effects and to construct meaningful climate policy in the future. This will be impossible if future generations are essentially climate illiterate.

Conscientious Exemption, Reasonable Accommodation, and Dianne Hensley

On December 2nd, McLennan County Justice of the Peace Dianne Hensley was issued a public warning for refusing to perform same-sex marriages. She continued to perform marriages for heterosexual couples, but claimed that she was following her “conscience and religion” by abstaining from performing the non-straight marriages.

Hensley has been open about her policy and claimed in 2017 that she qualified for a “religious exemption” from performing this service for non-straight couples. She sees her position as grounded in her Christian faith, and therefore considers herself to be “entitled to accommodations just as much as anyone else.”

For the past several years Hensley’s office has refused to officiate same-sex marriages. In response to requests, Hensley and her staff offer a document explaining her reasoning and indicating other qualified and willing local officiants.

Hensley would not be the first public official to be reprimanded for not participating in the administration of same-sex marriages. In 2018, an Oregon Supreme Court judge was suspended for three years for refusing to conduct same-sex marriages. In 2015, in a case that garnered a great deal of national attention, Kimberly Davis, now a former county clerk in Kentucky, refused to issue marriage licenses to same-sex couples and was jailed for contempt of court (this year she was declared vulnerable to lawsuits).

Hensley’s case is unique, however, because performing marriages is not a required part of her job at all. Officiating marriages is a way to earn “thousands of dollars in personal income,” but it is optional for justices of the peace. Because officiating is optional, many of Hensley’s like-minded colleagues simply stopped performing marriages after the Supreme Court extended marriage rights to same-sex couples.

The right to reasonable accommodation can be murky in cases like these. Roughly speaking, accommodations are typically considered unreasonable on two grounds. First, if, in order to accommodate the needs or conscience of the employee, the job itself must be fundamentally altered, then the employer is not required to make such an accommodation. Second, and perhaps relatedly, if making such an accommodation is sufficiently burdensome for the employer, they need not provide it. For instance, a business would not be required to lower production standards or create a new position in order to accommodate an employee.

The justification for exemptions of conscience constitutes a difficult area of labor ethics and fits uncomfortably with the right to reasonable accommodation. On the one hand, it is intuitive that we would not want a system in place where individuals could not live according to their values. However, this value is not unrestricted, and there are intuitive constraints on when appeals to moral integrity are reasonable: the norms of professions and their role in our society limit when individuals can conscientiously refuse.

Consider the case of a health care provider who finds it morally objectionable to provide some medical intervention. The role of medical professionals in society plays an important part in determining the extent to which it makes sense to allow such professionals to selectively abstain from providing services based on their conscience. Here, the particular social value of the training and care that health care providers offer makes professional standards especially pertinent. Providers possess knowledge and skills that the public does not generally have, and therefore the public must rely on them for a part of their lives (health maintenance) that is particularly significant.

Thus, while moral integrity is deeply important, appropriate refusals must not run afoul of the role that professionals play in our society. In this, health care providers are likely in a similar category to justices of the peace: both have specialized training and skills that the general public relies on for unique and irreplaceable services.

One of the motivations behind the Texas commission’s complaint against Hensley is that, due to her discriminatory practices regarding officiating marriages, she is displaying a lack of ability to be impartial, which is certainly a requirement of a justice of the peace. This again mirrors concerns about health care providers who select which interventions to provide – such practices may indicate that a provider is not being guided by the norms of the profession and is not making decisions regarding medical interventions on medical grounds.

Some professions allow for personal conscience to guide professional decisions, but for most, the decision-making process for what to do is grounded in the professional aims, so one’s individual values are given sway only when the profession itself allows for leeway in making the decision. For example, a teacher who assigns grades randomly instead of according to some system grounded in pedagogy is flouting the professional norms of teaching. Teachers can assign grades on a number of bases, as long as they are pedagogical grounds – as long as they are serving recognizable pedagogical purposes. An instructor’s normative attitudes may be able to play some role in how they make teaching choices, but only in spaces that the profession allows for some leeway.

Similarly, in the healthcare profession, providers can adopt different degrees of risk aversion and styles of patient rapport, different philosophies of patient care and approaches to remaining up to date with treatment standards, but it is hard to see where any extra-medical leeway would come in: in controversial or difficult decision scenarios, health care providers are still expected to make decisions on medical grounds. Similar standards would apply for justices of the peace regarding the performance of their duties.

The particularly significant role that justices, teachers, and health care providers play in our society may be what underlies the difficulty in motivating an exemption of conscience. That such professionals have special skills that provide critical services for public welfare means that it is important they not exercise their professional role arbitrarily.

Compare these cases to the role conscience might play in professions less integral to society’s well-functioning. Imagine a concierge who is an ethical vegetarian, believing that consuming and purchasing meat products is against the dictates of morality. On the surface, this wouldn’t have a significant impact on her ability to be a good concierge. However, part of the job of a concierge is to give visitors information in order to guide them in an unfamiliar city. Say this concierge considers it morally wrong to eat at steakhouses, and that she would be morally complicit in that wrong were she to direct patrons to them. Of course we wouldn’t want to make the vegetarian do things that make her uncomfortable, or lead an inauthentic life – and this is what grounds the value of moral integrity and the push to find grounds for conscientious refusals.

However, if the concierge makes decisions about how to treat visitors, or about how to go about her job, based on non-concierging reasons, it seems she is not meeting the standard for the profession; she is being a bad concierge. Concierges should guide visitors, and if the vegetarian concierge doesn’t do that, she is failing at concierging. This seems similar in structure to other scenarios we could imagine as a matter of philosophical fiction, such as a Christian Scientist health care provider or an Amish Apple store Genius. Such individuals have sincere moral grounds to refuse to engage with patients or clients in the way their profession dictates. So are these individuals’ moral attitudes consistent with the performance of their jobs, or are they candidates for reasonable accommodation?

As for Hensley, she has support to “practice her religion” from members of conservative religious groups. They do not engage with the question of whether, or to what extent, some careers may simply be incompatible with the free practice of some religions. Since 2015, the Texas Justice Court Training Center has maintained that there is no legal authority permitting a justice of the peace to perform only straight marriages.

Johnson’s Mumbling and Top-Down Effects on Perception

photograph of Boris Johnson scratching head

On December 6th, in the midst of his reelection campaign, UK Prime Minister Boris Johnson spoke about regulating immigration to a crowd outside a factory in central England, saying “I’m in favour of having people of talent come to this country, but I think we should have it democratically controlled.” When Channel Four, one of the largest broadcasters in the UK, uploaded video of the event online, their subtitles mistakenly read “I’m in favor of having people of color come to this country,” making it seem as though Johnson was, in this speech, indicating a desire to control immigration on racial grounds. After an uproar from Johnson’s Conservative party, Channel Four deleted the video and issued an apology.

However, despite Tory accusations of slander and media partisanship, at least two facts make it likely that this was, indeed, an honest mistake on the part of a nameless subtitler within Channel Four’s organization:

  1. Poorly-timed background noise and Johnson’s characteristic mumbling make the audio of the speech less-than-perfectly clear at the precise moment in question, and
  2. Johnson has repeatedly voiced racist, sexist, and homophobic attitudes in both official and unofficial speeches, as well as in his writings (again, repeatedly) and his formal policy proposals.

Given the reality of (2), someone familiar with Johnson may well be more inclined to interpret him as uttering something explicitly racist (as opposed to the still-problematic dog whistle “people of talent”), particularly in the presence of the ambiguities (1) describes. Importantly, it may not actually be a matter of judgment (where the subtitler would have to consciously choose between two possible words) – it may genuinely seem to someone hearing Johnson’s speech that he spoke the word “color” rather than “talent.”

Indeed, this has been widely reported to be the case in the days following Johnson’s campaign rally, with debates raging online regarding the various ways people report hearing Johnson’s words.

For philosophers of perception, this could be an example of a so-called “top-down” effect on the phenomenology of perceptual experience, a.k.a. “what it seems like to perceive something.” In most cases, the process of perception converts basic sensory data about your environment into information usable by your cognitive systems; in general, this is thought to occur via a “bottom-up” process whereby sense organs detect basic properties of your environment (like shapes, colors, lighting conditions, and the like) and then your mind collects and processes this information into complex mental representations of the world around you. Put differently, you don’t technically sense a “dog” – you sense a collection of color patches, smells, noises, and other low-level properties which your perceptual systems quickly aggregate into the concept “dog” or the thought “there is a dog in front of me” – this lightning-fast process is what we call “perception.”

A “top-down” effect – also sometimes called the “cognitive penetration of perception” – is when one or more of your high-level mental states (like a concept, thought, belief, desire, or fear) works backwards on that normally-bottom-up process to influence the operation of your low-level perceptual systems. Though controversial, purported examples of this phenomenon abound, such as how patients suffering from severe depression will frequently report that their world is “drained of color” or how devoted fans of opposing sports teams will both genuinely believe that their preferred player won out in an unclear contest. Sometimes, evidence for top-down effects comes from controlled studies, such as a 2006 experiment by Proffitt which found that test subjects wearing heavy backpacks routinely reported hills to be steeper than did unencumbered subjects. But we need not be so academic to find examples of top-down effects on perception: consider the central portion of the “B-13” diagram.

When you focus on the object in the center, you can probably shift your perception of what it is (either “the letter B” or “the number 13”) at-will depending on whether you concentrate on either the horizontal or vertical lines around it. Because letters and numbers are high-level concepts, defenders of cognitive penetrability can take this as proof that your concepts are influencing your perception (instead of just the other way around).
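One common way philosophers and cognitive scientists formalize such top-down effects is Bayesian: perception is modeled as combining an ambiguous sensory signal (the likelihood) with prior expectations. Nothing in the Johnson case commits us to this model, but a toy calculation – with every number invented purely for illustration – shows how a listener’s prior beliefs about a speaker could tip an ambiguous sound toward one word rather than another:

# Toy Bayesian illustration of a top-down effect on word perception.
# All probabilities are invented; this is one standard way of modeling
# cognitive penetration, not a claim about any actual listener.

def posterior_color(prior_color, likelihood_color, likelihood_talent):
    """P(heard 'color' | audio) via Bayes' rule over two candidate words."""
    p_color = prior_color * likelihood_color
    p_talent = (1 - prior_color) * likelihood_talent
    return p_color / (p_color + p_talent)

# The mumbled audio is ambiguous: it fits both words almost equally well.
lik_color, lik_talent = 0.48, 0.52

# A listener with no expectations (prior = 0.5) hears roughly a toss-up...
print(round(posterior_color(0.50, lik_color, lik_talent), 2))  # ~0.48

# ...but a listener who strongly expects racist rhetoric (prior = 0.8)
# may genuinely perceive "color" despite identical acoustic evidence.
print(round(posterior_color(0.80, lik_color, lik_talent), 2))  # ~0.79

On this picture, the subtitler need not have made any conscious choice at all; the prior does its work before the experience ever reaches deliberation.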

So, when it comes to Johnson’s “talent/color” word choice, much like the Yanny/Laurel debate of 2018 or the infamous white/gold (or blue/black?) Dress of 2015, different audience members may – quite genuinely – perceive the mumbled word in wholly different ways. Obviously, this raises a host of additional questions about the epistemological and ethical consequences of cognitive penetrability (many researchers, for example, are concerned to explore perceptions influenced by implicit biases concerning racism, sexism, and the like), but it does make Channel Four’s mistaken subtitling much easier to understand without needing to invoke any nefarious agenda on the part of sneaky anti-Johnson reporters.

Put more simply: even though Johnson didn’t explicitly assert a racist agenda in Derbyshire, it is wholly unsurprising that people have genuinely perceived him to have done so, given the many other times he has done precisely that.

The Jezebel Stereotype and Hip-Hop

photograph of Lil' Kim on stage

Back in the day, black people were depicted in media through a series of racist caricatures that endured for the majority of the 20th century. These caricatures became popularized in films, television, cartoons, etc. There was the classic sambo – the simple-minded black man often portrayed as lazy and incoherent. Then there was the mammy – the heavyset black woman maid who possessed a head-scratching loyalty to her white masters. The picaninny depicted black children as buffoons and savages. The sapphire caricature was your standard angry black woman, a trope that is still often portrayed in media today. But perhaps one of the most enduring caricatures is that of the jezebel. This caricature depicted black women as having an insatiable need for sex, so much so that they were portrayed as predators. One of the ways this stereotype has endured is through hip-hop. It could be argued that some black women in the rap game today reflect some of the attributes of the jezebel due to the promiscuity in their music. So, are black women in rap facilitating the jezebel stereotype and, in turn, adversely affecting the depiction of black women in general?

Before we go any further, it should be noted that rap music has never been kind to women, especially black women (see “Hip-Hop Misogyny’s Effects on Women of Color”). You wouldn’t have to look far to confirm this. After all, Dr. Dre’s iconic album The Chronic has a song called “Bitches Ain’t Shit” with Uncle Snoop Dogg singing the hook. It’s become a staple in rap music to disregard women in some form or fashion. But perhaps a line from Kanye West’s verse on The Game’s song “Wouldn’t Get Far” best embodies the treatment of women, and black women in particular, in the rap genre. West raps, “Pop quiz how many topless, black foxes did I have under my belt like boxers?” In the music video, a group of black women in bikinis dance around West while he raps. Black women in rap are presented as objects of sexual desire – they’re arm candy. It’s the updated version of the jezebel. As a racist caricature, the jezebel stereotype was used by slave masters to justify sex with female slaves. But even prior to that, Europeans traveled to Africa and saw the women there wearing little to no clothing and practicing polygamy. To Europeans, this signaled an inherently promiscuous nature rather than a social tradition. To them, it meant sexual desire.

Now, there’s a narrative of black women rappers in hip-hop embracing their sexualization in media. Junior M.A.F.I.A. rapper and Notorious B.I.G. femme fatale Lil’ Kim started this trend, spitting verses that your parents definitely would not have let you listen to as a kid. For example, on her song “Kitty Box,” Kim raps,

“Picture Lil’ Kim masturbatin in a drop

Picture Lil’ Kim tan and topless on a yacht

Picture Lil’ Kim suckin on you like some candy

Picture Lil’ Kim in your shirt and no panties.”

Fast forward from Lil’ Kim, and there’s Nicki Minaj with her song “Anaconda,” whose music video features her and several other black women twerking. But even past Nicki Minaj, there’s newer rapper Megan Thee Stallion, who, though she has developed an original sound, seems to have traces of Kim and Minaj in her music. On her song “Big Ole Freak,” Megan raps,

“Pop it, pop it, daydreaming ‘bout how I rock it.

He hit my phone with a horse so I know that mean come over and ride it.”

Posing a compelling contrast to “Big Ole Freak” is another MC, Doja Cat. In the music video for her song “Juicy,” Doja dances to lyrics that sound like a mash-up of Megan Thee Stallion and Nicki Minaj, rapping,

“He like the Doja and the Cat,

yeah, He like it thick he like it fat,

Like to keep him wanting more.”

Though Doja’s music has traces of the jezebel stereotype’s sexual desire, there’s a positive aspect to it as well. For all of the sexual innuendo in “Juicy,” at its core the song is about body positivity. While rapping about that “natural beauty,” Doja features women of all shapes and sizes in her music video and is unapologetic about her figure – it’s as if her message is more about empowerment than it is about sex. Megan Thee Stallion also incorporates empowerment for women in her raps with the term she coined, “Hot Girl Summer,” which to Megan means women being unapologetic about their sexuality and simply enjoying life. At the same time, women in rap have always put forth some positive sentiment in their music. Among the pioneering women rap artists were MCs like Queen Latifah, Lauryn Hill, and MC Lyte. For example, in her song “U.N.I.T.Y.,” Queen Latifah begins her verse by rapping,

“Every time I hear a brother call a girl a bitch or a ho,

Tryna make a sister feel low, You know all of that gots to go.”

So, are the rappers of today merely facilitating the jezebel stereotype and the sexualization of black women? True, the messages in their music are reminiscent of some aspects of the jezebel trope, but there’s an aspect of positivity that challenges this reductionist view. It could also be that rappers like Doja Cat and Megan Thee Stallion are just smart entrepreneurs who understand that sex sells and are simply capitalizing on an opportunity. But these rappers might also be changing the sexualization of black women by taking over the narrative for themselves.

But what does this mean for the rest of us? How does this help the black women who have to endure that stereotype every day? They don’t have a platform like Doja Cat and Megan Thee Stallion do to start trends and see their impact. But maybe that’s where trends like “Hot Girl Summer” come in handy. While the music and image of rap artists like Doja and Megan seem negative to some, they are a form of empowerment for black women. Perhaps listening to “Juicy” lets some black women feel proud of their bodies, and trends like “Hot Girl Summer” let them feel unapologetic about them. At the same time, it’s important to understand that as time passes, stereotypes – how we define people – change meaning or lose meaning completely. But with that said, it’s still important not to forget the history of where those ideas came from.

Forget PINs, Forget Passwords

photograph of two army personnel using biometric scanner

By 2022, passwords and PINs will be a thing of the past. Replacing these prevailing safety measures is behavioral biometrics – a new and promising generation of digital security. By monitoring and recording patterns of human activity such as finger pressure, the angle at which you hold your device, hand-eye coordination, and other hand movements, this technology creates a digital profile of you to prevent imposters from accessing your secure information. Behavioral biometrics does not focus on the outcome of your digital activity but rather on the manner in which you enter data or conduct a specific activity, which is then compared to your profile on record to verify your identity. At present the technology is largely used by banks, and market researchers predict that by 2023 there will be 2.6 billion biometric payment users.
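To make the mechanism concrete, here is a minimal sketch, in Python, of what such a verification step might look like. Everything here is hypothetical and greatly simplified – production systems use machine-learned models over far richer signal streams – but it illustrates the core idea of enrolling a behavioral profile and checking new sessions against it:

# Minimal sketch of behavioral-biometric verification (illustrative only).
# Feature names, values, and the threshold are all hypothetical.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Profile:
    means: list    # enrolled per-feature means
    stdevs: list   # enrolled per-feature standard deviations

def enroll(samples):
    """Build a profile from several enrollment sessions (rows = sessions, columns = features)."""
    features = list(zip(*samples))
    return Profile([mean(f) for f in features],
                   [max(stdev(f), 1e-6) for f in features])

def verify(profile, observation, threshold=2.5):
    """Accept if the session's average deviation from the profile (in z-scores) is small."""
    z = [abs(x - m) / s for x, m, s in zip(observation, profile.means, profile.stdevs)]
    return mean(z) < threshold

# Example features: [key-hold time (ms), device tilt (degrees), touch pressure (0-1)]
profile = enroll([[105, 31.0, 0.62], [98, 29.5, 0.58], [110, 30.2, 0.60]])
print(verify(profile, [102, 30.5, 0.61]))  # behaves like the enrolled user -> True
print(verify(profile, [60, 12.0, 0.95]))   # very different behavior -> False

Notice that nothing in this sketch asks the user’s permission or tells them what is being recorded – which is precisely the worry raised below.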

Biometric systems necessitate and operate based on a direct and thorough relationship between a user and technology. Consequently, privacy is one of the main concerns raised by critics of biometric systems. Functioning as a digitized reserve of detailed personal information, the possibility of biometric systems being used by unauthorized parties to access stored data is a legitimate fear for many. Depending on how extensive the use of biometric technology becomes, an individual’s biometric profile could be stolen and used against them to gain access to all aspects of their life. Adding to this worry is the potential misuse of an individual’s personal information by biometric facilities. Any inessential use of private information without the individual’s knowledge is intuitively unethical and considered an invasion of privacy, yet the US currently has no law in place requiring apps that record and use biometric data to disclose this form of data collection. If behavioral biometrics is already being used to covertly record and compile user activity, who’s to say how extensive and intrusive unregulated biometric technology will become over time?

Another issue with biometric applications is the possibility of bias against minorities, given the prominence of research suggesting that certain races are more likely to be correctly recognized by face recognition software than others. A series of extensive independent assessments of face recognition systems conducted by the National Institute of Standards and Technology in 2000, 2002, and 2006 showed that males and older people are more accurately identified than females and younger people. If algorithms are designed without accounting for the possibility of such unintended biases, these systems will be unethical.
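In principle, concerns like these can be checked rather than merely asserted. Below is a minimal sketch, with entirely hypothetical data and group labels, of the kind of per-group error-rate comparison that audits such as NIST’s perform at much larger scale:

# Sketch of a per-group error-rate audit for a verification system.
# The data and group labels are hypothetical; real audits use large
# labeled datasets and report false match / false non-match rates.
from collections import defaultdict

# (group, genuine_match, system_accepted) for a batch of verification attempts
attempts = [
    ("older_male", True, True), ("older_male", True, True), ("older_male", False, False),
    ("younger_female", True, False), ("younger_female", True, True), ("younger_female", False, True),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, genuine, accepted in attempts:
    tallies[group][0] += (genuine != accepted)
    tallies[group][1] += 1

for group, (errors, total) in tallies.items():
    print(f"{group}: error rate {errors / total:.0%}")  # large gaps between groups signal bias

A system whose error rates diverge sharply across groups fails some users far more often than others, and that asymmetry is where the ethical problem lies.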

By the same token, people with disabilities may face obstacles when enrolling in biometric databases if they lack the physical characteristics used to register in the system. An ethical biometric system must cater to the needs of all people and allow differently abled and marginalized people fair opportunities to enroll in biometric databases. Similarly, a lack of standardization of biometric systems to accommodate geographic differences could compromise the efficiency of biometric applications. Because of this, users could face discrimination and unnecessary obstacles in the authentication process.

Behavioral biometrics is gaining traction as the optimum form of cybersecurity, designed to prevent fraud via identity theft and automated threats, yet the social cost of incorporating technology this invasive and meticulous has not been fully explored. The social and ethical consequences that the use of behavioral biometrics may have on individuals and society at large deserve significant consideration. It is therefore imperative that developers and users of biometric systems keep in mind the socio-cultural and legal contexts of this type of technology and weigh the benefits of depending on behavioral biometrics for securing personal information against its costs. Failure to do so could not only hinder the success of behavioral biometrics, but also leave us unequipped to tackle its possible repercussions.

“OK Boomer” and the Generational Divide

photograph of unsmiling girl giving thumbs up

Millennials and members of Generation Z, fed up with condemnatory think-pieces (which deride everything about young people, from their taste for expensive brunch food to their role in the death of the napkin industry), have a new retort to combat dismissive baby boomers. “OK boomer,” a pithy and dismissive response to any patronizing or out-of-touch statement made by an older person, has become common parlance both online and off. The meme started on Twitter sometime in 2018, but it recently garnered attention from mainstream news sources when nineteen-year-old college student Peter Kuli released, on the social media app TikTok, a remix of Jonathan Williams’s song “OK Boomer,” which mainly consists of Williams repeating the song’s title interspersed with a few lines poking fun at baby boomers. The song includes lyrics like, “You’re all old and racist / All about that fakeness / I’m tryna pay my bills / But I’m all on the waitlist,” and “The way you wear that MAGA hat / Lookin’ like a fascist.”

Baby boomers are generally taking the meme as an ageist attack against their generation. The language they use to describe the meme is violent and martial; economist Tyler Cowen called it “the latest linguistic weapon of generational warfare,” and Meghan Gerhardt, the founder of a movement aimed at promoting harmony between generations in the workplace called Gentelligence, called it “a pre-emptive strike against baby boomers [launched] using the most powerful weapons in [Generation X’s] arsenal—social messaging platforms TikTok, Snapchat and Instagram.” Some have taken their resentment to almost cartoonish extremes. Bob Lonsberry, a conservative radio show host, called it “the n-word of ageism” on Twitter. He received a significant amount of backlash, and, of course, many people responded to the original (and now deleted) Tweet with “OK boomer.”

Many think pieces about “OK boomer” (because this meme, of course, has become yet another source for countless condescending think pieces about the follies of young people) have elevated what might have been laughed off as a harmless joke to a serious issue with moral weight. It’s worth considering whether or not young people are actually fostering generational divide by propagating this meme, and if so, what the moral ramifications of that could be. While the notion that strict demarcations divide us into “generations” has been called into question, the idea that a shared set of values, or the memory of a transformative cultural event, binds us to other people in our age group persists. Whether or not it actually exists in a quantifiable sense, many of us still perceive a difference between the young and the old.

The controversy around this meme is based in large part on a question of privilege. Baby boomers who dislike the meme argue that the young people who use it are truly the privileged ones, or have at the very least inherited privileges from their parents that they are incapable of acknowledging. In an article for The Guardian, Bhaskar Sunkara implies that young people ought to turn their attention towards the truly privileged, the “capitalists, [and] the politicians who serve them,” rather than their parents. This statement, however, assumes that there is no overlap between the two groups, that capitalists cannot be baby boomers, or that those born in the post-World War II era have not in large part created our current economic situation.

At the same time, many argue that this meme attacks those from the baby boomer generation who were marginalized or underprivileged. This becomes evident when the idea of discrimination in the workplace enters the picture. Gerhardt writes about the harm that ageist sentiments can inflict in the workplace, claiming that,

“Generational difference is one of the final frontiers where identity-based stereotypes, prejudice and putdowns are allowed to not only run rampant […] As a new generation comes of age, it’s an ideal time for all of us to become aware of the harm this does—and the potential to be found in generations respecting and learning from each other instead.”

She argues we should value generational difference and the new perspectives it gives us, both in and out of the workplace. This criticism, that we gain more from solidarity between generations than division, is certainly valuable.

Another criticism of this meme claims that it relies too heavily on a white middle-class perspective; the children of the poor and people of color, as some on Twitter have pointed out, can hardly subscribe to the idea that their parents have it easy or are in possession of socioeconomic advantages that their children lack. “OK boomer,” in other words, is a meme that primarily speaks to the anger of white teenagers who feel locked out of privileges and economic prosperity their parents enjoyed. However, as evident in the song that made this meme so popular, “OK boomer” is not a putdown of baby boomers in general. Rather, it attacks the most vocal and powerful group within that demographic: the wealthy, the white, and the conservative. It is within this context that the meme is most often used, and its older critics almost invariably come from this demographic.

Even more central to this story than privilege is the idea of voice: whose voices are valued in our society, who is allowed a platform, who is allowed to criticize whom. Both sides feel dismissed and undervalued, and both perceive the other as holding the power to speak and be heard. “OK boomer” is, in its most common and widely proliferated use, a way of dismissing a privileged voice from an assumed non-privileged position, but we should still be aware of how our assumptions about voice can shape the way we perceive generational difference.

Pope Francis, Edward Gallagher, and Just War Theory

photograph of armed soldiers in file

In his remarks during a trip to Japan, Pope Francis denounced not only the use, but also the mere possession, of nuclear weapons as morally unacceptable. While this has been Pope Francis’ position throughout his tenure as Pope, it marks a change in the Vatican’s official position toward nuclear weapons from the era of Pope John Paul II, at which time the church merely denounced the actual use of nuclear weapons. Neither of these positions is motivated by a general principle of pacifism on the part of the Catholic Church, which both currently and historically has supported the existence and use of military force. The contemporary Church recognizes war as legitimate only in the context of national self-defense.

Relatedly, significant controversy has attended President Donald Trump’s meddling in the case of Navy SEAL Edward Gallagher, who was tried and acquitted of war crimes. The idea of a war crime can itself seem perplexing, as to many it is intuitive that the point of war is simply to win quickly and by whatever means necessary. How do nations like the United States, which has actively pursued military means of executing its international agenda, square their activities with the idea of a war crime? Are institutions like the United States and the Catholic Church contradicting themselves, or is there an actual principle at work?

A good way to understand this is to look into the specific provisions of so-called Just War Theory, the roots of which lie in the work of the famed (and Catholic) philosopher Thomas Aquinas in his Summa Theologiae. Far from pacifism, Just War Theory holds that there is a way to enter into, conduct, and conclude wars that is not merely morally excusable but wholly justified. Nor is this sort of thinking limited to the Catholic tradition. In the Muslim tradition, the concept of jihad prescribes with whom it is morally acceptable to go to war and how it is permissible to prosecute such a war. Similar sentiments can be found in the writings of the Confucian and Mohist schools of philosophy in Ancient China, as well as in Ancient Roman concepts of the laws that govern conduct among nations.

For the sake of simplicity and brevity, let’s stick with Just War Theory. A ban on the use of nuclear weapons would come under the heading of jus in bello, the part of Just War Theory that deals with what counts as prosecuting war in a morally justified fashion. Accounts of the aftermath of the use of nuclear weapons by the United States against Japan in 1945 are harrowing. Those who survived the initial explosions suffered extensive and horrible burns as well as a lifetime of health problems due to exposure to intense levels of radiation. Such effects far exceed the licit goals of military action as allowed by Just War Theory, namely to incapacitate a wrongly aggressing force without excessive damage to civilians and non-military infrastructure. Further, nuclear weapons in general create the possibility of nuclear fallout, the transmission of radioactive material through the atmosphere by weather patterns. Importantly, the spread of nuclear fallout is not in the direct control of those who deploy nuclear weapons in the first instance. Hence the area and number of people affected is indiscriminate, with no clear way of controlling collateral damage.

Both of these features of nuclear weaponry make it a means of conducting war that is arguably malum in se, in the terms of Just War Theory. This means that it is a method that is inherently bad, regardless of who uses it and how. Methods treated as mala in se without controversy include slavery, pillage, rape, and group punishment, as well as chemical and biological weapons (e.g., mustard gas and weaponized infectious agents). Nuclear weapons are not banned, but the similarity of their effects to those of chemical and biological agents has led many to advocate for disarmament and an international ban on the possession, use, and development of nuclear weapons.

Not only are certain methods of killing and incapacitating enemies and civilians forbidden in Just War Theory, so is certain treatment of prisoners of war. The war crimes accusations against Edward Gallagher concerned the murder of an Islamic State prisoner of war. In general, prisoners of war (and otherwise incapacitated combatants) may not be killed, tortured, or humiliated. Unlike criminal prisoners, prisoners of war are not being held as punishment for their actions. Even where captured military personnel are responsible for actions considered international crimes, the ground personnel of the opposing military are not considered legitimately empowered to execute punishment. Here another aspect of Just War Theory enters the picture, jus post bellum, which concerns appropriate behavior upon the conclusion of war. Any prosecution for war crimes must be conducted with respect for due process, including full court proceedings, in a court with the appropriate jurisdiction.

Just War Theory attempts to carve out a middle path between two monolithic alternatives. On the one hand there is pacifism, which holds that all violent, military action is morally unacceptable. On the other hand there is so-called realism about war, which holds that war is not immoral but beyond morality. However, every nation belonging to any international political or governing body (at least in theory) subjects itself to rules of warfare meant to limit what are seen as moral excesses in the conduct of an otherwise (possibly) justifiable enterprise. The concept of a war crime in general, and the Catholic Church’s evolving position on warfare in particular, both manifest attempts to stay between the twin implausibilities of pacifism and realism concerning war.

Impeachment as a Means to an End

photograph of Capitol building with U.S. flag flying below the Statue of Freedom

As the House rolled out its impeachment inquiry despite polling evidence that public sentiment was not on its side, a slew of editorials suggested that, as with Watergate, the impeachment proceedings themselves were likely to tip public opinion. So far, the polls have not borne this out. Support for impeachment seems to be eroding as support for Trump inches upward. If the Senate is unlikely to vote for conviction, the best Democrats can do is weaken Trump’s 2020 campaign. If impeachment only seems to be strengthening his support, was it a poorly calculated mistake? Only a shallow understanding of politics, however, should lead us to think that impeachment has been a political failure.

We often think of politics as the art of the possible, assuming that any political ploy that does not aim at straightforwardly achievable policy goals is misguided. But as Simone de Beauvoir already pointed out in 1945, in her “Moral Idealism and Political Realism”, this is to misunderstand what the possible is. Beauvoir struggled with the question of how means and ends relate to each other in politics. The French who collaborated with the Nazi occupation often claimed in their defense that resistance could not succeed and they did what was necessary to save France. They adjusted their means to their ends, believing that collaboration was a bow to inescapable reality.

Beauvoir takes the collaborators to task. A brazen political realism of this sort assumes that the ends and the means are separate. If the end is important enough—the defeat of Nazism, for example—it seems as if any means are good enough. Similarly, if the goal is to remove Trump from office, Democrats should pursue only the strategy best calculated to achieve it; this seemingly commonsense view also arises in voters’ oft-discussed concern with electability as a driving consideration in the primaries. Beauvoir’s response is that the means are not simply technical instruments designed to achieve a distinct outcome; they are part of the outcome.

Removing Trump from office is not in itself the goal. What has occasioned impeachment is this administration’s attempt to reduce American foreign policy to a Soviet-style crony government, where political transactions are carried on through personal influence and shadow policy entirely outside normal channels. Compared to Trump’s other impeachable offenses, like violations of the emoluments clause and obstruction of justice, this one is especially grievous because it redefines our place in the international community. That place has already been severely damaged by our withdrawal from the Paris Accords, violation of the Iran nuclear deal, support for Putin and other autocrats, abandonment of our Kurdish allies, and a host of other diplomatic malversations. But on top of that, and ultimately more politically troubling, it is now clear that U.S. foreign policy is dictated by the political and financial needs of the President and his inner circle. Corruption on this scale is extraordinarily difficult to flush out of domestic affairs once it has set in, but the difficulty is dramatically increased in international affairs, where not only our diplomatic corps but also those of foreign governments become thoroughly compromised.

The struggle for the soul of American politics is not merely a struggle for Trump’s removal. It’s a struggle to restore the idea—however flawed it may be in practice—of America as a moral leader, with the soft power capable of defending human rights, democratic institutions, and the rule of law around the world. In the current political climate, that idea has not merely evaporated; it is actively being replaced by the specter of the U.S. as a world power using its awesome capacity for incentive and disincentive to serve political cronies. This damage cannot be undone simply by changing leadership. It can be undone only through a political transformation.

That transformation isn’t simply a matter of new government. Imagine if a different president were elected in 2020. That might signal that the U.S. is ready for a different diplomatic model, but it would not restore its position of leadership. If that position can be restored, how we get there is crucial. Protest, both by citizens and by members of Congress, is important, but it doesn’t by itself effect political change. Government officials must not only pay lip service to fighting corruption, but must also act against it. To have a chance of winning back the mantle of moral leadership, the U.S. must show that corruption will not be tolerated. The impeachment hearings must disclose its scope, and future trials must impose consequences.

Beauvoir argues that the means are an ineliminable part of any human end: it matters not only that I receive a trophy, but that I earn it; not only that we have universal health care, but that we as a nation pass it; world peace reached through mass genocide would be a peace stained forever. In the case of impeachment, the hearings are not only a way of getting to a particular political state of affairs. They are crucial to what it could mean to reinstate a polity that both American citizens and foreign governments could rely on, because that polity would be one that is not only ruled by law, but also established and maintained by law.

This isn’t to say that we should turn to an idealism, indifferent to what it is politically possible to accomplish. Beauvoir has some strong words for moral purists whose aims are so lofty that pursuing them with clean hands is impossible, and who thus give up entirely on acting to change the world. What’s possible, however, depends on what we take to be possible. If we were to decide that the only realistic option for returning the U.S. to a rule of law were through elections, keeping quiet about Trump’s wide-scale corruption in the meantime, then what’s possible would be not the reinstatement of the rule of law, but only a change in priorities. If the end is to replace not only the corrupt regime, but also the degraded image of the U.S. on the world stage, only public investigations and trials, shored up by political will, can make that possible.

A U.S. without the Trump presidency would likely be less corrupt than a U.S. with one. By that logic, the removal of Trump, even without impeachment, is better than nothing. But a U.S. that has confronted corruption at the top, exposed it, and put an end to it is a far stronger and more reliable country. It would be a country better equipped to fight such corruption in the future (an important consideration, since allowing corruption to go unchallenged also creates precedent), as well as one more strongly set against corruption in its political orientation and institutions. These two ends, in other words—Trump’s removal without impeachment and Trump’s removal via impeachment—are different from each other, because the means are part of the end result. Even if impeachment fails to lead to removal and Trump is instead removed via the 2020 elections, the country would be different than if it had never attempted impeachment at all, because it would have built the moral and political courage necessary to undertake it. Ultimately, even if Trump wins a second term, it will be in a country that has resisted rather than simply capitulated, and thus a better country.