
The Morehouse Gift and Reliance on Billionaires

photograph of bell at Morehouse College

Headlines were made earlier this month when billionaire Robert F. Smith, during a commencement speech at Morehouse College, promised that he would pay off the student loans of everyone in the graduating class. Although it’s not yet clear just how much the student debt of the 396 people in the graduating class amounts to, the sum is thought to be close to $40 million.

Smith is not the only billionaire in the news for attempting to help out with the growing student debt crisis in America: for instance, Smith’s gift came on the heels of another recent large donation from billionaire Ken Langone, who pledged $100 million to pay for the tuition of medical students at NYU. Instead of cancelling student debt directly, other members of the ultra-rich have also recently started or invested in initiatives attempting to approach the student debt problem from other angles, with billionaires like Tony James and Geoff Lewis starting institutes with the goal of finding different and hopefully better ways for students to fund their college education. For example, the Silicon Valley start-up Lambda School implements an “income share agreement” (or ISA): the school will pay for tuition, which the student will pay back via a percentage of their post-graduation salary, provided that it is high enough. The ISA is money-making for the school insofar as the amount paid back by the student (should they be in a good enough financial position to pay it back) is greater than the initial loan – for example, a student may be required to pay $30,000 from a percentage of their salary on a $20,000 loan.
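
To make the arithmetic of an ISA concrete, here is a minimal sketch of how such a repayment might be computed; the income threshold, income share, and repayment cap below are hypothetical illustrations rather than any school’s actual terms.

    # Toy model of an income share agreement (ISA) repayment.
    # The threshold, share, and cap here are hypothetical, not any school's real terms.
    def yearly_payment(salary, threshold=50_000, share=0.17):
        """One year's payment: a share of salary, owed only if salary clears the threshold."""
        return salary * share if salary >= threshold else 0.0

    def total_repaid(salaries, cap=30_000):
        """Sum yearly payments over a career, never exceeding the contractual cap."""
        paid = 0.0
        for salary in salaries:
            paid = min(cap, paid + yearly_payment(salary))
        return paid

    # A graduate earning $60,000 for five years hits the $30,000 cap
    # on what was originally a $20,000 tuition bill; a graduate earning
    # below the threshold pays nothing.
    print(total_repaid([60_000] * 5))   # 30000.0
    print(total_repaid([40_000] * 5))   # 0.0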

While Smith’s gift to the Morehouse graduating class was generous and by all accounts well-intentioned, and while using one’s wealth to address the student debt crisis generally seems like a good thing to do, we might still be concerned about relying on billionaires to address the student debt crisis. Take the case of Smith’s donation first: while it seems unambiguously good to give tens of millions of dollars of one’s own money to help out an entire graduating class, his gift did not come without complications. For instance, former Morehouse student Jordan Long went to Twitter to report his dismay at learning that, had he not dropped out of Morehouse due to mounting student debt, he, too, would have been a beneficiary of Smith’s generosity. While happy for his classmates, Long was concerned that future students may be led to believe in the “fairy tale” that the 2019 graduating class was experiencing, namely that a “random prince” would swoop in and save them from their debt. Instead of relying on the unpredictable philanthropy of billionaires, Long suggests that it would be better overall if a more sensible taxation scheme were implemented for the ultra-rich, something that could make a much more sizable impact on student debt than one-off acts. Others have expressed similar concerns with Smith’s gift. For example, writer Anand Giridharadas echoes Long’s concerns, noting that due to various tax loopholes, Smith pays taxes at a rate similar to the one that will likely apply to the graduates he helped support, and questions whether Smith’s money could not have been more effectively spent in other ways.

None of this is to say that Smith’s donation was a bad thing, or that he shouldn’t have done it, or that Morehouse students shouldn’t have accepted it. But Smith’s donation has only served to draw more attention to an enormous problem, one that a single act of charity cannot solve. Concerns expressed by Long and Giridharadas are especially pressing given that the conditions that allowed billionaires like Smith to be billionaires are those that contributed towards the production of the student debt crisis in the first place. So while a one-time donation of tens of millions of dollars is impressive and flashy, we might ask whether there is not a better way to make sure that billionaires are doing their part in helping others.

What about a more systematic approach to student debt, like the ISAs employed by the Lambda School? As mentioned above, some members of the ultra-rich have begun investing in these kinds of projects; is this a better use of billionaire money? At first blush an ISA might seem like a win-win situation: the only students who end up paying tuition are those who can ultimately afford it, allowing students who may not otherwise have had the opportunity to attend college, while the school itself can recoup the cost of tuition from its wealthiest graduates. There are, however, reasons to be concerned with this kind of program. Writing at the New York Times, Andrew Ross Sorkin expresses some concerns with the thought of implementing an ISA program on a large scale:

By seeking safe investments, programs like this could cast aside the strides made to expand educational opportunities to higher-risk students and reduce the appeal of educations that focus on noble, but lower compensated, professions.

The worry, then, is that ISAs will encourage schools to accept students who are deemed to be at the lowest risk of dropping out, as well as those having the highest chance of going into a lucrative career, in order to ensure that investors will make a profit. Sorkin was particularly concerned, for example, that widespread implementation of ISAs would result in schools heavily discouraging students from pursuing degrees in the liberal arts.

It is perhaps not surprising that there are concerns that billionaire-backed ISAs pitched as altruistic attempts to address the student debt crisis are little more than new investment opportunities in disguise. This is not to suggest that something like ISAs could not, in principle, be put to good use. Nor is it to suggest that the actions of billionaires should be uniformly viewed with suspicion (although some healthy suspicion is no doubt often warranted). However, what the recent philanthropy of billionaires like Smith and others should draw our attention to is whether their actions, no matter how charitable in the short-term, are perhaps more of a distraction from more plausible long-term solutions than a solution in themselves.

Game of Thrones: Dragons, Despots, and Just War

photograph of used Game of Thrones book

** SPOILER WARNING: This article contains spoilers for Game of Thrones up to and including the show’s Season 8 ending.

Game of Thrones, the popular television show based on the book series by George R.R. Martin, aired its final episode last week. Set in a medieval fantasy world, the show owes much of its appeal to its exploration of real-world themes of politics, power, and war.

For a story infamous for subverting narrative expectations with radical plot twists, the actions and ultimate fate of Daenerys Targaryen in the last two episodes shocked even the most intrepid fans. Hitherto one of the story’s heroines and the presumptive savior of Westeros, Daenerys chose to burn hundreds of thousands of innocent civilians alive in her quest to ‘build a better world’, while fans watched in horror. What happened? Was this a calculated tactic, and if so, what possible reason or justification could there have been for her to choose the route of extreme violence?

Daenerys Targaryen was one of many contenders for the Iron Throne, the seat of power on the continent of Westeros, and the only one with dragons. In Martin’s fictional world, dragons (which can grow to immense size, and breathe columns of fire so hot it can melt stone and steel) had traditionally been used by the ruling Targaryen family as weapons of war and conquest, but had been extinct for a century prior to the events of the story.

As different characters and factions vie for the Iron Throne and the rulership of Westeros, Daenerys Targaryen, living in exile after her father the “Mad King” was deposed by the current ruler(s) during a rebellion, magically hatches three petrified dragon eggs. Already convinced that the throne belonged to her by right of succession, she found herself, as her dragons grew to maturity, in possession of a formidable military weapon with immense firepower. In a medieval world where weapons of war are swords and arrows, the dragons represent the destructive might of nuclear weapons against the mere capacities of conventional ones.

Despite occasionally displaying the fiery Targaryen temper, Daenerys was, to begin with, relatively restrained in the use of this significant advantage in pursuing her military goals, and achieved her ascendancy to ruler of Slaver’s Bay (a region on the continent in which she was in exile) with only sparing use of dragon fire. For most of the story Daenerys seems to be (so to speak) on the right side of history: in Astapor, Yunkai, and Meereen she liberated the large population of slaves and presided over the abolition of the institution and practice of slavery. To this end she was a revolutionary, and styled herself as a liberator and a ruler for the downtrodden on a mission to create a better world.

When she finally turns her gaze to Westeros to retake the Iron Throne, her advisors, including Tyrion Lannister, implore her to hold back her firepower and not to use her dragons to attack the city of King’s Landing (the seat of her rival, the tyrannical and ruthless queen Cersei Lannister, and also home to a large civilian population), but to pursue other military options less likely to result in large numbers of civilian casualties. Many of her allies (Yara Greyjoy, Ellaria Sand and Olenna Tyrell), however, encourage her to hit King’s Landing with all of her might.

This is a real ethical dilemma in military tactics. Where one party has a weapon of immense superiority, such as a nuclear weapon, there is a case to be weighed: using it to obtain swift victory, avoiding a potentially protracted conflict which may eventually lead to a great deal more death and suffering, against holding back to avoid the possibility of an egregious, even gratuitous, victory born of a one-sided conflict. The debate accordingly continues on whether the use of nuclear weapons by the United States on the Japanese cities of Hiroshima and Nagasaki to end WWII was morally justified. One informal, voluntary poll shows that over 50% of respondents believe that it was indeed justified.

Viewers who had followed Daenerys’ ascent from frightened girl to Khaleesi, Mother of Dragons, and Breaker of Chains generally trusted in her, as a flawed but fundamentally good character, to do what was right. But as in the real world, in the world of Game of Thrones it is not always clear what the right thing is. Even so, one of the things that Game of Thrones seems to point up is the need for benevolent rulers. When Daenerys declares, “I am here to free the world from tyrants… not to be queen of the ashes”, we seem to instinctively understand that benevolence and indiscriminate violence cannot easily coexist.

Yet following the failure of other tactics, she finally unleashes the immense firepower of a dragon against the Lannister army, literally incinerating it, and afterwards presents the captured soldiers and nobles with a ‘choice’ – to “bend the knee” (accept her as their ruler) “or die.” She executes with dragon-fire those who do not acquiesce. If Daenerys is going to win, if she is going to take the throne and build her better world, she needs some victories – but is this use of firepower, and the subsequent refusal of mercy, the right thing to do? At this point her advisors, and possibly her supporters, are uneasy.

In our world, the rules of just war have been formulated, and latterly enshrined in international law, in order to regulate and limit when and how war is waged. Just war theory includes the principles of jus ad bellum and jus in bello.

Jus ad bellum is a set of criteria to be consulted prior to resorting to warfare to determine whether war is permissible, that is, whether it is a just war. This principle has a long history in the western philosophical tradition. In his Summa Theologica (c. 1270), Thomas Aquinas writes: “[we deem as] peaceful those wars that are waged not for motives of aggrandizement, or cruelty, but with the object of securing peace, of punishing evil-doers, and of uplifting the good.” One principle of just war particularly relevant here is the principle of proportionality, which stipulates that the violence used in the war must be proportional to the military objectives.

Daenerys does, in accordance with Aquinas’ stipulation, have as her objective the ‘securing of peace’ and ‘uplifting of the good’. Her attack on the Lannister armies, following the exhaustion of other tactics, probably passes the test of proportionality – though executing prisoners of war would contravene the Geneva Convention.

However, in the penultimate episode of the series, the worst fears of those who had counselled her against unleashing the full force of her destructive capacities upon a whole city – civilian and military alike – come to pass. Worse still, this action is not taken as a last resort. Using her armies and dragons she has already overcome the enemy’s military. The city has surrendered and the bells of surrender are ringing out as she begins methodically to raze the city to the ground. In this moment she cedes the moral high ground and loses all the moral authority she had earned as a warrior for justice and liberator of the people. But why does she do this?

Daenerys believes that the end will justify the means. She believes that the good of the new world she wants to build will outweigh the suffering caused by the destruction of the old one. She relies on a consequentialist justification here, but if an action is to be justified morally by its consequences, then one must know what those consequences will be, and one must know that the resultant good will be certain to morally outweigh the suffering.

Theoretically, if the death of, say, hundreds of thousands prevented the deaths of millions, then it could be justified in consequentialist terms. Many philosophers find in such reasoning grounds to reject consequentialist ethics. The reason no one accepts this rationale from Daenerys is that the magnitude of devastation renders it nearly impossible to see it as anything other than utterly, horrifically disproportionate; and the fact that the city had surrendered renders such a justification moot, since the immense suffering cannot be shown to have been necessary for the better world she claims to be trying to build. As such, it is hard to avoid the conclusion that it was gratuitous.

If Daenerys’ actions cannot be justified as a sacrifice in pursuit of “mercy towards future generations who will never again be held hostage to a tyrant”, the other explanation is more sinister – and also more realpolitik. In his book on ruling and the exercise of power, Niccolò Machiavelli wrote: “it is much more safe to be feared than to be loved when you have to choose between the two.”

Coming to the bitter realization that she was not going to win the Iron Throne nor hold power in Westeros by virtue of the love of those whom she needed on her side, Daenerys knows that if she is to rule, fear is her only pathway to power. She says as much to Jon Snow before the attack on King’s Landing: “Alright then, let it be fear.” As such, she had made her choice beforehand and knew that, were the city to surrender, she would not pull back, as Tyrion urged her to do, but unleash the full force of her fiery might.

Perhaps Daenerys thought that this was the right thing to do; perhaps, cornered, she thought it was the only thing left for her to do. As Cersei Lannister so prophetically said to Ned Stark in Season One, “When you play the game of thrones you win or you die, there is no middle ground.” Is this Machiavellian move compatible with the goal of building a better world? Daenerys had wanted to free the world of tyrants, but what is a tyrant but someone who must rule by fear? Sadly, the Khaleesi, Mother of Dragons, and Breaker of Chains in the end became what she despised.

On Censorship, Same-Sex Marriage, and a Cartoon Rat

photograph of tv screen displaying an Arthur episode

On May 13th, the children’s television program Arthur, based on the popular storybooks featuring anthropomorphic animals, premiered its 22nd season with an episode titled “Mr. Ratburn and the Special Someone.” Alabama Public Television refused to broadcast it, citing concerns about its inclusion of a same-sex wedding. In the story, Arthur and his friends learn that their teacher, Nigel Ratburn (a staple character for decades), is engaged to be married, but they worry about what that will mean for their future when they see him with a grumpy, unfamiliar woman. Eventually, the children realize that the woman is Mr. Ratburn’s sister who is in town to officiate the ceremony – Nigel’s actual partner is revealed to be Patrick, a kind chocolatier introduced in the episode. In the show’s closing moments, the children celebrate their teacher’s happiness as one comments “It’s a brand new world,” before they all chuckle at Mr. Ratburn’s embarrassing attempts to dance.

The notion that homosexuality is scandalous and demands censoring for younger audiences is not new, but the majority of Americans actually claim to support same-sex marriage, particularly since its federal legalization in 2015. A Gallup poll from May of last year indicates that as many as two-thirds of U.S. adults say that gay marriages should be legally valid and, although the numbers in the South are lower, a majority (55%) still support marriage equality. Nevertheless, APT explained its decision to not air Mr. Ratburn’s wedding as a matter of free choice: “Our broadcast would take away the choice of parents who feel it is inappropriate,” explained programming director Mike McKenzie.

Initially, Alabama was not alone in its decision: the Arkansas Educational Television Network similarly cited content concerns about the story, saying, “In realizing that many parents may not have been aware of the topics of the episode beforehand, we made the decision not to air it.” This sentiment reflects the position of Christian advocacy group One Million Moms, a division of the American Family Association which stated on its website, “Just because an issue may be legal or because some are choosing a lifestyle doesn’t make it morally correct. PBS Kids should stick to entertaining and providing family-friendly programming, instead of pushing an agenda.” AETN later reversed its decision and plans to air the episode at the end of May.

Censorship is (perhaps a bit ironically) a topic long-discussed in political philosophy: ever since Socrates was put to death for his “impious” words, the Western philosophical tradition has harbored a certain wariness about the notion of silencing discussion or debate prematurely. Towards the end of the second chapter of On Liberty, John Stuart Mill lists four reasons why he considers the unrestrained dissemination of ideas within society important (all other things being equal):

  1. Silenced opinions might turn out to be true,
  2. Only the “collision of adverse opinions” can root out the actual truth,
  3. People will only come to authentically believe a truth if that truth is honestly discussed, and
  4. Silencing dissent turns true opinions into irrational dogmas.

The only sort of censorship that Mill even hints at supporting is that of “vituperative language” against minority positions (in the interest of encouraging further discussion) – and, even then, he clarifies firstly that it should be a censorship of presentation, not content, and secondly that it would be accomplished via social stigma, not formal law. From Mill’s position, the decision to prevent the dissemination of a playful children’s program (which, it might be noted, never actually explicitly discusses Mr. Ratburn’s sexuality) seems difficult to defend.

Of course, one might then ask: does this mean that nothing can ever be censored? Should PBS follow the latest Arthur episode with a news report filled with graphic footage of the day’s disasters? This seems wrong for an entirely different set of reasons. Tactfulness, the skill of discerning the appropriate context within which a topic can properly be discussed, does not come easily to everyone; we’ve all had experiences where someone has “overshared” personal information in professional meetings or referenced something disgusting over what had previously been a polite meal. The public television stations in Alabama and Arkansas held that a cartoon is an inappropriate setting for a gay marriage celebration, even one as subdued as Mr. Ratburn’s (the scene itself never actually shows the technical ceremony and neither Patrick nor Mr. Ratburn speak).

Two questions, then, remain – one universal to the matter of censorship and the other more specific to this case:

  1. Will people be harmed by the presentation in question?
  2. Should the silent indication that a character is heterosexual be treated differently than the silent indication that one is not?

Regarding (1), consider how security issues might well require that some information be kept quiet (about, for example, the travel plans of a state official) or how, for the sake of the mental stability (or even physical safety) of given members of an audience, some topics should be avoided or ignored – the so-called matter of “deplatforming,” as when white nationalist Richard Spencer decided to cancel his speaking tour last year after persistent protests. If the concerns of (1) are severe enough, the pro-censorship side can defend its position as a matter of safety, not ideological purity, thereby avoiding Mill’s concerns.

To answer the second question affirmatively is simply to express a bias against any relationship that is not heterosexual – something which might actually indirectly violate (1) and which is itself up for legal review by the SCOTUS with a ruling expected next year.

But it seems like the position of APT and AETN was that airing “Mr. Ratburn and the Special Someone” could potentially provoke harmful situations in homes that do not affirm same-sex marriages and would, as a consequence of the program, need to explain such things to young family members – that is, the stations based their decision on (1), not (2). Certainly, by calling it “agenda pushing” and labeling it as something other than “family-friendly programming,” One Million Moms explicitly thought the episode to be harmful in some way. If one works with a sufficiently broad definition of “harm,” then the uncomfortable tenor of some household conversations might well qualify – but, that’s ultimately no different than saying “not everyone is going to like this thing that I’m talking about” and unanimous approval is an impossibly high bar for permissibility. At this point, we seem perilously close to former Supreme Court Justice Potter Stewart’s famously ad hoc approach to ‘obscene speech’ – an inability to define it beyond his insistence that “I know it when I see it.”

PBS Kids spokesperson Maria Vera Whelan explained, “We believe it is important to represent the wide array of adults in the lives of children who look to PBS Kids every day” and Marc Brown, the creator of Arthur, told People magazine that “We all know people who are gay, who are trans, and it’s something that is socially acceptable. Why is there this discomfort that it takes a leap into our national media?” Particularly given the general support for same-sex marriage across the country, this discomfort is curious indeed. Same-sex relationships have been around far longer than the Obergefell v Hodges precedent and pretending otherwise only serves to perpetuate unnecessary ignorance.

After all, as one conservative pundit frequently likes to remind his fans, “Facts don’t care about your feelings.”

Elevating the Elite in Music: El Sistema and Cultural Hegemony

photograph of a girl playing the trumpet in a group of young students

Since 1975, the Venezuelan program known as “El Sistema” has brought European classical music to the disadvantaged youth of the Latin American nation. Similar programs have been initiated in at least sixty countries around the world, sparking a global movement which aims to use classical music as a force for social change. El Sistema and its founder, José Antonio Abreu, have been showered with praise and awards by the international community; supporters say it produces not just excellent musicians but also model citizens, preventing widespread violence and criminality in society. Implicit in this philosophy, however, is the proposition that European classical music is both aesthetically and morally superior to other forms of music. While promoting or preferring classical music is not necessarily harmful, the tendency to elevate it over other musics, including local musical traditions, has the potential to enforce a harmful European cultural hegemony and suppress otherwise vibrant artistic practices.

The argument for El Sistema as an agent of social change references both the music and the performance practice of the European classical tradition. The discipline required to learn an instrument, conduct rehearsals, and perform the music instills in children the values of hard work and commitment, keeping them from the path of immorality. Child’s Play India Foundation, a program inspired by El Sistema, uses strong language to describe the division between those “who may have otherwise gone down the drugs and drink route” and those who “lead a life of dignity, joy and empowerment.” This sort of language is reminiscent of historical criticism of various popular music genres, including jazz, rock, and rap. Valorizing the discipline and training of classical musicians implies that successful musicians in popular, commercial, and folk traditions lack the same work ethic, talent, or requisite moral fiber. Taken to its most egregious extreme, this attitude can be used to support racist portrayals of native people in historically colonized countries like Venezuela and India as lazy, hedonistic, and unrefined.

Although race-oriented denigration of popular music is obviously unacceptable, there may be some merit to the notion that European classical music, being an “art music,” requires more training and technical skill than other forms. (Some styles of popular music even emphasize accessibility while criticizing classical music as elitist.) On the other side of the coin, the time and resources required to play classical music are often cited as a barrier to entry for disadvantaged people — in fact, righting this imbalance is one of the primary purposes of programs like El Sistema. That said, the Child’s Play India Foundation’s focus on European classical music is difficult to defend, considering India is home to two major classical traditions of its own, each of which bears comparable complexity and historicity to European classical music. While I cannot presume to have a complete grasp of the moral and social context of Indian classical music, one worries that Eurocentricity has played a role in the promotion of European classical music over local traditions in India.

This is not to argue that the people of Venezuela and India should not play European classical music or should be restricted to their own local traditions. Cross-cultural exchange and participation ensure vitality in the arts, and to suggest that only musicians native to a given region can play music from that region would be to impose an overly simplistic view of race, geography, and history. Music, art, and culture are ever-changing and ever-blending entities. On the other hand, when European music is elevated socially above local music traditions, even by local musicians themselves, Eurocentric elitism threatens to cause great and lasting damage to the cultural identity of a people and rob the world of artistic contribution and much needed diversity.

Camille Paglia and Campus Free Speech

Photograph of empty seats in university lecture hall

Camille Paglia has long been a magnet for controversy. The writer and academic first made waves in 1990 with her controversial work Sexual Personae: Art and Decadence from Nefertiti to Emily Dickinson, an exploration of the Western canon of the visual arts. One reviewer, summarizing the backlash against the piece, described it as “profoundly anti-feminist”. Paglia, who has been teaching at the University of the Arts in Philadelphia since 1984 and currently holds a tenured position, is once again making headlines for her divisive views. Early this month, a group of UArts students, described by The Atlantic as “a faction of art-school censors,” started an online petition (which has nearly reached its goal of 1,500 signatures) demanding that she either be removed from her teaching position or that alternative classes be offered to students who wish to avoid contact with her. In the description of the petition, the students explain that,

“[Paglia] believes that most transgender people are merely participating in a fashion trend (‘I question whether the transgender choice is genuine in every single case’), and that universities should not consider any sexual assault cases reported more than six months after the incident, because she thinks those cases just consist of women who regret having sex and falsely see themselves as victims.”

Paglia’s views on sexual assault are well-documented. In a lecture she gave on what she called the “victim mentality” of young college-aged women in the #MeToo era, Paglia said that “girls have been coached now to imagine that the world is a dangerous place, but not one that they can control on their own […] They’re college students and they expect that a mistake that they might make at a fraternity party and that they may regret six months later or a year later, that somehow this isn’t ridiculous?” She complains, “To me, it is ridiculous that any university ever tolerated a complaint of a girl coming in six months or a year after an event. If a real rape was committed go frigging report it.”

The transgender students and survivors of sexual assault who started the petition have expressed deep discomfort with her continued prominence at the university, to which Conor Friedersdorf of The Atlantic responds, “Even if students who feel that way should be able to avoid Paglia’s classes, they should not try to impose their preferences on their peers.” Friedersdorf further argues that her views cannot possibly be harmful to actual transgender people, because 1) Paglia identifies as transgender herself and 2) her writing is theoretical, and therefore, he claims, doesn’t have the power to influence the daily lives of trans people or shape government policy.

In many ways, this story feels like another chapter in the ongoing debate over free speech in universities. This situation is unusual in that Paglia is a tenured professor who is being condemned by students from within her own institution, but in most ways the controversy over her views speaks to a number of concerns in the free speech debate. Perhaps most saliently, it asks us to examine whether or not students (even a minority of students) should have the power to determine who should or should not be a member of their faculty. We also have to consider the effect hateful rhetoric spouted in an academic setting can have outside of the classroom, and whether or not sexist thought can exist in a harmless vacuum like Friedersdorf suggests.

A number of states, including Oklahoma and Maine, are currently deliberating over bills protecting the right to free speech on university campuses. These bills are mainly in reaction to “free-speech zones,” special areas on college campuses committed to unrestricted expression. These zones are supposed to protect a wide range of activity, from student-led demonstrations to lectures from visiting speakers. The Oklahoma and Maine legislation challenges the constitutionality of these zones, as marking off an area as safe for free speech implies that the rest of the campus is not. Another free-speech bill recently proposed in Missouri, which also seeks to expand the free speech of students, advises that “faculty should be careful not to introduce matters that have no relationship to the subject taught, especially matters in which they have no special competence or training.” This approach to the free-speech debate is one possible response to Paglia’s case, considering that she does not teach sociology or gender studies and therefore has no training in those areas. But some are concerned that Missouri’s solution, which is constructed to both expand the rights of students and protect faculty from provoking controversy, may be antithetical to the purpose of education.  

UArts, it seems, has taken a different approach to addressing student issues. In the wake of protests against Paglia, the president of the university sent a letter to the student body defending Paglia’s position and right to intellectual freedom, unambiguously shooting down any hope of her losing tenure.

In a lecture on free speech, Paglia said that “we have got to stop this idea that we must make life ‘easy’ for people in school […] Maybe the world is harsh and cruel, and maybe the world of intellect is challenging and confrontational and uncomfortable. Maybe we have to deal with people who hate us, directly, face-to-face.” She argues that “it does not help you to develop your identity by putting a cushion between yourself and the hateful reality that’s out there.” But in situations like this, it always ends up being students, many of whom come from vulnerable or otherwise marginalized positions, who are forced to accommodate the views of “controversial” professors and become exposed to ideas that denigrate their very right to personhood, not the other way around.

Should We Mute Michael Jackson?

photograph of Michael Jackson wax figure

The documentary Leaving Neverland features first-person accounts from Wade Robson and James Safechuck of child sexual abuse allegedly committed by “The King of Pop,” Michael Jackson. This is not the first time Jackson has been the subject of such allegations. But the renewed attention to these allegations, combined with the fact that movements such as #MeToo, #TimesUp, and #MuteRKelly have heightened awareness of sexual harassment and sexual assault in the entertainment industry, has raised new questions about how we should respond to Jackson’s music. Should radio stations still play it? Should streaming services take it down? Should we listen to it in private? In short, should we mute Michael Jackson because of his alleged immorality?

Many seem to be in favor of muting. A number of radio stations in Australia, Canada, New Zealand, and the Netherlands have decided to stop playing his music (at least temporarily). Along with this limited radio suspension, many have declared on social media that they will no longer play his music, an episode of The Simpsons featuring Michael Jackson has been withdrawn, and Drake has removed Don’t Matter to Me, featuring Jackson’s vocals, from his tour setlist.

Others, however, have been less keen to “mute” Michael Jackson. The director of Leaving Neverland, Dan Reed, said that, “It seems to have had an effect on people who have watched the film, the reaction I’ve heard most often is that people don’t want to hear his music. But it’s a personal thing. I wouldn’t get behind a campaign to ban his music, I don’t think that makes any sense.” Greek radio station 95.2 Athens played a Michael Jackson song every hour over a weekend to protest against the documentary. Jackson’s estate has also said that it will sue for damages. And dedicated Michael Jackson fans have even taken to the streets to protest the documentary. Importantly, his fans mostly seem to be asserting his innocence. In their view, because he is not guilty of what he has been accused of, there are no grounds to mute his music. But there are good reasons not to play Jackson’s music in the current context that don’t depend on him being guilty.

To see why, consider the case of former Lostprophets vocalist Ian Watkins. He is currently serving a 29-year prison sentence for a string of sex crimes, including the attempted rape of a baby. Unlike Jackson, Watkins has been convicted of the allegations made against him. In response, British retailer HMV decided to stop selling the band’s music. His former bandmates stopped performing under the name Lostprophets and vowed that their new band would never perform any songs that had been written by Watkins. In such a clear case, there are good reasons not to play his music: for instance, it might cause revulsion as it reminds us of his crimes, it might be disrespectful to his victims and the victims of similar crimes, and it might appear that we are willing to overlook his horrific actions so that we can continue to enjoy his music.

These reasons may appear to depend on Watkins’s guilt. However, overlooking someone’s horrific actions is just one way that we might support someone. It is also possible to support the “merely” accused against allegations. For instance, playing Michael Jackson’s music at this time may send the message that we support him against these allegations. Sometimes this is exactly the message people intend to send. This indeed is the reason New Zealand café owner Kalee Haakma gives for why she now dedicates each Monday to playing only Michael Jackson’s songs, saying: “There is evidence out there that supports his innocence.” Here Haakma clearly expresses that her reason for playing Jackson’s music is to show her support for him in the face of these allegations.

As we have argued elsewhere, our actions can also have meanings that we do not intend. Consider the recent case of British radio DJ Danny Baker, who was fired from the BBC for sending a tweet depicting the new British royal baby as a chimpanzee. The tweet was taken by many to be making a derogatory comment about the baby’s mixed racial heritage. While it is possible that this was what Baker intended, he claims that he meant to make a comment about social class and the media spectacle surrounding royalty. Even if these were his intentions, comparisons with monkeys have long been used to degrade and abuse black and mixed-race people. Given this, it seems more than reasonable for others to interpret Baker’s tweet as making a similar comparison. The context of his action means that it conveys a racist message even if this is not the message Baker intended.

In a similar way, at a time when significant allegations have been made against Jackson, playing his music publicly could reasonably be interpreted as an expression of support for him against those allegations. It sends this message in part because the DJ has chosen to play Jackson’s music instead of the vast catalog of other music available when recent, detailed and compelling testimony has been given against Jackson. Prompting us to appreciate Jackson’s talent as a musician at such a time might reasonably be interpreted as expressing support for Jackson. This is especially true at a time when Jackson’s supporters are publicly denying the allegations made against him and responding with vitriol to those making them, such as accusing Safechuck and Robson of lying and being motivated purely by money. It is also relevant that the form this protest often takes is playing and celebrating his work in public.

Of course, there are a number of reasonable ways to interpret this support. It might be interpreted as a protest against “political correctness going mad”. It might be interpreted as holding that his music should still be appreciated regardless of his guilt or innocence. It might instead be interpreted as an unqualified defense of the man and a wholesale rejection of the accusations made against him. The final form of support clearly sends a further message that is potentially harmful to victims of sexual abuse, particularly victims of important or talented men. It sends the message that if they decide to go public with their accusations then not only will they not be believed, they will also be publicly vilified. DJs might try to avoid sending a harmful message by being explicit that they do not intend to support Jackson in a way that overlooks the accusations. But it is hard to avoid this harmful message being sent when playing his music given the background context of the entertainment industry’s long history of turning a blind eye to allegations of sexual harassment and abuse made against talented men. Against this background, DJs and radio stations shouldn’t play his music.

Does this mean you shouldn’t listen to Michael Jackson’s music in private? Not necessarily. A large part of the problem with radios playing his music is that it is in public. Whether or not you can continue to enjoy his music in private may come down to your personal relationship to the music, as Blindboy Boatclub from Irish hip hop duo The Rubberbandits argues. You might not feel able to if it reminds you of his crimes, but it may still be acceptable to do so. But you might, as Blindboy does, appreciate Jackson’s music more for Quincy Jones’ production than for Jackson’s lyrics and vocals, making Jackson’s immoral behavior largely irrelevant to your appreciation of the music. It is important to acknowledge that Jackson’s success is not solely down to his own musical talents and that many people contributed to the making of his music and the production of his celebrity status. Perhaps if we acknowledged these contributions as much as we do the stars’ contributions, listening to Michael Jackson’s music now wouldn’t send such a harmful message. But given the reality of a celebrity culture that turns talented musicians into stars who are to be worshiped and not to be challenged, we have good reason, at least for now, to mute musicians when credible accusations of seriously immoral conduct are made against them.

Game of Thrones, Avengers: Endgame, and the Ethics of Spoilers

photograph of "all men must die" billboard for Game of Thrones

Early on the morning of April 27th, an early-evening moviegoer in Hong Kong was beaten in the cinema parking lot as he walked to his car; though his injuries were not life-threatening, his story nevertheless went viral thanks to how the attack was provoked – reportedly, the man had been spoiling the just-released Avengers: Endgame by loudly sharing plot details for the crowd (who had not yet seen the movie) to hear. As the culmination of nearly two dozen intertwined movies released over the course of more than a decade, as well as the resolution to the heart-wrenching cliffhanger at the end of 2018’s Avengers: Infinity War, Endgame was one of the most greatly anticipated cinematic events in history and shattered nearly every financial record kept at the box office (including bringing in over $1 billion worldwide on its opening weekend). According to some fan reactions online, the man actually deserved the attack for ruining the fun of the other people in line.

Contrast this reaction to the events of April 28th, when the third episode of Game of Thrones’ final season aired on HBO: within minutes, fans were actively spoiling each scene as they live-tweeted their way through the show together, sending over 8 million tweets out into cyberspace and setting the top nineteen worldwide-trending topics on Twitter. By the time the Battle of Winterfell was over, the internet was swimming with jokes and memes about the story to a degree that even Time Magazine reported on the phenomenon. And this is not an unusual occurrence: each episode of the show’s eighth season has captured the Internet’s attention on the Sunday night it airs. While HBO has taken great pains to keep the details of the season under wraps, there has been no #DontSpoilTheEndgame-type campaign for Game of Thrones as there has been from Marvel for Avengers: Endgame – what should we make of this?

While ‘the ethics of spoilers’ is far from the most existentially threatening moral question to consider in 2019, it is an issue with a strange pedigree. Spoiler Alert!, Richard Greene’s recent book on the philosophy of spoilers, argues that the notion began with Agatha Christie’s 1952 play The Mousetrap, which ended with an exhortation to the audience to keep the ending a secret. The term itself was coined in a 1971 National Lampoon article where Doug Kenney jokingly ‘saved readers time and money’ by telling them the twist endings to famous stories; according to Greene, it was the moderator of a sci-fi mailing list that first implemented a ‘spoiler warning’ policy in 1979 regarding emails that discussed the plot of the first Star Trek movie (the actual phrase ‘spoiler warning’ wouldn’t be applied until discussions concerning the release of Star Trek II: Wrath of Khan three years later). Skip ahead to 2018 and you’ll find serious reporting about a stabbing in Antarctica being precipitated by the victim habitually ruining the ends of the novels his attacker read; although the story turned out to be groundless, it seemed plausible to enough people to make headlines several continents away.

Why do we care about spoilers? Especially considering that psychologists studying the phenomenon have determined so-called ‘spoiled’ surprises to be consistently more satisfying than ones that remain intact for the audience? And, even more curiously, why don’t we care about spoilers consistently? What gives Game of Thrones spoilers a pass while Endgame spoilers ‘deserve’ a punch?

Some have argued that it’s largely a feature of the medium itself: despite the ubiquity of contemporary streaming services, we still assume that stories released in a TV format are culturally locked to their particular airtimes – much like the Super Bowl, if someone misses the spectacle, then that’s their loss. However, movies – especially ones at the theater – are designed to explicitly disengage us from our normal experience of time, transporting us to the world of the film for however long it lasts. Similarly, TV shows are crafted to be watched in your living room where your cell phone is near at hand, while movie theaters still remind you to avoid disrupting the cinematic-experience for your fellow patrons by illuminating your screen in the middle of the film. Perhaps the question of format is key, but I think there’s a deeper element at play.

Aristotle tells us that humans are, by nature, “political animals” – by this, he does not mean that we’re biologically required to vote (or something). Rather, Aristotle – and, typically, the rest of the virtue-ethics tradition – sees the good life as something that is only really possible when pursued in community with others. In Book One of the Politics (1253a), Aristotle says that people who can stand to live in isolation “must be either a beast or a god” and in the Nicomachean Ethics, the Philosopher explains at length the importance of friendship for achieving eudaimonia. In short, we need each other, both to care for our practical, physical needs, but also to create a shared experience wherein we all can not only survive, but flourish – and this good community requires both aesthetic and ethical components.

We need each other, and stories are a key part of holding our cultures together; this is true both mythologically (in the sense that stories can define us as sociological groups), but also experientially – think of the phenomenon of an inside joke (and the awkward pain of knowingly being ignorant of one told in front of you). At their worst, spoilers turn stories into essentially the same thing: a reminder that a cultural event has taken place without you. Spoilers exclude you (or underline your exclusion) from the audience – and that exclusion can feel deeply wrong.

Think of why we host watching parties, attend conventions dressed as our favorite characters, and share endless theories about where a story’s direction will go next: it’s not enough for us to simply absorb something from a screen, passively waiting as our minds and muscles atrophy – no, we crave participation in the creation of the event, if not of the narrative itself, then at least of the communal response to it. The nature of online communities (and the relatively-synchronous nature of television broadcasting) facilitate this impulse beyond our physical location; we can share our ideas, our reactions, and our guesses with others, even when we are far apart. The etiquette of the movie theater limits this, but not entirely – even in our silence, we still like to go to movies together (and, quite often, the experience can be anything but quiet!).

So, while Game of Thrones’s finale aired this past weekend, the community it has engendered will live on (and not only because George R.R. Martin still has two more books to write). The experience of a film like Avengers: Endgame may be over in a snap, but the ties we build with each other can withstand the tests of time. Spoilers threaten to undermine these sorts of connections, which may be why we react so strongly to them – when we don’t get to participate. After all, we can’t forget: the night is dark and full of terrors – one more reason to face it together.

Christianity’s Role in Alt-Right Terrorism: More than an Aesthetic

photograph of alt-right rally

In the wake of the April 27th, 2019 shooting at the Chabad of Poway synagogue in San Diego County, California, fears of a rise in modern antisemitism continue to grow. The gunman that opened fire on the congregation’s Passover worship—killing 60-year-old Lori Kaye and wounding three others—posted an “open letter” filled with political conspiracies, racial slurs, biblical scripture, and Christian theology to the website 8chan shortly before the attack. The gunman’s rhetoric and motives classify him as a member of the alt-right: “a range of people on the extreme right who reject mainstream conservatism in favor of forms of conservatism that embrace implicit or explicit racism or white supremacy.”

For the most part, the internet is the primary radicalizing force for alt-right members. Website chat-rooms like 8chan and Gab, flaunting the value of free speech, attract people hoping to share their odious views and plan acts of violence. In corners of the internet, hate and ignorance combine for deadly effect. In his Prindle Post article, author Alex Layton examined the role that antisemitic political conspiracy played in the October 27, 2018 attack on the Tree of Life synagogue in Pittsburgh, noting that the shooter “bought into [and was motivated by] a conspiracy that the Hebrew Immigrant Aid Society (HIAS) was leading the caravan of refugees who have been migrating from Honduras to the U.S.-Mexico border in recent weeks.” These antisemitic political conspiracies are characterized by what’s known as “secondary antisemitism” where the roles of perpetrator and victimhood are reversed. Prindle Post author Amy Elyse Gordon analyzed how secondary antisemitism was used in the manifesto of the Tree of Life synagogue attacker, saying, “This . . . rhetoric of victimization, including his claims that Jews were committing genocide against ‘his people’ . . . moments before he shoots up a crowd of morning worshipers, is the idea that the real relationships of victimhood are being obscured. This statement reads like a pre-emptive self-absolution for a mass shooting as an act of self-defense.” Political conspiracies and secondary antisemitism certainly motivate attackers, but an underexamined area of influence on alt-right terrorists and their sympathizers may actually lie in the disparate texts reflecting debate and diversity within early Christian tradition.

The Berkeley Center for Religion, Peace, and World Affairs notes that ideas of “traditional Christianity” have heavily influenced the rise of the American alt-right movement, but that “it is important to note that it is almost exclusively an aesthetic phenomenon and not a theological one. Actual Christian theology, in general, is quite hostile ground for the theories of scientific racism . . . and blood and soil ‘volkism’ favored by the alt-right to take root.” The claim of a primarily aesthetic connection between Christianity and the alt-right is to say that Christian symbolism is being exploited to create the appearance of Christendom within alt-right worldviews. For example, the Christian Identity movement—one of seventeen Christian hate groups listed by the Southern Poverty Law Center—is based on the postulate that only European whites are the descendants of the Lost Tribes of Israel. The movement, then, is built on an aesthetic of Judeo-Christian tradition despite the fact that its white supremacist reading of the Bible is entirely unfounded. While alt-right terrorism certainly fabricates a Christian aesthetic, how deep are the theological roots of antisemitism on which they base their ideology?

Antisemitism is a complex, vile, and ever-evolving prejudice against the Jewish community. Antisemitism manifests itself in many ways, but one major example stems from the early Christian idea that the Jewish people were responsible for the murder of Jesus. Despite the fact that only Roman authorities had the power to condemn people to death, the canonical gospels depict the Jewish people as demanding the crucifixion of Christ. The Gospel of Matthew even portrays the Jewish crowds as verbally accepting the responsibility for the death of Christ: “When Pilate saw that he was getting nowhere, but that instead an uproar was starting, he took water and washed his hands in front of the crowd. ‘I am innocent of this man’s blood,’ he said. ‘It is your responsibility!’ / All the people answered, ‘His blood is on us and on our children’” (Matthew 27:24-25). This passage was cited verbatim as justification for the attack on the Chabad of Poway synagogue in the shooter’s open letter.

Annette Reed, professor of Hebrew and Judaic studies at New York University, says that the diabolization of the Jewish people was “just one of a broad continuum of different [rhetorical] strategies by which followers of Jesus made sense of their relation to Judaism.”

Christianity was not made legal in the Roman Empire until 313 CE, when Emperor Constantine issued the Edict of Milan—roughly three hundred years after the Crucifixion. Downplaying the role of Roman authorities in the death of Christ would have been advantageous for a religion attempting to gain political and cultural acceptance in Rome. At this time, the Christian tradition was also working through tensions of self-identification and began to define itself as separate from Judaism.

At the heart of the separation between early Jesus followers and Judaism lies an anxiety about Christianity’s responsibility for antisemitism. John Gager, Professor of Religion at Princeton University, writes, “The study of relations between Judaism and early Christianity, perhaps more than any other area of modern scholarship, has felt the impact of WWII and its aftermath. The experience of the Holocaust reintroduced with unprecedented urgency the question of Christianity’s responsibility for anti-Semitism: not simply whether individual Christians had added fuel to modern European anti-Semitism, but whether Christianity itself was, in its essence and from its beginnings, the primary source of anti-Semitism in Western culture.”

Embedded within the very identity of Christianity lies a troubling cause of antisemitism: the idea that Christians have replaced the Jewish people as the people of God. The Epistle to the Hebrews stands as one example of this idea—called supersessionism. The Jewish Annotated New Testament (second edition) says Christianity “understood itself as having replaced not just the covenant between Israel and God, but Judaism as a religion . . . Supersessionist theology inscribes Judaism as an obsolete, illegitimate religion, and in the New Testament this idea is articulated no more plainly than in Hebrews.” Hebrews argues for the superiority of Christ over Jewish tradition—one point in a complex navigation of Christian-Jewish relations by early Jesus followers. However, the Christian view of Judaism as an invalid religion, coupled with a scapegoating of the Jewish people for the Crucifixion of Christ, can be, and has been, read to justify egregious acts of violence.

Instead of asking if antisemitism ‘exists’ in the earliest thoughts and writings of Christ-followers, it may be more helpful to ask if the New Testament motivates antisemitic thought—whether it’s ‘there’ or not. Professor Reed points out that glaring anti-Jewish messages in the New Testament existed within a context of “inner-Christian debate in which there were also others who were stressing instead the Jewishness of both Jesus and authentic forms of Christianity.” These anti-Jewish sentiments should then be understood within the context of the early Christian movement to separate itself from Judaism. Mark Leuchter, a professor of religion and Judaism at Temple University, says, “Once the New Testament became holy specifically to Christians, the original context for debate was lost,” allowing the New Testament to become “justification for anti-Jewish violence and hatred . . . in ways that many Christians don’t even realize.”

The Chabad of Poway synagogue shooter was a member of the Orthodox Presbyterian Church—an evangelical denomination founded to counter liberalism in mainline Presbyterianism. After reading the Christian theology present in the shooter’s manifesto, Reverend Mika Edmondson, a pastor in the denomination, said, “We can’t pretend as though we didn’t have some responsibility for him — he was radicalized into white nationalism from within the very midst of our church.”

Also in response to reading the shooter’s manifesto, Reverend Duke Kwon of the Presbyterian Church in America says, “you actually hear a frighteningly clear articulation of the Christian theology in certain sentences and paragraphs. He has, in some ways, been well taught in the church.” To address the violent and growing crisis of alt-right domestic terrorism in the United States, the Christian church must do more than simply condemn such acts. Christians, especially conservative, evangelical denominations whose political ideology engages alt-right views, should recognize that their teachings can be and are being conflated with white nationalism. Practicing Christians are quick to defend the Bible in the face of criticism, but in this case there is more at stake than reputation. The connections between alt-right ideology and Christianity go dangerously beyond simple aesthetics. The reason Christian aesthetics are so widely co-opted by proponents of white supremacy is that early Christian scripture and the very identity of the Christian tradition have roots in anti-Jewish sentiment. Those who choose to ignore this reality become complicit in its tragic consequences.

Fixing What We’ve Broken: Geoengineering in Response to Climate Change

underwater photograph of reef

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


Extending over 1,200 miles, the Great Barrier Reef is the largest reef system on the planet. It is the only system of living beings visible from space and is one of the seven wonders of the natural world. The reef is home to countless living beings, many of which live nowhere else on the planet.

The Great Barrier Reef is valuable in a number of ways. It has tremendous instrumental value for the living beings that enjoy its unique features, from the creatures who call it home to the human beings who travel in large numbers to experience its breathtaking beauty. One may also think that functioning ecosystems have intrinsic value. This is the position taken by notable 20th-century environmentalist Aldo Leopold in his work A Sand County Almanac. Leopold claims that “A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community; it is wrong when it tends otherwise.” The idea here is that the ecosystem itself is valuable, and ought to be preserved for its own sake.

When something has value, then, all things being equal, we ought to preserve and protect that value. Unfortunately, if we have an obligation to protect the Great Barrier Reef, we are failing miserably. The culprit: anthropogenic climate change. As David Attenborough points out in his interactive series Great Barrier Reef, “The Great Barrier Reef is in grave danger. The twin perils brought by climate change – an increase in the temperature of the ocean and in its acidity – threaten its very existence.” One result of this process is what is known as “coral bleaching.” Coral has a symbiotic relationship with algae. Changes in ocean temperatures disrupt this relationship, causing coral to expel algae. When it does so, the coral becomes completely white. This is more than simply an aesthetic problem. The algae is a significant source of energy for the coral, and most of the time, coral does not survive bleaching. Devastatingly, this isn’t just a problem for the Great Barrier Reef — it’s a global problem.

One general category of approach to this problem is geoengineering, which involves using technology to fundamentally change the structure of the natural world. So, as it pertains specifically to the case of coral bleaching, geoengineering as a solution would involve using technology to either cool the water, lower the acidity levels, or both. For example, one such approach to rising ocean temperatures is to pump cooler water up from the bottom of the ocean to reduce surface temperatures. To deal with acidity, one suggestion is that we use our knowledge of chemistry to alter the chemical composition of ocean water.

Geoengineering has been proposed for a broader range of environmental issues. For example, some have suggested that we send a giant mirror, or a cluster of smaller mirrors, into space to deflect sunlight and reduce warming. Others have suggested that we inject sulfuric acid into the lower stratosphere, where it will be dispersed by wind patterns across the globe and will contribute to the planet’s reflective power.

Advocates of geoengineering often argue that technology got us into this problem, and technology can get us out. On this view, climate change is just another puzzle to be solved by the human intellect and our general propensity for using tools. Once we direct the unique skills of our species toward the problem, it will be solvable. What’s more, they commonly argue, we have an obligation to future generations to develop the technology that will give future people the tools they need to combat these problems. Preventing climate change from happening in the first place requires behavioral changes from too many agents to be realistic. Geoengineering requires actions only from reliable scientists and entrepreneurs.

Critics raise a host of problems for the geoengineering approach. One of the problems typically raised concerns the development of new technologies in general, but is perhaps particularly pressing in this case: How much must we know about the consequences of implementing a technology before we are morally justified in developing that technology? The continued successful function of each aspect of an ecosystem depends in vital ways on the successful function of the other aspects of that ecosystem. There is much that we don’t know about those relationships. In the past, we’ve developed technologies under similar conditions of uncertainty; we tried to control the number of insects in our spaces through the use of pesticides, to devastating and deadly effect. We don’t have a great track record with this kind of thing (as the phenomenon of anthropogenic climate change itself demonstrates). There is potential for good here, but also the potential for great and unexpected harm.

Another problem has to do with which parties should be responsible for implementing geoengineered approaches. Who should get to decide whether these approaches are implemented? All life on earth will be affected by the decisions that we make here. Should such decisions be made through a mechanism that is procedurally just, like some form of democratic process? If so, representative governments might be the appropriate actors to implement geoengineered strategies. That may seem intuitively appealing, but we must remember that our actions here have consequences for global citizens. Why should decisions made by, say, citizens of the United States have such substantive consequences for citizens of countries that, for either geographic or economic reasons, are harder hit by the effects of climate change? What about the sovereignty of nations?

An alternative approach is for entrepreneurs to pursue these developments. Often the most impressive innovations are motivated by the competitive nature of markets. This approach, however, faces some of the same challenges as the governmental approach—it is counterintuitive that people with primarily financial motivations should direct something as critical as the future of the biosphere.

Finally, critics argue that the geoengineering approach is misguided in its focus. What is needed is a paradigm shift in the way that we think about the planet. The geoengineering approach encourages us to continue to think about the biosphere as a collection of resources for human beings to collect and manipulate any way that we see fit. A more appropriate approach, some argue, is for human beings to make fundamental changes to their lifestyles. They must stop thinking of themselves as the only important characters in the narrative of the planet. Instead of focusing on fixing what we break, we should be focusing on avoiding breaking things in the first place. Toward this end, they argue, our primary focus should be on reducing carbon emissions.

Blame and Forgiveness in Student Loan Debt

photograph of campus quad with students

US Senator and presidential hopeful Elizabeth Warren has recently proposed a pair of debt relief efforts that aim to address the growing problem of student loan debt in America. The first proposal would cancel “$50,000 in student loan debt for every person with household income under $100,000” (with lesser reductions for those with higher household incomes), while the second aims to help prevent student loan debts from becoming a problem again in the future by eliminating “the cost of tuition and fees at every public two-year and four-year college in America.” Here I want to focus on the ideas behind Warren’s first proposal. Should student debts be forgiven?

Regardless of where one falls on the political spectrum, it is undeniable that mounting student debt is an enormous problem in America. Recent studies have shown that approximately 40 million Americans have student loan debt, and that student debt has become the second-highest category of consumer debt, behind only mortgage debt. Although younger people hold the bulk of student debt, individuals from all age ranges have felt the effects, such that “the number of Americans over the age of 60 with student loan debt has more than doubled in the last decade.” There are, of course, consequences to so many people having so much debt: if you are spending a significant amount of your income on repaying student loans then you are going to find it difficult, for example, to buy a house or car, or to save or invest for your future. It’s also unclear what will happen if a significant portion of those with debt default on their loans, with some economists comparing the student debt situation to the mortgage crisis a decade ago. With student debt being an urgent problem, the idea of addressing it by implementing a debt forgiveness plan might then seem like a good first step.

There are many practical questions to be asked about the implementation of a debt-forgiveness plan like the one Warren proposes (she has, of course, thought about the details). There have been concerns with Warren’s plan, however, that aren’t so much about the dollars and cents as they are about blame and accountability. In answering the question of whether debt should be forgiven we need to first think about who is to blame for it.

A natural place to locate blame is with the students themselves. Here is an example of an argument that one might make for this view:

Those signing up for college know full well what they’re getting themselves into: they know how much college costs, how much they will have to borrow, and generally what that entails for repaying those debts in the future. No one is forcing them to do this: they want to go to college, most likely for the reason that they want a higher paying job that requires a college degree. It may very well be the case that it is difficult to be ridden with debt, but it is debt for which they are themselves accountable. Instead of this debt being forgiven they ought to just work until it’s paid off.

Arguments of this sort have been presented in numerous recent op-eds. Consider, for example, the following by Robert Verbruggen at the National Review:

“Where to start with [Warren’s proposal]? With the fact that student loans are the result of the borrowers’ own decisions – often good decisions that increased their earning power? With the fact that people who’ve been to college are generally more fortunate than those who have not? With the fact that this discriminates against people who paid off their loans early, as well as older borrowers who have been making payments for longer?”

In another article, Katherine Timpf similarly claims that student debt should not be forgiven, and that student debt became such a problem only because students were “encouraged to take out loans that they could not afford in the first place.” Curiously, she goes on to claim that while Warren’s debt-forgiveness plan is “a terrible, financially infeasible idea,” a culture that encouraged over-borrowing is nevertheless ultimately to blame. It is difficult to make coherent sense of this position: if it is indeed a culture that encourages excessive borrowing that is to blame, then it is hard to see why all the blame should fall to the students.

That student debt is primarily the result of broader societal factors, and not that of bad decision-making, laziness, or unwillingness to “stick it out”, is the driving thought behind many of those who are in favor of debt forgiveness. There are undoubtedly many such factors that have contributed to mounting student debt, but there are typically two that are appealed to most frequently: the skyrocketing cost of tuition and the stagnation of wages. While Warren herself notes that she was able to afford college by working a part-time job, doing so in the modern economy is often very close to impossible. Without independent support it seems that students have little choice but to take out increasingly large loans.

Here, then, is where the ideological heart of the debate lies: those who argue in favor of debt forgiveness will generally see the blame for the student loan crisis as predominantly falling on societal factors (like increased tuition and stagnated wages), whereas those who argue against it generally see the blame as predominantly falling on the students themselves. Presumably we should assign responsibility where the blame lies, and so who we think is most to blame will determine whether we should implement something like debt forgiveness.

However, we have seen that there is substantial data supporting the view that the student debt crisis is largely attributable to societal factors outside of the control of the students. Furthermore, the thoughts that students are simply “not working hard enough” or “just want a handout” tend to be based on little more than anecdotes and bias (stories of students working multiple jobs just to make ends meet are readily available). This is not to say that students should not be assigned any blame whatsoever for their decisions to go into debt for their educations. However, it does seem that significant contributors to those debts are ones that are outside of a student’s control. As a result, it does not seem that students should be fully blamed for their debts.

Even if this is so, should we think that the best way to take responsibility for those debts is to implement debt forgiveness? As we have seen, some have expressed concerns that forgiving debts would be, in some way, “unfair”. There are two kinds of unfairness that we might consider: first, it might seem to be unfair to those who have already paid off their student loans through years of hard work; second, it might seem to be unfair to those who have to pay for someone else’s debt – Warren’s proposal to finance her debt forgiveness plan, for example, is to generate funds from a tax increase on the extremely wealthy, and one might think it unfair that these individuals should have to cover the debts of someone else. Would a debt-forgiveness proposal be unfair in these ways, and if so, is that good enough reason to say that it shouldn’t be implemented?

While these concerns about fairness might seem like appealing reasons to reject debt forgiveness, upon closer inspection they do not stand up to scrutiny. Consider the first worry: if a debt forgiveness plan is implemented, there will indeed be some people who have just finished paying off their debt prior to the policies taking effect and so will not be able to take advantage of their debt being forgiven. It would then seem unfair to privilege one group over another, where the only relevant difference is that the former took on their debt later than the latter. But it is hard to see why this should result in not having any debt forgiveness at all: the argument that “well if I don’t get it, they shouldn’t either!” does not solve any problems. This is not to say that such unfairness should not be addressed at all – perhaps there could be some kind of reimbursement for those who paid off debt before it was forgiven – but it does not seem like a good enough reason to not offer any debt forgiveness to anyone. The second worry similarly fails to hold much water: unless someone takes issue with the idea of taxation in general, there does not seem to be anything particularly unfair about having the extremely wealthy pay more to aid others.

There are, of course, many factors to weigh when considering something like a debt-forgiveness plan, and Warren’s plan in particular. Regardless, given the severity of the student debt problem, and given that the factors that contributed to it are largely out of students’ control, it seems that the responsibility for student debt cannot fall solely on the students themselves.

Is the Filibuster Democratic?

bird's eye photograph of Maryland state senate chamber

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


A wide range of policy debates has already dominated the front lines of the 2020 Democratic Primary, over proposals including Medicare for All, raising the national minimum wage, and mandating an increase in teachers’ salaries. However, another emerging policy proposal that has gained some attention in recent months is abolishing the use of the filibuster to block legislation in the Senate.

Filibustering is a tactic frequently used by senators in which they can prolong debate over a bill almost indefinitely simply by holding the debate floor for as long as they can, thus effectively blocking the bill. A filibuster may be sustained even if the senator is discussing a topic other than the legislation at hand. For instance, in 2013 Sen. Ted Cruz held a filibuster against a version of the Affordable Care Act for 21 hours and 19 minutes by doing things such as reading bedtime stories to his two young daughters and announcing messages that had been sent to his Twitter account. The only formal way to stop a filibuster is for the Senate to vote in favor of “cloture,” which requires a three-fifths supermajority vote (or 60 votes out of 100). A filibuster may also be stopped by more informal means if a senator must stop debating to use the bathroom or to sit down.

As a function of the Senate, the filibuster is very well established, making it a tradition that is rarely evaluated. However, new Democratic candidates are beginning to question whether or not the filibuster truly helps senators represent their constituents. Answering this question may require consideration of historical context.

The first effective filibuster was “discovered” in 1841 by Alabama Senator William R. King, when he threatened an indefinite debate against Kentucky Senator Henry Clay over the creation of a Second Bank of the United States. Other senators realized there was no rule mandating a time limit for debate and sided with Senator King. Although it had been discovered, this did not make filibustering a common practice in the Senate. Indeed, the cloture rule was not established until 1917, when a group of just 11 senators managed to kill a bill that would have allowed President Woodrow Wilson to arm merchant vessels in the face of unrestricted German aggression at the dawn of U.S. involvement in World War I. Even so, filibustering still did not take hold until 1970, when the “two-track system” was implemented in the Senate. The two-track system allows for two or more pieces of legislation to be on the Senate floor simultaneously, with debate divided up throughout the day. This made filibustering much easier for senators to maintain, as they could filibuster one bill without halting Senate activity altogether. From this point forward, filibustering became increasingly common in the U.S. Senate. However, from its history it is clear that the filibuster is not a long-time tradition of the Senate, but rather a loophole in Senate rules that gained popularity as a strategy for obstruction of bills. Yet many believe it to be an indispensable function of Senate rules.

Many senators would argue that filibustering is necessary to adequately represent their states’ policy needs. Its primary purpose is to check tyranny by the majority and preserve minority rights in the Senate. Take the gun control debate as an example. Senate Democrats have long pursued reforms on gun laws through the Senate but have had little to no success due to Republicans holding the Senate majority and not allowing gun reform legislation to even reach the floor for a vote. Therefore, Democratic Senator Chris Murphy of Connecticut filibustered for 14 hours and 50 minutes in the wake of a mass shooting at Pulse nightclub in Orlando, Florida. The filibuster swayed Senate Majority Leader Mitch McConnell to hold two votes on gun reform: one proposal to expand background checks for potential gun owners, and another proposal to block suspected terrorists from purchasing guns. In this case, the Senate minority was able to come together and prevent cloture on an issue that they could otherwise not have pursued due to Senate rules. It is instances like these that lead many to call the filibuster the “Soul of the Senate” and praise the filibuster’s ability to encourage more in-depth debate on highly contested issues. However, others take issue with the way the filibuster is used.

While the filibuster balances the power of the Senate majority, this function can also be limited. This is because the Senate Majority Leader must approve bills before they are brought to the floor, meaning that senators in the minority must beg the Senate Majority Leader to introduce a piece of legislation to the floor before they can even initiate a filibuster. In a highly polarized Senate, where current Majority Leader McConnell controls the floor ruthlessly, even getting a filibuster started is extremely difficult. Despite this, there are still some who argue that the ability to filibuster gives the Senate minority too much power. The primary reasoning behind this argument is that cloture and its 60-vote requirement are difficult to achieve, especially amid the rampant hyperpartisanship that currently exists in the Senate. The possibility of a filibuster essentially sets a supermajority requirement on all major pieces of legislation, thus hindering Congress’s productivity. The Senate minority’s ability to filibuster also gives unpopular policy proposals more time in Senate proceedings than they should have. A prime example of this was in 1964, when a small coalition of Southern Democrats filibustered the Civil Rights Act for 75 hours.

Beyond giving the Senate minority too little or too much power, it is also alleged that the filibuster is applied unevenly between political parties. While filibustering does alternately inconvenience one side or the other depending on which party holds the Senate, fundamental parts of Democrats’ and Republicans’ platforms allow the filibuster to disadvantage Democrats more in the long run. The modern Democratic party tends to push policy that introduces new or enhances existing government programs, while the Republican party leans on a platform of blocking these programs and cutting taxes. Republican policies of blocking social welfare and cutting taxes are more compatible with the budget reconciliation process than are Democratic policies. Because filibustering is not allowed in the budget reconciliation process under Senate rules, Republicans can easily push their agenda through reconciliation, while Democrats are left to struggle for a 60-vote supermajority to advance most of their legislation.

Whether it should be retained or scrapped, what is most important is that the filibuster is under public scrutiny by high-profile politicians. As injustices in America’s legislative mechanisms become more apparent, public criticism of these mechanisms has also become more popular. Along with debating the pros and cons of the filibuster and its implications for democracy, presidential candidates for 2020 are also entertaining drastic structural reforms such as doing away with the Electoral College, increasing the size of the Supreme Court, and offering statehood to Washington, D.C. and Puerto Rico. Whether people believe these reforms are workable or not, the public discussion around taking fundamental action to make the U.S. legislative process more democratic and representative is one that is well worth the nation’s effort.

Political Animosity and Estrangement

black-and-white photograph of protester with "Don't Separate Families" sign

It can be extremely difficult to navigate the current political climate. This task can be especially daunting given the changes occurring within families, where differences in party identification can strain relationships. Heated debates have turned into feuds which last far longer than election night, and now many Americans are wondering whether political affiliations are justification enough to permanently distance themselves from certain family members. Growing political animosity has prompted many to consider the ethical dimensions of cutting off one’s family on the basis of their politics.

These questions of estrangement were recently addressed in a New York Times ethics advice column. “Can I Cut Off a Relative with Hateful Views?” explores this idea in the case of one woman who has a friendly relationship with her brother-in-law; recently, however, he has become “so radical in his political and world views that I am no longer comfortable maintaining a relationship. He has a blog and is an occasional radio host, so his are very public opinions that are filled with hate and even calls to violent action.” The anonymous individual asking for advice explains that she feels angered by his hateful speech and wonders whether it would be ethical to cut off contact by declining invitations to meet up, or whether she should first let him know clearly that his behavior offends her. The author discusses the importance of discourse between disagreeing family members and stresses that although it is acceptable to cut ties with one’s family over political differences, one should have a conversation addressing why.

This story is one of many dealing with relationships complicated by politics. Similar stories were shared in a Washington Post article describing how holidays have changed for American families since Trump’s election. Some describe how the election brought families closer together, while others recount rifts that may never heal. Drew Goins writes about a mother from Denver, Colorado who reports both the difficulties of alienating oneself from family members and the necessity, given her circumstances, of doing so. Cynthia Dorro’s adopted daughters are nonwhite immigrants, and her side of the family’s votes for Trump left her confused about how best to support her children: “My parents’ votes for Trump tore through me like a bullet… knowing that more than half the people in my home state cast a vote in support of divisiveness and bitter hate toward people like my children has left me unable to even contemplate a visit there.” Now Dorro spends holidays exclusively with her husband’s side of the family, which she describes as “devastating, and I can’t really characterize what we do as celebrating the holidays anymore.”

Dorro’s is one of many stories demonstrating the heartbreaking way politics can push families apart, but not all Americans share this experience. Julann Lodge describes his family becoming closer through a shared rejection of Trump and his policies: “The Republican members of my family have left the party because they are horrified by the direction the GOP has taken… Now, we are all united in opposing racism, lies and environmental devastation. Trump has united my family more than he will ever know.” Beyond families that go their separate ways and those that are closer than ever before, politics has also become an unmentionable topic between relatives of different parties not wishing to quarrel. A St. Louis family describes a love for each other that has fueled an avoidance of any politics-related topics that might drive them apart: “My aunts and uncles are almost exclusively Republicans. But we love each other deeply, so we all grit our teeth and promise ourselves we will behave. Most of us avoid any mention of politics at all; when someone ignores the unspoken rule, everyone else just kind of drifts out of the room.”

These narratives from families across the country illustrate the ways in which family dynamics have shifted with the rise of Trump, but others remain unsure about where to draw the line between not speaking about politics, limiting visits with certain family members, and cutting people off altogether. These questions open onto a debate that goes much deeper than relationships between feuding relatives; it concerns the political polarization that has taken hold in America since the 2016 election.

The Pew Research Center emphasizes this rising political animosity in its survey research: “In choosing a party, disliking the policies of opponents is almost as powerful a reason as liking the policies of one’s own party.” These negative factors influencing individuals’ political affiliation have fueled the unrest between people from different parties, and further research shows that adjectives such as close-minded, lazy, immoral, dishonest, and unintelligent were used by almost half of both Democrats and Republicans to describe members of the opposite party. In fact, animosity between the parties has been consistently increasing since 2004, and the Pew Research Center found that 30% of Democrats believe the opposite party is a threat to the well-being of the nation, while 40% of Republicans say the same of Democrats.

These statistics on political divisiveness and the impressions opposite parties have of each other speak volumes about the ways in which family relations have changed since the election of 2016. Not only do these statistics allow us to better understand what the parties feel about each other, but they provide a mechanism through which we can better interpret policies and learn how to open conversation between people who may be political opposites, but who can ultimately choose to unite with one another against hateful rhetoric. Individual stories of families choosing to maintain a level of distance due to political differences are telling of the enormous influence that political polarization at the national level has on relationships between individuals.

Gun Control and the Ethics of Constitutional Rights

photograph of NRA protesters

Consider these starkly different positions on gun control: a month ago, just hours after a gunman killed 50 people worshiping at Friday prayers in two mosques in Christchurch, New Zealand, the Prime Minister, Jacinda Ardern, promised to tighten the country’s gun laws. And several days ago, Donald Trump told the National Rifle Association in a speech that he intends to pull out of the United Nations arms treaty, citing as a reason the protection of second amendment rights. Trump said, “Under my administration, we will never surrender American sovereignty to anyone. We will never allow foreign bureaucrats to trample on your second amendment freedom.”

In the United States it is difficult to tighten gun ownership rules or limit the types of guns available (like military assault rifles and automatic and semi-automatic weapons) because ownership of guns for self-protection has been found to be protected by the second amendment of the Constitution, which states that citizens have a right to ‘keep and bear arms.’ But no such constitutional right exists in New Zealand, where Ardern swiftly followed through on her promise; nor in Australia, whose gun laws (also passed in response to a massacre) served as the model for New Zealand’s.

The man charged over the Christchurch massacre was in possession of two semi-automatic rifles as well as three other firearms, all held legally on his entry-level ‘category A’ firearms licence; the semi-automatic rifles had allegedly been modified by adding a high-capacity magazine. Less than a month later a sweeping gun law reform bill was brought before the New Zealand parliament. The bill outlaws most automatic and semi-automatic weapons, and components that modify existing weapons. During the bill’s final reading, Ardern said: “I could not fathom how weapons that could cause such destruction and large-scale death could be obtained legally in this country.”

In a rare show of bipartisanship, New Zealand’s Members of Parliament overwhelmingly backed the changes, which were passed by a vote of 119 to 1 in the House of Representatives after an accelerated process of debate and public submission. It is now illegal to own a military style rifle in New Zealand (with the exception of heirloom weapons or those used for professional pest control). Possession of such a weapon will from now on carry a penalty of up to five years in prison. The bill includes a buyback scheme for anyone who already owns such a weapon, allowing them to surrender it and receive compensation based on the weapon’s age and condition.

New Zealand’s gun law reform was based on similar measures taken by the Australian government in 1996, following the Port Arthur Massacre in which an individual used a semi-automatic rifle to murder locals and tourists in the small historic town, ultimately killing 35 people, including many children. Twelve days after the Port Arthur massacre, the Australian prime minister, John Howard, announced a sweeping package of gun reforms. In the wake of Port Arthur, the Australian government banned automatic and semiautomatic firearms, adopted new licensing requirements, established a national firearms registry, and instituted a 28-day waiting period for gun purchases. It also bought and destroyed more than 600,000 civilian-owned firearms, in a scheme that cost half a billion dollars and was funded by raising taxes.

In both countries gun law reform had been frustrated by conservative elements prior to the massacres. In New Zealand, as in Australia, there was some pushback from conservative politicians and from the gun lobby, but in general there was widespread community support for the banning of military style weapons and automatic and semi-automatic rifles, as both countries grappled with extreme tragedy.

In the United States, a similar response to major firearms massacres such as Sandy Hook in 2012 and, more recently, the Parkland, Florida school shooting in 2018 is almost inconceivable. Any attempt to instigate reform on the back of unspeakable tragedies such as these appears doomed; indeed, Barack Obama’s pledge to push through some modest reforms following Sandy Hook encountered fierce resistance from the NRA and other organisations who, in vehemently defending the constitutional right of the Second Amendment, constitute the gun lobby.

In Australia in 1996, and in New Zealand in 2019, the governments acted on the tragedy to amend laws so as to keep people safe and to ensure such massacres do not continue to occur. There have been no mass shootings in Australia in the over 20 years since Port Arthur; in the 20 years before it there were 13. In both cases, the gun lobby was barely given time to react, and reform was swift so that it took place in the wake of the tragedy. Even though the US has a constitutional right protecting gun ownership, the gun lobby fears the capacity of governments to move to curtail gun ownership in the wake of severe massacres and large-scale tragedies, such as Sandy Hook or Parkland, and members of the NRA have expressed concerns that such mobilisation by gun control advocates in Australia, and now New Zealand, may give hope and impetus to those campaigning for gun law reform in the United States.

A main gun-lobby tactic is to criticise gun-control advocates for capitalizing on tragedy to target gun laws. Following a mass shooting it is usual for the gun lobby, and a large number of politicians as well, to offer “thoughts and prayers” and then to vehemently oppose any suggestion that gun laws need reform. One typical response from conservative second amendment defenders is that it is bad (it seems they are suggesting it is ethically bad) to use a tragedy to further a political agenda or ideology.  

This tactic was recently spelled out clearly by Catherine Mortensen, an NRA media liaison officer, though she was unaware at the time that she was being recorded. Several Australian politicians and political staffers were secretly filmed visiting the United States and meeting with NRA and gun lobby officials and supporters with a view to soliciting political donations. Said donations were slated to help these Australian right-wing conservative players electorally manoeuvre into a position from which to water down Australia’s gun laws. In meetings at the NRA’s Virginia headquarters, NRA officials provided Australian One Nation’s James Ashby and Steve Dickson with tips from the NRA playbook on how to galvanise public support to change Australia’s gun laws and coached the pair on how to respond to a mass shooting.

Catherine Mortensen’s advice, following a mass shooting, is: “Say nothing.” If media queries persist, go on the “offence, offence, offence.” She counsels gun lobby groups to smear gun-control groups. “Shame them” with statements such as: “How dare you stand on the graves of those children to put forward your political agenda?”

Of course, Mortensen’s point about not using tragedy to further a political agenda is, in this scenario, a totally disingenuous piece of sophistry. One of the principal points about such tragedies (Christchurch, Sandy Hook, Port Arthur), and one that determines how we should respond, is the procurement and possession of weapons – in these cases the availability and use of military style weapons by a single person to massacre strangers. Whichever way it is talked about, how a government and community responds will always suggest some political end. Events like these rightly impinge on our political lives, and it is surely the role of politicians to act in the best interests of the community. Mortensen’s statement is also obviously hypocritical, considering the political clout of the NRA and the gun lobby.

It is surely time for the USA to look again at the ethics of its constitutional right to bear arms. A constitutional right is not a human right, and many now agree that the second amendment is a relic from the eighteenth century, when newly independent Americans may have needed “well regulated” militias to protect themselves.

But as an ethical concept, the notion of a right needs to have some meaning. Although a right is an abstract, deontological concept, and is in an important sense a ‘good in itself’, it also has to be grounded in our experience of the world, and to emerge from what we know to be the case from our experience. We know that human rights pertaining to access to food and shelter and freedom from tyranny are fundamental because those things are necessary for us to flourish. But we can see from the number of massacres in America, not to mention other gun-death and injury statistics, that high levels of gun ownership and availability do not contribute to a flourishing society.

The right to bear arms cannot be considered a right that is ‘good in itself,’ or one worth protecting in the face of every tragedy and every statistic that measures the harms it causes American society. In fact, in this case, the existence of a right is a primary factor standing in the way of ethical progress.

Online Discourse and the Demand for Civility

drawing of sword duel with top-hatted spectators

It often seems like the internet suffers from a civility problem: log onto your favorite social media platform and no doubt you’ll come across a lot of people angrily arguing with one another and failing to make any real progress on any points of disagreement, especially when it comes to political issues. A common complaint is that the “other side” is failing to engage in discussion in the right kind of way: perhaps they are not giving opposing views the credit they think they deserve, or are being overly dismissive, or are simply shutting down discussion before it can get started. We might think that if everyone were just to be a bit more civil, perhaps we could make some progress towards reconciliation in a divided world.

But what, exactly, is this requirement to be civil? And should we be civil when it comes to our online interactions?

At first glance the answer to our first question might be obvious: we should certainly be civil when talking with others online, and especially when we disagree with them. Perhaps you have something like the following in mind: it is unproductive in a disagreement to name-call, or use excessive profanities, or to generally be rude or contemptuous of someone else. Acting in this way doesn’t seem to get us anywhere, and so seems to be something to be generally avoided.

However, when people in online debates accuse the others of failing to be civil, they are often not simply referring to matters of mere etiquette. One of the more common complaints with regards to the lack of civility is that the other side will refuse to engage with someone on a topic about which they disagree, or else if they do discuss it, not discuss it on their terms. A quick stroll through Twitter will bring up numerous examples of claims that one’s opponents are not engaging in “civil discourse”:

“I can essentially find something we agree on through civil discourse with anyone willing to engage in it. Society has become so sheltered that too many brats think their opinions matter more than others.”

https://twitter.com/illumiXnati/status/1124656164443000834

“If you are in America here, none of you understand this. Pick up a copy [of the constitution] and read it, study it, and then maybe we can engage in civil discourse. Until then you need to sit down and remain silent.”

https://twitter.com/CAB0341/status/1123957247548248064

Notably, many of those who have been banned from one or more social media platforms have claimed that their banning is a result of the relevant companies refusing to engage in the kind of civil discourse of which they take themselves to be champions. Consider, for example, former Alex Jones writer and conspiracy theorist Paul Joseph Watson who, upon his banning from Facebook, tweeted the following:

“The left has learned that they can silence dissent by labelling anyone they disagree with an ‘extremist’. I am not an ‘extremist’. I disavow all violence. I encourage peaceful, civil discourse. Anyone who has met me or is familiar with my work knows this”

https://twitter.com/PrisonPlanet/status/1124641179771994114

Or consider the following from journalist Jesse Singal:

“90% of the time ‘I will not debate someone who is arguing against my right to exist’ is simply a false derailing tactic, but if someone DOES deny your right to exist, and is in a position of power and willing to debate you, how crazy would it be to NOT debate them??”

https://twitter.com/jessesingal/status/1117077434032119808

Singal’s tweet was a response to the backlash against his writings on trans issues, in which many took him to be portraying the trans population in America as consisting largely of people who seek to transition because of mental illness or trauma, many of whom ultimately end up regretting their decision. Singal, then, takes the refusal of trans persons to debate with him about the nature of their very being to be a “derailing” tactic, while Watson claimed that his views, regardless of their content, ought to be allowed to be expressed because he expresses them in a manner that he takes to be civil.

In the above tweets (and many others) we can see a couple of different claims about civil online discourse: the first is that so long as one’s views are expressed in a civil manner then they deserve to be heard, while the second is that an opponent who refuses to engage in such civil discussion is doing something wrong. What should we make of these claims?

In response to the Singal tweet and the resulting controversy, Josephine Livingstone argues that “[d]ebate is fruitful when the terms of the conversation are agreed upon by both parties…In fact, it is the “debate me, coward” crowd that has made it impossible to have arguments in good faith, because they demand, unwittingly or not, to set the terms.” The worry, then, is that when one demands debate from one’s opponent, one is really demanding debate on the grounds that they themselves accept. When one’s grounds and those of one’s opponent are fundamentally at odds, however – consider again the charge that Singal wants to debate people whose very right to existence he is denying – it seems impossible to make any real progress.

As Livingstone notes, there is a persistent culture of those who call for debate and, when this call is inevitably ignored, cry that their opponents somehow fail to meet some standard of civil discourse. The thought is that refusing to engage with an opponent in civil discourse is a sign of cowardice, or that one is secretly worried that one’s views are false or will not hold up to scrutiny. But of course this need not be the case: dismissing or putting an end to a discussion that fails to be productive is not a tacit admission of defeat or of insecurity in one’s views. Instead, if it does not seem like any progress will be made because the discussion is not productive, refusing to engage, or ending the conversation, might be the best course of action.

The assumption that there can be some kind of neutral ground for debate, then, will already make demands on one’s opponent when their values are fundamentally different from one’s own. Again, if you are arguing that I should not have the right to exist it is difficult to see how we could reach any kind of midway point on which to have a discussion, or why I should be required to do this in the first place. Far from failing to meet a standard of civility, then, refusing to engage in what one takes to be civil discourse does not seem like any kind of failing when doing so would prove unproductive.

Should Instagram Remove Its Like System?

photograph of cappuccino with heart made with foam

We live in an era where we are at the mercy of the internet. Social media gives users the power to share their life with the world. And in exchange for sharing their life, users are rewarded with likes. The act of liking something on social media, double tapping a photo or thumbs-upping a status, is a way for users to connect and interact with one another. When someone’s content is liked, it gives them digital agency. Multiple likes, those that allow users to go viral, indicate both digital agency and status. But what about the users who don’t get that many likes? In the social media realm, does their lack of likes devalue the life that they share online? Instagram is addressing this dilemma by considering removing likes from posts and videos. The photo-sharing app ran a test in Canada last week and could consider making the change for the entire app. Should Instagram go forward with removing the like system…or leave things as they are?

They say that something shouldn’t be fixed if it’s not broken. And technically, nothing is wrong with the Instagram app itself. People connect on the app by sharing photos and their lives. They possess the freedom of expressing themselves. People can travel the world just by logging in to the app. The issue that is being addressed is the digital currency that has been imbued in the like system and its effects on Instagram users. Instagram has been associated with anxiety, depression, low self-esteem, and bullying. Why is this? Perhaps it’s because of what some people post–be it social media celebrities, other famous people, or even those in someone’s friend group. Users might feel low self-esteem when their friends post the new car they got for college graduation or record every moment of an international spring break trip. Users who don’t have that new car or can’t go on the trip might feel inferior and feel FOMO (the fear of missing out), especially if their own posts are getting only 10 to 20 likes. But this could be the whole reason why Instagram is considering removing the like system. Some users seem to be treating Instagram like a competition–seeing who has the most likes and who has more social status. Treating Instagram in such a manner takes away from its core mission–to connect and to allow people to express themselves freely and creatively. If so many people are focused on likes and you remove all of that, all should be right with the world…right?

Removing the liking system from Instagram would affect users in other ways in addition to addressing issues of self-esteem and anxiety. Influencer marketing has had a big impact on Instagram. By 2020, it is on pace to become an $8 billion industry. Per Lexie Carbone, a staff writer for Later.com, Instagram is the best performing platform for influencer partnerships with brands, with an average 3.21% engagement rate compared to the 1.5% engagement rate on other social media platforms. In addition, entire businesses are created on Instagram–clothing brands, photography companies, etc. For some influencers and companies on Instagram, the likes–the digital currency–translate into actual money. The success of brands is often based on likes because how many people engage with a brand indicates its status and, therefore, its quality and success. If there is no liking system, how can brands and companies communicate to potential clients that they can do something for them that their competition cannot? Some people have based their entire paycheck and livelihood on Instagram. Without a like system, how would they make a living?

At the same time, removing the liking system might give smaller brands and smaller influencers an opportunity to gain more exposure after being overshadowed by the pages with hundreds of thousands of followers. The same applies to users who feel depressed because of other posts on Instagram: a system without likes could give their pages more exposure. But users could also simply unfollow the pages that lower their self-esteem, and the liking system could stay in place. So, should Instagram incorporate a system without likes? The pros and cons seem to meet at a stalemate. But the thing about the internet is that it’s always changing the status quo. There are constant updates and improvements, bug fixes and concept changes. Instagram could test a system without likes and see the responses from users. If positive, the app could keep it; if not, it could change back. Either way, it’s an opportunity to alter how people interact, and at its core, that’s what social media is about.

Kylie Jenner and the Possibility of Being “Self-Made”

closeup photograph of Ben Franklin's face on $100 bill

Kylie Jenner is on the cover of Forbes as the youngest “self-made” woman billionaire, raising questions about the moniker. Dictionary.com’s Twitter feed seemingly criticized the characterization, posting, “Haven’t we gone over this? Self-made: Having succeeded in life unaided.” The post, like most appeals to dictionary definitions, doesn’t clear up the tensions involved in what it means to be a self-made billionaire, or to have your wealth qualified as “self-made” more generally. Instead, the definition suggests that the important quality of the wealth is that it was accrued “unaided”.

Jenner claims that her income was not inherited, so the title is earned, but credits her “reach” via social media and the platform granted by her family connections for making her career possible. Her line of make-up, “Kylie Cosmetics”, is indisputably successful and is set to grow further through a distribution deal with Ulta. Forbes attributes the huge financial success of Kylie’s self-owned company to its low overhead, which is possible for just the reasons Kylie acknowledges: “She announces product launches, previews new items and announces the Kylie Cosmetics shades she’s wearing directly to the 175 million-plus who follow her across Snapchat, Instagram, Facebook and Twitter.” Her mother, Kris Jenner, is in charge of finance and PR for 10% of profits, but the majority of the money in the business goes straight to Kylie.

While the $1 billion that qualifies her as a billionaire (at an even younger age than Mark Zuckerberg) was certainly not inherited, it is difficult to imagine her reaching this point without the systematic privileges of her family and the networking and PR savvy that aided her.

Benjamin Franklin and Oprah Winfrey (along with Mark Zuckerberg) have been called “self-made”. In cases like Franklin’s and Winfrey’s, the “self-made” title can indicate just how much an individual’s financial situation changed over the course of their life (Franklin had little education and was the son of lower-class craftsmen; Winfrey was raised in a rural Mississippi farming community). In 2014, Forbes developed a scoring system for the title “self-made”: at the most basic level, the scores distinguish those who inherited some or all of their fortune (scores 1 through 5) from those who truly made it on their own (6 through 10). Jenner scores a “7” on this scale.

1: Inherited fortune but not working to increase it: Laurene Powell Jobs

2: Inherited fortune and has a role managing it: Forrest Mars Jr.

3: Inherited fortune and helping to increase it marginally: Penny Pritzker

4: Inherited fortune and increasing it in a meaningful way: Henry Ross Perot Jr.

5: Inherited small or medium-size business and made it into a ten-digit fortune: George Kaiser

6: Hired or hands-off investor who didn’t create the business: Meg Whitman

7: Self-made who got a head start from wealthy parents and moneyed background: Rupert Murdoch

8: Self-made who came from a middle- or upper-middle-class background: Mark Zuckerberg

9: Self-made who came from a largely working-class background; rose from little to nothing: Eddie Lampert

10: Self-made who not only grew up poor but also overcame significant obstacles: Oprah Winfrey

The question of what counts as a self-made rich person really may be paraphrased as: how much of your money can we credit you with earning? This is a really tough question, as it more clearly implicates the economic and personal advantages in our society that aren’t equally distributed. The Jenners and Kardashians are monetizing a commodity that they are, in a sense, inheriting – their fame, which consumers are willing to pay for. There is a spectrum, perhaps, between making money for ideas and work that is “your own”, without a significant “leg up” from your family situation, and being handed all of your wealth and influence as a matter of inheritance. There likely is no one who would fit a model of purely self-made wealth – you would need to be plopped down on the earth, have an idea or invention perhaps, and go from there. Anyone outside of that hypothetical is influenced by conditions of circumstance that they inherit, and Kylie Jenner falls into an interesting place along this spectrum where it isn’t literal money she got from her family that undermines her claims to being self-made.

Given the rampant inequality of our economy and how privilege is inherited, however, this is a helpful discussion to have. Today in our legislature, this discussion is being taken up by Sen. Elizabeth Warren (D-MA) in the form of a proposed housing bill. Sen. Warren introduced the American Housing and Economic Mobility Act in September, with a rousing statement about the unequal effect of the housing crisis on Americans of different races. The bill attempts to address the ways that housing markets currently affect classes and races inequitably and serve to exacerbate unequal distributions of wealth. Within the Senate’s version of the bill is an increase in the estate tax, which taxes inheritance. Warren says, “The American Housing and Economic Mobility Act confronts the shameful history of government-backed housing discrimination and is designed to benefit those families that have been denied opportunities to build wealth because of the color of their skin.”

While introducing the bill, Warren shared her disgust with the history of discriminatory policies that affect the economic starting point of members of different races in the United States: “Official policy of this government was to help white people buy homes and to deny that help to black people.”

Housing discrimination is of particular import to developing wealth over a person's life and across generations, says Warren: "a home serves as security that fund other ventures to start a small business or to send a youngster to college. And if grandma and grandpa can hang on to a home and get it paid off they can often pass along an asset that boost the finances for the next generation and the one after that. And that's exactly what white people have done for generations. But not black people." Policies have historically "cut the legs out" from under "communities of color", and this has been particularly harmful recently, as non-white communities bore the brunt of the housing collapse and the Great Recession. According to the Boston Globe, the median net worth of white families living in Boston is $247,500, while the median net worth of black families living in Boston is $8. The ability to develop wealth over generations, largely through housing, is an important step in ameliorating this discrepancy.

The bill would create three million new housing units, improve access to affordable housing through anti-discrimination laws, and invest in families living in historically redlined communities. According to economic analysts, "In many white communities, a young couple's first mortgage is obtained through intergenerational wealth, such as help from in-laws. In poorer communities of color, that intergenerational wealth often does not exist due to the history of segregation. Warren's down-payment assistance would act in lieu of intergenerational wealth, helping buyers start to build wealth themselves through home ownership."

The discussion over what it means to be "self-made" is an important one to have to the extent that it shines light on the different economic starting points of people living in our society. Our financial system has privileged portions of the population for generations; not everyone has monetizable assets that allow wealth to develop. A capitalist system can only hope to be just if it isn't rigged against groups of people and if it can somehow take starting points into consideration. Even Forbes, with its "self-made" scale, agrees.

The Problem with “Google-Research”

photograph of computer screen with empty Google searchbar

If you have a question, chances are the internet has answers: research these days tends to start with plugging a question into Google, browsing the results on the first (and, if you’re really desperate, second) page, and going from there. If you’ve found a source that you trust, you might go to the relevant site and call it a day; if you’re more dedicated, you might try double-checking your source with others from your search results, maybe just to make sure that other search results say the same thing. This is not the most robust kind of research – that might involve cracking a book or talking to an expert – but we often consider it good enough. Call this kind of research Google-research.

Consider an example of Google-researching in action. When doing research for my previous article – Permalancing and What it Means for Work – I needed to get a sense of what the state of freelancing was like in America. Some quick Googling turned up a bunch of results, the following being a representative sample:

‘Permalancing’ Is The New Self-Employment Trend You’ll Be Seeing Everywhere

More Millennials want freelance careers instead of working full-time

Freelance Economy Continues to Roar

Majority of U.S. Workers Will be Freelancers by 2027, Report Says

New 5th Annual “Freelancing in America” Study Finds That the U.S. Freelance Workforce, Now 56.7 Million People, Grew 3.7 Million Since 2014

While not everyone's Googling will return exactly the same results, you'll probably be presented with a similar set of headlines if you search for the terms "freelance" and "America". The picture painted by my results is one in which freelance work in America is booming, and welcome: not only do "more millennials want freelance careers," but the freelance economy is currently "roaring," increasing by millions of people over the course of only a few years. If I were simply curious about the state of freelancing in America, or if I were satisfied with the widespread agreement in my results, then I would probably have been happy to accept the results of my Google-researching, which tell me that freelancing in America is not only healthy but thriving. I could, of course, have gone the extra mile and tried to consult an expert (perhaps I could have found an economist at my university to talk to). But I had stuff to do, and deadlines to meet, so it was tempting to take these results at face value.

While Google-researching has become a popular way to do one’s research (whenever I ask my students how they would figure out the answer to basically any question, for example, their first response is invariably that they Google it), there are a number of ways that it can lead one astray.

Consider my freelancing example again: while the above headlines generally agree with each other, there are reasons to worry about whether they are conveying information that's actually true. One problem is that all of the above articles summarize the results of the same study: the "Freelancing in America" study, mentioned explicitly in the last headline. A little more investigating reveals some troubling information about the study: in addition to concerns I raised in my previous article – including concerns about the study glossing over disparities in freelance incomes, and failing to distinguish between the earning potentials and the differences in the number of jobs across different types of freelance work – the study itself was commissioned by the website Upwork, which describes itself as a "global freelancing platform where businesses and independent professionals connect and collaborate." Such a site, one would think, has a vested interest in presenting the state of freelancing as positively as possible, and so we should at the very least take the results of the study with a grain of salt. The articles, however, merely present information from the study and do little in the way of quality control.

One worry, then, is that by merely Google-researching the issue I can end up feeling overly confident that the information presented in my search results is true: not only is the information I'm reading presented uncritically as fact, but all my search results agree with and support one another. Part of the problem lies, of course, with the presentation of the information in the first place: while I should take these articles with a grain of salt, the way they were written suggests that the websites and news outlets presenting the information took the results of the study at face value. As a result, although it was almost certainly not the intention of the authors of the various articles, they end up presenting misleading information.

The phenomenon of journalists reporting on studies by taking them at face value is unfortunately commonplace in many different areas of reporting. For example, writing on problems with science journalism, philosopher Carrie Figdor argues that since "many journalists take, or frequently have no choice but to take, a stance toward science characteristic of a member of a lay community," they do not possess the relevant skills required to determine whether the information they're presenting is true, and cannot reliably distinguish between those studies that are worth reporting on and those that are not. This, Figdor argues, does not necessarily absolve journalists of blame, as they are at least partially responsible for choosing which studies to report on: if they choose to report on a field that is not producing reliable research, then they should "not [cover] the affected fields until researchers get their act together."

So it seems that there are at least two major concerns with Google-research. The first relates to the way that information is presented by journalists – often lacking the specialized background that would help them better present the information they're reporting on, journalists may end up presenting information that is inaccurate or misleading. The second is with the method itself – while it may sometimes be good enough to do a quick Google and believe what the headlines say, oftentimes getting at the truth of an issue requires going beyond the headlines.

Cancel Culture

close-up image of cancel icon

"Cancelled" is a term that millennials have been using in the past few years to describe people whose political or social status is controversial. Celebrities, politicians, one's peers—even one's own mother could be cancelled if someone willed it. If a person is labeled as cancelled, they are no longer supported morally or financially by the individuals who deemed them so. It's a cultural boycott. But is cancelling really as simple as completely cutting someone off because of their beliefs or actions? The term itself—being cancelled—raises larger questions. What does it accomplish? Is cancel culture something that can be beneficial, or is it just a social media fad?

In 2018, rapper Kanye West not only endorsed the controversial President Donald Trump, but also said that slavery was a choice in an interview with TMZ. West received a ton of backlash from the black community, and some people declared Kanye West cancelled and vowed to no longer listen to his music. On one hand, cancelling Kanye West can be viewed as something positive depending on one's political stance. It questions the impact that celebrities have in the political realm and holds celebrities responsible for their actions by placing them under scrutiny on a viral scale. But at the same time, is Kanye West really cancelled? People still listen to his music. Even after his slavery comment, his most recent album debuted at the top of the Billboard chart. West has also been hosting what is now known as Sunday Service, where he and a group of singers go to a remote location and perform some of his greatest hits. Social media has been loving it, so much so that Sunday Service was brought to Coachella. It's the current sentiment about West that calls into question the impact of cancelling someone.

Can West be un-cancelled if he does something that most of social media enjoys? Is cancelling someone, then, just based on general reactions from social media? If one person declares an individual cancelled, does that mean everyone should consider them cancelled? The obvious answer would be no, but the act of "cancelling" almost works like the transitive property: if you don't cancel someone that everyone else has cancelled, you yourself might risk being cancelled. Can cancelling be just another way to appear hip and knowledgeable – staying up on trends and the news while challenging those who create them? If so, cancelling could simply be interpreted as social media users wanting to stay relevant and maybe even go viral. If that is the case, it would only take agency away from the act of cancelling.

Although cancel culture is heavily associated with celebrities, the hierarchy of who is cancelled can become a bit more complex. Per Billboard, it was revealed that Philip Anschutz, owner of the entertainment conglomerate AEG, the company that oversees Coachella, has supported anti-LGBTQ and climate-change-denial foundations. Anschutz has also shown support for the Republican Party. Coachella is one of the most highly coveted events for millennials to attend, and LGBTQ rights, climate action, and liberalism rank high on their agendas. When major news outlets first began writing about Anschutz and the foundations he supported back in 2017, it was also revealed that Beyoncé would be headlining Coachella, along with popular rap artists such as Kendrick Lamar. Janelle Monáe, a popular hip-hop/R&B singer who identifies as queer, has also performed at Coachella. What do we make of this? Yes, Anschutz is "cancelled," but is Coachella? Some vowed to no longer support Coachella after learning of the foundations that Anschutz supported, but when tickets went on sale after it was announced that Beyoncé would be headlining, they sold out in three hours. So… probably not. But is Janelle Monáe "cancelled"? Kendrick Lamar? Is it even possible to "cancel" Beyoncé?

The benefits of cancelling Anschutz seem minimal when there is still mass support for Coachella. Perhaps in such a case, cancelling does seem like a social media fad, because cancelling Anschutz could be interpreted as a way of easing one's own conscience. After all, individuals who support LGBTQ rights still go to Coachella. But again, cancelling could be a way for social media users to prevent themselves from being cancelled: condemning controversial topics on social media might make one appear favorable and keep one from being shunned there. But maybe such an idea is key to "cancelling" and its overall impact on the social sphere. Yes, cancelling can have a large impact in some instances; public pressure on companies and celebrities can often influence their decisions. But sometimes, "cancelling" is just some random social media users venting their frustration into the endless void that is the internet. Maybe once in a while, their words resound with another user and gain virality. But is Kanye West, or whoever else is cancelled, really seeing these cancelling posts? Some of them have only a few retweets, so they are unlikely to get much traction. In addition, saying someone is cancelled can often be used as a joke, and the distinction on social media between a user being serious and being facetious can often be blurred. It is therefore difficult to ascertain how much agency cancel culture truly has. Still, it does attest to the power of social media and the users who pump content into it.

A Question of Motivation: Moral Reasons and Market Change

image of beached whale with human onlookers

For thousands of years, the practice of hunting whales was exceptionally common. The animals were killed for meat, and, later, for blubber that could be converted into oil—an increasingly valued commodity during the industrial revolution. While whaling provided tremendous benefits to human beings, the practice was, of course, devastating to whale populations and to individual whales. Arguments against the practice were ready at hand. A number of species, such as grays and humpbacks, were being hunted into near extinction. The reduction of the whale population led to changes in aquatic ecosystems. What’s more, the practice was cruel—whaling equipment was crude and violent. Whales under attack often died slowly and painfully and plenty of harpooned whales were seriously injured rather than killed, causing them pain and diminishing the quality of their lives. To complicate matters, whales have enormous brains and live complex social lives. There is much that we don’t know about whale cognition, but there is at least a compelling case to be made that they are very intelligent.

Most countries have banned the practice of whaling, though some native tribes are allowed to continue it on a subsistence basis. One might think that we came to see the error of our ways. Surely the true, unwavering light of reason guided us toward mercy for our cetacean friends? After all, the case raises fundamental philosophical questions. In virtue of what features does a being deserve moral consideration? How should we balance human comfort and well-being against the suffering of non-human animals harmed in its attainment? How much collateral damage is too much collateral damage?

Alas, as Paul Shapiro points out in his book, Clean Meat, it was market forces rather than philosophical arguments that led to the slow decline of whaling practices. When alternative sources of energy, such as kerosene, became cheaper and more readily available than whale oil, consumers quickly changed their consumption habits. So, it was only after a viable alternative became available that people were finally willing to listen to the ethical arguments against the practice.

The way in which the practice of whaling fell into disrepute is a key case study for reflection on an interesting and important set of questions, some empirical and some philosophical. Is it common for people to be motivated by the sheer strength of moral reasons? Are moral considerations hopelessly secondary to concerns related to convenience? If we assume that desirable moral outcomes exist (the reduction of suffering is a plausible candidate), are we justified in changing moral attitudes by manipulating markets? How much time and effort should we spend persuading people to change their consumption habits for moral reasons?

These questions are increasingly salient. In years past, our species had the power to usher in the end of days for countless species. Indeed, technological advances have made it possible for our species to usher in the end of days for life on earth, full stop. We have created products and procedures that pollute our oceans and fundamentally change our atmosphere. What should we do in response?

One might think that the severity of the problem should give rise to a paradigm shift—a move, once and for all, away from the anthropocentric worldview that put us where we are. This would involve seeing our actions and ourselves as part of a larger biosphere. Once we adopted this view, we would recognize that resources are global, and we are just a small, albeit fulminant, part of that larger system. The fact that our actions have consequences for others may also lead to a shift in the way we think about our moral spheres of influence. Rather than thinking of moral obligation as a local matter, we may start to think about the consequences our actions have for populations in locations more impacted by climate change. We may also think about the impact our behavior has on the non-human life occupying the global ecosystem.

Or…not. It may be that such a shift fails to take root. Admittedly, this is philosophically dissatisfying. There seems to be something noble and admirable about living a Socratic life—about knowing oneself and living an examined life. This entails a willingness to reflect on one’s own biases, a disposition to reflect on what is good, all things considered, and to pursue that good.

For change to happen in the way I've just described, it must come from within. In this case, our behavior would change by way of what philosopher John Stuart Mill would call an "internal sanction"—we would be motivated to do what is good out of sheer recognition that the thing in question is good. In the absence of internal sanctions, however, external sanctions may be not just appropriate but crucial. A change in market forces eventually led to conditions under which people could be convinced that whaling was a moral atrocity that needed to be outlawed. Perhaps similar market changes can make the difference on crucial moral issues today. Perhaps if there are viable alternatives to the consumption of flesh, people will open their eyes to the horrors of factory farms. If there are compostable or reusable alternatives to single-use plastics, perhaps that will open the door to a change in attitude about the way our consumption habits affect the planet.

The problem with this approach is that important moral change becomes dependent on non-moral features of the market. The alternative options must be affordable, marketable, and, ultimately, popular. What's more, though the market might be useful for transmitting values, there is nothing inherently moral about it—it can popularize corrosive, ugly change just as easily as it can promote moral progress. In the end, if the market change doesn't stick, neither does the moral change.

Banned Books: Why the Restricted Section Is Where Learning Happens

photograph of caution tape around library book shelves

The books included on high school reading lists have not been discussed nearly as widely as the books not included on those very lists. For years teachers and parents have debated which texts students should be able to read, and what parameters should be utilized to determine whether a text is appropriate for a certain age group. However, this debate has moved far beyond whether books are appropriate and has begun to explore how this form of censorship affects students. An article published in The New York Times discusses the banned books of 2016 and how their banned status reveals important facets of the current American psyche. In fact, the author states that the most prominent themes associated with the banned books of 2016 related to gender, LGBTQIA+ issues, and religious diversity, all of which were themes heavily discussed during the election year.

James LaRue, the director of the Office for Intellectual Freedom, describes his experience receiving reports from concerned parents who worry about the appropriateness of certain texts in their children's school libraries. LaRue, however, does not agree with this method of parenting: "They are completely attached to the skull of the child and it goes all the way up through high school, just trying to preserve enough innocence, even though one year later they will be old enough to marry or serve in the military." This point is echoed by author Mariko Tamaki, who argues that deeming books inappropriate marginalizes groups of individuals and can hurt students who relate to their characters. She states, "We worry about what it means to define certain content, such as LGBTQ content, as being inappropriate for young readers, which implicitly defines readers who do relate to this content, who share these experiences, as not normal, when really they are part of the diversity of young people's lives."

Both of these individuals relay their concern about the influence of banning books on young readers, a point reiterated by Common Sense Media, a non-profit organization that seeks to educate families about promoting safe media for children. Despite its specialization in age-appropriate media for children, the organization encourages parents with the article "Why Your Kid Should Read Banned Books," which outlines how many of the most highly regarded pieces of literature were at some point banned in mainstream society. Their banned status, however, says nothing of the important messages held between those pages. They state, "At Common Sense Media, we think reading banned books offers families a chance to celebrate reading and promote open access to ideas, both which are key to raising a lifelong reader." The organization's support for a conversation about censorship and the importance of standing up for principles of freedom and choice is a critical facet of this continued debate.

On the other side of this debate are concerns about not only violence, language, and substance abuse, but also about how explicit stories of suicide and self-harm may influence young readers who are depressed or suicidal themselves. This concern was heightened by literature such as the popular young adult book Thirteen Reasons Why, which revolves around a teenage girl's suicide. Author Jay Asher has been outspoken about why censorship of his book in particular is harmful to teenagers. In an interview he describes knowing that his book would be controversial: "I knew it was going to be pulled from libraries and contested at schools. But the thing about my book is that a lot of people stumble upon it, but when it's not on shelves, people can't do that. Libraries, to me, are safe spaces, and if young readers can't explore the themes in my book there, where can they?" Asher acknowledges that it is nearly impossible to create a book that will be appropriate for all readers. He recounts talking to a student who was overwhelmed by the contents of the story; the student decided to refrain from finishing the book until she felt completely comfortable, effectively self-censoring.

These attitudes towards censorship reveal troubling social implications when we consider which books are chosen for removal from libraries, as an article published in The Atlantic describes. There is a clear separation between themes of violence and fantasy and the highly censored themes referencing race or sexuality, which reveals a larger issue: the struggles of minority authors in getting children's books published. According to The Atlantic, "this means the industry serves those who benefit from the status quo, which is why most scholars see children's literature as a conservative force in American society." The author reinforces the ideas discussed by adults concerned about the limited access to a broad range of ideas in children's literature, and concludes by stating, "This shared sensibility is grounded in respect for young readers, which doesn't mean providing them with unfettered access to everything on the library shelves. Instead it means that librarians, teachers, and parents curate children's choices with the goals of inspiring rather than obscuring new ideas."

“It Wasn’t ‘Me'”: Neurological Causation and Punishment

photograph of dark empty cell with small slit of sunshine

The more we understand about how the world works, the more fraught the questions of our place in its causal network may seem. In particular, progress in understanding how the mechanisms of our brain influence the outward behavior of our minds consistently raises questions about how we should interpret the control we have over our behavior. If we can explain the way we act in terms of neurological processes embedded in a causal network, in what sense can we preserve an understanding of our behavior as 'up to us'?

This has been a concern for those of us with mental illness and neurological disorders for some time: having scientific accounts of depression, anxiety, mania, and dementia can help target treatment and provide us with tools to navigate relationships with people who don't always behave like 'themselves'. In serious cases, it can inform how we engage with people who have violated the law: there is a rising trend of using "behavioral genetics and other neuroscience research, including the analysis of tumors and chemical imbalances, to explain why criminals break the law."

In a current case, Anthony Blas Yepez is pointing to his diagnosis of a rare genetic abnormality linked to sudden violent outbursts to explain his beating an elderly man to death in a fit of rage in Santa Fe, New Mexico, six years ago. The claim is that his condition explains why he wasn't "fully in control of himself when he committed the crime."

Setting aside our increasing ability to explain the psychological underpinnings of our behavior in causal or scientific terms, our criminal justice system has always acknowledged a distinction between violent crimes committed in states of heightened emotionality and those performed out of more reasoned judgments, finding the latter more egregious. If someone assaults another person immediately after discovering that the person cheated with a significant other, the legal system punishes this behavior less stringently than if the assault takes place after a "cooling off period". This may reflect an acknowledgement that our behavior does sometimes "speak" less for us, or is sometimes less in our control. Yepez's case is of a more systematic sort: his condition subjects him to more dramatic emotionality than the standard distinction contemplates.

Psychological appeals for lesser sentences like Yepez's are successful in about 20% of cases. Our legal system still hasn't quite worked out how to interpret scientific-causal influences on behavior when they are not complete explanations. A condition like Yepez's, or the other psychological conditions we understand better every year, still manifests in complex ways through interaction with environmental conditions, so the explanations it offers fall short of fully determining behavior.

It does seem that there is something relevantly different in these cases; the causal explanations appear distinct. As courts attempt to determine the implications of that difference, we can consider how determining factors affect the way we understand behavior.

John Locke highlights the interplay between what we identify as the working of our will and more external factors with a now-famous thought experiment. Imagine a person in a locked room. If the person wishes to leave the room but cannot, their will is constrained and they cannot act freely in this respect. On the other hand, something seems importantly different if the person is in the locked room but doesn't know the door is locked – say they are in rapt conversation with a fascinating partner and have no desire to leave. The world may be "set up" so that this state of affairs is the only one the person could be in at that moment, but it isn't clear that their will is not free; the constraints seem less relevant.

We can frame the question of the significance of the determination of our wills in another way. While not all of our actions result from conscious deliberation, consider those that do. When you wonder what to eat for lunch, what route to take to your destination, or which option to choose at the mechanic's, what would follow from your certainty that your ultimate decision is determined by the causal network of the world? If, from the perspective of making a decision, we considered ourselves not to be a source of our own behavior, we would fail to act. We would be rendered observers of our own behavior while still occupying the perspective of wondering what to do.

Note an interesting tension here, however: after we decide what to do (to have a taco, take the scenic route, replace the transmission) and perform the relevant action, we can look back at our deliberation and wonder at the influences that factored into it. It often feels like we are in control of our behavior at the time – say, when we weigh tacos against hamburgers and remember how delicious, fresh, and cheap the fish tacos are at a stand nearby, it seems that these considerations lead us to seek out the tacos in a paradigmatic instance of choice.

But what if you had seen a commercial for tacos that day? Or someone had mentioned a delicious fish meal recently? Or how bad burgers are for your health or the environment? What if you were raised eating fish tacos and they have a strong nostalgic pull? What if you have some sort of chemical in your brain or digestive system that predisposes you to prefer fish tacos? If any of these factors were the case, does this undermine the control you had over your behavior, the relevant freedom of your action? How do such factors relate to the case that Locke presents us with – are they more or less like deciding to stay in a locked room you didn’t know was locked?

These questions are worrying enough when it comes to everyday actions, but they carry special import when the behaviors in question significantly impact others. If there is a causal explanation underpinning even the behaviors we take to be up to our conscious deliberation, would this alter the ways we hold one another responsible? In legal cases, having a causal explanation that doesn't apply to typical behavior does lessen the punishment that seems appropriate. Not everyone has a condition that correlates with violent outbursts, which may make such a condition a relevant external factor.

Summit Learning and Experiments in Education

photograph of empty classroom

A recent New York Times article documented a series of student-led protests at a number of public schools throughout the United States against a "personalized learning" program called Summit Learning. The program, supported by Mark Zuckerberg and Priscilla Chan, aims to improve students' education via computer-based individual lessons, and features A.I. designed to actively develop the ideal learning program for each student. The goals of the program seem especially beneficial to the underserved public school systems where the software is being piloted. Although the initial response was positive, parents and students in communities such as McPherson, Kansas, have begun to reject the program. Among their complaints: excessive screen time and its effects on student health; the program's connection to the web, which can expose students to inappropriate content; invasive collection and tracking of personal data; and the decline in human interaction in the classroom.

Each of these points touches on broader issues concerning the ever-greater role technology plays in our lives. There is still a great deal of uncertainty about the best way to integrate technology into education, as well as about the associated harms and benefits. It is probably unwise, then, to attempt to judge the consequences of this particular program in its infancy. It has been poorly received in some cases, but in many others it has been praised.

The more essential question is whether the education of young students should be handled by such a poorly understood mechanism. Some of the people interviewed in the New York Times article expressed the feeling of being “guinea pigs” in an experiment. Summit’s A.I. is designed to get better as it deals with more students, so earlier iterations will always be more “experimental” than later ones. At the same time, it would be irresponsible to risk the quality of a child’s education for the sake of any experiment. Underserved communities like those in which Summit is being applied also deserve some special protection and consideration, because they are more vulnerable to exploitation and abuse. It was precisely because of their generally low-performing schools that many of these communities so eagerly adopted the Summit Learning system in the first place.

One seemingly simple solution proposed by many of the protesting students is to allow opting out of the program. While this would give students a greater degree of agency and help arrive at the optimal learning method for each student, it would also significantly undermine the already limited understanding of the efficacy of the system: if only the most enthusiastic students participate, the results will be understandably skewed. As with other experiments involving human subjects, there is a difficult calculus in weighing the potential knowledge gained against the potential harm to individual subjects. To ensure the integrity of the program as a whole, opting out on an individual basis cannot be permitted, but the alternative is to force whole schools or towns into either participating or not en masse.

Another consideration is whether there is a problem with the premise of Summit Learning itself, that is, "personalized learning." Personalized learning follows the general trend in cutting-edge technology towards customization, individualization, and, ultimately, isolation. Such approaches harm our collective sense of community, but the harm is especially acute in learning environments. Part of education is learning together and, critically, learning to work together. We can see some evidence of this in traditional K-12 school curricula, which have historically centered on the idea that every student learns the same material; in other words, The Catcher in the Rye is only as important as it is because everybody reads it. By removing the collaborative aspect of classroom learning, we run the risk of denying students the opportunity to benefit from different perspectives and develop a common scholastic culture. Furthermore, by implementing isolating technology use in the classroom, schools sanction such practices for students, who may then feel licensed to repeat them outside of school.