The Educational and the Political

In March of last year, the medical school I attend was on the front page of Fox News. The headlines: “Indiana medical students subjected to DEI instruction on gender” and “First-year medical school students exposed to woke ‘sex and gender primer’ lesson.”

The lesson in question was delivered in our first-year anatomy course; when I took the course, it was included in a broader lecture on the embryology of the reproductive organs. This material, though, was explicitly labeled as political — even as indoctrination — by commentators at Fox: to quote the latter article above,

“It’s dangerous for one major reason, and that is that these kinds of ideas are very controversial amongst Americans, and to have a medical world pick one side, if you will, pick one sort of approach in a very controversial ideological area just breeds mistrust.”

The same commentator continued:

“Politicization of the idea about gender is something that’s harming children in America, and that’s something we’re very much against, and political ideas shouldn’t be taught in a course on anatomy.”

Education has always been a topic of political interest, but the rhetoric has recently seemed to reach a fever pitch. Book bans. Bans on Critical Race Theory and the discussion of sexual orientation and gender identity. The partisan destruction and reconstruction of New College in Florida. The list goes on, and the debate has likely reached your doorstep, just as it has mine. But even in a heated debate, the core of these arguments deserves our scrutiny, and this instance centering on medical education provides us with a case study.

What happens when education and politics collide?

*     *     *

The purpose of medical education is to train physicians — to take students with limited experience in medicine (if any) and prepare them to be excellent clinicians. The adjective excellent here is important: competence is the minimum standard, but excellence is the goal. Take a look at the mission statement of any medical school, and you’ll see this idea repeated over and over — and for good reason. You do not just want a competent doctor; you want a good doctor. You may take a competent physician in a pinch, but the doctor you want is one who not only understands the basic pathology and treatment of your malady, but is also kind, compassionate, and attentive. You want a doctor who seeks to understand the larger picture of your health, who understands your goals, who makes you feel safe, comfortable, and heard. Doctors who don’t integrate these facets of medicine into their practice may be competent, but they are not successful.

As such, a key part of the medical curriculum extends beyond the basics of pathophysiology and treatment: medical students are expected not only to be competent in the science of medicine, but also to demonstrate the virtues of a good physician. In the clinical curriculum at my medical school, an attending physician provides feedback to students through a rubric with 14 criteria — only four of which relate to medical knowledge. The remaining 10 encompass these broader attributes: how does the student seek out and account for a patient’s social context? How do they communicate with patients and their families? Do they demonstrate integrity as a member of the healthcare team? Do they recognize and respect cultural differences? Are they empathetic?

It’s important to reiterate that the reason students are assessed in this way by their attending physicians is that attending physicians are assessed in this way by patients: medical knowledge is only part of what you want from a doctor. This is why, before students reach the clinical phase of their education, the first two years of medical school are spent in the classroom, where we not only learn what causes such-and-such disease, but also learn the social determinants of health and how to cultivate the “soft skills” that are fundamental to clinical practice. As part of this, we are intentionally exposed to a variety of experiences designed to contextualize the idea of health in its social moorings: we learn how to safety-plan with victims of domestic violence, how people of different ages or cultures may have different expectations regarding communication and treatment planning, how to care for people with disabilities, and how, for some patients, spirituality will play a large role in their health care. We also learn from members of the LGBTQIA+ community and are taught how to be a supportive ally. All of these experiences are important for training excellent, empathetic physicians.

With this context, we see that the claim made by the commentators at Fox — that a sex and gender primer is a political intrusion into medical education — is both wrong and a vast oversimplification.

Above, we discussed how medical education involves training in both clinical and social knowledge and skills — and the lesson at the center of this controversy is an excellent example of teaching at that intersection. The early slides define the terms to be used: sex as a primarily biological construct, and gender as a primarily social construct. The lesson further breaks down both of these terms, drawing the distinction between genotypic and phenotypic sex and elaborating on the differences between gender identity and gender expression. These are useful distinctions to draw in clinical medicine because these concepts are independent of one another: genotypic sex does not entail a phenotypic sex (or vice versa), and gender identity and expression can vary in their overlap. Further, an entire section of the lesson is devoted to elaborating on how students, as future healthcare providers, can be inclusive of gender- and sexual-diverse patients: for example, by using appropriate pronouns and person-first and anatomy-based language. All of this is important information for medical students, who will care for people across these spectra: with this knowledge, students will be better prepared to build a therapeutic relationship with these patients and provide the excellent healthcare they deserve.

The lesson also provides students with the scientific, clinical knowledge one would expect of a medical school lecture. It makes the scientifically correct claim that sex and gender are non-binary; and though it is only briefly summarized (as, in my curriculum, the material was covered in detail in another course), the lesson shows how differences in sexual development can place individuals along an entire spectrum of sex. You can have two X chromosomes and male-like genitalia; you can have an X chromosome and a Y chromosome and female-like genitalia. A biological, scientifically informed concept of sex admits of no binary, and medicine has understood and discussed these differences in sexual development for nearly 50 years. Again: medical students will care for patients with these differences in sexual development, and understanding these differences is a vital part of building a therapeutic relationship and providing excellent healthcare.

We see, then, that the content of this lesson is immediately and well connected to the purpose of medical education. I do not contend that this content is politically uncontroversial, but we see here a very different picture than that presented by the commentators at Fox. This lesson is not indoctrination, it is not ideological, and it does not represent the insertion of political content into an educational context. Rather, pre-existing educational content which aligns with the stated and agreed-upon goal of medical education — the training of future physicians to provide excellent care — has been politicized. And though it may seem like splitting hairs, the distinction is incredibly important: there is not a flood of politically-charged content into medical education at Indiana University, but a flood of politically-motivated criticism of medical education’s established curriculum and mission.

But in these terms, the debate takes on a very different contour: a commentator can only decry the inclusion of this material in medical education if they disagree with training physicians to provide compassionate, socially-informed care to gender- and sex-diverse communities. And maybe they do. But I won’t put words in their mouths; instead, I’ll state the broader point. Politics and education frequently collide, but the blame should not always be placed on education: in many cases, the educator does not decide to be political; the politician decides that the educator is. This is why debates about political content in the educational context frequently miss the point: the question is not always whether political content should be discussed in the educational environment, but whether content that is well-connected to an educational mission should be considered political in the first place. I believe that, in the case of the sex and gender primer at my medical school and in many similar cases, the answer is no, and the mechanics of the debate deserve to be questioned.

No Fit Place: On Books Bound in Human Skin

Many museums around the world hold and display human remains. Whether in the form of the ancient Egyptian mummies housed within the British Museum, the skeleton of the notorious murderer William Burke at Edinburgh’s Anatomical Museum, or the University of Oxford’s Amazonian “shrunken heads,” the relationship between the deceased and museums is an intimate one. It should surprise no one, then, that museums are the site of significant ethical debate. After all, where we find death, we often find controversy.

For example, and possibly a topic with which many will be familiar, there is considerable debate surrounding how museums obtain their exhibits and whether this has any bearing on how, or even if, they should be displayed. If a body falls into a museum’s stewardship via less-than-official means – like graverobbing – should this affect whether the museum should exhibit it? We might think the answer is yes, as such an action would be patently unethical (and likely illegal). But is this true across the board? Does time play a role here? After all, many of the bodily remains of ancient peoples were taken from their burial places many decades, even centuries, ago. Should this matter when compared to the educational and social value such remains might produce?

My point so far is not to pick sides in such debates (although I do have a side). Instead, I want to highlight that when it comes to how museums treat remains, there is a long pedigree of philosophical and political debate, as well as legislation. This is not the case, however, when it comes to another venue in which the public can engage in academic and educational pursuits – libraries.

Now, this might not strike you as particularly relevant; after all, libraries don’t hold human remains, they hold books. But this is, strictly speaking, not true: some libraries house books bound in human skin.

This practice, called anthropodermic bibliopegy, became fashionable in the 19th century. While it conjures up images of occult rituals and human sacrifice – think of the Necronomicon – in reality it has closer ties to far more legitimate professions, such as medicine and criminal punishment.

One of the most famous examples comes from Edinburgh in the form of a name already mentioned in this piece – William Burke. Over ten months in 1828, William Burke and his accomplice, William Hare, terrorized the streets of Edinburgh, murdering at least sixteen people. This was not simply mindless violence, however, as Burke and Hare sold their victims’ bodies for dissection in anatomy lectures. The pair were eventually caught, and, while it is unclear what happened to Hare after he turned King’s evidence, Burke was sentenced to death by hanging and, with a sense of poetic justice, his corpse was publicly dissected and his skeleton placed on display.

What is interesting here, however, is that a section of Burke’s skin was removed and used to cover a small notebook, which is now housed in Edinburgh’s Surgeons’ Hall Museums. In a macabre touch, the front of the book reads, in faded gold text, BURKE’S SKIN POCKET BOOK.

This is not the only example of such a morbid tome, however. Though rare, such items can be found at multiple institutions, including Harvard’s Houghton Library, Philadelphia’s Mütter Museum, and Brown University’s John Hay Library, to name a few. And, while the reasons for such unusual bookbinding vary, from punishment to memorialization to collection, it is essential to remember that each book is bound in the remains of a person who, regardless of how they lived their life, was once a living, breathing individual. Thus, how we treat these items matters. These are not just books, nor are they simply bodies; they exist somewhere in between. And while these items are not exclusively found in libraries, given that they are books and that it can be hard to tell what exactly they are made from (differentiating between human and pig skin is difficult without testing), libraries must acknowledge the ethically tricky situation they find themselves in: not just as curators of books, but as potential guardians of human remains.

So, what should a library do in such a scenario? How should it respond if, after testing, an item in its collection is found to be bound in human skin?

One option is to destroy it, as such an item might be deemed too offensive to be allowed to continue existing. This might be because it draws up unpleasant connotations linked to how it came into being or from whom the remains come (say, the skin is from a murderer or was obtained through bodily desecration). It may also be offensive not because of who it is made from or how it came to be, but simply because of what it is. As such, it might be that the best way forward is to destroy the piece respectfully, thus preventing anyone in the future from obtaining it or causing any further offence. Yet, while this may be the simplest option, it is far from uncontroversial. The item holds its own story, and many may learn much from it: not just in terms of how it was created, but also in what it represents and the narrative of how it came to be in the form it is. Thus, to destroy it is to abandon the knowledge and legacy that the item has accrued. The issue could be further complicated if the remains used in the item were offered willingly. Is it acceptable to destroy such an item if doing so contradicts the wishes of the person from whose remains it was made?

An alternative, then, might be to continue holding onto the book but to keep it away from the public: to house it in a secure room in the depths of a library and, while not forgetting about it, let it slip from public and professional consciousness. While potentially avoiding some issues related to the offence it causes, this does little to address one’s responsibilities towards the remains, and may very well compound any such dereliction of duties. After all, if one is aware that an item in one’s collection contains parts of a human body, keeping it locked away in storage might be seen as ignoring the issue. Few would find this a satisfactory solution if the item in question were a severed human head. Should the fact that the remains no longer resemble a body part and are now part of a book really make this option any more palatable?

So, if destroying the book and hiding it away are not options (or at least not problem-free options), then what is left? Well, the final course of action considered here is to openly acknowledge what the items are and how they came to be, and to make them available for the public to come and learn about. After all, if museums can use remains for educational purposes, then why can’t libraries? Burke’s remains, for example, were put on display so that future people might reflect on his actions. It’s an easy argument to make that his skin, regardless of whether it is wrapped around a book or not, should serve the same purpose.

Yet, this contravenes what libraries are typically for. While they are places of learning, and their roles are ever-changing and expanding, asking them to be home to bodily remains and to take on all the additional responsibilities that come with such a role might be asking too much.

Ultimately, then, whether libraries should retain their morbid, part-human, part-bibliographic items is far from a simple question, as it draws in concerns about what the library’s role in society is, how we treat the deceased, and whether the form of bodily remains alters our responsibilities to them. And, while this is not a pressing question given how few such books exist, we are talking about our duties to the deceased. Thus, both sensitivity and decisiveness are required.

Children Deserve Less Screen Time in Schools

School closures because of COVID-19 should teach us a lot about the future of educational reform. Unfortunately, we aren’t learning the lessons we should.

For many years, educational innovators championed visions of personalized learning. What these visions have in common is the belief that one-size-fits-all approaches to education aren’t working, and that technology can do better.

Before we place too much hope in technological solutions to educational problems, we need to think seriously about COVID school closures. School districts did their best to mobilize technological tools to help students, but we’ve learned that students are now further behind than ever. School closures offered the perfect opportunity to test the promise of personalized learning. And though technology-mediated solutions are most certainly not to blame for learning loss, they didn’t rise to the occasion.

To be clear, personalized learning has its place. But when we think about where to invest our time and attention when it comes to the future of schooling, we must expand where we look.

I’ve felt this way for many years. Back in 2011, The New York Times published an article reporting on the rise of Waldorf schooling in Silicon Valley. While technologists were selling the idea that technology would revolutionize learning, they were making sure their own children stayed far away from screens, especially in schools. They knew then what we are slowly finding out: technology, especially social media, has the power to harm student mental health. It also has the potential to undermine democracy. Unscrupulous agents are actively targeting teenagers, teaching them to hate themselves and others.

It is surprising that, in all the calls for parents to take back the schools, most of the attention is being paid to what is and isn’t in libraries and what is and isn’t being assigned. Why do Toni Morrison’s novels provoke so much vitriol, while the fact that kids, starting in kindergarten, watch so many inane “educational videos” on their “smart boards” provokes none?

What is more, so many school districts provide children with laptops and some even provide mobile hotspots so children can always be online. We are getting so upset about what a student might read that we neglect all the hateful, violent, and pornographic images and texts immediately available to children and teenagers through school-issued devices.

Parents are asking what is assigned and what is in libraries when they should ask: How many hours a day do my children spend on a screen? And if they are spending a great deal of time on their screens: What habits are they developing?

If we focused on these questions, we’d see that our children are spending too much time on screens. We’d learn that our children are shying away from work that is challenging because they are used to thinking that learning must be fun and tailored to them.

We must get children outside their comfort zones through an encounter with content that provokes thinking. Playing games, mindlessly scrolling, and responding to personalized prompts don’t get us here. What we need is an education built on asking questions that provoke conversation and engagement.

Overreliance on technology has a narrowing function. We look for information that is easy to assimilate into our preferred ways of thinking. By contrast, a good question is genuinely disruptive, just as a good conversation leaves us less sure of what we thought we knew and more interested in learning about how and what other people think.

In our rush for technological solutions, we’ve neglected the art of asking questions and cultivating conversation. Expecting personalized learning to solve problems, we’ve forgotten the effort it takes to engage each other and have neglected the importance of conversation.

Rather than chase the next technological fix — fixes that failed us when we needed them most — we should invest in the art of conversation. Not only will this drive deeper learning, but it can help address the mental health crisis and rising polarization, because conversation teaches students that they are more than what any algorithm thinks they are.

Real conversation reminds us that we are bigger than we can imagine, and this is exactly what our children deserve and what our society needs. Classrooms need to move students away from an overreliance on technology and into a passionate engagement with their potential and the wonder of new and difficult ideas.

Our screens are driving us into smaller and smaller worlds. This makes us sad, angry, anxious, and intolerant. Many young people don’t have the willpower to put the screen down, so schools need to step up. Initially, it won’t be easy. Screens are convenient pacifiers. But our children shouldn’t be pacified, they deserve to be engaged. And this is where we need to devote energy and attention as we approach the upcoming academic year.

Moral Education in an Age of Ideological Polarization: Teaching Virtue in the Classroom

The Program for Leadership and Character at Wake Forest University was recently awarded $30.7 million by Lilly Endowment Inc. to create a national higher education network focused on virtue formation. Approximately $7 million will go towards further strengthening the program at Wake Forest, while $23 million will be earmarked for funding initiatives on character at other colleges and universities.

While this project is a big win for Lilly, which supports “the causes of community development, education and religion,” it also raises pressing questions about the role of the moral virtues within higher education. In the wake of the Unite the Right Rally in Charlottesville, Virginia, professor Chad Wellmon wrote in The Chronicle of Higher Education that the University of Virginia could not unambiguously condemn the demonstrations. This is because universities, Wellmon wrote, “cannot impart comprehensive visions of the good,” making them “institutionally incapable of moral clarity.” On Wellmon’s view, universities should focus solely on the life of the mind, leaving profound moral questions to churches, political affiliations, and other civic organizations.

Supporting this vision of the university, many conservatives have complained that higher education is insufficiently neutral when it comes to moral and political values. In rejecting courses on Black history deemed to lean too far left, Florida Governor Ron DeSantis claimed that citizens “want education, not indoctrination.”

If higher education ought to remain neutral and eschew a deep moral vision, however, then how is it possible for universities to stay true to their mission while, like Wake Forest, simultaneously engaging in character education?

One thing that can be said is that institutions of higher education already do engage in virtue education. Due to their commitment to help their students think well, colleges and universities encourage their students to be curious, open-minded, and intellectually humble. As even Wellmon acknowledges, forming the life of the mind requires robust intellectual virtues, including “an openness to debate, a commitment to critical inquiry, attention to detail, and a respect for argument.”

Along with these intellectual virtues, higher education supports a number of civic virtues as well. Because colleges and universities are tasked with preparing students to be responsible citizens, they often aim at promoting civility, tolerance, and civic engagement. These virtues equip graduates to contribute within liberal democracies, coupling their intellectual development with civic preparation.

The obvious objection to these examples is that the virtues in question are not moral virtues. Intellectual and civic virtues may be well within the purview of higher education, but should professors really take it upon themselves to teach compassion, courage, generosity, integrity, and self-control?

While these might seem strange in the context of the modern university, it is interesting to note that higher education does emphasize at least one moral virtue – the virtue of honesty. Regardless of the institution, academic honesty policies are ubiquitous, forbidding cheating, plagiarism, and other forms of academic dishonesty. We have, then, at least one obvious example of a moral virtue being promoted at the university level. If the moral virtues generally seem so out of place at colleges and universities, then why does honesty get a pass?

The intellectual virtues find their place within the academic world because of the ways they promote the mission of higher education. The flourishing life of the mind requires the intellectual virtues, and so there are no complaints when professors help students form their intellectual characters.

But honesty also plays an important role in thinking well. If, every time a student encounters an intellectual challenge, they turn to cheating or plagiarism, they are missing out on an opportunity to do the difficult work of developing the intellectual virtues. Academic dishonesty short-circuits their ability to grow in the life of the mind, making it important for instructors to not only encourage the intellectual virtues, but to guide students towards honesty as well.

From this we can see that, while universities do not typically engage in moral education, this is not because they must always remain neutral on moral issues. Instead, universities simply do not see the other moral virtues as necessary for their mission.

But such an omission is not always well-motivated, as there are many moral virtues that are integral to the goals that universities have for their students. Consider, for example, the goal of helping students prepare for careers post-graduation. While employers might be looking for candidates who are open-minded and intellectually curious, they likely also hope to hire professionals with honesty, integrity, and self-control. Employers want doctors who are compassionate, professors who are humble, and lawyers who are just.

If college presidents, deans, and provosts see it as part of their mission to prepare students for the working world, then there is a place for character formation on campus. While some may contend that job training is not the most important mission of the university, it is nevertheless a significant one, making the task of developing morally virtuous teachers, nurses, and engineers a central mission of higher education.

This emphasis on moral virtue, of course, still allows universities to leave space for students to develop their own visions of what a good and meaningful life might look like. Emphasizing the moral virtues does not require compromising the ideological neutrality necessary for a diverse and challenging university experience. Instead, emphasizing character can only deepen and strengthen what higher education has to offer, teaching students to not only be good thinkers, but to be good people as well.

With Students Like These, Who Needs a Fatwa?

On August 12, 2022, a twenty-four-year-old man nearly murdered Salman Rushdie for something he wrote before the man was born. The assailant set upon Rushdie as he was about to deliver a public lecture at the Chautauqua Institution in New York, stabbing him multiple times in the neck, stomach, eye, and chest and inflicting over twenty wounds. That night, Rushdie’s life seemed to hang in the balance. But the next day, the world learned that he would survive, albeit with grave and permanent injuries, including the loss of sight in one eye and the use of one hand. Nevertheless, Ayatollah Khomeini’s religious decree, or fatwa, issued in 1989 and calling for Rushdie’s assassination, remains unfulfilled.

Yet reading about recent statements and actions of students and administrators at Hamline University in St. Paul, Minnesota, one could be forgiven for concluding that Khomeini’s message has, at least in part, carried the day.

Last fall, an adjunct professor there was fired for displaying a fourteenth-century painting of the Prophet Muhammed in her class after a student complained that showing the image was an act of “disrespect.” In the fatwa against Rushdie, Khomeini explained that Rushdie’s murder was warranted because his novel, The Satanic Verses, “insult[s] the sacred beliefs of Muslims . . . .” Of course, no one at Hamline was calling for the lecturer’s blood. But that showing the image was, as school officials averred, “undeniably inconsiderate, disrespectful, and Islamophobic,” and that avoiding “disrespecting and offending” Islam should always “supersede academic freedom,” are ideas that seem more at home in an Islamic theocracy than in a liberal democracy.

Not being an expert in the history or theology of Islamic iconoclasm, I will not engage with the argument that showing an image of the Prophet is always and clearly Islamophobic. It’s worth noting, though, that the image was created for an illustrated world history commissioned by a Muslim ruler of Persia and written by a Muslim convert. That fact in itself suggests a further fact that, at least outside of Hamline’s Office of Inclusive Excellence, is widely acknowledged: there is a broad range of views within Islam about the propriety of such depictions. As Amna Khalid trenchantly observes, it is the assumption that the Muslim community is a monolith with respect to this issue that seems Islamophobic.

But besides bolstering the argument that showing the image served a legitimate pedagogical purpose and was not aimed at causing offense, the contention that the image is not insulting to all Muslims is somewhat beside the point. The real question is: even if it were, would that automatically make showing it in a university classroom impermissible?

Suppose that, in a class about the history of European political satire or journalistic ethics, a professor displayed the cartoons whose publication by the French satirical magazine Charlie Hebdo led to the murder of twelve people in 2015. These cartoons are undeniably irreverent and, yes, even insulting and offensive to some. But unless showing them has no pedagogical benefit under any set of circumstances — unless it is undeniably an attempt simply to insult students — academic freedom absolutely supersedes these students’ hurt feelings. The very idea of an institution dedicated to the production and dissemination of knowledge, through the exchange of ideas and arguments among diverse participants, each with their own unique perspective, depends upon this principle. If anyone’s bare claim to be disrespected, offended, or insulted is sufficient to justify censorship, then there is almost no topic of any human interest that can be discussed with the candor required to examine it at any level of depth or sophistication.

It seems, however, that a non-trivial number of students at Hamline disagree with me. When the university’s student newspaper, The Oracle, published a defense of the lecturer written by Prof. Mark Berkson, the chair of the Hamline Department of Religion, the ensuing backlash led its editorial board to retract the article within days. In an unsigned editorial explaining the move, the board wrote that because one of its “core tenets” is to “minimize harm,” the publication “will not participate in conversations where a person must defend their lived experience or trauma as topics of discussion or debate.” In other words, publishing the chair’s defense adversely affected other students by “challenging” their “trauma.”

There are two features of this argument I find interesting: the “minimize harm” principle, and the use of the term “trauma.” Both, I think, can be fruitfully examined in light of a useful distinction the philosopher Sally Haslanger draws between a term’s manifest concept and its operative concept.

According to Haslanger, a term’s manifest concept is determined by the meaning that language users understand a term to have; it is the term’s “ordinary” or “dictionary” definition. By contrast, its operative concept is determined by the properties or entities actually tracked by the linguistic practice in which the term is employed. In her work on race, Haslanger observes that the manifest concepts associated with the term “race” and similar terms include some biological or physical components, yet the way we actually apply these terms does not track any physical characteristic (think of how the term “white” was once not applied to Sicilians).

Using this distinction, we can see how the editorial board performs a neat sleight of hand in its use of the term “trauma.” The dictionary definition or manifest concept of “trauma” is something like Merriam-Webster’s “a disordered psychic or behavioral state resulting from severe emotional stress or physical injury.” When The Oracle’s editorial board uses the term, and further, implicitly asserts that no one should question whether a person’s trauma is warranted or justified, this sounds eminently reasonable because of the term’s manifest concept. But when we look at how the board actually uses the term, it becomes clear that its operative concept is something like “insult, offense, or a feeling of being disrespected.” Once we see this, the claim that a person’s “trauma” should never be questioned begins to look quite doubtful. A person may be mistaken in feeling insulted or offended, and in such situations, it may sometimes be permissible to respectfully point this fact out to them. This is precisely what Prof. Berkson was trying to do in his defense of the lecturer. And once again, I must insist that it is even sometimes justifiable to cause offense in the classroom in order to achieve a legitimate pedagogical goal.

There is another sleight of hand at play in the board’s “minimize harm” principle. The board invokes the Pulitzer Center’s characterization of this principle as involving “compassion and sensitivity for those who may be adversely affected by news coverage.” On its face, this seems beyond reproach — particularly since the Center’s definition clearly implies that newspapers may justifiably publish material that adversely affects others, so long as they do so in a sensitive and compassionate manner. But the board’s application of the principle to this case reveals that for it, “minimize harm” really means “cause no harm,” or even “cause no offense.”

While the principle of minimizing harm implicitly calls for exercising moral judgment in weighing whether the harm caused is justified by the benefits to be gained, and moral courage in defending that judgment when it is challenged, the principle of causing no harm is, for journalists, equivalent to a demand that they not do their job.

For example, if The Oracle published an article uncovering massive corruption in the Office of Inclusive Excellence that led to multiple school officials’ termination, it would cause concrete harm to those officials. “Cause no offense” is, of course, an even more craven abdication of the journalist’s vocation.

There is a final point that I think is worth making about this sorry affair. Before showing the painting to her students, the lecturer reportedly took every possible precaution to safeguard their exceedingly fragile mental health. She made that particular class activity optional. She provided a trigger warning. And she explained exactly why she was showing the painting: to illustrate how different religions have depicted the divine and how standards for such depictions change over time. She behaved like a true pedagogue. None of this prevented the mindless frenzy that followed. This suggests that instead of helping students cope with “trauma,” trigger warnings and the like may actually prime students to have strong emotional reactions that they would not otherwise have. Indeed, the complainant told The New York Times that the lecturer’s provision of a trigger warning actually proved that she shouldn’t have shown the image. What a world.

Is Academic Philosophy Pointless?

Back when I taught philosophy, certain students — often the ones most interested in the subject — would invariably confront me at the end of the semester with the same complaint. “I’ve read brilliant arguments for diametrically opposed positions,” they would say, “and I’ve read brilliant critiques of every argument. Now I don’t know which position to choose. And if I can’t choose a position, what was the point of working through them all?” At the time, I didn’t have a good answer for them. I think I have a better answer now — more on that in a bit — but I fundamentally sympathize with their complaint. There is, indeed, something futile about academic philosophy. Or so I will argue.

I left professional philosophy two years ago for a variety of reasons, but mainly because, after three years on the job market, the prospect of securing a tenure-track position at a decent institution appeared dim. Since then, I have had some time to reflect on what I decided to do with my third decade on Earth. I’ve concluded that I’m very happy to have studied philosophy for over ten years, but that I do not in any way regret leaving the profession. In this column, I will explain why I feel this way. Part of the explanation comes back to my students’ complaint.

First, why was getting a PhD worth it for me? I came to graduate school with a burning desire to answer two questions that had puzzled me since high school: what is the nature of moral facts, and what is the true ethical theory? (I didn’t use this language in high school, of course).

After spending a decade thinking about the various answers philosophers have mooted, I arrived at conclusions that remain reasonably satisfactory to me. Even leaving aside the friends I made, the brilliant people I got to talk to, and the other things I learned, getting those answers alone made the experience worthwhile.

I am, however, all too aware that the answers I’ve come to, and the arguments for them that I find convincing, strike a good proportion of academic philosophers — many much smarter and more able than I — as less than compelling. Some have even said so in print. I would expect no less from philosophers, since they are trained to analyze arguments — particularly to see where they may fail.

This leads me to why I don’t regret leaving the profession. The problem is not that I dislike disagreements. The issue I have with academic philosophy is that most of the discipline’s research questions are inherently unresolvable. By “resolution,” I mean the provision of answers or solutions which the preponderance of the available evidence and arguments favor over all others.

In other words, academic philosophy’s questions do not remain unresolved because they’re hard, or because we just haven’t discovered the best arguments or sufficient evidence yet. They are unresolvable in principle, because of their very nature.

Among my reasons for thinking this is that most of the basic questions in academic philosophy have remained pretty much the same for over 2000 years. I’m not an expert in metaphysics or epistemology, but I can confirm that this is true with respect to the most important questions in ethics. Moreover, many prominent contemporary answers to these ethical questions can be found in some form in the classic ancient texts. Jeremy Bentham may have invented the term “utilitarianism” to describe his ethical theory, but the same basic approach can be found in Platonic dialogues and the gnomic pronouncements of Epicurus. And really, if Bentham, John Stuart Mill, Henry Sidgwick, J.J.C. Smart, G.E. Moore, either of the Peters (Singer and Railton), James Griffin, Walter Sinnott-Armstrong, or Richard Brandt — among many, many others — have not come up with arguments for consequentialism that establish it as the theory more likely to be correct than all the others, how likely could it be that such arguments are still out there, waiting to be discovered?

The fact of continued disagreement over these fundamental questions among some of the most brilliant minds of many generations is at least suggestive that these issues will never be resolved — and not because they’re just hard.

Before I explain why I think this fact may make much of academic philosophy pointless, I must observe that judging by their conversation, some philosophers are not willing to concede the essential irresolvability of philosophical questions. I have actually met Kantians who think deontology is not just the right ethical approach, but obviously the right approach. You’d have to be crazy to be a consequentialist. I don’t know how seriously to take this talk; it may be partly explained by various institutional and cultural incentives to engage in intellectual chest-thumping. Still, the fact of persistent disagreement highlighted in the last paragraph surely makes the view that deontology — or consequentialism or virtue ethics — is obviously the correct approach to ethics somewhat farcical. You’d have to be crazy to think plausible answers to deep philosophical problems are ever obviously true or false.

The reason I think that the irresolvability of philosophical problems makes academic philosophy substantially pointless is that academic disciplines that purport to be in the business of evaluating truth claims should be able, at least in principle, to make progress. By “progress,” I mean nothing other than resolving the research questions or problems that characterize that discipline. Note that this view allows that the research questions themselves might change over time; for example, resolving some questions might raise more questions. But the inability of a truth claim-oriented discipline to resolve its research questions is a problem that has to be addressed.

There are a number of ways an advocate for academic philosophy might respond. First, she might point out that there are other truth claim-oriented disciplines in which unresolvable questions are commonplace. All agree that these disciplines are not pointless, so the inference from unresolvable questions to pointlessness is flawed. I’m unable to fully assess this argument because I’m not sufficiently familiar with every truth claim-oriented discipline, and all the advocate of academic philosophy really needs is one example. But I could imagine her invoking some other humanities discipline, like history. Historical questions are often unresolvable, but history’s value as a discipline seems unassailable.

History, though, is different from philosophy in two ways. First, some of the unresolvable questions in history are questions of how best to interpret sets of historical facts, and it’s not clear that the primary criterion for evaluating historical interpretations is related to truth rather than, say, fruitfulness or explanatory power. Did the Holocaust inevitably flow from the logic of Nazism, or was it not inevitable until it became official state policy sometime in 1941? Historians arguing this question all draw on the same body of evidence: for example, the genocidal implications of Hitler’s Mein Kampf; his 1939 speech in which he threatened that if another world war began, European Jewry would be annihilated; his plan to deport Jews to Madagascar after France fell in 1940; and records of the 1942 Wannsee conference. The debate concerns not what the facts are, or whether we have good reasons for believing them, but rather which interpretation of the facts better or more fruitfully explains the Nazi genocide.

More importantly, to the extent that historical questions concern historical truth claims, their irresolvability is a function of the paucity of evidence, not the nature of the questions themselves.

Looked at one way, the Holocaust question hinges on the motives of the historical actors involved. We may simply be unable to determine those motives by a preponderance of the available evidence. This implies that new evidence could come to light that would resolve this question. By contrast, as I’ve suggested, philosophical questions are not unresolvable because we don’t have enough evidence at the moment. They are unresolvable by nature.

It’s no doubt true that many questions in a wide range of disciplines remain, and perhaps always will remain, unresolved. In general, that’s because we lack the evidence required to prove that a particular answer is more likely to be true than all the others. This does not make these disciplines futile, in part because we can’t know a priori whether sufficient evidence will become available to resolve their research questions. We have to do the research first. Moreover, the fact is that many disciplines do resolve their characteristic questions.

A second argument for academic philosophy is that it makes progress of a sort, even if it cannot resolve its questions. Philosophical progress consists in refining competing answers to philosophical questions, as well as the questions themselves. You can find the fundamental tenets of consequentialism in the ancient texts, but modern philosophers have arguably explored the theory at a much higher level of detail, sophistication, and thoroughness. Similarly, modern philosophers have been able to refine our understanding of a classic question in metaethics — why be moral? — with some even arguing that the question isn’t well-formed. Thus, even if academic philosophy doesn’t resolve its questions, its exploration of the logical space of answers is a good enough reason to support it. (Incidentally, this iterative process of refinement has also led philosophers to develop an elaborate jargon that makes cutting-edge articles in ethics nearly impossible for laypeople to understand, but in my view that’s not objectionable in itself.)

Although I grant that this is a form of progress, and it certainly requires great intellectual ingenuity, I’m not sure continual refinement alone can justify a discipline.

Suppose that the question of whether the universe is heliocentric were for some reason unresolvable in principle. In this world, astronomers are doomed merely to add more and more elaborate conceptual curlicues to their preferred heliocentric or geocentric theories for all eternity — and they know it. Would this question still be worth the effort and resources expended to try to answer it?

A third argument is that learning and doing philosophy are valuable in all sorts of ways for those who engage in these activities. Among other things, they help individuals and societies think through problems they may actually confront in real life. This is obviously true for subfields like ethics and political philosophy, but it applies to epistemology and metaphysics as well. For example, I have argued that a certain view about the nature of race underlies conservatives’ arguments against affirmative action. The question of what races are is a metaphysical question.

There are other very good reasons to learn and do philosophy. Philosophy is intellectually stimulating. It helps develop critical reasoning skills. It promotes both open-mindedness and a healthy skepticism. It helps us ask better questions and to evaluate possible answers.

Academic philosophers do and learn philosophy. They therefore benefit in all of the ways I’ve described, and it might be argued that this justifies the discipline. Obviously, this is a dubious argument, since it seems implausible that benefits to practitioners of the discipline alone can justify a discipline. More compelling is the fact that academic philosophers teach students, thereby enabling and encouraging the latter to do and learn philosophy and reap the benefits.

I do not dispute that it is valuable for academic philosophers to teach philosophy. The trouble is that, in my view, the contemporary discipline of academic philosophy is not primarily focused on pedagogy or public outreach. When I was in graduate school, instruction in pedagogy was, to put it charitably, an afterthought. American Philosophical Association meetings, which largely serve as showcases for new research, remain the most important annual events in the academic philosophy world. Of course, some professional philosophers practice the discipline differently from others. At some colleges, research output does not even factor into tenure decisions, and professors therefore focus more on teaching. Yet no one rises in the profession by winning a teaching award or publishing an opinion piece in The New York Times. Prominence in academic philosophy is primarily a function of publishing books and articles that other professional philosophers admire.

So, the value of learning and doing philosophy fails to justify the discipline of philosophy as currently practiced — or so it seems. But the advocate for academic philosophy may reply that effective teaching or public philosophizing actually requires ongoing philosophical research. Imagine if philosophers had stopped doing research in moral philosophy after G.E.M. Anscombe published her famous article, “Modern Moral Philosophy,” in 1958. (In that article, Anscombe declared that “[i]t is not profitable for us at present to do moral philosophy”). In this world, students could study, and professors teach, only books and articles that are at least sixty years old. They could not, for instance, examine any critiques of the arguments found in that article that were published after it appeared. Wouldn’t that be, well, crummy?

This argument has some visceral force for me. It gains added force when we remember that philosophers certainly make a kind of progress by exploring the logical space of possible answers.

Philosophers can enlighten the public about these possible answers, which we sometimes call “traditions” (e.g., the just war tradition), which can in turn help the public think through real-world problems. Because continual research can uncover more possible answers, it can be valuable for this reason.

Does this justify academic philosophy as currently practiced? Frankly, I’m not sure. In my experience, many philosophical articles are written as if aimed at resolving their questions — something I’ve argued they cannot do in principle. As I’ve mentioned, there is also a heavy emphasis on criticizing opposing views. Is this the best way of exploring the logical space of plausible answers? Adam Smith famously observed that “it is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.” His point is that markets work by exploiting self-interest in ways that redound to society’s benefit. Similarly, the defender of academic philosophy might argue that the best way to explore the logical space of answers to a philosophical question is to incentivize philosophers to believe, or at least to argue as if, their preferred answer actually resolves the question. In other words, what looks to me like a mistaken belief among those Kantians who think or at least act as if consequentialism is obviously wrong may redound to the benefit of philosophy as a whole. Perhaps this is true, but I’m just not sure.

To recap, I’ve argued so far that since academic philosophy cannot resolve its research questions, its only hope of justification lies in its ability to disseminate philosophical ideas and modes of thinking to the broader public. Doing this effectively may require a certain amount of research aimed at exploring the logical space of answers and identifying those that seem most plausible. But for me, it is an open question whether the way research is currently conducted is the best way to explore the logical space of answers.

I must conclude, then, that much of academic philosophy as currently practiced may, indeed, be pointless. Curiously, though, I think I have a better answer to my students’ complaint about why they should study philosophy despite the inherent irresolvability of its questions. As a layman who seeks answers to philosophical questions, one need not wait until arguments are found showing that one answer is more likely to be correct than all the others in order to endorse that answer. One can rationally choose whatever answer is most subjectively satisfactory, as long as it is at least as plausible as any other answer. In addition, the value of learning and doing philosophy does not solely consist in finding answers to difficult questions. As Socrates shows us, it also lies in learning how to ask the right questions.

Book Bans, the First Amendment, and Political Liberalism

Book bans in public schools are not new in America. But since 2021, they have reached levels not seen in decades, the result of efforts by conservative parents, advocacy groups, and lawmakers who view the availability of certain books in libraries or their inclusion in curricula as threats to their values. In one study that looked at just the nine-month period between July 1, 2021 and March 31, 2022, the free expression advocacy organization PEN America found nearly 1,600 instances of individual books being banned in eighty-six school districts with a combined enrollment of over two million students. Of the six most-banned titles, three (Gender Queer: A Memoir, All Boys Aren’t Blue, and Lawn Boy) are coming-of-age stories about LGBTQ+ youth; two (Out of Darkness and The Bluest Eye) deal principally with race relations in America; and one (Beyond Magenta: Transgender Teens Speak Out) features interviews with transgender or gender-neutral young adults. 41% of the bans were tied to “directives from state officials or elected lawmakers to investigate or remove books.”

The bans raise profound ethical and legal questions that expose unresolved issues in First Amendment jurisprudence and within political liberalism concerning the free speech rights of children, as well as the role of the state in inculcating values through public education.

What follows is an attempt to summarize, though not to settle, some of those issues.

First, the legal side. The Supreme Court has long held that First Amendment protections extend to public school students. In Tinker v. Des Moines Independent Community School District, a seminal Vietnam War-era case about student expression, the Court famously affirmed that students in public schools do not “shed their constitutional rights to freedom of speech or expression at the schoolhouse gate.” Yet student expression in schools is limited in ways that would be unacceptable in other contexts; per Tinker, free speech rights are to be applied “in light of the special characteristics of the school environment.”

Accordingly, Tinker held that student speech on school premises can be prohibited if it “materially and substantially disrupts the work and discipline of the school.”

The Court has subsequently chipped away at this standard, holding that student speech that is not substantially and materially disruptive — including off-campus speech at school-sponsored events — can still be prohibited if it is “offensively lewd and indecent” (Bethel School District No. 403 v. Fraser), or can be “reasonably viewed as promoting illegal drug use” (Morse v. Frederick). In the context of “school-sponsored expressive activities,” such as student newspapers, the permissible scope for interference with student speech is even broader: in Hazelwood School District v. Kuhlmeier, the Court held that censorship and other forms of “editorial control” do not offend the First Amendment so long as they are “reasonably related to legitimate pedagogical concerns.”

Those cases all concerned student expression. A distinct issue is the extent to which students have a First Amendment right to access the expression of others, either through school curricula or by means of the school library. Book banning opponents generally point to a 1982 Supreme Court case, Board of Education, Island Trees Union Free School District No. 26 v. Pico, to support their argument that the First Amendment protects students’ rights to receive information and ideas and, as a consequence, public school officials cannot remove books from libraries because “they dislike the ideas contained in those books and seek by their removal to prescribe what shall be orthodox in politics, nationalism, religion, and other matters of opinion.”

There are, however, three problems with Pico from an anti-book banning perspective. First, those frequently cited, broad liberal principles belong to Justice Brennan’s opinion announcing the Court’s judgment. Only two other justices joined that opinion, with Justice Blackmun writing in partial concurrence and Justice White concurring only in the judgment. Thus, no majority opinion emerged from this case, meaning that Brennan’s principles are not binding rules of law. Second, even Brennan’s opinion conceded that school officials could remove books from public school libraries over concerns about their “pervasive vulgarity” or “educational suitability” without offending the First Amendment. This concession may prove particularly significant in relation to books depicting relationships between LGBTQ+ young adults, which tend to include graphic depictions of sex. Finally, Brennan’s opinion drew a sharp distinction between the scope of school officials’ discretion when it comes to curricular materials as opposed to school library books: with respect to the former, he suggested, officials may well have “absolute” discretion. Thus, removals of books from school curricula may be subject to a different, far less demanding constitutional standard than bans from school libraries. In short, Pico is a less-than-ideal legal precedent for those seeking to challenge book bans on constitutional grounds.

The question of what the law is, of course, differs from the question of what the law should be. What principles should govern public school officials’ decisions regarding instructional or curricular materials and school library books?

A little reflection suggests that the Supreme Court’s struggle to articulate clear and consistent standards in the past few decades may be due to the fact that this is a genuinely hard question.

Political liberalism — the political philosophy that identifies the protection of individual liberty as the state’s raison d’être — has traditionally counted freedom of expression among the most important individual freedoms. Philosophers have customarily offered three justifications for this exalted status. The first two are broadly instrumental: according to one view, freedom of expression promotes the discovery of truth; according to another, it is a necessary condition for democratic self-governance. An important non-instrumental justification is that public expression is an exercise of autonomy, hence intrinsically good for the speaker.

The instrumental justifications seem to imply, or call for, a corresponding right to access information and ideas. After all, a person’s speech can only promote others’ discovery of truth or help others govern themselves if that speech is available to them. Simply having the unimpeded ability to speak would not contribute to those further goods if others were unable to take up that speech.

Yet even if the right of free speech implies a right to access information and ideas, it may be plausibly argued that the case for either right is less robust with respect to children. For one thing, children generally have less to offer in terms of scientific, artistic, moral, or political speech that could promote the discovery of truth or facilitate democratic self-governance, and since they are not fully autonomous, their speech-acts are less valuable for them as exercises of their autonomy. For another, since children generally are intellectually and emotionally less developed than adults, and also are not allowed to engage in the political process, they have less to gain from having broad access to information and ideas.

Obviously, even if sound, the foregoing argument only establishes lesser rights of free speech or informational access for children, not no such rights. And the case for lesser rights seems far weaker for teenagers than for younger children. Finally, the argument may be undermined by the state and society’s special interest in educating the young, which may in turn provide special justification for more robust free speech and informational access rights for children. I will return to this point shortly.

All the states of the United States, along with the federal government, recognize an obligation to educate American children. To fulfill that obligation, states maintain public schools, funded by taxation and operated by state and local government agencies, with substantial assistance from the federal government and subject to local, state, and federal regulation. As we’ve seen, the Supreme Court has mostly used the educational mission of the public school as a justification for allowing restrictions on students’ free speech and informational access rights inasmuch as their exercise would interfere with that mission.

Thus, the Court deems student speech that would disturb the discipline of the school, or books that would be “educationally unsuitable,” as fair game for censorship.

This is not radically different from the Court’s approach to speech in other public institutional contexts; for example, public employees’ speech is much more restricted than speech in traditional public forums. The combination of the considerations adduced in the last paragraph, together with the idea that speech and informational access can be legitimately restricted in public institutions, may lead one to conclude that student expression and informational access in public schools can be tightly circumscribed as long as the restrictions serve a “legitimate pedagogical purpose.”

This conclusion would, I think, be overhasty. The overriding pedagogical purpose of the public school does not cleanly cut in favor of censorship; in many ways, just the opposite. Educating students for citizenship in a liberal democracy must surely involve carefully exposing them to novel and challenging ideas. Moreover, mere exposure is not sufficient: the school must also encourage students to engage with such ideas in a curious, searching, skeptical, yet open-minded way. Students must be taught how to thrive in a society replete with contradictory and fiercely competing perspectives, philosophies, and opinions. Shielding students from disturbing ideas is a positive hindrance to that goal. This is not to deny that some content restrictions are necessary; it is merely to claim that the pedagogical mission of the public school may provide reason for more robust student free speech and informational access rights.

But what about conservatives’ objections — I assume at least some of them are made in good faith — to the “vulgarity” of certain books, irrespective of their intellectual content? Their determination to insulate students from graphic descriptions of sex might seem quixotic in our porn-saturated age, and one might think it no worse than quixotic. In fact, insofar as these objections derive from the notion that it is the job of public schools to “transmit community values,” as Brennan put it in Pico, they raise an important and unresolved problem for political liberalism.

Many versions of political liberalism hold that the state should strive to be neutral between the competing moral perspectives that inevitably exist in an open society.

The basic idea is that for the sake of both political legitimacy and stability, the state ought to be committed to a minimal moral framework — for example, a bill of rights — that can be reasonably accepted from different moral perspectives, while declining to throw its weight behind one particular “comprehensive doctrine,” to use John Rawls’s phrase.

For example, it would be intuitively unacceptable if state legislators deliberated about the harms and benefits of a particular policy proposal in terms of whether it would please or enrage God, or of its tendency to help the public achieve ataraxia, the Epicurean goal of serene calmness. One explanation for this intuition is that such deliberation would violate neutrality in employing ideas drawn from particular comprehensive doctrines, whether secular or religious, that are not part of that minimal moral framework with which most of the public can reasonably agree.

If state neutrality is a defensible principle, it should also apply to public education: the state should not be a transmitter of community values, at least insofar as those values are parochial and “thick,” rather than universal and “thin.” Concerns about children’s exposure to graphic depictions of sex may be grounded in worries about kinds of harm that everyone can recognize, such as psychological distress or, for certain depictions, the idea that they encourage violent sexual fantasies that might later be enacted in the real world. But conservatives’ worries might also be based in moral ideas that don’t have much purchase in the liberal moral imagination — ideas about preserving sexual purity or innocence, or about discouraging “unnatural” sexual conduct like homosexuality. These ideas, which are evidently not shared by a wide swath of the public, do not have a place in public education policy given the imperative of state neutrality.

Unfortunately, while perhaps intuitively compelling, the distinction between an acceptably “minimal” moral framework and a “comprehensive doctrine” has proved elusive. For example, are views about when strong moral subjecthood begins and ends necessarily part of a comprehensive doctrine, or can they be inscribed in the state’s minimal moral framework? Even if state neutrality can be adequately defined, many also question whether it is desirable or practically possible. Thus, it remains an open question whether the transmission of parochial values is a legitimate aim of public education.

Public educators’ role in mediating between students and the universe of ideas is and will likely remain the subject of ongoing philosophical and legal debate. However, this much seems clear: conservative book bans are just one front in a multi-front struggle to reverse the sixty-year trend of increasing social liberalization, particularly in the areas of sex, gender, and race.

On an Imperative to Educate People on the History of Race in America

photograph of Selma anniversary march at Edmund Pettus Bridge featuring Barack Obama and John Lewis

Many people don’t have much occasion to observe racism in the United States. This means that, for some, knowledge about the topic can only come in the form of testimony. Most of the things we know, we come to know not by investigating the matter personally, but instead on the basis of what we’ve been told by others. Human beings encounter all sorts of hurdles when it comes to attaining belief through testimony. Consider, for example, the challenges our country has faced when it comes to controlling the pandemic. The testimony and advice of experts in infectious disease are often tossed aside and even vilified in favor of instead accepting the viewpoints and advice from people on YouTube telling people what they want to hear.

This happens often when it comes to discussions of race. From the perspective of many, racism is the stuff of history books. Implementation of racist policies is the kind of thing that it would only be possible to observe in a black and white photograph; racism ended with the assassination of Martin Luther King Jr. There is already a strong tendency to engage in confirmation bias when it comes to this issue — people are inclined to believe that racism ended years ago, so they are resistant and often even offended when presented with testimonial evidence to the contrary. People are also inclined to seek out others who agree with their position, especially if those people are Black. As a result, even though the views of these individuals are not the consensus view, the fact that they are willing to articulate the idea that the country is not systemically racist makes these individuals tremendously popular with people who were inclined to believe them before they ever opened their mouths.

Listening to testimonial evidence can also be challenging for people because learning about our country’s racist past and about how that racism, present in all of our institutions, has not been completely eliminated in the course of fewer than 70 years, seems to conflict with their desire to be patriotic. For some, patriotism consists in loyalty, love, and pride for one’s country. If we are unwilling to accept American exceptionalism in all of its forms, how can we count ourselves as patriots?

In response to these concerns, many argue that blind patriotism is nothing more than the acceptance of propaganda. Defenders of such patriotism encourage people not to read books like Ibram X. Kendi’s How to be an Anti-racist or Ta-Nehisi Coates’ Between the World and Me, claiming that this work is “liberal brainwashing.” Book banning, either implemented by public policy or strongly encouraged by public sentiment, has occurred so often and so nefariously that if one finds oneself on that side of the issue, there is good inductive evidence that one is on the wrong side of history. Responsible members of a community, members who want their country to be the best place it can be, should be willing to think critically about various positions, to engage and respond to them rather than simply avoid them because they’ve been told that they are “unpatriotic.” Our country has such a problematic history when it comes to listening to Black voices that when we’re being told we shouldn’t listen to Black accounts of Black history, our propaganda sensors should be on high alert.

Still others argue that projects that attempt to understand the full effects of racism, slavery, and segregation are counterproductive — they only lead to tribalism. We should relegate discussions of race to the past and move forward into a post-racial world with a commitment to unity and equality. In response to this, people argue that to tell a group of people that we should just abandon a thoroughgoing investigation into the history of their ancestors because engaging in such an inquiry causes too much division is itself a racist idea — one that defenders of the status quo have been articulating for centuries.

Dr. Martin Luther King Jr. beautifully articulates the value of understanding Black history in a passage from The Autobiography of Martin Luther King, Jr.:

Even the Negroes’ contribution to the music of America is sometimes overlooked in astonishing ways. In 1965 my oldest son and daughter entered an integrated school in Atlanta. A few months later my wife and I were invited to attend a program entitled “Music that has made America great.” As the evening unfolded, we listened to the folk songs and melodies of the various immigrant groups. We were certain that the program would end with the most original of all American music, the Negro spiritual. But we were mistaken. Instead, all the students, including our children, ended the program by singing “Dixie.” As we rose to leave the hall, my wife and I looked at each other with a combination of indignation and amazement. All the students, black and white, all the parents present that night, and all the faculty members had been victimized by just another expression of America’s penchant for ignoring the Negro, making him invisible and making his contributions insignificant. I wept within that night. I wept for my children and all black children who have been denied a knowledge of their heritage; I wept for all white children, who, through daily miseducation, are taught that the Negro is an irrelevant entity in American society; I wept for all the white parents and teachers who are forced to overlook the fact that the wealth of cultural and technological progress in America is a result of the commonwealth of inpouring contributions.

Understanding the history of our people, all of them, fully and truthfully, is valuable for its own sake. It is also valuable for our actions going forward. We can’t understand who we are without understanding who we’ve been, and without understanding who we’ve been, we can’t construct a blueprint for who we want to be as a nation.

Originally published on February 24th, 2021

Color Blindness and Cartoon Network’s PSA

photograph of small child peeking through his hands covering his face

Cartoon Network’s latest anti-racist PSA is undeniably clever. “See Color” takes place on the set of a PSA, where Amethyst, a Crystal Gem from the show Steven Universe (don’t ask me what this means), leads a couple of tots in a song about color blindness.

“Color blindness is our game, because everyone’s the same! Everybody join our circle, doesn’t matter if you’re white or black or purple!”

Amethyst isn’t buying it. “Ugh, who wrote this?” she says. “I think it kinda matters that I’m purple.” The children register their agreement.

“Well, I’m not an alien,” says the Black child, “but it definitely matters to me that I’m Black.”

“Yeah, it makes a difference that I’m white,” the white child chimes in. “The two of us get treated very differently.”

The Black child explains further: “My experience with anti-Black racism is really specific…But you won’t see any of that if you ‘don’t see color.’”

The idea that color blindness is deficient as a means of extirpating racism — because it blinds people to existing discrimination and invalidates legitimate race-based affirmative action — is not new. Indeed, the rejection of the philosophy and practice of color blindness has by now become the new orthodoxy in academic and left-leaning circles. That this rejection has trickled down to kids’ shows is surely a powerful measure of its success.

Conservative critics complain that the new anti-color blindness position is antithetical to Dr. Martin Luther King, Jr.’s dream of a society in which people are judged by the content of their character rather than the color of their skin. This is a mistake. To see this, it is useful to understand the distinction in political philosophy between ideal theory and non-ideal theory. 

The distinction was first introduced by John Rawls in his classic A Theory of Justice. According to Rawls, ideal theory is an account of what society should aim for given certain facts about human nature and possible social institutions. Non-ideal theory, by contrast, addresses the question of how the ideal might be achieved in practical, permissible steps, from the actual, partially just society we occupy.

Those who reject color blindness can see the color blindness envisioned by King as a property of an ideal society, a society in which racism does not exist. In that society, the color of a person’s skin really does not matter to how they are in fact treated; hence, it is something we can and ought to ignore in our treatment of them. Unfortunately, we don’t live in that society, and we ought not pretend that we do. Instead, we ought to recognize other people’s races so that we may treat them equitably, taking into account the inequitable treatment to which they have been and continue to be subjected.

But just as the norms which we must follow in a non-ideal society are perhaps different from those we ought to follow in an ideal society, so the norms we ought to teach our children should perhaps be different from the ones adults ought to follow. And there is a danger in teaching children to “see color” while also asking them, as we still do, to embrace King’s vision: it may very easily lead to confusion, or worse, a rejection of color blindness as an ideal. After all, how many children are equipped to understand the distinction between ideal and non-ideal theory? Imagine white children criticizing King as a racial reactionary because of his insistence that in his ideal society, judgments of people’s merits would not take their race into account.

On the other hand, perhaps risking this outcome is better than the alternative: another generation of white children who believe that because race shouldn’t matter in some ideal society, it therefore ought not matter to us. Can we really afford to risk another generation of white people who believe that the claim that Black lives matter is somehow antithetical to the claim that all lives matter? Perhaps not.

There are good reasons to reject color blindness as a philosophy and practice for the real world: it leads us to ignore actual discrimination and vitiates the justification for race-based affirmative action. But there are limits to what children can be asked to understand, and ensuring that they are neither led astray nor confused requires careful thought.

The Continued Saga of Education During COVID-19

photograph of empty elementary school classroom filled with books and bags

In early August, Davis County School District, just north of Salt Lake City, Utah, announced its intention to open K-12 schools face-to-face. All of the students who did not opt for an online alternative would be present. There would be no mandatory social distancing because the schools simply aren’t large enough to allow for it. Masks would be encouraged but not required. There was significant pushback to this decision. Shortly thereafter the district announced a new hybrid model. On this model, students are divided into two groups. Each group attends school two days a week on alternating days. Fridays are reserved for virtual education for everyone so that the school can be cleaned deeply. In response to spiking cases, Governor Herbert also issued a mask mandate for all government buildings, including schools. Parents and students were told that the decision would remain in place until the end of the calendar year.

On Tuesday, September 15th, the school board held a meeting that many of the parents in the district did not know was taking place. At this meeting, in response to the demands of a group of parents insisting upon returning to a four or even five-day school week for all students, the board unanimously voted to change direction mid-stream and switch to a four-day-a-week, all-students-present model. Many of these same parents were also arguing in favor of lifting the mask mandate in the schools, but the school board has no power to make that change.

Those advocating for a return to full-time, in-person school are not all making the same arguments. Some people are single parents trying to balance work and educating their children. In other households more than one adult might be present, but they might all need to be employed in order to pay the bills. In still other families, education is not very highly valued. There are abusive and neglectful homes where parents simply aren’t willing to put in the work to make sure that their children are keeping up in school. Finally, for some students, in-person school is just more effective; some students learn better in face-to-face environments.

These aren’t the only positions that people on this side of the debate have expressed. For political, social, and cultural reasons, many people haven’t taken the virus seriously from the very beginning. These people claim that COVID-19 is a hoax or a conspiracy, that the risks of the virus have been exaggerated, and that the lives of the people who might die as a result of contracting it don’t matter much because they are either old or have pre-existing conditions and, as a result, they “would have died soon anyway.”

Still others are sick of being around their children all day and are ready to get some time to themselves back. They want the district’s teachers to provide childcare and they believe they are entitled to it because they pay property taxes. They want things to go back to normal and they think if we behave as if the virus doesn’t exist, everything will be fine and eventually it will just disappear. Most people probably won’t get it anyway or, if they do, they probably won’t have serious symptoms.

Parents and community members in favor of continuing the hybrid model fought back. First and foremost, they argued that the hybrid model makes the most sense for public health. The day after the school board voted to return to full-time in-person learning, the case numbers in Utah spiked dramatically: Utah saw its first two days of numbers exceeding 1,000 new cases. It is clear that spread is happening at the schools. Sports are being cancelled, and at a significant number of schools in the district, students are contracting the virus, spreading it, and being asked to quarantine after exposure.

Those in favor of the hybrid model argue that it is a safe alternative that provides a social life and educational resources to all students. On this model, all students have days when they get to see their friends and get to work with their teachers. If the switch to a four-day-a-week schedule without social distancing measures happens, the only students who will have in-person access to friends and teachers will be those whose families aren’t taking the virus seriously and aren’t concerned about the risks of spreading it to teachers, staff, and the community at large. It presents particular hardship for at-risk students, who might have to choose the online option not only for moral reasons, but also to avoid putting their own lives in jeopardy. Those making these arguments emphasize that the face-to-face model simply isn’t fair.

Advocates of this side of the debate also point out that we know that this virus is affecting people of color at a more significant rate, and the evidence is not yet in on why this is the case. The children who are dying of COVID-19 are disproportionately Black and Hispanic. The face-to-face option has the potential to disproportionately impact students of color. If they attend school, they are both more likely than their white classmates to get sick and more likely to die. Many of these students live in multi-generational homes. Even if the students don’t suffer severe symptoms, opening up the schools beyond the restrictions put in place by the hybrid model exposes minority populations to a greater degree of risk.

Slightly less pressing, but still very important, considerations on this side of the debate have to do with changing directions so abruptly in the middle of the term. The school board points out that students that don’t want to take the risk of attending school four days a week can always just take part in the online option, Davis Connect. There are a number of problems with this. First, Davis Connect isn’t simply an extension of the school that any given child attends; it is an independent program. This means that if students and their families don’t think it is safe to return to a face-to-face schedule, they lose all their teachers and all of the progress that they have made in the initial weeks of the semester. Further, the online option offers mostly core classes. High school students who chose the online option would have to abandon their electives — classes that in many cases they have come to enjoy in the initial weeks of the semester. Some students are taking advanced placement or dual-enrollment courses that count for college credit. These students would be forced to give up that credit if they choose the online option. The result is a situation in which families may feel strongly coerced to allow their children to attend school in what they take to be unsafe conditions and in a way that is not consistent with their moral values as responsible members of the community.

Those on this side of the argument also point out that community discussions about “re-opening the schools” tend to paint all students with the same brush. The evidence does not support doing so. There is much that we still don’t know about transmission and spread among young children. We do know that risk increases with age, and that children and young adults ages 15-24 constitute a demographic that is increasingly contracting and spreading the virus. What’s more, students at this age are often willful and defiant. With strict social distancing measures in place and fewer students at the school, it is more difficult for the immature decision-making skills of teenagers to cause serious public health problems. It is also important to take into account the mental health of teenagers. Those on the other side of the debate claim that the mental health of children this age should point us in the direction of holding school every day. In response, supporters of the hybrid model argue that there is no reason to think that a teenager’s mental health depends on being in school four days rather than two. Surely two days are better than none.

Everyone involved in the discussion has heard the argument that the numbers in Davis County aren’t as bad as they are elsewhere in the state. In some places in the area, schools have shut down. In a different district not far away, Charri Jenson, a teacher at Corner Canyon High, is in the ICU as a result of spread at her school. The fact that Davis County numbers are, for now, lower than the rates at those schools is used to justify lifting restrictions. There are several responses to this argument. First, it fails to take into consideration the causal role that the precautions are playing in the lower number of cases. It may well be true that numbers in Davis County are lower (but not, all things considered, low) because of the precautions the district is currently taking. Other schools that encountered significant problems switched to the hybrid model, which provides evidence of its perceived efficacy. Second, the virus doesn’t know about county boundaries, and sadly, people in the state are moving about and socializing as if there were no pandemic. The virus moves, and the expectation that it will move to Davis County to a greater degree is reasonable. You don’t respond to a killer outside the house by saying, “He hasn’t made his way inside yet, time to unlock the door!”

To be sure, some schools have opened up completely and have seen few to no cases. This is a matter of both practical and moral luck. It is a matter of practical luck that no one has fallen seriously ill and that no one from those schools has had to experience the anguish of a loved one dying alone. It is a matter of moral luck because those school districts, in full possession of knowledge of the dangers, charged forward anyway. They aren’t any less culpable for deaths and health problems — they made the same decisions that school districts that caused deaths made.

A final lesson from this whole debate is that school boards have much more power than we may be ordinarily inclined to think. There are seven people on this school board and they have the power to change things dramatically for an entire community of people and for communities that might be affected by the actions of Davis County residents. This is true of all school boards. This recognition should cause us to be diligent as voters. We should vote in even the smallest local elections. It matters.

Essential Work, Education, and Human Values

photograph of school children with face masks having hands disinfected by teacher

On August 21st, the White House released guidance that designated teachers as “essential workers.” One of the things that this means is that teachers can return to work even if they know they’ve been exposed to the virus, provided that they remain asymptomatic. This is not the first time that the Trump administration has declared certain workers or, more accurately, certain work to be essential. Early in the pandemic, as the country experienced a decline in the availability of meat, President Trump issued an executive order proclaiming that slaughterhouses were essential businesses. The result was that they did not have to comply with quarantine ordinances and could, and were expected to, remain open. Employees then had to choose between risking their health or losing their jobs. Ultimately, slaughterhouses became flash points for massive coronavirus outbreaks across the country.

As we think about the kinds of services that should be available during the pandemic, it will be useful to ask ourselves, what does it mean to say that work is essential? What does it mean to say that certain kinds of workers are essential? Are these two different ways of asking the same question or are they properly understood as distinct?

It might be helpful to walk the question back a bit. What is work? Is work, by definition, effort put forward by a person? Does it make sense to say that machines engage in work? If I rely on my calculator to do basic arithmetic because I’m unwilling to exert the effort, am I speaking loosely when I say that my calculator has “done all the work”? It matters because we want to know whether our concept of essential work is inseparable from our concept of essential workers.

One way of thinking about work is as the fulfillment of a set of tasks. If this is the case, then human workers are not, strictly speaking, necessary for work to get done; some of it can be done by machines. During a pandemic, human work comes with risk. If the completion of some tasks is essential under these conditions, we need to think about whether those tasks can be done in other ways to reduce the risk. Of course, the downside of this is that once an institution has found other ways of getting things done, there is no longer any need for human employees in those domains on the other side of the pandemic.

Another way of understanding the concept of work is that work requires intentionality and a sense of purpose. In this way, a computer does not do work when it executes code, and a plant does not do work when it participates in photosynthesis. On this understanding of the concept of work, only persons can engage in it. One virtue of understanding work in this way is that it provides some insight into the indignity of losing one’s job. A person’s work is a creative act that makes the world different from the way it was before. Every person does work, and the work that each individual does is an important part of who that person is. If this way of understanding work is correct, then work has a strong moral component and when we craft policy related to it, we are obligated to keep that in mind.

It’s also important to think about what we mean when we say that certain kinds of work are essential. The most straightforward interpretation is to say that essential work is work that we can’t live without. If this is the case, most forms of labor won’t count as essential. Neither schools nor meat are essential in this sense — we can live without both meat and education.

When people say that certain work is essential, they tend to mean something else. For some political figures, “essential” might mean “necessary for my success in the upcoming election.” Those without political aspirations often mean something different too, something like “necessary for maintaining critical human values.” Some work is important because it does something more than keep us alive; it provides the conditions under which our lives feel to us as if they are valuable and worth living.

Currently, many people are arguing for the position that society simply cannot function without opening schools. Even a brief glance at history demonstrates that this is empirically false. The system of education that we have now is comparatively young, as are our attitudes regarding the conditions under which education is appropriate. For example, for much of human history, education was viewed as inappropriate for girls and women. In the 1600s, Anna Maria van Schurman, a famous child prodigy, was allowed to attend school at the University of Utrecht only on the condition that she do so behind a barrier — not to protect her from COVID-19 infested droplets, but to keep her very presence from distracting the male students. At various points in history, education was viewed as inappropriate for members of the wealthiest families — after all, as they saw it, learning to do things is for people who actually need to work. There were also segments of the population that for reasons of race or status were not allowed access to education. All of this is just to say that for most of recorded history, it hasn’t been the case that the entire population of children has been in school for seven hours a day. Our current system of K-12 education didn’t exist until the 1930s, and even then there were barriers to full participation.

That said, the fact that such a large number of children in our country have access to education certainly constitutes significant progress. Education isn’t essential in the first sense that we explored, but it is essential in the second. It is critical for the realization of important values. It contributes to human flourishing and to a sense of meaning in life. It leads to innovation and growth. It contributes to the development of art and culture. It develops well-informed citizens that are in a better position to participate in democratic institutions, providing us with the best hope of solving pressing world problems. We won’t die if we press pause for an extended period of time on formal education, but we might suffer.

Education is the kind of essential work for which essential workers are required. It is work that goes beyond simply checking off boxes on a list of tasks. It involves a strong knowledge base, but also important skills such as the ability to connect with students and to understand and react appropriately when learning isn’t occurring. These jobs can’t be done well when those doing them either aren’t safe or don’t feel safe. The primary responsibilities of these essential workers can be satisfied across a variety of presentation formats, including online formats.

In our current economy, childcare is also essential work, and there are unique skills and abilities that make for a successful childcare provider. These workers are not responsible for promoting the same societal values as educators. Instead, the focus of this work is to see to it that, for the duration of care, children are physically and psychologically safe.

If we insist that teachers are essential workers, we should avoid ambiguity. We should insist on a coherent answer to the question essential for what? If the answer is education, then teachers, as essential workers, can do their essential work in ways that keep them safe. If we are also thinking of them as caregivers, we should be straightforward about that point. The only fair thing to do once that is out in the open is to start paying them for doing more than one job.

Removing Monuments, Grappling with History

photograph of statue of Confederate general Robert E. Lee with spray-painted writing on plinth

In the wake of nationwide protests against racial discrimination by the police, politicians and activists in a number of American cities have called for the removal of monuments to Confederate leaders from public spaces. The U.S. military even expressed its willingness to rename military bases named after Confederate generals. Some activists took matters into their own hands, toppling statues or defacing them with red-painted slogans and symbols.

Supporters of removal argue that Confederate monuments harm people of color by conveying messages of support for white supremacy. Critics allege that there is a slippery slope from Confederate figures to the Founding Fathers or Abraham Lincoln. They also claim that removing monuments is tantamount to an Orwellian erasure of history, the sort of practice one would expect in totalitarian regimes, not democracies. So, what should we do with the statues? 

Let’s examine the arguments in greater detail. The argument that Confederate monuments harm people of color is based on a claim about what the monuments mean, or what messages they convey. The intentions of their creators are a particularly important source of their meaning, since they determine such basic facts as what and whom they represent, as well as the values they express. Most monuments to the Confederacy were erected either in the wake of Reconstruction or during the Civil Rights movement, when African-Americans in the South were agitating for greater political power and social equality, and they were intended to express opposition to these developments. Even apart from this history, monumental, idealized depictions of leaders of a state dedicated to the perpetuation of racial slavery are reasonably interpreted as endorsements of the values the Confederacy embodied. And when these monuments are sited on public land, as most are, this can be reasonably interpreted as conveying the endorsements of the public and the state.

Why does this matter? As the philosopher Jeremy Waldron points out, public art and architecture are important means by which society and government can provide assurances to members of vulnerable groups that their rights and constitutional entitlements will be respected. Such assurances are an important part of people’s sense of safety and belonging. But when the public art of a society instead conveys endorsements of subordination and discrimination, this robs members of vulnerable groups of these assurances, transforming the public world into a hostile space and encouraging withdrawal into the private sphere. Thus, vulnerable groups that are intimidated by monuments that express approval for their subordination may be less able to advance their political, social, and economic interests. Importantly, none of these baneful consequences turn on anyone’s being merely offended by racist monuments.

What about the claim that tearing down Confederate monuments will inevitably lead to the removal of monuments to the Founders and other beloved figures? There is a kernel of truth to this argument: questioning the appropriateness of honoring Confederates likely will lead to questioning society’s attitudes towards other historical figures. But it is not clear that this should not happen. At the same time, there are morally relevant differences between some historical figures and others. For this reason, reducing the harms caused by monumental depictions of some historical figures need not always require removing them from public space. What government needs to do with respect to those monuments it wishes to keep on public display is (1) forthrightly acknowledge the problematic aspects of a historical figure’s legacy; (2) endeavor to reduce the harms that might be caused by the monument; and (3) provide an adequate justification for not removing the monument from the public space. For example, while Abraham Lincoln’s actions towards Native Americans were reprehensible on the whole, there is a good case for honoring those aspects of his legacy that continue to inspire citizens of all backgrounds. Yet the less honorable episodes of his presidency ought to be acknowledged alongside celebrations of his achievements. 

Some claim that removing monuments constitutes an erasure of history, comparing it to burning books. If “erasing history” simply means “destroying something that existed in the past,” tearing down a monument erases history in precisely the same way as tearing down an old house. But as this example suggests, there are many cases of erasing history that seem morally unobjectionable, and the mere fact that something from the past will cease to exist is not in itself a reason to preserve it. Opponents of taking down the monuments sometimes argue that they teach us important lessons about our shared history. This argument at least offers a reason why it might be desirable to preserve this particular class of objects. The trouble is that the story they tell is often distorted and misleading precisely because they were intended not to educate, but to intimidate one group of citizens and cultivate admiration for the Confederacy in another. Monuments are more like billboards than books. Museums can educate the public more effectively than monuments, and without the negative consequences described above. Indeed, in some cases, monuments have found new homes in museums, where they can be properly contextualized for public consumption. 

 As Americans continue to grapple with their history, it seems likely that monuments to the Confederacy will not be the last lapidary victims of our historical reappraisals. But at least with respect to Confederate monuments, public opinion is coming around to the fact that this is a necessary and justified concomitant of the effort to make our society more equal and more just. 

Religious Liberty and Science Education

photograph of empty science classroom

In November, the Ohio House of Representatives passed “The Ohio Student Religious Liberty Act of 2019.” The law quickly garnered media attention because it seems to allow students to get answers wrong without penalty if the reason they get those answers wrong is because of their religious beliefs. The language of the new law is the following:

Sec. 3320.03. No school district board of education, governing authority of a community school […], or board of trustees of a college-preparatory boarding school […] shall prohibit a student from engaging in religious expression in the completion of homework, artwork, or other written or oral assignments. Assignment grades and scores shall be calculated using ordinary academic standards of substance and relevance, including any legitimate pedagogical concerns, and shall not penalize or reward a student based on the religious content of a student’s work.

Sponsors of the bill claim that students will be required to learn the material they are being taught, and to answer questions in the way that the curriculum supports regardless of whether they agree with it. Opponents of the law disagree. The language of the legislation prohibits teachers from penalizing the work of a student when that work is expressive of religious belief. This seems to entail that a teacher cannot give a student a bad grade if that student gets an answer wrong for religious reasons. In any event, the vagueness of the law may affect the actions of teachers. They might be reluctant to grade assignments correctly if they think doing so may put them at odds with the law.

Ohio is not the only state in which bills like this are being considered, though most have failed to pass for one reason or another. Some states, such as Arizona, Florida, Maine, and Virginia have attempted to pass “controversial issues” bills. The bills take various forms. Arizona Bill 202, for example, attempted to prohibit teachers from advocating any positions on issues that are mentioned in the platform of any major political party (a similar bill was proposed in Maine). This has implications for teaching evolution and anthropogenic climate change in science classes. Other controversial issue bills prohibit schools from punishing teachers who teach evolution or climate change as if they are scientifically controversial.

Much of the recent action is motivated by attitudes about Next Generation Science Standards, a science education program developed by 26 states in conjunction with the National Science Teachers Association, the American Association for the Advancement of Science, and the National Research Council. The program aims to teach science in active ways that emphasize the important role that scientific knowledge plays in innovation, the development of new technologies, and in responsible stewardship of the natural environment. NGSS has encountered some resistance in state legislatures because the curriculum includes education on the topics of evolution and anthropogenic climate change.

Advocates of these laws make a number of different arguments. First, all things being equal, there is value in freedom of conscience. We should set up our public spaces in such a way that respects the fact that people can believe what they want to believe. The U.S. Constitution was intentionally written in a way that provides protections for citizens to form beliefs independently of the will of governments. In response, an opponent of this legislation might say that imposing a set of standards for curriculum based on the best available evidence is not the same thing as forcing citizens to endorse a particular set of beliefs. A student can learn about evolution or anthropogenic climate change, all the while disagreeing with what they are learning.

A second, related argument might be that school curriculum and grading policies should respect the role that religion plays in people’s lives. For many, religion provides life with meaning, peace, and hope. Given the importance of these values, our public institutions shouldn’t be taking steps that might undermine religion.

A third argument concerns parental rights to raise children in the way that they see fit. This concern is content-neutral. It might be a principle that everyone should respect. Parents have significant interests in the way that their children turn out, and as a result they have interests in avoiding what they might view as indoctrination of their children by the government. Attendance at school is mandatory for children. If the government is going to force them to attend, they shouldn’t be forced to “learn” things that their parents might not want them to hear.

A fourth argument has to do with the value of free speech and the expression of alternative positions. It is always valuable to hear opposing positions, even those that are in opposition to received scientific knowledge, so that science doesn’t just become another form of dogma. In response, opponents would likely argue that we get closer to the truth when we assess the validity of opposing viewpoints, but not all opposing viewpoints are created equal. Students only have so much time dedicated to learning science in school, so if opposing positions are considered in the classroom, perhaps it is best if they are positions advocated by scientists. Moreover, if a particular view reflects only the opinion of a small segment of the scientific community, perhaps it is a waste of valuable time to discuss those positions at all.

Opponents of this kind of legislation would insist that those in charge of the education of our children must value best epistemic practices. Some belief-forming practices contribute to the formation of true beliefs more reliably than others. The scientific method and the peer review process are examples of these kinds of reliable practices. It is irresponsible to treat positions that are not supported by evidence as if they are equally deserving of acceptance as beliefs that are supported by evidence. Legislation of this type presents tribalism and various forms of pernicious cognitive bias as adequate evidence for belief.

Furthermore, opponents argue, the passage of these bills is nothing more than political grandstanding—attempts to solve non-existent problems. The United States Constitution already protects the religious liberty of students. Additional legislation is not necessary.

Education, in part, is the creation of responsible, productive, autonomous citizens. What's more, the issues at stake are crucially important. Denying the existence of anthropogenic climate change has powerful, even deadly, consequences for millions of beings alive today, as well as for future generations. Our best hope is to create citizens who are well-informed on this issue and therefore in a good position to mitigate its effects and to construct meaningful climate policy in the future. This will be impossible if future generations are essentially climate illiterate.

The Ethics of Homeschooling


The National Home Education Research Institute has labelled homeschooling one of the fastest-growing forms of education in the US, with an estimated two to eight percent annual rise in the population of homeschooled children in recent years. Although home-based learning is an old practice, it is now being adopted by a diverse range of Americans, and the trend extends to countries around the globe, including Brazil, the Philippines, Mexico, France, and Australia.

One of the most commonly cited motivations for homeschooling is parents' concern for their child's safety. Homeschooling provides children with a safe learning environment, shielding them from possible harms such as physical and psychological abuse, bullying from peers, gun violence, and racism, exposure to which can lead to poor academic performance and long-term self-esteem issues. Recent research suggests that homeschooled students often perform better on tests than other students. Homeschooling can also provide an opportunity for an enhanced parent-child bond, and it is especially convenient for parents of special-needs children requiring attentive care.

Homeschooling became legal throughout the US in 1993, but the laws governing it vary from state to state. The states with the strictest homeschool laws (Massachusetts, New York, Pennsylvania, Rhode Island, and Vermont) mandate annual standardized testing and an annual instruction plan, but oversight in the least restrictive states (Texas, Oklahoma, Indiana, and Iowa) borders on negligence. Iowa, in particular, has no regulations at all and considers notifying the district of homeschooling merely optional.

Even though homeschooling is legal and gaining traction in the US today, it is not immune to skeptics who view it as an inadequate and flawed form of education. The prevailing critique concerns homeschooled children's lack of social interaction with peers, an important aspect of a child's socialization into society. Because most of a homeschooled child's social interactions are limited to adults and family members, the child may later struggle to engage with individuals of different backgrounds, belief systems, and opinions. Homeschooling advocates counter this critique by contending that the environment at home is superior to the one children are exposed to at school, but this raises the question: at what cost?

Another point of contention is the qualifications of parents who choose to homeschool their children. Teachers develop, over years of experience with students, the instructional approaches that work best; the same cannot be said for most parents, who are not teachers by profession. So while homeschooling parents may have the best intentions for their children, they may be ill-equipped to provide the standard of education offered in public or private schools. Furthermore, the learning facilities parents can offer at home may not be on par with those available in schools.

An additional issue that must be taken into consideration is that homeschooled children in states with lax regulations are at increased risk of physical abuse that goes unreported and undetected as a result of being sequestered in their homes. Approximately 95% of child abuse cases are communicated to authorities by public school teachers or officials. By isolating the homeschooled child, unregulated homeschooling allows abusive guardians to keep their abuse unnoticed. Isolating children at home also poses a public health risk: schools require students to be immunized, but this is legally required of homeschooled children in only a few states. Not only are unimmunized children vulnerable to a multitude of diseases, but they also put other children and adults at risk of contracting illnesses.

Parental bias is an added complication that homeschooled children must deal with. Parental bias refers to the dogma a homeschooled child may be exposed to when raised solely on their parents' belief systems; notably, most homeschooled children come from pious, fundamentalist Protestant families. Elaborating on the possible repercussions of unregulated homeschooling, Robin L. West, Professor of Law and Philosophy at Georgetown University Law Center, writes in her article "The Harms of Homeschooling," "[…] in much of the country, if you want to keep your kids home from school, or just never send them in the first place, you can. If you want to teach them from nothing but the Bible, you can." Parental bias can therefore cause an individual to develop a skewed understanding of the world, and it can pose problems in the individual's life outside the home, when they encounter ideologies at odds with their own. If a homeschooled individual was raised with a homogeneous view of political, social, or cultural issues, and that is the only outlook they were exposed to, adjusting to an outside world with a plethora of opinions and values could cause internal conflict.

Given that one’s early experiences in life can shape our persona as an adult, going to a regular school instead of being homeschooled can serve as a primer to being better equipped at handling the “real world.” Furthermore, with the rising demand of homeschooling, it becomes essential to ask if the child is better off by learning about the “real world” while being sheltered by one’s guardians. If homeschooling is indeed the superior option, perhaps constructing a standard curriculum for homeschooling could address the concerns raised by critics of home-based learning.

Summit Learning and Experiments in Education


A recent New York Times article documented a series of student-led protests at a number of public schools throughout the United States against a "personalized learning" program called Summit Learning. The program, supported by Mark Zuckerberg and Priscilla Chan, aims to improve students' education via computer-based individual lessons and features A.I. designed to actively develop the ideal learning program for each student. The goals of the program seem especially beneficial to the underserved public school systems where the software is being piloted. Although the initial response was positive, parents and students in communities such as McPherson, Kansas, have begun to reject the program. Among their complaints: excessive screen time and its effects on student health; the program's connection to the web, which has resulted in students being exposed to inappropriate content; invasive collection and tracking of personal data; and the decline in human interaction in the classroom.

Each of these points touches on broader issues concerning the ever-greater role technology plays in our lives. There is still a great deal of uncertainty about the best way to integrate technology into education, as well as about the associated harms and benefits. It is probably unwise, then, to attempt to judge the consequences of this particular program in its infancy. It has been poorly received in some cases, but in many others it has been praised.

The more essential question is whether the education of young students should be handled by such a poorly understood mechanism. Some of the people interviewed in the New York Times article expressed the feeling of being “guinea pigs” in an experiment. Summit’s A.I. is designed to get better as it deals with more students, so earlier iterations will always be more “experimental” than later ones. At the same time, it would be irresponsible to risk the quality of a child’s education for the sake of any experiment. Underserved communities like those in which Summit is being applied also deserve some special protection and consideration, because they are more vulnerable to exploitation and abuse. It was precisely because of their generally low-performing schools that many of these communities so eagerly adopted the Summit Learning system in the first place.

One seemingly simple solution proposed by many of the protesting students is to allow opting out of the program. While this would allow students a greater degree of agency and help to identify the optimal learning method for each student, it would also significantly undermine the already limited understanding of the system's efficacy: if only the most enthusiastic students participate, the results will be understandably skewed. As with other experiments involving human subjects, there is a difficult calculus in weighing the potential knowledge gained against the potential harm to individual subjects. In order to ensure the integrity of the program as a whole, opting out on an individual basis cannot be permitted; the alternative is to force whole schools or towns into either participating or not en masse.

Another consideration is whether there is a problem with the premise of Summit Learning itself, that is, “personalized learning.” Personalized learning follows the general trend in cutting-edge technology towards customization, individualization, and, ultimately, isolation. Such approaches do harm to our collective sense of community, but the harm is especially acute in learning environments. Part of education is learning together and, critically, learning to work together. We can see some evidence of this in traditional K-12 school curricula, which have historically centered on the idea that every student learns the same material; in other words, The Catcher in the Rye is only as important as it is because everybody reads it. By removing the collaborative aspect of classroom learning, we run the risk of denying students the opportunity to benefit from different perspectives and develop a common scholastic culture. Furthermore, by implementing isolating technology use in the classroom, schools sanction such practices for students, who may then feel license to repeat such behaviors outside of school.

In Colorado, The Right to Comprehensive Sex Education

On January 30th, 12-year-old Moira Lees testified at the Colorado Capitol in favor of HB19-1032, the new bill centered on sex education for public schools in Colorado. Moira was joined by at least six other students who testified in support of the new bill. She bravely talked about how she wished her own middle school had taught what consensual relationships are. Consent was just one of the topics presented in the new sex-education bill, an updated version of a sex education bill from 2013.

In 2013, the General Assembly of Colorado revised a 2007 law on comprehensive sex education in public schools. The revised law said that students had the right to a curriculum that was age-appropriate, medically accurate, and culturally sensitive to LGBT and disabled individuals, and that included information about safe relationships and sexual violence. However, schools were able to find loopholes in the law. Schools that wanted to offer an abstinence-only curriculum could contract with non-profit groups, which would provide the abstinence-only education on school grounds on the weekends. Another loophole allowed charter schools to teach their own versions of human sexuality that often didn't meet state standards. These loopholes were the motivation behind the new bill, HB19-1032, testified for on January 30th.

The new bill proposes to get rid of abstinence-only education and, most notably, to teach consent in sexual relationships. Susan Lontine (D-Denver), the bill's sponsor, says that the bill describes "how to communicate consent, recognize communication of consent, and recognize withdrawal of consent." Consent was one of the least discussed topics during the 10-hour testimony, mostly because it was one of the few topics of unanimous agreement. Centennial Institute Director Jeff Hunt is a critic of the bill but agrees with the consent portion and believes that people of faith support it as well. Hunt states that the lengthy testimony centered more on topics that, in his view, should be openers for family discussions about values rather than public school curriculum.

Another important part of the bill is that the curriculum will include open lessons about human sexuality. The bill opens with data from the 2017 Healthy Kids Colorado Survey, which found that 9.6% of females and 18.5% of LGBT-identifying kids have felt physically forced into sexual relationships against their will. "These statistics reflect a dire need for all Colorado youth to have access to comprehensive human sexuality education that teaches consent, hallmarks of safe and healthy relationships, self-acceptance, and respect for others," according to HB19-1032. Lessons about human sexuality cannot "explicitly or implicitly" endorse religious ideology, and shame-based language should not be used.

Those opposed to HB19-1032 worry that parents would not have full knowledge of the information their children are receiving, according to GOP Chairman Jeff Hays. The bill states that parents would be notified about human sexuality classes and given the option to remove their children, but would not be notified about the specific lesson plans. The Colorado Catholic Conference worries that the teachings will stigmatize Catholic beliefs and teach children that the church's values regarding sex, relationships, and gender are wrong. Also at issue is that HB19-1032 currently does not require schools to tell students about "safe haven laws," which allow a parent to turn over a newborn less than 72 hours old to any fire station or hospital, no questions asked, in order to protect the lives of newborns. If HB19-1032 is passed, schools would have to choose between teaching the new curriculum and teaching nothing at all on the matter.

At the heart of the debate regarding HB19-1032 is a question about the purpose of childhood education and how sex education supports those goals. According to philosopher Joel Feinberg, education is part of the "right to an open future" and enables children to gain the knowledge, skills, and tools to shape their own individual life plans. The goal of sex education is for students to learn about sex and sexuality in order to build skills for healthy relationships and to manage their own sexual health. The question, however, is whether schools owe it to children to teach sex education in a comprehensive manner.

Failing to teach children comprehensive sex education to the extent that HB19-1032 requires could cripple youths' ability to exercise their current and future sexual rights. To have sexual rights is to have control over one's own body and sexuality without violence, coercion, or intimidation. Without education on the subject, students could be exposed to additional harms, including assault, sexually transmitted infections, and unwanted pregnancies. This bill is unique in that it addresses many aspects of "traditional" sex education, like the biological aspects of sex, while also diving deeper into the social aspects.

The need for sex education corresponds to our developmental stages, according to Sigmund Freud and other developmental psychologists. During adolescence (twelve to eighteen years old), a major task is the creation of a stable identity and becoming a productive adult. Dramatic changes occur that lead to increased opportunities to engage in risky behaviors like sexual promiscuity. Adolescents are novices in reflective cognitive thinking, which is why education on risky behavior, like sex education, is important at this stage of development.

But a government-mandated sexual education program feels, to some parents, like a violation of their autonomy. Some parents want to be part of the discussions around these topics in order to talk about family values and have open conversations. There is the fear that when the state regulates this curriculum, it takes away from parents' say in the matter. At the same time, without such regulation, teachers could have full freedom to teach the material as they please, including the opportunity to impart their own code of sexual ethics.

Kids are under more influence than ever about what is deemed "acceptable" sexual behavior in society, from mass media to their friends, family, and religious expectations. With these added pressures, it is more important than ever for legislation like HB19-1032 to define to what extent teachers, schools, and the government are responsible for teaching students about sex education.

When It Comes to the Environment, is Education Morally Obligatory?


In April of this year, scientists from the Alfred Wegener Helmholtz Center for Polar and Marine Research reported finding record amounts of plastic particles in Arctic sea ice. Ice core samples were taken from five regions in the area, and up to 12,000 micro-plastic particles per liter of ice were found. Scientists believe that much of the plastic, cigarette butts, and other debris came from the Great Pacific Garbage Patch, a mass of floating waste occupying 600,000 square miles between Hawaii and California.

Plastics in the sea pose substantial dangers for ecosystems and marine life. As evidence of this, a dead sperm whale washed up on a beach in Spain earlier this year; scientists concluded that it died from garbage, finding 64 pounds of plastics and other waste in the young whale's stomach.


Is the Global Citizenship Movement the New “White Man’s Burden”?

In an age of red-eye flights and the ability to communicate digitally across thousands of miles, the world has never felt so small and interconnected. Despite this, the governments of countries across the world remain relatively segregated in their policies concerning citizenship and human rights. These issues were discussed at length during the 37th session of the UN Human Rights Council, which took place on March 6 in the Palais des Nations. During the session, many different methods for improving human rights worldwide were discussed, including the concept of Global Citizenship Education. The movement for global citizenship counters isolationism and advocates for a set of moral standards that apply to a global society. Though global citizenship sounds relatively straightforward, the movement is often steeped in questions of justice and national self-determination.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) Program for Global Citizenship Education aims to improve human rights worldwide “by empowering learners of all ages to understand that these are global, not local issues and to become active promoters of more peaceful, tolerant, inclusive, secure and sustainable societies.” But does the call for global citizens risk the erasure of national identity? Are the standards set for development always just? And does the initiative for global citizenship encourage a 21st century “white man’s burden” mentality?

The term global citizen refers to “someone who is aware of and understands the wider world – and their place in it.” Though the concept of global citizenship is not necessarily new, the organized global citizen movement began less than 10 years ago. The organization Global Citizen was founded in 2011 with the mission of empowering individuals and communities to make an impact worldwide on a variety of human rights issues. Though one of Global Citizen’s largest goals is to eradicate extreme poverty, the organization also works toward securing gender equity, environmental health, and civil rights.  

Global Citizen takes steps to empower people at the local level, but is there a danger in redefining certain issues as inherently global? There have been ardent critics of the sentiments of global citizenship, such as British Prime Minister Theresa May and President Donald Trump. In a speech delivered in October of 2016, May expressed her discontent with the attitude of global elites, commenting, "Today, too many people in positions of power behave as though they have more in common with international elites than with the people down the road, the people they employ, the people they pass on the street." May evidently feels that globalization has eroded local and national connections. She continued with a more controversial statement, declaring that "If you believe you are a citizen of the world, you are a citizen of nowhere." Only a few months after May's comments, Donald Trump expressed a similar sentiment. "There is no global anthem. No global currency. No certificate of global citizenship," he proclaimed in a speech given during his presidential victory tour. Trump appealed to the isolationist values of his supporters, promising that "Never anyone [sic] again will any other interests come before the interest of the American people." For these politicians, and their supporters, global citizenship means abandoning national identity.

Though May and Trump’s comments were blasted as ignorant, uncompassionate, and even sympathetic to fascist values, whether or not one can ethically reconcile nationalism and globalism remains largely unanswered. In an article titled “Global Citizens vs the People,” Jim Butcher of online political magazine Spiked explains how global citizenship does not only contradict conservative ideology, but liberal populist ideology as well. Butcher argues that global citizenship does not necessarily mean renouncing nationalist sentiments, but he notes that global citizenship runs the risk of “seeking respite from democracy” and can in fact be “a way of avoiding having to address the political views and arguments of your fellow citizens.”

But some proponents of global citizenship claim that such criticism fails to acknowledge the difference between soft and critical global citizenship. In her article "Soft versus critical global citizenship education," Vanessa Andreotti acknowledges the potential harm in global citizenship education but believes it can be avoided by emphasizing critical thinking. It is possible for global citizenship education to encourage its proponents to "project their beliefs and myths as universal and reproduce power relations and violence similar to those in colonial times." This need not be the inevitable result, however, if global citizenship education gives learners the tools to critically assess issues of inequality and injustice, skills that soft global citizenship fails to provide.

For example, while soft global citizenship may frame the solutions to global issues as humanitarian, critical citizenship frames these solutions as political and ethical. Teaching these critical distinctions keeps the global citizen aware of their own biases and safeguards the global citizen movement from a patronizing mindset. Organizations such as Global Citizen arguably embody critical global citizenship education, with its mission explaining that "everyone from citizens, governments, businesses, and charities have a role – because none of aid, trade nor charity can do this alone."

But global citizenship is also inherently reliant on globalization. Though critical global citizenship encourages people to think critically about the negative effects of globalization, it assumes that globalization can be used as a positive force. Some might argue that globalization has in fact created a new world hierarchy, specifically in terms of the standards of development. Post-development theory holds that the global crusade for development has failed and that in some cases development operates as a modern form of imperialism that is “a reflection of Western-Northern hegemony over the rest of the world.”

Prominent post-development scholars, such as Wolfgang Sachs, have argued that because development standards often include assimilation to Western lifestyle, “it is not the failure of development which has to be feared but its success.” It is undeniable that development standards are often set and enforced by the Western world, with Western countries holding the majority of power in prominent global organizations working toward development, such as the United Nations and the World Bank. A paper funded by The National Bureau of Economic Research’s Economics of National Security program found that since the UN’s inception, there has been an unwavering power bias in the secretariat in favor of Western countries. Though the UN maintains its mission is to keep peace throughout the world, one could argue that such an imbalance in institutional power is obstructive to this mission and actually reflects the continuation of geographical and racial inequalities spurred by the history of Western imperialism.

However, many supporters of globalization argue that the concrete effects of globalism justify its problematic ideological implications. The United Nations was single-handedly responsible for the eradication of smallpox through its World Health Organization initiative in the 1960s and '70s. The UN also touts a variety of achievements through partner organizations like UNICEF, which, between 1990 and 2015, reportedly saved the lives of over 90 million children. The World Bank, an organization working to end worldwide poverty that also grew out of globalization, is responsible for providing essential health services to over 600 million people and better access to clean water to 72 million. These achievements are undeniably significant and would not have been possible without a unified effort and a collection of people advocating for positive globalization.

Though progress has been made in the struggle for human rights and global economic equality, the problems made apparent by globalization are nowhere close to disappearing. A world of global citizens might just be the solution that marks the 21st century as the century that eradicates extreme poverty and radically improves justice and equality. However, the effects of increasing the global stake in these issues will not be easy to reverse if such initiatives are unsuccessful or have unintended consequences. We should not underestimate the potential of the global citizen movement to encourage a "developed man's burden" outlook if it fails to encourage critical and ethical examination among its followers.

The 21st-Century Valedictorian and the Battle for First Place

According to 16-year-old Ryan Walters of North Carolina, abolishing the title of valedictorian in high schools only serves to "recogniz[e] mediocrity, not greatness." Ryan was interviewed for a Wall Street Journal article about ridding schools of valedictorian titles, and he provides a voice of disapproval and disappointment. Having worked toward the glorious title of valedictorian for many years, Ryan has seen his dream end, as his high school has decided to do away with recognizing the top performer in each graduating class. This harsh critique by the Heritage High School junior may have some validity, but it can also be refuted.

Across the country, high school administrators are beginning to question the productivity of declaring a valedictorian every year. Many students work toward the title of valedictorian from a young age; it is a testament to perseverance, intelligence, and hard work. However, it can also create extreme competition among students and determine one’s value based heavily upon grades. Some school administrators argue that the title of valedictorian motivates students to study harder and achieve more academically. Others argue that declaring a valedictorian promotes unhealthy competition and does more to harm students than to help them. This debate raises the question: is it ethical for high school administrations to declare a valedictorian each year?

The critics of the valedictorian system argue that recognizing a valedictorian places an unhealthy amount of pressure on students, a large part of why around half of the schools in the country have eliminated the title. According to the National Institute of Mental Health, 8 percent of high schoolers are diagnosed with some form of anxiety, and suicide was the second leading cause of death among teenagers 15-19 years old in 2014. Although a direct causal link between the stress of school and suicide cannot be established, the anxiety that develops because of academic pressures surely contributes. School counselors have expressed concern about the impact that pressure to perform is having on adolescent anxiety. In an article in The Atlantic, Kirkwood High School counselor Amber Lutz said, "high performance expectations surrounding school and sports often result in stress and, in turn, anxiety."

Declaring a valedictorian also increases competition among students. As classmates vie for first in their class, the emphasis can shift from learning and bettering oneself to winning. If a student aims for valedictorian but does not achieve it, they may lose appreciation for their accomplishments and focus simply on the fact that they "lost." In addition, a GPA is not a reflection of one's high school experience: it does not capture creativity, learning style, experience, or passion for particular subjects. It is a number, not a holistic view of an individual. The title of valedictorian separates one student from peers who may have worked just as hard or be of equal intelligence. Many factors affect a grade, including the distribution of points, class load, grading rubrics, and more. A GPA is too narrow a summary of achievement, and too dependent on these other factors, to declare the best student in a class of many.

A question follows this conclusion: should schools be comparing their students to one another at all? Is ranking adolescents based on GPA an exercise that will push students to do their best work? Or is it counterproductive to development?

Competition can be productive. Advancements are made because of competition, and individuals are pushed to achieve more when they are not the only ones aiming for a goal. Certain aspects of society do not function without competition. A customer is not going to buy all five versions of a laptop; rather, they will buy the one they consider the best option. Competition is also the reason there are five laptops to choose from. In the same way, a technology company is not going to hire every applicant for an open software developer position; because of competition, it can hire the best developer of the five and create a better laptop. It is important that students are aware of competition and the ways it manifests within society. However, declaring a valedictorian is not the only method by which this can be taught.

Many high school students play sports in which they win or lose, and one may ask how this differs from declaring a valedictorian. The question requires examining the purpose of education: schools must decide whether education is meant to increase equality or to separate "the best" from the rest. Pittsburgh-area superintendent Patrick J. Mannarino of North Hills High rid his school of the valedictorian designation and said: "Education's not a game. It's not about 'I finished first and you finished second.' That high school diploma declares you all winners." If a sports game ends in defeat for a teenager, they are surely upset, but their entire athletic career is not judged on a single game. A class ranking, however, does summarize a student's academic career; the title must therefore have a greater impact on a student's self-esteem than the outcome of any one game.

A compromise has been implemented across the country. In recent years, schools have started declaring multiple valedictorians in an effort to recognize more than one high-achieving student. Some argue this solution minimizes the glory that one valedictorian could have and harms the motivation to work hard. Others argue that it presents the same dilemmas as declaring a single valedictorian. The difference between one and seven valedictorians is nonexistent, in the sense that it still separates students and equates the value of each student with their GPA.

The tradition of declaring a valedictorian has been passed down for generations, and valedictorians go on to make great contributions to society. But if the title were taken away, would the future success of those students be affected? Would students lose motivation to work hard? Or would schools adopt a more inclusive environment in which students are intrinsically motivated and want to work together? It may be time for schools to reconsider what environment is best for producing intelligent, hardworking students who appreciate what they have accomplished and do not need to compete to have those accomplishments recognized.

Perhaps declaring a valedictorian provides a healthy dose of competition to schools around the country. Maybe it teaches students to work hard and prepares them for adult life. Or perhaps ranking adolescents based on their academic performance is contributing to the growing rates of anxiety and depression in the United States. Maybe declaring a valedictorian takes the emphasis off of learning and places it on competing.

Pronouns and Provocateurs: Wilfrid Laurier University’s Free Speech Controversy


At the beginning of November, Lindsay Shepherd, a graduate student at Canada's Wilfrid Laurier University, made the fateful decision to show her tutorial section of a large first-year writing class a video clip of a debate about pronouns. The debate, which aired on Canadian public television a year ago, featured firebrand Jordan Peterson, a professor of clinical psychology at the University of Toronto and a crusader against political correctness.


Should Universities Abandon Placement Exams?


At most universities in the United States, students are required to take placement exams to determine their developmental level in math and English, and they are placed in classes appropriate to that level in each discipline. Students placed in non-college-ready, remedial classes are required to take up to three such classes before they can enroll in courses that actually count toward their degree. Last week, the Chancellor of the California State University system issued an executive order doing away with placement exams. Instead, students can try their hand at classes at a higher difficulty level than the placement exam would have indicated was appropriate. Many community colleges have already moved away from placement exams, but the move in the large Cal State system is noteworthy.
