
Contra Khalid: A Defense of Trigger Warnings

Amna Khalid, writing at Persuasion, argues that trigger warnings are futile. The research, she says, shows that trigger warnings do not minimize emotional distress or intrusive thoughts; she references, for example, a meta-analysis which found “that people felt more anxious after receiving the warning.” But beyond these empirical critiques, Khalid also asserts that trigger warnings “pander to student sensitivities—to the extent that it starts undermining the mission of the university.” When trigger warnings are used, she says, “we fail to equip our students with the skills and sensibilities necessary to cope with life” and “[do] them a great disservice.” “Instead of coddling our students,” she writes, “we should be asking why they feel so emotionally brittle. Might it be that their fragility is the result of limited exposure to what constitutes the human condition and the range of human experience?” She concludes: “perhaps, in the end, what [students] need is unmediated, warning-free immersion in more literature, not less.”

Khalid’s argument is heavy on generalization — and lacking in rigor. It’s worth noting at the outset that the article she cites as evidence that trigger warnings don’t minimize intrusive thoughts doesn’t mention trigger warnings, and the meta-analysis she cites is a pre-print. But the problems in Khalid’s argument extend well beyond the data she cites; and no matter the pedigree of those who support it, we have good reason to reject it. Khalid doesn’t just advance a misinformed argument; she fundamentally misunderstands the point.

I will not bury the lede: I argue here that trigger warnings represent a basic act of kindness which demonstrates our respect for the trauma others have endured.

In what follows, I discuss adverse childhood experiences; violent crime and physical violence; severe illness; PTSD; and, finally, sexual assault and rape. Whether or not you choose to engage with this work, you have my thanks.

. . .

The banality of trauma is difficult to overstate. Adverse childhood experiences, as defined in the Journal of the American Medical Association, include

experiencing physical, emotional, or sexual abuse; witnessing violence in the home; having a family member attempt or die by suicide; and growing up in a household with substance use, mental health problems, or instability due to parental separation, divorce, or incarceration.

60.9% of adults have had at least one adverse childhood experience; 15.6% have had four or more. 82.7% of Americans have been exposed to a traumatic event. 2.6 million Americans over the age of 12 have been the victim of a violent crime.

In the context of higher education, the pattern persists. 35% of matriculating undergraduates have seen a loved one experience a life-threatening illness or had such an illness themselves; 24% have personally seen or been the victim of physical violence, and 7% have been sexually assaulted. The same study found that 9% of matriculating students met criteria for PTSD. 20.4% of women at American universities reported experiencing non-consensual penetration, attempted penetration, sexual touching by force, or assault via inability to consent — since they have been enrolled at their institution.

Khalid’s suggestion that students are fragile because of “limited exposure to what constitutes the human condition,” then, is either ignorant or dishonest: students come to the classroom bearing the full weight of the trauma which has been inflicted upon them.

But the problems for Khalid’s argument run deeper. The banality of trauma paints a picture which is difficult to ignore: every day, you interact with people who have been deeply and unforgettably traumatized. And contained in this truth is a question: how will this change how you interact with others?

On one hand, you may choose compassion. To broach a difficult conversation with a friend, for example, you may say to them: “There’s something difficult that we need to talk about soon, but I understand it if you’re not ready right now. Let me know when we can meet, and in what setting you’d be most comfortable having this conversation.”

Similar conversations occur in the professional context. Medicine and social work, for example, have recently begun a shift towards trauma-informed practice. Prior to discussing sexual health or other sensitive topics, a physician may say: “I have some questions which can be uncomfortable. I ask them because I want to provide the best care that I can, but I also understand if you’re not in a place to talk about them right now.” A social worker, when onboarding a new client, may say “I understand that things have been challenging for you lately, but I want to meet you where you are. Tell me when you’re ready to talk about what’s been bothering you, and I’ll do my best to support you in whatever ways you need.”

Or, analogously, a professor may say to their students: “As part of our next class, you may be exposed to topics and material that may bring about complex emotions. I want you to know that, for all of my concern for you as a student, I care for you as a human being more. I will do my best to ensure that our conversation is respectful and affirming; but if you need to not participate in this conversation, or not attend this particular discussion, I completely understand. And if you need support before or after, I am here to listen and help however I can.”

Each of these statements, spanning personal and professional interactions, represents a “trigger warning” of a kind: critics frequently ignore that portending a difficult conversation is a normal part of both personal and professional life.

But these critics also misunderstand the purpose of such statements. Trigger warnings are not about minimizing emotional distress or intrusive thoughts.

Furthermore, it should be taken as an obvious truth that trigger warnings increase anxiety: anyone who is told that a difficult conversation lies ahead will be understandably anxious. When Khalid and the researchers she cites argue through reference to data on these outcomes, they fundamentally miss the point.

Trigger warnings, as represented in all of the examples above, are a communicative act: they communicate a speaker’s understanding that traumatic experiences are ubiquitous, their desire to support others, and their respect for how challenging a conversation can be. They portend what is to come, but vitally, communicate that you are not alone in your struggle. Trigger warnings, then, show a respect for the trauma which others have endured, and solidarity with them as they navigate life after; they represent a basic act of kindness through which we, as individuals and as professionals, can express our respect for others. When understood in this light, Khalid’s argument against trigger warnings is made all the more cruel. To “equip our students with the skills and sensibilities necessary to cope with life,” should we withhold our respect and kindness from them? Should we ensure that they experience “unmediated, warning-free immersion” in the content of their trauma, and extol our virtue for doing so?

I answer no — but the choice remains yours. Compassion is not the only option, and you may choose its alternative; and as I have written in these pages before, the choices you make represent who you really are. If trigger warnings represent coddling or pandering, count me among the coddlers and panderers; if respecting the trauma of others conflicts with the mission of the university, I reject the university and all it stands for.

I, for one, will choose compassion.

Digital Degrees and Depersonalization


In an article titled “A ‘Stunning’ Level of Student Disconnection,” Beth McMurtrie of the Chronicle of Higher Education analyzes the current state of student disengagement in higher education. The article solicits the personal experiences and observations of college and university faculty, as well as student-facing administrative officers and guidance counselors. Faculty members cite myriad causes of the general malaise they see among the students in their classes: classes switching back and forth between in-person and remote settings; global unrest and existential anxiety, stemming from COVID-19 and the war between Russia and Ukraine; interrupted high school years that leave young adults unprepared for the specific challenges and demands of college life; the social isolation of quarantines and lockdowns that filled nearly two years of their lives. Some of these circumstances are unavoidable (e.g., global unrest), while others seem to be improving (classroom uncertainty, lockdowns, and mask mandates). Still, student performance and mental health continue to suffer as badly as they did two years ago, and college enrollment is nearly as low as it was at the start of the pandemic.

McMurtrie also takes the time to interview some college students on their experience. The students point to a common element that draws together all the previously-mentioned variables suspected of causing student disengagement: prolonged, almost unceasing, engagement with technology. One college junior quoted in the article, a student named Lyman, describes her sophomore year as a blur, remembering only snippets of early morning Zoom classes, half-slept-through, with camera off, before falling back asleep. Each day seemed to consist of a flow between moments of sleep, internet browsing, and virtual classes. When COVID-19 restrictions subsided and classrooms returned to more of a traditional format, the excessive use of technology that had been mandatory for the past two years left an indelible psychological mark.

As she returned to the classroom, Lyman found that many professors had come to rely more heavily on technology, such as asking everyone to get online to do an activity. Nor do many of her courses have group activities or discussions, which has the effect of making them still seem virtual. ‘I want so badly to be active in my classroom, but everything just still feels, like, fake almost.’

Numerous scientific studies offer empirical support for the observation that more frequent virtual immersion is positively correlated with higher levels of depersonalization — a psychological condition characterized by the persistent or repeated feeling that “you’re observing yourself from outside your body or you have a sense that things around you aren’t real, or both.” In an article published last month in Scientific Studies, researchers reported the following:

We found that increased use of digital media-based activities and online social e-meetings correlated with higher feelings of depersonalisation. We also found that the participants reporting higher experiences of depersonalisation, also reported enhanced vividness of negative emotions (as opposed to positive emotions).

They further remarked that the study “points to potential risks related to overly sedentary, and hyper-digitalized lifestyle habits that may induce feelings of living in one’s ‘head’ (mind), disconnected from one’s body, self and the world.” In short, spending more time online entails spending more time in one’s “head,” making a greater percentage of one’s life purely cerebral rather than physical. This can lead to a feeling of disconnect between the mind and the body, making all of one’s experiences feel exactly as the undergraduate student described her life during and after the pandemic: unreal.

If the increased and extreme use of technology in higher education is even partly to blame for the current student psychological disconnect, instructors and university administrators face a difficult dilemma: should we reduce the use of technology in classes, or not? The answer may at first appear to be an obvious “yes”; after all, if such constant virtual existence is taking a psychological toll on college students, then it seems the right move would be to reduce the amount of online presence required to participate in the coursework. But the problem is complicated by the fact that depersonalization makes interacting with humans in the “real world” extremely psychologically taxing — far more taxing than interacting with others, or completing coursework, online. This fact helps explain the rapidly increasing demand over the past two years for online degrees and online course offerings, the decrease in class attendance for in-person classes, and the rising rates of anxiety and depression among young college students on campus. After being forced into a nearly continuous online existence (the average time spent on social media alone — not counting virtual classes — for young people in the United States is 9 hours per day), we feel wrenched out of the physical world, making reentering it all the more exhausting. We prefer digital existence because the depersonalization has rendered us unable to process anything else.

Some philosophers, like Martha Nussbaum, refer to these kinds of preferences as “adaptive preferences” — things we begin to prefer as a way of adapting to some non-ideal circumstances. One of Nussbaum’s cases focuses on impoverished women in India who were routinely physically abused by their husbands, but preferred to stay married. Some of the women acknowledge that the abuse was “painful and bad, but, still, a part of women’s lot in life, just something women have to put up with as part of being a woman dependent on men.” Another philosopher, Jon Elster, calls these kinds of desires “sour grapes,” because a fox that originally desires grapes may convince himself the grapes he previously wanted were sour (and therefore not to be desired) if he finds himself unable to access them.

Are in-person classes, social engagement, and physical existence on campus becoming “sour grapes” to us? If we have, to some extent, lost the ability to navigate these spaces with psychological ease, we may convince ourselves that these kinds of interactions are not valuable at all. But as we move further and further from regular (non-virtual) physical interactions with others, the depersonalization continues and deepens. It may be a self-perpetuating problem, with no clear path forward for either students or instructors. Should instructors prioritize meeting students where they are currently and providing virtual education as far as possible? Or should they prioritize moving away from virtual education with hope for long-term benefits? This is a question that higher education will likely continue to grapple with for many years to come.

The Ethics of Reproducing Trauma in Celebrity Biopics


Practically every streaming service available has a new biopic revisiting celebrity scandals or scandals that turned former unknowns into cultural villains. Hulu just released The Dropout, a series that focuses on the infamous Elizabeth Holmes, who lied about the effectiveness of her company’s game-changing blood test for diagnosing diseases. Netflix just released both Inventing Anna and The Tinder Swindler, exposing the rise and fall of two con artists turned elite socialites. A number of other biopics are set to be released this year covering music stars like Elvis and Bob Dylan, as well as documenting the important stories of Emmett Till and the journalists who broke the story of Harvey Weinstein’s rampant sexual abuse in Hollywood. As this wave of series and films is released, it is important to remember that these depictions memorialize difficult personal moments and are often told from a very specific angle – whether sympathetic or not to their subjects. The act of trying to tell other people’s real-life stories raises a multitude of questions about the ethics of taking a private event and turning it into a public spectacle. These questions become particularly pertinent when the biopics are made without the consent of the subjects they cover.

Obviously, there are many things to be gained from films and television that cover real historical events and people. After all, Schindler’s List – considered one of the most important movies ever made – is a historical drama that explores the tragedy and resistance in Nazi Germany, as well as the brave efforts of Oskar Schindler, a real man who saved Jewish people during WWII. The film covers one of the most traumatic events in modern human history, yet surely no one would argue against making a film such as this.

A more recent example of such a successful historical biopic is the story of Black Panther leader Fred Hampton in Judas and the Black Messiah. The film depicts the violence that Black Americans faced in the 1960s from governmental organizations trying to quell the civil rights movement. Fred Hampton’s end is, tragically, also a traumatic one, but it is one that a majority of Americans need to see in order to unlearn a white-washed version of American history. The movie’s makers were able to get permission from Hampton’s family to produce his story on screen, but securing consent from the subject or subject’s family is not always an option. Does that mean that the film shouldn’t be made? Surely the subjects of The Dropout and The Tinder Swindler would rather not have their crimes brought to the screen.

What happens when filmmakers reach out for consent, but are denied? Studios have to weigh the risk of not getting permission from the subject and then potentially being sued later for defamation if the subject objects to the way that they were portrayed. Oftentimes, however, these cases get thrown out because it can be particularly hard to prove defamation, especially if the story is already well-known. The other option that studios have is to buy the life rights to a story. Life rights offer multiple benefits for studios because they provide legal protection and insider access to the subject in a competitive movie-making market. These legal protections allow studios to exercise a good deal of creative license over a story, which often strays from the truth of actual events. Yet, they are still able to claim that their film is based on a true story, which often lends it extra significance.

Ultimately, asking for subjects’ consent in film-making is treated more as a friendly gesture than a priority. The current state of affairs leaves very little room for the actual subject to have any sort of agency over how their story is told, especially when they are up against multi-million-dollar movie production companies. While studios may claim they are simply producing art, in the 21st century it is highly possible that an audience would take what they see on-screen at face value, especially when it is labeled a true story.

There are two very recent examples of this sort of conflict in the film House of Gucci and the Hulu series Pam & Tommy. The first tells the story of Patrizia Reggiani’s plot to assassinate the heir to the Gucci legacy, Maurizio Gucci, in the 1990s. The film featured an all-star cast, with the role of Patrizia played by Lady Gaga, who created an internet frenzy when it was revealed Gaga did not want to meet the woman she was playing in the film. While Gaga had her own fair reasons for not wanting to meet the convicted murderer, Reggiani has criticized Gaga’s decision not to reach out. Meanwhile, the Gucci family has been vocal in its criticism of the film, particularly the sympathetic view the movie takes toward Reggiani as a woman trying to climb a patriarchal ladder. They also charged the film with chasing profits first and foremost, without a thought as to the potential impact the film might have on the family. In response to this criticism, director Ridley Scott only pointed out that the Gucci family has its own history of profit-seeking, which has placed them in the “public domain.” We might wonder, however, whether this reasoning is enough to outweigh the potentially traumatic impact of seeing their family member’s murder played out on the big screen. What obligations might filmmakers have when telling someone else’s story – especially a version they can sell to the public?

The new Hulu series, Pam & Tommy (2022), complicates this question even further, as Pamela Anderson not only refused to give consent to the show, but has also spoken out about the trauma she endured. The release of the stolen tape over two decades ago forever scarred Anderson’s life, as she faced all manner of slut-shaming, misogyny, and invasions of privacy. Of course, as her career plummeted, her abusive partner only saw the tape enhance his rock ‘n’ roll image.

All of this was mostly forgotten by younger generations who might’ve never even known the names otherwise. But Hulu’s series dredges up the sordid details and presents them anew. While showrunners claim to be defending Anderson in a way that she was not in the early 2000s, the series still does harm by simply revisiting all of this past trauma and bringing it to the forefront of headlines and social media. The reactions of Anderson and Lee make it clear who still benefits from this production. While Tommy Lee has praised the actor portraying him, Anderson posted to Instagram about refusing to be victimized once again, and continues to identify herself as a survivor. Anderson has also revealed that she will be making her own documentary that truly tells the tale from her perspective. Will shining a new light on the story justify its production?

Anderson’s case is especially troubling because of the potential retraumatization. The Gucci family, too, stands to be deeply impacted, not only emotionally but also financially, by their family’s, and brand’s, name being dragged by a Hollywood film. These productions raise serious concerns over the lack of agency that one can have over how their story is told. What are the ethical boundaries of memorializing someone’s darkest moment for the world to see? What sort of responsibilities should showrunners be held to when attempting to produce “true” versions of someone else’s tale? And what might it say about modern society that we are so hungry for these fictionalized accounts of other people’s lives that they’ve become such lucrative projects?

Content Moderation and Emotional Trauma


In the wake of the Russian invasion of Ukraine, which has been raging violently since February 24th of 2022, Facebook (now known as “Meta”) recently announced its decision to change some of its content-moderation rules. In particular, Meta will now allow for some calls for violence against “Russian invaders,” though Meta emphasized that credible death threats against specific individuals would still be banned.

“As a result of the Russian invasion of Ukraine we have temporarily made allowances for forms of political expression that would normally violate our rules like violent speech such as ‘death to the Russian invaders.’ We still won’t allow credible calls for violence against Russian civilians,” spokesman Andy Stone said.

This recent announcement has reignited a discussion of the rationale — or lack thereof — of content moderation rules. The Washington Post reported on the high-level discussion around social media content moderation guidelines: how these guidelines are often reactionary, inconsistently applied, and not principle-based.

Facebook frequently changes its content moderation rules and has been criticized by its own independent Oversight Board for having rules that are inconsistent. The company, for example, created an exception to its hate speech rules for world leaders but was never clear which leaders got the exception or why.

Still, politicians, academics, and lobbyists continue to call for stricter content moderation. For example, take the “Health Misinformation Act of 2021”, introduced by Senators Amy Klobuchar (D-Minnesota) and Ben Ray Luján (D-New Mexico) in July of 2021. This bill, a response to online misinformation during the COVID-19 pandemic, would revoke certain legal protections for any interactive computer service, e.g., social media websites, that “promotes…health misinformation through an algorithm.” The purpose of this bill is to incentivize internet companies to take greater measures to combat the spread of misinformation by engaging in content-moderation measures.

What is often left out of these discussions, however, is the means by which content moderation happens. It is often assumed that such a monumental task must be left up to algorithms, which can scour through mind-numbing amounts of content at breakneck speed. However, much of the labor of content moderation is performed by humans. And in many cases, these human content moderators are poorly paid laborers working in developing nations. For example, employees at Sama, a Kenyan technology company that is the direct employer of Facebook’s Kenya-based content moderators, “remain some of Facebook’s lowest-paid workers anywhere in the world.” While U.S.-based moderators are typically paid a starting wage of $18/hour, Sama moderators make an average of $2.20/hour. And this is after a pay increase instituted just a few weeks ago; prior to that, Sama moderators made $1.50/hour.

Such low wages, especially for labor outsourced to poor or developing nations, are nothing new. However, content moderation can be a particularly harrowing — in some cases, traumatizing — line of work. In their paper “Corporeal Moderation: Digital Labour as Affective Good,” Dr. Rae Jereza interviews one content moderator named Olivia about her daily work, which includes identifying “non‐moving bod[ies]”, visible within a frame, “following an act of violence or traumatic experience that could reasonably result in death.” The purpose of this is so that videos containing dead bodies can be flagged as containing disturbing content. This content moderator confesses to watching violent or otherwise disturbing content prior to her shift, in an effort to desensitize herself to the content she would have to pick through as part of her job. The content that she was asked to moderate ranged over many categories, including “hate speech, child exploitation imagery (CEI), adult nudity and more.”

Many kinds of jobs involve potentially traumatizing duties: military personnel, police, first responders, slaughterhouse and factory farm workers, and social workers all work jobs with high rates of trauma and other kinds of emotional/psychological distress. Some of these jobs are also compensated very poorly — for example, factory and industrial farms primarily hire immigrants (many undocumented) willing to work for pennies on the dollar in dangerous conditions. Poorly-compensated high-risk jobs tend to be filled by people in the most desperate conditions, and these workers often end up in dangerous employment situations that they are nevertheless unable or unwilling to leave. Such instances may constitute a case of exploitation: someone exploits someone else when they take unfair advantage of the other’s vulnerable state. But not all instances of exploitation leave the exploited person worse-off, all things considered. The philosopher Jason Brennan describes the following case of exploitation:

Drowning Man: Peter’s boat capsizes in the ocean. He will soon drown. Ed comes along in a boat. He says to Peter, “I’ll save you from drowning, but only if you provide me with 50% of your future earnings.” Peter angrily agrees.

In this example, the drowning man is made better-off even though his vulnerability was taken advantage of. As in this case, certain unpleasant or dangerous lines of work may be exploitative, but may ultimately make the exploited employees better-off. After all, most people would prefer poor work conditions to life in extreme poverty. Still, there seems to be a clear moral difference between different instances of mutually beneficial exploitation. Requiring interest on a loan given to a financially desperate acquaintance may be exploitative to some extent, but is surely not as morally egregious as forcing someone to give up their child in exchange for saving their life. What we demand in exchange for the benefit morally matters. Can it even be permissible to demand emotional and mental vulnerability in exchange for a living wage (or possibly less)?

Additionally, there is something unique about content moderation in that the traumatic material moderators view on any given day is not a potential hazard of the job — it is the whole job. How should we think about the permissibility of hiring people to moderate content too disturbing for the eyes of the general public? How can we ask some people to weed out traumatizing, pornographic, racist, threatening posts, so that others don’t have to see them? Fixing the low compensation rates may help with some of the sticky ethical issues concerning this sort of work. Yet, it is unclear whether any amount of compensation can truly make hiring people for this line of work permissible. How can you put a price on mental well-being, on humane sensitivity to violence and hate?

On the other hand, the alternatives are similarly bleak. There seem to be few good options when it comes to cleaning up the dregs of virtual hate, abuse, and shock-material.

Questions on the Ethics of Triage, Posed by a Sub-Saharan Ant


This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


In a new study published in Proceedings of the Royal Society B, behavioral ecologist Erik Frank at the University of Lausanne in Switzerland and his colleagues discuss their findings that a species of sub-Saharan ants bring their wounded nest-mates back to the colony after a termite hunt. This practice of not leaving wounded ants behind is noteworthy on its own, but Frank and fellow behavioral ecologists note that the Matabele ants (Megaponera analis) engage in triage judgments to determine which injured ants can or should be saved – not all living wounded are brought back to the nest for treatment.

Continue reading “Questions on the Ethics of Triage, Posed by a Sub-Saharan Ant”

For Humanitarian Organizations in War Zones, the Ethical Challenge of Neutrality


When institutions fail to fulfill their long-established responsibilities, other groups must fill the void and meet the needs that are going unmet. When this happens, the new responsibilities assumed can conflict with these groups’ prior expectations and prior responsibilities. In states of war and civil unrest, such problems are compounded a thousand-fold.

Continue reading “For Humanitarian Organizations in War Zones, the Ethical Challenge of Neutrality”

Has Your Newsfeed Hurt Your Mental Health?

Within the past few years, it has become even easier to put up videos on social media instantaneously. So many of those that go viral depict something violent, such as the many horrible instances of police brutality that have made the news this year alone. Though often shocking, disturbing, and tragic, these videos do serve as evidence in cases of violence, and sharing them on Facebook can help spread awareness of the crimes committed in them.

Continue reading “Has Your Newsfeed Hurt Your Mental Health?”

Resilience, an ideal that hurts more than it helps

Resilience – the ability to bounce back after trauma or crisis – is an ideal that is increasingly central to our culture. “Bouncing back” can mean breaking even, but generally people think resilience is the ability to come out ahead of where you started, the ability to, as Chicago Mayor Rahm Emanuel put it, never let a crisis go to waste.

Resilience is thought to be the most valuable capacity individuals, populations, and states can possess. For example, British education policy leaders think resilience ought to be taught in schools because it is key to social mobility. Resilience is also a commonly used term and oft-cited ideal in ecological thought and environmental science, as well as in both clinical and popular psychology. The American Psychological Association website features a guide to cultivating personal resilience, and there are countless stories about disabled people who overcome their supposed limitations and achieve above-average feats.

Resilience sounds like a straightforwardly positive thing: the ability to recuperate from loss and injury is essential to human life, after all. However, as Mark Neocleous has argued, when resilience becomes a norm or expectation, it does more damage than good. There are many ways to work through and recover from trauma, and though resilience looks like one from the outside, deeper down it isn’t. Resilience discourse uses therapeutic practices and methods as engines of social and economic production. As a practice or method, resilience has three steps: (1) perform damage so that others can see, feel, and understand it; (2) recycle or overcome that damage, so that you come out ahead of where you were even before the damage hit; and (3) pay that surplus value–the value added by recycling–to some hegemonic institution, like white supremacist patriarchy, capital, or the State.

What Autumn Whitefield-Madrano calls the “therapeutic body image narrative” is an example of this logic. As she argues, the way we expect women to feel about their bodies has changed. Traditionally, women were pressured to conform to an unattainable ideal (thin, blonde, etc.) and to feel guilty and inadequate when they did not meet this norm. Nowadays, however, we expect women to love their bodies: everything from the beauty industry (Dove’s “Real Beauty” campaign) to the pop music industry (Meghan Trainor’s “All About That Bass”) tells women that they shouldn’t hate their bodies, but love them. Yet this love isn’t supposed to be straightforward; rather, it must be the outcome of a struggle with negative body image. As Whitefield-Madrano explains:

“The narrative of body image—with its triumphant tale of overcoming obstacles such as self-loathing, mass media, and the collateral damage of girlhood—is inscribed upon us, particularly among consumers of women’s media, to the point where we forget other bodily narratives may exist.”

The overcoming narrative doesn’t replace the original narrative, but builds on it. The original source of harm isn’t eliminated; it becomes a prerequisite. If the ability to overcome trauma and crisis is something everyone is required to demonstrate, then everyone ought to undergo some trauma or crisis: you can overcome only if you’ve first been set back. Instead of preventing trauma and crisis, resilience discourse makes them prerequisites that everyone must experience in order to demonstrate that they are healthy and normal. Resilience discourse treats trauma and crisis as compulsory experiences.

In turn, this lets society off the hook for systemic problems like poverty, climate change, and sexism. Resilience discourse outsources the work of addressing, surviving, and coping with the harms of systemic, institutionalized inequality to private individuals. If you still feel the negative effects of, say, sexism, it’s your fault: you’re just not resilient enough. Society doesn’t have to spend any resources solving or alleviating harm, nor does it have to put any more effort into reproducing the relations of inequity that cause these harms. And if everyone has to experience some loss and damage, the people who begin with more resources and more access to privilege will always have an easier route to recovery–and often a more successful outcome–than those who don’t.

The main thing that distinguishes resilience from other forms of coping is that resilience ultimately benefits hegemonic institutions more than it benefits you. Just as wage labor generates profits for employers, resilience is a type of labor on the self that generates literal and/or ideological profits for someone else, often at your expense. This isn’t just coping–it’s a specific form of coping designed to get individuals to perform the superficial trappings of recovery from deep, systemic, and institutional issues, all while reinforcing and intensifying the very problems it claims to solve.

Next month I’ll talk more about resilience, gender, and women.

A Student Perspective on Trigger Warnings

I first encountered the classroom trigger warning in the fall semester of my junior year. The course in question covered humanitarian intervention, a particularly dark topic amongst any number of dismal subjects in political science. As a result, soon after talking through the syllabus, our professor made special mention of the topics at hand. The classes to come, we were told, would cover a number of heavy topics: genocide, ethnic cleansing, wartime rape and other forms of systematic violence. Reading about such material on a daily basis, the professor warned, could be emotionally upsetting. Drawing attention to this fact wasn’t an effort to silence the topics or distract from their discomfort. In communicating their emotional gravity, our professor was simply trying to prepare us, encouraging us to keep tabs on our mental well-being as we proceeded through each difficult discussion.

Continue reading “A Student Perspective on Trigger Warnings”

In Wake of the Tomahawk

It was a chilly afternoon in Belgrade, and my group and I had already seen a lot. For the past few hours we had toured much of the city, stopping at places like the grave of Josip Broz Tito and the National Assembly building. All of the locations we had seen were politically significant in some way or another, part of a crash course on recent Serbian history. But what we were about to see was different.

Continue reading “In Wake of the Tomahawk”