
The Promises and Perils of Neurotechnology

image of light beams refracting from model brain

In late May, a groundbreaking study published in Nature outlined how new developments in neurotechnology have allowed a man to walk again after being paralyzed for the better part of a decade. The patient in question – Gert-Jan Oskam – sustained a spinal cord injury in a cycling accident ten years prior, leaving him entirely unable to walk. This injury – like most spinal injuries – essentially meant that Oskam had suffered an interruption in the communication between his brain and certain parts of his body. In the revolutionary new procedure, Oskam received a brain-spine interface (BSI) that essentially created a “digital bridge” between the brain and spinal cord. The treatment was highly effective, with Oskam recovering the ability to stimulate his leg muscles mere minutes after implantation. Within a year, Oskam was once again able to stand, walk, climb stairs, and navigate complex terrain.

The rapid development of neurotechnology will provide a raft of new medical interventions: from repairing spinal injuries such as Oskam’s, to allowing the control of prosthetic limbs. It also creates promising opportunities for the treatment of dementia and Parkinson’s disease, as well as more common mental health issues such as depression, insomnia, anxiety, and addiction. Given this battery of potential medical applications, it would seem that neurotechnology is clearly a force for good. But is this really the case?

On a consequentialist analysis, we must not only consider the benefits of new scientific developments, but also their potential costs. What concerns, then, might arise in the context of neurotechnology?

Given its highly invasive nature, neurotechnology’s greatest threats involve potential breaches of both our (1) privacy and (2) autonomy. Consider, first, privacy. Neurotechnology literally creates a digital connection to our minds – the very thing that makes us us. In doing so, it holds the capacity to gain intimate knowledge of our (previously) most private psychological states. There are very real concerns, then, about what neurotechnology might do with this information. Many of us know the surprise, frustration, and – perhaps – indignation that comes when we are targeted by a commercial tailored specifically to our internet browsing history. Imagine, then, what would happen if such marketing were based on neurotechnology’s knowledge of our innermost thoughts. Consider the audacity of receiving an advert for the latest SUV just moments after thinking “I really need to buy a new car.”

Of course, this threat to privacy already exists thanks to the ubiquity of technology in our daily lives. While not nearly as invasive, digital technology currently enjoys unprecedented access to our lives via our phones and myriad other smart devices (all, of course, in communication with each other and with unfettered access to our social media, digital communications, and financial transactions). In this way, then, neurotechnology might only represent a difference in the degree of our loss of privacy, rather than an entirely novel intrusion in our lives.

Consider, then, how neurotechnology might instead threaten our autonomy. A vital component of autonomy is retaining complete control over our thoughts and actions. The inclusion, via neurotechnology, of any kind of “digital bridge” necessarily compromises this control – creating a vulnerability that might compromise our autonomy. If there is a digital “middleman” between my psychological desire to lift a glass of water, and my hand’s physical performance of this task, then there is the opportunity for my autonomy to be threatened. What if my BSI refuses to perform the action I desire? What if the BSI is hacked, and I am forced to perform an action that I do not desire? In this sense, neurotechnology poses a threat that prior technological advancements – like phones and smart devices – have not yet created. While social media implements algorithms to monopolize our attention, and advertisers might use every trick in the book to manipulate us into purchasing their products, they have not (yet) been able to wrest control of our physical bodies. With the advent of neurotechnology, however, this may become a possibility.

In addition to concerns relating to our privacy and autonomy, there is the larger concern that neurotechnology might threaten our very humanity. There is, of course, much debate in philosophy about what it means to be “human” – or whether there is any such thing as “human nature” in the first place. However, in Enough: Staying Human in an Engineered Age, author Bill McKibben argues that human life would be meaningless if every challenge we faced could be easily overcome. By this reasoning, then, neurotechnology might threaten to strip meaning from our lives by allowing us to triumph over adversity without hard work and the development of important skills and character traits.

Of course, this doesn’t imply that the use of all neurotechnology is wrong. We routinely implement medical technology to make people’s lives better, and certain applications of neurotechnology – like the BSI that allowed Gert-Jan Oskam to walk again – are really no different from this. The novelty of neurotechnology, however, is in its capacity to go beyond therapy and provide enhancement – to take us beyond our traditional capacities and, in doing so, threaten our very nature. This concern – coupled with those regarding the threats it raises to privacy and autonomy – means we should practice caution in its development and implementation. What remains to be seen, however, is whether such fears are merely the techno-paranoia of Luddites, or reasonable concerns about the wholesale exploitation of technology to threaten our privacy, autonomy, and humanity.

We Are Running Out of Insults

photographs of different people yelling

When it comes to speech, kindness is often the best policy, but those who need a sharp word may find themselves in a predicament: how to express what they mean without using language that is demeaning towards marginalized groups.

Perhaps unsurprisingly, many words used as insults have historically been used to oppress. One prominent example is the trio “idiot,” “imbecile,” and “moron,” which were codified as scientific classifications for mental disability in the 19th century and then popularized by the eugenicist psychologist Henry Goddard in the early 20th century. A word you might call your little brother when he is being — well, frustrating — is deeply ableist and tied historically to eugenics.

Language has long been used as a tool of oppression. But in many cases, the link to a word’s more explicit oppressive use is lost to many contemporary users of that word.

Awareness of many words’ oppressive histories is growing, thanks in large part to efforts from scholars and members of marginalized communities, such as disability activist Hannah Diviney, who called out Beyoncé and Lizzo for use of an ableist slur in their lyrics. However, many people are simply unaware that the words they are using are tied historically to ideologies and practices they themselves would find immoral.

Is it permissible to use words that are historically tied to oppression, especially when many people are unaware of the link? In many cases, a person’s ignorance of facts relevant to a situation can absolve that person of moral responsibility. For example, if I give you a ticket for a flight that, unbeknownst to me, will end in a crash, I will not be morally responsible for the harm that comes to you when you board that flight. I could not have known. But we are morally responsible for ignorance born of negligence, and higher stakes increase our responsibility for educating ourselves.

Because the terms in question have been used to suppress entire groups of people, there’s ample opportunity for collateral damage when they are used.

For example, if someone calls their friend a misogynist slur, that slur not only aims its disdain at its direct target (the friend), but also at women and girls in general. In fact, many misogynist insults work only via their denigration of women. When someone is called a sissy, for example, the insult just is the identification of the target with femininity — and therefore, by implication, weakness. This point of view is communicated to anyone who reads or hears the insult.

Philosophers have a name for terms that can cause damage to those to whom they are not directed: such terms are “leaky.” A leaky term is one which, even when it is merely quoted or mentioned rather than used, is still felt as damaging or offensive. The n-word is a paradigm case of a leaky term: it is so-called because, for many, saying the full term in any context is racist.

Not only are insults tied to the oppression of marginalized groups potentially leaky and prone to cause collateral damage (often by design, as in the “sissy” case); people’s speech also reaches further now than at any time in history. Gone are the days of cursing out a politician for a small audience consisting only of one’s family in one’s own living room. Now, a simple @ of the public figure’s username will direct one’s insult right to the screens of hundreds, thousands, or millions of users of social media. More is at stake, because more people are affected.

Generally, when the repercussions for an action are greater, the moral responsibility for carefulness increases.

It may not be a big deal for a parent to serve their family food that seems unspoiled without first consulting its expiration date; the same cannot be said of a restaurant. Similarly, the wider one’s speech reaches, the less one’s ignorance of its meaning is morally exculpatory. Put simply, the leakiness of insults and the far reach of social media increase the stakes for considering the meaning of one’s speech, even beyond speech that is very obviously ableist, racist, or misogynist.

We can also question whether the connections with the past uses are really lost, even in cases less commonly thought of as full-on slurs. The current use of a word can reveal a tacit commitment to the immoral ideals the word represented more explicitly in the past. Consider “idiot” with its historical tie to eugenics. It was an ableist term meant to designate a so-called mental age of the person being evaluated, designating low intelligence. It’s still used to designate low intelligence today, as an insult rather than a clinical diagnosis. And as such, it still carries an ableist dismissiveness of people with cognitive disabilities.

So what’s a non-ableist, anti-racist, non-misogynist to do? The answer may be, of course, that one could simply refrain from insulting another person. This is an excellent suggestion, and it has much to recommend it as a general policy. However, some cases will require the use of an insult, and some people, at least, will find it easier to change their terms than to quit the habit of insulting others.

More generally, people need the ability to transgress a little with language — as when reserving taboo words for special expressions of outrage. Wouldn’t it be helpful for a language to have terms of insult that hit their intended target and no one else?

Perhaps we should invent new insults. Media aimed at children often does. The problem with novel insults is that they often come across as unserious. Similarly to fictional profanity, such as “frak” in place of “f***” on the TV show Battlestar Galactica, novel terms often lack the bite of the original. The attempt to replace the original term, which is damaging beyond its intended meaning, results in a term that is often not strong enough. “Dillweed,” with its surprisingly illustrious career, may get a laugh on TV, but real life may require something stronger.

Numerous lists offer examples of alternative words, but as with much of language, change comes slowly. It may be that change in social consciousness surrounding terms that are ableist, racist, misogynist, etc., must precede widespread use of insults that avoid these pitfalls.

Neurodivergence, Diagnosis, and Blame

photograph of woman using TikTok on iPhone

If your For You page on TikTok looks anything like mine, you know that there is a veritable trove of content about autism and ADHD, much of it focused on questions about diagnosis. The spread of this online content and discussion has been lauded for the potential good it can do, whether by allowing women and non-binary people to access information about conditions that are often missed in those populations or by giving voice to traditionally marginalized groups who often deal with others speaking inaccurately on their behalf.

At the same time, the algorithm may function in ways that trend towards stereotyping the populations in question or pushing content that associates ADHD and autism with things not necessarily related to diagnostic criteria (e.g., ADHD with talking fast or autism with disliking the big room light). This can lead to misunderstandings and poor self-diagnosis that misses underlying issues, such as someone mistaking bipolar for ADHD. While similar misunderstandings and misdiagnoses can happen in medical contexts, those who rely on questionably-credentialed social media influencers may be more susceptible to misinformation.

But why is having a diagnosis so appealing? What does the diagnosis do for autistic and ADHD individuals?

I suspect that at least one part of the answer is found in our practices of blame and our beliefs about who deserves support: the diagnosis promises less self-blame and blame from others and more understanding and accommodations.

How might a diagnosis lead to less self-blame and blame from others? There are several possible philosophical answers to this question.

The first answer is relatively common: ADHD and autism are caused by brain chemistry and structure — they should be seen as medical or neurological conditions, not moral ones. On the purely medical view, ADHD and autism have nothing to do with character or who that person is as a moral agent. So, if someone is diagnosed with ADHD or autism, they shouldn’t be blamed for anything resulting from those conditions because they’re simply medical problems that are out of one’s control.

This answer has a few benefits:

  1. the medical diagnosis adds a sense of legitimacy to the experience of individuals with ADHD and autism;
  2. it provides access to medical care; and
  3. it gives a clear conceptual apparatus to communicate to others about the specific accommodations that are needed.

At the same time, the purely medical answer has key drawbacks.

First, the medical mode is often moralized in its own way, with its own norms about health, disease, and disorder. Sometimes this is appropriate, but other times natural variations in human expression become labeled as disorders or deficits when they should not be (see how intersex people have been treated or the history of eugenics). The aim of medicine is often to provide a cure, but some things do not need to be cured. Medical care can and often has been helpful for individuals needing access to Adderall or Ritalin to function, but the purely medical mode has its limits for understanding the experiences of individuals with ADHD and autism.

Second, the medical mode tends to locate the problem in the individual, though some public health approaches have started to move towards structural and social thinking. For those with ADHD and autism, they may experience their condition as a disability in large part because of a lack of social support and understanding rather than a purely internal discomfort.

Third, the medical mode cannot always be separated from character. See, for example, the overlap of depression and grief or the fact that even normal psychological states are also caused by brain chemistry and structure.

In the case of autistic and ADHD individuals, the condition isn’t something that can be easily carved off from the person, because it affects broad domains of the person’s life. In trying to separate out the autism or ADHD, others can easily create this idea of the “real” non-autistic, non-ADHD person, which can lead to failing to love and appreciate the actual person.

The second philosophical answer to the question of how a diagnosis might lead to less blame is a capacities-based view of moral responsibility. This view is similar to the medical mode, in that the focus is often primarily on the individual, but it differs in its decidedly moral focus. On the capacities view, agents are morally responsible if they have some normal (or minimally normal) capacities of reasoning and choice. Agents are not responsible if they lack these capacities. There are ways of refining this kind of view, but let’s take the basic idea for now.

If we combine this kind of philosophical idea with the idea that ADHD and autistic people are deficient with regard to some of these capacities necessary to be a morally responsible agent, then it would make sense that ADHD and autistic folks would be either less responsible or not responsible at all in certain domains. But if the point of accommodations is to increase capacities, then accommodations should be supported. However, like the medical approach, there are a few drawbacks to at least some versions of this view.

First, there isn’t a clear capacities hierarchy between neurotypical people and neurodivergent people. While someone with ADHD may have trouble starting on a large project in advance, they may work exceptionally well under pressure. Someone with autism may have more difficulty in social situations but could have the ability to focus their time and energy to learn immense amounts of knowledge about a special interest. While parts of the ADHD and autistic experience involve deficits in certain capacities, the overall assessment is much less clear.

Second, claiming that someone with autism or ADHD can’t be a fully morally responsible agent also seems to have a troubling implication: that they might not be full, self-legislating members of the moral community. This kind of view places people with autism and ADHD in the position of, say, a child who has some understanding of moral principles but isn’t yet a full agent.

Neither the medical model nor at least some versions of the capacities model seem to fully provide what people are looking for in a diagnosis. While both offer rationales for removing blame, they can have a dehumanizing effect. The drawbacks to these views, however, teach us some lessons: a good view should 1) consider the whole, actual person, 2) think about the person in their social context, and 3) avoid making the autistic or ADHD person out to be less than full moral agents.

I think the right question to ask isn’t “how is this person deficient in some way that removes responsibility?” but instead “what expectations are reasonable to place on this person, given who they are at this point in time?”

This is a rough suggestion that requires more development than I can give it here.

There are ethical considerations that enter in at the level of expectations which go beyond questions about capacity. What would it look like to be kind? To give each other space to be comfortable? To accept parts of ourselves we can’t change? To build a world that works for everyone? Capacity is certainly implicated by these questions, but it isn’t the whole picture.

By shifting our focus to the question about what expectations are reasonable to place on an individual person, we are recentering the whole person and recognizing the dis/abilities that the individual experiences.

Experiences with autism and ADHD can be very different from person to person, and the accommodations needed will vary from person to person. The expectations we can reasonably place on people with ADHD and autism may not be any less than those without — they may just be different.

And neurotypical people who interact with ADHD and autistic people may also be reasonably expected to provide certain accommodations. Everyone’s needs should be considered, and no one should be othered.

For example, say that an autistic person says something that comes off as rude to a neurotypical friend. This has happened a few times before, each within a new domain of conversation. Every time, the autistic individual apologizes and explains how autism affects their social communication and understanding of social norms and how they’re trying to get things right. Eventually the neurotypical friend gets upset and says “why do you always use the autism as an excuse to get out of responsibility?”

In this case, it doesn’t seem that the autistic person is abdicating responsibility; rather, they’re clarifying what they are actually responsible for. The autistic person isn’t responsible for intentionally saying something rude; they’re responsible for accidentally saying something rude despite their best intentions otherwise. And the autistic person still apologizes for the hurt caused and promises that they will continue to try to do better in the future. Whichever way the two friends negotiate this part of their relationship, it seems important that they each understand where the other is coming from and that each friend’s feelings are given space.

What does this example tell us about the relationship between diagnosis and blame? Perhaps we need to develop alternative frameworks to recontextualize responsibility, rather than simply diminish it.

Were Parts of Your Mind Made in a Factory?

photograph of woman using smartphone and wearing an Apple Watch

You, dear reader, are a wonderfully unique thing.

Humor me for a moment, and think of your mother. Now, think of your most significant achievement, a long-unfulfilled desire, your favorite movie, and something you are ashamed of.

If I were to ask every other intelligent being that will ever exist to think of these and other such things, not a single one would think of all the same things you did. You possess a uniqueness that sets you apart. And the things that make you unique – your particular experiences, relationships, projects, predilections, desires – have accumulated over time to give your life its distinctive, ongoing character. They configure your particular perspective on the world. They make you who you are.

One of the great obscenities of human life is that this personal uniqueness is not yours to keep. There will come a time when you will be unable to perform my exercise. The details of your life will cease to configure a unified perspective that can be called yours. For we are organisms that decay and die.

In particular, the organ of the mind, the brain, deteriorates, one way or another. The lucky among us will hold on until we are annihilated. But, if we don’t die prematurely, half of us, perhaps more, will be gradually dispossessed before that.

We have a name for this dispossession. Dementia is that condition characterized by the deterioration of cognitive functions relating to memory, reasoning, and planning. It is the main cause of disability in old age. New medical treatments, the discovery of modifiable risk factors, and greater understanding of the disorder and its causes may allow some of us to hold on longer than would otherwise be possible. But so long as we are fleshy things, our minds are vulnerable.

*****

The idea that our minds are made of such delicate stuff as brain matter is odious.

Many people simply refuse to believe the idea. Descartes could not be moved by his formidable reason (or his formidable critics) to relinquish the idea that the mind is a non-physical substance. We are in no position to laugh at his intransigence. The conviction that a person’s brain and a person’s mind are separate entities survived disenchantment and neuroscience. It has the enviable durability we can only aspire to.

Many other people believe the idea but desperately wish it weren’t so. We fantasize incessantly about leaving our squishy bodies behind and transferring our minds to a more resilient medium. How could we not? Even the most undignified thing in the virtual world (which, of course, is increasingly our world) has the enviable advantage over us, and more. It’s unrottable. It’s copyable. If we could only step into that world, we could become like gods. But we are stuck. The technology doesn’t exist.

And yet, although we can’t escape our squishy bodies, something curious is happening.

Some people whose brains have lost significant functioning to neurodegenerative disorders are able to do things, all on their own, that go well beyond what their brain state suggests they are capable of – things that would have been infeasible for someone with the same condition a few decades ago.

Edith has mild dementia but arrives at appointments, returns phone calls, and pays bills on time; Henry has moderate dementia but can recall the names and likenesses of his family members; Maya has severe dementia but is able to visualize her grandchildren’s faces and contact them when she wants to. These capacities are not fluky or localized. Edith shows up to her appointments purposefully and reliably; Henry doesn’t have to be at home with his leatherbound photo album to recall his family.

The capacities I’m speaking of are not the result of new medical treatments. They are achieved through ordinary information and communication technologies like smartphones, smartwatches, and smart speakers. Edith uses Google Maps and a calendar app with dynamic notifications to encode and utilize the information needed to effectively navigate day-to-day life; Henry uses a special app designed for people with memory problems to catalog details of his loved ones; Maya possesses a simple phone with pictures of her grandchildren that she can press to call them. These technologies are reliable and available to them virtually all the time, strapped to a wrist or snug in a pocket.

Each person has regained something lost to dementia not by leaving behind their squishy body and its attendant vulnerabilities but by transferring something crucial, which was once based in the brain, to a more resilient medium. They haven’t uploaded their minds. But they’ve done something that produces some of the same effects.

*****

What is your mind made of?

This question is ambiguous. Suppose I ask what your car is made of. You might answer: metal, rubber, glass (etc.). Or you might answer: engine, tires, windows (etc.). Both answers are accurate. They differ because they presuppose different descriptive frameworks. The former answer describes your car’s makeup in terms of its underlying materials; the latter in terms of the components that contribute to the car’s functioning.

Your mind is in this way like your car. We can describe your mind’s makeup at a lower level, in terms of underlying matter (squishy brain stuff), or at a higher level, in terms of functional components such as mental states (like beliefs, desires, and hopes) and mental processes (like perception, deliberation, and reflection).

Consider beliefs. Just as the engine is that part of your car that makes it go, so your beliefs are, very roughly, those parts of your mind that represent what the world is like and enable you to think about and navigate it effectively.

Earlier, you thought about your mother and so forth by accessing beliefs in your brain. Now, imagine that due to dementia your brain can’t encode such information anymore. Fortunately, you have some technology, say, a smartphone with a special app tailored to your needs, that encodes all sorts of relevant biographical information for you, which you can access whenever you need to. In this scenario, your phone, rather than your brain, contains the information you access to think about your mother and so forth. Your phone plays roughly the same role as certain brain parts do in real life. It seems to have become a functional component, or in other words an integrated part, of your mind. True, it’s outside of your skin. It’s not made of squishy stuff. But it’s doing the same basic thing that the squishy stuff usually does. And that’s what makes it part of your mind.

Think of it this way. If you take the engine out of your ‘67 Camaro and strap a functional electric motor to the roof, you’ve got something weird. But you don’t have a motorless car. True, the motor is outside of your car. But it’s doing basically the same things that an engine under the hood would do (we’re assuming it’s hooked up correctly). And that’s what makes it the car’s motor.

The idea that parts of your mind might be made up of things located outside of your skin is called the extended mind thesis. As the philosophers who formulated it point out, the thesis suggests that when people like Edith, Henry, and Maya utilize external technology to make up for deficiencies in endogenous cognitive functioning, they thereby incorporate that technology (or processes involving that technology) into themselves. The technology literally becomes part of them by reliably playing a role in their cognition.

It’s not quite as dramatic as our fantasies. But it’s something, which, if looked at in the right light, appears extraordinary. These people’s minds are made, in part, of technology.

*****

The extended mind thesis would seem to have some rather profound ethical implications. Suppose you steal Henry’s phone, which contains biographical data that isn’t backed up anywhere else. What have you done? Well, you haven’t simply stolen something expensive from Henry. You’ve deprived him of part of his mind, much as if you had excised part of his brain. If you look through his phone, you are looking through his mind. You’ve done something qualitatively different than stealing some other possession, like a fancy hat.

Now, the extended mind thesis is controversial for various reasons. You might reasonably be skeptical of the claim that the phone is literally part of Henry’s mind. But it’s not obvious this matters from an ethical point of view. What’s most important is that the phone is on some level functioning as if it’s part of his mind.

This is especially clear in extreme cases, like the imaginary case where many of your own important biographical details are encoded into your phone. If your grip on who you are, your access to your past and your uniqueness, is significantly mediated by a piece of technology, then that technology is as integral to your mind and identity as many parts of your brain are. And this should be reflected in our judgments about what other people can do to that technology without your permission. It’s more sacrosanct than mere property. Perhaps it should be protected by bodily autonomy rights.

*****

I know a lot of phone numbers. But if you ask me while I’m swimming what they are, I won’t be able to tell you immediately. That’s because they’re stored in my phone, not my brain.

This highlights something you might have been thinking all along. It’s not only people with dementia who offload information and cognitive tasks to their phones. People with impairments might do it more extensively (biographical details rather than just phone numbers, calendar appointments, and recipes). They might have more trouble adjusting if they suddenly couldn’t do it.

Nevertheless, we all extend our minds into these little gadgets we carry around with us. We’re all made up, in part, of silicon and metal and plastic. Of stuff made in a factory.

This suggests something pretty important. The rules about what other people can do to our phones (and other gadgets) without our permission should probably be pretty strict, far stricter than rules governing most other stuff. One might advocate in favor of something like the following (admittedly rough and exception-riddled) principle: if it’s wrong to do such-and-such to someone’s brain, then it’s prima facie wrong to do such-and-such to their phone.

I’ll end with a suggestive example.

Surely we can all agree that it would be wrong for the state to use data from a mind-reading machine designed to scan the brains of females in order to figure out when they believe their last period happened. That’s too invasive; it violates bodily autonomy. Well, our rough principle would seem to suggest that it’s prima facie wrong to use data from a machine designed to scan someone’s phone to get the same information. The fact that the phone happens to be outside the person’s skin is, well, immaterial.

What Should Disabled Representation Look Like?

photograph of steps leading to office building

Over the course of the last two years, COVID-19 has infected millions, with long-haul symptoms permanently impacting the health of up to 23 million Americans. These long-haul symptoms are expected to have significant impacts on public health as a whole as more and more citizens become disabled. This will likely have significant impacts on the workforce — after all, it is much more difficult to engage in employment when workplace communities tend to be relatively inaccessible.

In light of this problem, we should ask ourselves the following question:

Should we prioritize disabled representation and accommodation in the corporate and political workforce, or should we focus on making local communities more accessible for disabled residents?

The answers to this question will determine the systematic way we go about supporting those with disabilities as well as how, and to what degree, disabled people are integrated into abled societies.

The burdens of ableism — the intentional or unintentional discrimination or lack of accommodation of people with non-normative bodies — often fall on individuals with conditions that prevent them from reaching preconceived notions of normalcy, intelligence, and productivity. For example, those with long COVID might find themselves unable to work and with little access to financial and social support.

Conversely, accessibility represents the reversal of these burdens, both physically and mentally, specifically to the benefit of the disabled individual, rather than the benefit of a corporation or political organization.

Adding more disabled people to a work team to meet diversity and inclusion standards is not the same as accessibility, especially if nothing about the work environment is adjusted for that employee.

On average, disabled individuals earn roughly two-thirds the pay of their able-bodied counterparts in nearly every profession, assuming they can do their job at all under their working conditions. Pushing for better pay would be a good step towards combating ableism, but, unfortunately, the federal minimum wage has not increased since 2009. On top of this, the average annual cost of healthcare for a person with a disability is significantly higher ($13,492) than that for a person without ($2,835). Higher wages alone are not enough to overcome this gap.

It is our norm, societally, to push the economic burden of disability onto the disabled, all while reinventing the accessibility wheel often just to make able-bodied citizens feel like they have done a good thing. In turn, we have inventions such as $33,000 stair-climbing wheelchairs being pushed — inventions that rarely are affordable for the working disabled citizen, let alone someone who cannot work — in instances where we could just have built a ramp.

In order for tangible, sustainable progress to be made and for the requirements of justice to be met, we must begin with consistent, local changes to accessibility.

It can be powerful to see such representation in political and business environments, and it’s vital to provide disabled individuals with resources for healthcare, housing, and other basic needs. But change is difficult at the large, systemic level. People often fall through the cracks of bureaucratic guidelines. Given this, small-scale local changes to accessibility might be a better target for achieving change for the disabled community on a national scale.

Of course, whatever changes are made should be done in conversation with disabled members of the community, who will best understand their own experiences and needs. People with disabilities need to be included in the conversation, not made out as some kind of problem for abled people to solve.

This solution morally aligns with Rawls’ theory of justice as fairness, which emphasizes justice for all members of society, regardless of gender, race, ability level, or any other significant difference. It explains this through two separate principles. The first focuses on everyone having “the same indefeasible claim to a fully adequate scheme of equal basic liberties.” This principle takes precedence over the second principle, which states that “social and economic inequalities… are to be attached to offices and positions open to all… to the greatest benefit of the least-advantaged.”

By Rawls’ standards, because of the order of precedence, we should prioritize ensuring disabled citizens’ basic liberties before securing their opportunities for positions of economic and social power.

But wouldn’t access to these positions of power provide a more practical path for guaranteeing basic liberties for all disabled members of society? Shouldn’t the knowledge and representation that disabled individuals bring lead us towards making better policy decisions? According to Enzo Rossi and Olúfémi O. Táíwò in their article on woke capitalism, the main problem with an emphasis on diverse representation is that, while diversification of the upper class is likely under capitalism, the majority of oppressive systems for lower classes are likely to stay the same. In instances like this, where the system has been built against the wishes of such a large minority of people for so long, it may be easier to effect change by working from the bottom up, bringing neighbors together to make their communities more accessible for the people who live there.

Oftentimes, disabled people simply want to indulge in the same small-scale pleasures that their nondisabled counterparts do. When I talk to other disabled individuals about their desires, many turn out to be as simple as the things able-bodied people take for granted every day: cooking in their own apartment, navigating public spaces easily, or even just being able to go to the bank or grocery store. These things become unaffordable luxuries for disabled people in inaccessible areas.

In my own experience with certain disabilities, particularly in my worst flare-ups that necessitated the use of a wheelchair, I just wanted to be able to do very simple things again. Getting to class comfortably, keeping up with peers, or getting to places independently became very hard to achieve, or simply impossible.

Financial independence and some kind of say in societal decisions would certainly have been meaningful and significant, but I really just needed the basics before I could worry about career advancement or systemic change.

Accessibility for disabled people at such simple scales improves not only their independence, but the independence of nondisabled people as well. Any change for disabled people at a local scale would also benefit the larger community. Building better ramps, sidewalks, and doors for people with mobility limitations within homes, educational environments, and recreational areas not only eases the burden of disability, but it also improves quality of life for children, the temporarily disabled, and the elderly in the same community.

Obviously, there is something important to be said about securing basic needs — especially housing, healthcare, food, and clean drinking water — but these, too, would be best handled by consulting local disabled community members to meet their specific requirements.

From here, we could focus on making further investments in walkable community areas and providing adequate physical and social support like housing, basic income, and recreation. We can also make proper changes to our current social support systems, which tend to be dated and ineffective.

The more disabled people’s quality of life improves, the more likely they are to feel supported enough to make large-scale change. What matters at the end of the day is that disabled people are represented in real-life contexts, not just in positions of power.

Representation isn’t just being featured in TV shows or making it into the C-suite; it’s being able to order a coffee at Starbucks, get inside a leasing office to pay rent, or swim at the local pool.

This is not the end-all be-all solution to end ableism, nor is it guaranteed to fix larger structural and political issues around disability, like stigma and economic mobility. But, by focusing on ableism on a local scale in a non-business-oriented fashion, we can improve the quality of life of our neighbors, whether they are experiencing long COVID or living with another disability. Once we have secured basic liberties for disabled folks, then we can worry about corporate pay and representation.

Movies, Beliefs, and Dangerous Messages

photograph of a crowd marching the streets dressed as witches and wearing grotesque masks

In spite of being a welcome part of our lives, movies are not always immune from criticism. One of the most recent examples has been the movie adaptation of Roald Dahl’s book “The Witches,” starring Anne Hathaway. Hathaway, who plays the role of the leader of the Witches, is depicted as having three fingers per hand, which disability advocates have criticized as sending a dangerous message. The point, many have argued, is that by portraying someone with limb differences as scary and cruel – as Hathaway’s character is – the movie associates limb differences with negative character traits and depicts people with limb differences as persons to be feared.

Following the backlash, Hathaway has apologized on Instagram, writing:

“Let me begin by saying I do my best to be sensitive to the feelings and experiences of others not out of some scrambling PC fear, but because not hurting others seems like a basic level of decency we should all be striving for. As someone who really believes in inclusivity and really, really detests cruelty, I owe you all an apology for the pain caused. I am sorry. I did not connect limb difference with the GHW [Grand High Witch] when the look of the character was brought to me; if I had, I assure you this never would have happened.”

As Cara Buckley writes in The New York Times, examples of disfigured people being portrayed as evil abound. From the Joker in “The Dark Knight” to the “Phantom of the Opera,” cases where disabilities are associated with scary features are far from isolated instances. For as much as these concerns regarding “The Witches” might be justified (as I believe they are), critics seem to already anticipate a criticism of their criticism: that the backlash against the movie is exaggerated, and sparked by, to use Hathaway’s words, a “scrambling PC fear.” As Ashley Eakin, who has Ollier disease and Maffucci syndrome, remarks in Buckley’s article, “[o]bviously we don’t want a culture where everyone’s outraged about everything.”

So, is the backlash against the movie exaggerated? I want to suggest here that it is not; I want to suggest that the association between disability and evil portrayed by movies is a real issue, one that connects with recent philosophical discussions. The argument in favor of the dangers of this association seems to be that by portraying people with disabilities as ugly or scary, viewers may internalize that association and then transfer it onto the real world, thus negatively (and unjustly) impacting the way they see people with visible differences. As quoted in Buckley’s piece, Penny Loker, a visible difference advocate, argues that one of the problematic aspects of “The Witches” is that it is a family movie, and this might make the association between limb differences and evil even more pernicious because “kids absorb what they learn, be it through stories we tell or what they learn from their parents.”

Loker’s line of reasoning touches upon an issue that has recently been examined in philosophy and psychology, particularly with respect to how individuals differentiate facts from fiction. Researcher Deena Skolnick Weisberg, for example, who studies imaginative cognition, argues that despite children being competent in distinguishing imagination from reality, they do not always and necessarily do so when it comes to consuming fiction. Quoting a study from Morison and Gardner (1978), Weisberg suggests that “[e]ven though children tend not to confuse real and fictional entities when asked directly, they do not necessarily see these as natural categories into which to sort entities.” This is made even more acute in the presence of negative emotions. Weisberg says that “[c]hildren are more likely to mis-categorize pretend or fictional entities that have a strong emotional valence, particularly a negative emotional valence.” Weisberg’s remarks – and this is my own conjecture – seem to be relevant to the depiction of Hathaway as a witch to be afraid of. One may worry that children who are scared of Hathaway’s character might have more difficulties separating fiction from reality, thus making Loker’s concern even more pressing.

If what has been said so far may apply to children, then what about their parents? Intuitively, one would think that when adults know that what they are watching is fictional, then the worry of associating limb differences with evil does not have any application to reality precisely because adults would categorize what they see in the movie as being purely fictional.

Yet, things are not as simple. As philosopher Neil Levy argues, adults are not always good at categorizing mental representations (such as beliefs, desires, or imaginings). Levy’s argument focuses on fake news and suggests that consuming news that we know to be fake does not insulate us from dangerous consequences. Meaning, under certain circumstances we can “acquire” information as well as beliefs even when we know that the “source” is “fictional.” The main context that situates Levy’s argument is fake news, but I think that its conceptual import can teach us something even in the context of movies. If it is true that even adults have a hard time categorizing mental representations when they know they are fake, then this could potentially impact the way adults, similarly to children, absorb what they see when watching films as well as how they employ it in real life.

What should we make of this? I think one important lesson to draw from this reflection is that once the movie industry recognizes the considerable impact its films can have on the way both children and adults internalize what they see, the industry has an obligation to consider the consequences that portraying certain connections can have. Given how viewers absorb what they see, regardless of their age, the movie industry should strive to be more alert and spot problematic associations like this one.

Life-Life Tradeoffs in the Midst of a Pandemic

photograph of patients' feet standing in line waiting to get tested for COVID

Deciding who gets to live and who gets to die is an emotionally strenuous task, especially for those who are responsible for saving lives. Doctors in pandemic-stricken countries have been making decisions of great ethical significance, faced with the scarcity of ventilators, protective equipment, space in intensive medical care, and medical personnel. Ethical guidelines have been issued, in most of the suffering countries, to facilitate decision-making and the provision of effective treatment, with the most prominent principle being “to increase overall benefits” and “maximize life expectancy.” But are these guidelines as uncontroversial as they initially appear to be?

You walk by a pond and you see a child drowning. You can easily save the child without incurring significant moral sacrifices. Are you obligated to save the child at no great cost to yourself? Utilitarians argue that we would be blameworthy if we failed to prevent suffering at no great cost to ourselves. Now suppose that you decide to act upon the utilitarian moral premise and rescue the child. As you prepare to undertake this life-rescuing task, you notice two drowning children on the other side of the pond. You can save them both – still at no cost to yourself – but you cannot save all three. What is the right thing to do? Two lives count more than one; thus, you ought to save the maximum number of people possible. It seems evident that doctors who are faced with similar decisions ought to maximize the number of lives to be saved. What could be wrong with such an ethical prescription?

Does the ‘lonely’ child have reasonable grounds to complain? The answer is yes. If the child happened to be on the other side of the pond, she would have a considerably greater chance of survival. Also, if, as a matter of unfortunate coincidence, the universe conspired and brought closer to her two extra children in need of rescue, she would have an even greater chance of survival – given that three lives count more than two. But that seems to be entirely unfair. Whether one has a right to be rescued should not be determined by morally arbitrary factors such as one’s location and the number of victims in one’s physical proximity. Rather, one deserves to be rescued simply on the grounds of being a person with inherent moral status. Things beyond your control, and which you are not responsible for, should not affect the status of your moral entitlements. As a result, every child in the pond should have an equal chance of rescue. If we cannot save all of them, we should flip a coin to decide which one(s) can feasibly be saved. By the same logic, if doctors owe their patients equal respect and consideration, they should assign each one of them, regardless of morally arbitrary factors (such as age, gender, race, social status), an equal chance to receive sufficient medical care.

What about life expectancy? A doctor is faced with a choice of prolonging a patient’s life by 20 years and prolonging another patient’s life by 2 months. For many, maximizing life expectancy seems to be the primary moral factor to take into account. But, what if there is a conflict between maximizing lives and maximizing life? Suppose that we can either save a patient with a life expectancy of 20 years or save 20 patients with a life expectancy of 3 months each. Maximizing life expectancy entails saving the former, since 20 years of life count more than 5 years of life, while maximizing lives entails saving the latter. It could be argued that the role of medicine is not merely to prolong life but to enhance its quality; this would explain why we may be inclined to save the person with the longest life expectancy. A life span of 3 months is not an adequate amount of time to make plans and engage in valuable projects, and is also accompanied by a constant fear of death. Does that entail that we should maximize the quality of life as well? Faced with a choice between providing a ventilator to a patient who is expected to recover and lead a healthy and fulfilling life and providing a ventilator to a patient who has an intellectual disability, what should the doctor do? If the role of medicine is merely to maximize life quality, the doctor ought to give the ventilator to the first patient. However, as US disability groups have argued, such a decision would constitute a “deadly form of discrimination,” given that it deprives the disabled of their right to equal respect and consideration.
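To make that comparison concrete, here is a quick back-of-the-envelope tally of the total life expectancy at stake on each side of the choice described above:

\[
20 \ \text{patients} \times 3 \ \text{months each} = 60 \ \text{months} = 5 \ \text{years}, \qquad \text{versus} \qquad 1 \ \text{patient} \times 20 \ \text{years} = 20 \ \text{years}.
\]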

All in all, reigning over life and death is not as enviable as we might have thought.

Life, Death, and Aging: Debating Radical Life Extension

photograph of grandmother and grandson under blankets with a book laid down

An article from The Atlantic has resurfaced in the last week, sparking new discussions about the impact of healthcare on our end-of-life desires and decision-making. In 2014, Ezekiel J. Emanuel articulated his reasons for wanting to die at 75 in a provocative op-ed. In 2019, he confirmed that his position has not changed. Emanuel’s worry is that,

It renders many of us, if not disabled, then faltering and declining, a state that may not be worse than death but is nonetheless deprived. It robs us of our creativity and ability to contribute to work, society, the world. It transforms how people experience us, relate to us, and, most important, remember us. We are no longer remembered as vibrant and engaged but as feeble, ineffectual, even pathetic.

When polled in 2016, over half of people in the U.S. said they would not want to adopt enhancements that would enable them to live longer, healthier lives. While 68% of those polled responded that they thought “most people” would “want medical treatments that slow the aging process and allow the average person to live decades longer, to at least 120 years,” only 38% of respondents said that they personally would want such treatments. In this same poll, 69% came close to agreeing with Emanuel, saying that their ideal lifespan would be 79-100 years (only 14% said 78 or younger, and Emanuel is actually in this small camp). There are many considerations that go into this preference.

One motivation against life extension is the thought that we deserve only some natural amount of time on this earth, perhaps in order to fulfill a religious or spiritual commitment to “move on.” Over half of the respondents in the Pew survey considered treatments that extend life to be “fundamentally unnatural.” The distinction in bioethics between “treatment” and “enhancement” could be playing a significant role here; it is easy to justify intervention to make someone whole, to restore or ensure a state of health. Such interventions are deemed “treatment,” and are more easily covered by insurance in the U.S. “Enhancements,” on the other hand, make one better than well, or do not have wellness as an aim. Of course, there are gray areas in medical interventions that don’t fit neatly into one or the other of these categories. Obstetrics, for example, doesn’t aim to treat an illness, but nor does it seek to “enhance” the future parent.

For many, considering a life without an end point is disorienting in the extreme. Philosophers from Martin Heidegger to Bernard Williams were committed to the idea that death – a final conclusion – is necessary for bringing meaning to life. If life’s meaning is similar to the meaning that a story’s narrative has, then we may think of it as consisting of stages, with different stages shaping the import and significance of the events that came before. If a life were to go on indefinitely, it could undermine the ability to shape a narrative or derive purpose in each stage. Radically or indefinitely delaying the conclusion can be seen to thus diminish or undermine the meaning in one’s life.

For many, the considerations against life extension are grounded less in theory and more in practice. If lives are indefinitely extended, this will increase the elderly population. The potential strain on environmental and social resources from this added population could be cause for concern (à la Malthus). The impact on the economy, if living a longer life means staying in the workforce longer, could mean that young people have a harder time entering the workforce when competing with workers who have decades of experience. If those who extend their lives do not remain in the workforce, then different social pressures would arise – supporting a booming retired population, for instance. Regardless of the labor considerations, an extended lifespan could alter the shape and meaning of relationships. Marriages that previously consisted of a commitment of less than 50 years now may seem like unrealistic arrangements if people can anticipate living another 50 years past the average lifespan today.

Further, the practical considerations for and against radical life extension are enmeshed in our current understandings of health care, aging, and dependence. Our worry about becoming a burden to our loved ones, should our health conditions require some degree of dependent living, is contingent on governmental structures not providing support, either directly to those living with conditions of dependence or to those who will care for them. The way we consider the connection between dependence and burdening is also wrapped up in the way we value IN-dependence.

In the end, the theoretical question regarding the morality of extending the average human lifespan is inextricably tied to the realities of the social and political systems in which we live.

Some Ethical Problems with Footnotes

scan of appendix title page from 1978 report

I start this article with a frank confession: I love footnotes; I do not like endnotes.
Grammatical quarrels over the importance of the Oxford comma, the propriety of the singular “they,” and whether or not sentences can rightly end with a preposition have all, in their own ways and for their own reasons, broken out of the ivory tower. However, the question of whether a piece of writing is better served with footnotes (at the bottom of each page) or endnotes (collected at the end of the document) is a dispute which, for now, remains distinctly scholastic.1 Although, as a matter of personal preference, I am selfishly partial to footnotes, I must admit – and will hereafter argue – that, in some situations, endnotes can be the most ethical option for accomplishing a writer’s goal; in others, eliminating the note entirely is the best option.
As Elisabeth Camp explains in a TED Talk from 2017, footnotes, much like a variety of rhetorical moves in ordinary speech, typically do four things for a text:

  1. they offer a quick method for citing references;
  2. they supplement the footnoted sentence with additional information that, though interesting, might not be directly relevant to the essay as a whole;
  3. they evaluate the point made by the footnoted sentence with quick additional commentary or clarification; and
  4. they extend certain thoughts within the essay’s body in speculative directions without trying to argue firmly for particular conclusions.

For each of these functions (though, arguably less so for the matter of citation), the appositive commentary is most accessible when directly available on the same page as the sentence to which it is attached; requiring a reader to turn multiple pages (rather than simply flicking their eyes to the bottom of the current page) to find the note erects a barrier that, in all likelihood, leads to many endnotes going unread. As such, one might argue that if notes are to be used, then they should be easily usable and, in this regard, footnotes are better than endnotes.
However, this assumes something important about how an audience is accessing a piece of writing: as Nick Byrd has pointed out, readers who rely on text-to-speech software face an unusual barrier precisely because of footnotes, since their software can fail to distinguish text in the main body of the essay from text elsewhere. Imagine trying to read this page from top to bottom with no attention to whether some portions are notes or not:

(From The Genesis of Yogācāra-Vijñānavāda: Responses and Reflections by Lambert Schmithausen; thanks to Bryce Huebner for the example)
Although Microsoft Office includes features for managing how its screen reader moves through Word document files, the fact that many (if not most) articles and books are available primarily in .pdf or .epub formats means that, for many readers, heavily footnoted texts are extremely difficult to read.
Given this, two solutions seem clear:

  1. Improve text-to-speech programs (and the various other technical apparatuses on which they rely, such as optical character recognition algorithms) to accommodate heavily footnoted documents.
  2. Diminish the practice of footnoting, perhaps by switching to the already-standardized option of endnoting.

And since (1) is far easier said than done, (2) may be the most ethical option in the short term, given concerns about accessibility.
Technically, though, there is at least one more immediately implementable option:
  3. Reduce (or functionally eliminate) current academic notation practices altogether.
While it may be true that authors like Vladimir Nabokov, David Foster Wallace, Susanna Clarke, and Mark Z. Danielewski (among plenty of others) have used footnotes to great storytelling effect in their fiction, the genre of the academic text is something quite different. Far less concerned with “world-building” or “scene-setting,” an academic book or article, in general, presents a sustained argument about, or consideration of, a focused topic – something that, arguably, is not well-served by interruptive notation practices, however clever or interesting they might be. Recalling three of Camp’s four notational uses mentioned above, if an author wishes to provide supplementation, evaluation, or extension of the material discussed in a text, then that may either need to be incorporated into the body of the text proper or reserved for a separate text entirely.
Consider the note attached to the first paragraph of this very article – though the information it contains is interesting (and, arguably, important for the main argument of this essay), it could potentially be either deleted or incorporated into the source paragraph without much difficulty. Although this might reduce the “augmentative beauty” of the wry textual aside, it could (outside of unusual situations such as this one where a footnote functions as a recursive demonstration of its source essay’s thesis) make for more streamlined pieces of writing.
But what of Camp’s first function for footnotes: citation? Certainly, giving credit fairly for ideas found elsewhere is a crucial element of honest academic writing, but footnotes are not required to accomplish this, as anyone familiar with parenthetical citations can attest (nor, indeed, are endnotes necessary either). Consider the caption to the above image of a heavily footnoted academic text (as of page 365, the author is already up to note 1663); anyone interested in the source material (both objectively about the text itself and subjectively regarding how I, personally, learned of it) can discover this information without recourse to a foot- or endnote. And though this is a crude example (buttressed by the facility of hypertext links), it is far from an unusual one.
Moreover, introducing constraints on our citation practices might well serve to limit certain unusual abuses that can occur within the system of academic publishing as it stands. For one, concerns about intellectual grandstanding already abound in academia; packed reference lists are one way that this manifests. As Camp describes in her presentation,

“Citations also accumulate authority; they bring authority to the author. They say ‘Hey! Look at me! I know who to cite! I know the right people to pay attention to; that means I’m an authority – you should listen to what I have to say.’…Once you’ve established that you are in the cognoscenti – that you belong, that you have the authority to speak by doing a lot of citation – that, then, puts you in a position to use that in interesting kinds of ways.”

Rather than using citations simply to give credit where it is due, researchers can sometimes cite sources to gain intellectual “street cred” (“library-aisle cred”?) for themselves – a practice particularly easy in the age of the online database and particularly well-served by footnotes which, even if left unread, will still lend an impressive air to a text whose pages are packed with them. And, given that so-called bibliometric data (which tracks how and how frequently a researcher’s work is cited) is becoming ever-more important for early-career academics, “doing a lot of citation” can also increasingly mean simply citing oneself or one’s peers.
Perhaps the most problematic element of citation abuse, however, stems from the combination of easily-accessed digital databases with lax (or over-taxed) researchers; as Ole Bjørn Rekdal has demonstrated, the spread of “academic urban legends” – such as the false belief that spinach is a good source of iron or that sheep are anatomically incapable of swimming – often comes as a result of errors that are spread through the literature, and then through society, without researchers double-checking their sources. Much like a game of telephone, sloppy citation practices allow mistakes to survive within an institutionally-approved environment that is, in theory, designed to squash them. And while sustaining silly stories about farm animals is one thing, when errors are spread unchecked in a way that ultimately influences demonstrably harmful policies – as in the case of a 101-word paragraph, cited hundreds of times since its publication in a 1979 issue of the New England Journal of Medicine, which (in part) laid the groundwork for today’s opioid abuse crisis – the ethics of citations become sharply important.
All of this is to say: our general love for academic notational practices, and my personal affinity for footnotes, are not neutral positions and deserve to be, themselves, analyzed. In matters both epistemic and ethical, those who care about the accessibility and the accuracy of a text would do well to consider what role that text’s notes are playing – regardless of their location on a given page.
 
1  Although a few articles have been written in recent years about the value of notes in general, the consistent point of each has been to lament a perceived decline in the public’s resistance to disinformation (with the thought that notes of some kind could help to combat this). None seems to specifically address the need for a particular location of notes within a text.

Ali Stroker, Radio City Music Hall, and the Value of Accessibility

photograph of the curtains coming up at Radio City Music Hall

On June 9th, Ali Stroker made history when she became the first performer in a wheelchair to receive a Tony Award. Winning ‘Best Performance by a Featured Actress in a Musical’ for her portrayal of the boisterous Ado Annie in the Broadway revival of Rodgers and Hammerstein’s Oklahoma!, Stroker dedicated her award to “every kid who is watching tonight who has a disability, who has a limitation or a challenge, who has been waiting to see themselves represented in this arena.” Although award nominees typically wait in the audience until the winner’s name is called, Stroker was backstage, having just performed a solo as a part of the evening’s festivities. This was fortuitous: later in the evening, Stroker was unable to celebrate in the spotlight with the cast and crew of Oklahoma! after their production won ‘Best Revival of a Musical’ – the stage of Radio City Music Hall has no wheelchair-accessible ramp.

This story highlights two important issues regarding contemporary ableism: problematic representation in the entertainment industry and an overall lack of access.

Regarding the first, the fact that Stroker’s win is historic at all highlights the lack of attention that Hollywood and Broadway have given to disabled performers. Despite the fact that portrayals of disabled characters routinely receive critical acclaim, essentially all of the actors receiving those awards have been able-bodied. By the Washington Post’s count, roughly half of the Best Actor Oscars, for example, have been given for depictions of people with disability or illness – but none of the actors awarded have been actual members of the disabled communities they portrayed.

At best, this sort of production choice encourages inaccurate portrayals by non-experts and perpetuates stereotypes about disabled individuals; at worst, it turns disability into a plot device to be overcome (or, even worse, into punishment for a perceived villain). For example, the 2017 film The Shape of Water was heavily criticized for its clunky depiction of American Sign Language and its explicit comparison of its disabled main character with an inhuman monster – the movie won four Academy Awards and was nominated for an additional eight. Or consider how the three-time Tony-Award-winning musical Wicked treats its character Nessarose: living in a wheelchair, the wicked Nessa not only repeats the overplayed trope of “deformity of body equals deformity of soul,” but her explicit request to be “cured” by her magical sister, in the words of Towson University’s Beth Haller, underscores the “underlying ableist message that disabled people are broken and need to be fixed.”

So, while Stroker’s success is wonderful and much deserved, it comes from within an entertainment industry rife with a problematic relationship to portrayals of disability. Representation of society’s diversity in popular culture is important for many reasons; matters of accuracy and tone are just a few.

Secondly, the inaccessibility of the Tony Awards’ stage prevented Stroker from experiencing the honored tradition of an award winner walking down the aisle while the audience applauds their success – to say nothing, of course, of her literal inability to accept the second award along with the rest of the cast and crew. As Melissa Blake explained for CNN, “For all the talk about inclusion and representation, the disability community continues to remain merely an afterthought, where places are made accessible only after people with disabilities bring it to others’ attention that they can’t get into a building or onto a stage.”

Instead of useful accommodation and equitable treatment, disabled citizens are often the recipients of technological gizmos that do more for their inventors’ reputations than for the needs of the purported users. Dubbed ‘disability dongles’ by advocate and strategist Liz Jackson, devices that claim to do everything from translating written text for those who cannot see it to translating sign language into spoken words make waves on social media, but ultimately do quite little to help those who actually use sign language or have difficulty seeing. In most cases, people with disabilities simply don’t need the help – suggesting otherwise can be insultingly patronizing, if not obviously unnecessary. What would be preferable: providing a $33,000 stair-climbing Scewo (as heralded by dozens of international news outlets) to wheelchair users around the world, or simply building more ramps for them to use?

What if Stroker had been driving a Scewo on the night of the Tonys? Though she would have been able to access the stage from the audience, the expensive device climbs stairs much too slowly for the pace of the televised broadcast; by the time Stroker made it to the podium, she likely would have had no time for her speech. And though it moves quickly enough when not navigating stairs, a motor-powered wheelchair would likely limit Stroker’s mobility during her production’s fast-paced dance numbers (never mind what might happen were it to malfunction). Though some able-bodied people might feel pleased to hear about flashy dongles aimed at helping others, such gadgets run the risk of diffusing our sense of corporate – and, indeed, personal – responsibility to promote a just society.

Re-designing our public architecture to be more representative of the needs of all citizens is far easier, cheaper, and more effective at treating every person fairly than investing in complicated and expensive new technologies that don’t actually solve real problems. As Stroker explained to the New York Times following her win, “I think I had a dream that maybe there could be a ramp built. It’s more than just a logistical thing — it’s saying that you are accepted here, in every part of you.”

The industry would do well to listen.

On Gene Editing, Disease, and Disability

Photo of a piece of paper showing base pairs



On November 29, 2018, MIT Tech Review reported that at Harvard University’s Stem Cell Institute, “IVF doctor and scientist Werner Neuhausser says he plans to begin using CRISPR, the gene-editing tool, to change the DNA code inside sperm cells.” This is a first step toward gene-editing embryos, which is itself a controversial goal, given the debates that arose in response to scientists in China making edits at more advanced stages of fetal development.

Frequently, the concern over editing human genes involves issues of justice, such as developing the unchecked power to produce human beings who would exist solely to serve some other population – organ farming, for example. The moral standing of clones, and worries over the dignity of humanity once such power is developed, get worked over whenever a new advancement in gene editing is announced.

The standard response points to the less controversial use of our growing control over the genetics of offspring: its potential to cure diseases and improve the quality of life for a great number of people. However, this use of genetic intervention may not be as morally unambiguous as it seems at first glance.

Since advanced prenatal testing was developed, the debate about the moral status of selective abortion has been fraught. Setting aside the ethics of abortion itself, would choosing to bring into the world a child that does not have a particular illness, syndrome, or condition, rather than one that does, be an ethical thing for a parent to do? Ethicists are divided.

Some are concerned with the expressive power of such a decision – does making this selection express prejudice against those with the condition or a judgment about the quality of the life that individuals living with the condition experience?

Others are concerned with the practical implications of many people selecting for children without certain conditions. It is unrealistic to imagine that widespread use of such selection would completely eradicate these conditions, so one worry is that the individuals who live with them, in a society where selection is widespread, would be further stigmatized, rendered invisible, or left with fewer resources. Prejudice against conditions that involve disability might also lead to selections that reduce the diversity of the human population, based on misunderstandings about quality of life.

Of course, on the other side of these discussions is the intuitive preference or obligation for parents or those in charge of raising people in society to promote health and well-being. Medicine is traditionally thought to aim at treating and preventing conditions that deviate from health and wellness; both are complex concepts, to be sure, but preventing disease or creating a society that suffers less from disease seems to fall within the domain of appropriate medical intervention.

How does this advancement in gene editing relate to the debate over selective birth? The Harvard project seeks to prevent Alzheimer’s disease by intervening in sperm cells before conception. Loss of human diversity, pernicious ableist expressive power, and negative impact on those who suffer from the disease are the main concerns with intervening for the purported sake of health.

The Shifting Ethical Landscape of Online Shopping

An image of an abandoned mall.

Throughout the course of 2017, after a disappointing bottom line during the 2016 holiday season, Macy’s department store closed 100 of its locations nationwide.  Gap Inc. announced last year that it would close 200 underperforming Gap and Banana Republic locations, with an eye toward shifting greater focus to online sales.  Shopping malls across the country resemble ghost towns—lined with the empty façades of the retail giants that once were.


Iceland Has Almost Eliminated Down Syndrome through Selective Abortion. Is That a Good Thing?

Ultrasound image

A recent article from CBS News reported that almost 100 percent of pregnant women in Iceland choose to terminate their pregnancy, should a pre-natal screening test come back positive for Down Syndrome. Nearly 85 percent of all pregnant women in Iceland take this optional test. Only around one or two children are now born in Iceland with Down Syndrome per year. On the other side of the Atlantic, the Ohio state legislature is currently considering bills to criminalize selective abortion done for terminating a fetus with Down Syndrome. Obviously, opinions differ drastically on the moral permissibility of the termination of Down Syndrome pregnancies.


Finding New Language to Reimagine Disability

Back in 1994, Sarah Dunant published a collection of essays titled The War of Words: The Political Correctness Debate, and it was met with a great deal of media coverage. A large portion of the country had been moved by a sensitivity to language that could offend or contribute to an undesirable power schema, and this sensitivity had been met with scorn or doubt by another large portion of the country, often along party lines.


Moral Philosophy Doesn’t Need a License to Cause Harm

Philosophers Peter Singer and Jeff McMahan recently wrote a very controversial op-ed in The Stone (a blog published by The New York Times) arguing that Anna Stubblefield may have been unjustly treated in her sexual assault conviction. Stubblefield engaged in multiple sexual acts with a person who was severely cognitively impaired.

To Whose Benefit is Aversion Therapy?

It is doubtful that any individual ever grows up expecting to have a child with any type of physical or mental disability. No one plans their life thinking that one day they will have to care for a person with special needs. Parenting is a challenge as it is, but learning to parent a child with disabilities is infinitely more difficult because of this lack of preparedness.


Patriotism Run Awry

With the 2016 presidential election looming, we’re inundated with a number of messages from both major political parties. Many of these messages invoke patriotism as a reason to support one particular party or candidate over another. Republican candidate Ted Cruz, for instance, refers to “faith, family, patriotism” as fundamentally conservative values. Lest one think that only Republicans appeal to patriotism, in a Veterans’ Day speech, Democratic candidate Bernie Sanders said: “If patriotism means anything, it means that we do not turn our backs on those who defended us, on those who were prepared to give their all.” It’s no surprise that both the DNC and the RNC have adopted the colors of the American flag for their logos and websites. After all, who would object to patriotism—love for and pride in one’s country?

I think it’s important to keep in mind that while patriotism can be a political virtue, it can also go wrong. One need not conjure up images of Nazi rallies in the 1940s or pro-Taliban demonstrations in Afghanistan to see instances of patriotism that many in our current political climate would take to be problematic. Our own political history should teach us that we ought to be cautious about patriotism given the opportunity for it to run awry. As just one of many instances, consider how claims of patriotism have been used in the US to justify discrimination against individuals with disabilities, particularly disabled immigrants. At numerous times in our nation’s history, immigration law has been used to keep out individuals that we think would ‘pollute’ or ‘dilute’ our country. Loving our country, it was thought, requires us to protect it. And protecting it requires us to keep the wrong kind of people out. The ‘Red Scare’ following World War I led to tightened controls on immigration as a way of keeping out those who had different political views.

But it’s not just suspicions of anarchism or socialism that led to restrictions on immigration. The US has a long history of using disability to the same end. The Immigration Acts of 1882 and 1924, for example, allowed government officials to restrict the immigration of those who were either disabled or even likely to become so. In the early 20th century, immigration officials were told that “any mental abnormality whatever … justifies the statement that the alien is mentally defective” (Nielsen, 103), a judgment that could be used to prevent an individual’s immigration into the US. (Perhaps not surprisingly, such laws resulted in a higher deportation rate for individuals from Asia than from Europe.) These and other laws were used to exclude people from immigrating or to push them underground once they were in the country. In the late 19th century and the first few decades of the 20th, numerous cities—from San Francisco to Chicago—enacted laws that prohibited those with disabilities or other ‘mutilated or deformed bodies’ from being in public.

The use of ‘disability’ as a way of marginalizing or discriminating against individuals in US history does not apply only to the disabled. Disability was also used to justify slavery in the 19th century. For instance, Samuel Cartwright, a medical doctor and proponent of scientific racism, argued that “blacks’ physical and mental defects made it impossible for them to survive without white supervision and care” (Nielsen, 57). In the 1870s, influential educational leaders argued that attempting to educate women led to their becoming disabled. And as late as the 1940s, the claim that Native Americans were particularly prone to disability was used to justify failing to extend full rights to indigenous populations.

Finally, the US has a long history of using appeals to patriotism as a reason to forcibly sterilize the disabled. Supreme Court Justice Oliver Wendell Holmes’ claim that in the service of the “public welfare … three generations of imbeciles are enough” defended the constitutionality of such a practice, which is not only legal but still practiced in numerous states. The reason for such sterilization, presumably, is the good of the American people.

So, this coming election, feel free to be patriotic. But make sure that the vision of our country that you’re supporting and working to enact is one that is worthy of your love and pride. For not all patriotism is worth it.


Quotations are from Kim E. Nielsen’s A Disability History of the United States (Boston: Beacon Press, 2012). For a discussion of how eugenics influenced US perceptions of disability and contributed to immigration restrictions, see Daniel J. Kevles’ In the Name of Eugenics (Cambridge, MA: Harvard University Press, 1985).