
Black Friday and Ethical Consumption

photograph of blurred crowds moving through two-story mall

Every year millions of pieces of clothing are bought, worn, and discarded in a constantly repeating cycle of consumption. The fashion industry's problems, such as its strong negative impact on the environment and its repeated neglect of workers' rights, are by now well known among consumers. Certain retail stores like H&M have started transitioning towards more sustainable production chains. But no matter how many sustainable materials and higher wages the industry adopts, there remains an ethical concern inherent to the field of fashion—constant consumption. As Robin Givhan of The Washington Post Magazine writes: "Because fashion's fundamental operating principle rests on planned obsolescence, brands are in a ceaseless cycle of replacement and replenishment. Fashion's job is to goad you into wanting, needing more." So, let's take a moment to discuss consumerism before we rush into stores for the Black Friday sales.

Shopping undeniably makes people happy. Casually browsing through aisles of clothes, spending time meandering around malls with friends, or rewarding oneself with a new pair of jeans are just some key elements of a culture that finds clothes-shopping highly enjoyable. Winter, Spring, Black Friday, Christmas, and other kinds of sales, advertisements, and attractive offers keep consumers constantly engaged and active. Because people take great pride in their appearance and in cultivating a sense of personal style, buying a new piece of clothing is an affordable, guaranteed, and immediate way to make oneself happy. And if we predominantly thrift-shop, buy clothing from sustainable lines and companies, and regularly give away clothes that we do not wear to friends or charities, we might feel we've done our part to minimize whatever negative impact might come. Indeed, many believe that as long as you take steps to reduce the harm inflicted on nature and others through your consuming habits, you are an ethical consumer.

But it may be that the only truly ethical consumer is the one who consumes as little as possible. Guilt may be the appropriate response to any purchase you know you do not need. Even if you buy the most sustainably produced scarf, you might feel a sense of discomfort if you know for sure that it is simply going to hang in your closet. As such, it may be the excess that is bothersome to our moral intuitions—the over in over-consumption. We are encouraged to be moderate in everything we do, and this may include our buying habits.

Moderation may be a desirable moral goal, but one might still think that the pleasure that shopping and shopping-centered social interactions give to people should not be disregarded in weighing how frequently to engage in consumerist culture. Indeed, shopping brings pleasure to many, but an argument could be made that most shopping-induced pleasure is inferior to the pleasure one may get from more intellectually or spiritually engaging activities such as reading, meditating, having meaningful conversations with friends, or mastering a new skill. Rather than centering our pleasure- and company-seeking activities around the fleeting and empty joys of purchasing clothes, perhaps we should aim for the more lasting and sustainable happiness that we gain through activities that are meaningful as well as enjoyable.

In order for us to function normally in modern-day society, there are many things we need to buy. However, if we are being perfectly honest with ourselves, there are also many things that we buy even though we do not really need them. When we discuss fast fashion in light of its influence on the environment, it is important that we do not skip the conversation about consumerism itself. As Givhan writes: "The simplest, best path to sustainability is not anti-fashion; it's anti-gorging." To address the roots of our environmental problems, we should not only be asking ourselves what harms our individual purchases will inflict upon the environment but also what kind of a culture we are embracing through our actions. While ethical business practices start with ethical consumers, ethical consumers start by asking themselves the question: Do I really need this?

Freedom of Speech and Sexist Tweets

photograph of Indiana University campus

On November 7th, 2019, Indiana University Bloomington economics professor Eric Rasmusen tweeted a link to an article titled, "Are Women Destroying Academia? Probably." In his tweet, Rasmusen pulled out one quote in particular as worthy of special emphasis: "geniuses are overwhelmingly male because they combine outlier high IQ with moderately low Agreeableness and moderately low Conscientiousness." Among other things, the article claims that 1) the inclusion of women as students in universities has led to the deterioration of rigor in those institutions because emotion has replaced the cold, unemotional evaluation of facts and arguments, 2) women are highly prone to "Neuroticism," precluding them from logical thought, 3) the presence of women in academia reduces the production of the "genius type," a type which is overwhelmingly male, and 4) female academics are too high in agreeableness and too low in IQ to adequately nurture the mind of the male genius. Thus, the article claims, the inclusion of women in academia, both as students and as professors, is harmful to male education and has a chilling effect on the number of geniuses produced by universities.

This isn't the first time that this professor has made headlines for his tweets. In 2003, Rasmusen came under fire for his comments on the fitness of male homosexuals to serve as teachers. In a response to the ongoing controversy, he reaffirmed that position:

“I am on record as saying that homosexuals should not teach grade and high school. I don’t think they should be Catholic priests or Boy Scout leaders either. Back in that kerfuffle when I was widely attacked for saying that, I was careful to say that academia was different. Professors prey on students too, so there is a danger, but the students are older and better able to protect themselves, and there is more reason to accept the risk of a brilliant but immoral teacher.”

In response to the most recent tweet, people immediately began to call on the university to terminate Rasmusen's employment. Lauren Robel, Executive Vice President and Provost of Indiana University Bloomington, issued a statement condemning Rasmusen's actions as "stunningly ignorant, more consistent with someone who lived in the 18th century than the 21st." She also made it perfectly clear that the university cannot fire Rasmusen for his comments because the First Amendment protects them.

The university did take some corrective action in response to Rasmusen's behavior. In her statement, Robel provided the details of the steps the university is taking. First, no class offered by Rasmusen will be mandatory. In this way, students can avoid taking classes from him entirely. Second, grading in Rasmusen's classes will be blind to avoid any bias that might be present. This means that assignments will be set up in such a way that Rasmusen will not know whose paper he is grading. If the nature of certain assignments is such that they cannot be graded in this way, a different faculty member will grade those assignments. With these measures in place, students can avoid any potential bias that they might expect from someone who has expressed the kinds of ideas that Rasmusen has expressed.

The public response to the incident involves some confusion about why exactly it is that Rasmusen can't be fired. Some people view this incident as an indictment of the tenure system, but it is not Rasmusen's tenured status that protects him from being fired over this issue. Indiana University Bloomington is a public, state institution, funded by taxpayer dollars. As such, it cannot fire a professor for the content of speech he or she engages in as a private citizen, and on his Twitter account, Rasmusen was speaking as a private citizen.

Legal protections aside, there are compelling moral reasons that speak in favor of this position. It is valuable, both as a matter of personal liberty and for the good of society, for ideas to be expressed and evaluated. It is important to the aims of democracy that people can speak truth to power. In some cases, the speech involved will be very ugly, but the general practice is so important that we must be committed to it come what may. Punishing speech on the basis of content may seem to make good sense under certain conditions, but good, well-intentioned people don’t always have the final say in what “makes good sense.” To protect our basic liberties, we might sometimes have to be content with procedural justice, even when it seems to fly in the face of substantive justice.

Even if Rasmusen were not speaking as a private citizen, it is possible that his claims should still be protected because of the value of academic freedom. Courts have consistently ruled that academic speech—speech related to teaching and scholarship—is deserving of special protections. There are compelling moral reasons for this position as well. The practice of putting forward ideas that are then critically evaluated by peers is essential to the pursuit of truth and justice. When only the dominant view can be expressed without consequences, that dominant view becomes dogma. Its adherents believe it, not as the result of patient and diligent investigation, but because they would be punished for pursuing alternatives.

On the other hand, there are some real moral costs associated with keeping Rasmusen on the faculty. He seems to be sympathetic to the idea that the presence of women diminishes the quality of a college education. In light of this, it would probably be rather difficult to feel comfortable as a female student in Rasmusen's classes. His female colleagues are also likely to find the climate he has created very unpleasant. In addition, to treat the ideas expressed by Rasmusen as if they are just as likely to be true as any competing idea ignores the fact that we have made significant progress on these issues in recent decades. It encourages the conclusion that there is no such thing as a settled moral issue. The crusade for women's rights was predicated on the idea that, to borrow a phrase from Jean-Paul Sartre, "existence precedes essence." The attitudes that other people have about a woman's "function" shouldn't construct limitations on who she can become. Autonomy is generally viewed as tremendously valuable, in part because of the role that it plays in self-creation. When views like Rasmusen's are treated as if they are deserving of protection, the result is discouraging (to say the least) for women, particularly young college women who are just beginning to craft their own lives.

Finally, there is the issue of moral character. Rasmusen's behavior on social media demonstrates either a misunderstanding of or disrespect for the role that he plays as an educator. The article focuses on the role that universities play in creating "geniuses." Geniuses are rare, and genius isn't obviously valuable for its own sake; its value depends on how it is used. Producing geniuses isn't even close to the primary role of a public university. The role of a professor at such a university is to assist in developing a well-rounded, educated citizenry. Ideally, professors should be preparing students to live productive and meaningful lives. Good teaching requires empathy for students and a genuine desire to understand the conditions under which they are apt to learn. Professors should remember that they are public figures. This means that before posting on social media, professors should reflect on whether what they are posting will contribute to a negative classroom environment that might make it more difficult for certain students to learn.

Morality is a social enterprise, and young people look to adults in positions of authority to determine how they ought to behave. It may seem unfair that public figures carry more of a burden than others to conduct themselves reasonably and with dignity on social media platforms. Ideally, a person who has achieved a certain high level of influence values virtue and has worked hard to develop a strong moral character. People who care about character are the kinds of people who deserve to be in these positions in the first place. On social media and elsewhere, public figures should think carefully about the implications of their messages.

Life on Mars? Cognitive Biases and the Ethics of Belief

NASA satellite image of Mars surface

In 1877, philosopher and mathematician W.K. Clifford published his now famous essay "The Ethics of Belief," in which he argued that it is ethically wrong to believe things without sufficient evidence. The paper is noteworthy for its focus on the ethics involved in epistemic questions. An example of the ethics involved in belief became prominent this week as William Romoser, an entomologist at Ohio University, claimed to have found photographic evidence of insect- and reptile-like creatures on the surface of Mars. The response of others was to question whether Romoser had good evidence for his belief. However, the ethics of belief formation is more complicated than Clifford's account might suggest.

Using photographs sent back by the NASA Mars rover, Romoser has observed insect- and reptile-like forms on the Martian surface. This has led him to conclude, "There has been and still is life on Mars. There is apparent diversity among the Martian insect-like fauna which display many features similar to Terran insects." Much of this conclusion is based on careful observation of the photographs, which contain images of objects, some of which appear to have a head, a thorax, and legs. Romoser claims that he used several criteria in his study, noting the differences between the object and its surroundings, clarity of form, body symmetry, segmentation of body parts, skeletal remains, and comparison of forms in close proximity to each other.

It is difficult to imagine just how significant the discovery of life on other planets would be to our species. Despite the momentousness of such a discovery, several scientists have spoken out against Romoser's findings. NASA denies that the photos constitute evidence of alien life, noting that the majority of the scientific community agrees that Mars is not suitable for liquid water or complex life. Following the backlash against Romoser's findings, the press release from Ohio University was taken down. This result is hardly surprising; the evidence for Romoser's claim simply is not definitive and does not fit with the other evidence we have about what the surface of Mars is like.

Several scientists have, however, offered an alternative explanation for the photos. What Romoser saw can be explained by pareidolia, the tendency to perceive a specific, meaningful image in ambiguous visual patterns. Examples include the tendency of many to see objects in clouds, a man in the moon, and even a face on Mars (as captured by the Viking 1 Orbiter in 1976). Because of this tendency, false positives become more likely: if someone's brain is trained to observe beetles and their characteristics, they may identify ambiguous visual blobs as beetles and conclude that there are beetles where there are none.

The fact that we are predisposed to cognitive biases means that it is not simply a matter of having evidence for a belief. Romoser believed he had evidence. But various cognitive biases can lead us to conclude that we have evidence when we don't, or to dismiss evidence when it conflicts with our preferred conclusions. In her book Social Empiricism, Miriam Solomon discusses several such biases that can affect our decision making. For example, one may be egocentrically biased toward using one's own observations and data over those of others.

One may also be biased towards a conclusion that is similar to a conclusion from another domain. In an example provided by Solomon, Alfred Wegener once postulated that continents move through the ocean like icebergs drift through the water, based on the fact that icebergs and continents are both large solid masses. Perhaps in just the same way, Romoser inferred from visual similarities between insect legs and shapes in a Martian image not only that there were insects on Mars, but that the anatomical parts of these creatures were similar in function to those of similar creatures found on Earth, despite the vastly different Martian environment.

There are several other forms of such cognitive biases. There is the traditional confirmation bias, where one focuses on evidence that confirms one's existing beliefs and ignores evidence that does not. There is the anchoring bias, where one relies too heavily on the first information one hears. There is also the self-serving bias, where one blames external forces when bad things happen but takes credit when good things happen. All of these biases distort our ability to process information.

Not only can such biases affect whether we pay attention to certain evidence or ignore other evidence, but they can even affect what we take to be evidence. For instance, the self-serving bias may lead one to think that one is responsible for a success when in reality one's role was a coincidence. In this case, one's actions become evidence for a belief when they would not be taken as evidence otherwise. This complicates the notion that it is unethical to believe something without evidence, because our cognitive biases affect what we count as evidence in the first place.

The ethics of coming to a belief based on evidence can be even more complex. When we deliberate over whether to use information as evidence for something else, or whether we have enough evidence to warrant a conclusion, we are also susceptible to what psychologist Henry Montgomery calls dominance structuring. This is a tendency to try to create a hierarchy of possible decisions with one dominating the others. This allows us to gain confidence and to become more resolute in our decision making. Through this process we are susceptible to trading off the importance of different pieces of information that we use to help make decisions. This can happen in such a way that, once we have found a promising option, we emphasize its strengths and de-emphasize its weaknesses. If this is done without proper critical examination, we can become more and more confident in a decision without legitimate warrant.

In other words, it is possible that even as we become conscious of our biases, we can still decide to use information in improper ways. It is possible that, even in cases like Romoser's, the decision to settle on a certain conclusion and to publish such findings is the result of such dominance structuring. Sure, we have no good reason to think that the Martian atmosphere could support such life, but those images are so striking; perhaps previous findings were flawed? How can one reject what one sees with one's own eyes? The photographic evidence must take precedence.

Cognitive biases and dominance structuring are not restricted to science. They impact all forms of reasoning and decision making, and so if it is the case that we have an ethical duty to make sure that we have evidence for our beliefs, then we also have an ethical duty to guard against these tendencies. The importance of such ethical duties is only more apparent in the age of fake news and other efforts to deliberately deceive others on massive scales. Perhaps as a public we should more often ask ourselves questions like "Am I morally obliged to have evidence for my beliefs, and have I done enough to check my own biases to ensure that the evidence is good evidence?"

Impeachment Hearings and Changing Your Mind

image of two heads with distinct collections of colored cubes

The news has been dominated recently by the impeachment hearings against Donald Trump, and, as has been the case throughout Trump's presidency, it seems that almost every day there's a new piece of information that is presented by some outlets as a bombshell revelation and by others as really no big deal. While the country at this point is mostly split on whether Trump should be impeached, there is still a lot of evidence left to be uncovered in the ongoing hearings. Who knows, then, how Americans will feel once all the evidence has been presented.

Except that we perhaps already have a good idea of how Americans will feel even after all the evidence has been presented, since a recent poll reports that the majority of Americans say that they would not change their minds on their stance towards impeachment, regardless of what new evidence is uncovered. Most Americans, then, seem to be “locked in” to their views.

What should we make of this situation? Are Americans just being stubborn, or irrational? Can they help themselves?

There is one way in which these results are surprising: the survey question asks whether one could imagine any evidence that would change one's mind. Surely if, say, God came down and decreed that Trump should or should not be impeached, then one should be willing to change one's mind. So when people are considering the kind of evidence that could come out in the hearings, they are perhaps thinking that they will be presented with evidence of a similar kind to what they've seen already.

A lack of imagination aside, why would people say that they could not conceive of any evidence that could sway them? One explanation might be found with the way that people tend to interpret evidence presented by those who disagree with them. Let’s say, for example, that I am already very strongly committed to the belief that Trump ought to be impeached. Couldn’t those who are testifying in his defense present some evidence that would convince me otherwise? Perhaps not: if I think that Trump and those who defend him are untrustworthy and unscrupulous then I will interpret whatever they have to say as something that is meant to mislead me. So it really doesn’t matter what kind of evidence comes out, since short of divine intervention all of the evidence that comes out will be such that it supports my belief. And of course my opposition will think in the same way. So no wonder so many of us can’t imagine being swayed.

While this picture is something of an oversimplification, there’s reason to think that people do generally interpret evidence in this way. Writing at Politico, psychologist Peter Coleman describes what he refers to as “selective perception”:

Essentially, the stronger your views are on an issue like Trump’s impeachment, the more likely you are to attend more carefully to information that supports your views and to ignore or disregard information that contradicts them. Consuming more belief-consistent information will, in turn, increase your original support or disapproval for impeachment, which just fortifies your attitudes.

While Coleman recognizes that those who are most steadfast in their views are unlikely to change their minds over the course of the impeachment hearings, there is perhaps still hope for those who are not so locked in. He describes a "threshold effect," where people can change their minds suddenly, sometimes even coming to hold a belief that is equally strong but on the opposite side of an issue, once the amount of evidence they possess passes a certain threshold. What could happen, then, is that over the course of the impeachment proceedings people may continue to hold their views until the accumulated evidence simply becomes too overwhelming, and they suddenly change their minds.

Whether this is something that will happen given the current state of affairs remains to be seen. What is still odd, though, is that while the kinds of psychological effects that Coleman discusses are ones that describe how we form our beliefs, we certainly don’t think that this is how we should form our beliefs. If these are processes that work in the background, ones that we are subject to but don’t have much control over, then it would be understandable and perhaps (in certain circumstances) even forgivable that we should generally be stubborn when it comes to our political beliefs. But the poll is not simply asking what one’s beliefs are, but what one could even conceivably see oneself believing. Even if it is difficult for us to change our minds about issues that we have such strong views about, surely we should at least aspire to be the kind of people who could conceive of being wrong.

One of the questions that many have asked in response to the poll results is whether the hearings will accomplish anything, given that people seem to have made up their minds already. Coleman’s cautious optimism perhaps gives us reason to think that minds could, in fact, be swayed. At the same time it is worth remembering that being open-minded does not mean that you are necessarily wrong, or that you will not be vindicated as having been right all along. At the end of the day, then, it is difficult not to be pessimistic about the possibility of progress in such a highly polarized climate.

Some Hospitals Sue Their Delinquent Patients. Should They?

photograph of coin jar spilling out on top of medical bills

Despite the passage of the Patient Protection and Affordable Care Act — i.e., Obamacare — in 2010, health care reform remains a contentious political issue. Costly procedures and huge medical bills still pose insurmountable financial burdens for many Americans — even those who are insured; thus, the appetite to ameliorate the pain remains. As reported in a CNBC article, a recent study concluded that 66.5% of all bankruptcies were related to medical issues. Whatever the positive effects of health insurance reform have been, it has not fully protected people from the threat of financial ruin due to unpaid medical bills.

Are there policies that healthcare systems and hospitals have instituted that may be exacerbating this problem? Indeed. Some hospitals will sue their patients for these unpaid medical bills, thus subjecting some patients to the additional expenses and stresses of navigating the legal system. Not all hospitals do this, however, and some hospitals sue their patients much more than others. A recent NPR article covered a study published in The Journal of the American Medical Association (JAMA) showing that 36% of hospitals in Virginia sued patients and garnished wages in 2017. What's more, just five hospitals accounted for more than half of the lawsuits, and all but one of these hospitals were non-profit institutions. As such, it is important to recognize this as a choice made by certain hospitals, rather than a widely accepted and unavoidable practice. In fact, hospitals have other choices for handling unpaid debts: these debts could be passed to collection agencies or written off as "bad debt."

Hospitals, of course, face financial pressures of their own, and suing and garnishing to recoup unpaid medical debt is one strategy for easing these pressures. Hospitals defend the practice as both legal and transparent. Detractors claim that the practice violates the ethos of hospitals, understood as institutions that exist for the community benefit. We can approach the underlying divide in this debate in terms of whether healthcare is morally special. If health care is not special — if it is a normal consumer good just like other consumer goods — then it is fitting and proper to treat trade in healthcare goods as subject to contract law, where the courts play a vital role in ensuring fairness in economic relations. On the other hand, if health care is morally special — if it is not just like other consumer goods because it has some essential connection to the concerns of justice — then different rules governing economic conflicts in the exchange of health care goods ought to apply.

Presume that we treat healthcare like any other good. By receiving healthcare services, customers implicitly agree to pay for them. By refusing to pay, they have broken this implicit contract. The courts exist as a transparent, politically legitimate, and unbiased enforcer of these contracts, ensuring that debts that have been legally and properly incurred get paid. If service providers were not given the public assurance that they will be paid for the services they provide, then they would have to take on either the extra risk of losing out on payments or the extra burden of trying to collect on their own. Hospitals, thus, have a legal right to sue their patients, and it is fitting that they do.

If healthcare is a different kind of good — if healthcare is considered somehow special — then the above standard analysis of why service providers ought to have a right to sue no longer applies so neatly. Two observations can be made to suggest healthcare ought to be treated as special. First, healthcare exists to protect, maintain, and enhance a person's health. Though through most of human history our abilities to significantly affect the course of diseases were limited, technological and social advances of the 20th and 21st centuries have produced a healthcare system that can indeed prolong the length and enhance the quality of lives. Having a life, of course, is a precondition of living a good life. Sickness and premature death limit the opportunities for living a life according to one's life-plan. If justice entails the principle that society ought to foster equal opportunity, then healthcare has special moral significance because of its connection to health and, therefore, life opportunities. This is the basic argument made by Norman Daniels in his 1985 book Just Health Care.

Healthcare's special status may also be rooted in vulnerability. The instinctive value we place on protecting our own health and well-being makes us vulnerable to exploitation when our health is threatened. The standard model outlined above presumes that the consumer will act rationally and take into consideration things like price and need when purchasing a product. And yet when it comes to prolonging one's own life and health, there is often no price we wouldn't accept. This is not to say that reforms to the healthcare system that would force hospitals to be more transparent about price wouldn't be a welcome change. Rather, I doubt that this change alone would significantly reduce patients' vulnerability to exploitation on this matter.

Considering these observations, one may argue, healthcare should be given a special status, and standard norms of contract law ought not to define the rights and responsibilities of providers in attempting to collect on medical debts. If we follow this line of argument, we are still stuck with the obvious rejoinder that providers deserve to be compensated for their vital labor. We should not expect them to work for free. I think this quite quickly pushes us down the path of envisioning publicly funded schemes to finance health care, whether that be a single-payer model or some other mixed system. If healthcare’s moral importance undercuts the private rights of economic actors in the healthcare market, then public obligations ought to step in to ensure a scheme that distributes care to those in need and adequately compensates the caregivers central to the system.

Climate Justice: Whose Responsibility?

photograph of power plant smoke stacks

Now that the effects of global heating are happening and ecological collapse has begun, we are confronted with a set of urgent questions about justice and moral responsibility in responding to our climate emergency. Global heating is of course a global problem – and one that is already disproportionately affecting the world's poorest and most disadvantaged people. It is also a problem caused by the unfettered consumption of resources by people in rich countries and by the failure of our governments to create policies and laws to curb this consumption, to safeguard the environment, and to transition to green economies through the decades of warnings leading up to this crisis.

The moral question of whether we must act is surely answered in the affirmative, and yet a set of questions remains about how to understand that imperative in light of the disproportionate greenhouse gas outputs of developed, industrialized countries compared to the minuscule contributions of many smaller or less industrialized countries, which are often those experiencing the worst effects.

This is important because the way we understand our responsibilities has the potential to influence how solutions are pitched to the public and how policy might be implemented. Some of the arguments traditionally used to ground the moral duty of people in affluent countries to give money to the poor of the Global South are transferable, with little adjustment, to the area of climate policy.

Firstly, it is a common feature of normative moral systems that ethical 'rules,' 'duties,' or, more broadly, ethical actions are universalizable. That is, what is right for one person is right for all, and when a rule prescribes that we act in a certain way towards one person, it is also a general rule that we act in that way with respect to all persons.

In a globalized world, we often assume the moral community extends to all people. This ‘cosmopolitan’ argument maintains that the sphere of moral concern is global, that no individual falls outside of it. This means that where moral duties or requirements can be shown to exist, they would also extend to include people in different socioeconomic and geographical situations from our own.

Since the 1980s, Australian philosopher Peter Singer has been advocating for what he calls the expanded moral circle, using this basic idea to challenge some of our behaviors. In particular, Singer argues for the alleviation of poverty by those with the means to do so. Using his now famous drowning child example, Singer has argued that if we have a moral duty to save a drowning child whom we might otherwise pass by, without sacrificing something of comparable moral value, then we have an equal duty to save a child dying from poverty-induced disease and malnutrition halfway across the world. The only difference is proximity, and that, argues Singer, constitutes a logistical, not a moral, difference.

This is the expanding circle of moral concern: our moral duty to alleviate suffering is as strong for children in faraway places as it is for our own children or the children next door. On Singer's view, it would be immoral to spend our disposable income on expensive clothes, toys, or games that we do not need when there are children elsewhere dying in poverty.

Singer's argument is a version of the argument from humanity, which says that our moral duty to alleviate suffering inheres in the fact that we are able to help, no matter our relationship to that suffering and whether or not we in some way caused it. That we can send aid to the world's poor without sacrificing something of comparable moral worth means we have a duty to do so. This humanitarian argument has broad application: our moral duty exists regardless of the cause of the suffering.

There is a different kind of argument, a duty of justice, according to which we have a moral duty to help others in need only where, and to the extent that, our actions have caused their suffering. This is an argument from responsibility: the exploitation of the Global South has enriched those in the Global North, and the moral imperative for those in rich countries to alleviate poverty is derived from causal responsibility. We have a moral duty to provide redress, in the form of reparations, as a matter of justice.

This argument is narrower in scope. One has a moral duty here only to the extent to which one has been responsible (directly or indirectly? knowingly or unknowingly?) for the suffering of others in far-flung places. But, on the other hand, this argument does embed the need to change our behavior in a way that the humanitarian argument does not.

It should be clear how this is directly transferable to the climate crisis. From a justice perspective, rich, industrialized nations have been burning fossil fuels to power their citizens’ lifestyles at such a rate that the whole global climate system is now tipping out of control. Those least responsible and least able to cope with the effects are already being disproportionately impacted. Therefore rich countries have a moral duty to alleviate the suffering of those in poor countries. (From a humanitarian perspective, rich countries have the capacity to alleviate the causes and provide aid, therefore the moral onus exists because the capacity exists.)

Whether we recognize a duty based in justice, because of "polluter pays" kinds of arguments, or on humanitarian grounds, where we owe reparations on the basis of being most able to help, could make a significant difference when we start talking about managing aid and paying reparations to those affected by the climate crisis. It might, for instance, be possible to argue, along the lines of a duty from justice, for diminished responsibility based on the claim that no country meant to cause global heating and that those who have are not, or not entirely, culpable. This can be countered by reminding ourselves that there have been enough warnings, and that claims about intentions are at best moot and at worst false. What's important to note is that these justice arguments rely upon the extent to which responsibility is admitted or can be established.

On the other hand, it might be at least as important to ask which argument is more likely to persuade people into action. Though the best answer would surely involve a combination of the two, it is worth pursuing the implications of each a little further.

The question of which argument will be most persuasive might just seem like a pragmatic question, not necessarily a moral one. But it could easily be made to work as a moral argument, framed in terms of the moral imperative to get people to act, and act fast.

For example, philosopher Holly Lawford-Smith argues that there are reasons to believe that people are more likely to be motivated to act by the justice argument. The humanitarian argument tracks a correlation between the existence of suffering and a moral duty to alleviate it: everywhere there is suffering, there is also a duty to minimize it. But one might object that this is too morally demanding, and that some may not be willing to accept it. Lawford-Smith suggests this relates to people's intuitions about moral omissions versus moral acts. Research shows that people are inclined to think of omissions as morally less serious than actions in scenarios where an action and an omission have the same outcome. (For example, people tend to think that killing someone through an act is morally worse than letting a person die because of an omission.)

On the other hand, according to the justice argument, the moral duty derives from culpability. The way people act and benefit from unjust institutions makes them culpable for creating the suffering in the first place. Lawford-Smith argues that people are more motivated to act if they feel that some behavior of theirs has caused the suffering. As such, she suggests that it may be more efficacious to argue from justice than from humanity to make a case to the public for why they are duty-bound to act (lobby, agitate, strike, vote, or whatever) on the climate emergency, and for appropriate aid and reparation schemes to achieve global climate justice. If the ultimate moral outcome here is, in fact, urgent action, then the moral and the pragmatic line up, and we must get on with the business of explaining to governments and citizens of rich, industrialized countries why they are, and will be, the cause of massive untold global suffering.

One final observation: at this crucial time, the need to motivate a critical mass of the world's citizens to rise up and push for change is dire. This is the proverbial eleventh hour. If people cannot be motivated by the moral arguments from humanity or from justice, they may be motivated by arguments from self-interest, which are of course not moral arguments at all. In that case, one might point out that this is already no longer a crisis affecting just other people in other places. If the climate emergency has not affected you yet, it soon will. If it does not affect you, it will affect your children. Moral arguments should work, because we are, by and large, a moral, altruistic, and cooperative species. But if they don't, let's hope that arguments from existential self-interest will. Sadly, though, if only these kinds of reasons will persuade people to act, those people on the planet who are not in a position to cope with the crisis will find neither humanity nor justice.

Justice and Rodney Reed: Evidence, Sentencing, and Appeal

photograph of Rodney Reed from prison

On the morning of April 23rd, 1996, the body of 19-year-old Stacey Stites was found in a wooded area just off of a road in rural Texas. Stacey had been raped and strangled to death with her own belt. Seven months later, Rodney Reed was arrested for her murder. Reed was convicted of the crime in 1998 and was sentenced to death by lethal injection. The execution was scheduled to take place on November 20th, 2019. On November 15th, 2019, the Texas Court of Criminal Appeals issued Reed an indefinite stay of execution. The stay was issued in a climate of tremendous support for Reed. Celebrities such as Beyoncé, Kim Kardashian West, Oprah, and Dr. Phil all spoke openly and actively about their support for a potential stay in Reed’s case. Politicians who have voiced similar support include presidential hopefuls Kamala Harris and Pete Buttigieg.

When Stites was recovered, DNA was found both in and on her body. All of the prominent men in her life, including her fiancé Jimmy Fennell, were tested and ruled out. Months later, there was another attack. 19-year-old Linda Schlueter was using a drive-up payphone when she was approached by Reed for a ride. She initially declined, but eventually agreed. During the drive, Schlueter reported that Reed directed her down a dark dirt road. When she refused to take that route, Reed attacked her, repeatedly bashing her head against the steering wheel. She reported an exchange with her attacker: "And I asked him, 'What do you want? What the hell do you want from me?' And he said, 'I want a blowjob.' And I said, 'You'll have to kill me before you get anything from me.' And he said, 'I guess I have to kill you then.'" Schlueter saw the lights of a car approaching and was able to exit the vehicle and flee in the direction of the approaching car. Reed drove off in her car, but police were notified and Reed was quickly apprehended.

Police discovered that Reed's DNA was already in the system because of his connection to the sexual assault of an intellectually disabled woman in 1995, a crime for which he was never tried. Reed denied knowing Stacey Stites, but when his DNA was tested against the material recovered from her body, it was a match. Reed then claimed that he had been having an affair with Stites, but that they were keeping it a secret because Stites was engaged. Once Reed became a compelling suspect in the rape and murder, his genetic profile was tested against other unsolved rapes. It was matched to two unresolved cases—the beating and rape of a 19-year-old woman, and the beating and rape of a 12-year-old girl. When confronted with the evidence in the former case, Reed told a similar story of a clandestine relationship. He was never tried in either of these cases. An all-white jury convicted Reed, who is black, of the murder of Stites, who was white. The evidence against Reed has stood up to the scrutiny of nine appeals.

Many people, however, believe that compelling evidence exists to support the conclusion that Reed is innocent of Stites' murder. First, if the semen had been deposited as early as the prosecution alleged, one would expect to find more of it; the sample had already degraded somewhat by the time it was recovered. The experts at the trial testified that Reed's semen would not have been present at all if consensual intercourse had taken place more than 24 hours earlier, as Reed had alleged. Those same experts now acknowledge that sperm can actually be present many days longer than they suggested in their original testimony.

In addition, new witnesses have come forward claiming to have knowledge that Reed and Stites were, indeed, having an affair. But many are skeptical of this evidence, since these witnesses did not come forward at any point in the last two decades and did not do so at the crucial stage at which the state was building its initial case, despite the existence of reward money for information that might lead to the arrest of a suspect.

One significant piece of evidence in support of Reed's innocence concerns Jimmy Fennell, a police officer and Stites' fiancé at the time of her murder. A new witness claims that Fennell confessed to the murder in private conversation, offering as his motive his rage over the fact that his fiancée had been having an affair with a black man. The witness also claims knowledge that Fennell was an abusive partner to Stites. In the years following Stites' death, Fennell was convicted of the kidnapping and rape of a different woman, a crime that he committed while on duty as a police officer. He was sentenced to ten years in prison for that crime.

This case raises challenging ethical questions. The first concerns the role that the public plays in high-profile cases. The observation that the public can significantly impact the course of a criminal proceeding is not a new one. In one noteworthy case that inspired the film The Fugitive, Dr. Sam Sheppard was convicted of the murder of his wife. After Dr. Sheppard had spent ten years in prison, the United States Supreme Court overturned his conviction because of the role the untethered media presence and public obsession with the case had played.

Public involvement in notorious criminal cases is not new, but what is new is the scope of its reach. Celebrity commentary, though profoundly lacking in any privileged insight or expertise, can be tremendously influential. If Dr. Sheppard was treated unfairly by the power of public opinion, at least the Internet didn’t exist to make his troubles exponentially worse.

One might argue that the public outcry over this case demonstrates that the speech of celebrities, politicians, and their supporters can be a force for justice. Loud voices outside of the legal system can bring about changes that perhaps never could have happened from within. One of the reasons that free speech is so valuable is that it allows citizens to speak truth to power, and as a result, it may play an important role in rectifying injustices.

In opposition to that consideration, it is important to note that our system is supposed to ensure procedural fairness. Any convicted offender, regardless of their notoriety, can expect to enjoy access to the same procedures to redress injustice in the form of the appeals process. But when the public gets involved, some cases get treated differently from others.

An additional concern has to do with the fact that the celebrities and politicians involved may not always have pure motives for speaking publicly about a particular case. A politician may, in certain cases, want to appear “tough on crime.” In others, they may want to come across as advocates of social and racial justice. A celebrity might speak out about a particular case as a publicity stunt to increase their following. These motivations are likely to be inconsistent with justice for the victims or those convicted of crimes.

This case also raises questions about what type of evidence should serve to exonerate a convicted individual on death row. Some believe that if major aspects of the prosecution's case begin to unravel, that should be sufficient for exoneration, or it should at least mandate a new trial for the defendant. Others maintain that to justify abandoning the verdict of a judge or jury, there should be evidence of actual innocence. This is a much higher threshold to reach. Criminal trials are costly, both financially and emotionally, and demand enormous amounts of hard work. We simply can't afford to bring death row cases to trial over and over again. At some point, the decisions of the jury and the courts of appeals must stand. If we are worried that this procedure isn't reliable enough to ensure that innocent people aren't put to death, perhaps we should not have the death penalty at all.

People who research this case at home have access to a lot of information about Reed’s past. Crucially, they have access to the fact that Reed’s DNA matched the evidence associated with two additional rapes. When viewed as a complete picture in this way, it is easy to conclude that Reed is not only a violent rapist, but a serial violent rapist. It is important to note, however, that Reed was not convicted or even tried for those crimes. What difference should that make to our assessment of the case against Reed for the murder of Stacey Stites?

Cruel and Unusual Reasoning? Some Recent SCOTUS Decisions on the Eighth Amendment

Yellow and white corridor with metallic doors of cell rooms in old prison

Between October 2 and October 24, 2002, ten people were killed and three others injured by John Allen Muhammad and Lee Boyd Malvo. This series of attacks, referred to as the D.C. Sniper Attacks, was carried out within the I-95 corridor around Northern Virginia, Baltimore, and Washington, D.C. While John Allen Muhammad was executed by lethal injection in 2009, Lee Boyd Malvo was sentenced to life without parole in Virginia and six life sentences in Maryland.

Malvo, however, is now appealing his Virginia life sentences in the Supreme Court of the United States (SCOTUS), relying on that court's 2016 ruling that the constitutional ban on mandatory life-without-parole sentences for juvenile offenders is to be applied retroactively. That ban was itself enacted in 2012 as an extension of a 2010 ruling which found sentences of life without parole for juvenile offenders convicted of non-homicide crimes to be in violation of the Eighth Amendment ban on cruel and unusual punishments.

The SCOTUS position that mandatory sentences of life without parole, as well as death sentences, constitute cruel and unusual punishment for juvenile offenders but not for offenders in general brings up an interesting question: What does the court consider cruel and unusual? We should also ask, regardless of the SCOTUS opinions on the matter: What do we consider cruel and unusual?

Death sentences are not generally held to be cruel and unusual by the SCOTUS, nor is a death sentence held to be cruel and unusual even in instances when the sentenced person may suffer tremendously. In Bucklew v. Precythe, the court ruled that Russell Bucklew could not demand that his death sentence be executed via gas chamber instead of Missouri's standard pentobarbital lethal injection. Bucklew requested an alternative means of execution due to a rare condition he has which could cause him to drown in his own blood during execution. However, the majority opinion, delivered by Justice Neil Gorsuch, argued that the State of Missouri's interest in concluding pronounced legal sentences in a timely manner outweighed Bucklew's claims. More specifically, the court stated that Bucklew's case did not meet the standard set by the so-called "Baze-Glossip test," which requires that an appeal identify an available and easily implemented alternative execution method that is very likely to ameliorate what would otherwise be significant suffering.

The court struck a seemingly different tone in Hudson v. McMillian when it opined that a prisoner being beaten by a guard may count as cruel and unusual punishment, even when the prisoner does not suffer lasting injuries. Justice Sandra Day O'Connor, writing for the majority, stated that it was not only the extent of a prisoner's injury and suffering that mattered, but also the attitude with which the punishment was inflicted by state agents. Despite the apparent differences between the decisions in Bucklew v. Precythe and Hudson v. McMillian, there is a common thread. Writing for the majority in Bucklew, Justice Neil Gorsuch argued that the mere fact of significant suffering on the part of the inmate did not automatically make a punishment cruel and unusual; rather, what matters is whether the inmate's suffering is directly intended by the agents of the state. In both decisions the attitude of state agents was the predominant consideration, outweighing the extent of a prisoner's suffering.

In the case of death sentences and mandatory life without parole for juveniles, however, the court’s reasoning is that such punishment is cruel and unusual. The 2010 decision in Graham v. Florida stated that sentencing juveniles to mandatory life without parole for non-homicide crimes is cruel and unusual because it doesn’t allow any possibility of releasing convicted people, even when they have demonstrated a commitment to their own rehabilitation. This reasoning was extended to juveniles convicted of homicide in the 2012 Miller v. Alabama ruling. Hence the state of SCOTUS opinion at present is that it is cruel and unusual to foreclose on the possibility that a juvenile offender may reform enough that they should be considered for parole; but that it is not cruel and unusual for an offender to be executed in a fashion that may cause extreme suffering; but that it is cruel and unusual for an offender to be beaten in a way that does not cause lasting injury. Can these views be squared with each other?

To probe this question it is helpful to look at two prominent theories of punishment: utilitarianism and retributivism. The utilitarian theory considers the advisability of punishing a particular offense, or type of offense, in terms of the balance of social benefit to social harm. Questions about whether a punishment will sufficiently deter, incapacitate, or rehabilitate an offender are balanced against the needfulness, efficiency, and cost of that punishment. A punishment may be considered cruel and unusual under such a theory if the social costs outweigh the benefits. For example, sentencing minor offenders to death would greatly erode general freedom and the populace’s acceptance of the legal system. Punishing thieves by chopping off their hands may be effective, and people might even accept it, but less harsh punishments could achieve the same effect. Marijuana possession may be against the law, but it may not be worth trying to deter people from obtaining and using the drug.

Retributivist theories, on the other hand, focus on the concepts of moral desert and fittingness, that is, on ensuring that punishments are proportional to offenses. In such theories the concern is more that offenders get what's coming to them, rather than balancing benefits and detriments to society. In the extreme, a retributivist theory promotes the idea of "an eye for an eye." Hence a person who has killed someone may themselves deserve to be killed. However, pronouncing a death sentence for forgery or speeding is not fitting—not proportional—to the offense. Under a retributivist theory, a punishment would be cruel and unusual if it were grossly out of proportion to the offense.

In the few cases noted above, clear signs of the utilitarian view are manifest in the state of SCOTUS opinion on capital punishment. In cases like Lee Boyd Malvo's, the court brings to bear considerations of whether a sentence allows for the possibility of rehabilitation, even when that sentence clearly fulfills deterrent and incapacitating purposes. At the same time, cases like Russell Bucklew's show that the court is also concerned with cost and efficiency. Whether a punishment is cruel or unusual turns out to be a function of a calculation balancing numerous different values. Even if the final tally in the eye of the court sometimes seems out of balance, perhaps grossly so, the court's example of weighing a variety of factors and deciding on a case-by-case basis is a good one.

Is Death Forever?: The Case of Benjamin Schreiber

photograph of defibrillator practice on a CPR dummy

On Wednesday, November 6th, an appeals court confirmed a lower court's ruling that an inmate serving life without parole had not fulfilled his sentence when his heart stopped during a medical emergency in 2015. The inmate, Benjamin Schreiber, was convicted of murder in 1996 and sentenced to life without parole. Schreiber had argued that his sentence ended when his heart stopped four years ago, even though he was later revived.

There are cases that blur the line between life and death, either because it is difficult to determine whether death has occurred or because it is difficult to define death in the first place. In 2018, a woman in South Africa woke up in a morgue after mistakenly being declared dead. Paramedics at the scene found no heartbeat and detected no signs of life, but were later flummoxed when they spotted the patient breathing.

Cases like this are obviously uncommon, but they do happen. At least 38 times since 1982, patients have been recorded as experiencing "Lazarus Syndrome," or autoresuscitation: medical intervention failed to restart the patient's heart, but the heart nevertheless restarted on its own.

Definitions of medical death have changed with advances in possible medical interventions. Globally and historically, people have looked to circulation and breathing as standards for life and death. Schreiber's proposed standard, the lack of a pulse and of circulating blood, is therefore not without precedent. These standards became complicated the more we learned about the brain and its connection to our lives as individuals.

In 1968 the medical community came together to try to address definitions of death as organ transplants became more successful. Removing organs from patients who still had circulating and oxygenating blood increased the probability of a successful transplant, but ensured the death of the donor patient. According to our legal and moral standards of wrongful harm, there are reasons to perform such procedures only on patients formally pronounced dead. New understandings of the importance of brain functioning for identity and personhood provided useful distinctions to inform this pronouncement.

We know now that blood can continue to circulate without there being any hope of the patient's meaningful interaction with the world again. Neuroscience, meanwhile, shows that certain brain function is necessary for personhood, and that when particular brain functions are absent, doctors can determine that death, in the sense of the loss of personhood, has occurred. Thus patients can be pronounced dead while their organs are still viable for transplant.

When deciding whether or not to harvest organs, the permanence or irreversibility of the patient's state is a crucial consideration. As philosophers, we can wonder whether the finality of death is a crucial aspect of the concept in other applications, present and future.

Appealing to this ambiguity in our evolving definition of "death," Schreiber claimed to have served his time. He accepted his initial sentence of life without parole, but would not accept "life plus one day" (Schreiber claims to have been revived from septic shock against his wishes). The court found Schreiber's claim original, but refused to side with him on the grounds that he was "unlikely" to be dead, given that he had represented himself legally and signed his own documents.

While definitions of death today include some criterion of finality (such as the cessation of life or the permanent loss of a human’s personhood), the discussion in this case leaves open an interesting possibility: If Schreiber is present to represent his interests in court, then could he nevertheless have been dead, thus fulfilling his sentence? In other words, is a death penalty meant to shorten someone’s life or ensure they experience death?

If we can imagine a future where someone exists after a period of cessation of life that we currently understand as death under some medical criteria, then Schreiber's case may be a relic of our stage in medical technology (just as pronouncements of life in the absence of brain function were relics of previous centuries' understandings of life and death). Say technology advances to the point where we can map the complicated and dynamic connections that make you who you are. If we had the ability to produce such an intelligent mapping, then your physical body could cease to live according to our current medical definitions, yet there would remain the possibility that we could recreate a physical foundation for the map to run on, so as to support your conscious existence in the world once more.

If this possibility existed, there would be two important questions related to Schreiber's case. First, would we continue to use "death" in a sufficiently similar way as to say that if he experienced this process, he would qualify as "dead" at one time? If so, then the legal system could declare his sentence fulfilled if it understands the sentence in a particular way (until death), or not if it understands the sentence differently (for all of Schreiber's life).

Second, if we had the technology described above, would the person brought into existence with the dynamic mapping of Schreiber be Schreiber? If the original person in the original body ceased to exist, then creating a supporting body for the dynamic mapping may produce as exact a copy as possible, but the copy may not count as the original Schreiber. If this is the case, then it would be wrong to apply the legal punishment to the created Schreiber.

We can have a definition of death that does not include finality. With this caveat, Schreiber's appeal becomes more compelling if the penalty applied to him is understood as "until death." Regardless, the case brings out questions about how we mean punishments to apply, and raises theoretical questions about how we may apply them in the future.

Some Ethical Problems with Footnotes

scan of appendix title page from 1978 report

I start this article with a frank confession: I love footnotes; I do not like endnotes.
Grammatical quarrels over the importance of the Oxford comma, the propriety of the singular “they,” and whether or not sentences can rightly end with a preposition have all, in their own ways and for their own reasons, broken out of the ivory tower. However, the question of whether a piece of writing is better served with footnotes (at the bottom of each page) or endnotes (collected at the end of the document) is a dispute which, for now, remains distinctly scholastic.1 Although, as a matter of personal preference, I am selfishly partial to footnotes, I must admit – and will hereafter argue – that, in some situations, endnotes can be the most ethical option for accomplishing a writer’s goal; in others, eliminating the note entirely is the best option.
As Elisabeth Camp explains in a TED Talk from 2017, much as rhetorical devices function in normal speech, footnotes typically do four things for a text:

  1. they offer a quick method for citing references;
  2. they supplement the footnoted sentence with additional information that, though interesting, might not be directly relevant to the essay as a whole;
  3. they evaluate the point made by the footnoted sentence with quick additional commentary or clarification; and
  4. they extend certain thoughts within the essay's body in speculative directions without trying to argue firmly for particular conclusions.

For each of these functions (though, arguably less so for the matter of citation), the appositive commentary is most accessible when directly available on the same page as the sentence to which it is attached; requiring a reader to turn multiple pages (rather than simply flicking their eyes to the bottom of the current page) to find the note erects a barrier that, in all likelihood, leads to many endnotes going unread. As such, one might argue that if notes are to be used, then they should be easily usable and, in this regard, footnotes are better than endnotes.
However, this assumes something important about how an audience is accessing a piece of writing: as Nick Byrd has pointed out, readers who rely on text-to-speech software are often presented with an unusual barrier precisely because of footnotes when their computer program fails to distinguish between text in the main body of the essay versus text elsewhere. Imagine trying to read this page from top to bottom with no attention to whether some portions are notes or not:

(From The Genesis of Yogācāra-Vijñānavāda: Responses and Reflections by Lambert Schmithausen; thanks to Bryce Huebner for the example)
Although Microsoft Office offers features for managing the flow of its screen reader program for Word document files, the fact that many (if not most) articles and books are available primarily in .pdf or .epub formats means that, for many readers, heavily footnoted texts are extremely difficult to read.
Given this, two solutions seem clear:

  1. Improve text-to-speech programs (and the various other technical apparatuses on which they rely, such as optical character recognition algorithms) to accommodate heavily footnoted documents.
  2. Diminish the practice of footnoting, perhaps by switching to the already-standardized option of endnoting.

And, since (1) is far easier said than done, (2) may be the most ethical option in the short term, given concerns about accessibility.
Technically, though, there is at least one more option immediately implementable:
  3. Reduce (or functionally eliminate) current academic notation practices altogether.
While it may be true that authors like Vladimir Nabokov, David Foster Wallace, Susanna Clarke, and Mark Z. Danielewski (among plenty of others) have used footnotes to great storytelling effect in their fiction, the genre of the academic text is something quite different. Far less concerned with “world-building” or “scene-setting,” an academic book or article, in general, presents a sustained argument about, or consideration of, a focused topic – something that, arguably, is not well-served by interruptive notation practices, however clever or interesting they might be. Recalling three of Camp’s four notational uses mentioned above, if an author wishes to provide supplementation, evaluation, or extension of the material discussed in a text, then that may either need to be incorporated into the body of the text proper or reserved for a separate text entirely.
Consider the note attached to the first paragraph of this very article – though the information it contains is interesting (and, arguably, important for the main argument of this essay), it could potentially be either deleted or incorporated into the source paragraph without much difficulty. Although this might reduce the “augmentative beauty” of the wry textual aside, it could (outside of unusual situations such as this one where a footnote functions as a recursive demonstration of its source essay’s thesis) make for more streamlined pieces of writing.
But what of Camp’s first function for footnotes: citation? Certainly, giving credit fairly for ideas found elsewhere is a crucial element of honest academic writing, but footnotes are not required to accomplish this, as anyone familiar with parenthetical citations can attest (nor, indeed, are endnotes necessary either). Consider the caption to the above image of a heavily footnoted academic text (as of page 365, the author is already up to note 1663); anyone interested in the source material (both objectively about the text itself and subjectively regarding how I, personally, learned of it) can discover this information without recourse to a foot- or endnote. And though this is a crude example (buttressed by the facility of hypertext links), it is far from an unusual one.
Moreover, introducing constraints on our citation practices might well serve to limit certain unusual abuses that can occur within the system of academic publishing as it stands. For one, concerns about intellectual grandstanding already abound in academia; packed reference lists are one way that this manifests. As Camp describes in her presentation,

“Citations also accumulate authority; they bring authority to the author. They say ‘Hey! Look at me! I know who to cite! I know the right people to pay attention to; that means I’m an authority – you should listen to what I have to say.’…Once you’ve established that you are in the cognoscenti – that you belong, that you have the authority to speak by doing a lot of citation – that, then, puts you in a position to use that in interesting kinds of ways.”

Rather than using citations simply to give credit where it is due, researchers can sometimes cite sources to gain intellectual “street cred” (“library-aisle cred”?) for themselves – a practice particularly easy in the age of the online database and particularly well-served by footnotes which, even if left unread, will still lend an impressive air to a text whose pages are packed with them. And, given that so-called bibliometric data (which tracks how and how frequently a researcher’s work is cited) is becoming ever-more important for early-career academics, “doing a lot of citation” can also increasingly mean simply citing oneself or one’s peers.
Perhaps the most problematic element of citation abuse, however, stems from the combination of easily-accessed digital databases with lax (or over-taxed) researchers; as Ole Bjørn Rekdal has demonstrated, the spread of "academic urban legends" – such as the false belief that spinach is a good source of iron or that sheep are anatomically incapable of swimming – often comes as a result of errors that are spread through the literature, and then through society, without researchers double-checking their sources. Much like a game of telephone, sloppy citation practices allow mistakes to survive within an institutionally-approved environment that is, in theory, designed to squash them. And while sustaining silly stories about farm animals is one thing, when errors are spread unchecked in a way that ultimately influences demonstrably harmful policies – as in the case of a 101-word paragraph cited hundreds of times since its publication in a 1980 issue of the New England Journal of Medicine which (in part) laid the groundwork for today's opioid abuse crisis – the ethics of citations become sharply important.
All of this is to say: our general love for academic notational practices, and my personal affinity for footnotes, are not neutral positions and deserve to be, themselves, analyzed. In matters both epistemic and ethical, those who care about the accessibility and the accuracy of a text would do well to consider what role that text’s notes are playing – regardless of their location on a given page.
 
1  Although there have been a few articles written in recent years about the value of notes in general, the consistent point of each has been to lament the rise of public disinformation (with the thought that notes of some kind could help to combat it). None seems to specifically address the need for a particular location of notes within a text.

Is This an Emergency?: Why Language Matters

image of emergency road sign

Last September, the UN Secretary General António Guterres delivered an address on climate change, calling it a 'climate emergency,' echoing the terminology employed by the prominent climate scientist Prof. Hans Joachim Schellnhuber.

The language we use matters a great deal, and it has ethical implications of its own.

The situation is severe: a raft of recent reports from agencies such as the IPCC and the UN has scientists sounding the alarm that human society is in jeopardy from the heating atmosphere, the accelerating decline of the Earth's natural life-support systems, and other forms of ecological destruction. It is therefore manifestly necessary to speak about the situation with an appropriate level of alarm and urgency.

There is a concern that the media have, for decades, failed to adequately report the dangers of greenhouse emissions and the scale of their increase. In fact it seems clear that some of the mainstream media – primarily right-wing and conservative presses – have been chronically under-reporting the dangers of climate change while deliberately downplaying the problem with skeptical reporting.

Many governments have been treating the issue with the same mixture of obfuscation and ignorance. In the past several years some have become much worse, notably America under Trump and the Australian government now under Scott Morrison. Morrison recently responded to the impassioned speech given to the UN Climate Conference by Greta Thunberg by saying that the climate change debate is subjecting Australian children to "needless anxiety."

The first ethical implication of language choice is about truth. If we have any hope of addressing this issue, then the truth must be widely, openly, and adequately acknowledged.

It is the responsibility of government, in its role as sovereign state, to inform its citizens. Democratic governments have this responsibility in virtue of the fact that the people are needed to grant their authority legitimacy. To function in this role, citizens must have the relevant knowledge to choose the right candidates and correctly instruct them in how to serve the community. (A free press has a democratic responsibility in this regard as well. A free press is only free when its agenda is not set by special interests.)

Recently, The Guardian decided to change some of the language it uses to report on the climate and ecological emergency, introducing "terms that more accurately describe the environmental crises facing the world." Instead of "climate change" the new terms are "climate emergency, crisis or breakdown," and "global heating" is favored over "global warming."

We’ve used the term ‘climate change’ for several decades in reference to what is also often called ‘global warming,’ or sometimes ‘the greenhouse effect.’ But, to many, this terminology makes the problem sounds like a gradual, natural, and passive event. But in reality we are now using it to denote something that has been caused and is rapidly being accelerated by human actions – so is neither gradual, nor natural.

António Guterres told the gathering of leaders in September 2018: “We face a direct existential threat,” adding that we have until 2020 to change our behavior or “we risk missing the point where we can avoid runaway climate change, with disastrous consequences for people and all the natural systems that sustain us.” Given that this is the case, the language of crisis and emergency is not alarmist – it is warranted and necessary.

Professor Richard Betts, of Britain's meteorological monitoring organization, has called for a change from 'global warming,' which many have also noted sounds rather too benign, to 'global heating,' which more accurately reflects the reality of what is happening.

Future life on Earth, and human society present and future, are now in serious jeopardy. With so little time left to turn the situation around, we are going to have to start acting like this is an emergency; yet complacency is still rife, and it is now the greatest barrier to urgent change.

Language has been part of the complacency, and changing the language we use is necessary for action. To combat the problem, we first need to understand our situation, and to do so we must be able to name it. We also need to reorient ourselves in the way we talk about our current predicament to reflect the fact that the effects of climate change are happening now.

The outcomes will be so bad that there is no other mode to adopt than emergency-mode if we are to mobilize in time, and our language needs to reflect that. We can talk about 'climate change' and then turn back to topics of ordinary life; we can drift away from 'climate change.' But we cannot as easily drift away from an emergency. Once you start talking about an emergency, about breakdown and collapse, it is much harder to turn away. We are in a crucial moment, a vanishing window of opportunity, and we can ill afford to turn back to other, everyday subjects.

We need our language to be unequivocal about the seriousness of the situation, to help reduce cognitive dissonance and allow us to make the conceptual connections we need in order to act. That is why the question of what we call this crisis is a moral question.

The analogy of the burning house, evoked by Thunberg in her speech, is apt here:

The building is on fire, and all occupants need to move very quickly or face serious injury or death. If in that situation I merely say to the occupants something like "it's getting warmer in here" instead of something more like "the house is on fire, quick, run for your life!" then I have essentially lied to them through omission and am guilty of moral negligence.

I can say I didn’t at first know it was on fire, or did know but didn’t believe the situation to be serious, it will still be surprising that it has taken so long to reach the conclusion that the building is on fire and we must get out. That is, as soon as one comes to the conclusion that we are in very serious trouble, one immediately wonders how we can possibly be in such serious trouble when we could easily have prevented from becoming a serious problem.

On one view, our language ought to change as the changing situation demands; but one wonders where we might be if our way of talking about the situation (our way of comprehending it) had reflected its seriousness from the beginning.

Those are very important questions, and the answers we can provide to them might in the long run have a bearing on our continued survival – but not if we don’t get out of the burning building now.

There seems to be a clear moral duty here for governments, the media, and whoever else is participating in the discussion to tell it like it is – to stop softening the truth. That duty is, I believe, connected with any hope we might have of taking urgent action to mitigate the impending crisis. In one sense our language choices seem immaterial – this is an emergency, whether we say so or not. But our survival probably depends on our saying so and then acting like we mean it.

Forbidden Knowledge in Scientific Research

closeup photograph of lock on gate with iron chain

It is no secret that science has the potential to have a profound effect on society. This is often why scientific results can be so ethically controversial. For instance, researchers have recently warned of the ethical problems associated with scientists growing lumps of human brain in the laboratory. The blobs of brain tissue grown from stem cells developed spontaneous brain waves like those found in premature babies. The hope is that the study offers the potential to better understand neurological disorders like Alzheimer's, but it also raises a host of ethical worries concerning the possibility that this brain tissue could reach sentience. In other news, this week a publication in the journal JAMA Pediatrics ignited controversy by reporting a supposed link between fluoride exposure and IQ scores in young children. In addition to several experts questioning the results of the study itself, there is also concern about the potential effect this could have on the debate over the use of fluoride in the water supply; anti-fluoride activists have already jumped on the study to defend their cause. Scientific findings have an enormous potential to dramatically affect our lives. This raises an ethical issue: should certain topics, owing to the ethical concerns they raise, be off-limits for scientific study?

This question is studied in both science and philosophy, and is sometimes referred to as the problem of forbidden knowledge. The problem can include issues of experimental methods and whether they follow proper ethical protocols (certain knowledge may be forbidden if obtaining it requires human experimentation), but it can also include the impact that the discovery or dissemination of certain kinds of knowledge could have on society. For example, a recent study found that girls and boys are equally good at mathematics and that children's brains function similarly regardless of gender. However, there have been several studies going back decades which tried to explain differences between the mathematical abilities of boys and girls in terms of biological differences. Such studies have the potential to reinforce gender roles and to justify them as biologically determined. This can spill over into social interactions. For instance, Helen Longino notes that such findings could lead to a lower priority being placed on encouraging women to enter math and science.

So, such studies have the potential to impact society, which is an ethical concern, but is this reason enough to make them forbidden? Not necessarily. The bigger problem involves how adequate these findings are, the concern that they could be incorrect, and what society is to do about that until correct findings are published. For example, in the case of math testing, it is not that difficult to find significant correlations between variables, but the limits of this correlation and the study's potential to identify causal factors are often lost on the public. There are also methodological problems; some standardized tests rely on male-centric questions that can skew results, and different kinds of tests and different strategies for preparing for them can also distort findings. So even where correlations are found and there are no major flaws in the assumptions of the study, the results may not be very generalizable. In the meantime, such findings, even if they are corrected over time, can create stereotypes in the public that are hard to get rid of.

Because of these concerns, some philosophers argue either that certain kinds of questions should be banned from study, or that studies should avoid trying to explain differences in abilities and outcomes according to race or sex. For instance, Janet Kourany argues that scientists have moral responsibilities to the public and should thus conduct themselves according to egalitarian standards. If a scientist wants to investigate the differences between racial and gender groups, they should seek to explain these in ways that do not assume the difference is biologically determined.

In one of her examples, she discusses studying differences in incidents of domestic violence between white and black communities. A scientist should highlight the similarities of domestic violence within white and black communities and seek to explain dissimilarities in terms of social issues like racism or poverty. On such a stance, research into racial differences as an explanation of differing rates of domestic violence would constitute forbidden knowledge. Only if these alternative egalitarian explanations empirically fail can a scientist then choose to explore race as a possible explanation of differences between communities. This approach avoids perpetuating a possibly empirically flawed account suggesting that black people might be more violent than other ethnic groups.

She points out that the alternative risks keeping stereotypes alive even while scientists slowly prove them wrong. Just as in the case of studying mathematical differences, the slow settlement of opinion within the scientific community leaves society free to entertain stereotypes as "scientifically plausible" and to adopt potentially harmful policies in the meantime. In his research on the matter, Philip Kitcher notes that we are susceptible to instances of cognitive asymmetry where it takes far less empirical evidence to maintain stereotypical beliefs than it takes to get rid of them. This is why studying the truth of such stereotypes can be so problematic.

These types of cases seem to offer significant support for labeling particular lines of scientific inquiry forbidden. But the issue is more complicated. First, telling scientists what they should and should not study raises concerns over freedom of speech and freedom of research. We already acknowledge limits on research on the basis of ethical concerns, but this represents a different kind of restriction. One might claim that so long as science is publicly funded, there are reasonable, democratically justified limits on research, but the precise boundaries of this restriction will prove difficult to identify.

Secondly, and perhaps more importantly, such a policy has the potential to exacerbate the problem. According to Kitcher,

“In a world where (for example) research into race differences in I.Q. is banned, the residues of belief in the inferiority of the members of certain races are reinforced by the idea that official ideology has stepped in to conceal an uncomfortable truth. Prejudice can be buttressed as those who opposed the ban proclaim themselves to be the gallant heirs of Galileo.”

In other words, one reaction to such bans on forbidden knowledge, so long as our own cognitive asymmetries are unknown to us, will be to protest that this is an undue limitation on free speech for the sake of politics. In the meantime, those who push for such research can become martyrs, and censoring them may only serve to draw more attention to their cause.

This obviously presents us with an ethical dilemma. Given that there are scientific research projects that could have a potentially harmful effect on society, whether the science involved is adequate or not, is it wise to ban such projects as forbidden knowledge? There are reasons to say yes, but implementing such bans may cause more harm or drive more public attention to such issues. Banning research on the development of brain tissue from stem cells, for example, may be wise, but it may also cause such research to move to another country with more relaxed ethical standards, meaning that the potential harms could be much worse. These issues surrounding how science and society relate are likely only going to be solved with greater public education and open discussion about what ethical responsibilities we think scientists should have.

Commodification and Exploitation in Egg Donation

image of ovarian follicles

Egg donation is a form of assisted reproductive technology (ART) in which a woman donates eggs to enable another woman to conceive a child. The process of egg donation usually involves in vitro fertilization technology, as the eggs undergo fertilization in a laboratory; alternatively, the unfertilized egg can be frozen and stored to be used at a later time. Regulated according to guidelines set by the American Society for Reproductive Medicine, this form of ART has gained momentum in the US and around the world since the first child was born from egg donation in Australia in 1983. In the US today, egg donation accounts for about 18% of IVF births.

While the allowance of compensation for egg donors varies by country, egg donors in the US typically receive up to $8,000 for the retrieval of eggs. While egg donation is a sought-after fix for those unable to conceive and stands to provide real benefits to donors and recipients alike, this form of IVF can be a sensitive subject, as it raises a number of medical ethics questions.

A common concern raised by medical ethicists regarding egg donation is the type of consent obtained in the process of donating eggs. Although most donor recruiting agencies cite altruism and reliability as the most desirable qualities in a donor, the incentive of monetary compensation could hinder a donor’s capacity to make coherent and informed decisions. Studies have shown that donors motivated by financial incentives suffer more emotional trauma from the process and have a higher probability of regretting their decision than women who express altruistic motivations. In part to avoid risking the commodification of motherhood, nations such as the UK and Australia have ruled any form of monetary compensation to the egg donor to be illegal.

However, Lori Andrews (1992) notes that more often than not, "when society suggests that a certain activity should be done for altruism rather than money, it is generally a woman's activity." In agreement with Andrews, sociologist Anna Curtis argues, in her 2010 article "Giving 'Til It Hurts: Egg Donation and the Costs of Altruism," that women should be sufficiently compensated if egg donation is to remain legal in the US, given the health risks the procedure poses, the emotional strain donors are subjected to, and the time spent going through and recovering from the procedure.

Due to the technical and invasive nature of egg donation, donors may lack a complete understanding of all the potential short-term as well as long-term risks associated with donating eggs. Curtis also argues that the donors’ emotional investment can cause them to downplay the risks of the procedure. Curtis’ research suggests that not only did donors experience joy over a successful donation, but they also felt guilty when the procedure failed. When Curtis questioned donors regarding their knowledge of health risks associated with egg donation, she found that the women claimed to give “little or no thought to the possible short- or long-term risks involved in donating, despite their ability to list many of these very risks,” demonstrating that even if the donors are aware of the risks, they may not seriously consider the likelihood of these risks affecting them in the future, possibly because of their emotional investment in the egg donation process.

Furthermore, egg donation is a costly process — not only in terms of the emotional and physical strain put on the donor, but also in terms of the financial expenses for the recipient. The inequality of access to ART means that reproductive technology is a viable option exclusively for the wealthy. The feasibility of egg donation must therefore be analyzed recognizing that there may be a large demographic of infertile individuals who would choose ART to conceive a child had they the financial means, but are not able to do so due to the high cost of reproductive technology.

The eugenic commodification of egg donors is an additional ethical concern. Advertisements directed towards egg donors usually depict specific racial, physical, and intellectual characteristics as desirable, making it clear that the agencies are recruiting a certain type of woman, whether based on ethnicity, height, or even scores obtained on standardized tests. This emphasis on eugenics perpetuates the commodification and exploitation of women's bodies, reducing the female body to a product with reproductive value.

With these ethical concerns in mind, infertility specialists, agencies that recruit egg donors, as well as recipients of the donated egg must consider the multifaceted implications of egg donation when assessing regulations regarding egg donation. By doing so, individuals and agencies alike can make equitable and informed decisions concerning the emotional, physical, and monetary costs of egg donation to both the donors and the recipients.

Is the “Preventing Animal Cruelty and Torture Act” a Step in the Right Direction?

photograph taken of turkeys overcrowded in pens

On October 22nd, Congress unanimously passed the “Preventing Animal Cruelty and Torture Act.” The law makes certain acts of cruelty against animals federal crimes. Before the federal law was passed, legislation protecting animals was largely a matter reserved for state legislatures. The law was met with praise from both private citizens and animal welfare organizations like the American Society for the Prevention of Cruelty to Animals (ASPCA).

The scope of the law is one of its most noteworthy positive features. Many animal welfare laws arbitrarily restrict protections to only certain species of animals—often companion animals or animals that human beings tend to find cute or pleasant. Bucking that trend, this bill covers "non-human mammals, birds, reptiles or amphibians." Specifically, the law prohibits the "crushing" of animals, where "crushing" is defined as "conduct in which one or more living non-human mammal, bird, reptile, or amphibian is purposely crushed, burned, drowned, suffocated, impaled, or otherwise subjected to serious bodily injury."

While the law is laudable when it comes to the range of animals it protects, it is arbitrary in other ways. The protection the law provides is subject to noteworthy exemptions. The following conduct is exempt from protection: conduct that is, “a customary and normal veterinary, agricultural husbandry, or other animal management practice,” “the slaughter of animals for food,” “hunting, trapping, fishing, a sporting activity not otherwise prohibited by Federal law, predator control, or pest control,” action taken for the purpose of “medical or scientific research,” conduct that is “necessary to protect the life or property of a person,” and conduct “performed as part of euthanizing an animal.”

On its face, the law seems like a step in the right direction. The exemptions, however, should motivate reflection on the question of what a commitment to the prevention of animal cruelty actually looks like. Exemptions to a law can be useful when there are compelling moral reasons for them. In this case, however, the exemptions highlight the inconsistency in societal attitudes about just how wrong it is to be cruel to animals. It looks as if all the law really prevents is the callous, perhaps even psychopathic, infliction of pain on animals by private individuals. This isn’t where the majority of animal abuse and cruelty takes place.

Consider the first exemption, allowing for animal cruelty in the case of “a customary and normal veterinary, agricultural husbandry, or other animal management practice.” This exemption covers a tremendous number of interactions that occur between humans and animals. What’s more, there doesn’t seem to be any obvious moral justification for the exemption. If animal cruelty is bad, why would cruelty for the purposes of “animal management” be any less bad? This exemption also constitutes a fallacious appeal to common practice. The fact that a given practice is a “customary” part of animal management practices does not mean that the practice isn’t cruel.

The slaughter of animals for food is a particularly interesting case. One might think that this exemption is morally justified. After all, we must balance the interests of animals against the very real need that human beings have for sustenance. The legislators in this case felt that this balancing act ultimately favored the needs of human beings. There are a number of problems with this argument. First of all, it assumes that the harms we are justified in causing to other creatures can ultimately be justified by human need. That assumption may not be morally defensible. Second, human beings do not need to consume animal flesh in order to satisfy their nutritional needs. We continue to consume animals, in a way that is, ultimately, unsustainable, because human beings like the taste of animal flesh. Even if the question of how we ought to treat animals must be resolved using a balancing act, it doesn’t seem like a justification that is based purely on taste preferences could ever be sufficient to come out ahead in the balance. What’s more, even if such considerations could come out ahead, factory farms currently engage in cruel practices simply to maximize the volume of their “product,” and, as a result, the size of their profits. For example, chickens are kept under conditions in which they don’t have the space to fully spread their wings. To prevent them from cannibalizing one another under these stressful conditions, chickens are often “debeaked.” This cruelty could be avoided if these farms simply raised fewer chickens. The Preventing Animal Cruelty and Torture Act does nothing to address this cruelty—it actually provides exemptions for it.

Finally, the passage of this act may provide many people with the false impression that the government is protecting animals in a real, thoroughgoing way. Many people probably believe that cruelty to animals is strictly regulated and the rules enforced by the government. After all, how could the vicious treatment of a living being not be against the law? Before this law passed, there were two pieces of federal legislation offering limited protections to animals. First is the Animal Welfare Act, passed in 1966. The Act nominally provides for the humane treatment of animals, and its mere existence may make citizens feel at ease with the protections afforded. The Act does ensure that animals, in certain contexts, are provided with "adequate housing, sanitation, nutrition, water and veterinary care." They must also be protected against extreme temperatures. However, this law contains significant exemptions as well, of the same variety as those provided in the Act passed this year. The second piece of legislation is The Humane Methods of Slaughter Act, passed in 1958 and revised in 1978. This Act only protects certain animals from being killed in particular kinds of inhumane ways. It does not prohibit cruelty full stop. The bottom line is that animals are not protected from cruelty by federal legislation. Despite the pleasant-sounding name of the "Preventing Animal Cruelty and Torture Act," the Act fails to provide protections where animals need them the most. It's unfortunate that sometimes psychopaths and future serial killers kill animals for kicks, and that should certainly be against the law. At the end of the day, though, the real problems that we face have to do with our attitudes about animals and with the institutions that we're willing to go to great lengths to protect.

Marieke Vervoort and Deciding How to Die

On Tuesday, October 22nd, Belgian Paralympian Marieke Vervoort ended her life. She had signed papers eleven years prior that authorized her to decide when to end her life, as medical aid in dying is legal in Belgium. Vervoort had a degenerative spinal disease that caused intense pain and interfered with her ability to sleep, sometimes limiting her rest to minutes a night. She won multiple Paralympic medals for wheelchair racing: gold and silver at the 2012 London Games, and more in Rio de Janeiro. In interviews she explained how participating in sports kept her alive, and how the intensity of her pain could at times make those around her pass out.

Vervoort expressed that without the authorization papers she obtained over a decade ago, she would have chosen to die sooner. "I'm really scared, but those papers give me a lot of peace of mind because I know when it's enough for me, I have those papers," she said. "If I didn't have those papers, I think I'd have done suicide already. I think there will be fewer suicides when every country has the law of euthanasia. … I hope everybody sees that this is not murder, but it makes people live longer." Vervoort's statement suggests that allowing people to make their own determinations regarding ending their lives is actually a way of valuing life – not only out of respecting their autonomy (their ability to make choices regarding their own life paths), but also offering encouragement and protection.

This attitude towards aid in dying is consistent with cases in the US, though there are important distinctions between the law in Belgium and in the states where physician aid in dying is legal. The US requires a person to have a terminal illness, assessed by two diagnosing physicians, in order to be considered a candidate for aid in dying. However, the same trend holds of patients going through the process of seeking aid in dying while ultimately looking for something more complex than an immediate death: a full one-third of the patients in Oregon and California do not end up taking the prescribed medication after going through the procedures of procuring aid in dying. This tendency is attributed to the importance of having control over the manner in which you die, for which having the option (rather than following through) is sufficient.

In California and Oregon, “pain” does not make the top three reasons that a patient reports seeking aid in dying; “autonomy” typically tops the list. As for people in all stages and conditions of life, having control over the narrative and shape of one’s life is critically important. One of the principal harms of illness is that it can take so much of this control away from a person.

It has been twenty years since Oregon adopted its Death with Dignity Act, and now most Americans support physician aid in dying. The characterization of these policies, now passed in nine states (plus DC), as "deciding how to die" plays a significant role in the public discussion. But this option is currently available only to patients who are terminal, not to patients like Vervoort.

The difference between cases like Vervoort's and cases that are legal in the US is the presence of a terminal diagnosis. This constraint keeps aid in dying from being available to patients with dementia or with other degenerative conditions, like Vervoort or actor/comedian Robin Williams.

A cluster of cases raises worries about broadening aid in dying policies to include patients beyond those with terminal diagnoses: cases where individuals are not receiving sufficient treatment, even though they may have a reasonable chance at a good or worthwhile life (by their own standards). The healthcare system's failure to provide such options causes justifiable concern among disadvantaged groups, since in such cases a patient may opt to end their life due to injustices present in the healthcare system rather than because of a terminal diagnosis.

It is telling that the policies in the US are promoted by groups with names like "Death with Dignity"; the effects of illness on terminal patients' lives are the focus of the discussion, empowering patients to make decisions about the end of their lives in the face of degenerating conditions and abilities. However, the very conditions said to be threatening patients' "dignity" can be the very same conditions that differently abled individuals live with every day. For this reason, some disability rights advocates find some aid in dying discourse demeaning to people who judge their lives to be meaningful and fulfilling while also living with conditions that require significant interdependence and care: "Some right-to-die activists have written about assisted dying as an antidote for indignity that occurs at the end of life, such as needing help to dress or use the bathroom. If you're a person with functional limitations, that's a real slap in the face," says Carol Gill, PhD, APA working group member and professor of disability and human development at the University of Illinois at Chicago. The stigma of living a dependent life alters doctors' and patients' assessments of the quality of life, and so rather than providing resources to reduce the burden of the "debilitating" conditions, physicians sometimes offer aid in ending life: "There are no assisted-dying laws that guarantee those resources, and that feels discriminatory to a lot of people with disabilities," she says.

Practical considerations must inform the moral ones in cases like these, and the reality of healthcare disparities in the US makes the question of physician aid in dying worrying for many. If patients are more likely to seek aid in dying because their health needs are not being met, this presents a real justice concern, because health care resources are not being distributed equitably or anywhere near sufficiently. These factors will weigh more heavily for disadvantaged groups, and facing the decision to hasten death could be less of an empowering narrative than the one Vervoort tells (she said it gave her control and put "my own life in my hands"). These concerns regarding vulnerable populations, and the value-laden judgments regarding which lives are worth living, count even against the aid in dying policies that the US has already passed.

There are a variety of conditions that could interfere with someone’s ability to live what they deem to be a worthwhile life. Both psychological and physical conditions can bring about such states, and terminal and non-terminal conditions may meet this standard. The relevant distinction may turn on whether reasonable hope of treatment exists or not. As we have seen, there are a number of practical difficulties to determining which cases fit into which category, especially given the inadequate care currently on offer and the unjust distribution across vulnerable populations.

MDs vs. NDs: On the Regulation of Naturopathic Medicine

photograph of stethoscope and blood pressure pump

While 16 states, plus the District of Columbia and Puerto Rico, license naturopathic doctors, many physicians have expressed strong opposition to this practice. These physicians point to naturopathic treatment as an unsafe alternative to modern medicine, arguing that Naturopathic Doctors (N.D.s) are not qualified to diagnose and treat illness. On the other hand, N.D.s want increased legitimacy within the field of holistic health to ensure that patients go to qualified practitioners. It is evident that physicians and N.D.s share a mutual goal: ensuring that patients receive quality care. However, the two parties have different ideas of how to promote the just treatment of patients: either recognize N.D.s as licensed Naturopathic Doctors, or bar them from that distinction, lumping them together with untrained practitioners. The ethical concern lies with ensuring that patients have the proper information needed to access just and safe treatment. For the good of all those involved we must ask: should N.D.s be licensed and formally recognized?

First, some critics have argued that people practicing naturopathy are not sufficiently trained in the medical field. Naturopathic medicine can be practiced in two different ways: by naturopathic doctors and by unlicensed naturopaths. While both sit under the umbrella of naturopathy, the difference between these two practices is significant. While N.D.s graduate from a four-year naturopathic school and receive a license from the Council on Naturopathic Medical Education, unlicensed naturopaths might receive informal training and are only qualified to make "general lifestyle" recommendations. N.D.s want to be recognized as legitimate medical practitioners to increase their agency and distinguish themselves from non-licensed naturopaths. Being recognized as medical practitioners would allow N.D.s the power to write prescriptions and conduct medical tests more freely, which would increase their influence over the treatment of their patients. Nevertheless, their desire for greater authority is based on concern for the well-being of patients: N.D.s are worried that patients might be going to naturopaths without understanding the distinction between naturopaths and N.D.s.

Prospective patients need to know that naturopaths are only trained to provide general lifestyle advice. Patients should not visit a naturopath for questions regarding specific ailments. Individuals who work as naturopaths must actively work to protect prospective clients by turning them away when their inquiries stretch beyond this scope. Prospective patients must also understand that N.D.s cannot replace physicians; believing so could prove a great risk to the patient's well-being.

While there are guidelines for what naturopaths can and cannot do, these vary depending on individual states' laws and regulations. Due to the relative novelty of naturopathy in the U.S., many people are unaware of the possible risks and benefits of its practice. This affects whether lawmakers believe that wider recognition for N.D.s would have a positive impact on patients' health outcomes. A possible solution would involve a commitment to patient education, prioritizing the agency of individuals by promoting free and ample access to information. If people are equipped to make truly informed choices, they can decide whether they best belong at the M.D., N.D., or naturopath's office. Free and equitable access to information would mean that people are less at risk of being harmed without their knowledge. The issue with this suggestion, however, is that universal access to information is not a reality. For this reason, how N.D.s are recognized matters.

Organizations like the American Academy of Family Physicians oppose a special distinction for N.D.s, arguing that it might assert an equal status between N.D.s and physicians. This, they argue, would put patients at risk. They point out that physicians train for upwards of ten years, while N.D.s train for nearly half of that. The AAFP outlines the difference it perceives in the training of family physicians and naturopaths, including the similarities and differences between the two programs of study:

[Table comparing the training of family physicians and naturopathic doctors, as presented by the AAFP, not reproduced here.]

What the table above illustrates is that, by the standards used by the AAFP, physicians receive training for a longer period of time and are vetted as candidates for a degree with more rigor. For physicians, it is problematic to identify practitioners of both fields under the same label, because doing so fails to appreciate the sizable difference in qualifications between the two professions. Physicians are trained in many subjects that N.D.s are not; this is significant when discussing an individual's health outcomes. There is a concern that recognizing N.D.s as legitimate medical practitioners actually puts patients at greater risk, because patients would then be making decisions based on a lie: that a patient can go to a physician or an N.D. regardless of the medical problem, since both professionals supposedly have the same expertise.

It should be clear that physicians and N.D.s do not possess the same knowledge. Both fields distinguish themselves from each other, and N.D.s continue to recognize the need for physicians in cases requiring surgery, for example. Some N.D.s even specify that patients should seek advice from their doctors when they seek naturopathic care. N.D.s do not desire an equal status to physicians, which should appease many. Instead, N.D.s would like their title to reflect the services they can provide and distinguish them from those who merely offer advice as naturopaths.

N.D.s insist that without more public legitimacy, people might go to unlicensed naturopaths thinking they are consulting someone with a greater level of professional training and education. The reality is that a proper solution is not clear, and there is ambiguity regarding what policy best protects the safety of patients, a common value or goal shared by both "sides." But the parties have strong beliefs about the proper way to reach that goal and, unfortunately, those beliefs don't coincide. The hope is that by evaluating where opponents are coming from and what they care about, we can begin to draw out common interests. Then, these common interests have the potential to lead to collaborative decision-making about action steps.

In this case, physicians and N.D.s both seek to protect patients and promote their well-being. While N.D.s would like recognition that legitimizes their practice, physicians oppose this. If physicians don't believe N.D.s possess sufficient knowledge, it is interesting to consider whether they would like N.D.s to receive more training or to face stricter regulation on what they can treat. Naturopathy has been around for thousands of years all over the world, and its recent surge in popularity within the U.S. points to its time-tested resilience. Furthermore, it will become increasingly necessary to investigate how naturopathic medicine can be integrated with the other branches of medicine as its influence increases. The way we label the field and its practitioners will have serious consequences going forward.

CRISPR and the Ethics of Science Hype

image of pencil writing dna strand

CRISPR is in the news again! And, again, I don’t really know what’s going on.

Okay, so here’s what I think I know: CRISPR is a new-ish technology that allows scientists to edit DNA. I remember seeing in articles pictures of little scissors that are supposed to “cut out” the bad parts of strings of DNA, and perhaps even replace those bad parts with good parts. I don’t know how this is supposed to work. It was discovered sort of serendipitously when studying bacteria and how they fight off viruses, I think, and it all started with people in the yogurt industry. CRISPR is an acronym, but I don’t remember what it stands for. What I do know is that a lot of people are talking about it, and that people say it’s revolutionary.

I also know that while ethical worries abound – not only because of general worries about the unknown side-effects of altering DNA, but because of concerns about people wanting to make things like designer babies – from what my news feed is telling me, there is reason to get really excited. As many, many, many news outlets have been reporting, there is a new study, published in Nature, suggesting that a new advance in CRISPR science means we could correct or cure or generally get rid of 89% of genetic diseases. I’ve heard of Nature: that’s the journal that publishes only the best stuff.

I’ve also heard that people are so excited that Netflix is even making a miniseries about the discovery of CRISPR and the scientists working on it. The show, titled “Unnatural Selection” [sic], pops up on my Netflix page with the following description:

“From eradicating disease to selecting a child’s traits, gene editing gives humans the chance to hack biology. Meet the real people behind the science.”

In an interview about the miniseries, co-director Joe Egender described his motivation for making the show as follows:

“I come from the fiction side, and I was actually in the thick of developing a sci-fi story and was reading a lot of older sci-fi books and was doing some research and trying to update some of the science. And — I won’t ever forget — I was sitting on the subway reading an article when I first read that CRISPR existed and that we actually can edit the essence of life.”

89% of genetic diseases cured. Articles published in Nature and a new Netflix miniseries. Editing the essence of life. Are you excited yet???

So the point of this little vignette is not to draw attention to the potential ethical concerns surrounding gene-editing technology (if you’d like to read about that, you can do so here), but instead to highlight the kind of ignorance that I and many journalists are dealing with when it comes to reporting on new scientific discoveries. When I told you at the outset that I didn’t really know what was going on with CRISPR, I wasn’t exaggerating by much: I don’t have the kind of robust scientific background required to make sense of the actual research. Here, for example, is the second sentence of the abstract of that new paper on CRISPR everyone is talking about:

“Here we describe prime editing, a versatile and precise genome editing method that directly writes new genetic information into a specified DNA site using a catalytically impaired Cas9 fused to an engineered reverse transcriptase, programmed with a prime editing guide RNA (pegRNA) that both specifies the target site and encodes the desired edit.”

Huh? Maybe I could come to understand what the above passage is saying, given enough time and effort. But I don’t have that kind of time. And besides, not all of us need to be scientists: I’ll leave the science to them and worry about other things.

But this means that if I’m going to learn about the newest scientific discoveries then I need to rely on others to tell me about them. And this is where things can get tricky: the kind of hype surrounding new technologies like CRISPR means that you’ll get a lot of sensational headlines, ones that might border on the irresponsible.

Consider again the statement from the co-director of that new Netflix documentary, that he became interested in CRISPR after reading about how it can be used to “edit the essence of life.” It is unlikely that any scientist has ever made so bald a claim, and for good reason: it is not clear what it means for life to have an “essence,” nor whether such a thing, if it exists, could be edited. The claim that this new scientific development could potentially cure up to 89% of genetic diseases also makes an incredibly flashy headline, but again it is much more tempered when it comes from the mouths of the actual scientists involved. The authors of the paper, for instance, state that the 89% figure represents the maximum proportion of genetic diseases that could, in principle, be corrected if the techniques described in the paper were perfected. But that is of course not saying much: many wonderful things could happen under perfect conditions; the question is how likely those conditions are to obtain. And, of course, the 89% claim does not take into account any potential adverse effects of current gene editing techniques (a worry that has been raised in past studies).

This is not to say that the new technology won’t pan out, or that it will definitely have adverse side effects, or anything like that. But it does suggest some worries we might have with this kind of hyped-up reaction to new scientific developments.

For instance, as someone who doesn’t know much about science, I necessarily rely on people who do to tell me what’s going on. But those doing the telling – journalists, mostly – don’t seem to be much better off in terms of their ability to critically analyze the information they’re reporting on. We might wonder what kinds of responsibilities these journalists have to make sure that people like me are, in fact, getting an accurate portrayal of the state of the relevant developments.

Things like the Netflix documentary are even further removed from reality. Even though the documentary makers themselves do not claim to understand the science involved, they clearly have an exaggerated view of what CRISPR technology is capable of. Creating a documentary following the lives of people who are capable of editing the “essence of life” will certainly give viewers a distorted view.

None of this is to say that you can’t be excited. But with great hype comes great responsibility to present information critically. When it comes to new developments in science, though, it often seems that this responsibility is not taken terribly seriously.

Transactionalism in U.S. Foreign Policy

image of world map with flags indicating national boundaries

Since House Speaker Pelosi announced the start of the formal impeachment inquiry in light of new allegations against President Trump, the news cycle has seen abundant questions about the likelihood of impeachment, the details of the process, and whether there is a basis for impeachment at all. The reason for the start of the proceedings was a controversial call with Ukrainian President Zelensky during which the president conditioned U.S. aid to Ukraine on information about presidential candidate Biden and his son. As a result, Trump has been accused of engaging in a quid pro quo, as he asked a foreign government to investigate a political rival. Yet what easily goes unnoticed is the shift from humanitarianism to transactionalism in U.S. foreign policy that appears as a consequence of President Trump’s actions. Making U.S. foreign aid straightforwardly contingent upon political gains represents a sharp shift in the U.S. foreign policy doctrine. What are the consequences of this transactional approach?

Transactionalism is defined by Nikolas Gvosdev as “an effort to shift the basis of U.S. engagement and to define a series of quid pro quos for U.S. involvement.” This approach is meant to put tangible benefits above abstract values, and thus represents a transformation in the way the U.S. approaches assistance and aid. Until now, the U.S. has most commonly justified aid on humanitarian grounds, but the current administration has indicated that it is unwilling to continue that practice, seeing aid and financial assistance as a political tool instead.

There are several ethical questions raised by the U.S.’s new transactional approach:

First, is it morally permissible to prioritize aid to allies over those who truly need it? If humans are suffering and we need to react instantly, is it morally acceptable to turn our backs on countries that do not share our values and ideologies? What obligation do we have to donate funds to causes that might frustrate our interests? Consider President Trump’s justification for constraining aid when Hurricane Dorian threatened Puerto Rico. Trump’s claim that “Puerto Rico is one of the most corrupt places on earth” was meant to justify his unwillingness to approve the further funding needed to rebuild. Is the potential misuse of federal funds, as the president has claimed, a morally justifiable reason to deny further assistance?

Second, the transactional approach has the potential to lead to crises across the globe, bringing us back to a pre-UN world order. The U.S. appears to be putting aside its long-held belief that, alongside military action, it ought to promote its values across the world and cherish alliances based on a common vision of the world. But if diplomacy turns transactional, we risk undermining the well-established world order by prioritizing relationships of mere benefit. Just recently, the U.S. changed its approach toward Syria, as President Trump decided to withdraw U.S. troops and abandon Kurdish allies. In doing so, President Trump articulated a new vision for policy based on national interest and likelihood of victory, rather than on the protection of hard-won allegiances. This shift led many of the President’s supporters to openly criticize his abandonment of the Kurds, who have been vital to U.S. efforts in Syria.

Third, does the U.S. have a responsibility to the global community as its leader? This question continues to trouble academics and policymakers alike as they try to decipher what role the U.S. should play on the world stage, especially in light of the rise of other great powers. If the leader of the free world is seen as conducting foreign affairs on a quid pro quo basis, what message does this send to the rest of the world?

The ongoing conversation regarding the president’s request that a foreign power intervene in domestic politics needs to center on more than just the breaking of norms and statutes. Interference in democratic processes is a real worry with its own moral weight, but just as pressing is the question of the U.S.’s foreign policy transformation and its shifting role in global politics. The Trump-Ukraine scandal merely marks the most recent noteworthy event in the movement of U.S. policy from participatory to self-interested. We should not overlook this shift in U.S. foreign policy doctrine toward transactionalism, a shift that might have grave consequences for the U.S. as well as the larger political world.

Power and Perception: The Ethics of Urban Exploration

photograph within abandoned building looking out

While researching the subculture of Parisian catacomb explorers for his book Underland (2019), nature writer Robert Macfarlane was both impressed and troubled by the potential of urban exploration to empower the individual. He writes,

“At its more political fringes, urban exploration mandates itself as a radical act of disobedience and liberation, a protest against state constraints on freedom within the city […] There is a surprising number of female explorers, and the class base is mixed, often drawing on a disaffected and legally disobedient demographic.”

Urban exploration is “political,” he argues, because it has the power to reshape how we understand our surroundings. For example, when Macfarlane explores the urban underground of Paris, he experiences a radical shift in spatial, temporal, and social awareness. He is shocked to feel the rumble of a Metro train passing over his head, and comes into contact with the “invisible city” of cataphiles linked by a sense of anarchic camaraderie. The perpetual darkness beneath the city and limestone deposits that defy human conceptions of time further contribute to a complete rewiring of his perception of urban space.

Urban exploration challenges our modern understanding of cosmopolitan living as anonymous, individualistic, and severed from the physical locations cities occupy. However, Macfarlane’s experience is tempered by the flaws he sees in the movement. He says,

“There are aspects of urban exploration that leave me deeply uneasy […] I dislike its intermittent air of hipster entitlement and its inattention toward those people whose working lives involve the construction, operation, and maintenance—rather than the exploration—of these hidden structures of the city.”

The ethical dilemmas of urban exploration become even more pronounced when examining the foundations of the movement.

Dr. Martin Dodge, a senior lecturer in human geography at the University of Manchester, pinpoints four definitive drives of urban explorers: first, the desire to document sites in danger of decay or destruction; second, the thrill of accessing a forbidden place; third, the desire for an “authentic” experience, an unmediated or unsanitized look at the inner workings of a city; and lastly, a reverence for the counter-cultural aesthetic of “ruin porn,” which values ruin and decay over perpetual construction and newness.

According to Dodge, explorers distinguish themselves from mere vandals by their ethical code, which elevates their activities from a hobby to a culturally significant task. There is no one unifying code that all urban explorers agree to obey, as such a code would go against the inherent nature of the community. Urban explorers, while they may form communities like the one Macfarlane interacted with, are usually either loners or members of small packs who pride themselves on their independence and disregard for authority. But a cursory search through urban exploration forums and message boards reveals a broad consensus on the ethics of urban exploration within the community. This code is articulated by Jeff Chapman, a long-time urban explorer who wrote a touchstone book on the subculture. As Chapman sees it,

“Genuine urban explorers never vandalize, steal or damage anything—we don’t even litter. We’re in it for the thrill of discovery and a few nice pictures, and probably have more respect for and appreciation of our cities’ hidden spaces than most of the people who think we’re naughty. We don’t harm the places we explore. We love the places we explore.”

Many of the things that Dodge cites as characteristic of urban exploration are evident in this quote: Chapman mentions the thrill of transgression, the documentation of decay, and the production of ruin porn. But Chapman also claims that urban exploration has a special cultural significance, that it is ultimately an expression of love for the specificities of a place.

This claim has been challenged by writers and scholars from diverse fields. These critics argue that urban exploration may be an expression of love, but it’s the love of an individual. It does nothing to bolster our collective responsibility to place, to renew our commitment to the people who already inhabit that space. The prevalence of “ruin porn” within the urban exploration community is at the heart of this issue. In her book Beautiful Terrible Ruins: Detroit and the Anxiety of Decline, Dora Apel explores the problematic process of aestheticizing decay. According to Apel, ruin porn “naturalizes decline and reifies the urban ‘explorer’ as possessing a privileged gaze.” In fact, Apel claims,

“A romantic fetishization of the relationship between nature and culture lies at the heart of ruin imagery and is central to what makes it appealing. Ruin images tend to picture derelict architecture in the process of being reclaimed by animals and vegetation. This suggests a ‘timeless’ struggle between nature and culture that either places nature in the ascendancy over ruined culture as part of a downward spiral or, conversely, asserts the redemption of social ruin through signs of new life in nature. Yet for the poor population, no matter how haunting or strangely beautiful ruins may be, they are not romantic artifacts but reminders of jobs and homes lost, neighborhoods destroyed, and lives derailed.”

Crucially, Apel argues that ruin porn erases the specificity of place and the historical processes that contribute to the slow decay of infrastructure. It aestheticizes the “ruin” of culture and revels in the triumph of nature, which ultimately erases the human presence at those sites. In his lecture on urban exploration, Dodge suggests that explorers think of themselves as eco-tourists; in other words, look at the environment, but leave no footprints. But the creation of ruin porn does in fact disturb the environment; it plucks decay out of context and inserts it into a narrative which portrays such decay as completely natural, rather than the product of social and historical forces. Ultimately, Apel says, the privileged gaze of the explorer is a disruptive force in itself.

Dodge also compares urban exploration to “space-hacking,” both because of the diversity of the community (there are both well-meaning and less well-meaning people who share an interest in disruption and the desire to break into restricted spaces) and because of the challenge their activities pose to authority. The idea of space-hacking is both revolutionary and troubling. On the one hand, the term acknowledges how space is meant to act upon us, how our psychological landscapes are the product of our environment. To hack something is to disrupt its encoded behavior, to prevent it from acting upon you in the way it was intended to. Much like computer hackers alter programs and websites, urban explorers change the intended experience of an environment through their presence and activities. In that sense, urban exploration can be a liberating way of reclaiming our physical reality, of forging a deeper connection with a location. However, this mindset privileges the experience of the one over that of the community, tapping into the cultural fantasy of an individual wielding complete power over their environment. The more disconnected we feel from our surroundings, the more powerless we feel. But urban exploration cannot rely on lone explorers if it is to help us achieve collective empowerment.

In theory, the anarchic and counter-cultural perspective of urban exploration could help us reclaim the world around us, which ideally would mean not so much a sense of ownership as a sense of interconnectedness with our physical location. In an age of virtual communities, where technology erases the necessity of physical coexistence for exchanges between individuals, such an activity could have tremendous value. But as Macfarlane and many others have pointed out, the movement as it currently exists privileges the individual experience over that of the community and erases the historical specificity of place and time. These problems hinder urban exploration from actually helping us reshape our perception of urban environments and better face the dramatically altered world of the modern era.