
The Ethics of Cell Cultured Brains

image of brain outline in white light

Earlier this month, the New York Times reported that Yale neuroscientist Nenad Sestan and his team successfully produced active brain cells through a process of culturing the inactive brain matter of deceased creatures. The cells were active for more than mere moments—some survived for weeks at a time. These results may lead to important discoveries about the way the brain works and could, in the long term, be an important step toward understanding and perhaps curing brain diseases and disorders.

Sestan is interested in generating activity beyond individual cells to entire slices of brain matter. Doing so would allow him to study what neuroscientists call the “connectome”—essentially, the wiring of the brain and its synapses. The New York Times piece focused on Sestan’s work in particular, but he was eager to point out that other scientists are doing similar work. In fact, some scientists have cultured “mini-brains” that demonstrate the kind of neural activity one might expect to see in fetuses at 25-29 weeks after conception.

In Sestan’s work, and in other work like it, brain matter is obtained from the bodies of deceased humans who, while living, consented to donate their bodies to assist in scientific research. Because the cells and, potentially, organs being cultured here are brain cells and organs, these processes are philosophical and ethical quagmires. There is much potential for discovery concerning the answers to fascinating questions, but there is also the potential for some pretty significant ethical violations.

One concern has to do with whether the individuals who donated their bodies to science actually consented to the creation of beings that can think. As long as humans have understood that brains are responsible for thought, we’ve been obsessed with the notion of a “brain in a vat.” It pops up relentlessly in pop culture, and even in academic philosophy. Noteworthy examples include the 1962 sci-fi/horror classic The Brain That Wouldn’t Die and the 1983 Steve Martin comedy The Man with Two Brains. Whenever the concept arises in popular culture, one thing is clear—we attribute personhood to the brain. That is, we think of the brain as a someone rather than a something. If this is true, though, the consent needed from the donor is not the consent required to simply use that donor’s body for testing. It is the consent that might be required if one were to clone that donor or to create a child from that donor’s reproductive material. One might think that the consent conditions for that might be very different, and might well be consent that the donor did not provide.

Some concern has been raised over whether this kind of experimentation could lead to the creation of suffering—if active brain cells or a series of connected cells have the potential to give rise to thoughts or experiences of some kind, they might give rise to negative experiences. Some neuroscientists view this possibility as remote, but, nevertheless, Christof Koch, the president and chief scientist at the Allen Institute for Brain Science, claims, “it would be best if this tissue were anesthetized.”

The existence of active brain states in a network gives rise to the possibility of the existence of mental states. One important question, then, becomes: what kinds of mental states are morally relevant? Is there something inherently valuable about thoughts or about sensory experiences? (Are there such things as sensory experiences in the absence of sense organs and an entire central nervous system?) If there is something valuable about such states, is it always a good thing to bring them about? In that case, every time a scientist creates a cell or system of cells capable of having a thought or experience, that scientist has done something that increases the overall level of value in the world. On the other hand, we have no way of knowing what kinds of experiences are being produced. If the sole experience produced in the creation of a cell or a system of cells is a negative experience, then the scientist has arguably done something wrong by generating that cell or system of cells.

Some philosophers think that it isn’t merely the presence of thoughts, but the presence of thoughts of a particular kind, that makes a being a person. Personhood, according to many moral theories, is a characteristic a being must possess in order to be a member of the moral community. According to philosopher Harry Frankfurt, a being is a person if and only if their first-order desires are guided by their second-order desires. So, a person might have a first-order desire to eat a slice of cake. They might have a second-order desire to refrain from eating the cake, say, because they are on a diet. Persons, and only persons, can use their second-order desires to guide their first-order desires. Through the process of having thoughts about one’s own thoughts and desires about one’s own desires, a being starts to develop an identity.

The truth is, we simply don’t know how this works—we don’t know what conditions need to be in place for the existence of either first-order or second-order thought. We don’t know how brain matter works, and we don’t know exactly what “thoughts” consist of. We don’t know if or how mental states may be reducible to brain states. We don’t know what states of matter might give rise to second-order beliefs and desires—we don’t know the conditions under which we might create a “brain in a vat” that is a person and has an identity. What’s more, the brain wouldn’t be capable of communicating that fact to us (unless, of course, the horror movies have it right and all such brains can communicate telepathically—but I wouldn’t bet on that).

As technology progresses, we run into a familiar ethical issue over and over again: what steps are we morally justified in taking, given that we don’t really know what we’re doing or how our actions may ultimately affect other beings with interests that matter? When we know we’re potentially dealing with thinking beings, we must proceed with caution.

The Letters of Last Resort and MAD Ethics

photograph of submarine half-submerged in ocean

On July 24th, former London mayor Boris Johnson became the newest Prime Minister of the United Kingdom. According to tradition, one of the first actions taken by each new PM, following a briefing regarding the state of Britain’s nuclear capabilities, is to write and seal identical letters to the commanding officers of four British nuclear submarines. Called the ‘Letters of Last Resort,’ they contain instructions for what should happen in the event that UK leadership is incapacitated and unable to issue final orders. Because each UK leader writes their own letters, which are then locked inside a safe-within-a-safe on board each submarine and are destroyed without being opened when a new PM takes office, these letters will only be read in a worst-case scenario of apocalyptic proportions. To date, the specific contents of any such letters remain unknown to all but their authors.

Nevertheless, conventional wisdom indicates that there are four broad possible options for these final directives:

    1. Fire upon particular targets (including, but not limited to, those guilty of attacking the UK).
    2. Do not fire.
    3. Use your own judgment regarding what to do.
    4. Surrender the submarine (and its payload) to a particular ally.

Only one former prime minister has ever spoken publicly about which option he considered best: James Callaghan (who held the office from 1976 to 1979) indicated his general support for (1), though only reluctantly, as an absolute last resort which, he suggested, he would have regretted for the rest of his life. While neither Johnson nor his Conservative predecessor Theresa May has commented publicly, Labour party opposition leader Jeremy Corbyn has long been an outspoken proponent of nuclear disarmament, suggesting that he might support (2).

For some, a defense mechanism like this is a sensible element of a wider approach to relations between nuclear powers. The logic of a foreign policy founded on ‘mutually assured destruction’ (MAD) requires a nation’s enemies to understand its retaliatory capability, should it be attacked first. Born of the Cold War, where the standoff between the USA and the USSR was famously complex, MAD doctrines have only become more complicated as the list of countries with nuclear capabilities has grown over the last six decades. In short, from this perspective, even if the letters are never read, the knowledge that they exist serves as a reminder to potential enemies of the UK that, should London fall, London’s attackers will fall as well – the letters are, effectively, a nuclear-level dead man’s switch.

For others, the letters are an antiquated method of problem-solving that fails to account for any number of important variables which, in the event of a disastrous attack, would surely be relevant facts to consider. How hard might it be, for example, for a team of clever con artists to fake enough of a situation that one of the submarine commanders could be convinced to open the safeguarded letter? Or, in a real emergency, what happens if the letter orders a submarine to fire upon a target disconnected from the actual threat? Or specifies a target that has already been destroyed? Enshrining a set of instructions that (in all likelihood) cannot be updated by new information is a curiously rigid way to handle any governmental program – particularly one with consequences as dire as a nuclear strike.

Additionally, as Ron Rosenbaum, author of How the End Begins: The Road to a Nuclear World War III, explained in an article for Slate, notifying the world that the Letters of Last Resort might not require retaliation at all undercuts the entire foundation of MAD in the first place; as Rosenbaum puts it, “With all due respect to our British cousins, this seems, well, insane.”

Other countries with nuclear technology have developed more complicated security measures, technological firewalls, communication networks, and backup plans to serve as alternatives in the event that one or two systems fail. The US, for example, has turned the country’s nuclear power into a badge of authority, sending the so-called ‘Nuclear Football’ and its attendant along wherever the President of the United States happens to go (a system mimicked by Pakistan, Russia, and possibly France). But these systems suffer from limitations of their own. In January of 1995, for example, a scientific rocket designed to study the Northern Lights was launched from Norway; confusion in Moscow led to the Russian Football (called the Cheget) being temporarily activated, though ultimately no attack was ordered – perhaps the closest the world has come to the brink of nuclear disaster since the infamous Petrov incident of 1983.

It remains to be seen what a Boris Johnson administration will mean for Britain and the rest of the United Kingdom, but – by now – the Johnson Letters of Last Resort have been penned and secured beneath the waves. Until, and unless, a more secure system for managing such destructive weapons can be devised, we must continue to hope that those letters remain unread.

Cultural Heritage and the Murujuga Petroglyphs

photograph of petroglyphs etched in a number of different stone faces

The Australian continent has been continuously inhabited for at least 60,000 years. The Aboriginal or First Nations people of Australia are the longest surviving continuous culture(s) in the world, though their traditional lifestyles, languages and connections to country have been severely degraded by European settlement since the end of the eighteenth century. The Burrup peninsula, in the north-western corner of Australia, is home to a vast gallery of petroglyphs, or rock carvings, which tell a story of human habitation that stretches back tens of thousands of years, well before the last ice age, to a time when Neanderthals still inhabited Europe.

Known as Murujuga in the local Aboriginal language, the site contains more than one million petroglyphs across 36,857 hectares of the peninsula and surrounding Dampier Archipelago. The petroglyphs of the Murujuga peninsula “have been considered to constitute the largest gallery of such rock art in the world.” The most recent petroglyphs were carved in the 1800s, before the Yaburara People (the artists and traditional inhabitants of the area) were murdered or driven off the land in a period of sustained colonial killings in 1868 known as the Flying Foam massacre.

Among its treasures, Murujuga contains some of the oldest known examples of art by prehistoric humans; the oldest of the petroglyphs at the site date back some 40,000 years. Among the many things the Murujuga petroglyphs depict are some species of megafauna, such as the giant flat-tailed kangaroo, which became extinct around 30,000 years ago. The Murujuga site is also home to the first known image of a human face in history, carved about 35,000 years ago. The value of these ancient carvings, not only for Australia’s First Nations people but for all of humanity, is inestimable.

However, the northwest of Australia is also home to massive iron ore, oil, coal, mineral and gas reserves, as well as other heavy industry. Industrial-scale mining in areas including the Burrup Peninsula has, since the early twentieth century, helped make Australia one of the richest developed countries in the world per capita. Thus it comes as no surprise that the preservation of the Murujuga rock art has been subordinated to economic and corporate interests. In the 1960s, development of deep-water ports to transport iron ore was carried out without any survey work, as museum recommendations on preservation following survey work on nearby rock art had hindered other proposed developments. “A great deal of rock art was destroyed on the peninsula in the 1960s,” writes Robert Bednarik, an archaeologist who, since the early 2000s, has been arguing for greater protections so that the petroglyphs may be saved from further destruction.

These developments, along with those in and around the original town of Dampier, where coastline was bulldozed and filled in, including a major site on which the power station was erected, have destroyed an estimated 20 to 25 percent of the petroglyphs. The Murujuga rock art is now under threat from chemicals associated with mining and nearby fertilizer plants. The site currently sits adjacent to the largest gas refinery in the Southern Hemisphere.

In 2018 the Western Australian government formally committed to pursuing World Heritage status for the Burrup peninsula and, together with traditional Aboriginal native title land owners, signed off on an application to have the site listed under the UNESCO world heritage programme.

Central to any proposal for a site to gain recognition as world heritage is a ‘statement of outstanding universal value.’ The notion of ‘outstanding universal value’ means that sites are seen as part of the ‘heritage of mankind as a whole,’ and as such ought to be protected and transmitted to future generations. Sites of ‘outstanding universal value’ can gain World Heritage status by meeting one of ten possible criteria. At least the following three clearly apply to Murujuga: the site represents a masterpiece of human creative genius and cultural significance; it bears a unique or exceptional testimony to a cultural tradition or to a civilization which is living or which has disappeared; it is directly or tangibly associated with events or living traditions, with ideas, or with beliefs, with artistic and literary works of outstanding universal significance. A site of ‘outstanding universal value’ therefore marks a remarkable accomplishment of humanity, and stands as evidence of our cultural, intellectual and aesthetic history on the planet. 

The importance of gaining world heritage status for the Murujuga rock art is that world heritage status is a strong catalyst for better protection and management. It is also a strong statement of what we value and why. Present in the very idea of world heritage is a sense of reverence for the achievements of human life, civilization and culture through time, and the idea that relics of such achievements from the distant past teach us all something about what the human journey has been. The importance of protecting the Murujuga rock art lies in its value to humanity – as a record not just of human history as something in the past, but as a testament to human creativity.

The Murujuga gallery is a place of enormous anthropological and archaeological importance. But unlike other sites of prehistoric art, such as the ancient cave paintings in Spain and France, it is part of a living cultural tradition. For Australian Aboriginal people, places of special sacred significance, and objects and artifacts produced by ancestors, form part of a living cultural tradition, in which ancestors are ‘present’ – captured in the notion of Dreamtime with its complex understanding of place and time in which myth, narrative, past and present mingle. 

But the possibility of a successful application leading to official world heritage listing depends on there being a good chance the site can be preserved. This aspect of the application already looks shaky, as the West Australian government is apparently not prepared to demand sacrifices from industry, current or future, that would put the interests of the petroglyphs first.

The WA government is currently pursuing further industrial development alongside the world heritage listing. A briefing note to premier Mark McGowan leaked to the media last year warned that the timing of the latter was “critical” to ensuring industrial development continued. Regulators in Western Australia are considering proposals for two new chemical plants on the Burrup peninsula that would increase air pollution. A Senate report has warned that emissions from heavy industry on the peninsula could damage the carvings, prompting rock art experts to call for a halt to new industry approvals until an accurate picture of the damage being done to the petroglyphs can be formed. Any plans to increase industrial development in the region could damage the rock art and undermine efforts to secure world heritage listing. UNESCO has already indicated that the current level of industry there may impinge on the possibility of World Heritage listing.

Consider the analogy between the destruction of Murujuga and the worldwide outrage at the destruction of the Bamiyan Buddhas by the Taliban in 2001, along with the heartbreaking destruction of Palmyra by ISIS in 2016. These actions caused a widespread global sense of shock, both at the loss of irreplaceable historical and cultural treasures and at the barbarity with which they were destroyed. Is it any less barbaric to fail to prevent the slow destruction of the Murujuga petroglyphs through insidious neglect and capitulation to industry?

The ethical issues here are clear, and clearly connected to the line that can be drawn from the colonial attitudes toward, and barbaric treatment of, Australia’s First Nations people (exemplified in governments’ historic disregard for sites of important cultural significance to Aboriginal people) to the corporate colonial interests of resource giants being allowed to continue the destruction of cultural heritage.

Those advocating for the preservation of the Murujuga petroglyphs face a difficult fight to protect these beautiful, delicate and ancient artworks, which reach back to the beginnings of human history, from the industrial juggernauts of fossil fuel mining and heavy industry that are destroying our collective human future.

Moral and Existential Lessons from “Chernobyl”

Photograph of Pripyat ferris wheel from inside abandoned building

HBO’s five-part mini-series dramatizing the 1986 nuclear power plant disaster in the Soviet Union is powerful because of the existential and moral messages it conveys—critical messages for our time.

The explosion takes place in the opening moments of the first episode. Right out of the gate, there is a clear juxtaposition between the childlike naiveté demonstrated by the control room operators on one hand, and the microcosm of the universe that is the nuclear explosion on the other. For many of the characters, the magnitude of the event is, quite literally, beyond comprehension. At one point, a disbelieving middle manager orders an employee to climb to the top of the tower and stare directly down at the ruptured core. We are reminded of Nietzsche’s admonition that, “if you gaze long into an abyss, the abyss also gazes into you.” He turns back in a silent report on what he has seen—his face like a Munch painting, signs of deadly radiation damage already clear.

In The Myth of Sisyphus, Albert Camus describes the absurdity of the human experience. There are moments when this absurdity hits us with full force—the universe is not the kind of thing that cares about the desires of human beings. Camus describes these moments of recognition, “At this point of his effort man stands face to face with the irrational. He feels within him his longing for happiness and for rationality. The absurd is born of this confrontation between the human need and the unreasonable silence of the world.” At Chernobyl, rather than silence, the indifference is signaled by a ceaseless, pulsating hum.

Camus also describes absurd heroes—people who, in full recognition of the absurdity of their situation, respond to that absurdity authentically. The miniseries tells the true stories of the truly stunning number of people who were willing to charge into situations in which completion of the required task was unlikely and death was seemingly certain. In this way, it is an inspiring tale of human resilience and spirit.

In fact, the existential premise notwithstanding, much of the story motivates the intuition that, despite our insignificance from the perspective of the universe, the choices we make really do matter now, and that virtue should be pursued and vice avoided. Chernobyl was a disaster of unimaginable proportions, but were it not for the actions of those who gave their health and even their lives in the service of others, it could have been much worse. The series highlights the value of courage as a virtue. It also explores the perils of blind ambition as a vice. The accident happened as a result of decisions that had foreseeable bad consequences, but those involved in the bad decision-making valued their own promotion over the safety of others.

Some of the most significant lessons from Chernobyl are epistemic—they have to do with how we form our beliefs and what we regard as knowledge. The tragedy of Chernobyl highlights the consequences of confusing power with expertise. This message is always important and is especially salient today. Chernobyl demonstrates how dangerous fallacious appeals to authority can be. Sometimes, powerful figures are presumed to be truth tellers or experts simply because they happen to be in power. Attaining knowledge can be hard work, and we should respect the process. This doesn’t mean that we should blindly accept the pronouncements of anyone with a PhD, but we should recognize that, for example, physicists know more about nuclear reactions than government bureaucrats. Similarly, climate scientists know more about global warming than presidents or the CEOs of oil companies.

Viewers are left with a better understanding of how dangerous it can be when people are put in charge of things they know very little about. Powerful positions should not be doled out based on nepotism, past support, or loyalty, but on knowledge and experience. Lack of qualification is easily obfuscated when times are good. Perhaps these appointments should always be made with the understanding that times can get very, very bad extremely quickly.

The series also speaks to the peril to which wishful thinking can give rise. Some beliefs are comforting, pleasant, and familiar. These aren’t good reasons for thinking those beliefs are true. When lives are on the line, it is important that we believe and act on what the best evidence supports, rather than believing whatever our strongest desires motivate us to believe.

Finally, the series is about the importance of speaking truth to power. Truth telling is important because lies have consequences—especially when those lies are about the finer details of nuclear power plants. When the government is the body doing the lying, the effects are vast. Speaking truth to power is about more than consequences, however; it is also about dignity and authenticity. A person exercises autonomy of a crucial sort when they refuse to abandon their responsiveness to reasons when faced with the coerciveness of power. Such an act makes the statement that facts don’t cease to be facts because they are inconvenient for the powerful.

We learn from Chernobyl that the consequences of letting lust for power and the fear of looking foolish guide our decisions can be, in the right circumstances, complete global catastrophe. There are forces that dwarf the significance of fragile human egos. Perhaps those forces should properly humble us, as we do our best to understand them.

The Ethics of Scientific Advice: Lessons from “Chernobyl”

photograph of Fireman's Monument at Chernobyl

The recently-released HBO miniseries Chernobyl highlights several important moral issues that are worth discussing. For example, what should we think about nuclear power in the age of climate change? What can disasters tell us about government accountability and the dangers of keeping unwelcome news from the public? This article will focus on the ethical issues concerning scientists’ potential to influence government policy. How should scientists advise governments, and who holds them accountable for their advice?

In the second episode, the Soviet Union begins dumping thousands of tons of sand and boron onto the burning nuclear plant at the suggestion of physicist Valery Legasov. After consulting fellow scientist Ulana Khomyuk (a fictional character who represents the many other scientists involved), Legasov tells Soviet leader Gorbachev that in order to prevent a potential disaster, drainage pools will need to be emptied from within the plant in an almost certain suicide mission. “We’re asking for your permission to kill three men,” Legasov reports to the Soviet government. It’s hard to imagine a more direct example of a scientist advising a decision with moral implications.

Policy makers often lack the expertise to make informed decisions, and this provides an opportunity for scientists to influence policy. But should scientists consider ethical or policy considerations when offering advice? 

On one side of this debate are those who argue that scientists’ primary responsibility is to ensure the integrity of science. This means that scientists should maintain objectivity and should not allow their personal moral or religious convictions to influence their conclusions. It also means that the public should see science as an objective and non-political affair. In essence, science must be value-free.

This value-free side of the debate is reflected in the mini-series’ first episode. It ends with physicist Legasov getting a phone call from Soviet minister Boris Shcherbina telling him that he will be on the commission investigating the accident. When Legasov begins to suggest an evacuation, Shcherbina tells him, “You’re on this committee to answer direct questions about the function of an RBMK reactor…nothing else. Certainly not policy.”

Those who argue for value-free science often argue that scientists have no business trying to influence policy. In democratic nations this is seen as particularly important since policy makers are accountable to voters while scientists are not. If scientists are using ethical judgments to suggest courses of action, then what mechanism will ensure that those value judgments reflect the public’s values?

In order to maintain the value-free status of science, philosophers such as Ronald N. Giere argue that there is an important distinction between judging the truth of scientific hypotheses and judging the practical uses of science. A scientist can evaluate the evidence for a theory or hypothesis, but they shouldn’t evaluate whether one should rely on that theory or hypothesis to make a policy decision. For example, a scientist might tell the government how much radiation is being released and how far it will spread, but they should not advise something like an evacuation. Once the government is informed of the relevant details, the decision of how to respond should be left entirely to elected officials.

Opponents of this view, however, argue that scientists do have a moral responsibility when offering advice to policy makers, and that it is desirable for scientists to shoulder this responsibility. Philosopher Heather Douglas argues that, given that scientists can be wrong, and given that acting on incorrect information can lead to morally important consequences, scientists have a moral duty concerning the advice they offer to policy makers. Scientists are the only ones who can fully appreciate the potential implications of their work.

In the mini-series we see several examples where only the scientists fully appreciate the risks and dangers from radiation, and are the strongest advocates of evacuation. In reality, Legasov and a number of other scientists offered advice on how to proceed with cleaning up the disaster. According to Adam Higginbotham’s Midnight in Chernobyl: The Untold Story of the World’s Greatest Nuclear Disaster, the politicians were ignorant of nuclear physics, and the scientists and technicians were too paralyzed by indecision to commit to a solution.

In the real-life disaster, the scientists involved were frequently unsure about what was actually happening. They had to estimate how fast various parts of the core might burn and whether different radioactive elements would be released into the air. Reactor specialist Konstantin Fedulenko was worried that the boron drops were having limited effect and that each drop was hurling radioactive particles into the atmosphere. Legasov disagreed and told him that it was too late to change course. Fedulenko believed it was best to let the graphite fire burn itself out, but Legasov retorted, “People won’t understand if we do nothing…We have to be seen to be doing something.” This suggests that the scientists were not simply offering technical advice but were making judgments based on additional value and policy considerations. 

Again, according to Douglas, given the possibility for error and the potential moral consequences at play, scientists should consider these consequences to determine how much evidence is enough to say that a hypothesis is true or to advise a particular course of action. 

In the mini-series, the government relies on monitors showing a low level of radiation to initially conclude that the situation is not bad enough to warrant an evacuation. However, it is pointed out that the radiation monitors being used likely had a limited maximum range, and so the radiation could be much higher than the monitors would indicate. Given that they may be wrong about the actual amount of radiation and the threat to public health, a morally responsible scientist might conclude that evacuation should be suggested to policy makers.

While some claim that scientists shouldn’t include these considerations, others argue that they should. Certainly, the issue isn’t limited to nuclear disasters either. Cases ranging from climate change to food safety, chemical and drug trials, economic policies, and even the development of weapons, all present a wide array of potential moral consequences that might be considered when offering scientific advice. 

It’s difficult to say a scientist shouldn’t make morally relevant consequences plain to policy makers. It often appears beneficial, and it sometimes seems unavoidable. But this liberty requires scientists to practice judgment in determining what a morally relevant consequence is and is not. Further, if scientists rely on value judgments when advising government policy, how are scientists to be held accountable by the public? Given these benefits and concerns, whether we want scientists to make such judgments and to what extent their advice should reflect those judgments presents an important ethical dilemma for the public at large. Resolving this dilemma will at least require that we be more aware of how experts provide policy advice.

Should We Return to the Moon?

photograph of the surface of the moon (half)

July 20 marks the 50th anniversary of Apollo 11 landing on the Moon, the first time humans ever set foot on the lunar surface. But December 11 will mark 47 years since the last time humans took a step on the astronomical body. NASA administrator Jim Bridenstine called it "sad" that we have not returned. Is it time to go back?

The Apollo program, which spanned from 1960 to 1973, successfully landed six crews on the surface of the Moon. It cost a total of $28 billion at the time, or the equivalent of $288 billion today; undoubtedly a colossal investment. Concerns about how to finance a return have hampered any serious development of another program. But it does not appear that another crewed lunar program would cost as much now.

Currently, the United States plans to send astronauts back to the Moon by 2024. In June of this year, Bridenstine estimated that returning to the Moon would cost between $20 and $30 billion, on top of the amount already spent on the Space Launch System (SLS) rocket and Orion spacecraft. Still, this amount constitutes a fraction of the cost of the Apollo program in today's dollars and a fraction of one percent of the overall federal budget. For context, the U.S. spent $623 billion on defense and $639 billion on non-defense programs in 2018 alone.

Even so, some may argue that space programs and endeavors beyond our planet are impractical. The case could be made that those resources, however small relative to overall government spending, should be put to use on our home planet rather than its moon or neighboring planets. Why explore uninhabited territories in space when existing communities back home are in need of improvement and care?

The original motivation for sending humans to the Moon may have been political in nature, with the Soviet Union and the U.S. jockeying for position during the mid-twentieth century. Even John F. Kennedy expressed that he had little interest in going to the moon for the sake of space exploration. Regardless of his genuine feelings on the matter, President Kennedy sought to demonstrate the more intangible value of travelling to the Moon in a now-famous speech given at Rice University. He posited that space was an opportunity to start anew: 

“I do say that space can be explored and mastered without feeding the fires of war, without repeating the mistakes that man has made in extending his writ around this globe of ours,” he said. “There is no strife, no prejudice, no national conflict in outer space as yet. Its hazards are hostile to us all, and its opportunity for peaceful cooperation may never come again.” 

Indeed, the hostile conditions of space and the strenuous nature of voyages in space are not isolated to any particular country or creed. His sentiment suggests not only that it is the harshness of this endeavor that can unify a people in overcoming the obstacles space presents, but also that space is unvarnished by the ills of society and the sins of humans. Out there in space, a clean slate awaits humanity.

But President Kennedy anticipated the opposition to his proposal, saying: “But why, some say, the moon? Why choose this as our goal?…We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.” 

We choose to go to the Moon because it is hard. This attitude rests on a motivation to display the will and capacity of humans. In voyaging to the Moon, Kennedy would argue, we are showing each other the heights of what we are capable of. What is the value of exploring beyond our home planet? To test our limits, some may argue.

But at the time of President Kennedy’s speech, humans leaving bootprints on the lunar surface was merely an aspirational dream; it would not become a reality for another seven years. Perhaps, the motivation was more understandable then than now. 

Now, in 2019, we know we are capable of travelling to and landing on the Moon. We have done it six times. No longer does a curiosity about our capabilities with regard to lunar travel compel a return. As a writer in The Economist points out, the Moon landing served as a "means of signalling America's extraordinary capabilities," a point that, once made, "required no remaking."

Additionally, the astronauts gained an abundance of important and helpful knowledge during their expeditions. The potential for new knowledge from unexplored locations may remain, but does the Moon, specifically, have any more to offer? 

The Moon does have one thing to offer that we did not know of before: ice. Ice has been discovered deep within polar craters, which could provide drinking water and breathable air. This potential supply of water, air, and even rocket fuel would make developing a sustained presence on the Moon more realistic.

Yet the discovery of ice is not a reason to return but rather a reason to be hopeful of maintaining astronauts on the Moon. It does not answer the fundamental question of this debate: Should we go back to the Moon? Maybe the ethos has changed since President Kennedy’s speech. Maybe now the motivation has become: we can; therefore, we should.

Racism, Refugees, and the Ripple Effect

photograph of barbed wire fence with camp in the distance

Trump has been embroiled in discussions about walking back or defending his latest racist behavior this past week. After saying that four congresswomen should “go back” to their countries of origin and presiding over chants of “send her back!” at a campaign rally, he tweeted Sunday that the congresswomen were “not capable of loving our Country.” 

As part of his “go back” rhetoric, Trump articulated his view that if the congresswomen didn’t like living in the US, they should leave and attempt to improve conditions in their supposedly broken countries. (Multiple late-night hosts noted the irony involved in Trump’s statements, as the congresswomen’s country is the US, which at the moment can feel broken and in need of fixing, and which, as members of congress, seems to be what these women are attempting to do.)

At the same time as he encouraged some Americans to leave, Trump rolled out new policy making it more difficult for others to leave their own dangerous countries. His administration has implemented a policy that requires refugees who have traveled through another country to have applied for asylum in that country as well. The ACLU quickly announced its intention to challenge the policy in court, and the administration instructed southern border agents to implement it as quickly as possible before it could be blocked.

The so-called third-country asylum rule is incredibly restrictive, especially against asylum seekers at the US's southern border. Such restrictive policies towards people seeking safety bring obvious ethical questions to the fore. There is, perhaps, a tension between nations' purported sovereign right to determine who will reside or travel within their borders and the rights of humans to be free from violence and persecution. These human rights can be seen to ground a right to freedom of movement between nation-states. Though international law recognizes that immigration and citizenship policies are, and should be, left up to each state, the UN makes exceptions for refugees, whose basic human rights are in dire need of protection and override states' right to make such policies.

Importantly, the freedom to make immigration and citizenship policies does not mean that all such policies are created equal, from a moral point of view. From a moral perspective, immigration policies that are transparent and ensure migrants have access to basic human goods are preferable to an opaque and unpredictable set of policies that makes navigating the systems that provide basic goods difficult, though both are legally acceptable.

But, beyond the legal space to determine immigration and citizenship policies is the commitment to accept refugees. This commitment is based in the idea that humans should not be condemned to suffer when there is a place they could live without being persecuted. Many nations, including the US, have agreed to policies that commit them to accepting asylum-seekers: countries cannot force migrants who have entered their territory to return to places where their safety is under threat. This is the "non-refoulement" principle from the United Nations' 1951 Convention Relating to the Status of Refugees, and even countries that are not parties to the convention have endorsed the spirit of the principle.

Trump’s policy builds off of a crucial exception to this principle, which concerns migrants who have come through a country considered to be “safe.” Countries are deemed “safe” according to the Immigration and Nationality Act, which governs asylum law, pursuant to a bilateral or multilateral agreement. Currently the US has such an agreement only with Canada. Trump attempted to sign such an accord with Guatemala, but Guatemala’s president cancelled the trip to sign the third-country agreement in order to see how Guatemalan courts ruled regarding the treaty.

In the US, asylum rates have been declining over the past six years, and this trend is on track to continue. Six years ago the denial rate was just 42.0 percent, but last fiscal year saw 70 percent of applications denied. In 2018, a particularly high spike in denials was the result of a policy shift made by Attorney General Jeff Sessions. Sessions banned asylum requests on the basis of domestic violence and gang violence, though this ban was later struck down in the courts. The new third-country policy would “effectively end” asylum on the southern border.

The decline in granting asylum and other relief to refugees does not just affect the groups at our border, however. This trend in US policy has been reflected in the policies of other large and wealthy nations. For instance, the EU currently attempts to prevent asylum-seekers from reaching its shores – supporting border agents in countries like Libya who catch migrants attempting to cross the Mediterranean Sea and detain them in deplorable conditions in African detention centers.

This is leading to worldwide declines in aid to those seeking relief: “It’s called a ripple effect,” says Jeff Crisp, a research associate at the Refugee Studies Centre at Oxford University. “When the largest and wealthiest nations get away with breaking international human-rights laws, then other countries wonder, why can’t we?”

India, a country with a long history of hosting asylum-seekers, currently has 40,000 refugees from Myanmar, and now is treating them as illegal migrants. It has begun sending Rohingya refugees back to Myanmar, the site of a 2017 genocide sponsored by the current government.

Similarly, Trinidad and Tobago sent 80 Venezuelans back to their devastated homeland last year, while Peru returned 40 Venezuelans for “allegedly being part of criminal gangs or for not having legal papers.”

Refugee policies are just one part of a racist and exclusionary nationalist landscape. The rhetoric that the US has engaged in bolsters other countries with similar constituencies. Hungary has explicitly praised the US’ nationalist tendencies and cited the “America First” anti-immigration policies as providing them with the support they need to enact similar attitudes within their own country. (Hungary closed its borders during the height of the Syrian refugee crisis and has rejected humanitarian pleas to take part in the effort.)

In national discourse, the importance of human rights is being subordinated to the sovereign rights of a nation-state that endorses a racist identity. That is not the priority of international law or of humane moral systems.

The Political Response to Racism: Trump vs. the Squad

photograph of "Welcome Home Ilhan" sign held by supporter with others gathered at MSP

While Donald Trump tweeting something awful may barely qualify as news these days, his tweets on July 14th were awful enough to be considered by many to be crossing a line. Addressing “progressive democrat congresswomen” – in other words, Alexandria Ocasio-Cortez, Ilhan Omar, Rashida Tlaib, and Ayanna Pressley, sometimes referred to as “the squad” – Trump told them to “go back and help fix the totally broken and crime infested places from which they came.” Trump’s remarks have widely been condemned as racist, bigoted, and xenophobic, although many Republicans have not yet openly denounced them.

There has, unsurprisingly, been a flurry of reactions to Trump’s tweets. While many politicians both local and international have expressed their disapproval, some have stopped short of calling Trump a racist, perhaps because of the moral condemnation the term implies, or perhaps because they are busy arguing about the semantics of the term. There is, however, little room for mental gymnastics here: telling four women of color to go back to where they came from is unambiguously racist. The question is not whether Trump’s remarks are morally reprehensible (they are), but instead what should be done in the aftermath.

Journalists have suggested a number of different answers to this question. For instance, those at Fox News took the unsurprising stance that Trump’s remarks shouldn’t be taken seriously, and that they were at most an “unforced error.” Others have called for some form of punishment, most notably in the form of the House passing a resolution to officially censure Trump for his remarks. While such a censure would not impede Trump in any significant sense, it would at least be a symbolic gesture that put him in the company of the last president to officially be censured, Andrew Jackson, all the way back in 1834.

While it seems clear that Trump’s remarks deserve condemnation, and that Trump himself ought to be held accountable for them in some way or another, some journalists have urged caution in deciding the best next course of action. The thought is something like the following: to Trump’s most diehard fanbase, racism is not a deal breaker (Trump’s history of racist remarks and actions is, after all, well-documented). Calling for official censure, then, will only accomplish riling up his loyal supporters. Furthermore, by drawing attention to Ocasio-Cortez, Omar, Tlaib, and Pressley, one associates the Democratic party with these four specific women, who have tended to be unpopular amongst those on the right. Really, then, condemning Trump only ultimately helps his cause of rallying the Republican troops.

Instances of this take are not hard to find. For example, Jonathan Freedland at The Guardian writes:

It’s race-baiting, no doubt about it. But it might also be effective, as Trump’s 2016 campaign proved. The result is a dilemma for Democrats. Do they try to win back those white, low-income voters who supported Trump last time or do they use the president’s hateful behaviour, including his attacks on the squad, to drive up turnout among those appalled by it – especially black voters and young people?

Or consider Republican strategist Ford O’Connell, who stated that Trump’s remarks were “very smart from an electoral strategy perspective” and that he is helping to make the squad “the face of the 2020 Democratic Party.” While not suggesting that Trump’s remarks should go uncondemned, Ocasio-Cortez, Omar, Tlaib, and Pressley have themselves warned against letting the remarks serve as a distraction from a number of pressing issues, cautioning Americans not to “take the bait.” Other commentators have echoed this sentiment. Consider finally the following from journalist Vinay Menon:

There is no point in trying to shame a shameless man. If by now his fans do not see his profound failings as a human, those blinders can’t be removed. Trump is not a politician. He is a cult leader who is bending his party to his will.

And the more awful he is, the more his base rejoices.

This summer, critics should pretend he no longer exists. Put the focus elsewhere. Stop taking the bait. Cease giving him a power he has failed to earn.

We have, then, something of a dilemma, in that those who wish to condemn Trump risk helping him in the long run by doing so; or, as Freedland puts it,

The result is that Democrats face a choice between doing what is morally right and what is politically smart. When you’re dealing with an amoral bigot in the White House, those two things are not always the same.

Unfortunately, this has not been the first time that we have been faced with this dilemma during Trump’s presidency: for instance, some argued that beginning impeachment procedures after the release of the findings of the Mueller report would only ultimately help Trump’s cause, as it would be perceived as whining on behalf of the democratically controlled House. Indeed, it has been a consistent refrain whenever Trump does something awful: censuring him in one form or another only encourages his base, so why bother trying to punish him?

So what ought one do in this situation? If there is a risk that implementing a punishment for Trump’s morally egregious acts would actually help him in the long run, is this reason not to pursue that punishment?

I don’t think there’s an easy answer here. But I do think it is worth keeping in mind that many of those writing on the situation are doing so from something of a distance. In other words, it is easier to take the position that the risk of political backlash warrants inaction when one has not oneself been directly impacted by Trump’s behavior. For instance, consider the headline “Before they can beat Donald Trump, his foes must learn to ignore him – even his racism.” This might sound like decent political advice, but it is harder to swallow when one is the target of the kind of racism that Trump is inciting. Indeed, the fact that crowd members at a recent Trump rally chanted the racist creed “send her back” indicates that ignoring his racism may not be the best course of action.

Does Australia Need a Bill of Rights?

photograph of Australia High Court building

Rights are one of the most recognizable ethical tools of the modern world. They have increasingly dominated the way we think about our moral lives – as individuals, as nations and in international relations. Nearly every mature, liberal democracy has a constitutional bill or a charter of rights to which lawmakers and keepers must defer. 

Rights language has become so entrenched in the way we speak that it is often taken as fundamental. A claim that “I have a right to X” will often trump other arguments. A right is an entitlement, and a right entails a duty – the right to freedom of expression entails the duty not to impede expression. In theory, if not always in practice, rights have been very important in guaranteeing the dignity and self-determination of persons. They are important because they promote those conditions necessary for well-being, for humans to flourish, and for society to foster that flourishing. 

But there can be a dark side to rights claims. For example, a claim to the right of free speech can be used to protect racism and lies; the right to freedom of religion can be used to protect discriminatory practices; and the right to bear arms, enshrined in the US constitution, has made it nearly impossible to tackle the scourge of gun violence in America. 

Some important philosophical questions about rights – what they are grounded in, what things should be considered rights, how they are protected and what to do when rights appear to clash with one another – remain a challenge. Some of these questions are central to the current national debate in Australia over whether a bill or charter of rights should be instituted. 

Australia is the only mature liberal democracy that does not have a charter or a bill of rights. Many feel that the introduction of constitutional rights is long overdue, yet others do not believe that a bill of rights is needed; some even feel that such a bill might be a hindrance to the administration of justice.  

This has manifested as a tension between ‘old constitutionalists’ who believe that the combined functions of the parliamentary and judicial system provide the best, most flexible and most democratic protections for Australians, versus those who think that the system is failing in some key areas which a bill of rights would help to rectify. 

At the time Australia’s constitution was written, early in the twentieth century, having a bill of rights as part of the constitution was rejected. It was argued that, in the words of former High Court Justice Michael Kirby, “a due process provision in such a bill of rights would undermine some of the discriminatory provisions of the law at that time.”

Some constitutional provisions function as rights provisions – such as freedom of religion. But it is the government’s legislative power that has expanded federal legislation and protected fundamental rights, by creating specific statutes dealing with human rights questions or the removal of various kinds of discrimination. Many of these have been based upon Australia’s ratification of international treaties. 

Various parties feel this process has worked well because it gives flexibility to the system, where charters of pre-existing, inalienable rights can make the system inflexible. Up to now, whenever this debate has arisen, the general sense has been that Australia’s parliamentary democracy usually works reasonably well, and its citizens have usually had a high degree of trust in legislators. If they do not act justly, particularly if they act oppressively, they will be dismissed from office at the next election. 

A further objection to the introduction of a bill of rights is that such a bill would lead to a kind of ‘judicial imperialism’ by transferring power currently held by the legislative body to the courts – that is, to unelected (usually white, middle-aged, male) judges. The worry is that a bill of rights could entrench the values of those judges into law, in a way that would prevail even over Parliamentary statutes. 

However, the argument that a bill of rights would politicize the courts and place too much power in the hands of judges, who are unelected and therefore less accountable in the democratic system, may be losing ground. One contributing factor is this era of increased populism, from which Australia, following the results of the most recent election, is certainly not immune. In that vein, one could also add the growing sense that people’s trust in democracy has been eroded by many different, powerful forces, from corporate lobby groups to misinformation spread on social media. 

Nevertheless, the issue of flexibility is still present. As the example of the right to bear arms in the US illustrates, things which may be important fundamental rights at one time, may not be appropriate in another. Having protections enshrined as rights can make them very difficult to amend later. The Australian constitution, like the US constitution, is very difficult to alter, so the worry is that the community could be stuck with rights that end up resulting in more harm than good. 

A bill of rights drawn up now may not have the capacity to deal with problems of the future. We live in an age of such exponential technological change, we may not yet know what problems internet technology, biotechnology, genetics or artificial intelligence may pose. It is not likely that a bill of rights drawn up now would be able to predict or manage all of the issues that these advances might bring. The argument is that it is better to leave rights and responsibilities associated with these issues to be dealt with as they arise by the parliament of the day through the enactment of specific legislation. Such legislation can typically be expressed in far greater detail and specificity. 

On the other hand, the democratic system may have its own flaws when it comes to equal protections for every person. It does, of course, favor the majority, and for this reason some feel that a bill of rights is necessary to ensure the interests of minorities and other vulnerable individuals are equally protected. As Justice Michael Kirby, a strong advocate for a bill of rights in Australia, said in a recent address on the subject:  

Democracies look after majorities. Democracies are good in looking after majorities… In America, if President Trump does something which is considered unjust, there is provision for the appeal to the federal courts and ultimately the Supreme Court. But in Australia we have very few weapons if politicians in the majority don’t feel it is a matter they are interested in or that there are no votes in it. 

Though it is true that rights can sometimes be inflexible, and that there are difficulties in deciding which rights to enshrine, how to enforce them, and how to manage situations where they come into conflict with one another, the question of how a society can best protect minorities and vulnerable individuals makes it prudent to remind ourselves of the philosophical case for rights. 

The notion of inalienable rights is based on an ethical principle of equality and dignity. It is a deontological principle which has at its core the imperative to treat persons with respect, as ends in themselves but never as means to an end. This fundamental tenet is at the center of the notion of human rights. 

There have been cases in Australia over recent years in which the government, for largely political reasons, has failed in its duty to treat all people with respect and dignity. A prominent example is Australia’s treatment of refugees, who have been held in indefinite detention in substandard conditions. Justice Kirby argues that: 

Basically, the idea of finding the fundamental principles that bind us together and that our rules for a fair society are principles that should be bipartisan and not consigned to one side of politics.

A bill of rights would ensure that basic protections, like the right to freedom from discrimination and freedom of expression, would be guaranteed for all Australians, and all those under Australia’s protection. Minorities and the vulnerable would be protected from the possibility of legislation which would undermine these things. These protections communicate our convictions about principles like equality, justice, and kindness, which are the essence of a good and free society.

Discussing Scientific Consensus on Climate Change

close-up photograph of dried lakebed

Concerns about the climate are becoming more pronounced in politics and policy discussions with each year. In the recent E.U. election, Green parties witnessed a marked increase in support. In Canada, the Green Party recently doubled their national caucus and managed to come second in a recent provincial election. In the U.S., there is hope of a Green New Deal. However, the federal administration in the U.S. has issued new directives to various national agencies to strip references to climate change or to omit worst-case emission scenarios. Public debates and media coverage emphasize the near universal consensus of climate scientists, but, on specific issues, this level of consensus simply does not exist. The nature of scientific consensus on the issue of climate change makes public discussion difficult, and this has ethical implications for how the public should be educated on matters of science. 

Studies show that the American public tends to believe that the consensus on climate change is around 72%, while many in the media (John Oliver’s Last Week Tonight being a good example) focus on the point that 97% of climate scientists agree on the issue of human-caused climate change. Getting the public to understand the degree of scientific consensus is important; it allows the public to be better able to address the dangers of climate change and assess the merits of various policy proposals. However, an important issue that is often not discussed is what exactly is meant by “scientific agreement.” The degree of scientific consensus isn’t constant given different questions and projections. While there may be a risk in underemphasizing the degree of current consensus, there may also be a risk in overemphasizing it as well. Is it worth it to potentially muddy the waters and attempt a more complex and nuanced public discussion about the nature of this consensus and the implication of climate change? 

Consensus on reports by the Intergovernmental Panel on Climate Change (IPCC) is often considered important. However, a 2007 paper by Oppenheimer et al. warns policymakers about extreme possibilities of climate change that are downplayed or excluded for the sake of consensus. It notes that the report tends to minimize uncertainty by excluding less understood processes. Because of this, various models may be subject to a “premature consensus.” 

Similarly, a 2010 paper by Dennis Bray discusses surveys of climate scientists and finds that even amongst IPCC participants there is not uniform consensus. On the topic of future changes to precipitation, only 54% of IPCC respondents state that the IPCC report reflects a consensus view. Bray’s paper also mentions a 2008 survey which examined participant agreement with official IPCC projections of extreme events; almost 50% of participants indicated that they disagreed or strongly disagreed.  

As the papers suggest, the issue of scientific consensus is more complicated than it is often described in public discussion. While there is broad agreement between climate scientists, that consensus evaporates when considering the finer details. Given the seriousness of global climate change, it is obviously beneficial that the public takes the threat seriously and that they are confident in what scientists are telling us. No doubt this is why the “97% consensus” point is so compelling. 

But emphasizing consensus at the expense of considered disagreement and uncertainty comes with risks. This is important knowledge for policy debates; the public has a vested interest in knowing if official projections are under- or overestimating the potential harm. This may be especially important at the local and regional level since, for example, coastal regions are likely to be disproportionately affected by the effects of climate change. Vigorous public input in these regions may be both desirable and necessary. 

Appreciation of scientific consensus is important for depoliticizing the facts around climate change. But the more the details and limitations of this consensus are discussed, the greater the risk that the facts become politicized by a public who may not have the time or expertise necessary to process the information. Is it worth it, then, to have the public be informed about disagreement when there is concern that the consensus view may underestimate projections about extreme events? More specifically, is it worth it if the result is that scientific consensus appears weaker in the public eye and ultimately less is done about climate change overall? Even if there is broad consensus on the notion of human-caused climate change, climate change deniers would likely seize on reports of disagreement over specifics to undermine the broad consensus that climate change is human-caused. 

Deliberately not covering cases of climate scientists diverging from the consensus view makes for a less informed public, and we generally consider this a bad thing. It can undermine public trust in science and the public’s ability to make well-informed democratic decisions. However, with greater coverage of scientific disagreement, the facts could become twisted. If public confidence in the scientific consensus falls, the public may be more inclined to be skeptical of climate change, and such coverage may result in an even less informed public overall.

These questions pose a moral problem both for those who report on scientific findings and for members of the public, who may have a moral obligation to be as informed as possible. Perhaps the long-term answer is to focus on science education, but that can take time. Plato’s Republic advocated a “noble lie” in order to ensure social cohesion and harmony. Reporting only on consensus and glossing over areas of disagreement may constitute a lie of omission, but is it a noble one?

Refusal to Repatriate: The Owning, Lending, and Stealing of Art

photograph of the British Museum at night

In 1897, British troops stole some 4,000 sculptures after invading the Kingdom of Benin, in what is now southwestern Nigeria. Since then, Nigeria has requested the return of these bronzes with increasing frequency, especially since the plans for a Royal Museum to open in 2021 became firm. The British Museum and the UK have come under a great deal of scrutiny and criticism for their refusal to repatriate these cultural artifacts. The UK has also refused to return the Elgin Marbles, named for the nobleman who took them from Greece, and has refused Egypt’s request to repatriate the Rosetta Stone. In November, the British Museum agreed to a temporary solution with Nigeria: it will loan some pieces of its collection to Nigeria for the museum opening.

This decision is consistent with past loans. In 2016, the British Museum refused to repatriate the Gwaegal shield to Australia, instead loaning the shield for a museum exhibit and reclaiming it afterward. 

This tentative policy is a contrast to the 2017 commitment by French President Emmanuel Macron to return objects of African heritage. After announcing that the Quai Branly Museum in Paris will return 26 stolen objects to the country of Benin, Macron said he wants to change French law so that France must return stolen objects whenever a country asks for them back.

The British Museum is not the only European institution under pressure to reevaluate the policies and laws that prevent it from returning parts of its collection. The Pergamon Museum in Berlin, Germany, has repeatedly been criticized for its refusal to repatriate the Ishtar Gate, among other objects, to Iraq.

Often in discussions of repatriating artifacts of cultural heritage, it seems straightforwardly fair that the countries of origin be in possession of their significant artifacts. However, the appropriateness of repatriating these artifacts can be less straightforward when three factors come into play. The contemporary country of origin may have less of a claim to repatriation when 1) the acquisition may have been just, 2) the source cultural group is unclear or not clearly contiguous with a contemporary group, or 3) the value of institutional retention of disputed objects is thought to outweigh the good of repatriation.

The first consideration, the justice of how the current institutions came to possess the artifacts, is part of what makes repatriation a systemic problem. The reason that so many countries in Europe face repatriation claims is that they obtained the artifacts through imperialist practices. Archeologists from imperial nations, rich collectors, and black-market acquisitions led many artifacts out of their countries of origin while those countries were under foreign control. These are colonial powers that often acquired significant artifacts as a direct result of their imperialism: they ruled the countries of origin at the time and brought the artifacts back. The countries of origin were cut out of the chain of possession because they were not ruling themselves at the time; their culturally significant artifacts were taken just as their natural resources were. Currently in Germany, an inventory is being taken of the artifacts in the major museums to track how the pieces were acquired, as a potential first step toward determining ownership rights.

The second issue with repatriation is a practical one. If the country currently in possession of an artifact grants that it obtained the object unjustly, there is still the question of to whom to return it. Part of what makes these artifacts so important is their connection to cultural history and heritage; they offer a window onto societies of the past. But whether a contemporary group can claim to inherit the rights to cultural property can be less than clear. If, for instance, current national borders do a poor job of capturing a culture that has no clear descendants, some have argued that a contemporary nation’s claim to own the artifact is not significantly stronger than that of the nation in possession of it.

Finally, there is the artifact itself to consider. There may be reason to keep artifacts where they are, often for their own safety. If an artifact’s country of origin is experiencing unrest or lacks a suitable institution to care for it, this bolsters the current nation’s claim to stewardship. Germany, for example, has used the justification that travel back to Egypt would damage the delicate bust of Nefertiti to refuse its return despite continuous requests since 1930. Likewise, the British Museum claims that the value of keeping the collection of bronzes together outweighs the claim of cultural ownership and justifies retaining its rights to the Kingdom of Benin’s bronzes.

There is mounting pressure in favor of repatriation, especially with France’s expressed commitment to abide by any requests they receive. Progress may be slow, but it is heartening that it is moving in the direction of significant artifacts residing where their context supports, rather than where colonial power and money have moved them.

Plant-Based Meat Substitutes, Sensational Reporting, and Information Literacy

Close-up photograph of a hand holding vegan McDonalds burger

2019 has been a good year for plant-based meat replacements. In January, Carl’s Jr. launched their Beyond Famous Star burger, made with plant-based Beyond Meat. The Mexican food franchise Del Taco launched tacos made with Beyond Meat at all of its franchises in April. The introduction of the vegan alternative has been a smash success, leading the company to add two more Beyond Meat products to its menu in June of this year. Many other restaurants have recognized the consumer interest in meat-free options: Burger King, White Castle, A&W, and Red Robin all offer products made with the Impossible Burger, a plant-based competitor to Beyond Meat.

Now, one might respond, “but haven’t plant-based alternatives existed for quite some time?” After all, most people can’t remember a time when they couldn’t purchase a veggie patty from their local supermarket. These alternatives are different. They have been designed not simply to provide a substitute for meat, but to provide an option so similar to beef in taste and texture that few consumers are able to tell the difference.

By May of this year, Impossible raised 300 million dollars in Series E funding, generating investments from individuals like Jay-Z, Serena Williams, Alexis Ohanian, Katy Perry, and Jaden Smith. When Beyond Meat went public in May, it was, according to Business Insider, “the best performing first-day IPO in nearly two decades.”

The success of plant-based substitutes is an unmitigated good thing. A move away from meat consumption is critical for a healthy environment. Animal agriculture accounts for 14.5-18% of all global greenhouse gas emissions, making it the second biggest contributor to greenhouse gas emissions after the burning of fossil fuels. Animal agriculture also contributes to pollution of land, water, and air, to deforestation, and to loss of biodiversity.

Environmental considerations matter for our futures and for the futures of our children. They also matter for all of the other species of living beings on earth. But movement away from meat is important for reasons beyond impacts on the environment. We have a direct obligation to the animals being consumed. Given the fact that it isn’t necessary for humans to consume flesh to survive, many believe that animals ought not be made to sacrifice their lives because we enjoy how their dead bodies taste. What we should be striving for, as philosopher Tom Regan elegantly put it, is “Not larger cages, empty cages.”

Even those who aren’t sympathetic to that argument might be sympathetic to arguments related to animal welfare. One might think that it isn’t wrong to kill animals for food, so long as those animals are treated humanely and killed painlessly. The fact of the matter is that this simply isn’t the reality of how the meat for our tables is produced. Most meat is produced in factory farms. In these places, animals are treated as things rather than as living beings. They are treated as products to be mass produced rather than as creatures capable of experiencing a wide range of sensations including pleasure, pain, anxiety, and grief. Conditions in these places are appalling: animals are given very little room to move around, they experience existences of perpetual anxiety and pain, and no steps are taken to see to it that these beings live their lives in a way that would constitute flourishing for members of their species. The appearance on the market of plant-based alternatives that taste so similar to meat that many tasters can’t tell the difference is truly a remarkable achievement. We have the power to ameliorate all of this suffering and death and still satisfy carnivorous palates.

The fact that meat substitutes are doing so well is big news. Predictably, in the current click-bait news climate, eye-catching headlines and sensationalist stories are easy to find, and they spread across social media like a virus. Among other things, many news articles raise concerns about the nutritional value of substitutes like Beyond Meat and Impossible meat. One need only search for news stories about the nutritional value of a given meat substitute to find plenty of noteworthy examples.

Consider the story “Are Beyond Meat and Impossible Burgers Better for You? Nutritionists Weigh In,” published in the Huffington Post on July 10th, 2019. The first concern the nutritionist lists has to do with protein. The protein content in these products is high: the Beyond Burger contains 20 grams of protein and the Impossible Burger contains 19. These levels are actually higher than the protein content one finds in ground beef with the recommended 20% fat. It can’t be, then, that these meat replacements don’t contain enough protein. Instead, the nutritionist’s complaint is that the proteins are processed: “The problem with a lot of additives is that we just don’t know the long-term effects of them, whereas a beef burger could just be one ingredient: beef.” She makes it clear, of course, that the problem with these products lies not in average consumption but in overconsumption.

The average reader is likely left with the impression that we simply don’t know whether the contents of these products are healthy. A common takeaway, therefore, may be that, because we might not know the long-term effects of processed plant products, we should stick with tried-and-true beef. What she fails to point out is that there are things about overconsumption of red meat that we do know: such consumption has been linked to both cancer and heart disease, and heart disease is the leading cause of death in the United States.

The nutritionist also raises a concern about the amount of sodium in these meat replacement products: the Beyond Burger has 390 milligrams of sodium and the Impossible Burger has 370. This is, granted, much more sodium than is contained in beef. That said, these sodium levels aren’t, all things considered, high. The American Heart Association recommends no more than 2,300 milligrams of sodium per day and suggests that people average around 1,500 milligrams. Using one of these meat alternatives as the main source of protein in a meal is certainly compatible with satisfying these recommendations.
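As a quick back-of-envelope check (a sketch using only the figures cited above, not additional nutritional data), the sodium in one of these patties works out to a modest fraction of the daily guidance:

```python
# Sodium figures cited in the article, in milligrams.
AHA_DAILY_LIMIT_MG = 2300   # American Heart Association recommended maximum
AHA_DAILY_IDEAL_MG = 1500   # AHA suggested daily average
BEYOND_BURGER_MG = 390

# One Beyond Burger is roughly 17% of the recommended daily maximum
# and 26% of the suggested average: high relative to plain beef, but
# easily accommodated within a day's total sodium intake.
print(round(BEYOND_BURGER_MG / AHA_DAILY_LIMIT_MG * 100))   # 17
print(round(BEYOND_BURGER_MG / AHA_DAILY_IDEAL_MG * 100))   # 26
```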

The style of reporting in these stories is familiar: it is designed to raise concerns in the reader’s mind about the nutritional value of these products for the purpose of increased readership. In the end, the concerns, though presented as serious, are largely innocuous. Reporting on topics of consumption of all types is frequently this way. Consumption is a big part of life, and arguably one of the most difficult aspects of living is achieving a healthy balance when it comes to consumption. Many people turn to the Internet to help them navigate the rocky terrain. When the stakes are this high, reporters shouldn’t scare consumers away with misplaced concerns about nutrition.

Vaccination Abstention and the Principle of Autonomy

image of 1960's polio vaccine poster with Wellbee Cartoon

The suppression or eradication of many serious diseases in vaccinated populations has been one of the great public health successes of the twentieth century. There have always been those who resist or refuse vaccination for a variety of religious, political, or health reasons. Though there can be some risk of negative reactions to vaccines in certain individuals, vaccination is very safe for the general population.

The Criminalization of HIV Transmission and Responsibility for Risky Behavior

black and white photograph of judges' library

Michael Johnson was released from prison on July 9th after serving five years of his original thirty-year sentence. He was in prison for failing to disclose his HIV status to his sexual partners, and his sentence was longer than the state average for murder. The conviction covered transmitting HIV to two men and exposing four more to the virus, despite “an absence of genetic fingerprinting to connect him to the other men’s HIV strains.”

Johnson’s trial highlights the racist and homophobic undertones of the continued fear around HIV exposure. The images shown to the jury emphasized the darkness of Johnson’s skin, his muscularity (he was a star football player), and that two-thirds of the allegedly exposed men were white. The racist stereotypes regarding the sexuality of black men hurt Johnson’s chances in this trial, which were already slim given cringe-worthy missteps by his court-appointed public defender who claimed her client was “guilty until proven innocent.”

In the years since the trial and conviction, Johnson’s case has been a focal point of the discussion of the sexualization of black bodies and the inherent racism and homophobia in our criminal justice system. HIV criminalization laws disproportionately affect non-straight black men. Beyond these issues of justice, there is also a family of ethical questions surrounding sexual health. Johnson’s case is one of many in which sexual relationships and health statuses are interpreted criminally, and the laws surrounding HIV transmission are not structured to reflect current empirical understanding of how the disease spreads.

Empirical evidence suggests that HIV criminalization laws do not affect disclosure of HIV status to partners or decrease risk behaviors. A key component of the sexual ethics debate, arguably, is that people who are HIV positive can be treated to the point where transmitting the virus to sexual partners is an empirical impossibility. When medicated, people with HIV can have an undetectable viral load, meaning there isn’t enough of the virus in their system to turn up on standard tests. This makes them effectively no more likely to transmit HIV to their partners than a partner without the virus.

In light of this empirical reality, how should we ethically understand the risk of sexual behaviors? In recent years, some states have taken steps to bring their laws more in line with the health reality of HIV transmission in particular: California has passed a bill that lessens the offense of knowingly transmitting HIV to a misdemeanor, and a similar bill has been proposed in North Carolina. An attorney from the office that originally prosecuted Johnson in Missouri has become a supporter of a recent (failed) bill that would have reduced the punishment for knowingly exposing someone to HIV in that state.

Knowingly exposing someone to risk is an ethically interesting area. There are cases where we knowingly expose people to risks and it seems ethically unproblematic. A bus driver exposes their passengers to risk on the road. A tandem jumper exposes their client to risk diving out of a plane. A friend exposes a guest to risk by cooking for them: operating ovens, trying to achieve safe temperatures and adequate freshness of ingredients.

There are two major ethical principles at work here, because knowingly exposing someone to risk is putting them in a position of potential harm. Serving a dinner guest a meal that you reasonably expect will harm them is an ethically problematic action, and we would hold you responsible for it.

In similar yet ethically unproblematic cases, it could be that the case satisfies the principle of respecting someone’s autonomy: the person consented to take on the risk, or the risk is part of their life-plan or set of values. For example, a guest would have to consent to the risk if you were serving fugu, the famed and potentially poisonous Japanese fish dish, where the smallest mistake in preparation could be fatal.

Another scenario where posing potential harm to someone could be unproblematic is one in which the risk is so minimal or typical that, if harm were to result, we wouldn’t consider the other person morally culpable. If you serve dinner to a group of people buffet-style in the winter, this increases everyone’s exposure to the risk of catching colds and flus from one another, but typically we don’t take this to be ethically problematic. These two principles are at play when considering the risk of sexual behaviors.

There are reasons to take on risks to one’s health and well-being, and we adopt these risks daily. Having sex with someone is certainly in the realm of behaviors that are risky, but that we have reason to take part in: sex is part of a fulfilling life for many people. There is risk of becoming pregnant for some sorts of sexual interactions, and broadly speaking, because of the intimacy of sex and its place in our social lives, there are biological and physical safety risks as well. 

So what sort of risk can we assume a potential partner has consented to in engaging in sexual behavior, and what risks require disclosure? The fear behind the exposure laws seems rooted in cases where someone knows they have a disease and deceptively and intentionally transmits it to another person through sexual behavior. However, it is important to note that these are not the cases typically at stake in the trials. (Recent studies of the criminalization of HIV transmission found that “Records of arrests and prosecutions reveal that many cases involve non-sexual behaviors or sexual activities that pose little to no risk of HIV transmission.”) Also, in cases of transmission of illness, the relevant ethical questions involve exposure to risk and are more like the host/food-safety cases than the physical assault or murder model under which Johnson was tried.

In principle, meeting the burden of these ethical questions is difficult because there is no clear standard of reasonable risk aversion for particular domains. Is it reasonable to eat fugu? How frequently should you eat at a buffet? Some cases are clear: don’t poison your guests, and use standard food-safety preparation methods. However, there are no comparably clear standards for sexual disclosure. Perhaps the closest we can get to a principle of responsibility is this: if someone is aware that, in engaging in a particular sexual activity, they are at particular risk of some particular harm, then this seems to let the partner off the ethical hook for the exposure to risk. Satisfying this criterion may come down to further sexual education and responsibility training.

Consider the range of responses to engaging in behavior that exposes you to risk. You can put it out of your mind entirely and take part in the behavior without safeguards or protections. You can put your faith in the available protections (condoms, typically) without putting stock in the reliability of your partner(s). You can ask for verbal confirmation that the health status of your partner(s) meets a bar you find acceptable (when they were last tested, for what, how many partners they have had since). You can abstain until you receive documentation of such tests along with verbal confirmation of their activities since the tests were performed. These options represent quite an array of actions you can take before engaging in sexual activity with a new partner. The first may sound excessively reckless, as it exposes you to quite a bit of risk; your eventual health status depends on your partners and anyone they have interacted with sexually. The last may sound excessively cautious, yet it is the standard for many non-monogamous individuals. Note that none of these strategies ensures a risk-free engagement, and whatever your risk-aversion strategy, your potential partner(s) may have another.

These considerations, coupled with the empirical evidence on the treatability and transmission of sexually transmitted infections, point to the ethical trickiness of criminalizing health statuses. There isn’t a clear model for the ethics of exposing someone to a possible illness. Further, while it may be ethically problematic or wrong to expose someone to an illness, it does not follow that it should be illegal. We are deceived and harmed by sexual, intimate, romantic partners and loved ones throughout our lives, frequently in ways that cause us lasting harm, yet it does not follow that these harms should be criminalized. Lying, cheating, and breaking agreements are generally not crimes, and the burden should be on those seeking to criminalize immoral behavior to justify why some of the ways a partner lets us down should be. When such behavior is placed in the stigmatized and oppressive social context of racism, homophobia, and a lack of empowering sexual education, criminalizing exposure to disease becomes even more problematic.

Concrete Milkshakes and the Ethics of Being Wrong on the Internet

close-up photograph of milkshake

In late June there was yet another clash of right-wing groups and protesters, this time in Portland, Oregon. As has been widely reported, in several previous clashes this year there have been incidents in which protesters threw milkshakes at members of extremist groups and right-wing politicians, most notably during an incident in May in which European Parliament member and Brexit supporter Nigel Farage was on the receiving end of a protester’s milkshake. While the person involved in the incident was arrested, Farage himself was not injured, merely embarrassed. During the recent events in Portland, however, the protesters upped the ante: as many news outlets, along with the Portland police department reported, instead of using regular milkshakes to embarrass members of the fringe extremist group “Proud Boys” [sic], protesters added powdered cement mix to the milkshakes, clearly with the intention to seriously injure their opposition. According to reports, several people called the Portland police to report the incident, after which they tweeted a warning to those attending the protest.

The only problem with the information tweeted out by police and reported by numerous right-wing news outlets? There is not a shred of evidence to suggest that it is true. Not a single word of it.

That blatant falsehoods like these are spread in this day and age is perhaps not surprising; we may even be becoming desensitized to it. And while there are clearly ethical issues with spreading misinformation (some of which have been written about before on this site), what is perhaps just as problematic is the general response from those who have spread misinformation once it has been shown to be false.

It seems that the best thing to do when one has been shown to be wrong is to admit one’s mistake, and to retract one’s original claims. But this is not what we have generally seen happen in recent incidents online: it seems that instead of people admitting that they are wrong they will either quickly move on to the next thing, or else manipulate the narrative so that they can convince themselves that they were right the whole time. Both of these types of responses have been commonplace in response to the recent Portland protests.

Consider first Fox News, which originally ran the headline “Antifa, conservative protests turn violent as demonstrators throw milkshakes of quick-dry cement at police and onlookers.” Not typically known for their subtlety, this headline could not be any clearer, and not any more false. One might think that, upon learning one has printed a headline full of egregious errors, it would be one’s responsibility to set the record straight by issuing a retraction. But no such retraction was issued. Instead, the headline was quietly changed to read “Antifa-Proud Boys confrontation in Portland turns violent; conservative writer injured,” with the content altered only slightly: instead of claiming that milkshakes with cement mixed into them were, in fact, thrown, the article now states that “it was reported” that such milkshakes were thrown. This in no way constitutes a retraction of a demonstrably false claim; it instead still strongly implies that the claim was true. It is hard to see how these actions could constitute anything other than blatant dishonesty.

Philosophers have been writing about the dangers of intellectual dishonesty and carelessness in forming one’s beliefs for a long time. Philosopher W.K. Clifford, for example, discussed an example that sounds like it could have come straight out of today’s headlines, all the way back in 1877:

There was once an island in which some of the inhabitants professed a religion teaching neither the doctrine of original sin nor that of eternal punishment. A suspicion got abroad that the professors of this religion had made use of unfair means to get their doctrines taught to children. They were accused of wresting the laws of their country in such a way as to remove children from the care of their natural and legal guardians; and even of stealing them away and keeping them concealed from their friends and relations. A certain number of men formed themselves into a society for the purpose of agitating the public about this matter. They published grave accusations against individual citizens of the highest position and character, and did all in their power to injure these citizens in their exercise of their professions. So great was the noise they made, that a Commission was appointed to investigate the facts; but after the Commission had carefully inquired into all the evidence that could be got, it appeared that the accused were innocent. Not only had they been accused on insufficient evidence, but the evidence of their innocence was such as the agitators might easily have obtained, if they had attempted a fair inquiry. After these disclosures the inhabitants of that country looked upon the members of the agitating society, not only as persons whose judgment was to be distrusted, but also as no longer to be counted honourable men. For although they had sincerely and conscientiously believed in the charges they had made, yet they had no right to believe on such evidence as was before them. Their sincere convictions, instead of being honestly earned by patient inquiring, were stolen by listening to the voice of prejudice and passion. (Clifford, “The Ethics of Belief”)

Clifford says that it was morally wrong for the agitators to believe what they believed, because they formed their beliefs on scant and shaky evidence, and because their beliefs were the result not of patient reflection and careful consideration but of “prejudice and passion.” Importantly for Clifford, such beliefs can still be held with complete sincerity: the agitators were not aware of the falsity of their beliefs, and they were confident that what they believed was true. Nevertheless, the way that they formed their beliefs shows that they were doing something wrong.

One difference between Clifford’s case and contemporary events is that, instead of today’s agitators being broadly considered untrustworthy and dishonorable, the consequences for being blatantly wrong on the internet are far less severe. In fact, the prejudice and passion that drive the careless acquisition of evidence further encourage the manipulation of that evidence when it is confronted with conflicting evidence.

Consider again responses to the fabricated concrete milkshake story. Even after discovering that the story was false, one could find people reinterpreting the evidence to fit their preferred narrative. Consider the following representative tweets:

I’ve been getting a lot of emails telling me I’m stupid for adding to the concrete shake line. Lots have sent me a Portland Mercury article saying there’s no proof of concrete. Maybe so, but this is exactly how to do a kind of petulant terrorism. This makes Antifa terrorists.

This is the effect of concrete shakes, real or not: They tell people this will happen; they do things to make it look real; they maybe don’t even do it; then they accuse everyone of being stupid and gullible. Then they get much of the effect they want and get to do it again.

This kind of response again not only fails to constitute a retraction (nor does it recognize any kind of personal wrongdoing) but instead takes what evidence does exist – that some people claimed that there were concrete milkshakes at the rally – to support the conclusion that they held all along – that Antifa is a violent terrorist group. This is a disastrous leap in logic that not only runs afoul of Clifford’s principle that beliefs should be formed carefully, conscientiously, and on the basis of the best possible evidence, but goes the extra step of manipulating the evidence in such a way that supports one’s initial belief, even after having been proven wrong.

There are a lot of lessons we can take away from the recent events in Portland and the online responses to them, but perhaps one of the most important is to recognize that it is better to admit wrongdoing than to try at all costs to save face in light of overwhelming evidence. Spreading misinformation is harmful, but manipulating evidence to further the spread of misinformation that one knows perfectly well is false is perhaps even more harmful still.

To Keep or Not to Keep? The US Electoral College and Presidential Representativeness

image of US map of electoral votes by state

“One person, one vote” and “Not my President!” These mantras underlie calls for election reform in the United States, and they are pressed urgently now regarding the Electoral College and its role in selecting the President of the United States (POTUS) as the 2020 election approaches. Solutions posed by critics range from reformation, to circumvention, to abolition. To many, the Electoral College is patently undemocratic because it can fail to represent the choice of the national constituency. This view is officially championed by numerous candidates in the 2020 Democratic primary: Cory Booker, Elizabeth Warren, Jay Inslee, Julian Castro, Kirsten Gillibrand, Marianne Williamson, Pete Buttigieg, and Robert “Beto” O’Rourke.

During 2019, several states enacted legislation to join the National Popular Vote Interstate Compact: Colorado, Delaware, New Mexico, and Oregon. (Additionally, the measure passed both chambers of Nevada’s legislature but was vetoed by Gov. Steve Sisolak.) These states join 11 others, and the District of Columbia, in pledging to assign their electoral votes to whichever candidate wins the national popular vote. This would effectively circumvent the Electoral College while leaving it in place. However, the compact only takes effect once enough states have signed on: i.e., enough states to contribute the 270 of 538 electoral votes needed to win election.

How does the Electoral College work? Each state in the US is allotted a certain number of electoral votes, based on its representation in Congress (House seats plus Senate seats). Electoral votes are cast by individuals nominated to the College of Electors, whose votes directly determine which candidate becomes president. Most states give all their electoral votes to whichever candidate wins a plurality of their popular vote. Maine and Nebraska are the exceptions, assigning electoral votes on the basis of results in each of their US House districts: a candidate receives one vote for each district they win, and the candidate who wins the statewide popular vote receives 2 votes.

Advocates for each position regarding the Electoral College claim their stance best represents the choice of voters, and that their opponents’ views over- or underrepresent some group or another. Supporters of the Electoral College argue it prevents urban areas from dominating elections, or that it accurately represents the federal structure of the US government. Critics of the Electoral College consider it unacceptable that a candidate can win election who does not have the support of a majority of the national constituency. They also argue the Electoral College inflates the voting power for citizens of certain states, and deflates the power of other states’ citizens, going against the “one person, one vote” principle.

Disagreements about the Electoral College are about who the POTUS represents. That is, it’s about what representation is and who the represented constituency is. Hanna Pitkin’s 1967 The Concept of Representation provides an important touchstone for a thoughtful discussion of representation. She elaborates four facets of representation: Formalistic, Symbolic, Descriptive, and Substantive. (See the Stanford Encyclopedia of Philosophy article on Political Representation.)

Superficially, the disagreement between detractors and supporters of the Electoral College solely concerns Pitkin’s formalistic aspect; the debate hinges on questions pertaining to the political process and its ability to confer legitimacy. We ask whether the election was conducted according to existing rules and the spirit of the law. Setting aside concerns about election tampering/interference, some claim President Trump’s 2016 election was “illegitimate” because he received a significantly smaller share of the national popular vote than Hillary Clinton. However, the formalistic aspect of representation doesn’t fully capture the sense of illegitimacy pressed here: President Trump was elected according to the established protocol of the Electoral College system.

An alternative explanation is available in Pitkin’s symbolic and descriptive aspects of representation. When people denounce President Trump as “not their president”, they often mean to say that they object to what he stands for, or claim that he fails to resemble the voting public physically or ideologically. Such people would presumably accept, and feel represented by, a candidate who won the popular vote. Hence when critics of the Electoral College argue that the outcomes of US presidential elections are undemocratic, and don’t represent the will of US citizens, they mean it in the symbolic and descriptive senses. (This article will not discuss Pitkin’s substantive aspect. It involves an officeholder’s performance of their duties, which can only be evaluated after elections.)

While advocates for a national popular vote see US citizens at-large as the represented constituency, advocates for the Electoral College see US states as the represented constituency. This isn’t an irrelevant distinction. Consider a hypothetical situation in which the NPV is in effect. If the citizens of Oregon, which has joined the NPV, vote unanimously in favor of one candidate, but that candidate loses the national popular vote, then all of that state’s electoral votes go to a candidate for whom not a single person in Oregon voted. The NPV, and any national popular vote scheme, recognizes no difference between Oregon voters and the voters of any other state—everyone is just a US voter. However, the Electoral College system does distinguish between voters on the basis of their state of residence.

Opponents of the Electoral College understand this, and argue that these distinctions diminish the voting power of some citizens relative to others. This effect is not a necessary consequence of the Electoral College—or at least not the effect’s magnitude. Rather, it’s a direct effect of the cap on the number of voting representatives in the US House at 435 (Apportionment Act of 1911). This cap also limits the number of electors and has caused the average number of citizens represented by a House member (and hence the number of individual votes subsumed by an electoral vote) to increase over time, though differently for different states. The inflation/deflation of voting power Electoral College critics highlight is a direct consequence of the fixed number of House representatives.

Increasing the number of Representatives would ameliorate the symbolic and descriptive representativeness problems of the Electoral College, while also increasing the representativeness of the House. Further, it can be done by legislation in Congress rather than the Constitutional amendment that would be required to abolish or reform the Electoral College. Finally, it preserves the distinction between voters of different states, respecting the federal structure of the US government. This consideration will not appeal to many opponents of the Electoral College. However, short of full abolition, increasing the total number of electoral votes by increasing the size of the House addresses representativeness problems, and does so without leaving open possibilities of bizarre, and objectionable, situations such as the hypothetical Oregon case above. The current Electoral College is malfunctioning, and the best ways to deal with it are complete abolition or substantive reform. The NPV does neither, merely walking around a broken machine without fixing it or removing it—leaving it to belch an occasional cloud of toxic smoke.


This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of our discussion questions, check out the Educational Resources page.

Farmworker Abuse and Agricultural Exceptionalism

photograph of migrant workers harvesting sweet potatoes

In mid-June, New York passed the Farm Laborers Fair Labor Practices Act, a measure aimed to improve working conditions for agricultural employees that has circulated the hallways of the state legislature for roughly twenty years. By allowing farm workers collective bargaining rights, eligibility for workers’ compensation, and unemployment benefits (among other provisions), the FLFLPA targets a series of long-standing exemptions in the legal code that have allowed farm owners to disadvantage their employees for decades. 

Although the 1930s brought a series of federal labor regulations to the books, including familiar arrangements like minimum wage requirements, overtime pay standards, and laws restricting child labor, agricultural workers were explicitly excluded from each of those statutes. Under a practice dubbed ‘agricultural exceptionalism,’ the federal government has largely left the task of protecting the interests of farmworkers to the particular states that house the various industries. While this practice of special treatment may have made sense in the early 20th century, one might expect policy arrangements to change as the landscape of contemporary agriculture has changed and farmwork has grown ever-more industrialized (and ever-more similar to the dismal factory conditions which, in part, prompted FDR’s labor reforms in the ‘30s). With a few notable exceptions, this simply has not happened.

Take, for example, minimum wage requirements and compensation protections for injured workers. Although the federal Fair Labor Standards Act was amended in 1966 to include some farmworkers under its wage mandate (a provision originally left out of the FLSA when FDR signed it in 1938), those requirements allow for plenty of loopholes that corporations can exploit to lower expenses by lowering employee compensation, such as the implementation of a piece rate system that pays farmhands based on their productivity (as measured in buckets or bags of produce picked). Even when piece rate systems are supplemented to reach equitable hourly rates, their very nature incentivizes farmworkers to engage in unsafe practices (such as working through rest periods or minor injuries) – a particularly problematic result when laws often do not require employers to provide workers’ compensation benefits for injured employees (in a field routinely ranked at “very high risk” for occupational hazards) or when a variety of additional pressures make such benefits risky or inaccessible to farmhands.

Of course, comparing data across industries is particularly difficult for agricultural economists, given that many agricultural jobs are filled by seasonal, migrant, and/or undocumented workers. This means that even if workers are paid above a state’s minimum hourly wage rate, their actual take-home earnings can leave them significantly impoverished. Consider how quirks in reporting requirements allow Californian employers to grossly overstate the actual amount of money paid to each employee: because the majority of workers do not work full-time for one employer, “in 2015, workers who received their primary earnings from agricultural employers earned an average of $17,500—less than 60 percent of the average annual wage of a full-time equivalent (FTE) worker in California.” Nevertheless, industry representatives can routinely make claims about higher compensation rates that, though technically true, are thoroughly misleading.

Consequently, the passage of the long-debated FLFLPA sets a standard against agricultural exceptionalism in one of the largest agricultural states in the country; requiring, among other things, that farmhands receive overtime pay (after working sixty hours a week, not forty, as a concession to industry lobbyists), be eligible for unemployment insurance and workers’ compensation coverage, and be given one uninterrupted 24-hour rest period each week. Critics of the legislation suggest that increasing industry costs may lead to the bankruptcy of small farms and the out-of-state relocation of others, but human rights advocates and labor defenders have heralded the FLFLPA as a landmark step in the right direction. As Beth Lyon, a law professor at Cornell who founded the school’s Farmworker Legal Assistance Clinic points out, “If you have an industry where the jobs are so unattractive that you have to fill them with undocumented 15-year-olds, then maybe you need to make the jobs more attractive.” It remains to be seen whether NY’s legislative victory for farmworkers will prove to effect change more broadly throughout the country or not.

Surrogacy in New York

photograph of pregnant torso

There are many ways to make a family. The intimate bonds of commitment and affection that make a family unit are grounded in a wide variety of ways: in biological relation, in choice, in shared experience, etc. Family bonds across generations are manifested between parent and child, but even these bonds vary in how they are grounded. Societies and our medical technology have developed in ways to support the variety of ways that parents can have children – currently there are ways to have a child through adoption, in vitro fertilization, and surrogacy, and advancements are being made in artificial wombs that would open up further methods of bringing children into the world. The diversity of methods for having children benefits potential parents for whom cis-hetero fertilization is not possible or desirable. Single parents, LGBT couples, and cis-hetero couples with fertility concerns are all aided by this variety of methods.

The medical technologies and social policies that support individuals’ decisions to become parents thereby positively respect the autonomy of these people. However, as with many developments and advancements that can be costly, there are justice considerations that arise: who is benefitting from the development, and who is placed at risk? Gestational surrogacy has recently been debated along these lines, for while the opportunity to have a child via surrogate benefits many potential parents, the risk and burden of gestation is adopted by someone else. To be a surrogate, a person agrees to take on the responsibilities of pregnancy and gestation for a potential parent with the understanding that parental rights and responsibilities after the birth of the child will belong to the person seeking the surrogate, not the person who gestates the child. The morality of compensating someone to take on this burden with their time and body raises questions for feminists and economic ethicists alike.

Recently, New York State failed to pass a bill that would make compensated gestational surrogacy legal. Currently, in New York, only altruistic surrogacy is legal and surrogacy contracts are unenforceable. Surrogates cannot receive a fee or compensation, and the success of the arrangement is due solely to the integrity of the parties involved. 

New York is one of two states that currently ban compensated gestational surrogacy outright. In 1992, a gestational surrogate in New Jersey sued to keep parental rights over her biological child. In the wake of that suit, New Jersey, Michigan, and New York passed bills banning gestational surrogacy. New Jersey reversed its ban last year, leaving only New York and Michigan. (Though it is important to note the variety of restrictions and protections that exist across America, sometimes varying at the county level.)

However, the proposed bill — allowing gestational surrogates to be compensated for bearing a child without intending to bear the rights and responsibilities of parenthood — did not succeed during this legislative session. Democratic representatives were concerned that compensated surrogacy presents a slippery slope to commodifying women’s bodies and the bill did not garner sufficient support. “We must ensure that the health and welfare of women who enter into these arrangements are protected, and that reproductive surrogacy does not become commercialized,” said Assembly Speaker Carl Heastie.

Some feminists, such as Gloria Steinem, have been vocal opponents of gestational surrogacy. These opponents are concerned about the exploitation of people from marginalized and vulnerable groups and about putting the bodies of individuals from such groups to use for gestation. The monetary incentive to put one’s body through pregnancy presses on the economically vulnerable in an unjust way, they claim, and their case is strengthened by the state of surrogacy in Cambodia, Thailand, and India. In India, for example, some surrogates are forced to live in special homes and have no health insurance beyond the pregnancy, and no guarantee of payment.

Other feminists, as well as infertility advocates and LGBT groups, have been advocating in favor of changing the New York law. Governor Cuomo criticized the failure of support behind the bill, emphasizing the protections included for surrogates that were meant to safeguard against exploitation. With all of these safeguards, Cuomo questioned how much the lawmakers were respecting the autonomy of those that would choose to be surrogates: “I say, how about a woman’s right to choose, which we just argued for Roe v. Wade?” Cuomo said. “But in this state we say the woman must have an attorney, the woman must have a health counselor, the transaction will be supervised under the Department of Health, the woman can’t be in dire economic conditions, but you still believe the woman is not competent to make that decision.”

Thus the division between protecting vulnerable groups (economically disadvantaged and individuals with uteruses) and advocating for individuals to be able to take on risks consensually came down in favor of protection in New York this month. Both sides emphasized that this will be an ongoing conversation.

Who fact-checks the fact-checkers?

photograph of magnifying glass examining text

If you’re reading something about Facebook in the news these days, chances are you’re reading about how bad it is at preventing people from posting false or misleading information (either that, or it’s about concerns that Facebook is not good at keeping your personal information private). The platform has become notorious for being a place where conspiracy theories are allowed to run amok, and where pseudo- or anti-scientific views can receive strong endorsement by its user base. In an attempt to curb the spread of misinformation, Facebook has recently employed a number of fact-checking services. While Facebook has made use of fact-checkers for a while now, the number of people responsible for checking the entirety of user output has in the past been tiny, a problem to which Facebook has recently responded by quadrupling the number of their American fact-checking partners. There are a number of websites that offer fact-checking services, and they can provide various ratings on posts indicating whether a claim is true or false, or whether it presents information in a misleading way. The hope is that such fact-checking will help stop the spread of false information on Facebook overall, and especially with regard to that which can be actively damaging, such as false claims that vaccines are unsafe.

While making use of fact-checkers seems like a good move on Facebook’s part, some have recently expressed concerns that one of the fact-checking websites that Facebook employs in the US (there are different fact-checking services employed for different countries, a full list of which can be found here) is politically biased: the site Check Your Fact, which is a subsidiary of the website Daily Caller. The Daily Caller is an unambiguously right-wing and pro-Trump website that often publishes articles denying climate change, and whose founder has expressed white supremacist views. There are concerns, then, that false or misleading claims made on Facebook that support a right-wing political agenda may not receive the same kind of scrutiny as other claims because of the political affiliation of one of the fact-checkers.

Vox recently noted one incident of this type, in which a conservative fact-checking website that Facebook formerly used – the now-defunct Weekly Standard – was over-aggressive in designating a headline critical of then Supreme Court nominee Brett Kavanaugh as false. Instead of controlling for false information, the fact-checking website in a sense created it, improperly flagging a headline that was, at worst, slightly misleading as outright false.

There are concerns, then, not only about the truth or falsity of individual claims being made on Facebook, but also about whether claims that fact-checkers are making about those claims are themselves true or false. What, then, are we supposed to do when faced with a claim on Facebook that has been fact-checked? Can we fact-check the fact-checkers?

There are, in fact, organizations that attempt to do just that. For instance, Facebook only uses fact-checkers that are certified by Poynter’s International Fact-Checking Network, an organization that evaluates fact-checkers on the basis of a code of principles, including “nonpartisanship and fairness,” “open and honest corrections,” and transparency of sources, funding, organization, and methodology. While all of these principles sound like good ones, we might still be concerned about whether such an organization can really pick out the reliable fact-checkers from the unreliable ones. For instance, Check Your Fact does, in fact, pass the standards of the International Fact-Checking Network.

What, then, of concerns about the partisanship of Facebook’s fact-checking partners? Are they overblown? Or should we go one step further, and fact-check those who fact-check the fact-checkers?

While this is perhaps not a bad idea, most people are probably not going to take the time to research the organization that determines the standards for fact-checkers when scrolling through Facebook. There is, however, perhaps a more pressing matter: in addition to how reliable these fact-checkers are – that is to say, how good they are at determining which claims are true, false, or misleading – there are also concerns about how effective they are – that is to say, how good they are at actually making it known that a false or misleading claim is, in fact, false or misleading. As reported at Poynter, there is reason to think that even if a claim is properly fact-checked as false, more people read the original false claim than the report showing that it is false. A worry, then, is that since information moves so quickly on Facebook it is often incredibly difficult for fact-checkers to keep up.

We might be worried about the efficacy of Facebook fact-checking for another reason, namely that people who have their posts fact-checked as false will probably not be deterred from posting similar such claims in the future. After all, if you believe that the information you are sharing is true, a website telling you it is false may lead you not to reconsider your views, but instead simply to think that the fact-checking websites are wrong or biased.

So what are we to make of this complicated situation? Despite concerns about reliability and efficacy, making use of fact-checkers still seems to be a step in the right direction for Facebook: anything that can make any progress, even a little, towards stemming the tide of misinformation online is a good thing. What we perhaps should take away from all this is that fact-checking can be used as one tool among many for determining which Facebook posts you should pay attention to and which you should ignore.

On Julia le Duc’s Photograph and the Choice Not to View Distressing Content

photograph of warning sign

Last week, as I scrolled through the online newspapers from which I get my daily news (The Guardian online and the news website of the Australian Broadcasting Corporation, two high quality, trustworthy publications), I saw several headlines pertaining to a photograph of a father and daughter who drowned trying to cross from Mexico into the USA. News headlines on such platforms customarily carry a warning if the story they lead to contains images which viewers are likely to find distressing.

I understand that a picture which shows graphic, distressing content can for that reason portray what words can fail to portray, and can bring the realities of the plight of migrants and refugees home to the public in profound and powerful ways. The plight of those fleeing war, terror, or extreme poverty is an issue I care deeply about, and I have previously written on the situation of refugees in indefinite detention in offshore facilities run by the Australian government.

I am also a mother of a young daughter, which, on reflection, was the main reason why for several days I passed over this story, choosing not to open any of the links. I made the decision, with some trepidation, that I would find looking at the photograph of a young dead child and her father too upsetting. 

While The Guardian initially published the photograph behind a warning that gave readers the choice to view the picture or not, later in the week the online paper published an opinion piece with a headline suggesting that people should be forced to see the picture.1 The thumbnail for this piece, overriding the general practice of providing a warning, displayed the picture near the top of the front page of the website, where every reader unavoidably saw it.

It is indeed very difficult to look at Julia le Duc’s harrowing picture of the bodies of Óscar Alberto Martínez Ramírez and his 23-month-old daughter, Angie Valeria, floating face-down near the bank of the Rio Grande, on the US-Mexico border. I was, as anyone who has seen the picture must be, deeply distressed by it; I was also very upset that my careful choice not to view it had been removed. 

By way of explanation, the reader’s editor of The Guardian said that in weighing the decision to publish images such as these: “the standards guiding most serious newsrooms include: do not use gratuitously; provide context; give appropriate warnings; consider the sensitivities of the grieving; and respect the dignity of the deceased.” I do not believe The Guardian‘s publishing of this picture was gratuitous; I agree that in discussions of it they provided context. 

The question of the sensitivities of those who are grieving and of the dignity of the deceased is a little harder to answer – and answers may be more personal. Sometimes the best moral test is still to ask oneself how one would feel if the experience was one’s own. I don’t think in this case there can be a single right moral answer to that question. Some people would say, ‘Yes, if that was my husband and daughter I would want the world to see what happened to them if it might prevent even one more tragic migrant death’; others may give the opposite reply, that they would not want the tragedy of their young family to be visible to the whole world. Both those responses are valid.

I do not argue that it is wrong of news organisations to publish the picture at all, and I do not even know if at other times I might have consented to see it. But for my own personal reasons, this time, for this picture, I had very consciously chosen not to, and I felt strongly about the fact that my choice not to view it had been removed.

The Guardian‘s editor-in-chief explained the decision to publish the picture saying: “It is an incredibly powerful image that would have a great impact and perhaps make people understand the human cost of the migrant crisis in the US.” This is doubtless true. The other side of President Trump’s anti-immigrant, populist rhetoric is the reality of people struggling for survival, people risking everything they have, including their lives and the lives of their children, to escape danger, poverty, and hopelessness.

In this case a picture does indeed do more than speak a thousand words. What we need at this time is compassion – and a powerfully tragic picture such as this one is capable of softening hardened hearts and of moving people who would otherwise be unaware of the human cost of the refugee crisis. I agree that the arguments for making this image public are compelling.  

In another article on the subject, Guardian writer Peter Beaumont argues that, 

“What we see in Le Duc’s harrowing picture requires that we do not look away; that we demand to know the context and ask the hard questions. That we bear both witness and know what we are seeing.”2

Beaumont seems to be suggesting that we have a kind of duty (not a strict moral duty, but an obligation expressed by his choice of the word ‘requires’) not to look away from the reality this picture brings home. To take this argument seriously is to raise the question of whether my own personal choice not to look at the photograph is a way of resiling from the reality of the suffering of others, and whether this constitutes a kind of moral cowardice. Perhaps the dignity of those who suffer, as well as the dignity of those of us who witness that suffering, does require that we not turn away; that we look frankly at what we cannot bear to see.

Photographers who capture these images do so at significant personal cost. Don McCullin, a photojournalist who covered, among other things, the Congo Crisis in 1964 and Vietnam in 1968, later wrote that he was haunted by the memories of what he had seen and documented; and photojournalist Kevin Carter, who took the unforgettable picture of a starving little boy stalked by a vulture in Sudan in 1993, committed suicide four months after winning the Pulitzer Prize for Feature Photography.

Looking frankly at what we cannot bear to see is a way we have of refusing to turn a blind eye to unbearable truths. Sometimes such images are able to catalyse change – as Carter’s photograph did; but there is also something morally important about simply ‘bearing witness’. Being emotionally, intelligently, and compassionately present to the tragedy and the suffering of others is of great moral importance because it touches the depths of the very things that make us human; it reminds us of the ‘infinite preciousness’ of each individual person, to borrow a phrase from one of Australia’s greatest philosophers, Raimond Gaita.3

In conclusion, I find I cannot cling to my outrage at being forced to view what I had pointedly chosen not to view. Nor, however, do I make a strong moral argument either for or against The Guardian having violated a right I might claim, to choose the content that I see. What is clear is that my own sensitivities are as nothing compared to what has happened to this family, and to what is happening to many, many refugees around the world right now.

1 I could not locate that article for this piece and I suspect it has been removed, but in any case, I feel it would be inappropriate to provide a link to the offending article, as it may recreate the situation I am here addressing.

2 Warning: This link contains the distressing image of a drowned father and daughter https://www.theguardian.com/global-development/2019/jun/27/harrowing-photo-of-drowned-father-and-daughter-rio-grande-us-mexico-border

3 Raimond Gaita, A Common Humanity: Thinking about Love, Truth and Justice, Text Publishing, Melbourne, 1999.

Do Women’s Soccer Players Deserve Equal Pay for Equal Play?

photograph of stands at women's world cup match

The popularity of women’s soccer is growing rapidly. The 2019 FIFA Women’s World Cup in France is being watched by record-breaking numbers of people around the world. 10.9 million people in France watched the hosts’ opening match against South Korea, far above the previous record of 4.12 million for a women’s soccer game. In the United Kingdom, 6.1 million people watched the match between England and Scotland. Similarly, in the United States, viewing figures have risen by 11% from the previous World Cup in 2015, even though the matches are played at a less convenient time for American audiences.

Off the pitch, though, women footballers continue to struggle for fair treatment from footballing authorities. One high-profile area of protest is the issue of prize money. The winners of this year’s World Cup will receive $4 million in prize money, more than double the amount for the winners of the 2015 competition. An impressive figure, we might think, until we compare it to the $38 million received by the winners of the men’s World Cup in 2018. In total, FIFA has set aside $30 million in prize money for the women’s competition compared to $400 million for last year’s men’s competition.

The issue of gender inequality becomes even worse when we consider differences in pay. The 2017 Sporting Intelligence Survey found that the gender pay gap in football is particularly extreme compared to other sports. To take two clear examples: the average first-team player in the (men’s) English Premier League received £2.64 million in 2017, while the average pay for a player in the equivalent women’s league, the FA Women’s Super League, was just £26,752. Meanwhile, the total pay for all players in the top seven women’s leagues was roughly equal to the pay of just one male footballer, Neymar at Paris St-Germain. As Martha Kelner, chief sports reporter for The Guardian, points out, these figures suggest, “football is perhaps the most unequal profession in the world.”

In response, many women’s national teams have demanded that this pay gap be eliminated or at least reduced. The US Women’s team are currently involved in a lawsuit against the US Soccer Federation alleging ‘institutionalized gender discrimination’ and demanding to be paid the same as the men’s team. Similarly, the Danish team refused to play a friendly match in 2017 in protest over their pay and conditions, while the Scottish team implemented a brief media blackout in a similar protest that year. There have also been some notable successes. In 2017 the Norwegian FA introduced equal pay for its men’s and women’s teams, while the Dutch FA recently agreed to introduce equal pay by 2023.

Are national associations morally required to pay their men’s and women’s soccer teams the same amount? As I have argued elsewhere (together with my colleague Martine Prange), there are three different arguments to support such a duty. Most straightforwardly, we might see the gender pay gap in soccer as a case of gender discrimination. The US team have been pushing this kind of argument in their campaign for Equal Pay for Equal Play. As star player Carli Lloyd put the point, she and her teammates were “sick of being treated like second-class citizens.” Feminist campaigners around the world have long argued that men and women working in the same job or equivalent jobs should be paid the same. Paying women less than men for the same work is unjust gender discrimination and is morally wrong. Given that women’s soccer players are being paid less than their male equivalents for playing for their national teams, this seems like a clear case of wrongful discrimination.

However, this argument has been met with fierce resistance by some commentators. Writing about the decision of the Norwegian FA to introduce equal pay, journalist Matthew Syed claimed, “Norwegian male footballers are effectively doing a different job. In economic terms, they are more productive, persuading more fans and TV viewers to watch them, and more companies to sponsor them.” According to Syed, the different levels of revenue generated by the two different teams means that their work should not be viewed as the same or even equivalent. This means that paying these two teams differently is not an instance of discrimination; it is simply a reflection of the differing commercial value of the two teams. 

While many find this form of response persuasive, it cannot be used to justify all of soccer’s gender pay gaps. In the case of the US women’s team, there simply does not seem to be any good reason to think that the women’s team generates less revenue than the men’s. After winning the World Cup in 2015, the US women’s football team generated a $6.6 million profit compared to the men’s team’s $2 million. In the three years following, more total revenue was generated from the women’s team’s matches than from those of the men’s team. Despite this, the women’s team continues to be paid less than the men’s team. At least in this case, there seems little reason to accept that the lower level of pay is a reflection of lower levels of revenue generated, and the charge of discrimination seems fair.

However, the case of the US Women’s team is something of an exception. At most national soccer associations, the men’s team generates more revenue than the women’s. Some may take this to be the end of the discussion. If the different levels of pay simply reflect the different levels of revenue generated, then there does not seem to be any discrimination going on. And if it is not discriminatory, then we may think that there is no moral requirement to pay women’s teams the same as men’s.

This conclusion, though, assumes that avoiding discrimination is the only ethical reason that could support equal pay for women’s footballers. This is a mistake. A different ethical reason in favor of equal pay is that this action would be valuable for what it would express about the value of women’s soccer. This thought seems to underlie at least some of the recent moves towards equal pay. As the President of the Norwegian Players’ Association, Joachim Walltin, said of his association’s decision to introduce equal pay: “it was actually the FA’s own idea to go for equality. They said: ‘Isn’t it a cool idea and wouldn’t it be a good signal if we did things equally?’” The idea here is that by paying both sets of players the same, national associations would send a message that men’s and women’s football are equally valuable. This would be a positive message to send and may help to improve how people view women’s soccer. 

While this does seem like a positive message, some might object that it does not provide any reason to think there is a moral duty for associations to move to equal pay. Yes, this would be a nice thing to do, but would it really be wrong to keep paying men’s players more? Wouldn’t it also be acceptable to continue to pay male players more in order to reflect their higher commercial value? The positive message in itself might not seem to provide a sufficiently strong reason to think there is any moral obligation here.

The case that there is a duty for national associations to move to equal pay becomes much stronger when we consider the role that national football associations have played in frustrating the development of women’s football. In England, for example, around 150 women’s soccer teams existed in 1921, with high-profile matches attracting tens of thousands of people. One especially high-profile match between Dick Kerr Ladies and St Helens Ladies attracted 53,000 spectators, with an estimated 14,000 more people unable to gain entry into the ground. By the end of the following year, however, the English Football Association responded by banning women’s football from their members’ grounds. Their reason? That “the game of football is quite unsuitable for females and it ought not to be encouraged.” The English FA was far from alone in this. Similar bans on women’s football were introduced in France, West Germany, Brazil, and the Netherlands, among others.

The role these associations have played in frustrating the development of women’s football means that they cannot straightforwardly appeal to the lower commercial value of women’s football in justifying lower pay. One reason for this is that their actions are in large part responsible for this lower commercial value. If these associations had not banned women’s football, then its commercial value would likely be much higher than it is today. Another reason is that this history should change how we view the moral reasons favoring equal pay. The reasons that associations have are not simply ones concerning what it would be nice or good for them to do. Rather, they owe duties of reparation to the women’s game: duties to try to make up for the historical injustice these associations have committed against women’s football and women footballers. Those associations that have committed such injustices have a duty to attempt to make amends. What clearer way of doing so than to commit to equal pay for women’s footballers and to send the message that men’s football and women’s football are equally valuable?

The Peace Cross and Separation of Church and State

In 1925, a 40-foot stone cross was erected in Bladensburg, Maryland. The cross was built by the American Legion and is known as the Bladensburg Cross or, more commonly, the Peace Cross. It was built as a monument to honor the 49 men from Prince George’s County who fought and died in World War I. The design of the monument is a simple white cross, which was a fairly common style in cemeteries at the time of its construction (though some argue that the cross was a central symbol of the war). Construction initially began on public land, but when the project ran out of funding, the American Legion took over and completed construction in a private capacity. In 1961, the state obtained the land through an exercise of its eminent domain power for the purposes of constructing a highway. The memorial now stands on a highway median on state land and is maintained by the Maryland-National Capital Park and Planning Commission. In 1985, the commission spent $100,000 in taxpayer money to renovate the monument. At that time, the state conducted a ceremony during which the monument was rededicated to veterans of all wars. In 2008, the legislature set aside an additional $100,000 for renovation of the deteriorating monument, but the general consensus is that at this stage the monument is beyond repair.

A plaque on the monument expresses commitment to belief in one God. It reads:

WE, THE CITIZENS OF MARYLAND, TRUSTING IN
GOD, THE SUPREME RULER OF THE UNIVERSE,
PLEDGE FAITH IN OUR BROTHERS WHO GAVE
THEIR ALL IN THE WORLD WAR TO MAKE THE
WORLD SAFE FOR DEMOCRACY. THEIR MORTAL
BODIES HAVE TURNED TO DUST, BUT THEIR SPIRIT
LIVES TO GUIDE US THROUGH LIFE IN THE WAY OF
GODLINESS, JUSTICE, AND LIBERTY.
WITH OUR MOTTO, “ONE GOD, ONE COUNTRY AND
ONE FLAG,” WE CONTRIBUTE TO THIS MEMORIAL
CROSS COMMEMORATING THE MEMORY OF THOSE
WHO HAVE NOT DIED IN VAIN.

The American Humanist Association, among others, filed suit in District Court, alleging that the memorial violates the Establishment Clause of the United States Constitution. The Establishment Clause appears in the First Amendment and prohibits the government from making any law “respecting an establishment of religion.” The American Legion stepped in to defend the monument in court proceedings. The District Court granted summary judgment in favor of the American Legion, concluding that the memorial did not violate the Establishment Clause. On appeal, the Fourth Circuit reversed that decision.

Establishment Clause cases are often decided according to a familiar standard: the Lemon Test, established in the 1971 case Lemon v. Kurtzman. This test has three prongs. First, the statute must have a secular legislative purpose. Second, its principal or primary effect must be one that neither advances nor inhibits religion. Third, the statute must not foster an excessive government entanglement with religion. The Fourth Circuit concluded that the Bladensburg Cross violated the second and third prongs of this test.

The Supreme Court reversed the decision of the Fourth Circuit. The court declined to use the Lemon Test. Instead, in his opinion, Justice Alito focused on facts about the historical background of the monument, identifying four main reasons that assessing historical monuments is different from assessing monuments that are newly constructed. First, he claimed that, when monuments are old enough, it is difficult to know the precise intentions behind their construction. Second, he claimed that symbols can take on additional meanings over time. Third, he suggested that the message of a monument may evolve over time. Finally, he said that when a monument has existed for long enough in a community and has become part of the everyday lives of those living in the community, the removal of the memorial may no longer be seen as neutral.

In her dissent, Justice Ruth Bader Ginsburg contested the idea that the cross was a symbol for anything other than Christianity. She said from the bench, “The Latin cross is the foremost symbol of the Christian faith, embodying the ‘central theological claim of Christianity: that the Son of God died on the cross, that he rose from the dead and that his death and resurrection offer the possibility of eternal life.’ The Latin cross is not emblematic of any other faith.”

The court is somewhat bound by legal precedent, though it can and does reinterpret and even reinvent precedent as it goes. This controversy gives rise not only to legal questions but, more fundamentally, to moral questions.

The historical features of our communities have value. They remind us that the lives we live now are made possible by the efforts and struggles of countless others who came before. They provide a sense of shared narrative, community, and even family. They honor the dignity of those who have died and encourage an attitude of respect for them.

On the other hand, there are many symbols of respect and honor that are not explicitly religious in nature. The claim that the use of religious symbolism by the state is not problematic because the memorial in question was built a sufficiently long time ago strikes many as nothing more than a fallacious appeal to tradition. As Justice Ginsburg points out, there is no confusion in our largely Christian culture when it comes to what a cross stands for. What’s more, the intended message is not inescapably lost to history—it’s carved right onto the monument itself. It reads, in part, “One God, one country and one flag.” This is simply not the message that should be sent by a country committed to refraining from endorsing any particular religion. We are not a country committed to advancing the idea that there is one God and one God only.

The fact that the memorial was rededicated to veterans from all wars is important. The state continues to provide funding for a memorial that is dedicated to all veterans but that, in design, demonstrates respect only to the Christian veterans of war. While Justice Alito is concerned that removing the memorial now would not be a demonstration of neutrality, it is hard to see how the state’s continued taxpayer-funded maintenance of a Christian symbol to honor veterans of all wars is an act of religious neutrality. The monument is in bad condition, but perhaps the $100,000 reserved to preserve it would be better spent on the construction of a new memorial with a more inclusive design in a more appropriate location.