
Living in the Hinge of History

photograph of telescope pointed above the lights of the city

Consider three things. First: technological development means that there are many more people in the world than there used to be. This means that, if we survive far into the future, the number of future people could be really, really big. Perhaps the overwhelming majority of us have not yet been born.

Second: the future could be really good, or really bad, or a big disappointment. Perhaps our very many descendants will live amazing lives, improved by new technologies, and will ultimately spread throughout the universe. Perhaps they will reengineer nature to end the suffering of wild animals, and do many other impressive things we cannot even imagine now. That would be really good. On the other hand, perhaps some horrific totalitarian government will use new technologies to not only take over humanity, but also ensure that it can never be overthrown. Or perhaps humanity will somehow annihilate itself. Or perhaps some moral catastrophe that is hard to imagine at present will play out: perhaps, say, we will create vast numbers of sentient computer programs, but treat them in ways that cause serious suffering. Those would be really bad. Or, again, perhaps something will happen that causes us to permanently stagnate in some way. That would be a big disappointment. All our future potential would be squandered.

Third: we may be living in a time that is uniquely important in determining which future plays out. That is, we may be living in what the philosopher Derek Parfit called the “hinge of history.” Think, for instance, of the possibility that we will annihilate ourselves. That was not possible until very recently. In a few centuries, it may no longer be possible: perhaps by then we will have begun spreading out among the stars, and will have escaped the danger of being wiped out. So maybe technology raised this threat, and technology will ultimately remove it.

But then we are living in the dangerous middle, and what happens in the comparatively near future may determine whether our story ends here, or instead lasts until the end of the universe.

And the same may be true of other possibilities. Developments in artificial intelligence or in biotechnology, say, may make the future go either very well or very poorly, depending on whether we discover how to safely harness them.

These three propositions, taken together, would seem to imply that how our actions affect the future is extremely morally important. This view is known as longtermism. The release of a new book on longtermism, What We Owe the Future by Will MacAskill, has recently brought it some media coverage.

If we take longtermism seriously, what should we do? It seems that at least some people should work directly on things which increase the chances that the long-term future will be good. For instance, they might work on AI safety or biotech safety, to reduce the chances that these technologies will destroy us and to increase the chances that they will be used in good rather than bad ways. And these people ought to be given some resources to do this. (The organization 80,000 Hours, for example, offers career advice that may be helpful for people looking to do work like this.)

However, there is only so much that can productively be done on these fronts, and some of us do not have the talents to contribute much to them anyway. Accordingly, for many people, the best way to make the long-term future better may be to try to make the world better today.

By spreading good values, building more just societies, and helping people to realize their potential, we may increase the ability of future people to respond appropriately to crises, as well as the probability that they will choose to do so.

To a large extent, Peter Singer may be correct in saying that

If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do; and if we are not at that critical point, it will have been a good thing to do anyway.

This also helps us respond to a common criticism of longtermism, namely, that it might lead to a kind of fanaticism. If the long-term future is so important, it might seem that nothing that happens now matters at all in comparison. Many people would find it troubling if longtermism implies that, say, we should redirect all of our efforts to help the global poor into reducing the chance that a future AI will destroy us, or that terrible atrocities could be justified in the name of making it slightly more likely that we will one day successfully colonize space.

There are real philosophical questions here, including ones related to the nature of our obligations to future generations and our ability to anticipate future outcomes. But if I’m right that in practice, much of what we should do to improve the long-term future aligns with what we should do to improve the world now, our answers to these philosophical questions may not have troubling real-world implications. Indeed, longtermism may well imply that efforts to help the world today are more important than we realized, since they may help, not only people today, but countless people who do not yet exist.

Bill Gates and the Ethics of Personal Pledges

black and white image of Bill Gates

A few days ago, Bill Gates, worth $113 billion, made headlines by pledging to donate “virtually all” of his wealth and to “move off the list of the world’s richest people.” You may think that billionaires shouldn’t exist, or that they wouldn’t exist in a just economic system. You may think Gates is simply meeting his moral duty, and not doing anything morally praiseworthy. Or, you may think that, while Gates will never be poor, it is still laudable to pledge almost all of one’s wealth to a philanthropic cause. His wealth will surely save more lives and alleviate more suffering for having been donated. Furthermore, billionaires don’t generally live up to the Gates standard. You may not be able to take it with you, but the fact is that most billionaires die billionaires.

But the moral problem on which I want to focus is not the moral status of Gates’ pledge, whether it is morally required, or “supererogatory” (beyond the call of moral duty), but its moral bindingness.

If Gates, for whatever reason, decided to reverse his decision in ten years, to stop the yearly donations he plans, would he have the moral right to do so, given his earlier pledge?

It might seem obvious that Gates would have the moral right to revoke his pledge, if he had a change of heart. After all, personal pledges are not promises to other people where we would need their consent to let us off the hook and void the promise. Personal pledges are just statements of personal intention or commitment that our future selves will act in a certain way. And we can revoke such commitments as easily as we can make them. I can pledge to wake up at six tomorrow morning, but I can also revoke my pledge when I find myself awake at midnight tonight and realize I wouldn’t get enough sleep.

Things are not always this simple, though. The philosopher Derek Parfit provides a thought experiment in which a Russian nobleman, a young socialist, decides to commit the vast estates he will one day inherit to the peasants. He knows his preferences could change over time, especially after receiving a sizeable fortune. So he writes a legally binding contract that will automatically donate his estates. He also adds a clause stating that his pledge cannot be revoked without his wife’s consent, even if he has new reasons for wishing to revoke the pledge. He then tells his wife not to provide this consent, using her as a mechanism to force his current will onto his future self. If the nobleman does change his mind, he tells his wife, “he” will no longer exist, and the man seeking to revoke the pledge will have no right to do so. I think Bill Gates is doing something similar. He is making this pledge very publicly as a way of creating pressure on his future self to comply with the wishes of his current self.

In Parfit’s thought experiment, later in life, the nobleman does change his mind. His wife must choose between honoring the will of her current husband, who wishes to revoke the pledge, and that of the earlier version of her husband. What would it be morally right to do?

It seems morally acceptable for the wife to refuse to revoke the document, and honor the wishes of her younger husband. But if the husband is the same person who made the pledge, why shouldn’t he have the right to revoke it? Implicitly, the wife’s decision to uphold the document would suggest that she views the older husband as an imposter of some kind, as someone who lacks the moral authority to revoke a pledge made by another person. Perhaps something like the psychological distance between the younger and older versions of the husband means that the older husband isn’t in a position to revoke the pledge made by the younger husband.

But it also isn’t obvious that the wife has a moral duty to uphold the pledge. The older husband was the younger husband. He knows why he made the pledge earlier in his life, and he presumably knows a lot more besides that his younger self didn’t. And given all this, he wishes to revoke his pledge. It is his wealth, after all, and it isn’t clear who could possibly have the right to force him to give it away, if he now wishes to keep it.

Can the younger self really hold such a moral claim over the behavior of the older self? It is difficult to say.

More broadly, the promises, pledges, and commitments we make at particular points in our lives can bind both ourselves and others, often for a long time and sometimes even beyond death. Most college endowments, for example, are so-called “restricted endowments”: when they are made, requirements are placed on how the assets can be used. As such, endowments often express the (peculiar) wills of people who have long been dead and whose interests and values often differ from those common today. The Dudley Professorship of Railroad Engineering at Yale, for example, gifted in 1923 by Plimmon H. Dudley, can only be used for “work in connection with the development and improvement of designs of rails, roadbeds, and crossties.” Yale’s president, Richard C. Levin, admitted to The New York Times that, “I was kind of stumped as to what to do with the chair.” The professorship sat vacant for more than seventy years. My own university, the University of Edinburgh, is not unique in having an endowment-funded Parapsychology Unit, established by a believer in the psychic. (The unit has since expanded its research into “pseudo-psi”: phenomena that look psychic but are not.)

Such restrictions on the use of endowments are, for better or worse, legally binding. Whether the personal commitments and pledges made by our past selves are also morally binding on our future selves is a much more difficult question — one that Bill Gates may soon face.

Nuclear War and Scope Neglect

photograph of 'Fallout Shelter' sign in the dark

“Are We Facing Nuclear War?” — The New York Times, 3/11/22

“Pope evokes spectre of nuclear war wiping out humanity” — Reuters, 3/17/22

“The fear of nuclear annihilation raises its head once more” — The Independent, 3/18/22

“The threat of nuclear war hangs over the Russia-Ukraine crisis” — NPR, 3/18/22

“Vladimir Putin ‘asks Kremlin staff to perform doomsday nuclear attack drill’” — The Mirror, 3/19/22

“Demand for iodine tablets surge amid fears of nuclear war” — The Telegraph, 3/20/22

“Thinking through the unthinkable” — Vox, 3/20/22

The prospect of nuclear war is suddenly back, leading many of us to ask some profound and troubling questions. Just how terrible would a nuclear war be? How much should I fear the risk? To what extent, if any, should I take preparatory action, such as stockpiling food or moving away from urban areas?

These questions are all, fundamentally, questions of scale and proportion. We want our judgments and actions to fit with the reality of the situation — we don’t want to needlessly over-react, but we also don’t want to under-react and suffer an avoidable catastrophe. The problem is that getting our responses in proportion can prove very difficult. And this difficulty has profound moral implications.

Everyone seems to agree that a nuclear war would be a significant moral catastrophe, resulting in the loss of many innocent lives. But just how bad of a catastrophe would it be? “In risk terms, the distinction between a ‘small’ and a ‘large’ nuclear war is important,” explains Seth Baum, a researcher at a U.S.-based think tank, the Global Catastrophic Risk Institute. “Civilization as a whole can readily withstand a war with a single nuclear weapon or a small number of nuclear weapons, just as it did in WW2. At a larger number, civilization’s ability to withstand the effects would be tested. If global civilization fails, then […] the long-term viability of humanity is at stake.”

Let’s think about this large range of possible outcomes in more detail. Writing during the heights of the Cold War, the philosopher Derek Parfit compared the value of:

    1. Peace.
    2. A nuclear war that kills 99% of the world’s existing population.
    3. A nuclear war that kills 100%.

Everyone seems to agree that 2 is worse than 1 and that 3 is worse than 2. “But,” asks Parfit, “which is the greater of these two differences? Most people believe that the greater difference is between 1 and 2. I believe that the difference between 2 and 3 is very much greater.”

Parfit was, it turns out, correct about what most people think. A recent study posing Parfit’s question (lowering the lethality of option 2 to 80% to remove confounders) found that most people thought there is a greater moral difference between 1 and 2 than between 2 and 3. Given the world population is roughly 8 billion, the difference between 1 and 2 is an overwhelming 6.4 billion more lives lost. The difference between 2 and 3 is “only” 1.6 billion more lives lost.
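The arithmetic behind those two differences can be checked in a few lines. This is a minimal sketch using the study’s 80% variant and the article’s rough figure of 8 billion people; the variable names are invented for illustration.

```python
# Rough arithmetic for Parfit's three outcomes, using the study's 80% variant.
WORLD_POP = 8_000_000_000  # the article's rough world-population estimate

deaths_peace = 0                      # outcome 1: peace
deaths_partial = int(WORLD_POP * 0.80)  # outcome 2: war killing 80%
deaths_total = WORLD_POP              # outcome 3: war killing 100%

diff_1_2 = deaths_partial - deaths_peace   # extra lives lost going from 1 to 2
diff_2_3 = deaths_total - deaths_partial   # extra lives lost going from 2 to 3

print(f"1 -> 2: {diff_1_2 / 1e9:.1f} billion more lives lost")  # 6.4 billion
print(f"2 -> 3: {diff_2_3 / 1e9:.1f} billion more lives lost")  # 1.6 billion
```

In raw head counts, the first difference dwarfs the second; Parfit’s point is that the moral comparison need not track these numbers.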

Parfit’s reason for thinking that the difference between 2 and 3 was a greater moral difference was because 3 would result in the total extinction of humanity, while 2 would not. Even after a devastating nuclear war such as that in 2, it is likely that humanity would eventually recover, and we would lead valuable lives once again, potentially for millions or billions of years. All that future potential would be lost with the last 20% (or in Parfit’s original case, the last 1%) of humanity.

If you agree with Parfit’s argument (the study found that most people do, after being reminded of the long-term consequences of total extinction), you probably want an explanation of why most people disagree. Perhaps most people are being irrational or insufficiently imaginative. Perhaps our moral judgments and behavior are systematically faulty. Perhaps humans are victims of a shared psychological bias of some kind. Psychologists have repeatedly found that people aren’t very good at scaling up and down their judgments and responses to fit the size of a problem. They name this cognitive bias “scope neglect.”

The evidence for scope neglect is strong. Another psychological study asked respondents how much they would be willing to donate to prevent migrating birds from drowning in oil ponds — ponds that could, with enough money, be covered by safety nets. Respondents were told that either 2,000, 20,000, or 200,000 birds are affected each year. The results? Respondents were willing to spend $80, $78, and $88 respectively. The scale of the response bore no clear relation to the scale of the problem.

Scope neglect can explain many of the most common faults in our moral reasoning. Consider the quote, often attributed to Josef Stalin, “If only one man dies of hunger, that is a tragedy. If millions die, that’s only statistics.” Psychologist Paul Slovic called this tendency to fail to conceptualize the scope of harms suffered by large numbers of people mass numbing. Mass numbing is a form of scope neglect that helps explain ordinary people standing by passively in the face of mass atrocities, such as the Holocaust. The scale of suffering, distributed so widely, is very difficult for us to understand. And this lack of understanding makes it difficult to respond appropriately.

But there is some good news. Knowing that we suffer from scope neglect allows us to “hack” ourselves into making appropriate moral responses. We can exploit our tendency for scope neglect to our moral advantage.

If you have seen Steven Spielberg’s Schindler’s List, then you will remember a particular figure: The girl in the red coat. The rest of the film is in black and white, and the suffering borders continually on the overwhelming. The only color in the film is the red coat of a young Jewish girl. It is in seeing this particular girl, visually plucked out from the crowd by her red coat, that Schindler confronts the horror of the unfolding Holocaust. And it is this girl who Schindler later spots in a pile of dead bodies.

The girl in the red coat is, of course, just one of the thousands of innocents who die in the film, and one of the millions who died in the historical events the film portrays. The scale and diffusion of the horror put the audience members at risk of mass numbing, losing the capacity to have genuine and appropriately strong moral responses. But using that dab of color is enough for Spielberg to make her an identifiable victim. It is much easier to understand the moral calamity that she is a victim of, and then to scale that response up. The girl in the red coat acts as a moral window, allowing us to glimpse the larger tragedy of which she is a part. Spielberg uses our cognitive bias for scope neglect to help us reach a deeper moral insight, a fuller appreciation of the vast scale of suffering.

Charities also exploit our tendency for scope neglect. The donation-raising advertisements they show on TV tend to focus on one or two individuals. In a sense, this extreme focus makes no sense. If we were perfectly rational and wanted to do the most moral good we could, we would presumably be more interested in how many people our donation could help. But charities know that our moral intuitions do not respond to charts and figures. “The reported numbers of deaths represent dry statistics, ‘human beings with the tears dried off,’ that fail to spark emotion or feeling and thus fail to motivate action,” writes Slovic.

When we endeavor to think about morally profound topics, from the possibility of nuclear war to the Holocaust, we often assume that eliminating psychological bias is the key to good moral judgment. It is certainly true that our biases, such as scope neglect, typically lead us to poor moral conclusions. But our biases can also be a source for good. By becoming more aware of them and how they work, we can use our psychological biases to gain greater moral insight and to motivate better moral actions.

On the Rationality of the Capitol Rioters

photograph of rioters in front of Capitol

In the wake of the Capitol insurrection, there was no shortage of commentary concerning the moral and intellectual failings of the rioters. However, one not infrequent theme of this commentary was that, for all their errors, there was something about their behavior that made a certain sort of sense. After all, if one believed that one’s presidential candidate actually won the election by a landslide, and that this victory was being subverted by shadowy forces that included the Hugo Chávez family, then storming the Capitol can seem like a reasonable response.

Although the word “rationality” was not always used in this commentary, I think this is what these pundits have in mind: that the Capitol rioters were in some sense rational in acting as they did, given their beliefs. They probably didn’t know it, but in making this claim they echoed the view about rationality endorsed by the renowned moral philosopher Derek Parfit. In his magnum opus, On What Matters, Parfit argues that our desires and acts are rational when they causally depend in the right way on beliefs whose truth would give us sufficient reasons to have these desires, or to act in these ways. As applied to the case of the Capitol insurrection, Parfit’s view would seemingly endorse the rioters’ acts as rational, since the content of their beliefs about the election would, if true, give them sufficient reasons to riot. The key point is that on Parfit’s view, it does not matter whether the beliefs upon which the rioters’ actions were based are themselves true, but just that they rationally supported those actions.

Alternatively, David Hume famously wrote that the truth of one’s beliefs does make a difference to the rationality of one’s actions and desires. “It is only in two senses,” he wrote, “that any [desire] can be called unreasonable.” One of those senses is when the desire is “founded on the supposition of the existence of objects, which really do not exist.” In other words, desires based on false beliefs are irrational. Yet Hume appears to be mistaken here. One’s desire to run away can be rational even if based on the false belief that there is a rattlesnake about to strike inches from one’s feet, particularly if one’s belief is rational.

But what about the view that our desires and acts are rational just in case they causally depend in the right way on rational beliefs, whether true or not? If we accept this view, then the Capitol rioters’ actions and desires turn out to be irrational, since they are based on beliefs that are arguably irrational. Parfit resists this view using the example of a smoker who has a strange combination of attitudes: on the one hand, the rational belief that smoking will destroy his health, and on the other hand, and because of this belief, the desire to smoke. According to the view we are now considering, the smoker’s desire would be rational, since it depends on a rational belief. That seems false.

Another view about rationality that might support the Capitol rioters’ actions is the view, familiar from social science disciplines like economics, that the rational action is the one whose subjective expected utility — reflecting the utility of the possible outcomes, and the agent’s beliefs about the probability of those outcomes — is the highest. This view of rationality more or less abandons the idea of rationally assessing our non-instrumental desires, and simply evaluates actions in terms of how well they fulfill those desires. So, on this view, we might say that the rioters’ actions were rational because they maximally fulfilled their desires.
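The maximizing rule described above can be sketched in code. Everything here — the actions, outcomes, probabilities, and utilities — is invented purely for illustration of how the rule operates.

```python
# Minimal sketch of the subjective-expected-utility rule: for each action, sum
# probability * utility over its possible outcomes, then pick the maximum.

def expected_utility(outcomes):
    """Sum of (subjective probability * utility) over an action's outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical decision problem (all numbers invented):
actions = {
    "act":     [(0.3, 100), (0.7, -20)],  # small chance of a big payoff
    "refrain": [(1.0, 5)],                # certain modest payoff
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
# expected_utility("act") = 0.3*100 + 0.7*(-20) = 16 > 5, so "act" wins.
# On the maximizing view, 'best' is the rational action -- regardless of the
# moral status of the desires the utilities encode.
```

Note that nothing in the calculation evaluates the desires themselves; that indifference is exactly the feature Quinn’s criticism targets.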

The Parfitian and maximizing views of rationality share a feature that the philosopher Warren Quinn famously highlighted in his article, “Rationality and the Human Good”: according to both views, rationality is at least sometimes indifferent as to the shamelessness, or moral turpitude, of a person’s ends. For example, Parfit’s view implies that someone who believes that the Jews are sub-human and, because of this belief, desires to exploit them in ways that would be immoral if the Jews were full-fledged members of the human race, is practically rational. Similarly, the maximizing view implies that someone who wants to exploit the Jews in such ways is practically rational if they take efficient means to that end. However, Quinn argues, this conception of practical rationality is in tension with the ancient idea that practical rationality is the highest virtue of humans as practical agents. How could practical rationality be morally defective, indifferent to the manifestly forceful practical demands of morality, and yet be the most “authoritative practical excellence”?

If rationality is integrally connected to morality in the way Quinn suggests, then it becomes harder to see how we could say that the Capitol rioters’ actions and desires were rational or in accordance with reason. Even if their beliefs, if true, would have justified their desires and acts, and even if their acts maximize the fulfillment of their desires, the fact is that their beliefs were false, and their actions and desires shameless. And if Quinn is right, that fact should make us reluctant to credit their actions and desires with the label “rational.” For Quinn, you can’t be rational and immoral at the same time. For Parfit or the maximizer, you can.

Thus, it turns out that much of significance hangs on whether we think what the rioters did was in accordance with reason. If we say that it was, either because we adopt Parfit’s conception of rationality or the maximizing conception, then we commit ourselves to the occasional indifference of rationality to moral considerations. If, instead, we adopt Quinn’s view, then we must reject that indifference.

Climate Change and the Philosophical Pitfalls of Grounding Duty to Future Generations

Two young women in the foreground of a protest march, with signs behind them saying "our future our choice"



Reports of mass extinction, extreme weather events, rapidly melting ice caps, and the inundation of low-lying islands by rising seas suggest that the environmental disaster scientists and activists have been warning about for several decades has now begun.

On the face of it, there isn’t really a good argument to be made against a moral imperative to fight climate change. Those who voice opposition to, and those who lobby against, climate action generally deny climate change is real rather than argue against an obligation to do something about it.

Governments across the world are nowhere near where they need to be on acting to prevent worst-case scenario outcomes, even where climate change is grudgingly acknowledged by the powers that be and even as its effects start to become difficult to ignore.

In January this year, David Attenborough told a cohort of business and government leaders, diplomats and influential celebrities at the Davos World Economic Forum that “What we do now, and in the next few years, will profoundly affect the next few thousand years,” and “Unless we sort ourselves out in the next decade or so we are dooming our children and our grandchildren to an appalling future.”

Few would now disagree that there is urgency about the issue of climate change; however, the question of our moral relation to future generations is philosophically complex. Does it make sense to claim that we have moral duties to persons who do not exist?  Do future persons, not already alive, have rights? How are the rights of future persons connected with moral duties that we have now?

There are some meta-ethical issues at play here (issues about what we take ethics to be centrally concerned with). A common foundation for morality is how the behavior of persons affects other persons (and sometimes other creatures/entities). The ‘moral community’ is that group to whom we owe moral consideration; whose well-being makes moral claims on us, or whose interests, or rights, provide imperatives and checks on our actions.

On one level, using a simple example given by Derek Parfit, we can see how, straightforwardly, the actions of someone now can harm an unknown, hypothetical person in the future. Parfit points out that, if I leave some broken glass in the undergrowth of a wood, which a hundred years later wounds a child, my act harms this child. This is of course familiar and quite intuitive reasoning; it forms the basis of things we tell our children every day, of the ilk: “don’t leave that there, someone could trip over it.”

Is it morally significant how far in the future that harm occurs? On one view it is not, as there is a direct causal relation between the action of leaving the glass and the child’s being cut by it. When I tell my child not to do something because it may harm someone, I am likely not thinking as far into the future as Parfit’s example does, but this seems to be a result of the psychology, rather than the morality, of temporal distance.

However, it could be argued that moral accountability is weakened by temporal distance, because part of what it means to be in a moral community is that there is moral accountability, by virtue of reciprocity, between members – so that there is an in-principle possibility of the wronged party holding the other to account.

In response to Parfit’s example it should also be noted that the person leaving the glass causes a future harm only if the child (or someone else) is later cut by it. That outcome is highly contingent. If the lack of reciprocity between individuals who are not contemporaries, together with the contingency of any particular outcome, is problematic, then it may be even more difficult to make decisions about a current population’s effects on a future population.

The question about how current people’s actions harm, or directly impact, future people encounters a paradox: because each person born is the product of so many and such complex contingencies, and all future persons are a product of actions taken in the present, a different set of actions (even one different variable) will produce a different human being.

Imagine two possible scenarios. In the first, no action is taken and climate change produces a disastrous future for all people on the planet. In the second, massive effort is undertaken now, with the outcome that disaster is averted and future generations are able to pursue happiness and have the opportunity to flourish.

Because of this paradox, it isn’t quite right to say that particular future persons will be better off if action is taken, since particular future persons who come into existence in the first scenario, if action is not taken, would not have existed in the second scenario. Can the people of the future in the first scenario really blame us, since had we made different choices they would not exist?

This line of thinking may appear to yield the conclusion that even if we do not conserve the environment for future generations of people, it cannot consistently be said that we have wronged them. But can we cogently argue that they cannot complain since in any other set of circumstances they would never have existed?

This is a difficult moral question – borne out in other problems or areas of practical ethics, such as whether to choose to have a disabled child. It opens up issues of how we value existence, what type of existence we value, and what level of difficulty we would be prepared to accept for the sake of existence itself. I shall not try to resolve this problem here – but it is not necessarily agreed that such future persons, in unbearable hardship, have no right to complain of the actions of their forebears that led to that hardship.

This paradox seems to arise in part where morality is taken to be centrally concerned with how the actions of certain individuals affect other individuals – the problem here is that there is too much focus on particular individuals. (Parfit himself thought that we should abandon the principle that morality is centrally concerned with individuals and employ a more impartial principle, grounding morality in actions which produce the most wellbeing.)

But this solution creates another problem, which is encountered in consequentialist ethics of a utilitarian persuasion. Since utilitarianism is based on a principle of maximising happiness or wellbeing, it functions as a calculation of total possible happiness. This produces the counterintuitive outcome that a very large number of people whose lives are barely worth living could be preferable to a much smaller number of very happy people. Obviously this system cannot provide a foundation for a reasonable, binding moral relation to future generations.
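The total-happiness calculation that produces this outcome is easy to make explicit. The populations and wellbeing levels below are invented numbers, chosen only to show how the totals compare.

```python
# Invented numbers illustrating the total-utility calculation:
# total wellbeing = population size * average wellbeing per person.

huge_population = 10_000_000_000   # lives barely worth living
tiny_wellbeing = 1

small_population = 1_000_000       # very happy lives
high_wellbeing = 100

total_huge = huge_population * tiny_wellbeing    # 10,000,000,000 units
total_small = small_population * high_wellbeing  #    100,000,000 units

# On the total view, the huge, barely-happy population comes out "better",
# since 10 billion units of wellbeing exceed 0.1 billion.
print(total_huge > total_small)
```

However the individual numbers are chosen, a large enough population of barely-positive lives will always outscore a small population of excellent ones — which is the counterintuitive result at issue.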

An argument from the notion of rights appears to fare better. If we acknowledge universal and inalienable human rights such as a right to life, liberty and security then, by virtue of their universality, we could extend them to future persons by including them in the moral community of holders of rights.

It has been noted by some philosophers, however, that the concept of rights is in some sense morally inadequate – it can fail to capture the moral seriousness of a situation. Imagine having to answer to future persons living with the devastation of our failure to act when we had the means and opportunity. It would not go all the way to the heart (so to speak) of their moral grievance to simply note that their right to live a full human life was violated – in the same way that the moral terribleness of murder is not adequately captured by noting that the murdered person’s right to life has been violated.

A still better grounding might be found in the notion of moral duty suggested by Immanuel Kant’s principle of universalisation: that we discover our moral duty by asking whether we could will an action to become a universal law. Applying this principle, our moral duty to future generations becomes clear when we simply ask what we would have endorsed were we to find ourselves in their situation.

The window, we are being told by scientists, is closing fast. We may have little more than a decade to avoid unstoppable climate catastrophe. This means that the future is arriving. In a sense, the future is already here. Children born in this decade may be alive at the end of the century, and will be directly affected by our current actions or failure to act. Those future generations that appear in the abstract in the philosophical discussions of the past twenty to thirty years are already here. There are some hopeful signs; the new generation is starting to demand action as high-profile strikes by schoolchildren across the world put pressure on those in power to act to rescue the future they will inherit.

Reckoning with the Legacy of Derek Parfit

Philosopher Derek Parfit died on January 1st. Let us hope he will go to heaven. Will he? Parfit, who was an agnostic, was not much concerned with the existence of heaven or hell. But, he did famously argue that, even if such places do exist, the person going there would not be the same person who previously died. And, thus, someone would be punished or rewarded for the deeds of another person. This is deeply unjust, as unfair as sending someone to prison because of the crimes committed by his identical twin brother.
