
FTX, Effective Altruism, and Ends Justifying Means

image of smartphone displaying FTX logo with stock market graphs in background

Until a few weeks ago, Futures Exchange (FTX) was one of the largest and most respected cryptocurrency exchanges in the world. Then, in spectacular fashion, it all collapsed.

The collapse didn’t just wipe out the billions of dollars that users had invested in the platform. It also wiped out the fortune and reputation of FTX’s billionaire CEO and philanthropist, Sam Bankman-Fried. And because of Bankman-Fried’s close ties with effective altruism, a particular kind of philanthropy championed prominently by Oxford moral philosopher William MacAskill, the shockwaves of FTX’s collapse have been far-reaching.

Effective altruism is a movement with roots in moral philosophy. In 2011, when MacAskill was a graduate student in philosophy at Oxford, he co-founded the organization 80,000 Hours. The name comes from an estimate of the number of working hours a typical person will have over the course of their career. Its goal is to advise people on how to make the biggest impact with their careers in addressing the world’s most pressing problems. In practice, the advice is often to earn as much money as possible and then donate that money to causes that are effective at doing good. MacAskill himself describes the movement as follows:

The effective altruism movement consists of a growing global community of people who use reason and evidence to assess how to do as much good as possible, and who take action on this basis.

If you want to do as much good as possible, there are two main things to think about: (1) what distribution of resources makes the biggest difference, and (2) how do we get a lot of resources to distribute? On that second question effective altruists have advised each other and outsiders to pursue careers that will generate significant financial resources. On the first question, effective altruists have advised others to give (and have, themselves, given) to causes such as mosquito nets for areas subject to malaria (because the cost of the nets is so low compared to the life-saving potential), and to minimizing global risk such as pandemic preparedness, AI risk mitigation, and nuclear nonproliferation (because in these cases, the potential upside of lowering risk is so great).
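The cost-effectiveness reasoning behind choices like mosquito nets can be made concrete with a toy calculation. A sketch of the comparison, where all dollar figures are invented purely for illustration (they are not real charity-evaluation estimates):

```python
# Toy cost-effectiveness comparison (all figures invented for illustration).
# cost_per_life: rough dollars needed to save one life via each intervention.
cost_per_life = {
    "insecticide-treated bed nets": 5_000,   # hypothetical figure
    "new hospital wing": 500_000,            # hypothetical figure
}

budget = 1_000_000  # a hypothetical $1M donation

# Lives saved per intervention if the whole budget went to it.
lives_saved = {name: budget / cost for name, cost in cost_per_life.items()}
for name, lives in lives_saved.items():
    print(f"{name}: ~{lives:.0f} lives")
```

On these made-up numbers, the same donation saves two hundred lives one way and two the other, which is the shape of argument effective altruists use when ranking causes.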

Effective altruism is not an ethical theory per se, but it does have connections to the ethical theory known as utilitarianism. According to utilitarianism, actions are to be evaluated morally solely on the basis of their consequences. For each action you could perform, you consider the total set of consequences for all people, current and future. You are then morally obligated to choose the action that produces the best overall outcome. What makes an outcome best? That is a question about which there is disagreement, but everyone agrees that pain and suffering are bad, and that happiness and pleasure are good. On this (overly) simple view, the action with the best outcome is just the one with the best balance sheet: total happiness minus total pain.
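On this simple hedonic picture, moral choice reduces to bookkeeping. A minimal sketch, with invented numbers standing in for aggregate happiness and suffering:

```python
# Naive utilitarian "balance sheet": for each candidate action, total the
# happiness it produces minus the suffering, summed over everyone affected,
# then choose the action with the highest net total. Numbers are invented.
outcomes = {
    "keep promise":  {"happiness": 10, "suffering": 2},  # net +8
    "break promise": {"happiness": 12, "suffering": 9},  # net +3
}

def net_utility(outcome):
    return outcome["happiness"] - outcome["suffering"]

best_action = max(outcomes, key=lambda a: net_utility(outcomes[a]))
print(best_action)  # "keep promise"
```

Real utilitarian theories are far subtler than this, of course, but the sketch captures the structure: nothing matters except which row of the ledger comes out on top.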

One key feature of utilitarianism is that it doesn’t rule out any kind of action categorically. Lying, stealing, and even murder could, in certain situations, yield the overall best consequences. In such cases, utilitarianism says that’s what you ought to do. The ends justify the means.

If you apply utilitarianism to questions about philanthropy or about what career you should pursue, you get something that looks a lot like effective altruism. That’s not to say that effective altruism just is utilitarianism. Effective altruism isn’t a single unified theory, so it isn’t really in the same category as utilitarianism. But if effective altruists are those who “use reason and evidence to assess how to do as much good as possible, and who take action on this basis,” then the movement is going to share some features with utilitarianism – in particular, the idea that (in some cases) the ends might justify the means. When an effective altruist advises someone to take a career in finance so that he might make a lot of money that can then be donated to worthwhile causes, this advice is grounded in the idea that working in finance is simply a means to doing good. And for that reason, such a career trajectory is morally justifiable. That doesn’t mean that an effective altruist has to say that stealing is okay if it leads to the best consequences. But it does mean that the effective altruist is engaged in the same style of argument.

Now, in 2011, just as MacAskill was founding the effective altruism movement, he also met Sam Bankman-Fried, then an undergraduate at MIT. According to a New York Times article describing this meeting, Bankman-Fried was interested in utilitarianism and talked with MacAskill about working to promote animal welfare. MacAskill suggested that Bankman-Fried instead pursue a high-earning career and then donate to causes that could do the most good. The suggestion was clearly taken to heart. Bankman-Fried pursued a career in finance, founding the cryptocurrency exchange FTX in 2019, which allowed users to buy and sell cryptocurrencies like Bitcoin or Ethereum much as one might buy stocks on the New York Stock Exchange. Up until a few weeks ago, his net worth was valued at around $10.5 billion, which he had promised to give away in the spirit of effective altruism. He had, in fact, already given generously and had set up the FTX Foundation, which promotes and supports various effective altruist projects. One prominent part of the foundation is the FTX Future Fund, which makes grants to nonprofits and individuals who are helping to make positive and long-lasting change in the most effective ways. MacAskill, until his recent resignation, was on the team that decides how to distribute FTX Future Fund grants.

That brings us to the surprising events of the past two weeks. During the week of November 7, a flood of FTX users started to withdraw their funds, creating what was essentially a bank run. The details about what caused this are still emerging, but it was precipitated by the revelation that a hedge fund also run by Bankman-Fried, Alameda Research, was closely (too closely?) entangled in FTX’s finances. By November 9, FTX had suspended withdrawals. By November 11, FTX and its affiliates had declared bankruptcy.

During that same week, MacAskill and others resigned from their roles managing the FTX Future Fund. In a public letter they wrote:

We are now unable to perform our work or process grants, and we have fundamental questions about the legitimacy and integrity of the business operations that were funding the FTX Foundation and the Future Fund.

We don’t yet know for sure whether moral or legal norms were violated by Bankman-Fried and others at FTX, but MacAskill seems to have suspicions.

On Twitter, MacAskill responded to the possible bad behavior at FTX as follows:

For years, the EA [Effective Altruism] community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations. A clear-thinking EA should strongly oppose “ends justify the means” reasoning.

This particular response by MacAskill struck some as odd. After all, effective altruists, by MacAskill’s own lights, “use reason and evidence to assess how to do as much good as possible, and … take action on this basis.” If reason and evidence suggest that violating common-sense moral constraints will lead to the most possible good, then why shouldn’t an effective altruist take action on that basis? Seen in this light, what was wrong about Bankman-Fried’s actions at FTX is not that he violated common-sense moral constraints (if it is true that he did), but instead that he did so ineptly, which left him without significant resources to donate.

MacAskill hasn’t explained his thinking on this particular case publicly, but there are things that an effective altruist could say in response. One prominent response is that often those who violate common-sense moral constraints “for the greater good” eventually end up getting found out and doing more damage to the cause they were championing. In this way, “ends justify the means” reasoning is typically counterproductive. So, the evidence suggests that violating common-sense moral constraints will not generate as much good as possible. Effective altruists, then, have a case for saying that common-sense moral constraints shouldn’t be violated: it just doesn’t work out.

Is that a compelling response? Perhaps. But notice that this line of thought locates the wrongness of violating common-sense moral constraints not in the violations themselves, but rather in the way that others will perceive such violations. Knowing that, it is not so hard to convince oneself that violating a constraint in a particular case is not really so bad, especially if it looks like one can keep things secret. This suggests that even if the effective altruist’s response above is philosophically compelling, it may be one that is dangerous to give voice to. In the end, perhaps the best way to be an effective altruist is to not talk about effective altruism.

Living in the Hinge of History

photograph of telescope pointed above the lights of the city

Consider three things. First: technological development means that there are many more people in the world than there used to be. This means that, if we survive far into the future, the number of future people could be really, really big. Perhaps the overwhelming majority of us have not yet been born.

Second: the future could be really good, or really bad, or a big disappointment. Perhaps our very many descendants will live amazing lives, improved by new technologies, and will ultimately spread throughout the universe. Perhaps they will reengineer nature to end the suffering of wild animals, and do many other impressive things we cannot even imagine now. That would be really good. On the other hand, perhaps some horrific totalitarian government will use new technologies to not only take over humanity, but also ensure that it can never be overthrown. Or perhaps humanity will somehow annihilate itself. Or perhaps some moral catastrophe that is hard to imagine at present will play out: perhaps, say, we will create vast numbers of sentient computer programs, but treat them in ways that cause serious suffering. Those would be really bad. Or, again, perhaps something will happen that causes us to permanently stagnate in some way. That would be a big disappointment. All our future potential would be squandered.

Third: we may be living in a time that is uniquely important in determining which future plays out. That is, we may be living in what the philosopher Derek Parfit called the “hinge of history.” Think, for instance, of the possibility that we will annihilate ourselves. That was not possible until very recently. In a few centuries, it may no longer be possible: perhaps by then we will have begun spreading out among the stars, and will have escaped the danger of being wiped out. So maybe technology raised this threat, and technology will ultimately remove it.

But then we are living in the dangerous middle, and what happens in the comparatively near future may determine whether our story ends here, or instead lasts until the end of the universe.

And the same may be true of other possibilities. Developments in artificial intelligence or in biotechnology, say, may make the future go either very well or very poorly, depending on whether we discover how to safely harness them.

These three propositions, taken together, would seem to imply that how our actions affect the future is extremely morally important. This is a view known as longtermism. The release of a new book on longtermism, What We Owe the Future by Will MacAskill, has recently brought the view some media coverage.

If we take longtermism seriously, what should we do? It seems that at least some people should work directly on things which increase the chances that the long-term future will be good. For instance, they might work on AI safety or biotech safety, to reduce the chances that these technologies will destroy us and to increase the chances that they will be used in good rather than bad ways. And these people ought to be given some resources to do this. (The organization 80,000 Hours, for example, contains career advice that may be helpful for people looking to do work like this.)

However, there is only so much that can productively be done on these fronts, and some of us do not have the talents to contribute much to them anyway. Accordingly, for many people, the best way to make the long-term future better may be to try to make the world better today.

By spreading good values, building more just societies, and helping people to realize their potential, we may increase the ability of future people to respond appropriately to crises, as well as the probability that they will choose to do so.

To a large extent, Peter Singer may be correct in saying that

If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do; and if we are not at that critical point, it will have been a good thing to do anyway.

This also helps us respond to a common criticism of longtermism, namely, that it might lead to a kind of fanaticism. If the long-term future is so important, it might seem that nothing that happens now matters at all in comparison. Many people would find it troubling if longtermism implies that, say, we should redirect all of our efforts to help the global poor into reducing the chance that a future AI will destroy us, or that terrible atrocities could be justified in the name of making it slightly more likely that we will one day successfully colonize space.

There are real philosophical questions here, including ones related to the nature of our obligations to future generations and our ability to anticipate future outcomes. But if I’m right that in practice, much of what we should do to improve the long-term future aligns with what we should do to improve the world now, our answers to these philosophical questions may not have troubling real-world implications. Indeed, longtermism may well imply that efforts to help the world today are more important than we realized, since they may help, not only people today, but countless people who do not yet exist.

Peter Singer and the Ethics of Effective Altruism

In the first part of this two-part series, we explored the views of Princeton bioethicist Peter Singer and whether they count as “eugenics.” Although his possibly eugenicist views are what drew protestors to Singer’s recent talk at the University of Victoria, Singer wasn’t there to discuss bioethics. Instead, he had been invited by the Effective Altruism club, and the event included a screening of Singer’s 2013 TED talk on Effective Altruism.
