
FTX, Effective Altruism, and Ends Justifying Means


Until a week ago, FTX (short for “Futures Exchange”) was one of the largest and most respected cryptocurrency exchanges in the world. Then, in spectacular fashion, it all collapsed.

The collapse didn’t just wipe out the billions of dollars that users had invested in the platform. It also wiped out the fortune and reputation of FTX’s billionaire CEO and philanthropist, Sam Bankman-Fried. And because of Bankman-Fried’s close ties with effective altruism, a particular kind of philanthropy championed prominently by Oxford moral philosopher William MacAskill, the shockwaves of FTX’s collapse have been far reaching.

Effective altruism is a movement with roots in moral philosophy. In 2011, when MacAskill was a graduate student in philosophy at Oxford, he co-founded the organization 80,000 Hours. The name comes from an estimate of the number of working hours a person will have over the course of their career. Its goal is to advise people on how to make the biggest impact with their careers in addressing the world’s most pressing problems. In practice, the advice is often to earn as much money as possible and then donate that money to causes that are effective at doing good. MacAskill himself describes the movement as follows:

The effective altruism movement consists of a growing global community of people who use reason and evidence to assess how to do as much good as possible, and who take action on this basis.

If you want to do as much good as possible, there are two main things to think about: (1) what distribution of resources makes the biggest difference, and (2) how to get a lot of resources to distribute. On the second question, effective altruists have advised each other and outsiders to pursue careers that will generate significant financial resources. On the first question, effective altruists have advised others to give (and have themselves given) to causes such as mosquito nets for areas affected by malaria (because the cost of the nets is so low compared to their life-saving potential), and to minimizing global catastrophic risks through efforts such as pandemic preparedness, AI risk mitigation, and nuclear nonproliferation (because in these cases the potential upside of lowering risk is so great).

Effective altruism is not an ethical theory per se, but it does have connections to the ethical theory known as utilitarianism. According to utilitarianism, the sole way to evaluate actions morally is by their consequences. For each action you could perform, you consider the total set of consequences for all people, present and future. You are then morally obligated to choose the action that produces the best overall outcome. What makes an outcome best? That is a question about which there is disagreement, but nearly everyone agrees that pain and suffering are bad, and that happiness and pleasure are good. On this (overly) simple view, the action with the best outcome is just the one with the best balance sheet: total happiness minus total pain.

One key feature of utilitarianism is that it doesn’t rule out any kind of action categorically. Lying, stealing, and even murder could, in certain situations, yield the best overall consequences. In such cases, utilitarianism says that’s what you ought to do. The ends justify the means.

If you apply utilitarianism to questions about philanthropy or about what career you should pursue, you get something that looks a lot like effective altruism. That’s not to say that effective altruism just is utilitarianism. Effective altruism isn’t a single unified theory, so it isn’t really in the same category as utilitarianism. But if effective altruists are those who “use reason and evidence to assess how to do as much good as possible, and who take action on this basis,” then the movement will share some features with utilitarianism – in particular, the idea that (in some cases) the ends might justify the means. When an effective altruist advises someone to take a career in finance so that he might make a lot of money to donate to worthwhile causes, this advice is grounded in the idea that working in finance is simply a means to doing good, and that for this reason such a career trajectory is morally justifiable. That doesn’t mean an effective altruist has to say that stealing is okay if it leads to the best consequences. But it does mean that the effective altruist is engaged in the same style of argument.

Now, in 2011 just as MacAskill was founding the effective altruism movement, he also met Sam Bankman-Fried, then an undergraduate at MIT. According to a New York Times article describing this meeting, Bankman-Fried was interested in utilitarianism and talked with MacAskill about working to promote animal welfare. MacAskill suggested that Bankman-Fried instead pursue a high-earning career and then donate to causes that could do the most good. The suggestion was clearly taken to heart. Bankman-Fried pursued a career in finance, founding the cryptocurrency exchange FTX in 2019, which allows users to buy and sell cryptocurrencies like Bitcoin or Ethereum much like one might buy stocks on the New York Stock Exchange. Up until a few weeks ago, his net worth was valued at around $10.5 billion, which he had promised to give away in the spirit of effective altruism. He had, in fact, already given generously and had set up the FTX Foundation, which promotes and supports various effective altruist projects. One prominent part of the foundation is the FTX Future Fund, which makes grants to nonprofits and individuals who are helping to make positive and long-lasting change in the most effective ways. MacAskill, until his recent resignation, was on the team that decides how to distribute FTX Future Fund grants.

That brings us to the surprising events of the past two weeks. During the week of November 7, a flood of FTX users began withdrawing their funds, creating what was essentially a bank run. The details of what caused this are still emerging, but it was precipitated by the revelation that a hedge fund also run by Bankman-Fried, Alameda Research, was closely (too closely?) entangled in FTX’s finances. By November 9, FTX had suspended withdrawals. By November 11, FTX and its affiliates had declared bankruptcy.

During that same week, MacAskill and others resigned from their roles managing the FTX Future Fund. In a public letter they wrote:

We are now unable to perform our work or process grants, and we have fundamental questions about the legitimacy and integrity of the business operations that were funding the FTX Foundation and the Future Fund.

We don’t yet know for sure whether moral or legal norms were violated by Bankman-Fried and others at FTX, but MacAskill seems to have suspicions.

On Twitter, MacAskill responded to the possible bad behavior at FTX as follows:

For years, the EA [Effective Altruism] community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations. A clear-thinking EA should strongly oppose “ends justify the means” reasoning.

This particular response by MacAskill struck some as odd. After all, effective altruists, by MacAskill’s own lights, are those who “use reason and evidence to assess how to do as much good as possible, and who take action on this basis.” If reason and evidence suggest that violating common-sense moral constraints will lead to the most possible good, then why shouldn’t an effective altruist take action on that basis? Seen in this light, what was wrong about Bankman-Fried’s actions at FTX is not that he violated common-sense moral constraints (if it is true that he did), but that he did so ineptly, leaving himself without significant resources to donate.

MacAskill hasn’t explained his thinking on this particular case publicly, but there are things that an effective altruist could say in response. One prominent response is that often those who violate common-sense moral constraints “for the greater good” eventually end up getting found out and doing more damage to the cause they were championing. In this way, “ends justify the means” reasoning is typically counterproductive. So, the evidence suggests that violating common-sense moral constraints will not generate as much good as possible. Effective altruists, then, have a case for saying that common-sense moral constraints shouldn’t be violated: it just doesn’t work out.

Is that a compelling response? Perhaps. But notice that this line of thought locates the wrongness of violating common-sense moral constraints not in the violations themselves, but in the way that others will perceive such violations. Knowing that, it is not so hard to convince oneself that violating a constraint in a particular case is not really so bad, especially if it looks like one can keep things secret. This suggests that even if the effective altruist’s response above is philosophically compelling, it may be a dangerous one to voice. In the end, perhaps the best way to be an effective altruist is not to talk about effective altruism.

Bill Gates and the Ethics of Personal Pledges


A few days ago, Bill Gates, worth $113 billion, made headlines by pledging to donate “virtually all” of his wealth and to “move off the list of the world’s richest people.” You may think that billionaires shouldn’t exist, or that they wouldn’t exist in a just economic system. You may think Gates is simply meeting his moral duty, and not doing anything morally praiseworthy. Or, you may think that, while Gates will never be poor, it is still laudable to pledge almost all of one’s wealth to a philanthropic cause. His wealth will surely save more lives and alleviate more suffering for having been donated. Furthermore, billionaires don’t generally live up to the Gates standard. You may not be able to take it with you, but the fact is that most billionaires die billionaires.

But the moral problem on which I want to focus is not the moral status of Gates’ pledge – whether it is morally required or “supererogatory” (beyond the call of moral duty) – but its moral bindingness.

If Gates, for whatever reason, decided to reverse his decision in ten years, to stop the yearly donations he plans, would he have the moral right to do so, given his earlier pledge?

It might seem obvious that Gates would have the moral right to revoke his pledge, if he had a change of heart. After all, personal pledges are not promises to other people where we would need their consent to let us off the hook and void the promise. Personal pledges are just statements of personal intention or commitment that our future selves will act in a certain way. And we can revoke such commitments as easily as we can make them. I can pledge to wake up at six tomorrow morning, but I can also revoke my pledge when I find myself awake at midnight tonight and realize I wouldn’t get enough sleep.

Things are not always this simple, though. The philosopher Derek Parfit offers a thought experiment in which a Russian nobleman, a young socialist, decides to commit the vast estates he will one day inherit to the peasants. He knows his preferences could change over time, especially after receiving a sizeable fortune, so he writes a legally binding contract that will automatically donate his estates. He also adds a clause stating that his pledge cannot be revoked without his wife’s consent, even if he develops new reasons for wishing to revoke it. He then tells his wife not to provide this consent, using her as a mechanism to force his current will onto his future self. If he does change his mind, the nobleman tells his wife, “he” will no longer exist, and the man seeking to revoke the pledge will have no right to do so.

I think Bill Gates is doing something similar. He is making this pledge very publicly as a way of creating pressure on his future self to comply with the wishes of his current self.

In Parfit’s thought experiment, the nobleman does change his mind later in life. His wife must then choose between honoring the wishes of her current husband, who wants to revoke the pledge, and those of the earlier version of her husband. What would it be morally right for her to do?

It seems morally acceptable for the wife to refuse to revoke the document, and honor the wishes of her younger husband. But if the husband is the same person who made the pledge, why shouldn’t he have the right to revoke it? Implicitly, the wife’s decision to uphold the document would suggest that she views the older husband as an imposter of some kind, as someone who lacks the moral authority to revoke a pledge made by another person. Perhaps something like the psychological distance between the younger and older versions of the husband means that the older husband isn’t in a position to revoke the pledge made by the younger husband.

But it also isn’t obvious that the wife has a moral duty to uphold the pledge. The older husband was the younger husband. He knows why he made the pledge earlier in his life, and he presumably knows a lot more besides that his younger self didn’t. And given all this, he wishes to revoke his pledge. It is his wealth, after all, and it isn’t clear who could possibly have the right to force him to give it away, if he now wishes to keep it.

Can the younger self really hold such a moral claim over the behavior of the older self? It is difficult to say.

More broadly, the promises, pledges, and commitments we make at particular points in our lives can bind both ourselves and others, often for a long time and sometimes even beyond death. Most college endowments, for example, are so-called “restricted endowments”: when they are made, requirements are placed on how the assets can be used. As such, endowments often express the (peculiar) wills of people who have long been dead and whose interests and values are often different from those common today. The Dudley Professorship of Railroad Engineering at Yale, for example, gifted in 1923 by Plimmon H. Dudley, can only be used for “work in connection with the development and improvement of designs of rails, roadbeds, and crossties.” Yale’s president, Richard C. Levin, admitted to The New York Times that “I was kind of stumped as to what to do with the chair.” The professorship sat vacant for more than seventy years. My own university, the University of Edinburgh, is not unique in having an endowment-funded Parapsychology Unit, funded by a believer in the parapsychic. (The unit has since expanded its research into the “pseudo-psi”: what’s not psychic but looks like it.)

Such restrictions on the use of endowments are, for better or worse, legally binding. Whether the personal commitments and pledges made by our past selves are also morally binding on our future selves is a much more difficult question — one that Bill Gates may soon face.