
Should You Outsource Important Life Decisions to Algorithms?

[Image: photograph of an automated fortune teller]

When you make an important decision, where do you turn for advice? If you’re like most people, you probably talk to a friend, loved one, or trusted member of your community. Or maybe you want a broader range of feedback, so you pose the question to social media (or even the rambunctious horde of Reddit). Or maybe you don’t turn outwards, but instead rely on your own reasoning and instincts. Really important decisions may require that you turn to more than one source, and maybe more than once.

But maybe you’ve been doing it wrong. This is the thesis of the book Don’t Trust Your Gut: Using Data to Get What You Really Want in Life by Seth Stephens-Davidowitz.

He summarizes the main themes in a recent article: the best way to make big decisions about your happiness is to appeal to the numbers.

Specifically, big data: the collected information about the behavior and self-reports of thousands of individuals just like you, analyzed to tell you who to marry, where to live, and how many utils of happiness different acts are meant to induce. As Stephens-Davidowitz states in the opening line of the book: “You can make better life decisions. Big Data can help you.”

Can it?

There are, no doubt, plenty of instances in which looking to the numbers for a better approximation of objectivity can help us make better practical decisions. The modern classic example that Stephens-Davidowitz appeals to is Moneyball, which documents how analytics shifted evaluations of baseball players from gut instinct to data. And maybe one could Moneyball one’s own life, in certain ways: if big data can give you a better chance of making the best kinds of personal decisions, then why not try?

If that all seems too easy, it might be because it is. For instance, Stephens-Davidowitz relies heavily on data from the Mappiness project, a study that pinged app users at random intervals to ask them what they were doing at that moment and how happy they felt doing it.

One activity that ranked fairly low on the list was reading a book, scoring just above sleeping but well below gambling. This is not, I take it, an argument that one ought to read less, sleep even less, and gamble much more.

That is partly because there is more to life than momentary feelings of happiness, and partly because it just seems like terrible advice. It is hard to see exactly how one could base important decisions on this kind of data.

Perhaps, though, the problem lies in the imperfections of our current system of measuring happiness, or any of the numerous problems of algorithmic bias. Maybe if we had better data, or more of it, then we’d be able to generate a better advice-giving algorithm. The problem would then lie not in the concept of basing important decisions on data-backed algorithmic advice, but in its current execution. Again, from Stephens-Davidowitz:

These are the early days of the data revolution in personal decision-making. I am not claiming that we can completely outsource our lifestyle choices to algorithms, though we might get to that point in the future.

So let’s imagine a point in the future where these kinds of algorithms have improved to a point where they will not produce recommendations for all-night gambling. Even then, though, reliance on an impersonal algorithm for personal decisions faces familiar problems, ones that parallel some raised in the history of ethics.

Consider utilitarianism, a moral system that says one ought to act in ways that maximize the good, for whatever we think qualifies as good (for instance, one version holds that the sole or primary good is happiness, so one should act in ways that maximize happiness and/or minimize pain). The view comes in many forms but has remained a popular choice of moral system. One of its major benefits is that it provides a determinate and straightforward way (at least in principle) of deciding which actions one morally ought to perform.

One prominent objection to utilitarianism, however, is that it is deeply impersonal: when it comes to determining which actions are morally required, individual people are interchangeable, since what matters is only the overall increase in utility.

That such a theory demands a kind of robotic slavishness to calculation produces other counterintuitive results: when faced with moral problems, one is perhaps better served by a calculator than by genuine regard for the humanity of those involved.

Philosopher Bernard Williams thus argued that these kinds of moral systems involve “one thought too many.” For example, if you had to decide which of two people to rescue, your spouse or a stranger, one would hope that your motivation for saving your spouse was simply that they are your spouse, not that they are your spouse and that the utility calculations worked out in favor of that action. Moral systems like utilitarianism, says Williams, fail to capture what really motivates moral actions.

That’s an unnuanced portrayal of a complex debate, but we can generate parallel concerns for the view that we should outsource personal decision-making to algorithms.

Algorithms using aggregate happiness data don’t care about your choices in the way that a friend, family member, or even your own gut instinct does. But when making personal decisions we should, one might think, seek out advice from sources that are genuinely concerned with what we find important and meaningful.

To say that one should adhere to such algorithms also seems to run into a version of the “one thought too many” problem. Consider someone who is trying to make an important life decision, say about who they should be in a relationship with, how they should raise a child, what kind of career to pursue, etc. There are lots of different kinds of factors one could appeal to when making these decisions. But even if a personal-decision-making algorithm said your best choice was to, say, date the person who made you laugh and liked you for you, your partner would certainly hope that you had made your decision based on factors that didn’t have to do with algorithms.

This is not to say that one cannot look to data collected about other people’s decisions and habits to try to better inform one’s own. But even if these algorithms were much better than they are now, a basic problem would remain with outsourcing personal decisions to algorithms, one that stems from a disconnect between meaningful life decisions and impersonal aggregates of data.

Who Should Get the Vaccine First?

[Image: photograph of a doctor holding a syringe and medicine for vaccination]

With at least one COVID-19 vaccine scheduled to enter clinical trials in the United States in September, and Russia announcing that it will put its own vaccine into production immediately, it seems an auspicious moment to reflect on some ethical issues surrounding the new vaccines. If we could produce and administer hundreds of millions of doses instantaneously, there would presumably be no ethical question about how the vaccine ought to be distributed. The problem arises because it will take a while to ramp up production and to set up the capacity to administer it, so the vaccine will remain a relatively scarce resource for some time. There is thus a genuine ethical question here: which moral principles ought to govern who gets the vaccine while supply is limited and the capacity to administer it is still being built? In this column, I will weigh the pros and cons of a few principles that might be used.

One fairly straightforward principle is that everyone is equally deserving of treatment: everyone’s life matters equally, regardless of their race, gender, or socioeconomic status. The most straightforward way of fulfilling the principle is to choose vaccine recipients at random, or by lot. The trouble with this method is that, although it arguably best adheres to the principle of equality, it also fails to maximize the good. We know that not everyone is equally vulnerable to the virus; choosing vaccine recipients by lot would mean that many vulnerable people would die needlessly at the back of the line.

One way of defining “the good” in medical contexts is in terms of quality-adjusted life years, or “QALYs.” One QALY equates to one year lived in perfect health; health states are weighted on a scale from 0 (death) to 1 (perfect health). If our aim in distributing the vaccine is to maximize QALYs, then we would prioritize recipients for whom a vaccine would make the greatest difference in terms of QALYs. Since the vaccine would make the greatest difference to members of vulnerable groups, we would tend to put these groups at the front of the line. We could also combine the principle of maximizing QALYs with the equality principle by moving all members of vulnerable groups to the front of the line while selecting individual members of each group by lot.
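The combined rule described above, prioritizing tiers by expected QALY gain and then selecting within each tier by lot, can be sketched as a simple allocation procedure. The group names and QALY figures below are illustrative assumptions for the sake of the sketch, not data from the article:

```python
import random

# Illustrative (made-up) priority groups with the expected QALY gain per dose.
# A higher expected gain puts a group in an earlier priority tier.
groups = {
    "high-risk elderly": {"members": ["A", "B", "C"], "qaly_gain": 4.0},
    "essential workers": {"members": ["D", "E"], "qaly_gain": 1.5},
    "general public": {"members": ["F", "G", "H"], "qaly_gain": 0.5},
}

def allocation_order(groups, seed=None):
    """Order recipients by expected QALY gain (descending across tiers),
    while honoring equality *within* each tier via a lottery."""
    rng = random.Random(seed)
    order = []
    # Sort tiers so that the groups gaining the most QALYs per dose go first.
    for name, info in sorted(groups.items(),
                             key=lambda kv: kv[1]["qaly_gain"],
                             reverse=True):
        members = info["members"][:]
        rng.shuffle(members)  # lottery within the tier
        order.extend(members)
    return order

print(allocation_order(groups, seed=0))
```

However the within-tier lottery shakes out, every member of a higher-gain tier is served before any member of a lower-gain one, which is the structure of the combined principle.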

While the principle of maximizing QALYs would in this way help the most vulnerable, it might be open to the objection that it neglects those who perform particularly important social functions. These perhaps include government officials and workers in essential industries who cannot shelter in place. One justification for prioritizing these individuals would be that since they contribute more to the functioning of society, they are entitled to a greater level of protection from threats to their productivity, even if giving them the vaccine first would fail to maximize QALYs. Another idea is that prioritizing such individuals maximizes overall well-being, rather than QALYs: more people benefit if society functions well than if members of vulnerable groups live longer. In a sense, then, we can view the dispute between the principle of maximizing QALYs and the principle of rewarding social productivity as a dispute between two ways of defining “the good.”

Finally, we might consider using the vaccine to reward those who have made significant contributions to social welfare in their lives, both on the grounds of intrinsic desert and to provide incentives for individuals to make similar contributions in the future. For example, we might decide that, between two individuals A and B for whom the vaccine would make an equal difference in terms of QALYs, if A is a war veteran, retired firefighter, teacher, and so on, then A ought to receive the vaccine first. One troubling feature of using this criterion is that owing to past discriminatory policies, this principle might heavily favor men over women. On the other hand, men may already be favored over women by the principle of maximizing QALYs, since they appear to be more vulnerable to COVID-19.

A final suggestion is just to let the market decide who will get the vaccine. But it’s hard to see how that idea is compatible with any of the normative principles discussed in this column. This method of distribution will not maximize QALYs or reward those who make or have made significant contributions to social welfare, and it seems at odds with the notion that all lives matter equally — in effect, it expresses the idea that the lives of the wealthy matter more.

Here is my proposal, for what it’s worth. If the disease were deadlier and there were no effective basic protection against transmission, then we would have to worry much more about the ability of government and essential industries to function without the vaccine. Luckily, COVID-19 does not pose such a threat. This means that operationalizing the principle of maximizing QALYs would probably also maximize overall social well-being, despite prioritizing vulnerable groups over essential workers and non-vulnerable groups. As I suggested above, we ought to select individual members of groups by lot, so as to affirm their basic equality. And in cases where the vaccine would make a roughly equal difference in terms of QALYs, we ought to favor the would-be recipient who has made significant contributions to social welfare over their life.