
Should You Outsource Important Life Decisions to Algorithms?

[Image: photograph of an automated fortune teller]

When you make an important decision, where do you turn for advice? If you’re like most people, you probably talk to a friend, loved one, or trusted member of your community. Or maybe you want a broader range of feedback, so you pose the question to social media (or even the rambunctious horde of Reddit). Or maybe you don’t turn outwards at all, but instead rely on your own reasoning and instincts. Really important decisions may require turning to more than one source, and maybe more than once.

But maybe you’ve been doing it wrong. This is the thesis of the book Don’t Trust Your Gut: Using Data to Get What You Really Want in Life by Seth Stephens-Davidowitz.

He summarizes its main themes in a recent article: the best way to make big decisions about your happiness is to appeal to the numbers.

Specifically, big data: the collected information about the behavior and self-reports of thousands of individuals just like you, analyzed to tell you who to marry, where to live, and how many utils of happiness different acts are meant to induce. As Stephens-Davidowitz states in the opening line of the book: “You can make better life decisions. Big Data can help you.”

Can it?

There are, no doubt, plenty of instances in which looking to the numbers for a better approximation of objectivity can help us make better practical decisions. The modern classic example that Stephens-Davidowitz appeals to is Moneyball, which documents how analytics shifted evaluations of baseball players from gut instinct to data. And maybe one could Moneyball one’s own life, in certain ways: if big data can give you a better chance of making the best kinds of personal decisions, then why not try?

If that all seems too easy, it might be because it is. For instance, Stephens-Davidowitz relies heavily on data from the Mappiness project, a study that pinged app users at random intervals to ask them what they were doing at that moment and how happy they felt doing it.

One activity that ranked fairly low on the list was reading a book, scoring just above sleeping but well below gambling. This is not, I take it, an argument that one ought to read less, sleep even less, and gamble much more: partly because there’s more to life than momentary feelings of happiness, and partly because it just seems like terrible advice. It is hard to see exactly how one could base important decisions on this kind of data.
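To make the shape of this data concrete, here is a minimal sketch of how experience-sampling reports like Mappiness’s might be aggregated into an activity ranking. The activities echo the ones above, but the scores, scale, and structure are invented for illustration; this is my gloss, not the project’s actual pipeline.

```python
# Hypothetical illustration of ranking activities by average
# self-reported momentary happiness. All numbers are invented.
from collections import defaultdict
from statistics import mean

# Each record: (activity reported at the moment of the ping,
#               self-reported happiness on a 0-100 scale)
pings = [
    ("gambling", 83), ("gambling", 78),
    ("reading", 62), ("reading", 58),
    ("sleeping", 55), ("sleeping", 60),
]

# Group the scores by activity.
by_activity = defaultdict(list)
for activity, score in pings:
    by_activity[activity].append(score)

# Rank activities by mean reported happiness, highest first.
ranking = sorted(
    ((mean(scores), activity) for activity, scores in by_activity.items()),
    reverse=True,
)
for avg, activity in ranking:
    print(f"{activity}: {avg:.1f}")
```

The sketch also makes the limitation plain: the ranking only reflects averaged momentary feelings, which is exactly the thing the objection below presses on.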

Perhaps, though, the problem lies in the imperfections of our current system of measuring happiness, or any of the numerous problems of algorithmic bias. Maybe if we had better data, or more of it, then we’d be able to generate a better advice-giving algorithm. The problem would then lie not in the concept of basing important decisions on data-backed algorithmic advice, but in its current execution. Again, from Stephens-Davidowitz:

These are the early days of the data revolution in personal decision-making. I am not claiming that we can completely outsource our lifestyle choices to algorithms, though we might get to that point in the future.

So let’s imagine a point in the future where these kinds of algorithms have improved to a point where they will not produce recommendations for all-night gambling. Even then, though, reliance on an impersonal algorithm for personal decisions faces familiar problems, ones that parallel some raised in the history of ethics.

Consider utilitarianism, a moral system which says that one ought to act in ways that maximize the good, whatever we think qualifies as good (for instance, one version holds that the sole or primary good is happiness, so one should act in ways that maximize happiness and/or minimize pain). The view comes in many forms but has remained a popular choice of moral system. One of its major benefits is that it provides a determinate and straightforward way (at least in principle) of determining which actions one morally ought to perform.
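Schematically, and on one simple act-utilitarian formulation (my gloss, not a formula drawn from any particular author), the decision rule can be written as:

```latex
% One simple act-utilitarian decision rule: from the set of available
% actions A, choose the action a* that maximizes utility summed over
% everyone affected, where u_i(a) is the utility person i gets from a.
a^{*} = \arg\max_{a \in A} \sum_{i} u_i(a)
```

The determinacy is right there in the notation: plug in the utilities, take the maximum, and the question of what to do is settled.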

One prominent objection to utilitarianism, however, is that it is deeply impersonal: when it comes to determining which actions are morally required, people are inconsequential, since what’s important is just the overall increase in utility.

That such a theory warrants a kind of robotic slavishness toward calculation produces other unintuitive results: when faced with a moral problem, one is apparently better served by a calculator than by actual regard for the humanity of those involved.

Philosopher Bernard Williams thus argued that these kinds of moral systems involve “one thought too many.” For example, if you were in a situation where you had to decide which of two people to rescue – your spouse or a stranger – one would hope that you saved your spouse because they were your spouse, not because they were your spouse and because the utility calculations worked out in favor of that action. Moral systems like utilitarianism, says Williams, fail to capture what really motivates moral action.

That’s an unnuanced portrayal of a complex debate, but we can generate parallel concerns for the view that we should outsource personal decision-making to algorithms. Algorithms using aggregate happiness data don’t care about your choices in the way that, say, a friend, family member, or even your own gut instinct does. But when making personal decisions we should, one might think, seek out advice from sources that are genuinely concerned with what we find important and meaningful.

To say that one should adhere to such algorithms also runs into a version of the “one thought too many” problem. Consider someone trying to make an important life decision: who to be in a relationship with, how to raise a child, what kind of career to pursue. There are lots of factors one could appeal to in making these decisions. But even if a personal-decision-making algorithm said your best choice was to date the person who made you laugh and liked you for you, your partner would surely hope that your decision rested on something other than the algorithm’s output.

This is not to say that one cannot look to data collected about other people’s decisions and habits to try to better inform one’s own. But even if these algorithms were much better than they are now, a basic problem would remain with outsourcing personal decisions to algorithms, one that stems from a disconnect between meaningful life decisions and impersonal aggregates of data.

Sensorvault and Ring: Private-Sector Data Collection Meets Law Enforcement

[Image: closeup photograph of a camera lens]

Concerns over personal privacy and security are growing as more information surfaces about the operations of Google’s Sensorvault, Amazon’s Ring, and FamilyTreeDNA.

Sensorvault, Google’s enormous database, stands out from the group as a major player in digital profiling. Since at least 2009, it has been amassing data and constructing individual profiles for all of us from vast information about our location history, hobbies, race, gender, income, religion, net worth, purchase history, and more. Google and other private-sector companies argue that amassing digital dossiers facilitates immense improvements in their efficiency and profits. However, collecting such data also raises thorny ethical concerns about consent and privacy.

With regard to consent, the operation of Sensorvault is morally problematic for three main reasons. First, the minimum age for managing your own Google account in North America is 13, meaning that Google can begin constructing the digital profiles of children, despite the likelihood that they cannot comprehend the Terms of Service agreement or its implications. Their digital files are thus created before meaningful (legal) consent is even possible.

Second, the dominance of Google’s search engine, Maps, and other services is making it increasingly infeasible to live a Google-free life. In the absence of a meaningful exit option, the value of supposed consent is significantly diminished. Third, as law professor Daniel Solove puts it, “Life today is fueled by information, and it is virtually impossible to live as an Information Age ghost, leaving no trail or residue.” Even if you avoid using all Google services, your digital profile can and will still be constructed from other data points about your life, such as income level or spending habits.

The operation of Sensorvault and similar databases also raises moral concerns about individual privacy. Materially speaking, the content in Sensorvault puts individuals at serious risk of fraud, identity theft, public embarrassment, and reputational damage, given the detailed psychological profiles and life patterns the database contains. Google’s insistence that protective safeguards are in place is not particularly persuasive in light of recent security breaches, such as the theft of Social Security numbers and health information of military personnel and their families from a United States Army base.

More abstractly, these data-collecting companies represent an existential threat to our private selves. Solove argues in his book “The Digital Person” that the digital dossiers amassed by private corporations are eerily reflective of the files Big Brother keeps on citizens in 1984. He also compares the secrecy surrounding these profiles to Kafka’s The Trial, which warns of the dangers of losing control over personal information and of bureaucracies making decisions about our lives without our awareness.

The stakes grow higher still as Google, Amazon, and FamilyTreeDNA move beyond using collected data for their own purposes and begin collaborating with law enforcement agencies. These companies attempt to justify the practice on the grounds that it is a boon to policing, effectively helping to solve and deter crime. However, even if you are sympathetic to that justification, there are still significant ethical and legal reasons to be concerned by the growing relationship between data-collecting private-sector companies and law enforcement agencies.

In Google’s case, the data in Sensorvault is being shared with the government as part of a new policing mechanism. American law enforcement agencies have recently started issuing “geofence warrants,” which grant them access to the digital trails and location patterns left by individuals’ devices within a specific time and area, or “geofence.” Geofence warrants differ significantly from traditional warrants because they permit law enforcement to obtain access to Google users’ data without probable cause. According to one Google employee, “the company responds to a single warrant with location information on dozens or hundreds of devices,” thus ensnaring innocent people in a digital dragnet. Geofence warrants therefore raise significant moral and legal concerns in that they circumvent the Fourth Amendment’s privacy protections and probable-cause requirement.
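To see why these warrants sweep so broadly, here is a hypothetical sketch of what a geofence query does conceptually: it selects every device whose recorded location pings fall inside a bounding box during a time window. The names and structure below are illustrative assumptions, not Google’s actual Sensorvault interface.

```python
# Conceptual illustration of a geofence query: return every device
# seen inside a spatial bounding box during a time window. Any device
# that happened to pass through is swept in, guilty or not.
from dataclasses import dataclass

@dataclass
class Ping:
    device_id: str
    lat: float
    lon: float
    timestamp: int  # Unix time of the location report

def geofence_query(pings, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Return IDs of all devices seen inside the fence during the window."""
    return {
        p.device_id
        for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and t_start <= p.timestamp <= t_end
    }
```

Nothing in the query conditions refers to a suspect: the filter is purely spatial and temporal, which is precisely the dragnet worry.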

Amazon’s Ring (a home surveillance system) is also engaged in morally problematic relations with law enforcement. Ring has partnered with hundreds of police departments in the US to provide them with data from customers’ home security systems. Reports suggest that Ring has shared the locations of customers’ homes with law enforcement, is working on enabling police to automatically activate Ring cameras in an area where a crime has been committed, and that Amazon is even coaching police on how to gain access to users’ cameras without a warrant.

FamilyTreeDNA, one of the country’s largest genetic testing companies, is also putting consumers’ privacy and security at risk by providing its data to the FBI. FamilyTreeDNA has offered DNA testing for nearly two decades, but in 2018 it willingly granted law enforcement access to millions of consumer profiles, many of which were collected before users were aware of the company’s collaboration with law enforcement. While police have long used public genealogy databases to solve crimes, FamilyTreeDNA’s partnership with the FBI marks one of the first times a private-sector database has willingly shared its consumers’ sensitive information with government agencies.

Several strategies might mitigate the concerns these companies raise regarding consent, privacy, and law enforcement collaboration. First, the US ought to consider adopting safeguards similar to the EU’s General Data Protection Regulation, which, for example, sets the minimum age of consent for Google users at 16 and stipulates that Terms of Service “should be provided in an intelligible and easily accessible form, using clear and plain language and it should not contain unfair terms.” Second, all digital and DNA data-collecting companies should undergo strict security testing to protect against theft, fraud, and the exposure of personal information. Third, given the extremely private and sensitive nature of such data, regulations ought to be enacted to prevent companies like FamilyTreeDNA from sharing profiles amassed before their partnerships with law enforcement were publicly disclosed. Fourth, the US House Committee on Energy and Commerce should continue to monitor and inquire into these companies, as it did in its 2019 letter to Google; there needs to be greater transparency about what data is being stored and for what purposes. Finally, the Fourth Amendment must become part of the mainstream conversation about the amassing of digital dossiers and DNA profiles and about law enforcement access to such data without probable cause.

The Socialist Calculation Debate: Revisited

[Image: photo of a Karl Marx bust on a plinth in a small park]

The socialist calculation debate preoccupied some of our finest economic thinkers in the first half of the 1900s. It revolved around how best to solve society’s resource allocation problem: how do we best allocate society’s scarce resources? Two camps emerged in attempting to answer the question: the right-wing free-marketers and the left-wing socialists. The right’s answer to the allocation problem was a decentralized pricing system, whereas the left’s was a centrally planned economy. While on some level this debate died with the 20th century, glimmers of its return can be seen today.