The world is increasingly inundated by algorithmic decision-making. Everything we experience online is the result of a calculation. It is sometimes difficult to remember that what we see is not chosen randomly, or without reason. There are motivations behind the online experiences we have, mostly driven by engagement metrics and revenue generation.
Beyond our consumption choices, algorithms are used in high-stakes, real-world settings. Courts use algorithmic systems like COMPAS to estimate the likelihood of recidivism. Car insurance companies use AI algorithms to decide whether we should pay more or less for our insurance. Health insurers are using AI to approve and deny patient coverage.
As our society increasingly embeds algorithmic decision-making into its infrastructure, it will become ever more important to know whether, and precisely how, algorithms can treat us fairly. A great deal of current research on algorithmic fairness focuses on the best ways to remove biased data inputs and on how to adjust outcomes so that they conform to a fair distribution under some theory of outcome fairness (itself a contentious subject). There are, however, far fewer arguments that consider the possibility that algorithmic fairness may be unattainable in principle, and that some degree of unfairness may always be baked into algorithms. But on what grounds could we make this claim?
The reason is not simply that we can’t make outcomes fair, or that we can’t remove bias from data (though perhaps these things are true). The reason has to do with the very nature of how algorithms work.
To make this argument, I will consider fairness from a Rawlsian perspective. Rawls explicitly and repeatedly states that his conception of fairness concerns the basic structure of society. According to Rawls, the basic structure consists of a political constitution, an independent judiciary, legally recognized forms of property, an economic structure, and a family structure. For Rawls, then, fairness is relevant with respect to these general social practices, including decision making that affects these areas.
Fairness for Rawls also requires engagement among a particular kind of person, namely those who are rational and reasonable. Rational people pursue what is in their own best interests, while reasonable people are willing both to propose fair terms of cooperation and to accept them when proposed by others. If we are to be engaged in the creation of a fair social arrangement, we must be engaged with people who are both rational and reasonable; otherwise, fairness is likely impossible to create.
Crucially, we must also see one another as free and equal. What makes us equal, for Rawls, is that we possess two basic moral powers: the capacity to form a conception of justice and the capacity to form a conception of the good. What makes us free, according to Rawls, is that we conceive of ourselves and of each other as having the moral power to form, revise, and pursue a conception of the good. We regard ourselves as self-authenticating sources of valid claims. Simply put, we think of ourselves and one another as free.
If we are going to make algorithms fair in a Rawlsian sense, we need to reconcile how algorithms treat people with Rawls’s idea that a fair society is one constituted by free and equal persons. So, what is the argument for why algorithms may be unable to treat us fairly? It goes something like this:
Justice requires treating people as free. This entails (1) treating people as capable of conceiving of the good and pursuing it, and (2) treating people as self-authenticating sources of valid claims.
However, algorithms cannot, in principle, do this. Algorithmic systems do not have minds, and so they do not think of us as free, for the simple reason that they do not think at all. Treating someone as algorithmically predictable is conceptually incompatible with treating them as free in the Rawlsian sense. Numbers cannot adequately represent the kind of Rawlsian freedom that justice requires.
Algorithms can be deterministic, probabilistic, or non-deterministic. For those who believe that determinism and free will are incompatible, it should be obvious why deterministic algorithms cannot treat people as free. But even for those who accept that determinism and freedom are compatible, a deterministic algorithm still treats humans as deterministic systems, not as free agents.
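The distinction can be made concrete with a minimal sketch. The two functions below are invented for illustration (the feature names and numbers are assumptions, not any real system's logic); the point is that both the deterministic rule and the probabilistic one map a person's recorded attributes to a decision.

```python
# Hypothetical sketch: a deterministic rule and a probabilistic one.
# All feature names and numbers here are invented for illustration;
# neither function reflects any real decision system.
import random

def deterministic_decision(features: dict) -> str:
    # Same input, same output: the person is treated as a fixed
    # function of their recorded attributes.
    return "deny" if features["prior_offenses"] >= 2 else "grant"

def probabilistic_decision(features: dict, rng: random.Random) -> str:
    # Sampling adds variability, but the person is still represented
    # as a number: a probability computed from their features.
    p_flee = min(0.95, 0.2 + 0.15 * features["prior_offenses"])
    return "deny" if rng.random() < p_flee else "grant"
```

In both cases, what varies is the mapping from features to outcome, not the underlying representation of the person as a bundle of quantified, predictable attributes.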
Probabilistic and non-deterministic algorithms may sound as though they leave room for treating people as free, but it is not clear how they do so. The burden of proof seems to lie with those who claim that such algorithms are fair. If algorithms are supposed to treat people as free, then we need a reason to believe that treating people probabilistically or non-deterministically is equivalent to treating them as free.
Consider a concrete example. A court uses COMPAS to evaluate a defendant’s eligibility for bail. The probabilistic algorithm determines that there is a 75% chance that the defendant, if released, would flee, and on that basis the court denies bail. In what sense does the algorithm treat the defendant as free merely because it relies on a probabilistic calculation?
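The final step of such a process can be sketched in a few lines. The threshold value below is an assumption for illustration, and this is not how COMPAS actually works internally; the sketch only shows that the probabilistic estimate is ultimately collapsed into a binary outcome.

```python
# Hypothetical sketch of the bail example. The 0.7 threshold is
# invented for illustration; this is not COMPAS's actual logic.
def bail_decision(flight_risk: float, threshold: float = 0.7) -> str:
    # The probabilistic estimate is collapsed into a binary outcome;
    # nothing in this step consults the defendant as a source of claims.
    return "deny bail" if flight_risk >= threshold else "grant bail"

print(bail_decision(0.75))  # → "deny bail"
```

Whatever "freedom" the probability is supposed to register disappears at the threshold: the 75% estimate becomes a categorical denial.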
The crucial question that proponents of algorithmic fairness must answer is this: How can an algorithm treat a human being as free when its basic functioning relies on the predictability and quantification of human behavior?
While algorithmic harm and bias are real, there are reasons to be optimistic that we might make algorithms fairer, or more just, on the outcome side. One can define fair outcomes and then adjust decision-making parameters until those outcomes are achieved.
However, if justice and fairness require that our procedures themselves be fair, and if fairness demands treating people as free, then we must also ask whether algorithmic systems can ever satisfy this requirement.
We might wonder why algorithms need to be just in this particular way. One might argue that the solution is straightforward: all that is required is to ensure that a human being remains in the decision-making loop. If a human uses an algorithm and then offers a justification to another human, is that not enough to satisfy the requirement of treating people as free?
I think the answer depends on a few things. First, when a decision-maker justifies a decision by appealing to an algorithm, we can ask whether there are additional reasons for the decision and how central the algorithm’s role was in arriving at it. In cases where the sole justification is the algorithmic output, the presence of a human decision-maker is merely performative. We might as well remove the human entirely and allow an anthropomorphized system to deliver the decision.
However, if there are independent reasons a person can give beyond the algorithmic output, then perhaps the conditions of Rawlsian fairness can be preserved. Even then, this would depend on whether the reasons offered still treat the person subject to the decision as free.
The upshot of this argument is not simply that algorithmic systems can be poorly designed and need to use better, less biased data, or produce fairer outcomes (though surely both would be ideal). Rather, it is that there may be a fundamental mismatch between predictability and justice. If treating people as free requires engaging them as self-authenticating sources of claims, then any system that governs by prediction rather than justification may fall short of fairness in a Rawlsian sense.
This leaves us with an urgent question: are we willing to trade procedural justice for the benefits of algorithmic decision-making, or do certain decisions demand forms of reasoning that algorithms cannot, in principle, provide, such that we should forgo the use of algorithms and AI for them entirely?