
Driving with the Machine: Self-Driving Cars, Responsibility, and Moral Luck

By Daniel Story
8 Dec 2022
[Photograph of a driver sleeping in a self-driving car]

Charlie and Ego pick me up from a university parking lot on a balmy September afternoon. A soccer match is about to start nearby, and Charlie, eyeing the fans bustling around his car, carefully drives us off the lot. Charlie is a former student of mine. I gather as we catch up that Charlie now does something lucrative and occult with cryptocurrencies, of which the Tesla Model S we are riding in is presumably an alchemical product. It’s nice to see Charlie. As pleasant as our conversation is, though, that’s not why I’m here. I want to meet Ego.

There’s a map of our sleepy little town of San Luis Obispo, California, on the touch screen in the center console of Charlie’s car. As we approach the edge of campus, Charlie casually selects a location downtown, clicks a few buttons, and lets go of the wheel. The wheel and the five-thousand-pound car begin to move of their own accord. Ego is driving.

Ego, despite the moniker, is not a person. “Ego” is what Charlie calls the Full Self-Driving beta function on his car, a test version of Tesla’s self-driving program that is designed to navigate city streets and highways.

When Ego takes over, my riding experience immediately feels different, less familiar. Charlie’s driving was smooth and confident; Ego’s driving feels jerky and neurotic.

Ego drives us down suburban streets, past cars, bicyclists, and pedestrians. It doesn’t come close to hitting anyone (you can tell Ego is programmed to be extra careful around pedestrians), and it gets us where we want to go. But it moves unnaturally. The wheel jitters. Sometimes the car moves haltingly, slowing on empty streets or stopping abruptly in intersections. At other times it moves like a missile, accelerating rapidly into left-hand turns or sweeping within inches of inanimate obstacles. Even with your eyes closed, you wouldn’t mistake it for a bad human driver. It feels unmistakably robotic. I’m sure that many of Ego’s peculiarities reflect temporary technical problems, but it’s hard to shake the sense that there’s something fundamentally alien at the wheel.

Despite my unease about Ego, I never felt unsafe while Ego was driving. That’s because Charlie was attending assiduously to Ego’s movements. Whenever Ego would do something weird, Charlie would fiddle with the console to inform Tesla’s algorithms that something went wrong. And when Ego started to do something egregious or annoying, Charlie would grab the wheel and manually navigate us to a new situation. I soon realized that it wasn’t accurate to say that Ego is driving or that Charlie is driving. The better thing to say is that they’re driving together.

This is how Charlie sees things, too.

“Over time it’s started to feel like it’s a team effort, that we’re working together. And I think that way because it messes up in the same spots. It’s very predictable. It shows me what it’s going to do, and I can override some of those functions. So it’s kind of like it’s doing the actual task of driving, but I’m overseeing it and making sure that it’s, you know, not crashing. So I do feel like it’s a team effort.”

This dynamic piques my interest. I’ve spent a lot of time thinking about moral responsibility in contexts of shared agency. Participants in shared agency are often praised or blamed for actions or outcomes that originate outside the sphere of their own individual agency. For example, if a medical provider working as part of a healthcare team goes above and beyond in a moment of crisis to save a patient’s life, the team members who enabled or supported the provider’s care may share some praise for saving the patient even though they weren’t directly involved in the crisis.

Whenever a person’s moral status (including their praiseworthiness or blameworthiness) depends upon factors that are at least partly outside of their control, they are subject to what’s called moral luck.

Moral luck is controversial because in the abstract we tend to think that a person’s moral status should be based on the quality of their intentions, choices, or character, that is, on things they can fully control. However, our intuitions about particular cases often suggest otherwise.

A classic example involves drunk driving: we tend to morally blame drunk drivers who hit and kill children much more harshly than equally negligent drunk drivers who luckily get home safely.

In the past, I’ve argued that moral luck is a common feature of shared agency because when you act jointly with other people your moral status can be affected by them in ways you can’t fully anticipate or control. You might find yourself to blame for another agent’s actions. And as I watched Charlie and Ego drive around town together, I couldn’t help but feel that their shared activity exhibited a similar dynamic.

Ego does not meet the conditions required for moral responsibility. But Charlie does. He is the responsible adult in this activity, which is inherently risky and could result in serious harms. It’s natural to think that he is responsible for it, even if, because he and Ego are sharing the reins, he is not fully in control of how it unfolds.

If that’s right, then people who use self-driving programs are susceptible to moral luck because they can be on the moral hook for what these programs do. And this luck is analogous to the luck involved in shared agency between people.

It’s possible to complicate this line of thought. For one, it will not always be feasible for people to productively intervene to prevent harmful self-driving malfunctions, especially as the technology becomes more sophisticated and reliable. Accidents often happen quickly, and intervening can make things worse. When an accident involving a self-driving car is not due to the human driver’s negligence (or some other morally criticizable error), many people will say that the human driver is not morally responsible. Moreover, the human driver is not the only potentially responsible person in the mix. As my colleague Patrick Lin has pointed out, those who design self-driving cars can bear responsibility for bad outcomes that result from criticizable design choices. In fact, in many situations designers would seem to be better candidates for blame than drivers, since, unlike drivers, designers have the luxury of time and forethought.

These points are both important, but they are compatible with the claim that human drivers are subject to a significant sort of moral luck by way of self-driving cars. At least when a human driver’s negligence leads to a harmful self-driving accident that would not have occurred had the driver not been negligent, it seems reasonable to say that the driver is blameworthy for that accident, even if other parties, such as designers or other drivers, bear some responsibility, too.

Reactions like praise and blame perform important functions in human life. Thus, thinking about the specific conditions under which humans are and are not morally responsible for self-driving cars is worthwhile. However, it is perhaps possible to overemphasize the importance of fault and blameworthiness here.

The more reliable and autonomous self-driving cars become, the more tempting it will be for human drivers to morally, socially, and personally distance themselves from harmful accidents involving their self-driving cars with the thought: “There’s nothing I could have done; I am unfortunate but as blameless as a mere spectator.”

This thought may be true, but it threatens to obscure in the driver’s conscience the fact that the driver’s own agency bears a special relation to the accident. There is something unsavory about someone who refuses to acknowledge this special relation. It’s appropriate for the driver, even if blameless, to feel a special type of first-personal regret about her choice to take the self-driving car out for a spin that day, a regret that is different from the sadness a spectator might feel and that might motivate her to make amends or apologize if she can. The willingness to take responsibility for those aspects of one’s embodied agency that fall outside of one’s control is a manifestation of a virtuous spirit and seems wholly appropriate – indeed, requisite – for those who choose to risk others’ lives by using self-driving cars.

The upshot is that using a self-driving car is morally risky, even for the most conscientious users. This is true of conventional cars as well. But the risk associated with self-driving cars is special because it originates in the actions of an artificial agent that has the potential to do great harm.

For now, I suspect that most self-driving car users are acutely aware of this. Charlie certainly is.

“If I was not paying attention, and I hit someone, I would feel 100% responsible. And I probably would feel at least mostly responsible if I was paying attention. So it very much feels like I am responsible for what it does.”

Daniel Story received his PhD in Philosophy from the University of California, Santa Barbara and currently teaches at California Polytechnic State University, San Luis Obispo. His research focuses primarily on issues relating to shared agency, responsibility, moral luck, and death.