
Who Should Program the Morality of Self-Driving Cars?

By Meredith McFadden
27 Mar 2018

On Sunday, March 18, Elaine Herzberg died after being hit by one of Uber’s self-driving cars in Tempe, Arizona. The car was out on a test run, and video of the collision suggests a failure both of the self-driving technology and of the in-car safety driver meant to supervise the test. Uber has pulled its self-driving cars from the road while it cooperates with investigations, and discussion of the quickly advancing future of driverless vehicles has once again been stirred up in the press.

Whenever a new technology harms people, concerns naturally arise, and the discussion of what direction driverless cars should take and how they should be regulated is not a new one. Though Uber has not joined the federal government’s voluntary safety self-assessment program for driverless cars, Waymo and General Motors have. The first European Conference on Connected and Automated Driving met in April of 2017 to discuss the progress and regulation of this revolutionary technology. It is difficult to capture the ethics of a new way of life – technology often outstrips our moral frameworks, and it can be worrying to allow corporations to make decisions regarding policy before the populace has had the chance to catch up.

Discussions about driverless cars have for years invoked the Trolley Problem – a series of thought experiments that put pressure on the intuition that the right thing to do is whatever leaves the most people unharmed at the end of the day (or, if lives are at stake, the most survivors). If you see a trolley barreling out of control down its tracks toward a group of five people, and it is within your power to pull a switch that redirects the trolley to a track where only one person stands, should you pull the switch? Surveys show that a majority of people say it is permissible to pull the switch. However, if you must instead push someone in front of the trolley to stop it from hitting the five on the tracks, the number of survey participants who say this is the correct course of action diminishes. The way in which you prevent the trolley from hitting the five people seems to matter to many people’s moral intuitions.

The connection with driverless cars arises when we consider the decisions the cars will make when something goes wrong during the course of travel. If a pedestrian wanders into the car’s path, or two cars interfere with one another, how should the cars “decide” to prioritize survivability or minimize harm? If the self-driving Uber car on Sunday was not malfunctioning and faced a jaywalking pedestrian, should it have prioritized its driver over the pedestrian? What if there are more occupants in the car? What if there are five pedestrians?

Answering these questions will produce the algorithms that govern the operation of self-driving cars. Corporations will determine these priorities, and in Europe, principles resulting from the conference already suggest that the algorithms should not be left to the open market: companies producing self-driving cars must all agree to the same algorithms.
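To make the abstraction concrete, here is a minimal, purely hypothetical sketch of what encoding such a priority could look like. Every name, weight, and number in it is invented for illustration – real autonomous-vehicle planning systems are vastly more complex – but it shows where a moral judgment gets written down as code:

```python
# A toy "minimize expected harm" rule, of the kind the trolley-style
# debate is about. All names, weights, and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_harm: float    # estimated harm to people inside the car
    pedestrian_harm: float  # estimated harm to people outside the car

# The weight given to occupants relative to pedestrians is exactly the
# ethical choice at issue: 1.0 treats everyone equally; a larger value
# prioritizes the car's own passengers.
OCCUPANT_WEIGHT = 1.0

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option with the lowest weighted expected harm."""
    return min(
        options,
        key=lambda m: OCCUPANT_WEIGHT * m.occupant_harm + m.pedestrian_harm,
    )

# Example: braking in a straight line endangers a pedestrian ahead,
# while swerving shifts some of the risk onto the car's occupant.
options = [
    Maneuver("brake_straight", occupant_harm=0.1, pedestrian_harm=0.8),
    Maneuver("swerve", occupant_harm=0.4, pedestrian_harm=0.1),
]
print(choose_maneuver(options).name)  # prints "swerve" with equal weights
```

On a sketch like this, the question in the article’s title becomes concrete: who gets to set OCCUPANT_WEIGHT, and must every manufacturer set it the same way?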

It is understandable that feelings about driverless cars are mixed. They are unlikely to eliminate collisions, so injuries and fatalities will persist, but our attitudes toward those harms are likely to shift. This points to the complexity of our attitudes toward the harms we suffer during the course of our lives: they are rarely the simple result of a cost-benefit analysis. Though a universal switch to driverless cars may drastically reduce overall suffering, it changes the nature of the suffering that would occur, and that is a complex matter.

Currently, when someone is harmed in a car collision, we can attribute it either to a poor decision by a person or to some external force – as in cases we hasten to characterize as “accidents”: bad weather conditions, or circumstances no one could avoid or correct for.

Injuries and deaths that result from collisions now can be understood as the result of luck or human failure. These are different causal chains, perhaps, than harms that have been “calculated in” – the results of algorithms handed down from “on high,” so to speak. The notion that a system’s designers have decided in advance who lives and who dies in each scenario can seem alienating and troubling. It is telling that while those surveyed approve of driverless cars that would prioritize those outside the car, fewer would be willing to purchase one.

Certainly, it puts traveling by car, and our attitudes toward its risks, into a different moral and emotional category than they occupy now. Currently, the risks are rather high – higher than we tend to consciously acknowledge, as the shock many feel at the difference in risk between driving a car and traveling by plane suggests. Reducing the risk seems like a good thing, but we are not brute risk minimizers; every society holds other values that it balances against risk aversion. (Recall, as a fitting tangent, Benjamin Franklin’s line for the overly risk-averse: “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”)

When harms are the result of natural disasters, we don’t tend to look for someone to blame: they are tragedies. Harms from driverless cars don’t quite fit this category: decisions were made, just not by the “driver.” As a society of travelers, we have to decide who determines how cars should drive.

Meredith is an Assistant Professor at the University of Wisconsin-Whitewater. She earned her PhD at the University of California, Riverside, with a research focus in Philosophy of Action and Practical Reasoning, and continues to explore the relationship between reason and value. Her current research investigates modes of agential endorsement: how an agent’s understanding of what is good, what is reasonable, what she desires, and who she is informs what she does. Meredith is also committed to public philosophy and applied ethics; in particular, she is invested in illuminating debates in biomedical ethics, ethics of technology, and philosophy of law. Her website can be found at: https://mermcfadden.wixsite.com/philosopher.