Digital Decisions in the World of Automated Cars

By Erin Law
11 Nov 2015

We’re constantly looking toward the future of technology, excited by every new innovation that makes our lives easier in some way. Our phones, laptops, tablets, and now even our cars are becoming increasingly smart. Most new cars on the market today are equipped with GPS navigation, cruise control, and even intelligent parallel parking programs. Now, self-driving cars have made their way to the forefront of the automotive revolution.

Tens of thousands of Tesla’s electric Model S cars are about to hit the streets, equipped with cameras and sensors that read their surroundings and react accordingly. In their current technological state, these cars are strongest at highway driving, where smooth surfaces make maneuvering comfortable, but each car’s data collection system and connection to a central database mean the software will grow more sophisticated over time. The benefits of such systems, such as minimizing the number of accidents caused by drunk or distracted driving, seem to be plentiful.

Criticism of the potentially harmful effects of artificial intelligence on the human race is not new; take the Bruce Willis film “Surrogates” as proof. Increasingly, however, moral questions have arisen about the decision-making capacity of automated cars.

Imagine the following scenario as described by the MIT Technology Review: You own a self-driving car, and it is transporting you somewhere when an unpredictable series of events occurs, sending your car careening toward a crowd of ten people crossing the street. There is too little time for your car to stop and avoid a collision. The car can, however, steer into a wall on the side of the street, saving the pedestrians but killing you, the owner and occupant of the car.

So what should the car do? Should it react to protect its paying owner, or should it react to minimize loss of life? These questions rest on the assumption that these cars will eventually have the capacity to make this kind of moral judgment based on context, which is itself a sophisticated and potentially problematic prospect. Further, will the demographics of vehicle occupants eventually affect these decisions? Consider another scenario as described by Yahoo! Finance: Two networked self-driving cars are about to collide. One is occupied by a twenty-five-year-old mother, the other by a seventy-year-old childless man. If a fatal crash is certain, should the car carrying the man judge his life to be worth less than the mother’s and sacrifice its occupant and owner?

These kinds of ethical dilemmas require immediate attention as this technology develops. Although the innovation is potentially very beneficial to society, it could also allow vehicles or their programmers to place a value on vehicle occupants and accordingly save or sacrifice them to minimize loss of life.

Erin graduated from DePauw University with Sociology and Spanish majors. She was a member of the Media Fellows program and Kappa Kappa Gamma sorority, and she is a Minnesota native.