Who is “Driving” Ethics, Literally?
Ever wish your car could drive itself? I am sure we all have at some point. But as companies begin writing software for driverless cars, the question of who decides how a programmed car will act in a collision becomes not only one of paramount importance, but also one of ethics.
According to the National Highway Traffic Safety Administration, approximately 80 people are killed on US roads every day. In 2013, a total of 35,244 people died as the result of car crashes; this is something that researchers at Google and other corporations want to change. Any company coding driverless car software, however, must decide how a car will act in a collision or other unforeseeable circumstance.
In a recent article published by the Claims Journal, Justin Pritchard discusses the issues that arise in coding driverless car software. Human error is indeed a main cause of fatal collisions, but can driverless cars produce a more favorable day-to-day outcome? Google and other companies recognize that a future with driverless cars does not automatically eliminate the possibility, or the occurrence, of automobile collisions and fatal crashes. Accidents will still happen, but is this a chance we are willing to take?
“This is one of the most profoundly serious decisions we can make. Program a machine that can foreseeably lead to someone’s death…we expect those [decisions] to be as right as we can be,” says Professor Patrick Lin, director of the Ethics and Emerging Sciences Group at Cal Poly, San Luis Obispo. Who drives, or should drive, these decisions? And is it even ethical to create a self-driving machine that could potentially kill people, or one that we know will probably kill someone?
What do you think? Do the benefits of driverless cars with state-of-the-art programming outweigh the lives that may still be lost to this invention?