SciTech

Self-driving cars should stay in their lane

Credit: Isabelle Vincent

A new study in Risk Analysis: An International Journal reveals the prevailing public opinion on how self-driving cars should make decisions in the face of an imminent collision. Researchers at the Max Planck Institute for Human Development and the University of Göttingen in Germany presented participants with two plausible scenarios involving autonomous vehicles and asked which action would be most morally appropriate for the vehicle.

The first experiment considered whether a self-driving car should stay in its lane and brake, or swerve before braking, to avoid a pedestrian. The hypothetical likelihood of a collision was set at 20, 50, or 80 percent. Approximately 75 percent of the 872 participants opted for the car to stay in its lane and brake, and all participants chose staying in lane when the likelihood of colliding with a bystander in the swerve path was given as 50 percent.

In the second experiment, a separate group of 766 participants was presented with the same scenario in retrospect: the research team wanted to determine how people's judgments of autonomous vehicles changed based on the actions the vehicles took in a collision or near-collision event. The researchers found that, regardless of the outcome, most participants still judged staying in lane to be the right decision, but the number of subjects who found swerving acceptable halved when they were told that a collision had occurred.

According to Björn Meder, one of the researchers, the study “highlights the importance of gaining a better understanding of how people think about the behavior of autonomous vehicles under different degrees of uncertainty. The findings will help to inform policy making and public discussion of the ethical implications of technological advances that will transform society in a variety of ways.”

The key takeaway from the study is that the public is generally comfortable with autonomous vehicles defaulting to staying in their lane when a collision is imminent. This default requires the least additional information from a self-driving car's sensors and minimizes the vehicle's computational load, even if it does not always prevent loss of life. Previous studies in this area have found that people usually prefer self-driving cars to place the least value on their own passengers, but this study suggests the two preferences can coexist: a stay-in-lane default protects passengers while still limiting the risk to bystanders.
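To make this trade-off concrete, here is a minimal Python sketch of what such a stay-in-lane default might look like. It is purely illustrative, not the study's or any manufacturer's actual control logic; the probability inputs and the swerve_margin threshold are hypothetical.

# Illustrative sketch of a stay-in-lane default (hypothetical values).
def choose_maneuver(p_collision_stay: float,
                    p_collision_swerve: float,
                    swerve_margin: float = 0.3) -> str:
    """Return 'stay' or 'swerve' given estimated collision probabilities.

    The car brakes in its own lane by default, which requires no extra
    sensing of adjacent lanes or sidewalks; it swerves only when that
    maneuver is estimated to be substantially safer.
    """
    if p_collision_swerve + swerve_margin < p_collision_stay:
        return "swerve"
    return "stay"

# With equal 50 percent risks, as in one of the study's scenarios,
# the default holds and the car stays in its lane.
print(choose_maneuver(0.5, 0.5))  # -> stay
print(choose_maneuver(0.8, 0.2))  # -> swerve (clearly safer)

Because the default path needs no estimate of what lies outside the lane, the comparison only runs when the sensors can actually supply a swerve-risk estimate, which is part of why this policy is computationally cheap.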

The ethics of self-driving cars is an increasingly pertinent field, as automobile and tech companies race to build complex autonomous systems without a clear moral framework. Earlier this year, a crash involving one of Uber Technologies Inc.'s self-driving cars killed a pedestrian in Tempe, Arizona. The vehicle did not initiate emergency braking until 1.3 seconds before impact, even though it could have come to a complete stop in 3 seconds. A federal investigation, which led to the suspension of Uber's permit to operate self-driving vehicles, found that the system had relied on radar readings to decide when to stop rather than falling back on a default emergency maneuver. Perhaps incorporating the results of studies like this one would allow autonomous vehicles to make morally acceptable decisions that minimize risks to humans.