In the 2004 dystopian action movie “I, Robot,” the main character (played by Will Smith) harboured a great deal of resentment towards advanced robotic assistants because of their inability to make complex moral decisions. In fact, as you find out through the course of the film, a robot had chosen to save his life, rather than that of a young girl, based on logical calculations of their respective chances of survival during a catastrophic car accident. The point was simple: the decision-making power of robots will always be flawed, because they lack the emotional capacity to make nuanced moral choices.
While a decade ago considering moral theory as it relates to robotics might have seemed like a futuristic thought experiment, today it has become a reality: the advent of self-driving cars presents unique moral challenges, particularly around what decisions a robotic car should make in the event of a crash.
The fact of the matter is that while self-driving cars purport to deliver more efficient traffic, fewer accidents, and lower emissions, even robots will get into accidents, and autonomous vehicles will have to decide how to respond to them, which means deciding who might be injured: passengers or pedestrians.

It is a moral dilemma currently facing the autonomous vehicle industry, and one that will need to be programmed into, and resolved by, forthcoming self-driving cars.
In the split second of an automobile accident, a driver may make countless instantaneous moral decisions, particularly about the safety of passengers, pedestrians, and self. Not only that, but repeat the same accident scenario 1,000 times with 1,000 different people and you might get 1,000 different outcomes, each person making unique instinctual choices (as far as it is possible to make “choices” in such instants).
In fact, as you read this you might think your immediate response would be naturally altruistic, that you would look to save others before yourself; or perhaps you're more concerned about you and yours, thinking of personal safety first and foremost. Say what you will about one's innate propensity towards either end, these are decisions that we make, and thus they will need to be decisions that autonomous vehicles make as well.
But here's the rub: according to new research out of the University of California, when people were asked whether self-interest or the public good should predominate when programming moral principles into self-driving cars, most approved of the concept of a self-driving car sacrificing a passenger (or passengers) to save others, yet those same people would rather not purchase or ride in such a vehicle. Or, to put it another way, “participants were less likely to purchase a self-driving car that would sacrifice them and their passengers.”
“Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants in six Amazon Mechanical Turk studies approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs,” the study authors said.
Simply put, people like the idea of self-sacrifice, but they have trouble when it might be demanded of them.
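To make the tension concrete, the two programming philosophies the study contrasts can be sketched as a toy decision rule. This is purely illustrative: the function names, harm scores, and outcomes below are hypothetical, and no real autonomous-vehicle system works on anything this simple.

```python
# Illustrative sketch only: a "utilitarian" crash-response rule versus a
# "passenger-protective" one. Harm scores are hypothetical stand-ins for
# whatever risk estimates a real system might compute.

def utilitarian_choice(passenger_harm: int, pedestrian_harm: int) -> str:
    """Pick whichever option minimizes total expected harm."""
    if passenger_harm <= pedestrian_harm:
        return "sacrifice_passengers"
    return "protect_passengers"

def self_protective_choice(passenger_harm: int, pedestrian_harm: int) -> str:
    """Always protect the passengers, whatever the cost to others."""
    return "protect_passengers"

# One passenger at risk versus three pedestrians: the rules diverge,
# which is precisely the gap between what survey participants endorsed
# in the abstract and what they wanted in their own car.
print(utilitarian_choice(1, 3))       # sacrifice_passengers
print(self_protective_choice(1, 3))   # protect_passengers
```

The point of the sketch is that the divergence only appears when the occupant is the one on the losing side of the calculation, which is exactly where the study found buyers balking.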
Going forward it will be interesting to see how the automotive and technology industries meet these unique challenges, finding ways to satisfy three seemingly incompatible objectives: responding consistently, avoiding public outrage, and not alienating buyers. Given the outcome of the study mentioned above, finding an algorithm that aligns with complex, nuanced (not to mention fluctuating) human values will be challenging indeed.
Written by: Matt Klassen. www.digitcom.ca.