WASHINGTON — Consider this hypothetical:
It’s a bright, sunny day and you’re alone in your spanking new self-driving vehicle. You’re sitting back, enjoying the view, moving along at the 45 mph speed limit.
As you approach a rise in the road heading south, a school bus driven by a human appears heading north and veers sharply toward you. There is no time to stop safely, and no time for you to take control of the car.
Does the car:
–– Swerve sharply into the trees, possibly killing you but possibly saving the bus and its occupants?
–– Perform a sharp evasive maneuver around the bus and into the oncoming lane, possibly saving you, but sending the bus and its driver swerving into the trees, killing her and some of the children on board?
–– Hit the bus, possibly killing you as well as the driver and kids on the bus?
In everyday driving, such no-win choices may be exceedingly rare. But when they happen, what should a self-driving car, programmed in advance, do? And what about a less dire situation where a moral snap judgment must still be made?
It’s not just a theoretical question anymore, with predictions that in a few years, tens of thousands of semi-autonomous vehicles may be on the roads. Investment in the field totals about $80 billion. Companies like Google-affiliated Waymo are working feverishly on them, mobility companies like Uber and Tesla are racing to beat them, and Detroit’s automakers are placing big bets on them.
There’s every reason for excitement: Self-driving vehicles will ease commutes, returning lost time to workers; enhance mobility for seniors and those with physical challenges; and sharply reduce the number of deaths on U.S. highways each year, now about 35,000.
But there are other questions to be sorted out as well, like what happens to cabdrivers and whether such vehicles will create sprawl.
And there is an existential question:
Who dies when the car is forced into a no-win situation?
“There will be crashes,” said Van Lindberg, an attorney in the Dykema law firm’s San Antonio office who specializes in autonomous vehicle issues. “Unusual things will happen. Trees will fall. Animals, kids will dart out.” Even as self-driving cars save thousands of lives, he said, “anyone who gets the short end of that stick is going to be pretty unhappy about it.”
Few people seem to be in a hurry to take on these questions, at least publicly.
It’s unaddressed, for example, in legislation moving through Congress that could result in tens of thousands of autonomous vehicles being put on the roads. In new guidance for automakers by the U.S. Department of Transportation, it is consigned to a footnote that says only that ethical considerations are “important” and links to a brief acknowledgement that “no consensus around acceptable ethical decision-making” has been reached.
There is evidence that people are worried about the choices self-driving cars will be programmed to make.
Last year, for example, a Daimler executive was quoted as saying its autonomous vehicles would prioritize the lives of its passengers over anyone outside the car. The company later said he’d been misquoted, since it would be illegal “to make a decision in favor of one person and against another.”
Last month, Sebastian Thrun, who founded Google’s self-driving car initiative, told Bloomberg that the cars will be designed to avoid accidents, but that “if it happens where there is a situation where a car couldn’t escape, it’ll go for the smaller thing.”
But what if the smaller thing is a child?
How that question gets answered may be important to the development and acceptance of self-driving cars.
Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year. It found that while respondents generally agreed that a car facing an inevitable crash should kill as few people as possible, regardless of whether they were passengers or people outside the car, they were less likely to buy any car “in which they and their family member would be sacrificed for the greater good.”
Self-driving cars could save tens of thousands of lives each year, Shariff said. But individual fears could slow down acceptance, leaving traditional cars and their human drivers on the road longer to battle it out with autonomous or semi-autonomous cars. The American Automobile Association says three-quarters of U.S. drivers are suspicious of self-driving vehicles.
“These ethical problems are not just theoretical,” said Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, who has worked with Ford, Tesla and other autonomous vehicle makers on just such issues.
While he can’t talk about specific discussions, Lin says some automakers “simply deny that ethics is a real problem, without realizing that they’re making ethical judgment calls all the time” in their development, determining what objects the car will “see,” how it will predict what those objects will do next and what the car’s reaction should be.
Does the computer always follow the law? Does it slow down whenever it “sees” a child? Is it programmed to generate a random “human” response? Do you run millions of computer simulations, simply telling the car never to kill anyone, and program that in? Is that even an option?
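To see how quickly those judgment calls become concrete, consider a rough sketch in Python of what one such rule might look like. The object classes, distance thresholds, and function names below are invented for illustration and are not drawn from any automaker’s software.

```python
# A hypothetical sketch, not real automotive code: even a mundane speed rule
# encodes value judgments about what the car "sees" and how it should respond.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str          # e.g. "child", "adult", "vehicle"
    distance_m: float  # distance from the car, in meters

def target_speed_mph(objects, speed_limit_mph):
    """Pick a target speed. Every branch below is a judgment someone coded in advance."""
    speed = speed_limit_mph              # "always follow the law" is itself a choice
    for obj in objects:
        if obj.kind == "child" and obj.distance_m < 30:
            speed = min(speed, 15)       # slow sharply whenever a child is "seen"
        elif obj.kind in ("adult", "vehicle") and obj.distance_m < 15:
            speed = min(speed, 25)       # a different threshold for a different class
    return speed

print(target_speed_mph([DetectedObject("child", 20.0)], speed_limit_mph=45))  # prints 15
```

Even in this toy version, someone had to decide which objects matter, at what distance, and by how much to slow down; those are exactly the ethical calls Lin says automakers are already making.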
“You can see what a thorny mess it becomes pretty quickly,” Lindberg said. “Who bears that responsibility? … There are half a dozen ways you could answer that question leading to different outcomes.”
Automakers and suppliers largely downplay the risks of what in philosophical circles is known as “the trolley problem,” named for a no-win hypothetical in which, in its original formulation, a person witnessing a runaway trolley can let it hit several people or, by pulling a lever, divert it and kill someone else.
In the case of the self-driving car, it’s often boiled down to a hypothetical vehicle hurtling toward a crowded crosswalk with malfunctioning brakes: A certain number of occupants will die if the car swerves; a number of pedestrians will die if it continues. The car must be programmed to do one or the other.
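As a purely illustrative sketch, assuming (hypothetically) that a planner scores each available maneuver by expected harm and picks the minimum, the structure of that forced choice might look like this; the probabilities and head counts are invented, not measured.

```python
# Hypothetical illustration of the trolley-style choice, not a real planner:
# each maneuver is scored by expected harm, and the car picks the minimum.

def expected_harm(outcome):
    """Expected deaths: probability of a fatal crash times the number of people at risk."""
    return outcome["p_fatal"] * outcome["people_at_risk"]

# Invented numbers, purely to show the shape of the decision.
maneuvers = {
    "swerve_into_barrier": {"p_fatal": 0.8, "people_at_risk": 1},  # the car's occupant
    "continue_straight":   {"p_fatal": 0.5, "people_at_risk": 4},  # people in the crosswalk
}

choice = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(choice)  # "swerve_into_barrier" under these invented numbers
```

Change the weights and the answer flips, which is the point: whoever sets those numbers is deciding, in advance, who bears the risk.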
Philosophical considerations aside, automakers argue the scenario is so contrived as to be all but bunk.
“I don’t remember when I took my driver’s license test that this was one of the questions,” said Manuela Papadopol, director of business development and communications for Elektrobit, a leading automotive software maker and a subsidiary of German auto supplier Continental AG.
If anything, self-driving cars could almost eliminate such an occurrence: They will sense a problem long before it would become apparent to a human driver and slow down or stop. Redundant systems, for brakes and for sensors, will detect danger and react more appropriately.
“The cars will be smart — I don’t think there’s a problem there. There are just solutions,” Papadopol said.