Opinion: Tragedies happen, but driverless cars can still save lives

An Arizona woman was killed when a self-driving Uber car struck her as she was crossing the street. Initial reports say that she crossed outside of a crosswalk, but that fact alone does not settle the matter: a human driver might well have noticed her walking there and stopped in time. In light of the incident, Uber has, at least temporarily, suspended testing of its self-driving fleet.

There are three important takeaways from this horrible incident:

1. This was, by all accounts, a tragedy, just as tragic as any of the roughly 37,500 other crash-related deaths that occur every year in the United States;

2. Self-driving car engineers and developers need to learn from this incident; and

3. Recognizing how tragic it is when a human being is killed in a car accident, we need to speed up, not slow down, the deployment of self-driving cars, precisely because doing so will save more lives.

It is understandable that the first emotional reaction to this incident is to want to stop and re-evaluate whether we should be investing so much time, energy, and money into self-driving car fleets. At first glance, the question of whether we should let machines drive when they might kill pedestrians that a human driver would have avoided represents the real-life, 21st-century version of the classic philosophical "trolley dilemma" first proposed by Philippa Foot in 1967 and adapted by Judith Jarvis Thomson in 1985.

In the classic version of the problem, a runaway trolley is bearing down the tracks toward a group of five people who cannot see or hear it coming. If you do nothing, they will all die. There is, however, a switch you could throw that would divert the trolley down a second set of tracks, where only one innocent bystander would perish. Should you throw it?

By analogy, we currently have a system that allows human drivers, with all of their tics, quirks, bad habits, and malfunctions, to operate heavy machinery at incredibly high speeds, a system that results in approximately 37,500 deaths a year in the U.S. In fact, the U.S. Department of Transportation attributes about 94 percent of serious crashes to human error.

On the flip side, there is potentially another way of doing things: we could continue the push to make the switch to fully autonomous vehicles. According to some reports, autonomous vehicles could eliminate up to 90 percent of traffic fatalities, which would mean more than 30,000 lives saved every single year. But of course, as in the classic trolley problem, even if far fewer people were killed overall, they would be different people dying in different kinds of accidents. And who are we to play God?

There is, of course, no one right answer to this conundrum, and moral and ethical judgments about the value of human lives play a central role in whether or not to throw that switch, or, in our case, to make the switch to more, not less, autonomous vehicle development. But at second glance, our case offers two important reasons to go ahead and continue the driverless rollout without any of the trolley's moral quandaries.

The truth is that when it comes to preventing future accidents, the allegorical trolley is not already barreling toward the people, and there are no identifiable individuals on either set of tracks. What we are looking at is a far simpler problem: given the chance to save a large number of people or a smaller one, we should opt to save the larger number.

Also, unlike in the trolley problem, there is no guarantee that any one person would ever have to die. While fatality rates with human drivers are actually going up, and humans are unlikely to develop new driving abilities, autonomous vehicles are constantly learning, updating, and becoming better drivers overall. Of course, the chances are high that more freak accidents will occur, but making the switch, unlike throwing the switch, does not necessarily result in any loss of life.

That is why we should, as a nation, take a moment to grieve this tragic loss of life, because every innocent person who dies in a car crash deserves a better fate. We can also take the time to ask whether any particular self-driving car or fleet is ready for deployment; perhaps a Google car might not have made this same mistake. Certainly, there must be some criteria by which we decide which cars are ready for the road.

The point is simply that we should not put the idea on hold, even after a glaring setback. Instead, in this person's honor, and in honor of all of the lives that have been lost to human driving, we should move forward and continue developing the criteria and technology that will save millions of innocent people.