--------------------------------------------------------------------------
Source:
http://www.post-gazette.com/opinion/Op-Ed/2016/09/30/Safe-self-driving-It-s-not-here-yet/stories/201609300137
Safe self-driving? It’s not here yet
A lot of testing remains before self-driving cars should be widely deployed
Pittsburgh Post-Gazette
September 30, 2016 12:00 AM
By Philip Koopman and Michael Wagner
President Barack Obama this month announced a Department of Transportation draft policy on automated-vehicle safety. Hard-working staff at the Department of Transportation, the National Highway Traffic Safety Administration and the Volpe National Transportation Systems Center did a great job in creating it.
But, while self-driving cars promise improved safety, some critical issues must be addressed to ensure they are at least as safe as human-driven cars and, hopefully, even safer.
Machine learning is the secret sauce for the recent success of self-driving cars and many other amazing computer capabilities. With machine learning, the computer isn’t programmed by humans to follow a fixed recipe as it was in traditional systems. Rather, the computer in effect teaches itself how to distinguish objects, such as a bush from a person about to cross a street, by learning from example pictures in which bushes and people have been labeled.
Traditional software safety evaluations check the software’s recipe against its actual behavior. But with machine learning, there is no recipe — just a huge bunch of examples. So it is difficult to be sure how the car will behave when it sees something that differs even slightly from an example.
To take a simple example, perhaps a car will stop for pedestrians in testing because it has learned that heads are round, but it has trouble with unusual hat shapes. There are so many combinations of potential situations that it is nearly impossible to simply test the car until you are sure it is safe.
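To make the contrast concrete, here is a minimal sketch, in Python with entirely hypothetical features and scikit-learn assumed as the learning library. Nothing here comes from any real vehicle system; it only illustrates why, once behavior is learned from labeled examples, there is no explicit rule left to inspect.

    from sklearn.linear_model import LogisticRegression

    # Traditional software: a human writes an explicit, inspectable recipe.
    def is_pedestrian_rule(height_m, head_roundness):
        # Fixed thresholds a safety reviewer can read and challenge.
        return height_m > 1.2 and head_roundness > 0.8

    # Machine learning: the "rule" is whatever weights best fit labeled examples.
    examples = [
        [1.7, 0.9],   # person, bare (round) head
        [1.6, 0.85],  # person
        [0.9, 0.3],   # bush
        [1.1, 0.4],   # bush
    ]
    labels = [1, 1, 0, 0]  # 1 = pedestrian, 0 = bush

    model = LogisticRegression().fit(examples, labels)

    # A person in an unusual winter hat: tall, but the head no longer looks round.
    # No training example covers this case, so the learned boundary may or may
    # not generalize, and there is no explicit recipe for a reviewer to audit.
    print(model.predict([[1.7, 0.2]]))

The only artifacts available for review are the example set and the fitted weights, not a human-readable recipe, which is exactly the evaluation problem described above.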
Statistically speaking, even 100 million miles of driving is not enough to show that these cars are even as safe as an average human driver. The Department of Transportation should address this fundamental problem head on and require rigorous evidence that machine-learning results are sufficiently safe and robust.
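A rough back-of-the-envelope calculation, assuming a Poisson model for rare fatal crashes and a human fatality rate of roughly 1.1 per 100 million miles (an assumption drawn from recent US figures, not a number from the article), suggests why even 100 million fatality-free test miles proves little:

    import math

    human_rate_per_100M = 1.1   # assumed average US fatality rate per 100 million miles
    test_miles = 100e6          # 100 million miles of autonomous test driving

    # Expected fatalities over the test if the car were exactly as risky as
    # an average human driver:
    expected = human_rate_per_100M * test_miles / 100e6

    # Probability of seeing zero fatalities anyway (Poisson model):
    p_zero = math.exp(-expected)
    print(f"P(0 fatalities | human-level risk) = {p_zero:.2f}")  # about 0.33

In other words, a perfect 100-million-mile record is still entirely consistent with being no safer than an average human driver, so it cannot by itself demonstrate improved safety.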
A key element of ensuring safety in most machinery, though not yet in cars, is independence between the designers and the folks who evaluate safety. For airplanes, trains, medical devices and even UL-listed appliances, an independent examination of the software design is required. This helps keep things safe even when management feels pressure to ship a product before it’s really ready.
There are private-sector companies that do independent safety certification for many different products, including cars. However, it seems that most car companies don’t use these services. Rather than simply trust car companies to do the right thing, the Department of Transportation should follow the lead of other industries and require independent safety assessments.
Wirelessly transmitted software updates can keep car software current. But will your car still be safe after a software patch? You’ve probably experienced cell-phone or personal-computer updates that introduce new problems.
The Department of Transportation’s proposed policy requires a safety re-assessment only when a “significant” change is made to a vehicle’s software. What if a car company numbers a major new software update 8.6.1 instead of 9.0 just to avoid a safety audit? This is a huge loophole.
The policy should be changed to require a safety certification for every software change that potentially affects safety. This sounds like a lot of work, but designers have been dealing with this for decades by partitioning vehicle software across multiple independent computers — which guarantees that an update to a non-critical computer won’t affect unchanged critical computers. This is why non-critical updates don’t have to be recertified.
Changing an icon on the radio computer’s display? No problem. Making a “minor” change to the machine-learning data for the pedestrian sensor computer? Sorry, but we want an independent safety check on that before crossing the street in a winter hat.
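As a sketch of that partitioning argument, the following Python fragment (with hypothetical partition names and a hypothetical policy function, not anything from the DOT proposal) shows recertification keyed to which computer changed rather than to how "significant" the vendor labels the update:

    # Hypothetical partitioning of vehicle software across independent computers.
    SAFETY_CRITICAL = {"brake_ecu", "steering_ecu", "pedestrian_sensor_ecu"}
    NON_CRITICAL = {"radio_ecu", "navigation_display_ecu"}

    def requires_safety_recertification(changed_partitions):
        """Any change touching a safety-critical partition triggers an
        independent safety assessment; isolated non-critical changes do not."""
        return any(p in SAFETY_CRITICAL for p in changed_partitions)

    print(requires_safety_recertification({"radio_ecu"}))             # False: new icon
    print(requires_safety_recertification({"pedestrian_sensor_ecu"})) # True: new ML data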
Philip Koopman is an associate professor in the Department of Electrical and Computer Engineering at Carnegie Mellon University. Michael Wagner is a senior commercialization specialist at CMU’s National Robotics Engineering Center. They are co-founders of Edge Case Research, which specializes in autonomous vehicle safety assurance and embedded software quality. The views expressed here are personal and not necessarily those of CMU.
--------------------------------------------------------------------------
Notes: It is important to emphasize that the context for this piece is mainstream deployment of autonomous vehicles to the general public. It is not intended to address limited deployment with trained safety observer drivers. There are more issues that matter, but this discussion was framed to fit the target length and appeal to a broad audience.
"Statistically speaking, even 100 million miles of driving is not enough to show that these cars are even as safe as an average human driver."
I think this is a mistake. It's true that, in the US, the current death rate is just over 1 per 100 million miles, so, looking at deaths alone, it would be impossible to make a statistically reliable determination. But the crash rate is about 185 per 100 million miles and the injury rate is about 74 per 100 million miles. That's enough to make a highly reliable comparison.
Google's record so far is 1 minor crash (attributed to machine error) in 1 million miles, which would be incredibly unlikely if the true average rate were over 100 per 100 million miles.
https://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash/
http://www.caranddriver.com/features/safety-in-numbers-charting-traffic-safety-and-fatality-data
John,
It is all too easy to end up mixing apples and oranges, and drawing unsupported conclusions from the resultant fruit salad.
For example, if you are talking about the safety of intermediate levels of autonomy, the driver can (and often does) get blamed for not preventing crashes caused by autonomy failures. Citing only one Google crash, as you do, is misleading because the autonomy is not actually that good (nor does Google claim it is -- that's why they have safety drivers). Even counting only the one attributed crash for that shared-responsibility driving is misleading, because the last time I checked Google used trained, experienced safety drivers, two per vehicle (I don't know whether they have cut back to one recently). But if you bought such a car, it would just be you, probably with less training and with no supervision making sure you were continuously paying attention. Certainly the track record for Teslas operated by their owners is not as good as the Google data you cite (nor is the software likely to be the same between vehicles, etc.).
Also, I've not analyzed data regarding incident rates. You're assuming they are the same, but there are reasons to believe they might differ depending upon the operational scenario, because machines and human drivers have different strengths and weaknesses.
Trying to do an analysis of autonomous vehicle safety is not this simple because of all the confounding factors, especially when using data from vehicles that have safety drivers, are operating in restricted operational scenarios, are having their software updated frequently, and so on.
-- Phil