Yesterday the local newspaper published my op-ed piece on the DOT's draft policy for autonomous vehicle safety. In the interest of starting a more general public discussion about this important topic, it is reproduced below. I am working on a more comprehensive technical response, and I'll post that here soon.
--------------------------------------------------------------------------
Source:
http://www.post-gazette.com/opinion/Op-Ed/2016/09/30/Safe-self-driving-It-s-not-here-yet/stories/201609300137
Safe self-driving? It’s not here yet
A lot of testing remains before self-driving cars should be widely deployed
Pittsburgh Post-Gazette
September 30, 2016 12:00 AM
By Philip Koopman and Michael Wagner
President Barack Obama this month announced a Department of Transportation draft policy on automated-vehicle safety. Hard-working staff at the Department of Transportation, the National Highway Traffic Safety Administration and the Volpe National Transportation Systems Center did a great job in creating it.
But, while self-driving cars promise improved safety, some critical issues must be addressed to ensure they are at least as safe as human-driven cars and, hopefully, even safer.
Machine learning is the secret sauce for the recent success of self-driving cars and many other amazing computer capabilities. With machine learning, the computer isn’t programmed by humans to follow a fixed recipe as it was in traditional systems. Rather, the computer in effect teaches itself how to distinguish objects such as a bush from a person about to cross a street by learning from example pictures in which bushes and people have been labeled.
Traditional software safety evaluations check the software’s recipe against its actual behavior. But with machine learning, there is no recipe — just a huge bunch of examples. So it is difficult to be sure how the car will behave when it sees something that differs even slightly from an example.
To take a simple example, perhaps a car will stop for pedestrians in testing because it has learned that heads are round, but it has trouble with unusual hat shapes. There are so many combinations of potential situations that it is nearly impossible to simply test the car until you are sure it is safe.
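To make the hat example concrete, here is a deliberately tiny sketch (this is a toy, not how any real perception stack works): a nearest-neighbor "learner" trained on a handful of labeled examples over two invented shape features. It classifies ordinary pedestrians correctly, but flips to "bush" when the head-roundness feature drops, exactly the kind of brittleness that example-based testing struggles to rule out.

```python
# Toy illustration: a learned classifier can fail on inputs that differ
# only modestly from its training examples. The features are invented:
# (head_roundness, silhouette_height), each scaled 0..1.

import math

# Labeled training examples: the "huge bunch of examples" in miniature.
training_set = [
    ((0.90, 0.80), "pedestrian"),  # round head, tall silhouette
    ((0.85, 0.75), "pedestrian"),
    ((0.95, 0.85), "pedestrian"),
    ((0.30, 0.70), "bush"),        # irregular outline (a tall hedge)
    ((0.25, 0.60), "bush"),
    ((0.40, 0.65), "bush"),
]

def classify(features):
    """1-nearest-neighbor: the label of the closest training example wins."""
    return min(training_set,
               key=lambda ex: math.dist(features, ex[0]))[1]

# A bare-headed pedestrian resembles the training data: classified correctly.
print(classify((0.88, 0.78)))  # -> pedestrian

# The same pedestrian in an angular winter hat: the head no longer reads
# as "round", so the nearest example is a bush. There is no explicit rule
# anywhere in this "recipe-free" learner for a reviewer to inspect.
print(classify((0.45, 0.75)))  # -> bush
```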
Statistically speaking, 100 million miles of driving is not enough to show that these cars are even as safe as an average human driver. The Department of Transportation should address this fundamental problem head on and require rigorous evidence that machine-learning results are sufficiently safe and robust.
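A quick way to see why: the standard statistical "rule of three" says that observing zero failures in n independent trials gives roughly a 3/n upper bound on the failure rate at 95% confidence. The back-of-the-envelope sketch below applies it, assuming an illustrative human baseline of about 1.1 fatalities per 100 million miles (roughly the recent U.S. average); both that baseline and the independence assumption are simplifications.

```python
# Back-of-the-envelope check on the "100 million miles" claim, using the
# rule of three: zero events in n trials gives a ~3/n upper confidence
# bound (95%) on the event rate.

human_fatal_rate = 1.1 / 100_000_000   # assumed fatalities per mile (illustrative)

# Miles of fatality-free driving needed before we can claim, with 95%
# confidence, that the fleet is at least as safe as the human baseline.
miles_needed = 3 / human_fatal_rate
print(f"{miles_needed / 1e6:.0f} million miles")   # ~273 million miles

# And what do 100 million fatality-free miles actually demonstrate?
bound = 3 / 100_000_000                # 95% upper bound on the fatal rate
print(f"{bound * 100_000_000:.1f} fatalities per 100M miles")  # 3.0
# That bound is roughly 3x worse than the human baseline, so even a
# flawless 100-million-mile test cannot show average-human safety.
```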
A key element of ensuring safety in machinery, though notably not yet in cars, is independence between the designers and the people who assess safety. In airplanes, trains, medical devices and even UL-listed appliances, ensuring safety requires an independent examination of software design. This helps keep things safe even when management feels pressure to ship a product before it’s really ready.
There are private-sector companies that do independent safety certification for many different products, including cars. However, it seems that most car companies don’t use these services. Rather than simply trust car companies to do the right thing, the Department of Transportation should follow the lead of other industries and require independent safety assessments.
Wirelessly transmitted software updates can keep car software current. But will your car still be safe after a software patch? You’ve probably experienced cell phone or personal computer updates that introduce new problems.
The Department of Transportation’s proposed policy requires a safety re-assessment only when a “significant” change is made to a vehicle’s software. What if a car company numbers a major new software update 8.6.1 instead of 9.0 just to avoid a safety audit? This is a huge loophole.
The policy should be changed to require a safety certification for every software change that potentially affects safety. This sounds like a lot of work, but designers have been dealing with this for decades by partitioning vehicle software across multiple independent computers — which guarantees that an update to a non-critical computer won’t affect unchanged critical computers. This is why non-critical updates don’t have to be recertified.
Changing an icon on the radio computer’s display? No problem. Making a “minor” change to the machine-learning data for the pedestrian sensor computer? Sorry, but we want an independent safety check on that before crossing the street in a winter hat.
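As a sketch of that audit logic (the computer names, firmware bytes and criticality table below are all hypothetical stand-ins): hash each partitioned computer's firmware image, and let the partition boundary, not a marketing version number, decide what needs a fresh safety check.

```python
# Sketch of version-number-independent re-certification logic for
# partitioned vehicle software. All names and images are hypothetical.

import hashlib

def fingerprint(image: bytes) -> str:
    """Content hash of one computer's firmware image."""
    return hashlib.sha256(image).hexdigest()

# Which partitioned computers are safety-critical.
critical = {"radio_display": False, "pedestrian_sensor": True}

# Firmware images before and after an over-the-air update. Here the
# update calls itself 8.6.1 instead of 9.0, but only the hashes matter.
before = {
    "radio_display": b"radio firmware v8.6.0",
    "pedestrian_sensor": b"perception code + learned data v3.2",
}
after = dict(before, radio_display=b"radio firmware v8.6.1")

for computer in before:
    changed = fingerprint(before[computer]) != fingerprint(after[computer])
    if changed and critical[computer]:
        print(computer, "-> independent safety check before deployment")
    elif changed:
        print(computer, "-> non-critical change, no safety re-certification")
    else:
        print(computer, "-> unchanged, existing certification still applies")
```

The point of the hash check is that partitioning makes "unchanged" verifiable: a byte-identical critical image is positive evidence that the certified behavior was not touched, no matter what the release notes say.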
Philip Koopman is an associate professor in the Department of Electrical and Computer Engineering at Carnegie Mellon University. Michael Wagner is a senior commercialization specialist at CMU’s National Robotics Engineering Center. They are co-founders of Edge Case Research, which specializes in autonomous vehicle safety assurance and embedded software quality. The views expressed here are personal and not necessarily those of CMU.
--------------------------------------------------------------------------
Notes: It is important to emphasize that the context for this piece is mainstream deployment of autonomous vehicles to the general public. It is not intended to address limited deployment with trained safety observer drivers. There are more issues that matter, but this discussion was framed to fit the target length and to appeal to a broad audience.