Tuesday, October 25, 2016

Peer Review Tutorial

Here's a preview video on Peer Review (about 1 minute of content, plus leader/trailer stuff).  There is also a full version of this video available for free from the Edge Case Research video library (see below for details).

Full tutorial video: https://vimeo.com/181433327

  • Peer review finds half the defects for 10% of the project cost. (Blog)
  • But peer reviews only work if you actually do them! (Blog)
  • If you're finding more than 50% of your defects in test instead of peer review, then your peer reviews are broken. It's as simple as that. (Blog)
  • Here's a peer review checklist to get you started. (Blog) You should customize it to your needs.
For more about Edge Case Research and how to subscribe to our video training channel, please see this Blog posting.

Monday, October 24, 2016

Training Video Series

Summary: My startup company is launching a series of training videos.  Here's an overview.

(This blog posting is an ad for my startup company ... but it also has links to free training videos!)

Edge Case Research:

I have a startup company that has built a brisk business in design reviews, robustness testing, and software safety.  We're building on the 20 years and 200 design reviews I've done working with industry partners on a variety of embedded systems. We're also building on my long collaboration with my co-founder Mike Wagner and others at the National Robotics Engineering Center doing autonomy validation and safety.

We do a lot of work with autonomous systems (robots, self-driving cars), but also with consumer electronics and industrial controls. The general idea is that we have a dedicated team of embedded system experts who can help companies with both the technical aspects of their product and their software development processes.
If we can help you with your embedded product, just let us know!

Free Training Videos:

One of the things we have found is that there are a number of common areas in which our customers (and probably many others) could use help updating their technical and process skills.  Thus, we're launching a video training series covering the topics we most commonly see as we review projects.

To access the full videos for free, visit our video library page here.  If this is successful we'll add more videos over time, possibly requiring registration and/or a subscription for the full library.  But the ones there now are free, and I expect that these particular videos will stay free indefinitely.  Please respect our copyright and substantial investment in creating them by pointing people there to view them instead of trying to download the full length videos.

We also have a YouTube channel with just the previews that you can subscribe to if you want notification of new videos as they come out.  Our full length videos as well as the previews are hosted on our Vimeo channel.

Hope you enjoy these!

Friday, October 14, 2016

Interviews on Self-Driving Vehicle Safety

With all the recent activity on self-driving vehicle safety, I've done a few interviews.  If you are looking for various ways of explaining this complex topic, here are some pointers you might find useful.  The unifying theme is that each is based at least in part on an interview I did with the journalist who wrote it.

Sunday, October 2, 2016

Response to DoT Policy on Highly Automated Vehicles

I've prepared a draft response to DoT on their proposed policy for highly automated vehicle safety.

EE Times article that summarizes my response

The topics I cover are:
1. Requiring a safety argument that deals with the challenges of validating machine learning
2. Requiring transparent independence in safety assessment
3. Triggering safety reassessment based on safety integrity, rather than “significant” functionality
4. Requiring assessment of changes that can compromise the triggering of fall-back strategies
5. Characterizing what “reasonable” might mean regarding anticipation of exceptional scenarios
6. Assuring the integrity of data that is likely to be used for crash investigations
7. Diagnostics that encompass non-collision failures of components and end-of-life reliability loss
8. More uniform codification of traffic rule exceptions
9. Ensuring the safety of driver-takeover strategies for SAE Level 2 systems

Document pointers:
Comments either to this blog or to my university e-mail are welcome.  
         koopman - at - cmu.edu
I might not have time to respond in detail to all comments, but I appreciate anything that will help improve this response.

Saturday, October 1, 2016

Op-Ed About Autonomous Vehicle Safety

Yesterday the local newspaper published my op-ed piece on the DoT's draft policy for autonomous vehicle safety.  In the interest of starting a more general public discussion about this important topic, it is reproduced below.  I am working on a more comprehensive technical response, and I'll post that here soon.



Safe self-driving? It’s not here yet
A lot of testing remains before self-driving cars should be widely deployed
Pittsburgh Post-Gazette
September 30, 2016 12:00 AM

By Philip Koopman and Michael Wagner

President Barack Obama this month announced a Department of Transportation draft policy on automated-vehicle safety. Hard-working staff at the Department of Transportation, the National Highway Traffic Safety Administration and the Volpe National Transportation Systems Center did a great job in creating it.

But, while self-driving cars promise improved safety, some critical issues must be addressed to ensure they are at least as safe as human-driven cars and, hopefully, even safer.

Machine learning is the secret sauce for the recent success of self-driving cars and many other amazing computer capabilities. With machine learning, the computer isn’t programmed by humans to follow a fixed recipe as it was in traditional systems. Rather, the computer in effect teaches itself how to distinguish objects such as a bush from a person about to cross a street by learning from example pictures that label bushes and people.

Traditional software safety evaluations check the software’s recipe against its actual behavior. But with machine learning, there is no recipe — just a huge bunch of examples. So it is difficult to be sure how the car will behave when it sees something that differs even slightly from an example.

To take a simple example, perhaps a car will stop for pedestrians in testing because it has learned that heads are round, but it has trouble with unusual hat shapes. There are so many combinations of potential situations that it is nearly impossible to simply test the car until you are sure it is safe.

Statistically speaking, even 100 million miles of driving is not enough to show that these cars are even as safe as an average human driver. The Department of Transportation should address this fundamental problem head on and require rigorous evidence that machine-learning results are sufficiently safe and robust.

In nearly every industry except automobiles, a key element of ensuring machinery safety is independence between the designers and the folks who ensure safety. In airplanes, trains, medical devices and even UL-listed appliances, ensuring safety requires an independent examination of software design. This helps keep things safe even when management feels pressure to ship a product before it’s really ready.

There are private-sector companies that do independent safety certification for many different products, including cars. However, it seems that most car companies don’t use these services. Rather than simply trust car companies to do the right thing, the Department of Transportation should follow the lead of other industries and require independent safety assessments.

Wirelessly transmitted software updates can keep car software current. But will your car still be safe after a software patch? You’ve probably experienced for yourself cell phone or personal computer updates that introduce new problems.

The Department of Transportation’s proposed policy requires a safety re-assessment only when a “significant” change is made to a vehicle’s software. What if a car company numbers a major new software update 8.6.1 instead of 9.0 just to avoid a safety audit? This is a huge loophole.

The policy should be changed to require a safety certification for every software change that potentially affects safety. This sounds like a lot of work, but designers have been dealing with this for decades by partitioning vehicle software across multiple independent computers — which guarantees that an update to a non-critical computer won’t affect unchanged critical computers. This is why non-critical updates don’t have to be recertified.

Changing an icon on the radio computer’s display? No problem. Making a “minor” change to the machine-learning data for the pedestrian sensor computer? Sorry, but we want an independent safety check on that before crossing the street in a winter hat.

Philip Koopman is an associate professor in the Department of Electrical and Computer Engineering at Carnegie Mellon University. Michael Wagner is a senior commercialization specialist at CMU’s National Robotics Engineering Center. They are co-founders of Edge Case Research, which specializes in autonomous vehicle safety assurance and embedded software quality. The views expressed here are personal and not necessarily those of CMU.

Notes: It is important to emphasize that the context for this piece is mainstream deployment of autonomous vehicles to the general public. It is not intended to address limited deployment with trained safety observer drivers. There are more issues that matter, but this discussion was framed to fit the target length and appeal to a broad audience.