If you don't do peer reviews of your design and code, you're missing the boat. Peer review is the most effective way I know of to improve software quality. It really works!
A more interesting question is whether e-mail or on-line tool-based peer reviews are effective. From what I've seen, they often don't work. I have no doubt that with a well-thought-out support tool and just the right group of developers they can be made to work. But more often I have seen them not work. This includes some cases where I've been able to do more or less head-to-head comparisons, both with students and with industry designers. The designers using on-line reviews are capable, hard-working, and really think the reviews are working. But they're not! They aren't finding the usual 40%-60% of defects in reviews (with most of the rest, hopefully, found via test).
I've also seen this effect in external reviews, where sometimes I send comments via e-mail and sometimes I subject myself to the US Air Transportation System and visit a design team in person. The visits are invariably more productive.
The reason most people give for electronic reviews is that they are more convenient. I can believe that. But (just to stir the pot) when you say that, what you're really saying is that you can't set aside a meeting time for a face-to-face review because you have more important things to do (like writing code).
Reviews save many hours of debugging for each hour spent reviewing. If all you care about is getting to buggy code as fast as possible, then sure, skip reviews. But if what you really care about is getting to a working product with the least effort possible, then you can't afford to skip reviews or do them in an ineffective way. Thus far I haven't seen data showing that tool-based reviews are consistently effective.
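To put rough numbers on that claim (these figures are hypothetical, purely for illustration): suppose a two-hour review meeting with four reviewers costs 8 person-hours and finds 10 defects, each fixed in about half an hour. That's 8 + (10 x 0.5) = 13 person-hours. If each of those defects had instead escaped to test at, say, 5 person-hours apiece to reproduce, debug, and fix, those same 10 defects would cost 10 x 5 = 50 person-hours. Your numbers will vary, but with any plausible inputs the arithmetic tends to come out heavily in favor of reviewing.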
If you're using on-line tools for reviews and they really work (or if you have been burned by them), let me know! If you think they work, please say how you know. Usually when people make that claim, I'm looking for evidence that they find about half their bugs via review, but if you have a novel and defensible measurement approach I'd be interested in hearing about it. I'd also be interested in hearing about differences between informal (e-mail pass-around) and tool-based review approaches.