Monday, August 19, 2013

Peer Review Spreadsheet

Summary: I've found peer reviews are only effective if they produce tangible paperwork. Here's a minimalist approach that has worked for me in a classroom setting.



Good peer reviews can be very effective at finding bugs, but bad peer reviews can be nearly useless. I teach a course that involves a semester-long software project, and I found it really difficult to get students to do proper peer reviews. Basically, they were going through the motions but didn't produce much (i.e., didn't find many bugs). The emphasis of the materials I provided was on checklists for what to look for, with little formality in reporting results, because I wanted to minimize paperwork. That didn't work. Teams were only finding about 10% of their bugs in reviews, which is just not enough to be worth the effort.

So one year I changed things and required them to use a variant of the spreadsheet below to report results. The results were dramatic. The first semester I used the spreadsheet, defects found via peer review went from about 10% to about 50% across the class, and have stayed there ever since. (Also, the effort profile changed dramatically for the better, but that's a topic for another posting.) In my experience, finding roughly 50% of bugs in peer review is about right. So, while I'm not claiming ultimate academic rigor for this study, this experience suggests that adopting a spreadsheet like this can be effective at improving the quality of peer reviews.

Here is the spreadsheet (click to download the .XLS file):

[Image: the peer review spreadsheet]

The first few lines record the project name, date, artifact (e.g., file name of the code being reviewed), and names of the reviewers present. The # Bugs field is filled in later. The artifact author is intentionally omitted because the records are about finding bugs, not blaming the author. The issue rows are a place to record issues found in the review in free text, usually only one or two sentences apiece.
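
To make that layout concrete, here is a minimal sketch of one review record as plain Python data. The field names and the example issues are placeholders picked for illustration, not necessarily the column labels in the .XLS file.

    # Sketch of one peer review record, mirroring the spreadsheet layout.
    # Field names are placeholders; the downloadable .XLS may label them differently.
    review = {
        "project": "Example Project",               # project name
        "date": "2013-08-19",                       # date of the review
        "artifact": "door_control.c",               # file or document being reviewed
        "reviewers": ["Reviewer A", "Reviewer B"],  # reviewers present (the author is omitted)
        "num_bugs": None,                           # the "# Bugs" field -- filled in after statuses are set
        "issues": [
            # one free-text row per issue; status starts blank and is set by the author later
            {"description": "Timeout constant doesn't match the requirement", "status": ""},
            {"description": "Missing default case in the state machine switch", "status": ""},
        ],
    }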

The status fields are left blank during the review. However, within 24 hours after the review has taken place, the author of the item being reviewed needs to update each status to indicate "Fixed," "Not Fixed," or "Not a Bug." The idea here is that if a bug is easy to fix, the burden to record it and fix it is minimal -- it stays in the spreadsheet. But if a bug is too hard to fix immediately, it is marked "Not Fixed" and must be entered as a formal bug report into the bug tracking system (Bugzilla or otherwise). Some items turn out not to be bugs, and it is OK to record them as such (e.g., a feature request or a misunderstanding by the reviewers). When the author has updated the statuses, the # Bugs field is updated to reflect the number of bugs actually found, and that number is rolled up to a project tracking spreadsheet.
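
Here is a rough sketch in Python of how that tally can be computed -- treating "Fixed" and "Not Fixed" as bugs found and leaving "Not a Bug" rows out of the count. The actual spreadsheet may do the arithmetic differently, but the idea is the same.

    def count_bugs(issues):
        """Tally the "# Bugs" field once the author has set every status.

        Assumed rule: "Fixed" and "Not Fixed" both count as real bugs found by
        the review; "Not a Bug" rows (feature requests, reviewer misunderstandings)
        are excluded. A blank status means the author hasn't done the 24-hour
        update yet, so refuse to tally.
        """
        valid = {"Fixed", "Not Fixed", "Not a Bug"}
        for issue in issues:
            if issue["status"] not in valid:
                raise ValueError("Status not yet set for: " + issue["description"])
        return sum(issue["status"] != "Not a Bug" for issue in issues)

    # Example: after the author updates the statuses recorded during the review
    issues = [
        {"description": "Timeout constant doesn't match the requirement", "status": "Fixed"},
        {"description": "Missing default case in the switch", "status": "Not Fixed"},  # goes to Bugzilla
        {"description": "Reviewer misread the requirement", "status": "Not a Bug"},
    ]
    print(count_bugs(issues))  # 2 -- the number that gets rolled up to the tracking spreadsheet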

This last piece about rolling up the # Bugs count to a higher level of visibility is crucial. In my course I have the Teaching Assistants track the number of bugs found weekly for every project team and ask hard questions if the answers are consistently zero. Really, that's all it took. Apparently, if the teams know someone is watching, they'll look a bit harder to find bugs, and once they do, the process bootstraps into fairly effective reviews with minimal help from the course staff. It has worked several years in a row for me -- almost as if a light switch had been flipped for the students in my class. Results have been pretty consistent since we started using the spreadsheet: about 50% of bugs found in peer reviews across dozens of teams. It should be noted that we peer review not only code, but also tests, statecharts, sequence diagrams, and other artifacts for our projects, and the payoff in finding bugs early has been unmistakable. Of course I need to make a "Your Mileage May Vary" disclaimer here, but it's worked for me.
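
As a rough sketch of that weekly roll-up (with made-up team names and numbers, and a structure far simpler than any real tracking spreadsheet), a zero total is exactly the trigger for the hard questions:

    # Weekly roll-up sketch: team name -> "# Bugs" totals from that week's reviews.
    weekly_counts = {
        "Team 1": [2, 3, 0],
        "Team 2": [0, 0, 0],
        "Team 3": [5],
    }

    for team, counts in sorted(weekly_counts.items()):
        total = sum(counts)
        flag = "  <-- ask hard questions" if total == 0 else ""
        print(f"{team}: {total} bugs found in {len(counts)} review(s){flag}")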

I'd also be interested in hearing stories from industry teams about simple ways to make reviews more effective. Ideally, each team gets solid training on a review process along with help on soft skills for review leaders. But realistically, a lot of the time a bunch of engineers are just tossed into a room and told to make it happen. Knowing tricks that help is not a bad thing.


3 comments:

  1. Phil -

    Reminds me of something I saw a little while ago (http://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/) - the so-called "Ego Effect":

    QUOTE: >>>The Ego Effect drives developers to write better code because they know that others will be looking at their code and their metrics.<<<

    Similarly, knowing that the review will be "reviewed" induces the code inspectors to scrutinize more closely. Kinda meta, I guess.

  2. Hi Phil,

    I have similar experiences, but I carry the review process out on all the artifacts of a project. We begin by testing the requirements specification even before any potential solutions are considered. Resolving the issues with the requirements specification can eliminate the initiating factors for better than 60% of the potential bugs in a system, thus helping to improve the basis on which the rest of the project rests. Further reviews then continue through the rest of the design and implementation, through testing, and on to the final wrap-up of the project.

  3. Paul -- I agree completely. In my class I make the students do peer reviews on everything from requirements through test plans. By the time you get to coding, all the really big mistakes have already been made.


