Thursday, March 31, 2011

Good Article on User Interface Design Rules

I found a well-written article about user interface design rules: "Understanding user interface design rules," by Jeff Johnson, EE Times. (Two-part article links: Part 1 | Part 2)

The take-aways are:
  • Make the operation task-focused, simple, and consistent. Bring the operation itself as close as possible to what the user wants to accomplish (the outcome), not to what the device knows how to do (the mechanisms available).
  • Keep things simple and consistent.
  • Make the terminology familiar (and task-focused, and simple). Use the meanings that people expect, not jargon. Use different terms when concepts differ in a way that matters; don't use different terms if the difference doesn't matter to the user.
  • Make mistakes low cost (e.g., provide undo) so people aren't afraid to explore the interface.
The concept of an objects/actions matrix looks interesting. The idea is that you put objects as the rows of a matrix and the actions on those objects as the columns. Whenever an action is permissible for an object, that cell gets a check mark in the matrix. Good designs have small, densely filled-in matrices (simple, consistent). Bad designs have large, sparsely filled-in matrices (complex, inconsistent).
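
To make the matrix idea concrete, here is a minimal sketch (my own illustration, not from the article) that stores an objects/actions matrix as a boolean table, lists the missing object/action combinations, and computes how densely the matrix is filled in. The object and action names are made-up placeholders.

/* Sketch: objects/actions matrix stored as a boolean table.
 * Object and action names are hypothetical placeholders. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_OBJECTS 4
#define NUM_ACTIONS 4

static const char *objects[NUM_OBJECTS] = { "file", "folder", "message", "contact" };
static const char *actions[NUM_ACTIONS] = { "create", "open", "rename", "delete" };

/* matrix[i][j] is true when action j is permitted on object i */
static const bool matrix[NUM_OBJECTS][NUM_ACTIONS] = {
    /* file    */ { true, true, true,  true },
    /* folder  */ { true, true, true,  true },
    /* message */ { true, true, false, true },
    /* contact */ { true, true, false, true },
};

int main(void)
{
    int filled = 0;
    for (int i = 0; i < NUM_OBJECTS; i++) {
        for (int j = 0; j < NUM_ACTIONS; j++) {
            if (matrix[i][j]) {
                filled++;
            } else {
                printf("No '%s' action for object '%s'\n", actions[j], objects[i]);
            }
        }
    }
    /* Density near 1.0 suggests a simple, consistent design;
     * a large, sparse matrix suggests complexity and inconsistency. */
    printf("Objects/actions density: %.2f\n",
           (double)filled / (double)(NUM_OBJECTS * NUM_ACTIONS));
    return 0;
}

Enumerating the unchecked cells this way also gives a quick list of inconsistencies to either support or deliberately rule out.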

This article is well worth the time for anyone who designs embedded systems.

Thursday, March 24, 2011

Using a Risk Analysis Table to Categorize Bug Priority

Perhaps the single most difficult thing to get right in a bug report is assigning a priority. It is tempting to assign a priority based on how spectacular the result is. But you also need to take into account how likely the bug is to manifest in a deployed system. For example, a bug that causes a system to crash and automatically reboot may be a lot more dramatic than a confusing screen message, but if that confusing screen message results in thousands of tech support calls, it could be a disaster for your company.  The best approach is one that combines severity and probability.

It is tempting to try to use fancy math to combine severity and probability. Usually that doesn't work out so well in practice. Instead, I recommend borrowing a technique from the safety critical system community. They use a Risk Table to assign a "criticality" to a particular adverse event as part of their Preliminary Hazard Analysis (PHA). You can use the same table in a different way and just say you are assigning bug "priority" instead of "risk."  Below is an example Risk Table:


Probability is your best estimate of how often the bug will be seen in use. Consequence is how big a problem it will cause when it is seen. The risk (indicated by each box in the grid) is how big a threat the bug poses to product reputation -- which ought to be the same as the bug priority.  It helps to have clearly defined statements to guide assigning any particular bug to a row and to a column. Once you assign probability and consequence, the table tells you the priority of that particular bug.
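
As a rough sketch of how such a lookup might be coded (the row and column labels and the cell values below are my own illustrative choices, not the exact table from this post), the risk table becomes a small two-dimensional array indexed by consequence and probability:

/* Sketch: risk-table lookup for bug priority.
 * The 5x5 layout and labels are illustrative assumptions. */
#include <stdio.h>

typedef enum { PROB_REMOTE, PROB_LOW, PROB_MEDIUM, PROB_HIGH, PROB_FREQUENT } probability_t;
typedef enum { CONS_MINOR, CONS_MARGINAL, CONS_SERIOUS, CONS_CRITICAL, CONS_SEVERE } consequence_t;
typedef enum { PRIO_LOW, PRIO_MEDIUM, PRIO_HIGH, PRIO_VERY_HIGH } priority_t;

/* Rows = consequence, columns = probability.  Consequence is weighted a bit
 * more heavily than probability, in the spirit described in the text. */
static const priority_t risk_table[5][5] = {
    /*               REMOTE       LOW          MEDIUM          HIGH            FREQUENT        */
    /* MINOR    */ { PRIO_LOW,    PRIO_LOW,    PRIO_LOW,       PRIO_MEDIUM,    PRIO_MEDIUM     },
    /* MARGINAL */ { PRIO_LOW,    PRIO_LOW,    PRIO_MEDIUM,    PRIO_MEDIUM,    PRIO_HIGH       },
    /* SERIOUS  */ { PRIO_LOW,    PRIO_MEDIUM, PRIO_MEDIUM,    PRIO_HIGH,      PRIO_HIGH       },
    /* CRITICAL */ { PRIO_MEDIUM, PRIO_MEDIUM, PRIO_HIGH,      PRIO_VERY_HIGH, PRIO_VERY_HIGH  },
    /* SEVERE   */ { PRIO_HIGH,   PRIO_HIGH,   PRIO_VERY_HIGH, PRIO_VERY_HIGH, PRIO_VERY_HIGH  },
};

static priority_t bug_priority(consequence_t c, probability_t p)
{
    return risk_table[c][p];
}

int main(void)
{
    static const char *names[] = { "Low", "Medium", "High", "Very High" };
    /* Example: a merely confusing message that nearly every user will hit */
    printf("Priority: %s\n", names[bug_priority(CONS_MARGINAL, PROB_FREQUENT)]);
    return 0;
}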


You can modify this table to have 3 to 6 rows and 3 to 6 columns depending upon your needs (the table can be a rectangle rather than a square).  You can also skew how risks are assigned, as has been done in this example (consequence is weighted a bit more heavily than probability by putting extra "Very High" boxes in the top row, and so on). The point is not the table itself, but rather that binning things this way makes assigning bug priority a lot easier for people to do in practice.
