The idea of traceability is simple: look at the inputs to a design process and then look at the outputs from that same process. Compare them to make sure you didn't miss anything.
The best place to start using traceability is usually comparing requirements to acceptance tests. Here's how:
Make a table (spreadsheets work great for this). Label each column with a requirement. Label each row with an acceptance test. For each acceptance test, put an X in the column of every requirement that the test exercises to a non-trivial degree.
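To make the bookkeeping concrete, here is a minimal sketch of such a table in Python (the requirement and test names are hypothetical; a spreadsheet works just as well). Each test maps to the set of requirements it exercises, and each set entry plays the role of one X mark:

```python
# Hypothetical traceability table: each acceptance test maps to the set of
# requirements it exercises to a non-trivial degree. Each entry in a set
# plays the role of one "X" mark in the spreadsheet version.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

trace_table = {
    "TEST-A": {"REQ-1", "REQ-2"},  # TEST-A exercises REQ-1 and REQ-2
    "TEST-B": {"REQ-3"},
    "TEST-C": set(),               # no X marks in this row
}
```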
When you're done making the table, you can do traceability analysis. An empty row (an acceptance test with no "X" marks) means you are running a test that isn't required. It might be a really good test to run -- in which case your requirements are missing something. Or it might be a waste of time.
An empty column (a requirement with no "X" marks) means you have a requirement that isn't being tested. That could be a hole in your testing, a requirement that didn't get implemented, or a requirement that can't be tested. No matter the cause, you've got a problem. (It's OK to use non-testing validation such as a design review to check some requirements. For traceability purposes, put it in as a "test" even though it doesn't involve actually running the code.)
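Both checks are mechanical once the table exists. Here is a sketch of the analysis, repeating the hypothetical table from above so the snippet runs on its own:

```python
# Same hypothetical table as the earlier sketch, repeated for self-containment.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
trace_table = {
    "TEST-A": {"REQ-1", "REQ-2"},
    "TEST-B": {"REQ-3"},
    "TEST-C": set(),
}

# Empty rows: acceptance tests that don't trace to any requirement.
empty_rows = [test for test, reqs in trace_table.items() if not reqs]

# Empty columns: requirements not covered by any test (or review "test").
covered = set().union(*trace_table.values())
empty_columns = requirements - covered

print("Tests tracing to no requirement:", sorted(empty_rows))  # ['TEST-C']
print("Requirements with no test:", sorted(empty_columns))     # ['REQ-4']
```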
If you have a table with no empty columns and no empty rows, then you've achieved complete traceability -- congratulations! This doesn't mean you're perfect. But it does mean you've managed to avoid some easy-to-detect gaps in your software development efforts.
These same ideas can be used elsewhere in the design process to avoid similar mistakes. The point for most projects is to apply them in a way that doesn't take a lot of time but still catches "stupid" mistakes early on. It may seem simplistic, but in my experience keeping written traceability tables helps. You can find out more about traceability in my book, Chapter 7: Tracing Requirements To Test.