Monday, December 2, 2013

Why You Need A Software-Specific Test Plan

Summary: Pretty much every embedded system goes through some sort of system test. But you also need to do software-specific testing in addition to that. For example, what if you ship a system with the watchdog timer accidentally turned off?



In essentially every embedded system there is some sort of product testing. Typically there is a list of product-level requirements (what the product does) and a set of tests designed to make sure the product works correctly. For many products there is also a set of tests dealing with fault conditions (e.g., making sure that an overloaded power supply will correctly shed load). Many companies think this is enough ... but I've found that such tests usually fall short.

The problem is that there are features built into your software that are difficult or nearly impossible to test in traditional product-level testing. Take the watchdog timer for example. I have heard more than one developer admit that at least one version of a product shipped with the watchdog timer accidentally turned off. How could this happen? Easy: a field problem is reported; a developer turns off the watchdog to do single-step debugging; the bug is found and fixed; the developer forgets to turn the watchdog back on; product test doesn't have a way to intentionally crash the software to see if the watchdog is working; the new software version ships with the watchdog timer still turned off.

Continuing with the watchdog example, how do you solve this? One way is to include user-accessible functions that exercise the watchdog timer by intentionally crashing the software. That sounds a bit dangerous, especially if you are worried about security. More likely you'll need some separate, special way of testing functions that you don't want visible to the end user. And you'll need a plan for executing those tests.
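
As a concrete illustration, here is a minimal sketch of what such a hidden test hook might look like. Everything here is hypothetical (the command name, the read_diag_command() input path, and the kick_watchdog() call are stand-ins for whatever your system actually uses); the point is simply that a deliberate "stop kicking the watchdog" path gives testers a way to confirm that the watchdog really trips and the system really recovers.

    /* Hypothetical hidden diagnostic hook for watchdog testing.
     * read_diag_command() and kick_watchdog() are stand-ins for your
     * own test-port input and vendor-specific watchdog driver.
     */
    typedef enum { DIAG_CMD_NONE, DIAG_CMD_WDT_HANG } diag_cmd_t;

    extern diag_cmd_t read_diag_command(void);  /* hypothetical test-port input */
    extern void kick_watchdog(void);            /* vendor-specific watchdog kick */

    void main_loop(void)
    {
        for (;;) {
            if (read_diag_command() == DIAG_CMD_WDT_HANG) {
                /* Deliberately stop kicking the watchdog.  A correctly
                 * configured watchdog should reset the system shortly;
                 * if nothing happens, the test has found a problem. */
                for (;;) { }
            }

            /* ... normal processing ... */

            kick_watchdog();  /* normal operation: service the watchdog */
        }
    }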

And ... well, here we are, needing a Software Test Plan in addition to a Product Test Plan. Maybe the software tests are done by the same testers who do product test, but that's not the point. The point is that you are likely to need a strategy for testing things that exist not because the end product user manual lists them as functions, but because the software requirements say they are needed to provide reliability, security, or other properties that aren't typically thought of as product functions. ("Recovers from software crashes quickly" is typically not something you boast about in the user manual.) For similar reasons, the normal product testers might not even think to test such things, because they are product experts rather than software experts.

So to get this right, the software folks are going to have to work with the product testers to create a software-specific test plan that covers what the software requirements need to have tested, even if those items have little to do with normal product functions. You can fold it into product test or not, but I'd suggest making it a separate test plan, because some tests probably need to be done by testers with skill and knowledge of software internals beyond what ordinary product testers have. Some products have a "diagnostic mode" that, for example, sends test messages on a network interface. Putting the software tests there makes a lot of sense.

But for products that don't have such a diagnostic mode, you might have to do some ad hoc testing before you build the final system by, for example, manually putting infinite loops into each task to make sure the watchdog picks them up. (Probably I'd use conditional compilation to do that -- but have a final product test make sure the conditional compilation flags are off for the final product!)
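
For instance, here is a minimal sketch of that conditional compilation idea, assuming a simple periodic task function. WATCHDOG_TEST_TASK_A is a made-up build flag, and as noted above you need a final product check to confirm that such flags are off in the shipping build.

    /* Sketch: intentionally hang one task under a test-only build flag.
     * WATCHDOG_TEST_TASK_A is hypothetical; make sure a final product
     * test confirms it is NOT defined in the shipping build.
     */
    void task_a(void)
    {
    #ifdef WATCHDOG_TEST_TASK_A
        /* Simulate a hung task: never return, never report progress. */
        for (;;) { }
    #endif

        /* ... normal task A work ... */
    }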

Here are some examples of areas you might want to put in your software test plan:

  • Watchdog timer is turned on and stays turned on; product reboots as desired when it trips
  • Watchdog timer detects timing faults with each and every task, with appropriate recovery (you need a way to kill or delay individual tasks to test this; one common monitoring pattern is sketched after this list)
  • Tasks and interrupts are meeting deadlines (watchdog might not be sensitive enough to detect minor deadline misses, but deadline misses usually are a symptom of a deeper problem)
  • CPU load is as expected (even if it is not 100%, a prediction that turns out to be wrong means you have a problem with your scheduling estimates)
  • Maximum stack depth is as expected (a stack-painting sketch for checking this appears after this list)
  • Correct versions of all code have been included in the build
  • Code included in the build compiles "clean" (no warnings)
  • Run-time error logs are clean at the end of normal product testing
  • Fault injection has been done for safety-critical systems to test whether single points of failure turn up (of course it can't be exhaustive, but if you find a problem you know something is wrong)
  • Exception handlers have all been exercised to make sure they work properly. (For example, if your code hits the "this can never happen" default in a switch statement, does the system do something reasonable, even if that means a system reset?)
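
On the per-task watchdog item above, here is one common pattern (not necessarily the approach used in any particular product): each task reports that it is alive, and a monitor kicks the hardware watchdog only when every task has checked in. The names below (NUM_TASKS, report_task_alive(), kick_watchdog()) are illustrative stand-ins for your own system's names.

    #include <stdbool.h>

    #define NUM_TASKS 3
    static volatile bool task_alive[NUM_TASKS];

    extern void kick_watchdog(void);   /* vendor-specific hardware watchdog kick */

    /* Each task calls this once per period when it finishes its work. */
    void report_task_alive(int task_id)
    {
        task_alive[task_id] = true;
    }

    /* Called periodically (slower than the slowest task's deadline):
     * kick the hardware watchdog only if every task has checked in.
     * Killing or delaying any one task, as suggested in the list above,
     * should therefore cause a watchdog reset. */
    void watchdog_monitor(void)
    {
        for (int i = 0; i < NUM_TASKS; i++) {
            if (!task_alive[i]) {
                return;   /* a task missed its check-in; let the watchdog trip */
            }
        }
        for (int i = 0; i < NUM_TASKS; i++) {
            task_alive[i] = false;   /* require fresh check-ins next period */
        }
        kick_watchdog();
    }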
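
And on the stack depth item, one widely used way to check the prediction is "stack painting": fill each task's stack with a known pattern at startup, run the tests, and then see how much of the pattern was never overwritten. The fill value and bounds below are illustrative; real code would use the linker- or RTOS-defined stack limits for each task.

    #include <stdint.h>
    #include <stddef.h>

    #define STACK_FILL 0xA5A5A5A5u   /* arbitrary "paint" pattern */

    /* Call once at startup, before the task runs, to paint its stack area. */
    void paint_stack(uint32_t *stack_lowest, size_t words)
    {
        for (size_t i = 0; i < words; i++) {
            stack_lowest[i] = STACK_FILL;
        }
    }

    /* Call after testing: counts how many words were never overwritten.
     * Assumes a descending stack, so the lowest addresses are used last.
     * Compare (words - unused) against your predicted maximum stack depth. */
    size_t unused_stack_words(const uint32_t *stack_lowest, size_t words)
    {
        size_t unused = 0;
        while ((unused < words) && (stack_lowest[unused] == STACK_FILL)) {
            unused++;
        }
        return unused;
    }
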
Note that some of the items in the list above are, strictly speaking, not really "tests." For example, making sure the code compiles free of static analysis warnings isn't done by running the code. But, it is properly part of a software test plan if you think of the plan as ensuring that the software you're shipping out meets quality and functionality expectations beyond those that are explicit product functions.

And, while we're at it, if any of the above areas aren't in your software requirements, they should be. Typically you're going to miss tests if there is nothing in the requirements saying that your product should have these capabilities.

If you have any areas like the above that I missed, please leave a comment.  I welcome your feedback!

5 comments:

  1. Hi,
    Nice article. After building any software it is necessary to test its features to make sure they function correctly. That's why most software companies hire testers to verify that the software works properly.

  2. Thanks for the nice article. I would add that you should also test that the correct version number is set for the final release version and that no BETA flags are switched on, for example.

  3. Hi,

    Would it make sense to include some kind of test for ABI compatibility? If a third-party library or OS is included in the project and is later updated, subtle issues may appear.

    Replies
    1. Nice comment. This is certainly an important point to get right. Usually what I have seen is that there is an OS and library version freeze during development, and OS, library, and other compatibility is subsumed into other testing. Then the build team does an audit to ensure that the correct versions of everything are in the build, rather than attempting to re-test everything to make sure that a new build is compatible (which is likely to be very tricky to pull off).

    2. Thank you for the clarification!


