Perfect doesn't ever happen. "Never" is longer than you have available to test for software dependability. And it is the rare everyday embedded system that has taken a rigorous approach to ensuring safety.
It is also true that in many areas it is too risky, from a liability point of view, to write down a concrete requirement for anything less than perfection. And even if you do specify a less-than-perfect target, you may only be guessing at the number. (Is it OK for your system to crash once every 1000 years, or will 900 years do? Did you guess when you answered that, or do you have a concrete basis for making that tradeoff?)
If you can, and are permitted to, specify a concrete, non-perfect set of requirements for your product, you should. But if you can't, consider instead defining a set of acceptance criteria that will at least let you make actual measurements to validate that your system is good enough. These can be either process requirements or actual tests. Some examples include:
- System shall not crash during one full week of stress tests.
- All sources of crashes during testing shall be tracked down to root cause, and eliminated if appropriate.
- System shall perform an emergency shutdown if a defined safety requirement is violated at run time. (This assumes you are able to monitor these requirements effectively at run time; a code sketch follows this list.)
- All system errors shall be logged for analysis in failed units returned for factory service.
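To make the last two items concrete, here is a minimal sketch of what a run-time safety monitor with nonvolatile error logging might look like on an embedded target. Everything in it is an assumption for illustration: the temperature limit and the functions read_motor_temperature_c(), get_tick_count(), nv_log_write(), and emergency_shutdown() are hypothetical platform hooks, not part of any particular system.

```c
#include <stdint.h>

/* Hypothetical error record kept in nonvolatile memory so that a failed
 * unit returned for factory service still carries evidence of what went
 * wrong. Field names and sizes are illustrative only. */
typedef struct {
    uint32_t error_code;    /* which safety requirement was violated   */
    uint32_t timestamp;     /* system tick count when it was detected  */
    int32_t  observed;      /* value that violated the requirement     */
} error_record_t;

/* These would be provided by the real platform; declared here so the
 * sketch is self-contained. All are assumed, not real APIs. */
extern int32_t  read_motor_temperature_c(void);       /* sensor read        */
extern uint32_t get_tick_count(void);                 /* time source        */
extern void     nv_log_write(const error_record_t *); /* nonvolatile logger */
extern void     emergency_shutdown(void);             /* safe-state entry   */

#define MAX_MOTOR_TEMP_C  85      /* example run-time safety limit (assumed) */
#define ERR_OVERTEMP      0x0001u

/* Called periodically (e.g., from the main loop or a timer task).
 * Checks one run-time safety requirement; on violation, logs a record
 * for later factory analysis and then forces the system to a safe state. */
void safety_monitor_step(void)
{
    int32_t temp = read_motor_temperature_c();

    if (temp > MAX_MOTOR_TEMP_C) {
        error_record_t rec = {
            .error_code = ERR_OVERTEMP,
            .timestamp  = get_tick_count(),
            .observed   = temp
        };
        nv_log_write(&rec);    /* preserve evidence before shutting down */
        emergency_shutdown();  /* does not return */
    }
}
```

Keeping the monitor this small and separate from the main application logic makes it practical to review and test on its own, which is what makes the "emergency shutdown" and "log everything for factory analysis" acceptance criteria checkable in the first place.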