Several times a year I fly or drive (or webex) to visit an embedded system design team and give them some feedback on their embedded software. I've done this perhaps 175 times so far (and counting). Every project is different and I ask different questions every time. But the following are the top five areas I've found that need attention in the past few years. "(Blog)" pointers will send you to my previous blog postings on these topics.
(Don't miss our growing video tutorial library that covers some of these issues.)
How does your project look through the lens of these questions?
(1) How is your code complexity?
- Is all your code in a single .c file (single huge main.c)?
- If so, you should break it up into more bite-sized .c and .h files (Blog)
- Do you have subroutines more than a printed page or so long (more than 50-100 lines of code)?
- If so, you should improve modularity. (Blog)
- Do you have "if" statements nested more than 2 or 3 deep?
- If so, in embedded systems much of the time you should be using a state machine design approach instead of a flow chart (or no chart) design approach; see the sketch after this list. (Book Chapter 13). Or perhaps you need to untangle your exception handling. (Blog)
- If you have very high cyclomatic complexity you're pretty much guaranteed to have bugs that you won't find in unit test or peer review. (Blog)
- Did you follow an appropriate style guideline and use a static analysis tool?
- Your code should compile with zero warnings for an appropriate warning set. Consider using the MISRA C rule set (Blog) and a good static analysis tool. (Blog)
- Do you limit variable scope aggressively, or is your code full of global variables? (The sketch after this list also shows limiting scope with a file-level static variable.)
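Here is a minimal sketch of the switch-based state machine style, using a hypothetical door controller; the states, inputs, and function names are made up for illustration. It also shows limiting scope: the state variable is file-level static rather than a global.

```c
#include <stdbool.h>

/* Hypothetical states for an illustrative door controller. */
typedef enum { STATE_IDLE, STATE_OPENING, STATE_OPEN, STATE_CLOSING } door_state_t;

/* File-scope static limits visibility to this .c file -- no global needed. */
static door_state_t door_state = STATE_IDLE;

/* Called once per main loop iteration; inputs are assumed sensor/button reads. */
void door_update(bool open_request, bool fully_open, bool fully_closed)
{
    switch (door_state) {
    case STATE_IDLE:
        if (open_request) { door_state = STATE_OPENING; }
        break;
    case STATE_OPENING:
        if (fully_open) { door_state = STATE_OPEN; }
        break;
    case STATE_OPEN:
        if (!open_request) { door_state = STATE_CLOSING; }
        break;
    case STATE_CLOSING:
        if (fully_closed) { door_state = STATE_IDLE; }
        break;
    default:    /* Defensive: an unknown state forces a safe reset. */
        door_state = STATE_IDLE;
        break;
    }
}
```

Each case stays shallow (one or two levels of nesting at most), and every reachable state is explicit instead of being buried in nested "if" statements.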
(2) How do you know your real time code will meet its deadlines?
- Did you set up your watchdog timer properly to detect timing problems and task death?
- The watchdog has to detect the death or hang of each and every task in the system to provide a reasonable level of protection; a monitor sketch follows this list. (Blog)
- How long to set the watchdog is a little trickier than you might think. (Blog)
- Do you know the worst case execution time and deadline for all your time-sensitive tasks?
- Just because the system works sometimes in testing doesn't mean it will work all the time in the field, whether the failure is due to a timing fault or other problems. (Blog)
- Did you do scheduling math for your system, such as main loop scheduling?
- You need to actually do the scheduling analysis; a worked example follows this list. (Blog ; Blog)
- Less than 100% CPU usage does not mean you'll meet deadlines unless you can verify that you meet some special conditions. And if you didn't know what those conditions were, you probably don't meet them. (Blog)
- Did you consider worst case blocking time (interrupts disabled and/or longest non-context-switched task situation)?
- If you have one long-running task that ties up the CPU only once per day, then you'll miss deadlines when it runs. But perhaps you get lucky on timing most days and never notice this in testing. (Blog)
- Did you follow good practices for interrupts, such as keeping interrupt service routines short? (A minimal pattern follows this list.)
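On the watchdog point, a common pattern for covering every task is to have each task set its own aliveness flag, and to let a single monitor kick the hardware watchdog only when all flags are set. This is a sketch under assumptions: the task count, the function names, and hw_watchdog_kick() are placeholders for your platform.

```c
#include <stdint.h>

#define NUM_TASKS 3U  /* Assumed task count for illustration. */

static volatile uint32_t alive_flags = 0U;  /* One bit per task. */

/* Each task calls this with its own ID every time it completes a cycle.
 * (On some platforms this read-modify-write itself needs protection.) */
void task_checkin(uint32_t task_id)
{
    alive_flags |= (1UL << task_id);
}

/* Hypothetical hardware kick; the real register write is platform specific. */
extern void hw_watchdog_kick(void);

/* Called periodically (e.g., from a timer tick). Kicks the hardware
 * watchdog only if EVERY task has checked in since the last kick. */
void watchdog_monitor(void)
{
    const uint32_t all_alive = (1UL << NUM_TASKS) - 1UL;
    if (alive_flags == all_alive) {
        hw_watchdog_kick();
        alive_flags = 0U;   /* Require fresh check-ins before the next kick. */
    }
}
```

If any one task dies or hangs, its flag stays clear, the monitor stops kicking, and the hardware watchdog eventually resets the system.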
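For the scheduling math, here is a worked example of the simplest check: the classic Liu & Layland utilization bound for rate monotonic scheduling. The task set is made up, and the bound only holds under specific assumptions (independent periodic tasks, deadline equal to period, preemptive fixed-priority scheduling) -- which is exactly the kind of special condition you need to verify for your system.

```c
#include <math.h>
#include <stdio.h>

/* Made-up task set: worst case execution time (WCET) and period, in ms. */
static const double wcet_ms[]   = {  2.0,  5.0,  10.0 };
static const double period_ms[] = { 10.0, 40.0, 100.0 };

int main(void)
{
    const int n = 3;
    double utilization = 0.0;
    for (int i = 0; i < n; i++) {
        utilization += wcet_ms[i] / period_ms[i];
    }
    /* Classic Liu & Layland bound for rate monotonic scheduling. */
    const double bound = n * (pow(2.0, 1.0 / (double)n) - 1.0);

    printf("U = %.3f, bound = %.3f\n", utilization, bound);
    /* Here U = 0.425 <= 0.780, so this task set is schedulable under
     * rate monotonic -- given the assumptions stated above. */
    return 0;
}
```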
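And on interrupt practice, one widely taught pattern is keeping the ISR minimal: record the event and defer the work to the main loop, where its timing can be scheduled and analyzed. A sketch (the ISR name and how it binds to a vector are platform specific):

```c
#include <stdbool.h>

/* volatile: shared between ISR and main loop context. */
static volatile bool rx_ready = false;

/* Hypothetical UART receive ISR. Keep it short: no parsing,
 * no printf, no delays -- just record that the event happened. */
void uart_rx_isr(void)
{
    rx_ready = true;
}

void main_loop_poll(void)
{
    if (rx_ready) {
        rx_ready = false;
        /* ... do the actual (possibly lengthy) message processing here ... */
    }
}
```

Short ISRs also help keep the worst case blocking time mentioned above small.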
(3) How do you know your testing and peer reviews were good enough?
- What's your unit test coverage?
- If you haven't exercised, say, 95% of your code in unit test, you're deferring those bugs until later, when they are more expensive to find and fix. (Blog) (There is an assumption that the remaining 5% are exception cases that should "never" happen, but it's even better to exercise them too.)
- In general, you should have coverage metrics and traceability for all your testing to make sure you are actually getting what you want out of testing. (Blog)
- What's your peer review coverage?
- Peer review finds half the defects for 10% of the project cost. (Blog) But only if you do the reviews! (Blog)
- Are your peer reviews finding at least 50% of your defects?
- If you're finding more than 50% of your defects in test instead of peer review, then your peer reviews are broken. It's as simple as that. (Blog)
- Here's a peer review checklist to get you started. (Blog)
- Does your testing include software specific aspects?
- A product-level test plan is pretty much sure to miss some potentially big software bugs that will come back to bite you in the field. You need a software-specific test plan as well. (Blog)
- How do you know you are actually following essential design practices, such as protecting shared variables to avoid concurrency bugs?
- Are you actually checking your code against style guidelines using peer review and static analysis tools? (Blog)
- Do your style guidelines include not just cosmetics, but also technical practices such as disabling task switching or using a mutex when accessing a shared variable? (Blog) Or avoiding stack overflow? (Blog) (Sketches of both follow this list.)
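For shared variable protection, one common approach on a small system (if you aren't using an RTOS mutex) is a brief critical section that disables interrupts around access to a multi-byte value. The disable/enable calls below are placeholders for your platform's intrinsics.

```c
#include <stdint.h>

/* Placeholders for platform-specific interrupt control intrinsics. */
extern void disable_interrupts(void);
extern void enable_interrupts(void);

/* Multi-byte value updated from an ISR and read from task code; a torn
 * read of the separate bytes would yield a corrupted value. */
static volatile uint32_t shared_counter = 0U;

uint32_t read_shared_counter(void)
{
    uint32_t snapshot;
    disable_interrupts();        /* Keep this critical section short... */
    snapshot = shared_counter;   /* ...it adds to worst case blocking time. */
    enable_interrupts();
    return snapshot;
}
```

Note that every such critical section adds to the worst case blocking time discussed in section (2), so keep them as short as possible.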
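For stack overflow, one classic detection technique is stack painting: fill the stack with a known pattern at startup, then periodically measure how much of it remains untouched. The stack boundary symbols below are assumptions; in practice they come from your linker script, and the paint routine must run before the stack is in use (or paint only below the current stack pointer).

```c
#include <stdint.h>
#include <stddef.h>

#define STACK_PAINT 0xA5A5A5A5UL

/* Hypothetical symbols from the linker script marking the stack region. */
extern uint32_t __stack_start[];
extern uint32_t __stack_end[];

/* Call from startup code; a real implementation must not overwrite the
 * portion of the stack currently in use. */
void stack_paint(void)
{
    for (uint32_t *p = __stack_start; p < __stack_end; p++) {
        *p = STACK_PAINT;
    }
}

/* Returns bytes of stack that have never been used (still painted).
 * Assumes a descending stack, so unused space sits at __stack_start. */
size_t stack_headroom_bytes(void)
{
    const uint32_t *p = __stack_start;
    while ((p < __stack_end) && (*p == STACK_PAINT)) {
        p++;
    }
    return (size_t)(p - __stack_start) * sizeof(uint32_t);
}
```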
(4) Is your software process methodical and rigorous enough?
- Do you have a picture showing the steps in your software development and problem fix process?
- If it's just in your head then probably every developer has a different mental picture and you're not all following the same process. (Book Chapter 2)
- Are there gaps in the process that are causing you pain or leading to problems?
- Very often technical defects trace back to cutting corners in the development process or skipping review/test steps.
- Skipping peer reviews and unit test in the hopes that product testing catches all the problems is a dangerous game. In the end, cutting corners on the development process takes at least as long and tends to ship with higher defect rates.
- Are you doing a real usability analysis instead of just having your engineers wing it?
- Do you have configuration management, version control, bug tracking, and other basic software development practices in place?
- You'd think we would not have to ask. But we find that we do.
- Do you prioritize bugs based on value to project rather than severity of symptoms? (Blog)
- Is your test to development effort ratio appropriate? Usually you should spend about twice as many hours on test+reviews as on creating the design and implementation.
- Time and again, when we poll companies doing a reasonable job on embedded software of decent quality, we find the following ratios: one tester for every developer (1:1 head count ratio), and two test/review hours (including unit test and peer review) for every development hour (2:1 effort ratio). The companies that go light on test/review usually pay for it with poor code quality. (Blog)
- Do you have the right amount of paperwork (neither too heavy nor too light)?
- Yes, you need to have some paper even if you're doing Agile. (Blog) It's like having ballast in a ship. Too little and you capsize. Too much and you sink. (Probably you have too little unless you work on military/aerospace projects.) And you need the right paper, not just paper for paper's sake. (Book Chapters 3-4)
(5) What about dependability aspects?
- Have you considered maintenance issues, such as patch deployment?
- If your product is not disposable, what happens when you need to update the firmware?
- Have you done stress testing and other evaluation of robustness?
- If you sell a lot of units, they will see things in the field you never imagined, and in many cases will (you hope) run for years without rebooting. What's your plan for testing that? (Blog)
- Don't forget specialty issues such as EEPROM wearout (Blog), time/date management (Blog), and error detection code selection (Blog).
- Do your requirements and design address safety and security?
- Maybe safety (Blog) and security (Blog) don't matter for you, but that's increasingly unlikely. (Blog)
- Probably if this is the first time you've dealt with safety and security you should either consult an internal expert or get external help. Some critical aspects of safety and security take experience to understand and get right, such as avoiding security pitfalls (Blog) and eliminating single points of failure. (Blog)
- And while we're at it, you do have written, complete, and measurable requirements for everything, don't you? (Book Chapters 5-9)
Finally, the point of my book is to explain how to detect and resolve the most common issues I see in design reviews. For most of these topics the book goes into more detail than the blog postings. But the above list, and the links to many of the blog postings I've made since releasing the book, should get you started.