Monday, May 22, 2017

#define vs. const

Is your code full of "#define" statements?  If so, you should consider switching to the const keyword.

Old school C:
    #define MYVAL 7

Better approach:
    #include <stdint.h>  /* for uint32_t */
    const uint32_t myVal = 7;

Here are some reasons you should use const instead of #define:
  • #define has global scope, so you're creating (read-only) global values every time you use #define. Global scope is evil, so don't do that.  (Read-only global scope for constant values is a bit less evil than global variables per se, especially if you can't use the namespace features of C++. But gratuitous global scope is always a bad idea.) A const alternative can obey scoping rules, including being purely local if defined inside a procedure, or more commonly file static with the "static" keyword.
  • Const lets you do more aggressive type checking (depending upon your compiler and static analysis tools, especially if you use a typedef more specific than built-in C data types). While C is a bit weak in this area compared to other languages, a classic example is that a const lets you identify a number as being in feet or meters, while the #define approach is just as if you'd typed the number 7 in with no units. The #define approach can bite you if you use the wrong value in the wrong place. Type checking is an effective way to find bugs, and using #define gives up an opportunity to let static analysis tools help you with that.
  • Const lets you use the value as if it were a variable when you need to (e.g., passing the address of the variable) without having to change how the variable is defined. (See the sketch after this list.)
  • #define in general is so bug-prone that you should minimize its use just to avoid having to spend time asking "is this one OK?" in a peer review. In old-school code, most #define uses tend to be stand-ins for const values, so getting rid of them can dramatically reduce the peer review burden of sifting through hundreds of #define statements to look for problems.
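
To make the scoping, type-checking, and address-of points concrete, here is a minimal sketch. The unit-bearing typedef names and values are made up for illustration, and the unit-distinction checking depends on your static analysis tools rather than the C compiler itself:

    #include <stdint.h>

    /* Hypothetical unit-bearing typedefs. The C compiler treats these as
       plain aliases, but a static analysis tool configured to treat them
       as distinct types can flag code that mixes feet with meters. */
    typedef uint32_t feet_t;
    typedef uint32_t meters_t;

    /* File scope, but not global: "static" keeps it out of other modules. */
    static const feet_t RUNWAY_LEN = 7u;

    uint32_t double_it(const uint32_t *p) { return (*p) * 2u; }

    uint32_t example(void)
    {
        /* Purely local constant, invisible outside this function. */
        const meters_t approach_height = 100u;

        /* A const object has an address, so you can pass a pointer to it;
           there is no way to do that with a bare #define. */
        return double_it(&RUNWAY_LEN) + approach_height;
    }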
Here are some common myths about this tradeoff. (Note that on some systems these statements might be true, especially if you have an old and lame compiler. But they don't necessarily have to be true, and they are often false, especially on newer chips with newer compilers.)
  • "Const wastes memory."  False if you have a compiler that is smart enough to do the right thing. Sure, if you want to pass a pointer to the const it will actually have to live in memory somewhere, but you can't even pass a pointer to a #define at all. One of the points of "const" is to give the compiler a hint that lets it optimize memory footprint.
  • "Const won't work for X." Generally false if you have a newer compiler, and especially if you are using a mostly-C subset of the capability of a C++ compiler, as is increasingly common. And honestly, most of the time #define is just being used as a plain old integer const to get rid of magic numbers. const will work fine.  (If you have magic numbers instead of #define, then you have bigger problems than this even.) Use const for the no-brainer cases. Something is probably wrong if everything about your code is so special you need #define everywhere.
  • "Const hassles me about type conversions."  That's a feature to prevent you from being sloppy!  So strictly speaking the compiler doing this is not a myth. The myth is that this is a bad thing.
There are plenty of discussions on this topic.  You'll also see that some folks advocate using enums in some situations, which we'll get to another time. For now, changing as many #defines as you can to consts is likely to improve your code quality, and perhaps flush out a few bugs you didn't realize you had.

Be careful when reading discussion group postings on this topic.  There is a lot of disinformation out there about performance and other potential tradeoff factors, usually based on statements about 20-year-old versions of the C language or experiences with compilers that have poor optimization capability.  In general, you should use const by default unless your particular compiler/system/usage presents a compelling case not to.

See also Barr Group C coding standard rule 1.8.b, which says to use const; that standard has a number of other very useful rules.


Monday, May 8, 2017

Optimize for V&V, not for writing code

Writing code should be made more difficult so that Verification & Validation can be made easier.

I first heard this notion years ago at a workshop in which several folks from industry who build high assurance software (think flight controls) stood up and said that V&V is what matters. You might expect that from flight control folks, but their reasoning applies to pretty much every embedded project. That's because it is a matter of economics. 

Multiple speakers at that workshop said that aviation software can require 4 or 5 hours of V&V for every 1 hour of creating software. It makes no economic sense to make life easy for the 1-hour side of that ratio at the expense of making life painful for the 5-hour side.

Good, but non-life-critical, embedded software requires about 2 hours of V&V for every 1 hour of code creation, so the economic argument still holds, with a still-compelling multiplier of 2:1.  I don't care if you're using Vee, Agile, a hybrid model, or whatever. You're spending time on V&V, including at least some activities such as peer review, unit test, creating automated tests, performing testing, chasing down bugs, and so on. For embedded products that aren't flaky, you probably spend more time on V&V than you do on creating the code. If you're doing TDD, you're taking an approach that has a testing viewpoint built in already, starting from testing and working outward from there. But that's not the only way to benefit from this observation.

The good news is that making code writing "difficult" does not involve gratuitous pain. Rather, it involves being smart and a bit disciplined so that the code you produce is easier for others to perform V&V on. A bit of up front thought and organization can save a lot on downstream effort. Some examples include:
  • Writing concise but helpful code comments so that reviewers can understand what you meant.
  • Writing code to be obvious rather than clever, again to help reviewers.
  • Following a style guide to make your code consistent, and thus easier to understand.
  • Writing code that compiles cleanly under static analysis, avoiding time wasted in test finding defects that a tool could have caught, and sparing reviewers from puzzling out which warnings matter and which don't.
  • Spending some time to make your unit interfaces easier to test, even if it requires a bit more work designing and coding the unit.
  • Spending time making it easy to trace between your design and the code. For example, if you have a statechart, make sure the statechart uses names that map directly to enum names rather than using arbitrary state variables such as "magic number" integers between 1 and 7; see the sketch after this list. This makes it easier to ensure that the code and design match. (For that matter, just using statecharts to provide a guide to what the code does also helps.)
  • Spending time up front documenting module interaction so that integration testers don't have to puzzle out how things are supposed to work together. Sequence diagrams can help a lot.
  • Making the requirements both testable and easy to trace. Make every requirement idea a stand-alone sentence or paragraph and give it a number so it's easy to trace to a specific test primarily designed to test that particular requirement. Avoid having requirements in huge paragraphs of free-form text that mix lots of different concepts together.
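
To illustrate the statechart traceability point, here is a minimal sketch; the oven state machine and its names are made up for illustration:

    /* State names copied verbatim from the (hypothetical) design statechart,
       so a reviewer can match each enum value to a box on the diagram. */
    typedef enum {
        STATE_IDLE,      /* "Idle" box on the statechart */
        STATE_HEATING,   /* "Heating" box                */
        STATE_COOLDOWN,  /* "Cooldown" box               */
        STATE_FAULT      /* "Fault" box                  */
    } oven_state_t;

    static oven_state_t current_state = STATE_IDLE;

    /* Compare with the magic-number alternative:
           static int current_state = 3;
       Which box on the statechart is state 3? A reviewer can't tell
       without digging, which makes checking code against design painful. */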
Sure, these all sound like good ideas, but many developers skip or skimp on them because they don't think they can afford the time. They don't have time to make their code clean because they're too busy writing bugs to meet a deadline. Then they, and everyone else, pay for this during the test cycle. (I'm not saying the programmers are necessarily the main culprits here, especially if they didn't get a vote on their deadline. But that doesn't change the outcome.)

I'm here to say you can't afford not to follow these basic code quality practices. That's because every hour you save by cutting corners up front probably costs you double (or more) downstream by making V&V more painful than it should be. It's always hard to invest in downstream benefits when the pressure is on, but skimping on code quality costs you dearly later.

Do you have any tricks to make code easier to understand that I missed?
