The reason often given for not using small subroutines is runtime execution cost. Doing a subroutine call to perform a small function can slow down a program significantly if it is done all the time. One of my soapboxes is that you should almost always buy a bigger CPU rather than make software more complex -- but for now I'm going to assume that it is really important that you minimize execution time.
Here's a toy example to illustrate the point. Consider a saturating increment, which adds one to a value but makes sure the value doesn't exceed the maximum positive value for a signed integer:
int SaturatingIncrement(int x)
{
  if (x != MAXINT)
  {
    x++;
  }
  return(x);
}
So you might have some code that looks like this:
...
x = SaturatingIncrement(x);
...
z = SaturatingIncrement(z);
You might find that if you do a lot of saturating increments your code runs slowly. Usually when this happens I see one of two solutions. Some folks just paste the actual code in like this:
...
if (x != MAXINT) { x++; }
...
if (z != MAXINT) { z++; }
A big problem with this is that if you find a bug, you get to track down all the places where the code shows up and fix the bug in each one. Also, code reviews are harder because at each point you have to ask whether it is the same as the other places or whether there has been some slight change. Finally, testing can be difficult because now you have to test the MAXINT case for every variable to get complete coverage of all the branch paths.
A slightly better solution is to use a macro:
#define SaturatingIncrement(w) { if ((w) != MAXINT) { (w)++; } }
which lets you go back to more or less the original code. This macro works by textual substitution: the preprocessor pastes the macro body in place of each use. So the source you write is:
...
SaturatingIncrement(x);
...
SaturatingIncrement(z);
but the preprocessor uses the macro to feed into the compiler this code:
...
if (x != MAXINT) { x++; }
...
if (z != MAXINT) { z++; }
thus eliminating the subroutine call overhead. The nice things about a macro are that if there is a bug you only have to fix it in one place, and it is much clearer what you are trying to do during a code review. However, complex macros can be cumbersome, and there can be arcane bugs with macros. (For example, do you know why I put "(w)" in the macro definition instead of just "w"?) Arguably you can unit test a macro by invoking it, but that test may well miss strange macro expansion bugs.
The good news is that in most newer C compilers there is a better way. Instead of using a macro, just use a subroutine definition with the "inline" keyword.
inline int SaturatingIncrement(int x)
{
  if (x != MAXINT)
  {
    x++;
  }
  return(x);
}
The inline keyword tells the compiler to expand the code definition in-line with the calling function as a macro would do. But instead of doing textual substitution with a preprocessor, the in-lining is done by the compiler itself. So you can write your code using as many inline subroutines as you like without paying any run-time speed penalty. Additionally, the compiler can do type checking and other analysis to help you find bugs that can't be done with macros.
There can be a few quirks to inline. Some compilers will only inline up to a certain number of lines of code (there may be a compiler switch to set this). Some compilers will only inline functions defined in the same .c file (so you may have to #include that .c file to be able to inline it). Some compilers may have a flag to force inlining rather than just making that keyword a suggestion to the compiler. To be sure inline is really working you'll need to check the assembly language output of your compiler. But, overall, you should use inline instead of macros whenever you can, which should be most of the time.
During debugging, especially when you have a couple of inline functions calling one another, it is useful to see the call flow, and you don't care much about performance.
So I found it useful to have a macro like the following:
#ifdef DEBUG
#define inline inline
#else
#define inline
#endif
Good point korya! I think maybe you meant this:
#ifdef DEBUG
#define inline
#else
#define inline inline
#endif
which has the effect of making "inline" go away while debugging but making "inline" stay as-is during normal compilation.
Totally agree about preferring inline functions over function-like macros.
A couple quick points:
(1) In your macro version
#define SaturatingIncrement(w) { if ((w) != MAXINT) { (w)++; } }
I think where you have:
x = SaturatingIncrement(x);
you probably just mean
SaturatingIncrement(x);
(for the inline function version, which has a return value, the assignment is possible, but not for the macro version as it stands)
or you could do something like:
#define SaturatingIncrement(w) (((w) == MAXINT) ? (w) : ((w) + 1))
and then use the assignment syntax.
(2) Most of the embedded systems I work on are C, so inline is as far as you can go... but if you're using C++, you can take it a step further by using templates & specialization. For example, you could specialize an inline function template for the type uint8_t and use 255 instead of MAXINT.
Another use case would be using different saturation limits for signed & unsigned types.
And of course you could have classes (or class templates) which saturate at some other crazy value like 99 or whatever, even overloading operators such as operator++, etc. Plenty of corner cases & decisions to make if you go this route, though...
Thanks for the nice comments, Dan. You are right about the macro mistake I made. I've fixed it in the original post because not everyone reads the comments. I should have used the assignment syntax you indicate (which shows just how tricky macros can be to get right!).
You're right about all the C++ and other notes as folks get more advanced. The most important point of my posting is that the keyword "inline" exists -- which is not known by everyone writing embedded code.