Monday, December 7, 2015

Multi-Rate Main Loop Tasking


Recently I was looking for an example of a prioritized, cooperative multitasking main loop and realized it is surprisingly difficult to find one that is (1) readily available and (2) comprehensible.

Sure, once you understand the concept maybe you can dig through an industrial-strength implementation with all sorts of complexity. But good luck getting up the learning curve!  So, here is a simple (I hope!) description of a multi-rate main loop scheduler.

First of all, we're talking about non-preemptive multitasking. This is variously called main loop scheduling, a main loop tasker, a prioritized cooperative multitasker, a cyclic executive, a non-preemptive scheduler, and no doubt a bunch of other terms. (Not all of these terms are precisely descriptive, but as search terms they'll get you in the ballpark.) The general idea is to be able to schedule multiple tasks that operate at different periods without using an RTOS and without resorting to the generally bad idea of stuffing everything into timer-based interrupts.

Single Rate Main Loop

The starting point is a single-rate main loop. This is a classical main loop schedule in which all tasks are run to completion over and over again:

void main(void)
{ ... initialization ...

  while(1)
  { Task0();
    Task1();
    Task2();
    Task3();
  }
}


The good news about this is that you don't need an RTOS, and it's really hard to get things wrong. (In other words reviews and testing are easy to do well.)

The bad news is that all tasks have to run at the same period: once per trip around the loop. That means that if one task needs to run faster than the whole loop can complete, you'll miss its deadlines.

I've seen way too many hacks that use conditional execution based on CPU load to sometimes skip tasks. But ad hoc approaches have the problem that you can't really know whether you'll miss deadlines. Instead, you should use a methodical approach that can be analyzed mathematically to make sure you'll meet deadlines. The approach people generally take is some variation of a multi-rate main loop.

Multi-Rate Main Loop

The idea behind a multi-rate main loop is that you can run each task at a different periodic rate. Each task (which is just a subroutine) still runs to completion, so this is not a full-up preemptive multitasking system. But it is relatively simple to build, and flexible enough for many embedded systems.

Here is some example code of the main loop itself, with some explanation to follow.

void main(void)
{ ... initialization ...

  while(1)
  { // at most one ready task runs per pass; each flag is set by the timer ISR
    if(       flag0 )  { flag0 = 0; Task0(); }
    else if ( flag1 )  { flag1 = 0; Task1(); }
    else if ( flag2 )  { flag2 = 0; Task2(); } 
    else if ( flag3 )  { flag3 = 0; Task3(); }
  }
}

The way this code works is as follows.  All the tasks that need to be run have an associated flag set to 1.  So if Task1 and Task2 are the only tasks that need to run, flag0 is 0, flag1 is 1, flag2 is 1, and flag3 is 0. The code crawls through an "else if" cascade looking for the first non-zero flag.  When it finds a non-zero flag it executes that task, and only that task.

Note that the main loop sets a task's flag back to zero just before running it, so each task runs exactly once per activation. If all flags are zero then no task is executed and the while(1) loop simply spins away until a flag finally becomes non-zero. More about how flags get set to 1 in a moment.

After executing at most one task, the loop goes back to the beginning. Because at most one task is executed per iteration of the main while(1) loop, the tasks are prioritized. Task0 has the highest priority, and Task3 the lowest priority.

Yes, this prioritization means that if your CPU is overloaded Task0 may execute many times and Task3 may never get a turn. That's why it's important to get scheduling right (this will be a topic in a later blog posting).

Multi-Rate Timers

The main loop wasn't so bad, except we swept under the rug the messy business of getting the flags set properly.  Trying to do that in the main loop generally leads to problems, because a long task will cause many milliseconds to go by between timer checks, and it is too easy to have a bug that misses setting a flag some of the time. Thus, in general you tend to see flag maintenance in the timer interrupt service routine.

Conceptually the code looks like this, and for our example lives in a timer interrupt service routine (ISR) that is called every 1 msec.  A variable called "timer" keeps track of system time and is incremented once every msec.

 // in a timer ISR that is called once every msec
 timer++;
 if ((timer %   5) == 0) { flag0 = 1; } // enable 5 msec task0
 if ((timer %  10) == 0) { flag1 = 1; } // enable 10 msec task1
 if ((timer %  20) == 0) { flag2 = 1; } // enable 20 msec task2
 if ((timer % 100) == 0) { flag3 = 1; } // enable 100 msec task3
 if (timer >= 100) { timer = 0; } // avoid timer overflow bug


Every 5 msec the timer will be zero modulo 5, every 10 msec the timer will be zero modulo 10, and so on.  So this gives us tasks with periods of 5, 10, 20, and 100 msec.

Division is slow, and on many low-end microcontrollers it should be avoided in an ISR. So it is common to see a set of counters (one per flag), where each counter is set to the period of a particular task and counts down toward zero once per msec. When a counter reaches zero the associated flag is set to 1 and the counter is reset to the task's period. This takes a little more RAM, but avoids division. How it's implemented depends upon your system tradeoffs.
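Here is a rough sketch of what that counter-based ISR might look like (my illustration rather than code from any particular system; the counter variables are hypothetical and would need to be initialized to each task's period):

 // in the 1 msec timer ISR -- countdown counters instead of modulo checks
 if (--counter0 == 0) { counter0 =   5; flag0 = 1; } // enable 5 msec task0
 if (--counter1 == 0) { counter1 =  10; flag1 = 1; } // enable 10 msec task1
 if (--counter2 == 0) { counter2 =  20; flag2 = 1; } // enable 20 msec task2
 if (--counter3 == 0) { counter3 = 100; flag3 = 1; } // enable 100 msec task3

A nice side benefit is that the counters never grow, so this version doesn't need an overflow reset.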

The last line of the modulo-based ISR code, by contrast, is there to avoid weird things happening when the timer variable overflows. The reset to zero should happen at the least common multiple of all the task periods, which in this case happens to be equal to the longest period.

Concurrency Issues

As with any scheduling system there are potential concurrency issues. One subtle one is that the timer ISR can fire while the main loop is part-way down the else if cascade. If a high-priority flag and a low-priority flag are both set on the same timer tick, the main loop may have already passed the high-priority check, so the low-priority task runs first that time around. It turns out that this doesn't make the worst case latency much worse. You could try to synchronize things, but that adds complexity. Another way to handle this is to copy the current time into a temporary variable and do the checks for when to run each task in the main loop instead of the timer ISR.
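Here is a rough sketch of that last approach (again my illustration, not production code; DISABLE_INTERRUPTS() and ENABLE_INTERRUPTS() are placeholders for whatever interrupt masking your compiler or hardware provides, and a 32-bit msec counter is assumed):

#include <stdint.h>

volatile uint32_t timer = 0;  // incremented by the timer ISR once per msec

void main(void)
{ uint32_t now;
  uint32_t last0 = 0, last1 = 0, last2 = 0, last3 = 0; // last start times
  ... initialization ...

  while(1)
  { DISABLE_INTERRUPTS();     // take an atomic snapshot of the current time
    now = timer;
    ENABLE_INTERRUPTS();

    if      ((now - last0) >=   5) { last0 = now; Task0(); } // 5 msec task
    else if ((now - last1) >=  10) { last1 = now; Task1(); } // 10 msec task
    else if ((now - last2) >=  20) { last2 = now; Task2(); } // 20 msec task
    else if ((now - last3) >= 100) { last3 = now; Task3(); } // 100 msec task
  }
}

Note that this sketch measures each period from when the task last started, so the periods can drift a little if a task starts late; whether that matters depends on your application.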

It's also important to note that there is a potential concurrency problem in writing the flags, since both the ISR and the main loop write the flag variables. In practice that bug will only bite when you're already missing deadlines, but good coding style dictates disabling interrupts when the main loop updates a flag value. (That isn't shown in the main loop code above, to keep things simple for the purpose of explanation.)
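For what it's worth, here is roughly what one protected branch of the else if cascade might look like (a sketch only; the interrupt masking macros are again hypothetical placeholders, and the shared flags would normally also be declared volatile):

    else if ( flag1 )
    { DISABLE_INTERRUPTS();   // keep the timer ISR from writing flag1 here
      flag1 = 0;
      ENABLE_INTERRUPTS();
      Task1();
    }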

The Big Picture

OK, that's pretty much it.  We have a main loop that runs each task when its ready-to-run flag is set, and a timer ISR that sets a ready-to-run flag for each task at the desired period. The result is a system that has the following properties:
  • Each task runs once during its assigned period
  • The tasks are prioritized, so for example task 2 only runs when task 0 and task 1 do not need to run
The big benefit is that, so long as you pay attention to schedulability math, you can run both fast and slow tasks without needing a fancy RTOS and without missing deadlines.

In terms of practical application this is quite similar to what I often see in commercial systems. Sometimes developers use arrays of counters, arrays of flags, and sometimes even arrays of pointers to functions if they have a whole lot of tasks, so the code can be a generic loop rather than spelling out each flag name and each task name. That might be necessary, but I recommend keeping things simple and avoiding the arrays and pointers if it is practical for your system.
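If you do go the generic route, a table-driven version might look roughly like this sketch (my illustration only; the names, types, and periods are made up, and it keeps the same rule of running at most one ready task per pass, with the lowest index having the highest priority):

#include <stdint.h>

#define NUM_TASKS 4
typedef void (*task_fn)(void);

extern void Task0(void);  extern void Task1(void);
extern void Task2(void);  extern void Task3(void);

static task_fn const     task[NUM_TASKS]    = { Task0, Task1, Task2, Task3 };
static const uint16_t    period[NUM_TASKS]  = {  5, 10, 20, 100 }; // msec
static volatile uint16_t counter[NUM_TASKS] = {  5, 10, 20, 100 }; // countdowns
static volatile uint8_t  flag[NUM_TASKS]    = {  0,  0,  0,   0 }; // ready flags

// called from the 1 msec timer ISR: count down and set ready flags
void SchedulerTick(void)
{ for (uint8_t i = 0; i < NUM_TASKS; i++)
  { if (--counter[i] == 0) { counter[i] = period[i]; flag[i] = 1; }
  }
}

// called from the while(1) loop in main: run at most one ready task per pass
void RunHighestPriorityReadyTask(void)
{ for (uint8_t i = 0; i < NUM_TASKS; i++)
  { if (flag[i]) { flag[i] = 0; task[i](); break; }
  }
}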

Coming soon ... real time analysis of a multi-rate main loop
