Technique to compute CPU idle time?

#1

In a single-CPU microprocessor that supports a multi-tasking scheduler (semi-RTOS, but non-preemptive), there is an idle task that runs when no other tasks are ready.

The idle task can do whatever it likes. Currently it does nothing and returns to the scheduler, which then waits for an event to make some task ready. While waiting, the scheduler calls the idle task again.

So, what method comes to mind for having the idle task log a metric from which the percentage of idle time can be derived? There is a periodic clock interrupt, and the idle task (or a new metric-gathering task) could make note of clock ticks, and so on, while idle.

Something like the ratio of clock ticks while idle to ticks while busy. Note that the scheduler is non-preemptive; the idle task must yield or wait(n) clock ticks.

What's the usual technique?

#2

I don't know what the CS pros do, but in my rudimentary systems I've previously set a pin high in the idle task and low inside the other routines.

An LED or an oscilloscope can show the approximate high (idle) time.

Not too much overhead for this method.
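A rough sketch of that, assuming an AVR-style port and an arbitrary marker pin (PB0 here is just a placeholder choice):

#include <avr/io.h>

void io_init(void)
{
    DDRB |= (1 << PB0);        /* make the marker pin an output */
}

void idle_task(void)
{
    PORTB |= (1 << PB0);       /* high = idle */
}

void some_other_task(void)     /* first thing in every busy routine */
{
    PORTB &= ~(1 << PB0);      /* low = busy */
    /* ... real work ... */
}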

JC

#3

Yeah, I did that. But I need a metric that I can log and send to an Internet host.

#4

Then feed it through an RC filter and read the voltage with an ADC input.

The RC filter's time constant sets the span of time over which the measurement is averaged.
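To turn the filtered voltage into a number, a sketch assuming a 10-bit ADC behind the filter (read_adc() is a hypothetical helper):

#include <stdint.h>

uint16_t read_adc(void);       /* hypothetical 10-bit read, 0..1023 */

/* The filter output sits at Vcc * (idle fraction), averaged over
 * roughly tau = R*C seconds, so the reading maps straight to percent. */
uint8_t idle_percent(void)
{
    uint16_t adc = read_adc();
    return (uint8_t)((adc * 100UL) / 1023);
}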

JC

#5

Measure and average the time since the idle task was last invoked. The busier other tasks are (and the more there are), the longer the time.

Be aware that non-preemptive schedulers are very "binary" in their behavior. Either you get through all the tasks fast enough, in which case everything is fine, or you don't, in which case things go to hell pretty quickly. The "time between idle task invocations" is a metric, but it's difficult to translate into something like "54% busy." And customers get all bent out of shape when the metric goes from 30% to 60%, even though, given the nature of things, that's fine.
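A rough sketch of the measurement, assuming the scheduler keeps a tick counter (get_ticks() is a placeholder):

#include <stdint.h>

uint32_t get_ticks(void);        /* placeholder: scheduler's tick count */

static uint32_t last_call;       /* tick of the previous idle entry   */
static uint32_t max_gap;         /* worst gap seen since last report  */
static uint32_t avg_gap_x16;     /* running average, scaled by 16     */

void idle_task(void)
{
    uint32_t now = get_ticks();
    uint32_t gap = now - last_call;
    last_call = now;

    if (gap > max_gap)
        max_gap = gap;           /* the "goes to hell" case shows up here first */

    avg_gap_x16 += gap - avg_gap_x16 / 16;   /* average = avg_gap_x16 / 16 */
}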

#6

Thanks much, sir westfw.
The time since the idle task was last invoked... would be in units of clock ticks. That translates to mSec easily enough.

No customer to worry about, just me wondering how much spare CPU time there is on average. All the real-time events are interrupt driven and buffered, so there's no highly time-critical code, other than avoiding buffer-full (and the buffers are large compared to the data rates). One interface is Ethernet (the IP stack is off-loaded from the main CPU), so an overworked main CPU simply means the packet rate slows a bit.

I'll work on some code for the idle task delta time running average.

#7

How about simply counting the number of entries into idle in some time interval? It sounds like these have a known duration, so counting the number is equivalent to timing.

Jim

 


#8

I think you need a separate high-resolution hardware timer. Instrument every task to read the start time and end time, and add them up; that's the total execution time. What's left over is idle time. The darn OS can't deal with microseconds if the smallest unit of time it knows about is a 5 ms tick. I think.
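Roughly, with a free-running microsecond timer (timer_now_us() is hypothetical):

#include <stdint.h>

uint32_t timer_now_us(void);     /* hypothetical free-running us counter */

static uint32_t busy_us;         /* accumulated task execution time */

void run_task(void (*task)(void))   /* dispatch wrapper around every task */
{
    uint32_t t0 = timer_now_us();
    task();
    busy_us += timer_now_us() - t0;
}

/* Over a measurement window of window_us: idle_us = window_us - busy_us,
 * independent of the 5 ms scheduler tick. */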


#9

I'm with Jim here.

Determine the time for one call to the idle task. Let the idle task increment a counter variable.

Set up one extra task that activates periodically and reads the counter variable, computes the load, sends it and clears the counter variable.

Done!
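A minimal sketch of the pair, assuming one pass through the idle task takes a fixed, known time; the calibration constant and send_report() are placeholders:

#include <stdint.h>

void send_report(uint8_t load_pct);          /* placeholder uplink */

/* Calibrate on the real system: idle entries per second with no
 * other tasks ready. The value here is arbitrary. */
#define IDLE_CALLS_PER_SEC_UNLOADED  122000UL

static volatile uint32_t idle_count;

void idle_task(void)
{
    idle_count++;                            /* one fixed-duration pass */
}

void report_task(void)                       /* scheduled once per second */
{
    uint32_t n = idle_count;
    idle_count = 0;

    uint32_t idle_pct = (n * 100UL) / IDLE_CALLS_PER_SEC_UNLOADED;
    if (idle_pct > 100)                      /* guard against jitter */
        idle_pct = 100;
    send_report((uint8_t)(100 - idle_pct));
}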


#10

I made a small profiling lib with µs precision. With a profile of each ISR and each task, it's easy to compute what the load is.

A profile is min / max / total time spent / number of samples.
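The per-profile record amounts to something like this (a sketch; field names are illustrative, not the actual lib):

#include <stdint.h>

typedef struct {
    uint32_t min_us;
    uint32_t max_us;
    uint32_t total_us;      /* total time spent */
    uint32_t samples;       /* number of samples */
} profile_t;

void profile_add(profile_t *p, uint32_t dt_us)
{
    if (p->samples == 0 || dt_us < p->min_us) p->min_us = dt_us;
    if (dt_us > p->max_us)                    p->max_us = dt_us;
    p->total_us += dt_us;
    p->samples++;
}

/* Load over a window = sum of total_us across all ISR/task
 * profiles, divided by the window length. */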

#11

Thanks for the ideas.
I'm using the idle task in FreeRTOS, which is compiled for non-preemptive task scheduling.
Here's a suggested approach from above, adapted.

I put code in the idle task that builds a histogram as follows: bins 0..15 are incremented according to the number of clock ticks elapsed since the last time the idle task was called. So bin 0 means less than one clock tick, and bin 15 means 15 (or more) ticks occurred.
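The idle-task side boils down to something like this (a sketch using FreeRTOS's idle hook, with configUSE_IDLE_HOOK enabled; the dump-and-clear every 10 seconds happens elsewhere):

#include "FreeRTOS.h"
#include "task.h"

static volatile uint32_t histo[16];   /* dumped and zeroed every 10 s */
static TickType_t last_tick;

void vApplicationIdleHook(void)
{
    TickType_t now = xTaskGetTickCount();
    uint32_t gap = (uint32_t)(now - last_tick);
    last_tick = now;

    histo[gap > 15 ? 15 : gap]++;     /* bin 15 catches 15-or-more */
}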

Here are the first results:


2012-09-01 14:01:36  ticks/S=500,1220052,5001,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2012-09-01 14:01:26  ticks/S=500,1220594,5000,1,0,0,0,0,0,0,0,0,0,0,0,0,0
2012-09-01 14:01:16  ticks/S=500,1220960,5000,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2012-09-01 14:01:06  ticks/S=500,1219788,5001,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2012-09-01 14:00:56  ticks/S=500,1221265,5000,1,0,0,0,0,0,0,0,0,0,0,0,0,0
2012-09-01 14:00:46  ticks/S=500,1220655,5000,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2012-09-01 14:00:36  ticks/S=500,1220218,5001,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2012-09-01 14:00:26  ticks/S=500,1220903,5000,1,0,0,0,0,0,0,0,0,0,0,0,0,0
2012-09-01 14:00:16  ticks/S=500,1219104,4999,1,0,0,0,0,0,0,0,0,0,0,0,0,0
2012-09-01 14:00:06  ticks/S=500,1221340,5000,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2012-09-01 13:59:56  ticks/S=500,1220293,5001,0,0,0,0,0,0,0,0,0,0,0,0,0,0


As shown above, every 10 seconds the histogram is displayed and then zeroed. The number after "ticks/S=500" is bin 0 of the histogram.

The 1.2 million count in bin 0 means there's a lot of idle CPU time: the idle task is being called about 122,000 times per second by the FreeRTOS scheduler.

Bin 1 has a small count compared to bin 0, bin 2 gets a 1 now and then, and bins 3-15 are always 0.

The clock tick rate is 500Hz, as shown, so the histogram resolution is just 2mSec per bin:
Bin 0 is 0 to 2mSec
Bin 1 is 2 to 4mSec
Bin 2 is 4 to 6mSec
...
Bin 15 is 30 to 32mSec

This data was taken with the device in its normal minimal-load state, where in 10 seconds it has done a lot of useful routine work (Ethernet I/O, wireless transmissions, now and then contacting a NIST time server, computing status reports to send via UDP, and so on).

If I used an epoch of, say, 10 minutes rather than 10 seconds, I'd expect to see more counts in the other bins as tasks that run less often show up. And a longer run that includes more wireless comms activity will change the histogram some.

I suppose I could measure the idle-task calls per second before the other tasks begin, as a reference. Then I'd have a way to compute percent CPU utilization.
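That is, with R = idle calls per second measured unloaded and N = idle calls per second in service:

    CPU utilization (%) = 100 * (1 - N / R)

For illustration only: a hypothetical unloaded reference of 130,000 calls/s against the ~122,000 calls/s measured above would give 100 * (1 - 122/130), or about 6% utilization.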

#12

One more idea, inspired by the GPIO pin idea...

  1. Set a global flag in the idle task, and clear it in any other task.
  2. In the periodic clock interrupt, check the state of that flag.
  3. Use an exponential moving average to smooth the resulting data.
A little more detail on how the exponential moving average might work:
  1. Keep a running average S, which can be initialized to any value.
  2. On each clock interrupt, calculate the new average as S = a * Y + (1 - a) * S, where Y is the state of the idle flag (1 or 0), and a is a smoothing factor. The smaller a is, the longer the time constant for the low-pass filter.
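In integer math that might look like this (a sketch with a = 1/64 and S kept in Q14 fixed point, so 0..16384 maps to 0..100% idle):

#include <stdint.h>

static volatile uint8_t idle_flag;   /* 1 = set by idle task, 0 = cleared by others */
static int16_t S;                    /* smoothed idle fraction, 0..16384 */

void clock_tick_handler(void)        /* called from the periodic clock interrupt */
{
    int16_t Y = idle_flag ? 16384 : 0;
    S += (Y - S) / 64;               /* S = a*Y + (1-a)*S with a = 1/64 */
}

/* When reporting: idle percent = (uint32_t)S * 100 / 16384
 * (truncation leaves a fraction-of-a-percent bias; fine for a load metric). */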
Hope this helps!

Michael