Timer goes off with microseconds


Hi,

 

So I'm working on a microsecond delay function for an ATmega328P at 16 MHz which takes the number of microseconds as an argument. The problem is that the longer the delay, the further off the timer is. So if I ask for a delay of 1000000 (1 second), the function takes about 2 seconds to return. I'm using Timer0 with a prescaler of 8. I'm really not sure why the timer drifts like that. My other millisecond delay function is fine and works great. Here is the code:

 

void delayMicro(unsigned long microseconds)
{
    unsigned long i = 0;

    TCCR0A = 0;             // Timer0 in normal mode
    TCCR0B = 1 << CS01;     // Prescaler of 8: 0.5 us per tick at 16 MHz

    while (i < microseconds)
    {
        TCNT0 = 254;        // 256 - 254 = 2 ticks until overflow = 1 us
        TIFR0 = 1 << TOV0;  // clear the overflow flag (writing 1 clears it)

        while (!(TIFR0 & (1 << TOV0)))
            ;               // busy-wait for the overflow
        i++;
    }
}

TCNT0 gets 254 because 256 - ((16/8) * 1) = 254. So the loop would have to run from 1 to 1,000,000 to get a delay of 1 second, right?
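Spelled out, the intended arithmetic (taking the 16 MHz clock and prescaler of 8 as given):

/* Timer0 tick = prescaler / F_CPU = 8 / 16,000,000 s = 0.5 us.
   Loading TCNT0 with 254 leaves 256 - 254 = 2 ticks to overflow = 1 us,
   so 1,000,000 overflows should give 1 s -- ignoring the CPU cycles
   spent reloading TCNT0, clearing the flag and looping. */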


The time to do arithmetic is a substantial fraction of a microsecond.

Also, I'm not sure exactly what happens when the counter is assigned in the middle of a timer tick.

Try rewriting the code so that the timer is set to zero exactly once and started exactly once.

Compare match is good for that.
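A minimal sketch of that idea (my illustration, not the poster's code): Timer0 on the ATmega328P in hardware CTC mode, where the counter is cleared by hardware at each compare match, so software zeroes and starts it exactly once. A 16 MHz clock and prescaler of 8 are assumed.

#include <avr/io.h>

void timer0_start_1us(void)
{
    TCCR0A = (1 << WGM01);  /* CTC mode: TOP = OCR0A, counter auto-clears on match */
    OCR0A  = 1;             /* period = OCR0A + 1 = 2 ticks = 1 us at 16 MHz / 8   */
    TCNT0  = 0;             /* set to zero exactly once...                         */
    TCCR0B = (1 << CS01);   /* ...and started exactly once (prescaler 8)           */
}

Polling OCF0A then marks off whole microseconds, though the polling loop itself still has to keep up with the 2-tick period.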

 

Note that a microsecond is short enough that ISRs might still extend the delay significantly.

Also, the function prologue and epilogue might add up to a microsecond.

What do you want to happen if microseconds is 0, 1 or 2?

Moderation in all things. -- ancient proverb


If you vary the prescaler, do you see any improvement?


skeeve wrote:

The time to do arithmetic is a substantial fraction of a microsecond.

Also, I'm not sure exactly what happens when the counter is assigned in the middle of a timer tick.

Try rewriting the code so that the timer is set to zero exactly once and started exactly once.

Compare match is good for that.

 

I'm not sure how I could rewrite this without assigning the counter in the middle of a timer tick. Do you mean that when the timer overflows, I would stop the timer, assign a value, then start it again, and so on?

skeeve wrote:

Note that a microsecond is short enough that ISRs might still extend the delay significantly.

Also, the function prologue and epilogue might add up to a microsecond.

 

I'm fairly new to this world; I'm taking classes on microcontrollers, but we have barely talked about timers, so I wanted to extend my knowledge. Therefore, I might not fully understand the terms used here.
How is an ISR relevant here when the timer is set to normal mode and I don't attach any interrupt? Should I disable interrupts while the counter is running?

 

skeeve wrote:

What do you want to happen if microseconds is 0, 1 or 2?

 

Well, obviously 0 should simply return, nothing to be done. 1 would also just return, as returning from a function takes at least 1 tick (not sure?).


tepalia02 wrote:

If you vary the prescaler, do you see any improvement?


 

I tried with a prescaler of 1 and of 8, since doing the math with the other values gives me a floating-point tick count instead of an integer, so the timer would not be precise.


I'd say ignore the TIMER and do cycle counting instead.

The 32-bit compare and 32-bit increment take significant time.

Try this simpler version and time it with your stopwatch.

I've started the counter variable at 1 to account for the function call  / initialisation overhead.

Note how few NOP instructions it takes to make up the 1μs after accounting for the compare and increment.

void delayMicro (unsigned long microseconds)
{
    unsigned long i = 1;    // start at 1 to absorb the call/initialisation overhead
    while (i < microseconds)
    {
        // 6 NOPs = 6 cycles; the 32-bit increment, compare and branch
        // supply the rest of the 16 cycles = 1 us per iteration at 16 MHz.
        __builtin_avr_nop();
        __builtin_avr_nop();
        __builtin_avr_nop();
        __builtin_avr_nop();
        __builtin_avr_nop();
        __builtin_avr_nop();
        i++;
    }
}

 

 


N.Winterbottom wrote:

I'd say ignore the TIMER and do cycle counting instead.

The 32-bit compare and 32-bit increment take significant time.

Try this simpler version and time it with your stopwatch.

I've started the counter variable at 1 to account for the function call  / initialisation overhead.

Will do, it seems less complicated for sure. But how did you come up with 6 NOP instructions? How did you find out that 6 calls make 1 microsecond?

 

Last Edited: Sun. Jun 19, 2022 - 05:47 PM

By inspecting the assembly produced by the compiler and counting the CPU cycles.

 

BTW: the six NOPs take only 0.375 μs. It's the compare and increment that make up the remaining 0.625 μs.
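For reference, that breakdown in cycles at 16 MHz (62.5 ns per cycle):

/* Per-iteration budget at 16 MHz, 62.5 ns per cycle:
     6 x NOP                              =  6 cycles = 0.375 us
     32-bit increment + compare + branch  = 10 cycles = 0.625 us
     total                                = 16 cycles = 1.000 us  */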

Last Edited: Sun. Jun 19, 2022 - 06:20 PM

N.Winterbottom wrote:

By inspecting the assembly produced by the compiler and counting the CPU cycles.

 

N.Winterbottom wrote:

__builtin_avr...

Didn't the people who developed your chosen toolchain already create a cycle-accurate __builtin_avr_whatever for any number of cycles?

https://stackoverflow.com/questi...

https://gcc.gnu.org/onlinedocs/g...
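Presumably that means __builtin_avr_delay_cycles(), which avr-gcc provides; its argument must be a compile-time constant. A one-liner for this thread's clock:

__builtin_avr_delay_cycles(16);  /* 16 cycles = 1 us at 16 MHz */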

 

But indeed, are you really looking for a callable function with a variable parameter? Tell us more about the important use of this facility. Tell us why, if you insist, you are saddled with an 8-bit timer. Tell us the AVR's clock rate. Why, you ask? Well, at 1 MHz you will have many fewer options for this microsecond schtick than at a higher clock rate. What happens after this critical-length time period? Perhaps an I/O is set or toggled? Or what?

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

Last Edited: Mon. Jun 20, 2022 - 08:10 PM

The end goal was to have a microsecond delay that indeed takes a variable parameter and waits. Now obviously this can be approximate, as it's just to trigger an LED. But the main purpose of this post was to learn why the function I wrote waits longer than it should, as I didn't see anything wrong with it and thought my math was correct. But apparently I forgot many factors and ended up being wrong. As said in the original post, it's a 16 MHz CPU clock. As for the 8-bit timer, well, I also thought that it was enough.

 

But right now I still struggle to understand why my version doesn't work and why the given solution does. I truly lack knowledge on this topic and would really like to get to the point where I understand what's going on with timers. So could someone perhaps give me a starting point that covers time-consuming instructions and so on? I've looked around on Google, but the examples never increment a variable as I do, for example; they just write a function to wait for 500 milliseconds and never talk about how to achieve a delay of 1 microsecond, or 10, and so on...

Last Edited: Mon. Jun 20, 2022 - 02:07 PM

My expectation was that OP wanted a delay function that would not be badly affected by other interrupts.

 

For precise timing, stopping the timer or assigning to its counter is pretty much always a bad idea.

 

As already noted, a CPU cycle is a significant fraction of a microsecond.

Just getting in and out of the function in less than a microsecond could be interesting.

'Tis necessary to precisely define how long the function should delay.

The possibilities include constant + microseconds*10**-6 seconds and max(constant, microseconds*10**-6 seconds).

 

The original code does not work because none of the arithmetic is counted as part of the delay.

Winterbottom's code works because every loop cycle is counted.

Said code might not work with a different version of avr-gcc, or even with different settings of the same version.

Sidestepping such issues is one reason to use assembly.

 

Resetting the timer counter ignores the cycles since the previous flag-setting.

 

My recommendation:

*Read* the timer early.

It should already be running at full speed in normal mode.

Do most of your computation before the loop:

Calculate the last timer value.

Use it as a compare match value.

Calculate the number of times said value will be reached.

Run through the loop that many times.

Moderation in all things. -- ancient proverb


I've been looking through the instruction set and I now understand the given solution. My code wasn't working because a single iteration of the loop takes more than one microsecond.

 

Correct me if I'm wrong, but I found that compare match works much better than overflow because I don't have to do as many operations per iteration as with the overflow timer, right?

 

skeeve wrote:

My recommendation:

*Read* the timer early.

It should already be running at full speed in normal mode.

Do most of your computation before the loop:

Calculate the last timer value.

Use it as a compare match value.

Calculate the number of times said value will be reached.

Run through the loop that many times.

 

I think I understand the logic here, but I'm unsure how you would do that. I mean, once we know how many times we should loop, am I supposed to use timer overflow for the main loop and compare match for the last timer value?

 


I'm late to this thread. If opting for soft delays, I gotta ask: what is wrong with the _delay_us() that is in util/delay.h anyway?

 

Sure, it cannot take a parameter that is not a compile-time constant, but the idea is that you use it as:

#define F_CPU 1234567UL      // must be defined before the include
#include <util/delay.h>

static inline void my_delay_us(int n) {
    while (n--) {
        _delay_us(1);        // compile-time constant argument, so this is allowed
    }
}

int main(void) {
    my_delay_us(ADC);        // a run-time value goes through the wrapper
}

The _delay_us() itself will resolve to __builtin_avr_delay_cycles(), but it will have taken on board whatever F_CPU is defined as and will have calculated the right number of cycles for 1.0 µs.

Last Edited: Wed. Jun 22, 2022 - 08:19 AM

clawson wrote:

I'm late to this thread. If opting for soft delays I gotta ask what is wrong with _delay_us() that is in util/delay.h anyway?

 

I mean, there is nothing wrong with it, for sure. I'm the type of guy who truly wants to understand what's going on under the hood. An example I always take is comparing Python with C. In Python, almost everything is premade. Don't get me wrong, it's nice to have this kind of thing, but I would like to build what already exists in my own way.
 

Here I'm trying to understand how I could implement such a thing on my own, in order to gain experience, without using what already exists.


Several years ago, I also wanted a "callable" delay function for short intervals so I came up with:

  /****************
   *  This function provides an inline delay loop for short delays, using 6 bytes per instantiation
   *  Each loop requires about 3 CPU cycles per iteration (1 to 256) plus 2 cycles overhead
   *        Approximately:   delay(ns) = ((count * 3) + 2) * 1000/CPU_frequency(MHz)
   *  Note: 1 is minimum, 0 is maximum (256)   (Accurate timing only if interrupts are not active...)
   *        At 16MHz 1=312.5ns, 2=500ns, 10=2.0us ... 0=48.125us
   ****************/
static inline void short_delay(uint8_t count) __attribute__((always_inline));
void short_delay(uint8_t count) {
	__asm__ volatile (
	"1: dec %0" "\n\t"
	"brne 1b"
	: "=r" (count)
	: "0" (count)
	);
}

(The cycle count may be dependent upon the actual device used... YMMV)
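For example, at 16 MHz (values from the table in the header comment):

short_delay(2);   /* roughly 500 ns                        */
short_delay(0);   /* wraps to 256 iterations: about 48 us  */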

David

Last Edited: Wed. Jun 22, 2022 - 10:44 AM

frog_jr wrote:

Several years ago, I also wanted a "callable" delay function for short intervals so I came up with:

  /****************
   *  This function provides an inline delay loop for short delays, using 6 bytes per instantiation
   *  Each loop requires about 3 CPU cycles per iteration (1 to 256) plus 2 cycles overhead
   *        Approximately:   delay(ns) = ((count * 3) + 2) * 1000/CPU_frequency(MHz)
   *  Note: 1 is minimum, 0 is maximum (256)   (Accurate timing only if interrupts are not active...)
   *        At 16MHz 1=312.5ns, 2=500ns, 10=2.0us ... 0=48.125us
   ****************/
static inline void short_delay(uint8_t count) __attribute__((always_inline));
void short_delay(uint8_t count) {
	__asm__ volatile (
	"1: dec %0" "\n\t"
	"brne 1b"
	: "=r" (count)
	: "0" (count)
	);
}

(The cycle count may be dependent upon the actual device used... YMMV)

 

Thank you, the comments make it very easy to understand! Now, from what I see, timers are almost never used for precise timing, right? Can they serve any purpose other than being independent from the CPU and therefore driving interrupts?

 

Also, what helped you to write this code? I mean, what resources did you use? The instruction set manual, right?


One often needs a timer for reliable timing.

 

If the delay required is short enough to allow an uninterruptible busy wait, one might as well use it.

 

An interrupt does not stop a timer.

An interrupt can cause a delay in noticing when the timer expires or when it cycles around.

If interrupts are short enough, only the expiration delay matters.

 

Liwinux wrote:
I think I understand the logic here, but I'm unsure how you would do that. I mean, once we know how many times we should loop, am I supposed to use timer overflow for the main loop and compare match for the last timer value?
Suppose the read value is 7 and you compute the need for 1,600 more cycles until timer expiration.

7 + 1,600 = 1,607 = 0x647.

The timer will need to hit 0x47 6+1=7 times.

Set compare match for 0x47.

Wait for it 7 times.
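In code, that scheme might look something like the sketch below (my naming and types, not skeeve's; it assumes Timer0 is already free-running in normal mode and that polling starts soon enough not to miss the first match):

#include <avr/io.h>
#include <stdint.h>

static void wait_ticks(uint16_t ticks)           /* e.g. 1600                */
{
    uint8_t  start  = TCNT0;                     /* read early, once: e.g. 7 */
    uint32_t target = (uint32_t)start + ticks;   /* 7 + 1600 = 1607 = 0x647  */
    uint8_t  match  = (uint8_t)target;           /* low byte: 0x47           */
    uint16_t laps   = (uint16_t)(target >> 8)    /* full wraps: 6            */
                    + (start < match ? 1 : 0);   /* plus the partial one: 7  */

    OCR0A = match;                               /* compare value set once   */
    while (laps--) {
        TIFR0 = (1 << OCF0A);                    /* clear flag: write 1      */
        while (!(TIFR0 & (1 << OCF0A)))
            ;                                    /* wait for TCNT0 == match  */
    }
}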

 

Note that the wait loop is at least 3 cycles.

That will introduce some delay in noticing timer expiration, but less than a microsecond.

Calling it from C introduces some fuzziness of its own.

Moderation in all things. -- ancient proverb


Liwinux wrote:
what helped you to write this code? I mean, what resources did you use?

 

AVR-GCC Inline Assembler Cookbook:

    https://www.nongnu.org/avr-libc/...

 

AVR Instruction Set Manual:

 

    http://ww1.microchip.com/downloa...

 

I was using the short_delay function within a C program that was initializing a display and a proprietary interface originally designed to use shift registers as delay lines...

David