about creating a DELAY_MS(time) macro without the time limit


Hi guys,

I got a little bit upset about _delay_ms() and _delay_us() in util/delay.h of avr-libc. If I need a longer delay, I have to copy and paste a lot of _delay_ms() calls in the code, so I'm thinking of creating a new macro that uses _delay_ms() and _delay_us() as building blocks and automatically calculates the number of repetitions needed from the time the user passes in. I just want to ask if anyone has already done this, or has any ideas.
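
Something like this rough sketch is what I mean (the DELAY_MS name and the loop counter are just made up here, and it assumes F_CPU is defined, as <util/delay.h> requires):

#include <util/delay.h>
#include <stdint.h>

/* hypothetical DELAY_MS(ms): repeat the fixed 1 ms library delay 'ms'
   times, so the argument is no longer limited by what a single
   _delay_ms() call can handle */
#define DELAY_MS(ms) \
	do { \
		for (uint32_t delay_i = 0; delay_i < (uint32_t)(ms); delay_i++) \
			_delay_ms(1); \
	} while (0)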

Cheng


darthvader wrote:
If I need a longer delay, I have to copy and paste a lot of _delay_ms() calls in the code

Umm, isn't this the exact reason that God invented the for() loop?

Cliff


but I think a for() loop adds a few extra clock cycles, right?


hehe, maybe I'm just too picky


If you wrapped a _delay_us(1) in a for() loop then maybe the loop overhead would be a significant factor, but certainly when you are talking in the realms of _delay_ms() the few us that a for() loop adds will be insignificant.

Cliff


yes, you are right, I will try to compensate for the extra clock cycles of the for() loop.


Well, even on an 8MHz AVR _delay_us() can take a parameter of almost 100. So if you wanted to delay 800us (say) then a for() loop that calls _delay_us(80) ten times would be the way (see the sketch below), and the few us involved in the for() would be small beer compared to the _delay_us(80). So I wouldn't have thought it was worth worrying about.
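
Written out as real C, that would be roughly this (the loop counter i is just illustrative, and F_CPU must be defined before <util/delay.h> is included):

#include <util/delay.h>
#include <stdint.h>

/* roughly 800us: ten 80us delays plus a few cycles of loop overhead */
static void delay_800us(void)
{
	for (uint8_t i = 0; i < 10; i++)
		_delay_us(80);
}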

Cliff


> Umm, isn't this the exact reason that God invented the for() loop?

In particular, long delays are the exact reason why God invented
timers...

That doesn't mean, though, that the functionality of <util/delay.h> could not
be extended, see

https://savannah.nongnu.org/bugs...

and the link referenced there.

Jörg Wunsch

Please don't send me PMs, use email if you want to approach me personally.


darthvader wrote:
but I think a for() loop adds a few extra clock cycles, right?

Even calling _delay_ms() adds a few extra clock cycles.

It must load two 8-bit registers to form a 16-bit counter before entering a loop that takes 4 clock cycles per iteration and decrements that counter.

Loading the counter takes at least 2 clock cycles, and that's not even counting the rounding errors that happen when F_CPU and the millisecond parameter are used to calculate the loop count value.

Fortunately, this is all inline code, so there is no RCALL/RET overhead of 8 cycles or so.
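
For illustration, that 4-cycle loop is avr-libc's _delay_loop_2() from <util/delay_basic.h>. A rough sketch of how a ~1 ms delay maps onto it (the count calculation here is mine, not the library's exact code, and F_CPU must be defined):

#include <util/delay_basic.h>
#include <stdint.h>

/* ~1 ms busy wait: _delay_loop_2() burns 4 CPU cycles per iteration,
   so F_CPU / 4000 iterations is roughly one millisecond */
static void roughly_one_ms(void)
{
	_delay_loop_2((uint16_t)(F_CPU / 4000UL));
}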

- Jani

P.S. Yes, you are too picky, if you only complain about for loop overhead instead of looking at the whole picture :)


Hans-Juergen's code is great!!!


> Hans-Juergen's code is great!

Using a timer is even greater. ;-)

Jörg Wunsch

Please don't send me PMs, use email if you want to approach me personally.


hehe, you know, Jörg, we students at school doing prototyping are lazy guys :)


> we students at school doing prototyping are lazy guys

If I were your professor, anyone who didn't use timers and interrupts
for a long delay wouldn't pass the exam...

Learning about timers and interrupts is one of the basic tasks when
programming a microcontroller. It's not something optional you could
always learn later or such (because that way, you'll always be too
lazy). It's for a reason that even the simplest LED flasher demo in
the avr-libc demos uses a timer.

Btw., it's not as if learning to operate a timer is more effort than
learning about all the constraints of the delay macros...

Jörg Wunsch

Please don't send me PMs, use email if you want to approach me personally.


I'm with Jörg on this. When I was talking to a colleague about learning to use AVRs, the first "hello world" program I gave him was:

#include <avr/io.h>
#include <avr/interrupt.h>

ISR(TIMER0_OVF_vect) {
	PORTD ^= 0xFF;
}

int main(void) {
	DDRD = 0xFF;

	TIMSK0 |= (1 << TOIE0) ; // setup timer 0 to interrupt on overflow
	TCCR0B = 1; // start the timer running (no pre-scale so at CPU clock rate)

	sei(); // enable global interrupt system

	while (1) {
	}
}

It's not exactly "rocket science", is it? In fact it's just two lines of C to get a timer interrupt started.

Cliff


hehe, yes, but I just use the delay for some simple functions when the CPU doesn't have anything else to do


Well you can use timers synchronously as well as asynchronously, you know.

Just don't set the IE bit. Start the timer, perhaps set TCNT, and then watch for it to get to (or above?) the desired value (possibly 0).

Cliff


clawson wrote:
Well you can use timers synchronously as well as asynchronously, you know.

Just don't set the IE bit. Start the timer, perhaps set TCNT, and then watch for it to get to (or above?) the desired value (possibly 0).

Cliff

It's better to just check the overflow flag bit, without interrupts enabled.

So before starting the timer and setting TCNT to the wanted value for the delay, clear the overflow flag by writing a 1 to just the overflow flag bit, and then watch for the overflow flag bit to turn to 1 again.

- Jani


Jepael wrote:
clawson wrote:
Well you can use timers synchronously as well as asynchronously, you know.

Just don't set the IE bit. Start the timer, perhaps set TCNT, and then watch for it to get to (or above?) the desired value (possibly 0).

Cliff

It's better to just check the overflow flag bit, without interrupts enabled.

So before starting the timer and setting TCNT to the wanted value for the delay, clear the overflow flag by writing a 1 to just the overflow flag bit, and then watch for the overflow flag bit to turn to 1 again.

- Jani

Better to set TCNT first, then clear the TOV flag. That saves lots of debug time chasing delays that get skipped because of an overflow right after clearing the flag, in case the timer was already running. I guess nobody stops the timer after the previous delay expires.
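
Putting this sub-thread together, a polled Timer0 delay could look roughly like this (register names assume an ATmega48/88/168-style Timer0; the function name, prescaler and tick count are just illustrative):

#include <avr/io.h>
#include <stdint.h>

/* Busy-wait for 'ticks' timer clocks using the overflow flag, no
   interrupt needed. TCNT0 is loaded first, then TOV0 cleared, as
   suggested above. */
static void delay_timer_ticks(uint8_t ticks)
{
	TCNT0  = (uint8_t)(256U - ticks);    /* overflow after 'ticks' counts */
	TIFR0  = (1 << TOV0);                /* clear the flag by writing a 1 */
	TCCR0B = (1 << CS02) | (1 << CS00);  /* run Timer0 at clk/1024 */

	while (!(TIFR0 & (1 << TOV0)))
		;                            /* wait for the overflow flag */
}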


Actually the way I do it (which kind of clouded my view) is that I usually have one of the timers set up to interrupt at some standard rate, like 1ms, where it just increments a long 'system_tick'. So if I want to delay synchronously for some time, I just look for a difference between now and then in the tick variable.
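
In rough code, that pattern is something like this (the timer setup values, the system_tick name and delay_ticks_ms are all just illustrative, assuming an ATmega88-class part where Timer0 in CTC mode at clk/64 gives a 1 ms compare match for an 8 or 16 MHz F_CPU):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/atomic.h>
#include <stdint.h>

static volatile uint32_t system_tick;

ISR(TIMER0_COMPA_vect)
{
	system_tick++;                       /* one tick per millisecond */
}

static uint32_t ticks_now(void)
{
	uint32_t t;
	ATOMIC_BLOCK(ATOMIC_RESTORESTATE) {  /* 32-bit read must be atomic */
		t = system_tick;
	}
	return t;
}

static void delay_ticks_ms(uint32_t ms)
{
	uint32_t start = ticks_now();
	while ((ticks_now() - start) < ms)
		;                            /* synchronous wait */
}

static void tick_init(void)
{
	TCCR0A = (1 << WGM01);                          /* CTC mode */
	OCR0A  = (uint8_t)(F_CPU / 64UL / 1000UL - 1);  /* ~1 ms at clk/64 */
	TIMSK0 = (1 << OCIE0A);                         /* compare match interrupt */
	TCCR0B = (1 << CS01) | (1 << CS00);             /* start, prescaler 64 */
	sei();
}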

(but this is probably virtualising things more than necessary for the question here)

Cliff