dynamic delays in us


Hi,

I need a "dynamic" delay in one of my programs using the _delay_us() macro.
For example, using a variable x that is changed inside a while loop, where x is the number of microseconds I need the delay to last.

// Note: this code does NOT work
int x = 0;

while (1)
{
    x++;
    _delay_us(x);   // fails: _delay_us() wants a compile-time constant
}

What is the best way to do this in C?
It sounds like an easy task, but I still can't figure out how to do it... :p

Thanks in advance


What maximum delay do you want? What minimum?

I would use a free-running timer (normal mode) with a compare register. The code would look something like this:

OCR  = TCNT + us - US_ADJUST;   /* compare point = "now" + delay (in timer ticks), less a call-overhead adjustment */
TIFR = (1<<OCF);                /* clear any stale compare-match flag (write 1 to clear) */
while ( !(TIFR & (1<<OCF)) )    /* busy-wait until the compare match fires */
  ;

Quote:
What is the best way to do this in C?

C has no concept of time, so there is no way of doing it in C. Do it in assembly.

What cpu frequency are you using?

Regards,
Steve A.

The Board helps those that help themselves.


OK, I am using a 14.7456MHz crystal. The minimum delay I need is around 10us and the maximum about 29us.

The problem is that I do not have any knowledge of assembly programming. Just C.


The timer technique I proposed would be fine. Use a /8 prescale and you can time up to about 140us with an 8-bit timer. You'd need to translate between timer values and microseconds, but you could use a table for that, or just scale your time value directly.

You could also roll your own do-nothing loop, count the cycles and figure time based on your clock rate. You'd face the same issue about not having a direct correlation between the loop count and microseconds.
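
To make the timer idea concrete, here is a rough sketch of that approach (my own code, untested). It assumes Timer0 on an ATmega48/88/168-style part, so the register names TCCR0A/TCCR0B/TCNT0/OCR0A/TIFR0/OCF0A are assumptions; older parts use different names, and the call-overhead adjustment (US_ADJUST above) is left out:

#include <avr/io.h>
#include <stdint.h>

static void timer0_init(void)
{
    TCCR0A = 0;               /* normal mode, free-running */
    TCCR0B = (1 << CS01);     /* clk/8 prescale: 1 tick = 8/14.7456MHz, about 0.54us */
}

static void delay_us_timer(uint8_t us)
{
    /* ticks = us * 1.8432; 1.8432 is roughly 236/128, so scale with integers */
    uint8_t ticks = (uint8_t)(((uint16_t)us * 236u) >> 7);

    OCR0A = (uint8_t)(TCNT0 + ticks);   /* compare point relative to "now" */
    TIFR0 = (1 << OCF0A);               /* clear any stale compare-match flag */
    while (!(TIFR0 & (1 << OCF0A)))
        ;                               /* busy-wait for the match */
}

Very short delays can still be stretched if the match point is passed before OCR0A is written, which is exactly what an adjustment constant and a scope check are for.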


kk6gm wrote:
The timer technique I proposed would be fine. Use a /8 prescale and you can time up to about 140us with an 8-bit timer. You'd need to translate between timer values and microseconds, but you could use a table for that, or just scale your time value directly.

You could also roll your own do-nothing loop, count the cycles and figure time based on your clock rate. You'd face the same issue about not having a direct correlation between the loop count and microseconds.

I will try that, thank you.
That'd also be fine for my minimum time in us (6-10us), right? =)


10us is 147 cycles. Even 6us is about 90 cycles. Should be plenty.
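
For reference, these figures are just the requested delay multiplied by the clock frequency:

\[ \text{cycles} = t \cdot f_{\text{clk}}: \quad 10\,\mu\text{s} \times 14.7456\,\text{MHz} \approx 147, \qquad 6\,\mu\text{s} \times 14.7456\,\text{MHz} \approx 88 \]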


Koshchi wrote:
C has no concept of time, so there is no way of doing it in C.
That's an odd thing to say. The existence of timers in the AVR hardware, and the ability to access those timers from C, means that you can realize time-related concepts in C.

Don Kinzer
ZBasic Microcontrollers
http://www.zbasic.net


If you get brave and would like to try an assembly language delay, you can try the following simple code with GCC (in a .S file):

    .global delay_us
delay_us:                   ; r24 holds the requested delay in microseconds
    clr   r26               ; r24:r26 forms an 8.8 fixed-point down-counter
1:  subi  r26, 69           ; subtract the 4-cycle loop time in 1/256-us units
    sbci  r24, 0            ; propagate the borrow into the whole-microsecond part
    brcc  1b                ; loop until the counter underflows
    ret

For delays of less than about 50 microseconds, you should see delay times of at least the number of microseconds requested and no more than 1 microsecond longer. Also, adjacent delays should be either about 0.8 or 1.1 microseconds different.

BTW, the constant "69" is computed for your given operating frequency of 14.7456MHz. In general, if f is your operating frequency in Hertz, then the constant k can be computed as k = 1024000000 / f.
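
For the C side, here is a minimal (untested) sketch of how the routine might be declared and called; the prototype relies on avr-gcc passing the first 8-bit argument in r24, which is the register the code reads, and the function names here are just placeholders:

/* hypothetical caller for the assembly routine above */
#include <stdint.h>

extern void delay_us(uint8_t us);   /* implemented in the .S file; the count arrives in r24 */

void sweep(void)
{
    /* sweep the delay from 10us to 29us, as in the original question */
    uint8_t x;
    for (x = 10; x <= 29; x++) {
        delay_us(x);
    }
}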


Quote:

What is the best way to do this in C?

The only limit on _delay_us() in GCC is that the parameter must be a constant. So just:

void delay_us(uint16_t n) {
  while(n--) {
    _delay_us(1);
  }
}

(True, in the realm of microseconds the call/ret and the loop overhead will add an offset, so you may need to "fiddle it" a bit.)
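
As a quick illustration (my own untested sketch, with a made-up wrapper name and F_CPU assumed to be defined for <util/delay.h>), the loop from the first post could then pass a run-time value:

#include <stdint.h>
#include <util/delay.h>     /* needs F_CPU defined, e.g. -DF_CPU=14745600UL */

static void delay_us_runtime(uint16_t n)
{
    while (n--) {
        _delay_us(1);       /* constant argument, so _delay_us() is happy */
    }
}

int main(void)
{
    uint16_t x = 0;
    for (;;) {
        x++;
        delay_us_runtime(x);   /* the run-time value goes to the wrapper, not the macro */
    }
}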


Quote:

void delay_us(uint16_t n) {
  while(n--) {
    _delay_us(1);
  }
}

(True, in the realm of microseconds the call/ret and the loop overhead will add an offset, so you may need to "fiddle it" a bit.)

Using GCC and optimization level 3, the above code always delays 16 cycles per count -- at 14.7456MHz, this is 1.085 microseconds. This means that requesting a 29 microsecond delay will actually delay by more than 31.5 microseconds. The "offset" is about 5 cycles (0.3 microseconds) and is small in comparison to this 2.5 microsecond error.

Using the assembly language routine that I suggested provides a long-term average delay of 0.992 microseconds per count. A request for 29 microseconds will produce a delay of about 28.8 microseconds plus an offset. It is true that the out of line subroutine I suggested does have a larger offset (about 11 cycles, 0.7 microseconds), but the total error is still less than a microsecond and the total time (29.5 microseconds) is much closer to the requested delay. If desired, the offset error could be reduced by using inline assembly language.

I don't believe that there is any way that the delay functions provided with WinAVR can produce varying numbers of delay cycles per microsecond and this inability is what leads to the errors described above.

It may also be useful to note that the assembly language delay that I suggested is insensitive to optimization level and requires less program space than using either timers or the WinAVR delay utilities.


Quote:
The existence of timers in the AVR hardware

Which means that you are not "doing it in C", you're doing it in hardware. Certainly you can "access it" in C. You can access a video camera in C. Does that mean that C can do video? No, of course not.

Quote:
Using GCC and optimization level 3, the above code always delays 16 cycles per count -- at 14.7456MHz, this is 1.085 microseconds.

I got even worse than that: 19 clocks. I could get 16 clocks by changing the value sent to _delay_us to 0.75. Of course, with 14.7456MHz you will never get a routine (either by inline assembler or timer) that has no slop, since 1us is not an integral number of cycles. You would need something that adjusts for that.

Regards,
Steve A.

The Board helps those that help themselves.


Koshchi wrote:
Quote:
The existence of timers in the AVR hardware

Which means that you are not "doing it in C", you're doing it in hardware.
At the risk of being accused of pedantry, neither, then, can you do it in assembly language as you opined:
Koshchi wrote:
[T]here is no way of doing it in C. Do it in assembly.

Don Kinzer
ZBasic Microcontrollers
http://www.zbasic.net


Not nearly as interesting as Compiler Wars or C vs. ASM, but the thread count for How Can I Waste All Of My AVR Cycles On Insanely Precise Intervals must rival the biggies.

Perhaps there are real apps that do a lot of precision delaying. Certainly none of mine. Gonna do that with interrupts off? I can't think of any of my apps even for the tiniest of AVRs that don't have at least one interrupt source enabled. Gonna take care of watering the dog, too?

Much of the thread count arises from the GCC forum, especially with the "old style" facilities that had limits and bloat at -O0.

One of the more recent and thorough threads has my opinion:
https://www.avrfreaks.net/index.p...

Quote:
Regardless of how true the path is, I've just never comprehended the fascination with elaborate delays in an AVR8 environment. Yes, I'll use a delay at startup to let things settle--often flashing the LEDs and the like to signal "alive". Certainly not a critical timekeeping app. Yes, I use a delay of a few microseconds once in a while when I've got a slow transistor or the like. Again not critical, and usually can be replaced with some useful work such as calculating the next byte.

What a tempest in a teapot. Just as "GoTo Considered Harmful", so should delay.

With most (all?) newer AVR models having a CLKPR facility, the "precise delay" people must be having a tougher and tougher time creating these elaborate houses of cards.

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


It would be interesting for the OP to tell us why he needs (or thinks he needs?) this delay functionality.


Quote:
At the risk of being accused of pedantry, neither, then, can you do it in assembly language as you opined:

On the contrary, I can write a routine in assembly that can be tuned to precise numbers of clock cycles. It is a property of the language that each opcode takes a known number of clocks. This is not true of C.

Quote:
Perhaps there are real apps that do a lot of precision delaying. Certainly none of mine. Gonna do that with interrupts off? I can't think of any of my apps even for the tiniest of AVRs that don't have at least one interrupt source enabled.

But for delays of a few microseconds as the OP wants, using interrupts can be impossible since the interrupt mechanism will add an unacceptable amount of overhead. So a delay of 3µs on a 2MHz machine would be only 6 clocks, which would be less than the minimum overhead of an interrupt. Besides, how much other stuff could you really get done in those 6 clocks? Certainly when the delay is significantly larger than the clock period, fixed delays are wasteful.

Regards,
Steve A.

The Board helps those that help themselves.


Quote:

But for delays of a few microseconds as the OP wants, using interrupts can be impossible since the interrupt mechanism will add an unacceptable amount of overhead.

That wasn't what I was getting at--if an interrupt hits during this "precise delay" it won't be precise any more.

Yes, I have a delay here and there for a slow transistor or such. If I see on the 'scope about 4us needed then I'll probably do 6us or 8us to be safe. It doesn't matter whether it is 7.3us or 9.2us. (And in this case it doesn't matter if an interrupt hits and stretches the timing--e.g., bit-banged SPI can almost always be done with interrupts enabled.)

For longer delays in any of my full apps, I'm not going to sit in one place for many milliseconds, much less need it precisely. I've got a lot better things to do with those cycles.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.