millis() and micros() for ATtiny 1-series


Hello,

 

I'd like to measure some time intervals on the ATtiny1614 to learn more about how much time passes between certain interrupts and other events. Since there are no time-measurement functions built in, I'm looking for something that gives me a timestamp at any point, so that I can subtract two timestamps to get the difference. AFAIK Arduino has millis() and micros() functions that do this. The closest match I could find is an old forum post here, but its code seems to be incomplete and I can't adapt it to the current platform. I'm using Atmel Studio 7.

 

These are the issues I have:

 

  • TIFR0 is undefined, probably needs to be replaced with TCA0.SINGLE.something
  • TOV0 is undefined, probably needs to be replaced with TCA0.SINGLE.something
  • TCA0.SINGLE.CNT is uint16_t while the old code seems to use an 8-bit timer

 

I think I can figure out the timer initialisation myself. I guess it should just run at full CPU clock speed and overflow at its maximum possible value (16-bit).

 

Could somebody please point me in the right direction to get this working?

 

PS: I intend to run the code at either 10 or 20 MHz from the internal clock source.

Last Edited: Thu. Jan 16, 2020 - 09:53 PM

Ah, never mind. I ditched that old code and did it all from scratch. It seems easier with a 16-bit timer instead of an 8-bit one. I couldn't fully get my head around that fraction thing there. Here's my result, in case someone needs it. The millis() function should be useful for up to 49 days, micros() for up to 71 minutes. Tested between 20 ms and 2 min.

 

config.h: (used everywhere, includes F_CPU)

// Contains application configuration settings.

#ifndef CONFIG_H_
#define CONFIG_H_

// CPU frequency in Hz
#define F_CPU 20000000   // 20 MHz

#endif /* CONFIG_H_ */

timer.h:

#ifndef TIMER_H_
#define TIMER_H_

#include <stdint.h>

void initTimer();
uint32_t millis();
uint32_t micros();

#endif /* TIMER_H_ */

timer.cpp:

// Provides time measurement functions.

#include "config.h"
#include <avr/io.h>
#include <avr/interrupt.h>
#include "timer.h"

volatile static uint32_t timerMillis;

// Initializes the use of the timer functions by setting up the TCA timer.
void initTimer()
{
	TCA0.SINGLE.PER = F_CPU / 1000 - 1;   // Overflow after 1 ms
	TCA0.SINGLE.INTCTRL = TCA_SINGLE_OVF_bm;   // Enable overflow interrupt
	TCA0.SINGLE.CTRLA = TCA_SINGLE_CLKSEL_DIV1_gc | TCA_SINGLE_ENABLE_bm;   // Start without prescaler, at full CPU clock speed
}

// TCA overflow handler, called every millisecond.
ISR(TCA0_OVF_vect)
{
	timerMillis++;
	
	// Acknowledge the interrupt by writing a 1 to clear the OVF flag
	// (CNT restarts at BOTTOM automatically after reaching PER)
	TCA0.SINGLE.INTFLAGS = TCA_SINGLE_OVF_bm;
}

// Gets the milliseconds of the current time.
uint32_t millis()
{
	uint32_t m;
	uint8_t oldSREG = SREG;

	cli();
	m = timerMillis;
	SREG = oldSREG;
	return m;
}

// Gets the microseconds of the current time.
uint32_t micros()
{
	uint32_t us;
	uint8_t oldSREG = SREG;

	cli();
	// First convert milliseconds to microseconds
	// Then add timer value converted to microseconds
	us = timerMillis * 1000 + TCA0.SINGLE.CNT / (F_CPU / 1000000L);
	SREG = oldSREG;
	return us;
}

 

License: Do what you want with it but don't blame me.

	uint8_t oldSREG = SREG;

	cli();
	m = timerMillis;
	SREG = oldSREG;

when you find yourself doing the above have a read about:

 

https://www.nongnu.org/avr-libc/user-manual/group__util__atomic.html

 

(presumably using ATOMIC_RESTORESTATE)


Interrupts may be disabled, but the counter is still running:

 

us = timerMillis * 1000 + TCA0.SINGLE.CNT / (F_CPU / 1000000L);

 

so although interrupts are off, CNT may have overflowed, meaning timerMillis is not 'matched' to the count. There is probably only a window of about 10/65536 where it can happen, depending on how the compiler arranges that line of code, but eventually you will run into it, or simply never notice that the timestamp is wrong (probably only detected when closely spaced timestamps indicate a little time warp into the past). There are various ways to take care of the problem and I'm sure you can figure out a few of them.

 

There is also the RTC, with ~30 µs resolution, which can be used when you want the timers to do something else. The RTC can then be used as a 'real' timestamp with a 32-bit count of its 2-second overflow periods and a 16-bit fraction of those 2 seconds. With an epoch time loaded and a crystal for the RTC, you can keep pretty good time along with using it as a general-purpose ms timer.

 


To test whether a particular target time has occurred yet, a compare match flag is probably best. If one is out of flags, something like

if( 0x8000U & (unsigned)((target-1)-time) ) { ... }

is probably good enough.

Iluvatar is the better part of Valar.


Apparently the usual problem on Xtiny/Mega0 is the mixture of timer types; you might want to consider setting up your code to use a TCB timer rather than a TCA-type timer, since in general there are more of the TCB timers and they are less capable (one compare channel vs three on TCA), so the more capable TCA stays free for other jobs. The megaTinyCore for Arduino has a compile option to use different timers for millis(), for example...

 


clawson wrote:

	uint8_t oldSREG = SREG;

	cli();
	m = timerMillis;
	SREG = oldSREG;

when you find yourself doing the above have a read about:

 

https://www.nongnu.org/avr-libc/user-manual/group__util__atomic.html

 

(presumably using ATOMIC_RESTORESTATE)

 

Thanks for the link. That looks better. Too bad that "any exit path" doesn't include the return statement, so this doesn't seem to be possible (it generates a compiler warning):

 

    ATOMIC_BLOCK(ATOMIC_RESTORESTATE)
    {
        return timerMillis;
    }

 

(The code editor in this forum is broken today.)

 

I like to use such syntax in C# but the C macros (which I didn't understand when looking at their definition) don't seem to provide the same level of comfort.


curtvm wrote:

Interrupts may be disabled, but the counter is still running:

 

so although interrupts are off, CNT may have overflowed, meaning timerMillis is not 'matched' to the count. (…) There are various ways to take care of the problem and I'm sure you can figure out a few of them.

 

I understand the problem, and wasn't aware of it before. I might read the counter twice, or read timerMillis twice, once before and once after, and do some comparison and fix-up. Were you thinking in that direction?

 

Update: Reading timerMillis twice is nonsense. With interrupts disabled, it can never be updated in between. So I'd have to read the counter twice and compare that.

 

Update 2: The whole idea of reading CNT twice is stupid because I still don't know whether timerMillis matches CNT anywhere in that function. So I'm out of ideas. Could you please provide some of yours?

 

Update 3: I took a look at the old code again and figured it might check the OVF (overflow) flag, and if it's set, add another overflow count (here, a millisecond). But that's still not safe. According to the datasheet, the OVF flag is set when CNT hits TOP, but CNT is only reset to BOTTOM in the next cycle. This combination of CNT == TOP with OVF already set causes problems. This is complicated.

Last Edited: Fri. Jan 17, 2020 - 11:23 PM

westfw wrote:

Apparently the usual problem on Xtiny/Mega0 is the mixture of timer types; you might want to consider setting up your code to use a TCB timer rather than a TCA-type timer, since in general there are more of the TCB timers and they are less capable (one compare channel vs three on TCA.)

 

I'm already using the less capable TCB timer for other functions that I'll keep in production code (countdowns and callbacks at millisecond resolution). I couldn't figure out how to employ the TCD timer for this job, so I went back to the TCA timer. I'm also using that for PWM output, but not everywhere and I don't need these timer functions in production for now. I just wrote this to learn about the timing behaviour of the code. When I know that, I can write the code so that it works best for the timings I observed. I don't think I'll need generic microsecond measurements for real applications.

 

Also, from what I've seen, all 1-series chips have the same number and types of timers. In particular, there are no more than three 16-bit PWM outputs, which severely limits this platform for my intended use as a multi-channel LED dimmer (up to 8 channels). For that application I'll probably use an ESP32, which has many more (16) such outputs.


>Could you please provide some of yours?

 

You can stop the timer while getting the values but you lose a few clock cycles each time, which may be insignificant and get buried in all the other errors.

 

Another option is to check whether CNT rolled over somewhere in the process. The read of CNT is atomic: the read of the low byte also latches the high byte into the temp register, and your isr is not touching CNT to mess that up. You can read it, get timerMillis, then read CNT again and compare it to the original value. If the original value is lower than or equal to the new value, there cannot have been a rollover where timerMillis changed; timerMillis and cnt are then a matched/valid set. If CNT rolled over, do it again (it will only need one more pass, and will happen infrequently).

 

uint32_t us;
uint16_t cnt;

for(;;){
    cnt = TCA0.SINGLE.CNT;
    us = timerMillis;
    if( cnt <= TCA0.SINGLE.CNT ) break;
}

//now calculate away

 

I do something like that in-

https://github.com/cv007/Avr0PlusPlus/blob/master/Rtc.cpp

line 78

 

 

 

Something else to maybe consider in the overall plan: just let the timer use all 16 bits and let the isr count overflows. When someone needs a timestamp, just give them CNT plus the overflow count as 16 or 32 bits (so either a 16+16-bit/32-bit number, or a 32-bit+16-bit struct). You still end up with the same issue as above, but you offload any calculation to the caller, who can do whatever they want with the clock-cycle timestamp; maybe all that is needed is the clock count, and the division can be eliminated. I guess it depends on whether clock cycles are what you are after, or time (where someone still has to calculate), or both. With 48 bits at 20 MHz, you can keep track of clock cycles for ~160 days until rollover.


Hm, as I understand it, when the counter overflows, an interrupt is generated. That doesn't mean that the ISR is immediately called though. What if the following happens:

 

  1. The micros() function is entered
  2. The interrupts are disabled
  3. The counter increments, and at this time overflows. This also sets the overflow interrupt flag
  4. CNT is read (meanwhile it's somewhere between 0 and 20 because it increments with every CPU cycle)
  5. timerMillis is read (which hasn't been updated yet because interrupts are disabled)
  6. CNT is read again (it's greater than before) and the loop is left

 

Now we've got wrong data because we used an outdated timerMillis and didn't notice it. The root issue is that from the moment we disable interrupts, we never know whether timerMillis is current or outdated relative to CNT. The only thing we know is that if the OVF flag is set, timerMillis was scheduled to be incremented but hasn't been yet.

 

Based on my update 3 above, I figured it might be the best compromise to look at the actual CNT value that was used for the calculation (read only once). If the overflow flag is set and CNT is low, then the counter has already overflowed but timerMillis wasn't updated yet. Should CNT still be high, it likely hasn't overflowed yet, so no correction is required. Here's the new code for that:

 

timer.cpp:

#include "config.h"
#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/atomic.h>
#include "timer.h"

volatile static uint32_t timerMillis;

#define TIMER_TOP (F_CPU / 1000 - 1)   // Overflow after 1 ms

// Initializes the use of the timer functions by setting up the TCA timer.
void initTimer()
{
	TCA0.SINGLE.PER = TIMER_TOP;
	TCA0.SINGLE.INTCTRL = TCA_SINGLE_OVF_bm;   // Enable overflow interrupt
	TCA0.SINGLE.CTRLA = TCA_SINGLE_CLKSEL_DIV1_gc | TCA_SINGLE_ENABLE_bm;   // Start without prescaler, at full CPU clock speed
}

// TCA overflow handler, called every millisecond.
ISR(TCA0_OVF_vect)
{
	timerMillis++;
	// Acknowledge the interrupt by writing a 1 to clear the OVF flag
	// (CNT restarts at BOTTOM automatically after reaching PER)
	TCA0.SINGLE.INTFLAGS = TCA_SINGLE_OVF_bm;
}

// Gets the milliseconds of the current time.
uint32_t millis()
{
	uint32_t m;
	ATOMIC_BLOCK(ATOMIC_RESTORESTATE)
	{
		m = timerMillis;
	}
	return m;
}

// Gets the microseconds of the current time.
uint32_t micros()
{
	uint32_t ms;
	uint16_t cnt;
	uint8_t flags;
	ATOMIC_BLOCK(ATOMIC_RESTORESTATE)
	{
		ms = timerMillis;
		cnt = TCA0.SINGLE.CNT;
		flags = TCA0.SINGLE.INTFLAGS;
	}
	// If the timer has overflowed and the ISR hasn't run yet (the ISR clears the
	// overflow flag), and CNT is in its first half (a read of CNT shortly after
	// the overflow is more likely than an overflow right after the read), then
	// add 1 ms to compensate for timerMillis not having been updated yet
	if ((flags & TCA_SINGLE_OVF_bm) && cnt < TIMER_TOP / 2)
		ms++;
	// First convert milliseconds to microseconds
	// Then add timer value converted to microseconds
	return ms * 1000 + cnt / (F_CPU / 1000000L);
}

 

The following has changed:

 

  • #define TIMER_TOP, to use the same value at init and for calculation
  • micros() implementation changed as described

 

I've tested this function by measuring the time between two received bytes on UART. It provides consistently good values around the expected byte times (for 9600 and 115200 baud), within ±5 µs, most within ±1 µs. I don't know whether I've actually hit an overflow though.


>The interrupts are disabled

 

That is the mistake: you do not disable interrupts. I guess I assumed too much and should have been more specific:

 

uint32_t micros()
{
    uint32_t us;
    uint16_t cnt;

    for(;;){
        cnt = TCA0.SINGLE.CNT;
        us = timerMillis;
        if( cnt <= TCA0.SINGLE.CNT ) break;
    }

    return us * 1000 + cnt / (F_CPU / 1000000L);
}

 

and maybe a better explanation:

-the read of CNT starts with a read of CNTL, which is atomic and also latches CNTH into the temp register

-since the isr does not read/write CNT, it does not matter if an irq interrupts after the CNTL read (temp still holds CNTH)

-get timerMillis; it doesn't matter if an irq happens during this read either

-get CNT again (atomically, like before) and compare it to the previous value we read

-if cnt is less than or equal to the latest CNT, there was no overflow and no irq, so timerMillis is good and the original cnt is good

-if cnt is greater than CNT, there was an overflow somewhere after the first CNT read, so do it again

Last Edited: Sat. Jan 18, 2020 - 05:13 PM