I'm having difficulty understanding the timing of an ISR. I'm using Timer/Counter0 to both toggle OC0A output and generate an interrupt. The ISR toggles output pin D2 while a loop in main() toggles output pin D0.
I'm using Atmel Studio 6.1 with an ATmega32U2, and I'm observing the outputs with a logic analyzer.
Here is the program:
    #define F_CPU 16000000UL // @ 5.0V

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <avr/wdt.h>
    #include <avr/power.h>

    void Init_LED(void)
    {   // Initialize LED on Mattair Board (D0)
        DDRD  |= (1 << DDD0);   // output
        PORTD |= (1 << PORTD0); // output High = LED On
    }

    void Init_OutD2(void)
    {   // Pin D2 is an output
        DDRD  |= (1 << DDD2);   // output
        PORTD |= (1 << PORTD2); // set output High
    }

    void Init_Tmr0(void) // output square wave on OC0A=PB7
    {
        // Mode 2: CTC (WGM0[2:0] = 2), TOP=OCR0A
        // PB7 is an output
        DDRB |= (1 << DDB7);
        // CTC with TOP=OCR0A
        // Toggle OC0A on Compare Match
        TCCR0A = ( (1 << COM0A0) | (1 << WGM01) );
        // TCCR0B=0x03, Timer CLK = clk(I/O)/64 (from prescaler)
        TCCR0B = ( (1 << CS01) | (1 << CS00) );
        OCR0A  = 23;    // TOP, divide by 24
        TIMSK0 = 0x02;  // Output Compare Match A Interrupt Enabled
    }

    void SetupHardware(void)
    {
        // Disable watchdog if enabled by bootloader/fuses
        MCUSR &= ~(1 << WDRF);
        wdt_disable();

        // Set clock division
        clock_prescale_set(clock_div_1); // 16MHz/1 = 16MHz for CPU @ 5.0V

        Init_LED();
        Init_OutD2();
        Init_Tmr0();
    }

    int main(void)
    {
        SetupHardware();
        sei();

        while (1)
        {   // Toggle LED on output D0
            PIND = (1 << PIND0);
        }
    }

    ISR(TIMER0_COMPA_vect) // interrupt from timer0
    {   // Toggle Output Pin D2
        PIND = (1 << PIND2);
    }
This is from the .lss file for main():
When there is no interrupt, output pin D0 is toggled every 3 CPU clock cycles.
Next is the .lss for the ISR:
The entire interrupt duration is 31 CPU clock cycles (5 cycles to enter the ISR and 26 cycles for the ISR itself).
So far, so good. It matches what is expected from the data sheet.
Here's where things start to get funky:
I have aligned the "out" instruction in the ISR with the transition of pin D2. The instruction begins with the falling edge of the CPU clock, and the output is asserted on the rising edge of the clock.
Edit: This is wrong! The instruction begins and ends with the rising edge of the clock. See this post.
You can see that the interrupt duration stretches the low phase of pin D0 from the normal 3 cycles to 34 cycles, which agrees with the 31 cycles calculated for the interrupt (3 + 31 = 34).
Two cycles are marked with a red "?". It appears that the two-cycle rjmp instruction in main() is being split by the ISR, even though the interrupt flag was asserted in the clock period prior to the out instruction.
Now look at the following trace:
The rising edge of OC0A is coincident with the falling edge of pin D0. Again aligning the "out" instruction with the falling edge of pin D2, there is also a gap of one cycle (marked with a red "?") after the rjmp and before the first cycle of the interrupt. Also, it appears that the last cycle of the reti instruction in the ISR occurs at the same time as the "out" instruction in main().
It is as though the falling edge of D2 is one cycle late.
I have no explanation for this.