Effect of SEI;CLI

What happens when the SEI and CLI instructions are executed back-to-back? Are pending interrupts handled or not? I'm a little confused by the descriptions of these instructions:

Quote:
SEI - Sets the Global Interrupt Flag (I) in SREG (Status Register). The instruction following SEI will be executed before any pending interrupts.

Quote:
CLI - Clears the Global Interrupt Flag (I) in SREG (Status Register). The interrupts will be immediately disabled. No interrupt will be executed after the CLI instruction, even if it occurs simultaneously with the CLI instruction.

It looks like you need at least one instruction in between to ensure that the interrupts will be serviced. Is this correct?
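
To make it concrete, the two sequences I'm asking about would look like this (just a sketch; the comments reflect the datasheet wording quoted above):

    SEI          ; set I; the instruction after SEI always executes first
    CLI          ; ...which is this CLI, so presumably no pending interrupt gets in?

    SEI          ; set I
    NOP          ; the guaranteed instruction after SEI; a pending interrupt is taken after it
    CLI          ; executed either right after the NOP, or after the ISR returns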

Seems reasonable; the following CLI instruction disables interrupts before the SEI allows any through. But why on earth would you want to do this?

Neil

dicks wrote:

It looks like you need at least one instruction in between to ensure that the interrupts will be serviced. Is this correct?

That's not fully correct.

It ensures that only one interrupt will be serviced!

After the RETI, one further instruction of the main loop is executed before the next interrupt is serviced.

So if, for example, up to 5 interrupt sources are enabled, you must insert at least 5 NOPs in between.
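
For example, with five interrupts already pending, the sequence would have to look like this (rough sketch):

    SEI          ; enable interrupts
    NOP          ; guaranteed to execute; then pending interrupt 1 is serviced
    NOP          ; the one instruction after the RETI of 1; then interrupt 2 is serviced
    NOP          ; then interrupt 3
    NOP          ; then interrupt 4
    NOP          ; then interrupt 5; its RETI returns to the CLI below
    CLI          ; window closed again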

Peter

barnacle wrote:
But why on earth would you want to do this?

To service a pending interrupt at a specific point while running some code with interrupts disabled, for instance inside another interrupt handler.
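
Something like this inside a long-running handler (rough sketch; SOME_ISR is just a placeholder, and register/SREG saving is left out):

SOME_ISR:            ; entered with the I flag cleared by hardware
    ; ... first part of the work, which must not be interrupted ...
    SEI              ; open a window at a point of our choosing
    NOP              ; one pending interrupt (if any) is serviced right here
    CLI              ; close the window again
    ; ... rest of the work ...
    RETI             ; sets I again on the way out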

danni wrote:
It ensures that only one interrupt will be serviced!

In order for even one interrupt to be serviced, global interrupts would have to have been enabled already before the SEI. If they were disabled when the SEI is executed, then the SEI will not let an interrupt through until after the CLI has executed (which will not work), and the CLI shuts down global interrupts the instant it is executed. The data sheet description does not leave any window for an interrupt to be serviced. You would need a NOP (or other suitable instruction) between the SEI and CLI just to get a single already-pending interrupt serviced; extra NOPs would then allow further pending interrupts to be serviced in priority order.

If a simple SEI/CLI actually opens up an interrupt response window, then either the data sheet is wrong or there is an unexpected race condition inside the global interrupt enable logic. If it is a race condition, relying on it would be risky, since it might fail randomly, or it might get fixed in the next silicon revision and break the program.

The RETI/SEI behavior is designed to allow an instruction from the main non-interrupt program code to be executed even when pending interrupts overwhelm the processor with continuous interrupts. If there was no RETI/SEI enable delay at all, then continuous interrupts could completely stall the main non-interrupt program code rather than just slowing it down (probably way down).

If anyone has a program that uses this simple two-instruction SEI/CLI sequence, then you have found what should be a bug, assuming the author expected this code to open up an interrupt response window.

I think danni's point was that SEI;NOP;CLI will (only) allow a single interrupt to be
serviced -- between the NOP and the CLI. If you want to allow (up to) two to be
serviced, you'll need two NOPs, and so on. (I'm not sure this scales well (:-)).)

Might it not be simpler to keep the interrupts permanently enabled, but use the interrupt routine to set a flag which controls the actual interrupt operations?

That way the asynchronous nature of the interrupt response is maintained and the timing of the executable is under your control.
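
Something along these lines, perhaps (rough sketch in asm; the timer overflow is just an example event, and the vector table and flag initialisation are not shown):

.dseg
tick_flag: .byte 1           ; set by the ISR, cleared by the main loop

.cseg
TIMER0_OVF_isr:              ; example event source
    PUSH r16
    IN   r16, SREG           ; save SREG (good practice for any ISR)
    PUSH r16
    LDI  r16, 1
    STS  tick_flag, r16      ; just record that the event happened
    POP  r16
    OUT  SREG, r16
    POP  r16
    RETI

main_loop:
    LDS  r16, tick_flag
    TST  r16
    BREQ main_loop           ; nothing pending yet
    CLR  r16
    STS  tick_flag, r16      ; acknowledge the event
    RCALL do_tick_work       ; placeholder for the real, non-time-critical work
    RJMP main_loop

If events can arrive faster than the main loop services them, a counter or a small queue is safer than a single flag, since back-to-back events would otherwise merge into one.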

Neil

danni wrote:
After the RETI, one further instruction of the main loop is executed before the next interrupt is serviced.

So if, for example, up to 5 interrupt sources are enabled, you must insert at least 5 NOPs in between.


Are you sure that RETI behaves like SEI in that it will also execute at least one instruction? I cannot find that information in the data sheets.

From p. 14 of the ATmega8 Data Sheet (not copy protected!) near the end of the section
"Reset and Interrupt Handling":

Quote:
When the AVR exits from an interrupt, it will always return to the main program and execute
one more instruction before any pending interrupt is served.

Nor can I, but I know I've seen it somewhere. RETI is just a normal RET but with a SEI attached. Because of this, after a RETI one instruction of the main program is guaranteed to execute before the next waiting interrupt is processed.

- Dean :twisted:

Make Atmel Studio better with my free extensions. Open source and feedback welcome!

This behavior of interrupts is normal. When an interrupt is detected, the program counter is incremented and only AFTER that is it stored on the stack. A RETI, like a RET, loads the incremented value from the stack back into the PC, so the 'next' instruction is executed. The CPU does not look at the interrupt flags again at the end of the RETI instruction.

Imagine you have two pending interrupts, INT0 (triggered by level) and RXC, and two NOPs between SEI and CLI. Since INT0 has the higher priority, it will be served first. But what happens if, at the end of the INT0 ISR, the level on the pin is still present? INT0 will be served again and the RXC interrupt will not. So inserting as many NOPs as there are enabled interrupts does not guarantee that all of them will be served.
But you can use a trick. At the beginning of each ISR, before any PUSH, read the return address from the stack (with two POPs), decrement it so that it points back at the NOP, and put it back (with two PUSHes). After that you can push the registers as usual. This ensures that after each ISR is served, the same NOP is executed again, so your program will 'freeze' there until all pending interrupts have been served.
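
A sketch of what I mean (assuming a device with no more than 128 KB of flash, so the return address on the stack is two bytes with the high byte on top, and assuming r16-r18 are reserved for ISR use only in the whole program, so they need no saving here):

SOME_ISR:                    ; entered from the SEI/NOP/CLI window in the main code
    IN   r18, SREG           ; the 16-bit subtract below changes the flags
    POP  r17                 ; return address, high byte (top of stack)
    POP  r16                 ; return address, low byte
    SUBI r16, 1              ; step the return (word) address back by one word...
    SBCI r17, 0              ; ...so it points at the NOP again
    PUSH r16                 ; put it back: low byte first...
    PUSH r17                 ; ...high byte on top, as the hardware stored it
    OUT  SREG, r18           ; restore the flags
    ; ... now push whatever else the handler needs and do the real work ...
    RETI

Of course this is only valid if the interrupt really was taken inside the SEI/NOP/CLI window; anywhere else it will corrupt the return address.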

Yeow. Changing the PC of the interrupted program will wreck it unless the program is guaranteed to be sitting in your NOP window. I'd hate to have to prove that the system works correctly in all cases with an ISR modifying the return address as suggested.

I only mentioned the idea. The technique can be improved; for example, assign a variable so that the ISRs can decide whether or not it is appropriate to modify the PC.
The designer must decide whether it is worth it or not.

Another technique, if you want to abandon the current section of the program as a result of an interrupt event and jump to another section of the main program, is to replace the RETI with:
POP rx           ; discard the return address (two bytes)
POP rx
SEI              ; re-enable interrupts, as RETI would have done
RJMP new_location ; jump to the new section of the main program
(you must use a register whose value you don't need, and be very careful with this trick)

I don't know if compilers can do that. But that is the power of asm.

Quote:
This behavior of interrupts is normal.

It's A normal way of doing this, but not the only way. I was a bit surprised to find out that the ARM7 does it this way but the ARM9 doesn't. And then there's the pipeline-manic TI C6x, which as near as I can tell will take an interrupt immediately after a RETI (equivalent) but takes 5 clocks after an SEI (equivalent).

angelu wrote:
Another technique, if you want to abandon the current section of the program as a result of an interrupt event and jump to another section of the main program, is to replace the RETI with:
POP rx           ; discard the return address (two bytes)
POP rx
SEI              ; re-enable interrupts, as RETI would have done
RJMP new_location ; jump to the new section of the main program
(you must use a register whose value you don't need, and be very careful with this trick)

I don't know if compilers can do that. But that is the power of asm.

It's not the power of asm, it's only a sign of very bad programming style!

Because you never know at which point an interrupt will occur (that is the power of interrupts), your program will crash immediately with such a sick approach.

And thus no compiler uses such a bad practice.
But if I heard that my compiler did it anyway, I would put it in the trash can instantly and use another, reliably working compiler.

Typically such sick ideas are only the result of bad program planning.

Peter

Danni:
"Because you never know, on which point an interrupt occur."

That is true. But you can manage the interrupts (the interrupt enable bits), so for a given zone of the program you know whether an interrupt can be served or not.

The trick mentioned above has limitations, but it can work.
This is not the place to fully describe the idea.

To say it is a sick idea, or bad programming style, is going too far in my opinion.

Sick, no.
Ill-advised, yes.

Quote:
Ill-advised, yes.

There are other ways to allow an "interrupt" at specific times. The one that comes to mind most quickly is to simply check the interrupt flags of those that you might expect. You have to pay attention to the IE flag when you call a service routine.
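
For example (a sketch assuming an ATmega8-style USART; handle_rx is a placeholder subroutine that ends in RET, not RETI, so the I flag stays untouched):

    ; interrupts are globally disabled here, but we poll for one event we expect
    SBIS UCSRA, RXC          ; has a byte been received?
    RJMP no_rx               ; no - carry on
    RCALL handle_rx          ; yes - plain subroutine; reading UDR in it clears RXC
no_rx:
    ; ... continue with interrupts still disabled ...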

angelu wrote:

To say it is a sick idea, or bad programming style, is going too far in my opinion.

It was the best way I could find to express how dangerous it is:

It totally ruins your programming style, costs many man-years of debugging time, and makes the code unreliable, unmaintainable and not upgradeable!

So I can only warn everybody:

HANDS OFF !!!!!!!!!

Peter

There's nothing inherently wrong with doctoring a return address - pre-emptive task switchers rely on this technique to switch tasks. They'll also play with the stack pointer! Sudden-death crash if you're not careful, though!

Personally, I've written code that is interrupt-sequence dependent and I would not recommend it. I would suggest a rethink of the program architecture: try to separate the real-time code from the non-real-time code. For example, with serial interrupts you don't necessarily need to process the incoming data in the ISR; you just need to store it away, and another non-ISR routine can process it (see the sketch below). Also, try to minimise the number of different interrupt sources where possible. It's not fun to debug a piece of code that is so interrupt-critical that a small change stops it from working.
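
As a sketch of the serial example (ATmega8-style USART register names; a 16-byte power-of-two buffer; rx_head and rx_tail are assumed to be cleared at startup, and the vector table is not shown):

.equ RX_SIZE = 16            ; must be a power of two
.equ RX_MASK = RX_SIZE - 1

.dseg
rx_buf:  .byte RX_SIZE
rx_head: .byte 1             ; written only by the ISR
rx_tail: .byte 1             ; written only by the main loop

.cseg
USART_RXC_isr:               ; keep the ISR short: just store the byte
    PUSH r16
    IN   r16, SREG
    PUSH r16
    PUSH r17
    PUSH r26                 ; XL
    PUSH r27                 ; XH
    IN   r17, UDR            ; read the data (this also clears RXC)
    LDS  r16, rx_head
    LDI  r26, LOW(rx_buf)
    LDI  r27, HIGH(rx_buf)
    ADD  r26, r16
    BRCC rx_nc
    INC  r27
rx_nc:
    ST   X, r17              ; rx_buf[rx_head] = received byte
    INC  r16
    ANDI r16, RX_MASK        ; wrap around (an overrun overwrites old data)
    STS  rx_head, r16
    POP  r27
    POP  r26
    POP  r17
    POP  r16
    OUT  SREG, r16
    POP  r16
    RETI

poll_rx:                     ; called from the main loop, interrupts stay enabled
    LDS  r16, rx_tail
    LDS  r17, rx_head
    CP   r16, r17
    BREQ poll_rx_done        ; buffer empty
    LDI  r26, LOW(rx_buf)
    LDI  r27, HIGH(rx_buf)
    ADD  r26, r16
    BRCC poll_nc
    INC  r27
poll_nc:
    LD   r18, X              ; fetch the oldest byte
    INC  r16
    ANDI r16, RX_MASK
    STS  rx_tail, r16
    RCALL process_byte       ; placeholder: the real processing, outside the ISR
poll_rx_done:
    RET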