Why not disable appropriate interrupt(s) instead of CLI


Hope you all had a good Christmas.

 

Been doing a lot of studying on atomicity, and as a sidenote it's really giving me an appreciation of why learning some assembler is very helpful. I'm seeing how some C statements compile to a single assembler instruction, while another similar C statement ends up as a non-atomic read-modify-write (RMW) set of assembler instructions. All very interesting. Anyway, back to the main question.
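To make the tearing hazard concrete, here is a small host-runnable illustration (not from the thread; the "ISR" is just a function call injected between the two byte reads, and all names and values are made up). On an 8-bit AVR a 16-bit read really is two separate byte loads, and this shows what happens if an interrupt updates the variable in between:

```c
#include <stdint.h>

/* Hypothetical shared counter, e.g. incremented by a timer ISR. */
static volatile uint16_t counter = 0x00FF;

static void fake_isr(void) { counter++; }   /* 0x00FF -> 0x0100 */

/* Non-atomic 16-bit read with an "interrupt" between the two byte loads. */
static uint16_t torn_read(void)
{
    uint8_t lo = (uint8_t)(counter & 0xFFu); /* low byte read: 0xFF */
    fake_isr();                              /* interrupt fires here */
    uint8_t hi = (uint8_t)(counter >> 8);    /* high byte read: 0x01 */
    return (uint16_t)((uint16_t)hi << 8 | lo); /* 0x01FF: a value that never existed */
}
```

The result is neither the old value (0x00FF) nor the new one (0x0100), which is exactly why the reads must be made atomic one way or another.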

 

A typical example is when you have a multibyte variable shared between main code and an ISR, or you are copying a transmission buffer array to another "main code" array, and you want to stop an ISR stuffing things up in the middle.

The code I normally see suggested goes something like:

 

SREG_copy = SREG;      // Save SREG and its I bit status (interrupts may already be disabled, so we need to put them back that way).

cli();

// Now run this section of code atomically.

SREG = SREG_copy;      // And put SREG back to what it was, with the I bit however it was.

 

Is there any reason why it wouldn't be better to disable ONLY the interrupt(s) that could corrupt the section of code you want to run atomically? Why take a sledgehammer approach and disable all interrupts?
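Applied to a shared 16-bit variable, the save/restore pattern above might look like the following sketch (illustration only; SREG and cli() are mocked here so it runs on a host, whereas on a real AVR they come from <avr/io.h> and <avr/interrupt.h>):

```c
#include <stdint.h>

/* Mocked AVR state: bit 7 of SREG is the I (global interrupt enable) bit. */
static uint8_t SREG = 0x80;
static void cli(void) { SREG &= (uint8_t)~0x80; }

static volatile uint16_t shared;            /* written by an ISR, read by main */

/* Read 'shared' atomically, restoring the I bit to whatever it was before. */
static uint16_t atomic_read16(void)
{
    uint8_t SREG_copy = SREG;   /* save SREG, including the current I bit */
    cli();                      /* disable all interrupts */
    uint16_t value = shared;    /* both byte reads now happen back to back */
    SREG = SREG_copy;           /* restore the I bit to its previous state */
    return value;
}
```

Restoring the saved copy (rather than calling sei()) is what makes the construct safe to use even when interrupts were already disabled on entry.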

 

Keith.

Last Edited: Sun. Dec 25, 2016 - 12:08 PM

You are absolutely right: for atomic protection you could just switch off the interrupt(s) that share the variable being accessed. Your choice.

 

BTW have a look at ATOMIC_BLOCK() 
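For reference, ATOMIC_BLOCK() from avr-libc's <util/atomic.h> wraps the same save/cli/restore dance. The sketch below mimics its ATOMIC_RESTORESTATE behaviour with a plain for-loop macro so it runs on a host (the real macro uses __attribute__((cleanup)) to restore SREG on any exit path; everything here is mocked):

```c
#include <stdint.h>

/* Mocked AVR state: bit 7 of SREG is the I bit. */
static uint8_t SREG = 0x80;
static void cli(void) { SREG &= (uint8_t)~0x80; }

/* Host-side stand-in for ATOMIC_BLOCK(ATOMIC_RESTORESTATE): save SREG,
 * cli(), run the body once, then restore SREG on the way out. */
#define ATOMIC_BLOCK_MOCK \
    for (uint8_t sreg_save = SREG, run = (cli(), 1); run; SREG = sreg_save, run = 0)

static volatile uint16_t shared;

static uint16_t read_shared(void)
{
    uint16_t v = 0;
    ATOMIC_BLOCK_MOCK {
        v = shared;             /* interrupts are off for just these loads */
    }
    return v;                   /* SREG (and the I bit) already restored */
}
```

On a real AVR you would simply write `ATOMIC_BLOCK(ATOMIC_RESTORESTATE) { v = shared; }` after including <util/atomic.h>.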


Artandsparks wrote:
Is there any reason why it wouldn't be better to disable ONLY the interrupt(s) that could corrupt the section of code you want to run atomically? Why take a sledgehammer approach and disable all interrupts?

 

Defensive programming. Even if you work out exactly which interrupts can be disabled, you need to review that every time a code change is made. It would be very easy to introduce a new interrupt and forget to disable it, and very hard to test and debug.

Bob.


Also, you might have a timing constraint with the enabling/disabling of the given interrupt source that might be violated by another interrupt source being serviced. Nevertheless, in concept you can enable/disable the interrupt source in question.


Rubbish. You know which variable that you want to read. You know which ISR() updates it.
.
The typical atomic access is a single statement. Not too difficult to maintain.
.
Disabling specific interrupts is generally just as quick as disabling global interrupts.
.
It is your choice. Part of the project design process.
.
David.


But you want that ISR, just not now!

So if you disable and re-enable the ISR, some care should be taken.

 

If you do ASM, I suggest placing a shared int (there should be at most a couple; otherwise there is something wrong with your data flow) in a (low) register pair, and then avoiding atomicity problems by using the movw instruction in "main".


Artandsparks wrote:

 

SREG_copy = SREG;      // Save SREG and its' I bit status (interrupts may already be disabled so need to put them back that way).

cli();

// Now run this section of code atomically.

sei();     // And put SREG back to what it was with the I bit however it was.

 

Are you sure about that? 


Thanks everyone.

 

Steve,

 

Are you sure about that? 

 

Crikey, I'm an idiot LOL.

That was meant to be:

 

SREG  = SREG_copy      >>NOT<<     sei();

 

At least my comment explains what I meant to do. I'll edit my original opening post. 

 

Keith.


Rubbish. You know which variable that you want to read. You know which ISR() updates it.
.
The typical atomic access is a single statement. Not too difficult to maintain.
.
Disabling specific interrupts is generally just as quick as disabling global interrupts.

+1.  I refer to it as "relative atomicity."  

Greg Muth

Portland, OR, US

Xplained/Pro/Mini Boards mostly

 

Make Xmega Great Again!

 


Disabling with cli leaves the individual interrupts enabled, and if one of them hits in the microseconds that interrupts are off, the int that hits is still latched/pending and will hit the next instruction after sei, according to priority; so if two are pending, they both get serviced. Disclaimer: I think.

 

Imagecraft compiler user


Disabling with cli leaves the individual interrupts enabled, and if one of them hits in the microseconds that interrupts are off, the int that hits is still latched/pending and will hit the next instruction after sei, according to priority; so if two are pending, they both get serviced.

 

Disabling the specific interrupt also leaves the individual interrupts enabled, except for the specific one.  In the case of an Xmega, where there are three programmable interrupt levels, a higher priority interrupt can interrupt the "relatively atomic" operation that is currently executing.  When the specific interrupt is (re-) enabled, if the interrupt flag is set, an interrupt will be generated.  I don't see how an interrupt can be missed, as your statement seems to be implying.

 

EDIT: Err, I was thinking in terms of being inside an ISR in the statement above...  By disabling just the specific interrupt, another interrupt can be serviced on any of the AVRs (Xmega, Mega, Tiny...) while the "relatively atomic" operation is executing.

 

Greg Muth

Portland, OR, US

Xplained/Pro/Mini Boards mostly

 

Make Xmega Great Again!

 

Last Edited: Sun. Dec 25, 2016 - 03:52 PM

clawson wrote:
You are absolutely right, ...

 

"Absolutely"?!?  Trotting out the old superlatives on Christmas Day?

 

1)  OP says atomicity is being explored.  Fair enough.  But I often see the construct used with a "timed sequence".  [is that atomicity when the instructions need to be run next to each other?] [[Generally in my apps I 'know' that global interrupts are disabled during init and enabled throughout normal operation.  But using the save/restore construct throughout is indeed best for bulletproof generic work.]]

 

2)  Fussing with bits in the MSK or similar registers introduces RMW considerations there.

 

3)  ...and those RMW sequences are slower and larger than the CLI/SEI pair.  Might I mention "OVERKILL"?

 

4)  sparrow2 mentioned the possibility of missing an interrupt.  Surely that is of utmost importance compared to any other possible "justification".  (Are there any "justifications" given so far other than the nebulous "overkill"?)

 

"Absolutely"?  I think not.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


I said he was absolutely right.  You DO have an either/or choice of which way you want to approach this. I stand by that assertion.  You absolutely DO have such a choice to make. 


Thanks again, lots of interesting things to ponder.

 

What Russel said:

Also, you might have a timing constraint with the enabling/disabling of the given interrupt source that might be violated by another interrupt source being serviced. Nevertheless, in concept you can enable/disable the interrupt source in question.

 

And Theusch said:

But I often see the construct used with a "timed sequence".  [is that atomicity when the instructions need to be run next to each other?]

 

Not sure if those two statements are talking about the same thing. I can't think up such a situation off the top of my head but I'll be looking for any in the future.

 

The RMW code needed to set/clear an interrupt enable bit is also something I didn't think about. That made me think about inline assembler and using CBI and SBI but they only operate on the lower 32 I/O registers.

 

As Cliff and others have said, it's a choice programmers have to make based on at least the considerations mentioned so far.

 

Keith.


clawson wrote:
I said he was absolutely right. You DO have an either/or choice of which way you want to approach this. I stand by that assertion.

I don't think you are correct.  At least I don't agree.

 

Besides the inefficiencies (which takes away the "absolutely" IMO), there is the hole that sparrow2 pointed out.

 

Yes, I suppose one could say "I can disable that interrupt in order to read that ADCW value in the mainline from my free-running ADC channel.  How could there possibly be enough interrupt latency to allow myself to trip over my own organ?"

 

Or I could say "I'll do the fastest and shortest and most bulletproof approach and use CLI and SEI."  (well, actually 'restore' right?)

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Artandsparks wrote:
As Cliff and others have said, it's a choice programmers have to make based on at least the considerations mentioned so far. Keith.

Again, I'll say "wrong".  It is no more of a "choice" than ignoring carry in multi-byte operations.  Go back to your original query/assertion:

 

Artandsparks wrote:
Is there any reason why it wouldn't be better to disable ONLY the interrupt(s) that could corrupt the section of code you want to run atomically? Why take a sledgehammer approach and disable all interrupts?

 

As you have seen in responses in this thread, there ARE reasons to do things with the CLI approach.

 

Do you have the choice to do things in at best a less efficient manner?  Surely.  Do you have the choice to ignore the RMW hole that might arise, or ignore the possible missed interrupt?  Well, indeed, I'm wrong -- OF COURSE...ABSOLUTELY..YOU HAVE THE CHOICE to do it your own way.  It has just as much merit as ignoring atomic access considerations in the first place -- eventually you will burn.  Errr, I meant get burnt.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


OK, I'm listening, so let's take the following example, where the specific interrupt is disabled/re-enabled. Hope I don't mess this up, I've had a bourbon.

 

If I use the following to disable T2 compare A interrupt:

TIMSK2 &= ~(1<<OCIE2A);

I get the following assembly generated:

LDS  R30,112           ; 2 cycles
ANDI R30,0xFD        ; 1 cycle
STS  112,R30           ; 2 cycles

 

And to re-enable the interrupt:

TIMSK2 |= (1<<OCIE2A);

generates assembly code:

LDS  R30,112        ; 2 cycles
ORI  R30,2             ; 1 cycle
STS  112,R30        ; 2 cycles

 

So I notice straight away that it's not efficient compared to CLI and SEI. We are talking 10 cycles vs 2 cycles to disable and re-enable the interrupt. But that's only significant if it causes any issues, correct? A whole whopping microsecond (16 cycles) wouldn't hurt my application.

 

I'm using Codevision and it saves/restores SREG plus any other registers it uses in its ISRs, so even if those RMW instructions were "intercepted" by an ISR, the RMW sequence would complete correctly after the ISR, correct? So what's the difference if an ISR fired just before a CLI instruction, or if one fired in the middle of the RMW operation that disables the specific interrupt? Sorry if there's something I can't see, but with my newbie knowledge it appears things would end up the same.

 

Up to now I cannot SEE where I could get burnt. Not saying I wouldn't, I just can't see it yet.

 

I especially cannot see where/how an interrupt could get missed. What would cause that? I'd only be temporarily disabling a specific interrupt. All the rest are active and could fire in the middle of my atomic code operation, but that wouldn't make any difference because none of the other ISRs would affect that code block. OK, fair enough, that means it's NOT an atomic code block, but at the end of the day it's all about that code block not being corrupted by certain interrupt ISR(s). As for the interrupt I've disabled, if its interrupt flag was set then its ISR would run AFTER its interrupt enable bit is set again.

 

So up to this point, I see some inefficiency in cycle time to disable and re-enable an interrupt, but I also see having the ability to allow other important ISRs to run as needed. Isn't that sometimes a basic requirement? I probably don't need that at the moment but this whole question is interesting.

 

Not fighting anyone here, not arguing, just putting my newbie thoughts across and trying to see what it is I'm missing. Cliff has had to hammer me on occasion to get me to see something. Better stop writing now, that bourbon is reaching the neurological pathways.

 

Keith.


Keith, you've grasped the fundamentals, so you're in a good position to make a decision.


Artandsparks wrote:
I'm using Codevision and it saves/restores SREG plus any other registers it uses in its ISRs, so even if those RMW instructions were "intercepted" by an ISR, the RMW sequence would complete correctly after the ISR, correct?

 

Well, yes.  As with some of the other comments I've made, what you are saying is technically correct -- but doesn't show the whole picture.

 

Yes, your RMW demonstrated would "complete" after the ISR.  But your shown sequence isn't atomic.  So in the general app (and especially if you extend this construct) any ISR work that messes with TIMSK2 creates RMW problems.  So you are covering up one atomicity hole by creating another.
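The new hole can be shown in a few lines (an illustration of the point above, not anyone's real code; TIMSK2 is mocked as a plain byte, the bit positions are hypothetical, and the "ISR" is just a function call placed where an interrupt could strike mid-RMW):

```c
#include <stdint.h>

/* Mocked timer interrupt mask register: OCIE2A (bit 1) and TOIE2 (bit 0) set. */
static uint8_t TIMSK2 = 0x03;

static void fake_isr(void) { TIMSK2 |= 0x04; }   /* some ISR enables OCIE2B (bit 2) */

/* main's "TIMSK2 &= ~(1<<OCIE2A)" with an interrupt in the middle of the RMW. */
static uint8_t rmw_race(void)
{
    uint8_t tmp = TIMSK2;       /* LDS: read 0x03 into a register */
    fake_isr();                 /* ISR fires here and sets bit 2: TIMSK2 = 0x07 */
    tmp &= (uint8_t)~0x02;      /* ANDI: clear OCIE2A in the now-stale copy */
    TIMSK2 = tmp;               /* STS: write 0x01 back -- the ISR's bit 2 is gone */
    return TIMSK2;
}
```

The write-back silently undoes the ISR's change to the shared mask register, which is the atomicity hole being traded for the one you started with.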

 

Artandsparks wrote:
Up to now I cannot SEE where I could get burnt. Not saying I wouldn't, I just can't see it yet.

Do you see it now?

 

Is the possibility remote?  Sure.  So are the possibilities with any simple atomicity issue on a 16-bit value -- what are the chances that high byte will change between byte reads?

 

Is the possibility of a lost interrupt remote?  Sure.  Do you want to chance it in your app?

 

Why fight it?  Why not do it "right"?  How can you still say it is a sledgehammer to use CLI/SEI, when it is in fact a surgical hammer and your proposed solution is a large mallet?

 

1)  Your solution has a larger ecological footprint.

2)  It has a possibility of an RMW problem.

3)  It has the possibility of losing an interrupt event.

 

Artandsparks wrote:
I especially cannot see where/how an interrupt could get missed. What would cause that. I'd only be temporarily disabling a specific interrupt. All the rest are active and could fire in the middle of my atomic code operation, but that wouldn't make any difference because non of the other ISRs would affect that code block.

You turn off servicing of that interrupt.  In your example a timer interrupt; fair enough.  An 8-bit timer is a bit contrived in itself, but I suppose you can be maintaining a counter of some sort.  You have a short interval between servicing that interrupt -- perhaps an ICP operation of some type. 

 

Let's just say that there will be two hits of the interrupt, 10us apart.

 

But one hits while the interrupt is disabled, and then the UART and another interrupt get serviced first; the second hit arrives while the first is still pending. You lost an event.

 

Again, indeed it is a tiny hole.  Indeed it may never happen in your current app in its simple configuration.  But why court the disaster?  ESPECIALLY when your "solution" is less than optimal?

 

But I seem to be a committee of one in this thread, and the gurus seem to agree with you.

 

 

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Artandsparks wrote:

 

I'm using Codevision and it saves/restores SREG plus any other registers it uses in its ISRs, ...

NB:  CV's "smart" approach to ISR code generation will indeed save and restore SREG -- when needed.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


theusch wrote:

 

Let's just say that there will be two hits of the interrupt, 10us apart.

 

But one hits while the interrupt is disabled

Does it matter whether or not the interrupt is disabled?  When the device wants service the "interrupt flag" will be set.  If the "interrupt flag" is already set, you will lose one interrupt.

 

I'm thinking what Atmel calls the "interrupt flag" should better be called the "service request flag".  It gets set even if you don't use interrupts at all. 

 

No matter what you call it, only one "service request" can be queued. If the device requests another one before the first one is serviced, or at least before the service request flag is cleared, you lose one.

 

I guess if the "service request flag" gets cleared early in the interrupt handler, then the device can request service again while you are servicing the first request, and you will eventually process both of them.  But I think the state of the particular interrupt enable or global interrupt enable wouldn't matter.
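The one-deep latch described above can be modelled directly (a sketch of the idea, not real hardware behaviour in every detail; all names are made up). Two requests arrive while the source is disabled, but only one can be remembered:

```c
#include <stdbool.h>

/* Mocked per-source interrupt hardware: a one-deep "service request" latch. */
static bool int_flag    = false;   /* the "interrupt flag" / service request */
static bool int_enabled = true;    /* the per-source enable bit */
static int  serviced    = 0;

static void device_requests_service(void) { int_flag = true; } /* stays set if already set */

static void cpu_poll(void)         /* CPU takes the interrupt when allowed */
{
    if (int_enabled && int_flag) { int_flag = false; serviced++; }
}

static int lost_interrupt_demo(void)
{
    int_enabled = false;            /* source disabled for an "atomic" section */
    device_requests_service();      /* first hit: flag set */
    device_requests_service();      /* second hit 10 us later: flag already set -- lost */
    int_enabled = true;
    cpu_poll();                     /* only one pending request survives */
    cpu_poll();
    return serviced;
}
```

As the post argues, the same loss occurs whether servicing was delayed by cli or by the per-source enable bit: the latch depth is one either way.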

 

But really the only way to settle this once and for all is to have a fist fight. :-)


Couldn't Shirley spec a test case, and several freaks code it up and see if one technique is more robust? How about timer2 compa, timer1 compa and timer0 compa each incrementing their own 16-bit var in their handlers and filling in 3 arrays, and the timers are a multiple of 2 or 3 of each other, but pick times so ints aren't off longer than the highest int, which is timer2. Rig up uart rx int. This event is asynch to the timers running, so pecking the space bar should hit during the timer ints occasionally. And after the arrays fill up, shut off ints and print the arrays. Should be no gaps in the filled-in arrays. I am reasonably sure the cli technique won't miss any ints. Not sure about the disable-individual-int technique. I'm almost curious enough to do this. I'm off this week, and I sure don't want to do honey-dos.

 

Imagecraft compiler user


bobgardner wrote:

I am reasonably sure the cli technique won't miss any ints. Not sure about the disable-individual-int technique.

 

What's the difference?

 

Maybe keeping interrupts disabled and having the CPU poll would be the fastest, because there is no context switching.


A fist fight seems simpler to me.

Soooo glad I wasn't drinking coffee :)

 

For the record, I regularly disable specific interrupts instead of employing cli/sei, but not for the OP's queried purpose of atomic access.  Rather, within an interrupt source's selfsame ISR.  It's handy for reducing the latency experienced by other ISRs.  For example, if I have a relatively lengthy/slow ISR 'A', say 100 cycles, which doesn't itself require low latency, and another ISR 'B' which >>does<< require low latency, I re-enable global interrupts in 'A' after disabling 'A's interrupt source.  This can reduce the worst-case latency experienced by 'B' at the hands of 'A' from the above presumed 100 cycles often down to 10 or 20 cycles (depending on the interrupt source itself).  It is not a silver bullet.  There are often other solutions (e.g. make 'A' shorter/faster to begin with), and sometimes they are preferable.  As with everything, 'it depends'.
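The ISR-'A' technique described above can be sketched as follows (mocked hardware, hypothetical register and bit names; on a real AVR the hardware clears I on ISR entry and RETI sets it again, which the comments stand in for):

```c
#include <stdint.h>
#include <stdbool.h>

/* Mocked state: bit 0 of TIMSK1 is ISR A's own enable bit (hypothetical). */
static uint8_t TIMSK1 = 0x01;
static bool    i_bit  = false;     /* global I bit; false simulates post-ISR-entry */
static void sei(void) { i_bit = true; }
static void cli(void) { i_bit = false; }

static bool b_could_preempt = false;

static void slow_work(void)        /* the ~100-cycle body of ISR 'A' */
{
    b_could_preempt = i_bit;       /* low-latency ISR 'B' may fire here iff I is set */
}

static void isr_a(void)
{
    /* hardware has already cleared I on entry */
    TIMSK1 &= (uint8_t)~0x01;      /* disable our own source so we can't re-enter */
    sei();                          /* let other (e.g. low-latency) ISRs in */
    slow_work();
    cli();                          /* close the window before re-enabling ourselves */
    TIMSK1 |= 0x01;                 /* restore our own source */
    /* RETI would set I again */
}
```

Disabling the own source before sei() is the crucial ordering: it prevents re-entry of 'A' while still letting every other ISR preempt the slow body.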

 

I appreciate Lee's opposition to the approach contemplated by the OP, and I expect my use of 'nested' interrupts will raise a few eyebrows, but in my view the pitfalls are not inherently greater than any other issue faced by embedded developers.  The answer in my view is good documentation.  The risk that:

So in the general app (and especially if you extend this construct) any ISR work that messes with TIMSK2 creates RMW problems.  So you are covering up one atomicity hole by creating another.

... or something like it is >>always<< there in embedded.  When working on 'bare metal', there is no-one there to make sure you're not shooting yourself in the foot when the left hand doesn't know what the right hand is doing (if you'll forgive the mixed metaphors).  There are plenty of ways you can create havoc by touching the same register/whatever from multiple separate functions/ISRs/threads, even if you adhere to the accepted, 'safe', cli/sei approach.  The developer's job is to know what those pitfalls are and to manage them, and document his code's journeys to the volcano's edge so that the next guy doesn't fall into the volcano.

 

The 'win' in the OP's approach is that the atomic access at hand results in zero additional latency for all other enabled interrupt sources.  Whether or not that 'win' is important enough to bother with will depend on the application.  The 'lose' is the extra cycles/words consumed by the approach, and the need to carefully manage the risks elsewhere in the code, and >>especially<< in the documentation.  Again, 'it depends'.  Personally, I have never worked on an application which would require it, as most atomic accesses via cli/sei keep interrupts disabled for single-digit cycles.  Atomic access of a 32-bit entity in SRAM keeps interrupts disabled for ten cycles.  Less than a microsecond at 16 MHz:

    static volatile uint32_t foo;
    uint32_t bar;
    ATOMIC_BLOCK(ATOMIC_FORCEON) {
      bar = foo;
    }
static __inline__ uint8_t __iCliRetVal(void)
{
    cli();
    31ee:	f8 94       	cli
    }

    static volatile uint32_t foo;
    uint32_t bar;
    ATOMIC_BLOCK(ATOMIC_FORCEON) {
      bar = foo;
    31f0:	80 91 ac 01 	lds	r24, 0x01AC
    31f4:	90 91 ad 01 	lds	r25, 0x01AD
    31f8:	a0 91 ae 01 	lds	r26, 0x01AE
    31fc:	b0 91 af 01 	lds	r27, 0x01AF
    return 1;
}

static __inline__ void __iSeiParam(const uint8_t *__s)
{
    sei();
    3200:	78 94       	sei

Since sei guarantees that one more instruction will be executed before any pending interrupt is serviced, there will be another 1-5 cycles of latency, depending on the instruction which follows sei.  Most likely instructions are 1, 2, or 3 cycles.

 

 

I hope all who celebrate Christmas had a pleasant one.  I believe I am still digesting an enormous turkey dinner...

 

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


The cli method is efficient: at most 3 cycles, 3 words and 1 register.

If one already knows interrupts are enabled, 2 cycles, 2 words and no registers.

All methods require at least 2 cycles, 2 words and 1 register: 2 OUT instructions.

One needs one more register or a LDI instruction.

Outside the OUT range, at least 4 cycles.

 

In the generic case, use the cli method.

If you have a need to keep some interrupts enabled,

use another method.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


Thanks lads, this thread is dynamite.

 

Even if I never need to do things this way, I'm sure many others are going to learn some good stuff from all this.

 

Theusch,

yes, I can see now how an interrupt could get missed. So if I've got this correct, your point is that for interrupts that fire within a short time of the last one, the RMW instructions HAVE THE POTENTIAL to add enough delay to the time the interrupt is disabled, and thus the disabled interrupt could "fire" (set its interrupt flag) TWO times, so to speak, then only get serviced once of course, after its interrupt enable bit is set again. That's a good point, however small the chance is. It certainly won't affect what I'm playing with today (most of my ISRs are 1000s of cycles apart), but tomorrow it could be very valid. Initially I was thinking, "But the interrupt is disabled whether you use CLI or the RMW method", then I realised you were talking about the extra overhead time added by the RMW method. Thanks for the continued hammering, my caveman brain has got it now.

 

"NB:  CV "smart" approach to ISR code generation will indeed save and restore SREG -- when needed."

Yes, I noticed some ISRs hardly save/restore anything whereas others are saving SREG, R30, etc. The documentation says CV saves/restores whatever it uses in the ISR. I haven't checked yet if GCC does the same. Earlier, I never realised C compilers would do this, and during my atomicity learning I was thinking: so many simple C instructions translate to two, three, or more assembly instructions, so how come they aren't always getting corrupted by ISRs? Now I know.

 

I know I started off mentioning atomicity, but at the end of the day, I suppose I'm not truly looking for ONLY that. I'm more looking into running certain blocks of code that could get stuffed up by one or more ISRs. It's not the block of code being intruded upon by an ISR that is the concern, but more so the block of code being intruded upon by an ISR that could corrupt it. 

 

If we don't chat any more before New Year, I wish you all a good one.

 

Keith.


Is this in general or for a special case?

 

Since you use CV, you can totally avoid the problem, if it's only about a couple of 16-bit variables, by putting them in low registers (I guess theusch can help you with how; the problem has come up before, and GCC can't make that code).

The main thing is that if the generated code uses movw, it's by definition atomic (for 16-bit variables).

 


I hadn't heard of movw, but there are a lot of things I haven't heard of.  I'm guessing there is a macro for GCC that uses it.

 

I am somewhat familiar with setting a bit with LAS, LAC, and LAT.  I think they perform atomic operations.  There are macros for them.  They are used for Xmega USB where some of the registers are in RAM.  The names for them are only partially descriptive.  For instance LAS does a Load And Set (a bit) and store.

 

I think they only work on RAM, not I/O registers.

Last Edited: Tue. Dec 27, 2016 - 11:42 AM

Hi Sparrow,

 

this is just for my knowledge in general.

 

I have several cases where 2-byte variables are shared between an ISR and main code, and one case where a byte array used in comms needs to transfer its collected data to the main program and not get interrupted while it's doing that.

 

That's basically when the question popped into my head and I wondered why all the examples I've seen only mention CLI/SEI, and never mentioned the alternative of disabling only the potentially offending ISR(s).

 

That's interesting about Codevision being able to accomplish that whereas GCC can't. Makes me feel even better about spending the money LOL.

 

Keith.


Given that avr-gcc, like all AVR C compilers, supports asm(), there is, as far as I know, no sequence of opcodes it cannot create.


Yes, but it has to be a natural part of the compiled code and the way it reads register variables. (And the normal problem is that the compiler doesn't use movw because it's atomic; perhaps next time it will make two 8-bit instructions directly on the registers, and then the atomicity problem is back.)

 

Again, it's great for ASM, where you know how the code is implemented, and I don't know of any C compilers that can be told to make it atomic, but I remember that when I showed my ASM code, theusch came back that CV can do it as well.

Last Edited: Tue. Dec 27, 2016 - 12:50 PM

Here was the official way to use the LAC instruction 4 years ago.   I'm still using this stuff in the USB driver.  I found this, and the rest of them, in ASF.

 

/*
 * Read modify write new instructions for Xmega
 * inline asm implementation with R16 register.
 * This should be removed later on when the new instructions
 * will be available within the compiler.
 *
 */
// Load and Clear
#ifdef __GNUC__
#define LACR16(addr,msk) \
   __asm__ __volatile__ ( \
         "ldi r16, %1" "\n\t" \
         ".dc.w 0x9306" "\n\t"\
         ::"z" (addr), "M" (msk):"r16")
#else
#define LACR16(addr,msk) __lac((unsigned char)msk,(unsigned char*)addr)
#endif

 

Last Edited: Tue. Dec 27, 2016 - 03:38 PM

Not all AVR cores support MOVW, although all 'modern' (since mega8/16/32) do.

 

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


I think, other than 16-register AVRs, all AVRs in production today have movw.


There's always "layering."  A common atomic access usually ends up in "get_fifo" and "put_fifo" operations, which aren't necessarily associated with a particular peripheral, at least not "logically."


And I guess that I should add :

 

Often it's better to use flags (semaphores), to mark events between "main" and ISR, and those bits can be placed in low registers as well. 
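The flag idea can be sketched like this (illustration only; all names are made up, and on a real AVR the flag would be a volatile byte or, as suggested, a bit in a low register). A single-byte flag is naturally atomic on an 8-bit AVR, so no cli is needed to clear it:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical event flag shared between a timer ISR and the main loop. */
static volatile bool    tick_flag = false;
static volatile uint8_t tick_data = 0;

static void timer_isr(void)        /* keep the ISR short: record and flag */
{
    tick_data++;
    tick_flag = true;
}

static int events_handled = 0;

static void main_loop_once(void)   /* polled from the main loop */
{
    if (tick_flag) {
        tick_flag = false;         /* single-byte store: atomic by itself */
        events_handled++;          /* heavy lifting happens at main level */
    }
}
```

The design choice here is that the ISR never does the real work; main polls the flag, so the shared state is a single byte and the atomicity question largely evaporates.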

Last Edited: Thu. Dec 29, 2016 - 01:50 PM