Hope you all had a good Christmas.
Been doing a lot of studying on atomicity, and as a side note it's really giving me an appreciation of why learning some assembler is so helpful. I'm seeing how one C statement can compile down to a single assembler instruction, while another, similar C statement ends up as a non-atomic read-modify-write (RMW) sequence of instructions. All very interesting. Anyway, back to the main question.
A typical example is when you have a multibyte variable shared between the main code and an ISR, or you are copying a receive buffer array into another "main code" array, and you want to stop an ISR from stuffing things up in the middle.
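To make the torn-read hazard concrete, here's a host-side simulation (a sketch, not real AVR code: `fake_isr` is a hypothetical stand-in for an interrupt) of what can happen when an interrupt lands between the two byte loads that an 8-bit AVR needs for a 16-bit read:

```c
#include <stdint.h>

static volatile uint16_t shared = 0x00FF;

/* Stand-in for an ISR that updates both bytes of 'shared'. */
static void fake_isr(void) { shared = 0x0100; }

/* Simulates the hardware reality on an 8-bit AVR: a 16-bit read is
   two separate byte loads, and here the "interrupt" fires in between. */
static uint16_t torn_read(void)
{
    uint8_t lo = (uint8_t)(shared & 0xFFu);    /* first byte load: 0xFF  */
    fake_isr();                                /* interrupt lands here   */
    uint8_t hi = (uint8_t)(shared >> 8);       /* second byte load: 0x01 */
    return (uint16_t)(((uint16_t)hi << 8) | lo); /* 0x01FF: a value
                                                    'shared' never held  */
}
```

The result mixes the old low byte with the new high byte, producing a value the variable never actually contained.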
The code I normally see suggested goes something like:
SREG_copy = SREG; // Save SREG, including the I bit (interrupts may already be disabled, so we need to restore them the same way).
cli();            // Disable all interrupts.
// Now run this section of code atomically.
SREG = SREG_copy; // And put SREG back to what it was, with the I bit however it was.
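Fleshing that out, here's a minimal sketch of the pattern wrapped around a multibyte read. Note this uses a host-side mock of `SREG` and `cli()` purely so the flow is easy to follow; on a real AVR these come from `<avr/io.h>` and `<avr/interrupt.h>`:

```c
#include <stdint.h>

/* Host-side mock of AVR's SREG and cli(), just to illustrate the
   save/disable/restore pattern. Bit 7 of SREG is the global I flag. */
static uint8_t SREG = 0x80;                    /* pretend interrupts start enabled */
static void cli(void) { SREG &= (uint8_t)~0x80u; }

static volatile uint16_t shared_word = 0x1234; /* written by an ISR on a real part */

uint16_t atomic_read_word(void)
{
    uint8_t sreg_copy = SREG;  /* save SREG, including the current I bit      */
    cli();                     /* disable all interrupts                      */
    uint16_t v = shared_word;  /* critical section: the multibyte read        */
    SREG = sreg_copy;          /* restore SREG, I bit back to how it was      */
    return v;
}
```

Restoring the saved copy rather than blindly executing `sei()` is what makes the pattern safe to use inside code that might itself already be running with interrupts disabled.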
Is there any reason why it wouldn't be better to disable ONLY the interrupt(s) that could corrupt the section of code you want to run atomically? Why take a sledgehammer approach and disable all interrupts?
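For what it's worth, here's a sketch of the selective approach I mean, assuming only the UART RX-complete ISR touches the shared buffer. The `UCSR0B`/`RXCIE0` names are ATmega328-style and the register here is a host-side mock (on a real part they come from `<avr/io.h>`):

```c
#include <stdint.h>

/* Host-side mock illustrating masking only the offending interrupt:
   clear one enable bit, do the critical work, restore the bit. */
#define RXCIE0 7
static uint8_t UCSR0B = (1u << RXCIE0);        /* mock register: RX int enabled */

static volatile uint8_t rx_buf[4] = {1, 2, 3, 4}; /* filled by the RX ISR */
static uint8_t main_copy[4];

void copy_rx_buffer(void)
{
    uint8_t saved = UCSR0B & (1u << RXCIE0);   /* remember prior enable state   */
    UCSR0B &= (uint8_t)~(1u << RXCIE0);        /* mask just the RX-complete int */
    for (int i = 0; i < 4; i++)                /* critical section              */
        main_copy[i] = rx_buf[i];
    UCSR0B |= saved;                           /* restore it however it was     */
}
```

All other interrupts keep running, so (for example) a timer tick wouldn't lose time while the copy happens.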