For the impatient, my question is at the bottom of the post. For the patient, read on for some background.
Recently I've had to perform some maintenance and feature additions on some old code, and after starting some testing to make sure I hadn't screwed anything up, I noticed that the timer I use to kick off ADC conversions suffers from a lot of jitter. Analysis of the problem, both by poring over my source code and by watching the behavior on an oscilloscope, suggests that the cause is either other interrupts delaying the timer ISR or my critical sections holding interrupts off for too long. (I tried some old code that had previously passed this test, but either my test setup was bad or some undocumented code changes occurred between then and now that cause it to fail.) Due to the nature of the processing I'm doing, a fairly accurate ADC sampling rate is important, so I'd like to minimize jitter: 190-200us between conversions is OK, but 250-400us is not; neither is missing a sample entirely.
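To make things concrete, here's a stripped-down sketch of the shape of my code. The vector names and register bits are ATmega-style and the buffer handling is simplified, so treat it as illustrative rather than my actual source:

```c
#include <avr/io.h>
#include <avr/interrupt.h>

#define BUF_LEN 32

volatile uint16_t adc_buf[BUF_LEN];  /* filled by the ADC ISR */
volatile uint8_t  adc_head;          /* next slot to write */
volatile uint8_t  adc_ready;         /* flag polled by application code */

/* Timer compare match every ~195us: kick off one ADC conversion. */
ISR(TIMER1_COMPA_vect)
{
    ADCSRA |= (1 << ADSC);           /* start conversion */
}

/* Conversion complete: store the sample and raise the flag. */
ISR(ADC_vect)
{
    adc_buf[adc_head] = ADC;         /* 16-bit read of ADCL/ADCH */
    adc_head = (adc_head + 1) % BUF_LEN;
    adc_ready = 1;
}
```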
I have critical sections in my code because I have data buffers and flags that can be modified by both the ADC ISR and the application code, and that are polled by the application code. I used to disable only the relevant interrupts (the ADC interrupt and the timer interrupt that kicks off a conversion), but after reading the datasheet and a couple of posts on this forum, it seems like using cli() and sei() is the way to go: not only is it easier, but an interrupt whose flag gets set while global interrupts are disabled will still be serviced once they're re-enabled. (For example, if I call cli() to do a quick data copy and the ADC conversion completes before I re-enable interrupts, the ISR executes right after I call sei().) I do declare the shared variables volatile, but I question whether that alone protects against synchronization issues (I'm paranoid like that), hence the critical sections. After reviewing all my ISRs and critical-section code, it seems to be pared down about as far as it can be: generally a flag is read/cleared or a value is copied out of a buffer, occasionally some of both (sketched below).
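For illustration, a typical critical section in the application code looks roughly like this. I save and restore SREG instead of calling sei() blindly, so the code stays correct even if interrupts were already off when it was entered; as I understand it, avr-libc's ATOMIC_BLOCK(ATOMIC_RESTORESTATE) from <util/atomic.h> boils down to the same thing:

```c
#include <avr/io.h>
#include <avr/interrupt.h>

extern volatile uint16_t adc_buf[];  /* shared with the ADC ISR (see above) */

/* Copy one sample out of the shared buffer atomically. A 16-bit read
 * takes two instructions on the AVR, so the ISR must not fire mid-read. */
uint16_t read_sample(uint8_t idx)
{
    uint8_t sreg = SREG;             /* remember the current I bit */
    cli();
    uint16_t sample = adc_buf[idx];
    SREG = sreg;                     /* restore; no blind sei() */
    return sample;
}
```

My understanding is that the ADC interrupt flag stays set while the I bit is clear, so if a conversion completes inside this window the ISR runs immediately after SREG is restored: the sample is delayed by a few cycles but never lost, which is why I leaned toward cli()/sei() over masking individual interrupt enable bits.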
GCC optimization is set to -O3.
Is using cli() and sei() the best way to guarantee that interrupts are always serviced after I access shared data, or do I still run the risk of missing some?
Surely I'm not the first one to face this type of problem. Does anyone have any words of wisdom?