I wasn't sure if I should put this in general programming or here; I hope I picked the right forum.
I'm working with a UC3C MCU.
So I have a main and some functions, and I want to measure the exact execution time of each function and of the whole main loop. I'm aware that interrupts will add to this, so I measure a few times and take the largest value.
I simply have a timer interrupt that increments a variable, "globalTime", every 10 µs.
When a function starts, it reads the current globalTime, and at the end it stores the difference between start and end in a variable.
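To make it concrete, here is a minimal sketch of the pattern I'm using (the names are just for illustration; on the real hardware globalTime is updated from the timer ISR):

```c
#include <stdint.h>

/* Incremented by the timer ISR every 10 µs. "volatile" tells the
   compiler the value can change outside the normal program flow,
   so every read must actually go to memory. */
volatile uint32_t globalTime = 0;

/* Duration of the last myFunction() call, in 10 µs ticks. */
uint32_t myFunctionDuration = 0;

void myFunction(void)
{
    uint32_t start = globalTime;               /* snapshot at entry */

    /* ... the code being timed ... */

    myFunctionDuration = globalTime - start;   /* snapshot at exit */
}
```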
My problem is: how do I make sure that compiler optimization doesn't move the globalTime reads around? If that happened, part of the code would be left out of the measurement.
Or in other words, how can I guarantee that, after optimising, this
doesn't turn into something like this:
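To illustrate with a simplified sketch (do_work() is just a stand-in for the code I want to time):

```c
#include <stdint.h>

volatile uint32_t globalTime = 0;  /* incremented by the 10 µs timer ISR */

static void do_work(void)
{
    /* stand-in for the code being timed */
}

uint32_t measure(void)
{
    uint32_t start = globalTime;   /* first read */
    do_work();                     /* the work I want to time */
    return globalTime - start;     /* second read */

    /* What I'm afraid of: if globalTime were an ordinary (non-volatile)
       variable, the optimizer could legally rearrange this into

           start = globalTime;
           end   = globalTime;     // both reads back to back
           do_work();              // moved outside the measurement

       which would always report (nearly) zero elapsed time. */
}
```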
I'm not very familiar with optimizing compilers as you may have guessed.
I'd be thankful if you could shed some light on this.
edit: corrected some typos