I've been quietly restraining myself every time I see Cliff's FAQ #4:
If using avr-gcc avoid -O0 optimization at all costs
I swap to -O0 when testing small functions so that I don't get caught out by the var being optimised 'away'. It makes debugging a lot easier in AVR Studio.
I completely agree, and I have my makefile set up so that I can build individual modules without optimization when I need to debug them. I understand Cliff's "avoid -O0 like the plague" FAQ, but when I'm debugging it's usually a logic bug I'm chasing, not a timing bug, and -O0 makes that debugging vastly easier.
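For what it's worth, recent versions of avr-gcc (being GCC underneath) also offer a per-function escape hatch: the optimize function attribute. It isn't a substitute for the per-module makefile approach, but it can be handy when only the one function under test needs to stay debuggable. A rough sketch, with a made-up function just for illustration (and it does need a reasonably recent toolchain):

#include <stdint.h>

/* GCC extension: compile just this function at -O0 so its locals stay
 * visible to the debugger, while the rest of the file keeps the
 * project-wide optimization level. */
__attribute__((optimize("O0")))
uint16_t scale_reading(uint16_t raw)
{
    uint16_t scaled = raw >> 2;   /* survives in the watch window */
    scaled += 100;
    return scaled;
}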
The issue as I see it is that sometimes it makes sense to use -O0. Advising newbies to avoid -O0 simply introduces a different set of questions: "Why did my delay loop get optimized away?" (That one alone needs an introduction to volatile and to what the optimizer is allowed to throw away.)
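To make that concrete, here is the sort of program that prompts the question. A minimal sketch, assuming an ATmega-style part with an LED on PB0 (the pin and the loop counts are made up). At -O0 both loops burn time as expected; at -O1 and above the first loop has no observable effect, so the optimizer is entitled to delete it, and that's where the introduction to volatile (or to _delay_ms() from <util/delay.h>) comes in:

#include <avr/io.h>
#include <stdint.h>

int main(void)
{
    uint32_t i;
    volatile uint32_t j;

    DDRB |= (1 << PB0);                 /* LED pin as output */

    for (;;)
    {
        PORTB ^= (1 << PB0);            /* toggle the LED */

        /* The classic newbie delay: it works at -O0, but at -O1 and
         * above the loop has no observable effect, so the optimizer is
         * entitled to remove it entirely. */
        for (i = 0; i < 100000UL; i++)
            ;

        /* The usual first fix: a volatile counter forces every increment
         * and comparison to actually happen, so this delay survives
         * optimization.  (_delay_ms() from <util/delay.h> is the better
         * long-term habit.) */
        for (j = 0; j < 100000UL; j++)
            ;
    }
}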
When the code is optimized, whole functions can be "inlined", variables are kept in registers, and the code can apparently be "rearranged" to achieve the optimizer's goals. This is a good thing, but when debugging it can be incredibly confusing and frustrating.
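To show that half of the confusion, here's another tiny made-up example. At -O1 and above, gcc will typically inline the helper and keep sum in a register, so single-stepping appears to jump around and the watch window reports the variable as "optimized out"; at -O0 every line and every local is exactly where a newbie expects it to be.

#include <stdint.h>

static uint8_t add_offset(uint8_t x)
{
    return x + 3;          /* small enough that the optimizer will inline it */
}

uint8_t process(uint8_t a, uint8_t b)
{
    /* At -O1 and above, 'sum' typically lives only in a register, so the
     * debugger's watch window is likely to show it as "optimized out". */
    uint8_t sum = add_offset(a) + add_offset(b);
    return sum >> 1;
}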
It is my belief that the AVR Studio folks chose -O0 as the default as a way to help newbies through the initial debugging process. Is this the right decision? Well, they had to decide one way or the other, and I tend to agree with them.
I'm reluctant to gainsay Cliff, as his thousands of posts point to far more experience answering these questions than my own feeble attempts can claim, but I would like to hear other points of view.
So, I open this up for discussion: Should -O0 be banished to the netherworld as Cliff suggests? Is there some reasonable way of describing the peculiar subtleties of optimization to a newbie without a blanket "always" or "never" (or "at all costs")? How do we balance the need to debug logic against the need for small, fast code?
Stu
PS: Cliff, I recommend FAQ #6: "If you think the compiler is wrong, think again. It has far more experience than you do."