Hi, all; sorry for the provocative title. I can't claim that symbolic debugging of gcc-produced code with AStudio is completely nonfunctional, because at least five or ten percent of the time I can mouse over the name of a variable that's just been assigned a value by an instruction I've just single-stepped over, and actually have the assigned value appear in a tooltip. But that's about it. Try to inspect the passed-in values of subroutine arguments (just simple 8- or 16-bit integers)? Forget it; you have to open the disassembler window, figure out which register(s) hold the variable, and then inspect those. The utility of the debugger is abysmal; I really ought to have planned on writing enough extra debug glue to let me compile and debug my AVR project with MS Visual Studio C++, just to get a functional debugger.
Yes, I know that a lot of the problems stem from gcc's optimizer. But, being blissfully ignorant of the underlying details, I'm naive enough to ask: why is it so unreasonable to expect even optimized code to be debuggable? The compiler knows which registers it's using for which purpose as it generates code, doesn't it? I don't care if the debugging sections in the generated .o files bloat a hundredfold; they should be capable of explaining, instruction by instruction, what each machine register is doing on behalf of which source-code statement, at least most of the time. I can see how very clever common-subexpression factorings might make some variable placements ambiguous, and I'd accept an occasional inability to inspect a variable. But such situations ought to be rare, rather than the norm.
Can the "folks that know" suggest which of the tools are most to blame for the atrociously poor debuggability? Are AStudio's gcc-supporting features not making proper use of debugging information that's already present in the object files? Or are the WinAVR compiler tools not providing adequate debug info?