WinAVR debugging offset

When trying to compile and debug C code with the simulator, the "next operation line" does not coincide with the code actually being executed.

Sometimes there are even empty lines that are marked with the yellow background.

Please advise,
axos88


It's optimisation in action - turn it off if it bothers you.
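
For the record, in the stock WinAVR Makefile template that's a one-variable change. A sketch, assuming the standard template layout:

    # Optimization level, can be [0, 1, 2, 3, s] (the template passes
    # this to the compiler as -O$(OPT)).  0 disables optimisation, so
    # the simulator's current-line marker follows the source.
    OPT = 0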


Yes, that was it, thanks! :)

axos88


FAQ#4 says to avoid -O0, but then how can I disable optimisation?

axos88


axos88 wrote:
FAQ#4 says to avoid -O0, but then how can I disable optimisation?
BWA-HAH-HAH-HAH-HAH-HAH! It's all just an evil plot to drive you mad! MAD, I say!! :twisted:

I started a thread on exactly this topic (https://www.avrfreaks.net/index.p...). Essentially, Cliff's FAQ makes sense, since there are functions in avr-libc that will not work correctly at -O0 (the delay functions, in particular). Most newbies run afoul of that far more often than the debugging problem.
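
To make the delay problem concrete, here is a minimal sketch (the F_CPU value is purely illustrative):

    #define F_CPU 8000000UL         /* illustrative clock frequency */
    #include <avr/io.h>
    #include <util/delay.h>

    int main(void)
    {
        DDRB = 0xFF;                /* port B as output */
        for (;;) {
            PORTB ^= 0x01;          /* toggle a pin */
            /* _delay_ms() depends on the compiler folding its
               floating-point cycle calculation into a constant.
               At -O0 that folding doesn't happen, the FP library
               gets linked in, and the delay is wildly wrong. */
            _delay_ms(10);
        }
    }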

In addition, since the optimized code is most likely what you will be running in the long run, there are those who believe that debugging optimized code is the only reasonable approach. That's the "throw them into shark-infested water to teach them to swim" school, IMNSHO.

Go check the thread, but my opinion is that it makes more sense to turn off optimization to chase specific bugs while leaving full optimization on everywhere else. This still leaves you open to optimization problems (like delays not working and the watchdog timer not being reset), but it lets you step through the code in a somewhat sensible way. It also assumes that you break your code into modules (multiple .c files) so you can toggle optimization on just the code you want to debug.
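
In Make terms, the per-module idea looks roughly like this (buggy.c is a made-up name; adapt it to your own Makefile):

    MCU    = atmega16                  # hypothetical target device
    CFLAGS = -mmcu=$(MCU) -g -Os -Wall

    # Default: every module is built fully optimised.
    # (Recipe lines must start with a tab.)
    %.o : %.c
    	avr-gcc $(CFLAGS) -c $< -o $@

    # The one module under investigation gets -O0 appended; the last
    # -O option on the command line wins, overriding -Os.
    buggy.o : buggy.c
    	avr-gcc $(CFLAGS) -O0 -c $< -o $@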

At any rate, this is a heavily debated topic, with most parties agreeing to disagree. :D

Stu

Engineering seems to boil down to: Cheap. Fast. Good. Choose two. Sometimes choose only one.

Newbie? Be sure to read the thread Newbie? Start here!


Well, there is a range when it comes to optimization. It's conceivable that -O1 gives you the timing needed for certain things (like the aforementioned delay functions) while still keeping the code sane enough to debug. The only way to find out is to try it and see if it works.
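
For what it's worth, the classic symptom can be reproduced with something like this (a sketch; the exact behaviour depends on compiler version and flags):

    #include <avr/io.h>

    int main(void)
    {
        unsigned int i;
        volatile unsigned int j;

        /* At -O0 the debugger steps through every iteration.  At
           -O1 and above the compiler sees the loop has no visible
           effect and deletes it entirely, so the yellow marker
           jumps right past it - the "offset" described at the top
           of this thread. */
        for (i = 0; i < 10000; i++)
            ;

        /* A volatile counter forces the compiler to keep the loop
           even when optimising. */
        for (j = 0; j < 10000; j++)
            ;

        DDRB = 0xFF;
        PORTB = 0x01;
        for (;;)
            ;
    }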


Or learn to debug optimised code - if you have any plans to become a professional engineer, it's almost essential that you learn to do this. Using the mixed C/Asm view is often a good way: just spot which registers the compiler is holding the various variables in.
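
Outside the IDE, you can get the same mixed view from avr-objdump (main.elf is a placeholder name; the ELF must be built with -g for the source interleave to work):

    # Interleave the C source with the generated assembly - this is
    # the .lss listing the standard WinAVR Makefile already produces.
    avr-objdump -h -S main.elf > main.lss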


Cliff wrote:
Or learn to debug optimised code - if you have any plans to become a professional engineer, it's almost essential that you learn to do this. Using the mixed C/Asm view is often a good way: just spot which registers the compiler is holding the various variables in.
Aaawww, man! Here we go again! :evil:

I agree that debugging optimized code is an important skill. I also recommend that all users of AVR processors learn how to do it (perhaps sooner rather than later). Occasionally even I look at the optimized assembly listing and trace exactly what the compiler has done with my C code. I also admit that I have found problems in my code doing that.

However, I do not agree that I should be expected or required to always debug only optimized code! As I said in the other thread, the great majority of my bugs have absolutely nothing to do with "timing" or "performance", but with mistakes in my own logic. Why shouldn't I use the build that most closely reflects what I actually wrote? The logic should be no different between optimized and non-optimized, so why not use the method that lets me step through my code the way I wrote it?

Perhaps some folks are such brilliant programmers that they never make logic mistakes. I, for one, am a little more pedestrian, and I actually like my debugger to single-step through the code the way I wrote it, not the way the optimizer mangled it.

*sigh* :? I can tell this is going to be another war. Just can't keep my d*** mouth shut. :(

Stu

Engineering seems to boil down to: Cheap. Fast. Good. Choose two. Sometimes choose only one.

Newbie? Be sure to read the thread Newbie? Start here!


As far as I'm concerned, the IDE should be smart enough to trace those optimizations: even if some instructions are skipped, it should always show the line about to be executed. Visual Studio and many other IDEs manage this, so it should be possible to implement it in AVR Studio as well.

Please correct me if what I just said is bullshit :)

axos88


Stu,

Sorry, I was talking about the kind of projects I work on from day to day. We're talking about more than 1,000 source files, 35,000 symbols, more than 1,000,000 lines of source, multi-tasking with 100+ tasks, 250+ semaphores, 50+ message queues, etc. In this kind of system everything is built optimised, and there's no possibility of switching the part you are working on to non-optimised just to debug it, as you'll upset the "balance" of everything else that's going on. In fact, even stopping one task causes upsets! In that kind of professional environment I have no option but to debug the optimised code, and that involves using mixed C/Asm debugging. (ARM isn't too bad, but the MIPS compiler has a habit of rearranging HUGE sections of the code, so trying to spot the C-Asm correspondence can be a total head-f**k!)

YMMV.


axos88 wrote:

> As far as I'm concerned, the IDE should be smart enough to trace
> those optimizations: even if some instructions are skipped, it
> should always show the line about to be executed.

What if there's no /line/ structure anymore? The C compiler isn't
line-oriented at all (only the preprocessing stage works in terms of
lines). Consequently, the generated code need not be related to
lines of source code in any way.

> Visual Studio and many other IDEs manage this, ...

That's only because they are working on stupid processors (*) where
the compiler does not have much potential to optimize at all.
Consequently, the compiler quite frequently leaves much of the code
the way it has been written by the developer, without too much
rearranging. With just one or two free registers to use, most of
your stuff is going to live in memory anyway, which is much easier
for a debugger to track than a compiler-arranged common
subexpression stored in a register (or a couple of them, on an
8-bit CPU like the AVR).

The AVR CPU is very close to the RISC approach, so any decent compiler
can optimize the heck out of the source code, based on the rules
dictated by the C standard. The idea behind it is that you can write
good human-readable (and thus maintainable) source code yet still get
efficient machine code.
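
For instance (a sketch; the comments describe what an optimising
avr-gcc typically does, and the exact register allocation varies by
version and flags):

    #include <stdint.h>

    uint16_t f(uint16_t a, uint16_t b)
    {
        /* The source names two intermediate variables, but an
           optimising compiler computes a + b once, keeps it in a
           register pair rather than in RAM, and reuses it below.
           Neither x nor y then exists at any memory address a
           line-oriented debugger could watch. */
        uint16_t x = a + b;
        uint16_t y = (a + b) * 2;
        return x + y;
    }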

(I started my Unix life on a Motorola m88000 CPU, a true RISC. Yes, I
can understand Cliff's sentiment about "learning to debug optimized
code"...)

I've never really debugged true AMD64 code. In theory, the compiler
should have much more room to play optimization games there than on
the archaic i386 architecture, so you're likely to observe
optimization effects similar to those you see on the AVR.

(*) Well, these processors aren't really stupid, but they took the
complete opposite approach to RISC: instead of making a dumb CPU and
a smart compiler, they don't require much optimization skill on the
compiler's behalf but rather do the compiler's job inside the CPU,
with all that pipelining, branch prediction, and so on.

Jörg Wunsch

Please don't send me PMs, use email if you want to approach me personally.


Cliff wrote:
We're talking about more than 1,000 source files, 35,000 symbols, more than 1,000,000 lines of source, multi-tasking with 100+ tasks, 250+ semaphores, 50+ message queues, etc.
On an AVR?!? Kewl!! (I know, it's on an ARM or MIPS -- I'm just very impressed by such things, though.)
Cliff wrote:
In this kind of system everything is built optimised, and there's no possibility of switching the part you are working on to non-optimised just to debug it, as you'll upset the "balance" of everything else that's going on. In fact, even stopping one task causes upsets!
So you need to bring the system to a halt to "single line debug"? I suspect line-by-line debugging is far from your normal day-to-day life.
Cliff wrote:
In that kind of professional environment I have no option but to debug the optimised code, and that involves using mixed C/Asm debugging. (ARM isn't too bad, but the MIPS compiler has a habit of rearranging HUGE sections of the code, so trying to spot the C-Asm correspondence can be a total head-f**k!)
Try the Itanium some time! Up to 3 instructions issued in parallel, with the rearrangement pre-figured by the compiler to make maximum use of the processor. Yuck.

Still, it beats the old IBM 360/195 "out-of-order instruction retirement" which yielded such wonderful failure messages as "An error occurred on or about line xxx...". Never had to mess with it, but I heard stories.

--------------

My point in my previous post was that, for most AVR projects, single-stepping code is not a vile disease but a useful crutch. Granted, when one progresses on to Real(tm) processors, one's debugging techniques will need to become far more sophisticated.

As always, one must use the proper tool for the job.

Stu.

1,000,000 lines of code? 100+ tasks? 50 message queues? that ain't no "downstream" project! :twisted:

Engineering seems to boil down to: Cheap. Fast. Good. Choose two. Sometimes choose only one.

Newbie? Be sure to read the thread Newbie? Start here!


stu_san wrote:
1,000,000 lines of code? 100+ tasks? 50 message queues? that ain't no "downstream" project! :twisted:

What's more, we are developing or maintaining about 4 different projects of similar size based on different processor architectures. They are all digital satellite TV decoders and hard-drive recording systems (generally called "PVRs").


clawson wrote:
about 4 different projects of similar size based on different processor architectures.
That makes your use of gcc quite smart, as it likely runs on all of those processor architectures.


Sadly, none of those projects uses GCC. For ARM we use ARM's own C compiler, for MIPS we use Green Hills, and for ST we use ST's own toolchain.

Having said that, the lure of Linux/GCC cannot be ignored for future designs ('nuff said).

Cliff