AVR Freaks Forum Index

stu_san
PostPosted: Sep 13, 2007 - 05:09 PM
Raving lunatic


Joined: Dec 30, 2005
Posts: 2327
Location: Fort Collins, CO USA

I've been quietly restraining myself every time I see Cliff's FAQ#4:
Quote:
If using avr-gcc avoid -O0 optimization at all costs
For many purposes, for production code, and especially when using any of the smaller AVRs, I agree. However, in another thread:
BrianS wrote:
I swap to -O0 when testing small functions so that I don't get caught out by the var being optimised 'away'. It makes debugging a lot easier in AVR Studio.
I completely agree and have my makefile set up to allow me to not optimize modules when I need to debug them. I understand Cliff's "avoid -O0 like the plague" FAQ, but when debugging it's usually a logic bug I'm chasing, not a timing bug. Using -O0 makes that debugging vastly easier.

The issue as I see it is that sometimes it makes sense to use -O0. Advising newbies to avoid -O0 simply introduces a different set of questions. "Why did my delay loop get optimized away?" (Needs an intro to <delay.h>.) "Why is the exact sequence of statements I gave in my mind-numbingly simple program not being followed?" (Because your do-nothing statements are being optimized away.) And blah, blah, blah.

When optimized, whole functions can be "inlined", variables are kept in registers, and the code can be apparently "rearranged" to achieve the optimizer's goals. This is a good thing. But, when debugging it can be incredibly confusing and frustrating.

It is my belief that the AVR Studio folks chose -O0 as a default as a way to help newbies through the initial debugging process. Is this the right decision? Well, they have to decide one way or the other, and I tend to agree with them.

I'm reluctant to gainsay Cliff, as his thousands of posts point to far more experience answering these questions than my feeble attempts, but I would like to hear other points of view.

So, I open this up for discussion: Should -O0 be banished to the netherworld as Cliff suggests? Is there some reasonable way of describing the peculiar subtleties of optimization to a newbie without the blanket always or never (or at all costs)? How do we balance the needs for debugging logic against the needs for small fast code?

Stu

PS: Cliff, I recommend FAQ #6: If you think the compiler is wrong, think again. It has far more experience than you do.

_________________
Engineering seems to boil down to: Cheap. Fast. Good. Choose two. Sometimes choose only one.

Newbie? Be sure to read the thread Newbie? Start here!
 
JohanEkdahl
PostPosted: Sep 13, 2007 - 06:12 PM
10k+ Postman


Joined: Mar 27, 2002
Posts: 22029
Location: Lund, Sweden

I see two or three points here:

1) Yes, I think Cliff is a little bit too rigid. There are times when debugging un-optimized code is good. But...

2) Debugging un-optimized code, that is later going to be built with optimization for a Real World App (tm), can be dangerous. You are not debugging the RWA. This leads to...

3) So you actually need to learn the art of debugging optimized code. The problem is that for a noob this is not easy. Indeed, the art of debugging in general is hard to learn. The complication of dealing with optimized code adds to this. Still, there is no way around it - if it's gonna hurt then take the pain as early as possible. Not doing it will ultimately lead to problems Real Soon Now (tm). And some things will not work at all (e.g. the delay_ms() and delay_us() functions that should have been used in the first place, rather than one's own delay loop).

In my native language there is a saying that goes something like "It's like peeing in your pants. Initially it's warm and cosy, but after a while...". Some things are intrinsically hard, and there is no way around them. You cannot program embedded systems in C without some understanding of the generated machine code. You need to learn to read machine code, and you need to learn it fairly early on.
 
clawson
PostPosted: Sep 13, 2007 - 06:30 PM
10k+ Postman


Joined: Jul 18, 2005
Posts: 71208
Location: (using avr-gcc in) Finchingfield, Essex, England

JohanEkdahl wrote:
I think Cliff is a little bit too rigid.

I'll try and avoid making the rather obvious lewd joke Wink (though Mrs. Lawson has never complained)

Be interesting to see what other feedback comes from this but IMHO the positives of avoiding -O0 FAR outweigh the negatives. Perhaps I'll add a "YMMV" or an "IMHO" to that "FAQ"?

theusch
PostPosted: Sep 13, 2007 - 07:14 PM
10k+ Postman


Joined: Feb 19, 2001
Posts: 28971
Location: Wisconsin USA

Quote:

(though Mrs. Lawson has never complained)

??? That's not what she tells >>me<<.

[a bit more on topic] I've seen the counter-arguments (perhaps from the same people): "Stupid compiler generates xyz sequence when any idiot can see that [these constant expressions can be folded, etc.]!" You will probably never win.

I still say that more "debugging" should be in the head and not lean so heavily on the debugger crutches.

Lee
 
stu_san
PostPosted: Sep 13, 2007 - 09:48 PM
Raving lunatic


Joined: Dec 30, 2005
Posts: 2327
Location: Fort Collins, CO USA

theusch wrote:
I still say that more "debugging" should be in the head and not lean so heavily on the debugger crutches.
I'll grant you that I can usually visualize the (bad) code in my head when someone gives me a bug report, and usually have a fix in mind within a minute or two. However, as a human I have blind spots, and sometimes the only way to see why the code is misbehaving is to step through it line by line.

Steve Maguire in his book Writing Solid Code has an entire chapter devoted to stepping through code and why that is good. If I have added functionality that is not time critical, I almost always will step through the new code first to be sure it's behaving correctly. Granted, that is my coding/debugging style. Unlike Mozart and his music, I cannot write perfect code in my head (or even on the terminal) first time.

In the long run, this may boil down to the way that a particular person debugs his/her code. The problem with many readers of this forum is that they are such newbies that "debugging" is a fairly vague concept.

JohanEkdahl wrote:
2) Debugging un-optimized code, that is later going to be built with optimization for a Real World App (tm), can be dangerous. You are not debugging the RWA.
HMMMMmmmmm, perhaps. Depends on whether (and how much) performance and size are part of your Real World App. I currently have ~130KB of app in my mega2560 - if I add 10K to the size, does that matter? Not to me. I have already isolated most of the time-dependent code so that optimization level rarely affects operation of the code.

JohanEkdahl wrote:
3) So you actually need to learn the art of debugging optimized code. The problem is that for a noob this is not easy. Indeed, the art of debugging in general is hard to learn.
That certainly is a problem. I've been at this for over 30 years and I'm still learning. Just how steep a learning curve should we demand the newbies climb?

For most newbies, their introduction has been Basic, or perhaps C, but almost certainly on a PC. The debuggers there let them step through the code, look at variables as they change step-by-step, and so on. Micro$oft has even advanced the idea of "Checked" and "Release" builds, so the relatively sophisticated PC programmer understands the difference between large slow "debug" code and small fast optimized "release" code.

Granted the environment in an AVR is different. Granted that they should learn to debug optimized code. But is it the best thing to drop them in the shark-infested waters right off the bat?

Perhaps in my copious spare time ( Wink ) I'll write a debugging tutorial. On the other hand, it'll probably be ignored by the newbies just like all the other tutorials. *sigh*

I dunno. Just more food for thought.

Stu

clawson
PostPosted: Sep 13, 2007 - 10:03 PM
10k+ Postman


Joined: Jul 18, 2005
Posts: 71208
Location: (using avr-gcc in) Finchingfield, Essex, England

Can't help noticing the different perspectives the folks writing code for 128K/256K devices have compared to those trying to squeeze the last 88 bytes into a mega48 or something!

I've lost count of the times there have been posts on here about an app turning out to be larger than the 2K in someone's Tiny, and then with -Os it suddenly drops to 1300 bytes or something.

I guess things like delay.h and the perennial 4-machine-cycle timing requirements are a given when it comes to arguing against -O0. How many people have posted here saying their writes to JTD aren't apparently disabling the JTAG, and the solution turns out to be that they took Studio's -O0 default?

But I'm trying not to be partisan - really I am.

dl8dtl
PostPosted: Sep 13, 2007 - 10:48 PM
Raving lunatic


Joined: Dec 20, 2002
Posts: 7374
Location: Dresden, Germany

I think part of the game is that the CPUs in our host computers
are so notoriously lacking free registers the compiler could
use for optimization, so ``optimized code'' there is usually not
all that much different from the unoptimized code. When I started Unix
on a Data General machine in the early 1990s, equipped with a
Motorola M88100 CPU, it was quite normal that when debugging
optimized code (which is the only code that makes sense on a RISC
CPU, per definitionem [*]), the debugger cursor wildly jumped
around. Debugging optimized AVR code is almost benign, compared
to that.

[*] The entire idea behind RISC was to have a dumb CPU that
could execute everything in a single cycle, but have a smart
compiler that would do the job. Obviously, this requires the
compiler to have optimizations turned on.

_________________
Jörg Wunsch

Please don't send me PMs, use email if you want to approach me personally.
Please read the `General information...' article before.
 
stu_san
PostPosted: Sep 14, 2007 - 04:28 PM
Raving lunatic


Joined: Dec 30, 2005
Posts: 2327
Location: Fort Collins, CO USA

clawson wrote:
Can't help noticing that it's interesting the different perspectives the folks writing code in 128K/256K devices have compared to those trying to squeeze the last 88 bytes into a mega48 or something!
Cliff, I think you've nailed it. The AVR usage (and those writing to this forum) really covers the gamut, from ATTiny to ATmega, and the coding and debugging approach must adapt to the processor used. Advice used for one type of processor may be (and commonly is) totally inappropriate for another.

The -O0 avoidance seems to me to be based on two premises: First, on most AVRs final code size is of the utmost importance; being left with "too much function and not enough memory" is a common problem. Using the optimizer can certainly help with this problem. (BTW, I get a kick out of the folks who post tiny do-nothing programs compiled under gcc 3.4.6 and 4.1.2, note an 8-byte difference, and indignantly exclaim "WTF?!?". Joerg, you have the patience of a saint!)

Second, from a timing perspective, there are certain operations (such as setting JTD to disable JTAG) that simply won't work in the cycle constraints with the optimizer turned off.

From those perspectives, I now understand your FAQ, Cliff.

Unfortunately, not using -O0 introduces the constant drone of people who won't read the forum to find out that an optimizer is at work and it will optimize away their "do-nothing" delay loop. *sigh* Well, if it's not one thing it's another, eh?

Thanks to everyone for letting me explore a (not so) philosophical subject. I value your opinions, even if I have played Devil's Advocate occasionally (and will continue to do so).

Thanks, all!

Stu

EW
PostPosted: Sep 14, 2007 - 06:59 PM
Raving lunatic


Joined: Mar 01, 2001
Posts: 5013
Location: Rocky Mountains

For AVR GCC, -O0 is all but useless except for debugging purposes, when optimized code is too hard to follow in the debugger (AVR Studio). In that case, -O0 turns off all optimizations, making debugging fairly easy.

However, there are some optimizations (OK, at least one) that -O0 turns off which I think should stay on even at -O0; they should be benign enough even for debugging. It's been on my mental list to discuss this on the development lists (Hi Joerg), but I haven't gotten around to it.

The subject of default values in the GCC plug-in to AVR Studio is tough. The default is set to -O0. How long does it take for a newbie to figure out that their code is totally bloated and they need to change the optimization setting to -Os? Once the setting is changed to -Os, how many people have had real difficulty in debugging after that?

I'm not really sure of the answers to those questions. The default has historically been -O0. This really distorts the size of your code, in exchange for perhaps simplified debugging.

So, based on Cliff's suggestion, when 4.13 SP1 is released (not the beta that is out now), the default will be changed to -Os. This at least gets people off to a good start. They will probably be able to debug using that setting. If anything weird happens, we can always tell them to throttle down to -O1 or -O0.

Let's see how this works for everyone.

Eric Weddington
 
clawson
PostPosted: Sep 15, 2007 - 03:37 PM
10k+ Postman


Joined: Jul 18, 2005
Posts: 71208
Location: (using avr-gcc in) Finchingfield, Essex, England

Eric,

Well, for my part I think that's brilliant news. Like you say, let's see how many "the watch window isn't working" questions are then generated as a result of it Wink

(I'm guessing a lot less than the ones about code size, delay.h, JTD and other 4 cycle ops etc.)

JohanEkdahl
PostPosted: Sep 15, 2007 - 04:55 PM
10k+ Postman


Joined: Mar 27, 2002
Posts: 22029
Location: Lund, Sweden

[whispering] Psssstt! Cliff! Why don't you poke him about the libm.a default now that he's in that generous mood... Wink
 
clawson
PostPosted: Sep 15, 2007 - 05:37 PM
10k+ Postman


Joined: Jul 18, 2005
Posts: 71208
Location: (using avr-gcc in) Finchingfield, Essex, England

[whisper] I've half a hope he may have caught that point as well already

(in fact I don't think there'll be anyone who can think of ANY argument for not linking against libm.a by default - wonder if they'll include -std=gnu99 as well for all the "for(int i=0" folks?)

BrianS
PostPosted: Sep 17, 2007 - 01:16 PM
Hangaround


Joined: Sep 24, 2001
Posts: 113


Just to clarify, here is an example of a small function that I checked by compiling with -O0 to make sure the function was operationally correct.

Code:

void T6963PutIconPixel(uint16_t x, uint16_t y, uint8_t colour)
{
  uint16_t offs = (x | y) ? (((y*240)+x) / 4) : 0;
  uint8_t msk = (x | y) ? (3<<((((y*240)+x)%4)<<1)) : (3<<0);
  uint8_t cmsk = (x | y) ? (colour<<((((y*240)+x)%4)<<1)) : (colour<<0);
  screen_overlay[offs] &= (uint8_t)~(msk);
  screen_overlay[offs] |= cmsk;
}


It took a few goes to get this correct.

Debugging with AVR Studio on -O0 code is much easier. Insight actually handles optimised code much better than AVR Studio does, but it's easier to quickly check the functionality of code under AVR Studio on Windows.
 
clawson
PostPosted: Sep 17, 2007 - 01:25 PM
10k+ Postman


Joined: Jul 18, 2005
Posts: 71208
Location: (using avr-gcc in) Finchingfield, Essex, England

An interesting example. When built into the following "program":
Code:
#include <avr/io.h>

uint8_t screen_overlay[100];

void T6963PutIconPixel(uint16_t x, uint16_t y, uint8_t colour)
{
  uint16_t offs = (x | y) ? (((y*240)+x) / 4) : 0;
  uint8_t msk = (x | y) ? (3<<((((y*240)+x)%4)<<1)) : (3<<0);
  uint8_t cmsk = (x | y) ? (colour<<((((y*240)+x)%4)<<1)) : (colour<<0);
  screen_overlay[offs] &= (uint8_t)~(msk);
  screen_overlay[offs] |= cmsk;
}

int main(void) {
   T6963PutIconPixel(38, 56, 17);
   while(1);
}

then the -O0 version builds to 528 bytes on a mega168 and the -Os version to 272 bytes, almost half the size.

Quod erat demonstrandum.

BrianS
PostPosted: Sep 17, 2007 - 01:40 PM
Hangaround


Joined: Sep 24, 2001
Posts: 113


This has nothing to do with running on a target, so who cares what size it ends up being with -O0?

Like I've said, debugging and verifying code in AVR Studio is much easier when gcc is set to -O0. This code was never run on a target when compiled with -O0.
 
lfmorrison
PostPosted: Sep 17, 2007 - 03:11 PM
Raving lunatic


Joined: Dec 08, 2004
Posts: 4722
Location: Nova Scotia, Canada

BrianS wrote:
This has nothing to do with running on a target, so who cares what size it ends up being with -O0?

Like I've said, debugging and verifying code in AVR Studio is much easier when gcc is set to -O0. This code was never run on a target when compiled with -O0.


True.

But there exists sample code that can be taken directly out of an AVR datasheet, for example:
Code:
void EEPROM_write(unsigned int uiAddress, unsigned char ucData)
{
   /* Wait for completion of previous write */
   while(EECR & (1<<EEWE))
   ;
   /* Set up address and data registers */
   EEAR = uiAddress;
   EEDR = ucData;
   /* Write logical one to EEMWE */
   EECR |= (1<<EEMWE);
   /* Start eeprom write by setting EEWE */
   EECR |= (1<<EEWE);
}

which, when compiled at -O0, results in totally incorrect behaviour. No amount of source-level debugging is going to give you insight into why that operation is going wrong. However, using another debugging tool - observation of the interleaved assembly and source - will reveal the source of the problem.

As far as I'm concerned, the seemingly random jumping around phenomenon of stepping through optimized code isn't really a problem as long as the user is told to expect that this will be going on.

What really needs to be resolved is the ability of the debugger to identify the correct locations of local variables which have been bound to registers. Right now, AVR Studio's insistence on looking in SRAM for all variables, and its inability to cope with a single location being recycled for multiple different local variables at different points within the scope of the same function, are the severe limitations preventing effective debugging with optimization enabled IMHO.
 
clawson
PostPosted: Sep 17, 2007 - 03:31 PM
10k+ Postman


Joined: Jul 18, 2005
Posts: 71208
Location: (using avr-gcc in) Finchingfield, Essex, England

Totally agree with Luke's last paragraph. That is what seems to be the "problem" with optimised code.

A very "simplistic" fix would be for the ELF to contain a single bit flag to say "this was built with optimisation" and the debugger then simply to refuse to watch variables in this case Wink

Cliff

BrianS
PostPosted: Sep 17, 2007 - 05:13 PM
Hangaround


Joined: Sep 24, 2001
Posts: 113


Yes, the problem with AVR Studio is its poor implementation of watching local vars.

I haven't looked at AVR Studio's V2 simulator, does it do any better?

The C examples in 99% of the AVR documentation are for the IAR compiler though. As usual with compilers, your mileage may vary. When debugging something like that, which has specific timing requirements, it's easy enough to check the cycle counter in the simulator or switch to the disassembly view to see what's going on.

Each debugging session has its own set of requirements.
 
skeeve
PostPosted: Sep 17, 2007 - 05:52 PM
Raving lunatic


Joined: Oct 29, 2006
Posts: 3210


As others have noted,
debugging can be much easier compiling with -O0.
-O0 can also cause problems, e.g. with 4 cycle limits.
Perhaps newbies should be required
to make an explicit choice.
Make the default something like -Oyou_must_choose .

Is there a clean (usable by a newbie)
way to tell AVR-Studio to use different
-O flags on different files?
Even if -O0 would otherwise be useful, one might
want to make sure the result will fit in memory.

_________________
Michael Hennebry
"Religious obligations are absolute." -- Relg
 
JohanEkdahl
PostPosted: Sep 17, 2007 - 07:14 PM
10k+ Postman


Joined: Mar 27, 2002
Posts: 22029
Location: Lund, Sweden

Quote:

Is there a clean (usable by a newbie)
way to tell AVR-Studio to use different
-O flags on different files?

Yes. Under Project, Configuration Options. Select Custom Options. Then mark the file, and add the compiler option you want (in this case "-O0").
 
EW
PostPosted: Sep 18, 2007 - 12:09 AM
Raving lunatic


Joined: Mar 01, 2001
Posts: 5013
Location: Rocky Mountains

clawson wrote:
{whisper]I've half a hope he may have caught that point as well already

(in fact I don't think there'll be anyone who can think of ANY argument for not linking against libm.a by default - wonder if they'll include -std=gnu99 as well for all the "for(int i=0" folks?)


Yes, I already knew about those. IIRC, -std=gnu99 will be in the 4.13 SP1 release. We did take a look at automatically linking in libc and libm. However, that will take a bit more work than we had time for. We would like to target that for 4.13 SP2.
 
EW
PostPosted: Sep 18, 2007 - 12:13 AM
Raving lunatic


Joined: Mar 01, 2001
Posts: 5013
Location: Rocky Mountains

skeeve wrote:
As others have noted,
debugging can be much easier compiling with -O0.
-O0 can also cause problems, e.g. with 4 cycle limits.
Perhaps newbies should be required
to make an explicit choice.
Make the default something like -Oyou_must_choose .


The world does not consist of -O0 and -Os exclusively. There are also -O1, -O2, and -O3. Perhaps -O1 is sufficient to handle the 4-cycle limits but still allows easier debugging than -Os. I don't know; I haven't tried it.

If one wants a sip from the firehose, then go through the GCC User Manual and look at all of the individual optimization flags that one can turn on and off at will. GCC is very flexible.
 
skeeve
PostPosted: Sep 18, 2007 - 01:40 AM
Raving lunatic


Joined: Oct 29, 2006
Posts: 3210


EW wrote:
skeeve wrote:
As others have noted,
debugging can be much easier compiling with -O0.
-O0 can also cause problems, e.g. with 4 cycle limits.
Perhaps newbies should be required
to make an explicit choice.
Make the default something like -Oyou_must_choose .


The world does not consist of -O0 and -Os exclusively. There are also -O1, -O2, and -O3. Perhaps -O1 is sufficient to handle the 4-cycle limits but still allows easier debugging than -Os. I don't know; I haven't tried it.
Choosing does not imply a binary choice.
If one wants to emphasize that,
perhaps -Oyou_must_choose_0123s would be good.
If -O1 will work both for newbie debugging
and for things that need optimization,
probably that should be the default.
Quote:
If one wants a sip from the firehose, then go through the GCC User Manual and look at all of the individual optimization flags that one can turn on and off at will. GCC is very flexible.

Geoff
PostPosted: Sep 19, 2007 - 03:51 AM
Hangaround


Joined: Apr 24, 2006
Posts: 285
Location: Sydney, Australia

skeeve wrote:
Perhaps newbies should be required
to make an explicit choice.
Make the default something like -Oyou_must_choose

ah, so you want a hundred threads a week on "why won't my program compile" Laughing

This is another one of those debates with no solution, because different problems require a different approach. Some bugs are timing related, so -O0 won't help you. Some bugs are logic related, and -O0 makes it much easier to follow your code.

I'm a fan of -O0 as it reduces the amount of thinking I have to do when debugging something. I have certainly come up against things that work at some optimization levels and not others, though, so care must be taken - and that realisation only comes with experience.

Bottom line I think you have to get newbies started as easily as possible, and forcing them to make a choice about debugging levels when they don't really grasp the implication of their choice is asking for trouble.
 
skeeve
PostPosted: Sep 19, 2007 - 06:12 PM
Raving lunatic


Joined: Oct 29, 2006
Posts: 3210


Geoff wrote:
skeeve wrote:
Perhaps newbies should be required
to make an explicit choice.
Make the default something like -Oyou_must_choose

ah, so you want a hundred threads a week on "why won't my program compile" Laughing
cc1.exe wrote:
error: invalid option argument '-Oyou_must_choose_0123s'
If the above really is too uninformative for a newbie perhaps
we could add a sticky thread with the subject -Oyou_must_choose.
Geoff wrote:
This is another one of those debates with no solution, because different problems require a different approach. Some bugs are timing related, so -O0 won't help you. Some bugs are logic related, and -O0 makes it much easier to follow your code.

I'm a fan of -O0 as it reduces the amount of thinking I have to do when debugging something. I have certainly come up on things that work on some debugging levels and not others though, so it requires that care must be taken and that realisation only comes with experience.

Bottom line I think you have to get newbies started as easily as possible, and forcing them to make a choice about debugging levels when they don't really grasp the implication of their choice is asking for trouble.
If the choice they are handed is wrong half the time,
that is a problem too,
especially if the problem gives no hint of its origin.
On getting the above error message and making the wrong choice,
the newbie will be better off than if he had had the wrong choice handed to him.
He will be aware of the choice and that it was important.

Geoff
PostPosted: Sep 24, 2007 - 06:23 AM
Hangaround


Joined: Apr 24, 2006
Posts: 285
Location: Sydney, Australia

Fair call. I'd just try and make the error as informative as possible in terms of steering them towards the FAQ, so they can understand and make a choice quickly, rather than scratching their heads about "choose what? 123? why?"
 
skeeve
PostPosted: Sep 25, 2007 - 05:20 PM
Raving lunatic


Joined: Oct 29, 2006
Posts: 3210


Geoff wrote:
fair call.. I'd just try and make the error as informative as possible in terms of steering them towards the FAQ so they can understand and make a choice quickly... rather than scratching their heads about "choose what? 123? why?"
I'm not sure that the FAQ is all that useful in this regard.
It basically says Use -Os, you'll get smaller code.
It doesn't address debugging or what -O0 might break.
Perhaps a sticky would be better.

EW
PostPosted: Sep 25, 2007 - 08:04 PM
Raving lunatic


Joined: Mar 01, 2001
Posts: 5013
Location: Rocky Mountains

Actually, a patch to fix the avr-libc FAQ would be quite useful...
 
BrianS
PostPosted: Sep 28, 2007 - 05:34 PM
Hangaround


Joined: Sep 24, 2001
Posts: 113


I got bit today. I debugged by the usual -O0 method, and got caught out by the following test being virtually optimised away in the actual device firmware (at -Os):

Code:

bool memtest( unsigned char* _start,
              unsigned char* _end,
              unsigned char** addr )
{
  /* Test the memory between start and end */
  int i;
  unsigned char tmp;
  addr = NULL;

  while(_start < _end)
  {
    /* Even bits test */
    tmp = *_start;
    *_start = 0xAA;
    if (*_start != 0xAA)
    {
      *addr = _start;
      return false;
    }

    /* Odd bits test */
    *_start = 0x55;
    if (*_start != 0x55)
    {
      *addr = _start;
      return false;
    }

    /* Rotating bit test */
    for(i=1; i<0x100; (i <<= 1))
    {
      *_start = (0xFF & i);
      if (*_start != (0xFF & i))
      {
        *addr = _start;
        return false;
      }
    }

    /* Test next location */
    *_start++ = tmp;
  }

  return true;
}


A memory test that I thought was succeeding should in fact have been failing - we have an HC573 on board instead of an AHC573!

Had to modify it to:
Code:

bool memtest( volatile unsigned char* _start,
              volatile unsigned char* _end,
              volatile unsigned char** addr )
{ ...


Should have been obvious, but it still tripped me up! Sad
 
S-Sohn
PostPosted: Sep 28, 2007 - 07:14 PM
Posting Freak


Joined: Aug 22, 2004
Posts: 1630
Location: Germany

Quote:
Code:

void T6963PutIconPixel(uint16_t x, uint16_t y, uint8_t colour)
{
  uint16_t offs = (x | y) ? (((y*240)+x) / 4) : 0;
  uint8_t msk = (x | y) ? (3<<((((y*240)+x)%4)<<1)) : (3<<0);
  uint8_t cmsk = (x | y) ? (colour<<((((y*240)+x)%4)<<1)) : (colour<<0);
  screen_overlay[offs] &= (uint8_t)~(msk);
  screen_overlay[offs] |= cmsk;
}


The more experience I have with programming, the more I avoid code like this.

1. If I split my C-code into several lines, the compiled code won't become larger.

2. When debugging, I can monitor in several steps how the result is calculated and can better determine where the error hides.

3. If you don't take care, the compiler often emits 16-bit operations when only an 8-bit operation is necessary. When looking at the *.lss file it's much easier to see where I need to add a typecast if I don't use large expressions like the above.

When I debug code, I set a breakpoint and wait until the program is halted. Then I usually step through the mixed code. That gives me a much deeper understanding of how the compiler works. So I learned a lot of how I can program even more compact software. The number of C-code lines says nothing about the number of generated instructions. I never used the -O0 optimization level and I can only underline Cliff's conclusion to avoid -O0 optimization at all costs.
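As an illustration, the mask calculation from the quoted function could be split like this (icon_pixel_mask is a made-up helper name, not from the original code): each intermediate gets its own variable you can watch in the debugger, and the explicit uint8_t casts make it obvious where 8 bits are enough.

```c
#include <stdint.h>

/* Hypothetical helper mirroring the mask math in the quoted
   T6963PutIconPixel, split into named, watchable steps. */
static uint8_t icon_pixel_mask(uint16_t x, uint16_t y)
{
    uint16_t pos   = (uint16_t)(y * 240u + x);  /* linear pixel index      */
    uint8_t  sub   = (uint8_t)(pos % 4u);       /* pixel position in byte  */
    uint8_t  shift = (uint8_t)(sub << 1);       /* 2 bits per pixel        */
    return (uint8_t)(3u << shift);              /* 2-bit mask at position  */
}
```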

Regards
Sebastian
 
skeeve
PostPosted: Sep 28, 2007 - 07:27 PM
Raving lunatic


Joined: Oct 29, 2006
Posts: 3210


BrianS wrote:
I got bit today. Debug by the usual -O0 method, and got caught out by the following test being virtually optimised away in the actual device firmware (at -Os):

Code:

bool memtest( unsigned char* _start,
              unsigned char* _end,
              unsigned char** addr )
{
  /* Test the memory between start and end */
  int i;
  unsigned char tmp;
  addr = NULL;

  while(_start < _end)
  {
    /* Even bits test */
    tmp = *_start;
    *_start = 0xAA;
    if (*_start != 0xAA)
    {
      *addr = _start;
      return false;
    }
I think that the first assignment should be "*addr=NULL;" .
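To spell out the difference (helper names here are mine, purely for illustration): assigning to the parameter only changes the function's local copy, while writing through it reaches the caller's pointer. In the original code, `addr = NULL;` followed later by `*addr = _start;` is a NULL dereference.

```c
#include <stddef.h>

/* Assigning the parameter changes only the local copy. */
static void clear_local(unsigned char **addr)
{
    addr = NULL;     /* caller sees no change */
    (void)addr;
}

/* Writing through the parameter reaches the caller's pointer. */
static void clear_caller(unsigned char **addr)
{
    *addr = NULL;    /* caller's pointer is cleared */
}
```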

_________________
Michael Hennebry
"Religious obligations are absolute." -- Relg
 
CoolHammer
PostPosted: Mar 16, 2009 - 08:06 AM
Rookie


Joined: Aug 28, 2006
Posts: 43
Location: Finland

As one of the newbies this thread often refers to, I'll toss my opinion into the bowl.

I have had several problems with debugging while learning. I totally understand the jumble of thoughts and the desperate debugging of someone without a deep understanding of what happens under the hood.

I found an article about those magic -O switches a while ago. Since I'm using a fairly new WinAVR and AVR Studio, the default optimization is -Os. I must say debugging as a newbie with this setting isn't easy. As a matter of fact, I almost gave up on the project because of the debugging problems. After realizing the reason for them - optimization - I turned it off with -O0. It didn't take long to find my bug, and off we went with the new functionality. Though I was lucky with the -O0 switch: after compiling, the code filled 96% of the ATtiny25's memory, where with -Os it was about 40%.

As someone said earlier in this thread, the best solution would be to improve the debugger's ability to follow -Os optimized code.
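Until then, a middle ground is to keep -Os as the default and rebuild only the module under test at -O0. A makefile fragment along these lines could do it (variable names are illustrative, not the WinAVR template's):

```make
CC     = avr-gcc
OPT   ?= -Os
CFLAGS = -mmcu=attiny25 -g $(OPT) -Wall

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

# Rebuild just the module being debugged without optimisation:
#   make main.o OPT=-O0
```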
 
js
PostPosted: Mar 16, 2009 - 09:08 AM
10k+ Postman


Joined: Mar 28, 2001
Posts: 22618
Location: Sydney, Australia (Gum trees, Koalas and Kangaroos, No Edelweiss)

I think STU should be made to write a multitasking O/S for the PIC in asm for such heresy!!

_________________
John Samperi
Ampertronics Pty. Ltd.
www.ampertronics.com.au
* Electronic Design * Custom Products * Contract Assembly
 
cmhicks
PostPosted: Mar 16, 2009 - 09:11 AM
Hangaround


Joined: Nov 20, 2008
Posts: 339
Location: Cambridge, UK

BrianS wrote:
Yes, the problem with AVR Studio is its poor implementation of watching local vars.

I haven't looked at AVR Studio's V2 simulator, does it do any better?

Simulator 2 is no better in this respect. You wouldn't really expect it to be, as it is a function of how the debugger interprets the object code, not of the target on which that object code is running (or being simulated).

Christopher Hicks
==
 
stu_san
PostPosted: Mar 16, 2009 - 02:50 PM
Raving lunatic


Joined: Dec 30, 2005
Posts: 2327
Location: Fort Collins, CO USA

js wrote:
I think STU should be made to write a multitasking O/S for the PIC in asm for such heresy!!
Razz Make me, Roo Boy! Laughing

"I am endeavoring to build a duotronic mnemonic circuit using stone knives and bear skins." -- Spock.

Stu

_________________
Engineering seems to boil down to: Cheap. Fast. Good. Choose two. Sometimes choose only one.

Newbie? Be sure to read the thread Newbie? Start here!
 
Powered by PNphpBB2 © 2003-2006 The PNphpBB Group