I am running Atmel Studio 7 with pretty much default settings.
Is there a way to stop the compiler from optimizing away variables when doing a simulation? For example, suppose I have something like this:
uint8_t foo(uint8_t x)
{
    float y = x / 2.0;
    uint8_t z = (uint8_t) y;
    return z;
}
The compiler will optimize away y (and possibly z), but I want to be able to see y while stepping through the simulation, so I can look at the float value before it's converted to an int.
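One workaround I've been considering is marking y as volatile, which (as I understand it) forces the compiler to keep the store, so the value stays visible in the simulator even with optimization on, at the cost of slightly different generated code. Just a sketch of the same function:

#include <stdint.h>

uint8_t foo(uint8_t x)
{
    /* volatile forces the compiler to actually store y, so the
       simulator can show its value even at -O1 */
    volatile float y = x / 2.0;
    uint8_t z = (uint8_t) y;
    return z;
}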
Is there a way to do this? I suspect it's some sort of optimization flag, but I can't seem to find it. Do I just turn off optimization entirely by setting it to -O0? (It's currently set to -O1.)
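I've also seen GCC's per-function optimize attribute mentioned, which would let me drop optimization for just the function I want to step through instead of the whole project. The attribute is documented for GCC, but whether the avr-gcc in AS7 honours it for this purpose is an assumption on my part:

/* Per-function attribute documented for GCC; I haven't verified that
   avr-gcc in AS7 fully respects it for debugging purposes. */
uint8_t __attribute__((optimize("O0"))) foo(uint8_t x)
{
    float y = x / 2.0;
    uint8_t z = (uint8_t) y;
    return z;
}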
For bonus points: I deploy to the chip using avrdude, which I've set up as an external tool in AS7. Works great. Is there a way to set up the build process so that it creates one version (unoptimized in this fashion) for simulation and one version (fully optimized) for deployment to the uC, so that I don't have to change settings between simulating and deploying?
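What I'm imagining is something like using the Debug configuration for the simulator and the Release configuration for the chip, with the avrdude external tool pointed at the Release output. The programmer, part, and path below are just placeholders for my actual setup:

Debug configuration:    -O0 (or -Og), used only when running the simulator
Release configuration:  -O1, built for deployment

avrdude external tool arguments (placeholders shown):
    -c usbasp -p m328p -U flash:w:"$(ProjectDir)Release\$(TargetName).hex":i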
Thanks.