How do compilers handle unused code?


Folks

 

This is hopefully a very basic question. I have an application using an ATTINY414, and one part of it uses TWI to take a reading from a temperature sensor; I'm using a basic single-register read.

 

Due to the relative scarcity of posts about the newer ATTINY models and my lack of programming ability, I created an ATMEL Start project for the chip, added the TWI option, and then just copied all this code across to my project.

 

It's working fine but I'm sure that I've copied over a lot of code that I'm not actually using.

 

My question (there is one coming, honest) is whether it's worth my while going through this code and working out what I need and what's bloat. Is the compiler smart enough to work out when blocks of code will never be called and leave them out or does it assume that I know what I'm doing and include it all in the output that's uploaded to the MCU?

 

I'm fine for memory; I don't NEED to free any up. It just offends my sense of neatness to have redundant code sitting there, and it made me wonder how compilers handle this.

 

I'm using ATMEL Studio 7 if that's relevant.

 

Thanks


Compilers analyse your code and throw away anything that is unused.
Linkers only link code that is used.
.
Don't worry about your AVR getting filled with unused code.
Just enjoy the experience of wading through the treacle that is Atmel Start.
All the same, the Start code should get you operational.
.
You probably want to do severe pruning if you want to actually follow the convoluted structure of Start / ASF.
.
David.


If it is unused, it will be discarded (by the linker though, not the compiler). 

 

You can use https://gallery.microchip.com/pa... to check the sizes of the different symbols in your program. It should point pretty directly at things that you might save on...
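If you prefer the command line, the toolchain's avr-nm can give a similar overview of where the bytes go (only a sketch; "myproject.elf" stands in for whatever your build produces):

avr-nm --size-sort --print-size myproject.elf

The biggest symbols end up at the bottom of the list, which is usually enough to spot where the flash is going.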

:: Morten

 

(yes, I work for Atmel, yes, I do this in my spare time, now stop sending PMs)


GilchristT wrote:

Is the compiler smart enough to work out when blocks of code will never be called and leave them out or does it assume that I know what I'm doing and include it all in the output that's uploaded to the MCU?

Yes to the "or" question.

 

More precisely: it depends.

 

If the compiler can figure out that some code will be unused, then it will remove it.  This also depends on the analysis capabilities of the compiler, which may change with the optimization level and optimization strategy (global optimization, value-range analysis, data evolution analysis, etc.).

 

If the linker can figure out that some code is unused, then it will remove unreferenced sections, provided --gc-sections is on and there is no KEEP directive or entry symbol referring to that section.  Compiler options like -ffunction-sections and -fdata-sections increase section granularity, so the chance of an unused chunk being thrown out increases.
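As a minimal illustration, in the same spirit as the -fno-common example further down (only a sketch, using an ATmega8 target for convenience):

echo "void unused(void){} int main(void){return 0;}" | avr-gcc -xc - -mmcu=atmega8 -Os && avr-size a.out
echo "void unused(void){} int main(void){return 0;}" | avr-gcc -xc - -mmcu=atmega8 -Os -ffunction-sections -Wl,--gc-sections && avr-size a.out

In the first build unused() stays in the output because everything lands in one .text section; in the second it gets its own .text.unused section, which --gc-sections can then discard, so the reported text size shrinks.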

 

If it only turns out at run time that some code is unused, you lose.  Even if a static code analysis could prove in principle that some chunks are not used, the tools cannot do anything about it.  A common example is a full-flavoured printf implementation that drags in full float-printing capabilities, even if printf is never used to print a float.  avr-libc has some strategies to mitigate this by means of command-line options; the avr-libc docs will tell you more.
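For reference, that avr-libc mitigation is chosen at link time; roughly like this (a sketch only, with main.o/main.elf as placeholders; the exact options are in the avr-libc stdio documentation):

# default vfprintf: no floating-point conversions
avr-gcc main.o -mmcu=atmega8 -o main.elf
# minimal vfprintf: smallest, reduced feature set
avr-gcc main.o -mmcu=atmega8 -Wl,-u,vfprintf -lprintf_min -o main.elf
# full vfprintf: adds %e/%f/%g support, noticeably larger
avr-gcc main.o -mmcu=atmega8 -Wl,-u,vfprintf -lprintf_flt -lm -o main.elf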

 

avrfreaks does not support Opera. Profile inactive.


Thanks folks, some great info there


AVR Studio uses GCC.

GCC has different "optimisation" settings which influence the code generation significantly.

Optimisation settings are often a tradeoff between code size and execution speed.

There are a few predefined optimisation levels (-O, -O0, -O1, -O2, -O3, -Os, -Ofast, -Og), and individual optimisations can be turned on/off explicitly.

Below is a part of the GCC manual for the optimisation settings.

 

GCC can also generate list files.

In those list files you can see the asm instructions generated for each line of C source code.

Studying the LST file can be very helpful if you're interested in the low-level stuff (for example ISR optimisation).
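One way to get such a listing outside the IDE (a sketch; the file names are placeholders, and the source interleaving needs the project to be compiled with -g):

avr-objdump -d -S myproject.elf > myproject.lss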

   Options That Control Optimization
       These options control various sorts of optimizations.

       Without any optimization option, the compiler's goal is to reduce the cost
       of compilation and to make debugging produce the expected results.
       Statements are independent: if you stop the program with a breakpoint
       between statements, you can then assign a new value to any variable or
       change the program counter to any other statement in the function and get
       exactly the results you expect from the source code.

       Turning on optimization flags makes the compiler attempt to improve the
       performance and/or code size at the expense of compilation time and
       possibly the ability to debug the program.

       The compiler performs optimization based on the knowledge it has of the
       program.  Compiling multiple files at once to a single output file mode
       allows the compiler to use information gained from all of the files when
       compiling each of them.

       Not all optimizations are controlled directly by a flag.  Only
       optimizations that have a flag are listed in this section.

       Most optimizations are only enabled if an -O level is set on the command
       line.  Otherwise they are disabled, even if individual optimization flags
       are specified.

       Depending on the target and how GCC was configured, a slightly different
       set of optimizations may be enabled at each -O level than those listed
       here.  You can invoke GCC with -Q --help=optimizers to find out the exact
       set of optimizations that are enabled at each level.

       -O
       -O1 Optimize.  Optimizing compilation takes somewhat more time, and a lot
           more memory for a large function.

           With -O, the compiler tries to reduce code size and execution time,
           without performing any optimizations that take a great deal of
           compilation time.

           -O turns on the following optimization flags:

           -fauto-inc-dec -fcompare-elim -fcprop-registers -fdce -fdefer-pop
           -fdelayed-branch -fdse -fguess-branch-probability -fif-conversion2
           -fif-conversion -fipa-pure-const -fipa-profile -fipa-reference
           -fmerge-constants -fsplit-wide-types -ftree-bit-ccp
           -ftree-builtin-call-dce -ftree-ccp -ftree-ch -ftree-copyrename
           -ftree-dce -ftree-dominator-opts -ftree-dse -ftree-forwprop -ftree-fre
           -ftree-phiprop -ftree-slsr -ftree-sra -ftree-pta -ftree-ter
           -funit-at-a-time

           -O also turns on -fomit-frame-pointer on machines where doing so does
           not interfere with debugging.

       -O2 Optimize even more.  GCC performs nearly all supported optimizations
           that do not involve a space-speed tradeoff.  As compared to -O, this
           option increases both compilation time and the performance of the
           generated code.

           -O2 turns on all optimization flags specified by -O.  It also turns on
           the following optimization flags: -fthread-jumps -falign-functions
           -falign-jumps -falign-loops  -falign-labels -fcaller-saves
           -fcrossjumping -fcse-follow-jumps  -fcse-skip-blocks
           -fdelete-null-pointer-checks -fdevirtualize -fexpensive-optimizations
           -fgcse  -fgcse-lm -fhoist-adjacent-loads -finline-small-functions
           -findirect-inlining -fipa-sra -foptimize-sibling-calls
           -fpartial-inlining -fpeephole2 -fregmove -freorder-blocks
           -freorder-functions -frerun-cse-after-loop -fsched-interblock
           -fsched-spec -fschedule-insns  -fschedule-insns2 -fstrict-aliasing
           -fstrict-overflow -ftree-switch-conversion -ftree-tail-merge -ftree-pre
           -ftree-vrp





 

 

 

Paul van der Hoeven.
Bunch of old projects with AVR's:
http://www.hoevendesign.com


Most modern IDEs and makefiles driving avr-gcc use -ffunction-sections in the compilation and --gc-sections in the link.
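Spelled out on a single command line it might look like this (a sketch only; an IDE such as Atmel Studio normally adds the equivalent options for you in the project's toolchain settings):

avr-gcc main.c -mmcu=atmega8 -Os -ffunction-sections -fdata-sections -Wl,--gc-sections -o main.elf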


clawson wrote:
Most modern IDEs and makefiles driving avr-gcc use -ffunction-sections in the compilation and --gc-sections in the link.

Indeed.

With respect to unused data, -fdata-sections -Wl,--gc-sections is not the whole story as demonstrated by the following example:

echo "int volatile v, w; int main() { return v; }" | avr-gcc -xc - -mmcu=atmega8 -Os -fdata-sections -Wl,--gc-sections && avr-size a.out
   text	   data	    bss	    dec	    hex	filename
     86	      0	      4	     90	     5a	a.out

It has one used variable (v) and one unused variable (w), yet there are 4 bytes in .bss (common, actually), even though only the 2 bytes from v are used.

echo "int volatile v, w; int main() { return v; }" | avr-gcc -xc - -mmcu=atmega8 -Os -fdata-sections -Wl,--gc-sections -fno-common && avr-size a.out
   text	   data	    bss	    dec	    hex	filename
     86	      0	      2	     88	     58	a.out

With -fno-common, v and w are put in their own sections .bss.v and .bss.w respectively, so that w can now be thrown out.
 

avrfreaks does not support Opera. Profile inactive.


With -fno-common, v and w are put in their own sections .bss.v and .bss.w respectively, so that w can now be thrown out.

Learn something new every day!

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"Read a lot.  Write a lot."

"We see a lot of arses on handlebars around here." - [J Ekdahl]