Here's one that got me today.
Given the following code fragment (used to illustrate the bug):
    unsigned long mask;
    unsigned char shift;

    for (shift = 0; shift < 24; shift++)
    {
        mask = 1 << shift;
        printf("%02d: %06lx\n", shift, mask);
    }
At first glance everything seems correct, and on a PC it will compile and produce exactly what you would expect. Yet under IAR, the output goes wrong once shift reaches 15.
Output:

    00: 000001
    01: 000002
    . . .
    13: 002000
    14: 004000
    15: ff8000
    16: 000000
    . . .
    23: 000000
You see, I had forgotten that IAR treats integer literals as signed ints by default, and int is only 16 bits on my target. Because of the integer promotion and conversion rules, the shift is performed on a signed 16-bit int and only then converted to a long, resulting in a 16-bit shift followed by a sign extension up to 32 bits.
Simply forcing the literal to be a long corrects this:

    mask = 1L << shift;
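If you don't have an IAR toolchain handy, here is a minimal sketch that reproduces the mechanism on a PC, using int16_t to stand in for the target's 16-bit int. The narrowing cast is implementation-defined, so treat the exact values as an assumption on my part, but it makes the sign extension easy to see:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        unsigned char shift = 15;

        /* What a 16-bit-int target effectively does: shift in 16 bits,
           then sign-extend the (now negative) result when widening.
           The narrowing cast is implementation-defined; on common
           compilers it yields -32768 (0x8000). */
        int16_t  as16    = (int16_t)(1 << shift);
        uint32_t widened = (uint32_t)(int32_t)as16;

        /* What was intended: do the shift in 32 bits from the start. */
        uint32_t wanted = (uint32_t)1 << shift;

        printf("16-bit shift, then widen: %08" PRIx32 "\n", widened); /* ffff8000 */
        printf("32-bit shift:             %08" PRIx32 "\n", wanted);  /* 00008000 */
        return 0;
    }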
So beware of those hidden casts; they can affect both the efficiency of your code and its reliability. While mine got me on reliability, the following will get you on efficiency.
    unsigned char mask;
    unsigned char shift;

    for (shift = 0; shift < 8; shift++)
    {
        mask = 1 << shift;
        printf("%02d: %02x\n", shift, mask);
    }
While you will always get the correct result this time, the shift will still be performed as a 16-bit shift and the result then truncated down to a char. The solution is to force the literal down to a char this time, as sketched below. (If you're lucky, the optimizer might strip out the shift on the upper byte anyway.)
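Something like the following is what I mean. Strictly speaking C's integer promotions widen the operand right back to int before the shift, so the cast doesn't change the result; it just tells the compiler that only the low byte of the result is ever used:

    #include <stdio.h>

    int main(void)
    {
        unsigned char mask;
        unsigned char shift;

        for (shift = 0; shift < 8; shift++)
        {
            /* Same value as 1 << shift; the cast just flags that only
               the low byte of the result matters. */
            mask = (unsigned char)1 << shift;
            printf("%02d: %02x\n", shift, mask);
        }
        return 0;
    }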
Also note that while I used IAR, the same thing will happen on any compiler. It's just a matter of when, and that is determined by the size of the compiler's default int, the type it gives unsuffixed literals.
Note that these gotchas only apply if the result cannot be determined at compile time. If it can, the compiler folds the whole expression into a constant of the correct size, leaving an efficient load of the right value.
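For example, something like this (just a sketch) folds away entirely; no run-time shift is emitted:

    /* Every operand is a compile-time constant, so the compiler folds
       this to a plain load of 0x00100000. */
    unsigned long mask = 1UL << 20;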