I have read some great info about proper data types, when to use them and what to use, more specifically how to keep the overall flash size down. But there is one thing I still don't understand: why does switching between unsigned char, char, and uint8_t change my flash size when they are all the same size?
I know that an unsigned char just counts from 0 and up (8 bits total), where a char includes negative numbers but is still 8 bits total; the same goes for uint8_t and int8_t. But I have seen cases where using an unsigned char saves flash size, and vice versa.
Also, I have seen cases where substituting an int for a char saves flash size. I just don't see how that is possible, unless there is some truncation I can't find. Going from an int to a char saving space, I get; but from a char to an int?
And this one really gets me: replacing chars with uint8_t, and vice versa, frees up flash. Are the fixed-width stdint types better in some cases?
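To make it concrete, these are the kinds of one-line swaps I mean (the variable names are made up for illustration, this isn't my real code):

// four alternatives for the same small counter; swapping between
// these is the kind of change that moves my flash numbers around
unsigned char count_a = 0;  // unsigned, 8 bits
char          count_b = 0;  // signedness is implementation-defined, still 8 bits
uint8_t       count_c = 0;  // fixed-width unsigned 8-bit (from <stdint.h>, built in on Arduino)
int           count_d = 0;  // 16 bits on AVR, for example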
I know this all has to do with how they're used, but I was wondering if there was a general answer to this strangeness.
If needed I can demonstrate with some example code. I'm going to venture to say it's just some understanding I need to acquire.
One quick example:
If I do this
uint8_t CLOCK_BIT, LATCH_BIT, DATA_BIT, DATA_PIN;
my flash size is at 100.0
but changing to
char CLOCK_BIT, LATCH_BIT, DATA_BIT, DATA_PIN;
I'm now at 101.2
CLOCK_BIT, LATCH_BIT, DATA_BIT, and DATA_PIN are set in my init and used throughout as static data, mainly in if statements.
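If it helps, here is a stripped-down sketch along the lines of what I'm compiling (the pin numbers and the loop body are placeholders, not my actual project):

uint8_t CLOCK_BIT, LATCH_BIT, DATA_BIT, DATA_PIN;  // the declaration I'm swapping to char

void setup() {
  // set once in init, as described above
  CLOCK_BIT = 2;
  LATCH_BIT = 3;
  DATA_BIT  = 4;
  DATA_PIN  = 5;
  pinMode(CLOCK_BIT, OUTPUT);
  pinMode(LATCH_BIT, OUTPUT);
  pinMode(DATA_BIT,  OUTPUT);
  pinMode(DATA_PIN,  OUTPUT);
}

void loop() {
  // used as static data, mainly in if statements like this one
  if (DATA_PIN == 5) {
    digitalWrite(CLOCK_BIT, HIGH);
  } else {
    digitalWrite(CLOCK_BIT, LOW);
  }
}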
So why does the char use so much more?