avr-libc variable definition (possibly stupid question)


I'm reworking some example code to fit my project (and gcc).

Once again I stumbled over the different possibilities of variable definition. The avr-libc reference manual states the following typedefs for stdint.h:

Exact-width integer types
Integer types having exactly the specified width

typedef signed char             int8_t
typedef unsigned char           uint8_t
typedef signed int              int16_t
typedef unsigned int            uint16_t
[...]

This means that using a definition like:

int8_t     some_var;

is essentially the same as:

signed char     some_var;

How does gcc handle the definition if the "signed" is left out? Does such a definition default to "signed", or is it safer to always use explicit signed/unsigned definitions?

What's more, I'm puzzled because I always believed "char" and "int" to be of the same size in avr-gcc (but I can't tell you how that idea settled in my mind).


For char there's a compiler option (-funsigned-char) to say whether it should be treated as signed or unsigned. Both the Mfile Makefiles and Studio's GCC project system select this option by default, so char without signed/unsigned will be treated as unsigned.

But this is EXACTLY the reason for using the stdint types. When you use uint8_t there is absolutely no doubt whatsoever what you will be getting!

Cliff

BTW on AVR, like almost every compiler I've ever come across, char is 8 bits. On 8-bit compilers it tends to be the case that both short and int are 16 bits, while on 32-bit compilers int will usually be 32 bits (and short 16 bits).


Cliff, thanks for the hint on compiler options. Time to RTFM once again.
(Been there, done that, but obviously reading stuff does not mean that
you also remember it when you need it :roll: )


I believe the C standard states that the size of int will match the native data size of the target architecture, unless that size is less than 16 bits, in which case the size is 16 bits.


> believe the C standard states that the size of int will match
> the native data size of the target architecture

Well, the standard itself does not state anything about implementation
sizes, it only requires certain minimal value ranges for each type.
These ranges essentially require `int' to have at least 16 bits.

But yes, the basic idea behind an int is to match the native object
size on the target machine. However, when 64-bit CPUs came widely into
use, people apparently didn't have the heart to follow that idea, so
`int' is usually still only 32 bits on these systems.

Back to the OP: from a C standard's point of view, the only portable
way is to consider all three types, char, unsigned char, and signed
char, to be distinct data types that always require a cast when passing
data around between them. A portable application simply must behave
the same regardless of whether the implementation uses signed or
unsigned char as its default `char' type. GCC 4.x appears to issue a
lot more warnings when this is not met than older compiler versions
used to.

Jörg Wunsch

Please don't send me PMs, use email if you want to approach me personally.


dl8dtl wrote:
But yes, the basic idea behind an int is to match the native object
size on the target machine. [...]

I guess it's been this statement (which I must have read in some documentation) which made me believe an int would be 8 bits wide on the AVRs. Thank you for clarifying.

dl8dtl wrote:
Back to the OP: from a C standard's point of view, the only portable
way is to consider all three types, char, unsigned char, and signed
char, to be distinct data types that always require a cast when passing
data around between them. A portable application simply must behave
the same regardless of whether the implementation uses signed or
unsigned char as its default `char' type. GCC 4.x appears to issue a
lot more warnings when this is not met than older compiler versions
used to.

So sticking to 'uintNN_t' and 'intNN_t' seems OK to me, since they are typedef'd from the signed and unsigned basic types in stdint.h and therefore provide a clear distinction. What's more (and this might be pure personal preference and/or imagination), reading 'int32_t' somehow produces a clearer picture of the possible value range than 'signed long int', and understanding source code gets easier.


That's exactly the idea of the (u)intNN_t types, but you will hit problems if, for example, you have a "const uint8_t my_string[] = {"Hello"}" and then call a library function like strlen(my_string): strlen() expects a "const char *", not a "const uint8_t *", so you may need to cast some things to quell the warnings you will get otherwise.

Cliff


As a rule of thumb: use `char' (unqualified) for anything that is
true characters and character strings.

Use uint8_t and int8_t for anything where you are using it as a
small integer.

Normally, you'll only have to typecast once, when data passes the
"small integer" <-> "character" domain. This e.g. happens when
reading characters from a UART: the UDR is in the "small integer"
domain, but when you assemble the received data into a string,
it should be cast to "char".

Jörg Wunsch

Please don't send me PMs, use email if you want to approach me personally.


clawson wrote:
That's exactly the idea of the (u)intNN_t types, but you will hit problems if, for example, you have a "const uint8_t my_string[] = {"Hello"}" and then call a library function like strlen(my_string): strlen() expects a "const char *", not a "const uint8_t *", so you may need to cast some things to quell the warnings you will get otherwise.

Thanks for pointing this out. Hope I remember it when I hit such a situation.
dl8dtl wrote:
As a rule of thumb: use `char' (unqualified) for anything that is true characters and character strings.

Use uint8_t and int8_t for anything where you are using it as a small integer.


Sounds good. It'll make it even easier to see which variables are for characters/strings and which are for numbers (especially if you've not been working with the source code for some time).