2's Complement


Hi All,
I am trying to read a temperature off an I2C chip. The data sheet says:

Quote:

5.4.1 DIE_TEMP (0x0F)
Temperature °C expressed as an 8-bit 2's complement number, however, the temperature is not trimmed for offset and the
compensation for offset must be done in user application. The sensitivity of the temperature reading is correct. Note: The register
allows for temperatures from -128°C to 127°C but the output range is from -40°C to 125°C

I am getting 250 out of it. If I do
temperature = ~temperature;
I get 5. Neither is correct, and with the second option the temperature goes down when I raise the die temperature.

The device is a Freescale MAG3110.
Thanks
Phil

_________________________________

www.proficnc.com
_________________________________
Go Aussie Go!!!


Self-solved: it helps if you set the variable to a signed char rather than an unsigned char :)
oops :)
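
For anyone who finds this later, here is a minimal sketch of that fix, assuming a generic byte-read helper (i2c_read_register and the 0x0E bus address are placeholders for whatever I2C routine you use, not a MAG3110 driver API):

#include <stdint.h>

/* Placeholder - substitute whatever routine you use to read one register. */
extern uint8_t i2c_read_register(uint8_t dev_addr, uint8_t reg);

int8_t read_die_temp(void)
{
    /* DIE_TEMP (0x0F) is an 8-bit two's complement value, so reinterpreting
       the raw byte as int8_t does the conversion: 0xFA (250 unsigned) reads
       back as -6.  The offset compensation the data sheet mentions still has
       to be applied on top of this. */
    uint8_t raw = i2c_read_register(0x0E, 0x0F);
    return (int8_t)raw;
}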

_________________________________

www.proficnc.com
_________________________________
Go Aussie Go!!!


That's ok, it's good to see your face. :wink: :D

"I may make you feel but I can't make you think" - Jethro Tull - Thick As A Brick

"void transmigratus(void) {transmigratus();} // recursio infinitus" - larryvc

"It's much more practical to rely on the processing powers of the real debugger, i.e. the one between the keyboard and chair." - JW wek3

"When you arise in the morning think of what a privilege it is to be alive: to breathe, to think, to enjoy, to love." -  Marcus Aurelius


You should use unsigned char, not char.

Regards,
Steve A.

The Board helps those that help themselves.


I'm missing something. I was under the assumption that a char is unsigned by default, since it comes from "character". I don't have a manual at hand here, but I wonder if I got it wrong....

int is signed by default
char is unsigned by default
byte is unsigned by default

For the record, I always try to use uintx_t and intx_t when defining my variables; for a simple soul like me that keeps it as simple as possible 8)


Koshchi wrote:
You should use unsigned char, not char.

To get an output range of -128 to 127 he should use signed char, or a type to that effect.

Given that the output range is actually 40-125 it doesn't matter one way or the other.

Sid

Life... is a state of mind


Quote:

I was under the assumption that a char is by default unsigned as it was from "character" I do not have a manual at hand here, but wonder if I got it wrong....

This comes up about once a week here. C has THREE distinct char types:

char
unsigned char
signed char

Plain char is a distinct type, neither signed char nor unsigned char; its signedness is implementation defined. Always pick the appropriate one:

If dealing with characters/strings, use char.

If holding 8 bits as 0..255, use unsigned char.

If holding 8 bits as -128..+127, use signed char.
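
For example, a small sketch (mine, not part of the quoted rules) showing how the same bit pattern reads back differently depending on which of the three you pick:

#include <stdio.h>

int main(void)
{
    signed char   s = (signed char)0xFA;   /* -6 on a two's complement target  */
    unsigned char u = (unsigned char)0xFA; /* 250                              */
    char          p = (char)0xFA;          /* -6 or 250, depending on how the  */
                                           /* compiler treats plain char       */

    /* All three promote to int, so %d is fine for each. */
    printf("signed: %d  unsigned: %d  plain: %d\n", s, u, p);
    return 0;
}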

GCC does have a command-line switch, -funsigned-char, that most Makefiles and AS4/AS5/AS6 use, and for which the manual says:

Quote:
-funsigned-char
Let the type char be unsigned, like unsigned char.

Each kind of machine has a default for what char should be. It is either like unsigned char by default or like signed char by default.

Ideally, a portable program should always use signed char or unsigned char when it depends on the signedness of an object. But many programs have been written to use plain char and expect it to be signed, or expect it to be unsigned, depending on the machines they were written for. This option, and its inverse, let you make such a program work with the opposite default.

The type char is always a distinct type from each of signed char or unsigned char, even though its behavior is always just like one of those two.


BTW Google hit this which looks interesting:

http://www.network-theory.co.uk/...


larryvc wrote:
That's ok, it's good to see your face. :wink: :D

Thanks Larry :)
I've still been around, lurking at the edges; it's just been a crazy last few months.
And yes, a good old signed char did the job.
Phil

_________________________________

www.proficnc.com
_________________________________
Go Aussie Go!!!


ChaunceyGardiner wrote:
Koshchi wrote:
You should use unsigned char, not char.

To get an output range of -128 to 127 he should use signed char, or a type to that effect.

Given that the output range is actually 40-125 it doesn't matter one way or the other.

Sorry, I meant signed char. I guess I am just not used to writing "signed", since this is the only place it is really used; normally I use int8_t.

Regards,
Steve A.

The Board helps those that help themselves.


clawson wrote:
gcc does have a command line switch that most Makefiles and AS4/AS5/AS6 use that is -funsigned-char
Using -f[no-][un]signed-char is a really bad idea because it changes the application binary interface (ABI).

It's bad because linking objects with different ABIs together may result in non-functional code. Notice that objects that come from a library are only safe if this option is a multilib option, i.e. the tools pick the correct ABI flavour of libc/libm/libgcc depending on that option.

None of the mentioned options is a multilib option.

And most users don't compile their own standard libraries with their private options to render the library ABI compliant.

avrfreaks does not support Opera. Profile inactive.


Koshchi wrote:
I meant signed char. I guess I am just not used to writing signed since this is the only place that it is really used and I use int8_t.

Same here - this is the only platform I have worked on where char is not signed by default. That bit me often enough that I usually add this line close to the top of my source files (any later use of plain char then fails to compile):
#define char use__int8_t__not__char

Sid

Life... is a state of mind


But you want to use char when it really is a character. I only use int8_t when it is a number.

Regards,
Steve A.

The Board helps those that help themselves.


You have a point there; I remove it in those cases. Most of the time it is a useful reminder, though.

Sid

Life... is a state of mind


My habit: always be explicit in declarations (uint8_t, for example) and don't assume the compiler config is any particular setting.

Use char only for 7-bit ASCII. For anything that is arithmetic in nature, use unsigned char or uint8_t from <stdint.h>.

If you're not certain the ranges fit into 8 bits, use a uint16_t, uint32_t, int16_t or int32_t (signed). Avoid 32-bit integers, and of course floating point, wherever possible on 8-bitters.


Quote:

Using -f[no-][un]signed-char is a really bad idea bacause it changes the application binary interface (ABI).

That's intriguing; I never realised that. As I say, most build systems seem to default to specifying -funsigned-char. Do you know if that's what's been used in the generation of libc/AVR-LibC?


What does the compiler default to?

As for libraries, I don't see how it would matter one way or the other.

Assuming the libraries were built with types suitable for what the library does, it doesn't matter one bit whether the caller uses signed or unsigned char for arguments.

Sid

Life... is a state of mind


ChaunceyGardiner wrote:
What does the compiler default to ?

As for libraries, I don't see how it would matter one way or the other.

Assuming the libraries were built with types suitable for what the library does, it doesn't matter one bit whether the caller uses signed or unsigned char for arguments.


But if you just declare "char", with no "signed" or "unsigned", what does it default to?

_________________________________

www.proficnc.com
_________________________________
Go Aussie Go!!!


proficnc wrote:
ChaunceyGardiner wrote:
What does the compiler default to ?

But if you just declare "char" with no "signed or unsigned" what does it default to?

Define "it".

I was (rhetorically) asking what the compiler defaults to. Any sensible compiler will default to signed for char, just like it will for any other integer type.

Some build environments do, for reasons that are beyond me, override this default with their own defaults.

Sid

Life... is a state of mind


It = char

_________________________________

www.proficnc.com
_________________________________
Go Aussie Go!!!


What the compiler default is depends on how the compiler was configured when it was built. The AS6 compiler says:

C:\Program Files\Atmel\Atmel Studio 6.0\extensions\Atmel\AVRGCC\3.4.0.65\AVRToolchain\libexec\gcc\avr\4.6.2>.\CC1 --help  | grep "signed-char"
  -fsigned-char                         [enabled]
  -funsigned-char                       [disabled]

so it's defaulting to "signed char" for "char". Doing the same on WinAVR20100110 produces:

E:\WinAVR-20100110\libexec\gcc\avr\4.3.3>.\cc1 --help | grep "signed-char"
  -fsigned-char
  -funsigned-char

which suggests neither was explicitly specified when the compiler was built.
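
If you'd rather not dig through cc1 --help, a quick compile-time check using <limits.h> (just a sketch I'm adding here, not something from the thread) tells you what plain char does on the target:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_MIN is negative when plain char behaves like signed char,
       and 0 when it behaves like unsigned char. */
#if CHAR_MIN < 0
    puts("plain char is signed on this target");
#else
    puts("plain char is unsigned on this target");
#endif
    return 0;
}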


In other words, don't leave it to the compiler: specify signed or unsigned and be sure :)

_________________________________

www.proficnc.com
_________________________________
Go Aussie Go!!!


If you use signed char when you don't have to (all terms in the expression are > 0), you might get an extra branch instruction in a relational expression to deal with the sign bit. That's the only downside I can think of.

Imagecraft compiler user


ChaunceyGardiner wrote:
I was (rhetorically) asking what the compiler defaults to.
The C standard says it is implementation defined, i.e. it's up to the implementation, which is the compiler in this case. The compiler, in turn, has to follow the ABI, which must specify whether char operates as signed or as unsigned.

If an ABI is sloppy and does not mention the signedness of char, you have to read the compiler sources (provided they are available to you) or ask support. In the case of avr-gcc, DEFAULT_SIGNED_CHAR is define'd to 1 in GCC's avr.h.

Quote:
Any sensible compiler will default to signed for char, just like it will for any other integer type.
As just mentioned, this is up to the ABI, just like sizeof (void*), sizeof (int), the signedness of plain int bitfields, the number of significant initial letters in symbol names, the alignment of data and composites, endianness, stack and frame layout, parameter and return value passing, the result of pointer-to-int casts (and vice versa), how to search for #include <...> and #include "foo", the integer type compatible with each enumeration, ...

avrfreaks does not support Opera. Profile inactive.