Sign Extending 10-bit signed value?


Greetings -

Not sure where this goes. It directly applies to Tiny25/45/85, so I am starting here, even though the real question has to do with signed integer behavior.

This is an application running in a Tiny25/45. I am using the 20X amplifier as one of the ADC inputs. Since the potential offset is large enough to matter, I am measuring it by shorting the differential inputs together (internally, using one of the channel selections) and reading in Differential Bipolar mode. I've posted a couple of other questions about this recently, if anyone wants the back story (it's not very juicy).

The spec sheet says the following about the result:

Quote:
The result is presented in two's complement form, from 0x200 (-512d) through 0x000 (+0d) to 0x1FF (+511d).

It appears that to use this signed 10-bit value arithmetically, I need to convert negative values into a proper 16-bit signed number, which means extending the sign into the high bits. The process I am considering looks like this (abbreviated):

int16_t result = ADC;

if (result & 0x0200)
    result |= 0xfc00;    //set high 6 bits

My question is this: do the bitwise operators "care" whether the value being operated on is signed or unsigned?

While speed is not much of an issue (this happens only during bootup and I have close to 100ms to do this 16 times!), I do wonder whether it would be better (by some objective measure) to make result part of a union and operate only on the high byte. It would certainly make the code more obscure while maybe reducing execution time.
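
For reference, a minimal sketch of that union idea (hypothetical names, not code from this project); it assumes avr-gcc's little-endian layout of int16_t, which is implementation-defined:

#include <stdint.h>

/* Overlay the 16-bit result with its two bytes so that sign
   extension touches only the high byte. Assumes little-endian
   byte order (true for avr-gcc, but implementation-defined). */
typedef union {
    int16_t word;
    struct {
        uint8_t lo;   /* bits 0..7  */
        uint8_t hi;   /* bits 8..15 */
    } byte;
} adc_result_t;

static int16_t sign_extend_10(uint16_t raw)
{
    adc_result_t r;
    r.word = (int16_t)raw;
    if (r.byte.hi & 0x02)   /* bit 9, the sign bit of the 10-bit value */
        r.byte.hi |= 0xFC;  /* set bits 10..15 */
    return r.word;
}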

Thanks for your input and insight -

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


That is the same way I have done it, and signed or unsigned doesn't matter.
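
A minimal sketch of that point (assuming a twos-complement target, which every AVR compiler is): the OR-based extension produces the same bit pattern whether the variable is declared signed or unsigned.

#include <stdint.h>

/* Same raw 10-bit reading with the sign bit set, stored both ways. */
volatile uint16_t u = 0x0200;
volatile int16_t  s = 0x0200;

void extend_both(void)
{
    if (u & 0x0200) u |= 0xFC00;  /* u becomes 0xFE00 */
    if (s & 0x0200) s |= 0xFC00;  /* s becomes 0xFE00, i.e. -512 on twos complement */
}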


Quote:

...operate only on the high byte.

Examine the generated code. ;) Depending on the phase of the moon and where "result" is used next, that sequence may end up being an SBRC/ORI and take less than a microsecond.

#include <avr/io.h>

int main(void)
{
int result;

	result = ADCW;
	if (result & 0x0200) result |= 0xfc00;
	PORTD = result;
	PORTD = result>>8;
}
	result = ADCW;
  46:	80 91 78 00 	lds	r24, 0x0078
  4a:	90 91 79 00 	lds	r25, 0x0079
	if (result & 0x0200) result |= 0xfc00;
  4e:	91 fd       	sbrc	r25, 1
  50:	9c 6f       	ori	r25, 0xFC	; 252
	PORTD = result;
  52:	8b b9       	out	0x0b, r24	; 11
	PORTD = result>>8;
  54:	89 2f       	mov	r24, r25
...

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Where is the SEX instruction when you need it, eh?

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


or just have ADLAR=1; then it's a correct signed int.


Thanks

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


Quote:

or just have ADLAR=1; then it's a correct signed int.

But then you'd have to shift it right six bits, wouldn't you?
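
A minimal sketch of the ADLAR route (assuming avr-gcc, where >> on a negative signed value is an arithmetic shift; that behaviour is implementation-defined in C):

#include <avr/io.h>
#include <stdint.h>

/* With ADLAR=1 the 10-bit bipolar result is left-adjusted, so its
   sign bit lands in bit 15 of ADCH:ADCL. Reading the pair as a
   signed 16-bit value and shifting right by 6 recovers the
   sign-extended -512..+511 range. */
static int16_t read_adc_left_adjusted(void)
{
    int16_t left_adjusted = (int16_t)ADC;  /* ADCL then ADCH */
    return left_adjusted >> 6;
}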

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


That's what I thought. It needs to be a true 16-bit signed value so that I can average a bunch of them.

Just had captcha sxs4u. Almost what Lee was asking for!

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


ka7ehk wrote:
int16_t result = ADC;

if (result & 0x0200)
    result |= 0xfc00;    //set high 6 bits

My question is this: do the bitwise operators "care" whether the value being operated on is signed or unsigned?
The issue does not arise because there is no single "value being operated on". In result | 0xfc00, result is converted to unsigned int to match 0xfc00. The technical problem comes from assigning an unsigned value back to a signed integer target. The value will never be representable, so the result will be implementation-defined. This can be cured with "result |= (int16_t)0xfc00" or with result |= -(1<<10). Correctness relies on twos complement and is not otherwise compiler-specific. To be really portable, use "result -= 1<<10". To me, it is also more legible. Also for legibility, I'd use (1u<<9) instead of 0x0200.
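
Spelled out as a sketch (using the ADC register name from the original post; the subtraction form is the one that does not lean on twos complement):

#include <avr/io.h>
#include <stdint.h>

static int16_t sign_extend_adc(void)
{
    int16_t result = (int16_t)ADC;   /* raw 10-bit bipolar reading, 0..1023 */
    if (result & (1u << 9))          /* 10-bit sign bit set? */
        result -= (1 << 10);         /* map 512..1023 onto -512..-1 */
    return result;
}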

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles

Last Edited: Thu. Jan 8, 2015 - 07:08 PM

ka7ehk wrote:
Thats what I thought. It needs to be a true 16 bit signed value so that I can average a bunch of them.
That depends on how you work with the value later. Given that the offset you're measuring is likely to have a magnitude of far less than 1024 (probably +/-64, actually), you could add and average 64 of them (or 1024 of them) in 16 bits without fear of overflow.

Whether or not this is a 'better' approach will depend on how you use those summed and/or averaged offset samples. If you propagate the 6-bit scaling this approach brings with it, it might be the more efficient way to go. Look at the generated asm for each and decide.

Quote:
Just had captcha sxs4u. Almost what Lee was asking for!
That wasn't my first read ;)

EDIT: This may all be much-ado, since the calibration is likely to happen only once after reset, so 'efficiency' is maybe not important?

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


theusch wrote:
Examine the generated code.

Examine the C language international standard.

avrfreaks does not support Opera. Profile inactive.


Quote:

This can be cured with "result |= (int16_t)0xfc00" or with result |= -(1<<10) .
Correctness relies on twos complement and is not otherwise compiler-specific.
To be really portable, use "result-=1<<10" .
To me, it is also more legible.
Also for legibility, I'd use (1u<<9) instead of 0x0200.


Quote:

Examine the C language international standard.

Bah! to all the political correctness. IMO

This is >>Micro<< controller programming. Op is trying to address a particular situation on a particular model line of a particular brand of microcontroller, that happens to present information in the discussed format which needs further manipulation.

ANY further manipulation you do is lying to something. With the particular information involved, and the particular C toolchain, it is demonstrated that the straightforward manipulation takes two AVR instructions. After verification, and a note in the source, I'd be done and move on.

I don't have my test app here, but I don't remember seeing any warnings. Now I have to look at said standard--is 0xFC00 an unsigned int? THE POINT I WAS MAKING IS THAT THE COMPILER IS LIKELY TO RECOGNIZE THAT ONLY A SINGLE BYTE OF THE RESULT NEEDS TO BE MODIFIED AND DO A SINGLE ORI. No matter how much you cast and discuss, with the fragment shown the compiler generates two AVR instructions. Cast away, and see if you get less. I'd wager more...

Sprinter is asking me to read the standard. [I don't know why--I was commenting that a particular C toolchain was likely to recognize the fragment as a single-bit test followed by a single ORI if the byte is in a register.] WHAT DOES THAT HAVE TO DO WITH THE STANDARD? Yes, I am indeed peeved at the tone of that comment.

Quote:

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


theusch's quoted quotation is from skeeve.

theusch wrote:
Quote:

This can be cured with "result |= (int16_t)0xfc00" or with result |= -(1<<10) .
Correctness relies on twos complement and is not otherwise compiler-specific.
To be really portable, use "result-=1<<10" .
To me, it is also more legible.
Also for legibility, I'd use (1u<<9) instead of 0x0200.

Perhaps I should have explicitly mentioned that twos complement is pretty much a given for AVR compilers.
Note that OP did not specify a compiler.

theusch's quoted quotation is from SprinterSB.

theusch wrote:

Quote:

Examine the C language international standard.

Bah! to all the political correctness. IMO
What political correctness?
Quote:
This is >>Micro<< controller programming. Op is trying to address a particular situation on a particular model line of a particular brand of microcontroller, that happens to present information in the discussed format which needs further manipulation.

Not a particular compiler.
Quote:
ANY further manipulation you do is lying to something. With the particular information involved, and the particular C toolchain, it is demonstrated that the straightforward manipulation takes two AVR instructions. After verification, and a note in the source, I'd be done and move on.
What lying? What note?
Quote:
I don't have my test app here, but I don't remember seeing any warnings. Now I have to look at said standard--is 0xFC00 an unsigned int? THE POINT I WAS MAKING IS THAT THE COMPILER IS LIKELY TO RECOGNIZE THAT ONLY A SINGLE BYTE OF THE RESULT NEEDS TO BE MODIFIED AND DO A SINGLE ORI. No matter how much you cast and discuss, with the fragment shown the compiler generates two AVR instructions. Cast away, and see if you get less. I'd wager more...

Sprinter is asking me to read the standard. [I don't know why--I was commenting that a particular C toolchain was likely to recognize the fragment as a single-bit test followed by a single ORI if the byte is in a register.] WHAT DOES THAT HAVE TO DO WITH THE STANDARD? Yes, I am indeed peeved at the tone of that comment.

Why?
OP was asking about reliability as much as efficiency.
The standard is certainly relevant when deciding what one can expect from a compiler.
Relying on the standard is certainly preferable to
examining the generated assembly after every single build.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles

Last Edited: Sun. Jul 13, 2014 - 03:16 PM

Quote:

theusch's quoted quotation is from skeeve.
theusch wrote:
Quote:

This can be cured with "result |= (int16_t)0xfc00" or with result |= -(1<<10) .
Correctness relies on twos complement and is not otherwise compiler-specific.
To be really portable, use "result-=1<<10" .
To me, it is also more legible.
Also for legibility, I'd use (1u<<9) instead of 0x0200.


What is up with you people this weekend? What ARE you going on about? I NEVER WROTE WHAT YOU JUST ATTRIBUTED TO ME, and I don't understand the "from skeeve" part. If you wrote it earlier, why don't you say "skeeve wrote:"?

Quote:

Not a particular compiler.

Well, what toolchain is OP using for THIS PARTICULAR SITUATION?!? If we are looking for an efficient solution, it will be w.r.t. OP's chosen tool chain. Will other C toolchains generate the same two instruction sequence? If they do or don't does it matter to OP?

Quote:

What political correctness?

YOU were the one introducing "correctness" to the thread.

Quote:

What lying? What note?

No matter how you cut it, OP has a 16-bit value xxxxxxsn/nnnnnnnn in a supposed 16-bit signed twos-complement variable. However you manipulate it after that, you have to cast and fuss or just let defaults take over.

What note? Sheesh. The one I would put into the source code annotating what I said.

Quote:

Why?
OP was asking about reliability as much as efficiency.
The standard is certainly relevant when deciding what one can expect from a compiler.
Relying on the standard is certainly preferable to
examining the generated assembly after every single build.

I've used more upper-case in this thread than I've used here for the past year, it seems. I seem to get the impression that y'all must agonize and argue and analyze and discuss every line of each Mega48 program? All of my responses and comments had to do with "get'er done".

I already said bah! to the political correctness. Then I get repeated standard-waving (nice pun, eh?). Wave the standard all you want, we are probably quite close to the "implementation-defined" and "undefined behaviour" areas.

No, Virginia, OP will never port this app to a different architecture. And certainly not these lines of code, which depend on an AVR8 ADC result setup. Any union solution is out, then, as it won't work big-endian right? Same with a structure of two "int" bitfields?

If (low-10-bitsfield-declared-as-int < 0) then high-6-bitsfield-declared-as-int = -1

Something like that?

I don't know if toolchains will generate a 2-instruction solution for that. Maybe.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


My toolchain is avr-gcc provided with Atmel Studio 6.x as applied to Mega/Tiny devices (I did say Tiny25/45).

I don't think that I asked about either "reliability" or "portability" but I also did not specify a toolchain.

Thanks for everyone's input.

After looking at the responses and thinking a bit more about my concerns, the real center of the concern is a bitwise operation ON a signed integer BY an (implied unsigned, I think) integer constant. The concern is whether or not the compiler would make inappropriate assumptions in the process of type promotion (I think that is the correct term, maybe it isn't). The responses seem to suggest that I have nothing to worry about IN THIS CASE.

A closely related NEW question is this: what happens when you add or subtract a (small positive or negative) signed value to an unsigned value of the same "width"? Both are variables. The result will always be positive and the numbers are sized so there will never be an overflow. Does the result lose a possible bit of range (due to being converted to signed)? If so, I may need to review the sizing of the larger unsigned variable.

Thanks, everyone -
Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


Quote:

(implied unsigned integer, I think)

literal constants in C are implied "signed int" not unsigned ;-)

For the other see 6.3.1.8 in the C standard. It says something along the lines of:

Quote:

First, if the corresponding real type of either operand is long double, the other operand is converted, without change of type domain, to a type whose corresponding real type is long double.

Otherwise, if the corresponding real type of either operand is double, the other operand is converted, without change of type domain, to a type whose corresponding real type is double.

Otherwise, if the corresponding real type of either operand is float, the other operand is converted, without change of type domain, to a type whose corresponding real type is float.

Otherwise, the integer promotions are performed on both operands. Then the following rules are applied to the promoted operands:

If both operands have the same type, then no further conversion is needed.

Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.

Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type.

Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type.

Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
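
To make the quoted rules concrete for the "small signed value added to an unsigned value of the same width" question above, a minimal sketch (assuming a 16-bit int, as with avr-gcc):

#include <stdint.h>

volatile int16_t  offset = -3;    /* small signed correction */
volatile uint16_t raw    = 200;   /* unsigned value of the same width */
volatile uint16_t sum;

void add_them(void)
{
    /* int16_t and uint16_t have the same rank here, so the signed
       operand is converted to unsigned int: -3 becomes 65533 and the
       addition wraps modulo 65536, giving 197. As long as the true
       result fits in 0..65535, the wrap-around produces the correct
       answer, so no range is lost. */
    sum = raw + offset;   /* 197 */
}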


Quote:

literal constants in C are implied "signed int" not unsigned

That is probably what Sprinter implied, right? :twisted:
Quote:

Examine the C language international standard.


Quote:

I don't think that I asked about either "reliability" or "portability"...

I didn't think so either, but I was too wound up to go back and check.

Quote:

...but I also did not specify a toolchain.

I inferred that from other threads of yours on this same app.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Thanks Cliff for that quote from the standard. I could have found that standard, certainly. Standards are important, really important, so please don't take me wrong, here. But, the more complex a standard is, the harder it is to find those important kernels when you look for them, especially when you have never seen that document, before.

Dare I ask? What is a "type domain"? I really did Google it, and got nothing useful.

That particular set of details is one of the more important ones that I had never read about. I took a class in C programming, but I don't think that we ever covered the rules of operations on mixed types, nor was the Standard even mentioned. So much for the excuse(s). Now, I gotta do something to make this easy to find again.

It's great to have that here, because those of us who are more or less self-taught will need it.

Appreciate your effort.

PS - trying to understand more about type promotion, I stumbled across this location: https://www.securecoding.cert.or...

Down near the middle is an example that gives me some concern - quoting here:

Quote:

Noncompliant Code Example
This noncompliant code example demonstrates how performing bitwise operations on integer types smaller than int may have unexpected results:

uint8_t port = 0x5a;
uint8_t result_8 = ( ~port ) >> 4;

In this example, a bitwise complement of port is first computed and then shifted 4 bits to the right. If both of these operations are performed on an 8-bit unsigned integer, then result_8 will have the value 0x0a. However, port is first promoted to a signed int, with the following results (on a typical architecture where type int is 32 bits wide):

Expression   Type      Value        Notes
port         uint8_t   0x5a
~port        int       0xffffffa5
~port >> 4   int       0x0ffffffa   implementation-defined
result_8     uint8_t   0xfa

Compliant Solution
In this compliant solution, the bitwise complement of port is converted back to 8 bits. Consequently, result_8 is assigned the expected value of 0x0aU.

uint8_t port = 0x5a;
uint8_t result_8 = (uint8_t) (~port) >> 4;

Is this ONLY a concern on architectures wider than 8 bits?

Thanks
Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


Quote:
literal constants in C are implied "signed int" not unsigned
But the constant in question is a hex constant, so it follows somewhat different rules. Michael's statement:
Quote:
In result|0xfc00 , result is converted to unsigned int to match 0xfc00
says that a hex constant is unsigned, and all of his "political correctness" stems from that. However the C Standard says this:
Quote:
The type of an integer constant is the first of the corresponding list in which its value can be represented.
with the list referred to for hex (and octal) being this:
Quote:
int
unsigned int
long int
unsigned long int
long long int
unsigned long long int
Now the question is, will the compiler interpret 0xfc00 as representing 64512 or -1024?
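
One way to see which value a given compiler assigns the constant is to let a value-preserving conversion widen it; a small sketch (hypothetical variable names):

/* If 0xfc00 has type unsigned int with value 64512 (the case when
   int is 16 bits), widening to long preserves 64512. If it were a
   signed int holding -1024, the long would be -1024 instead. */
long probe      = 0xfc00;   /* 64512 with avr-gcc */
long minus_1024 = -0x0400;  /* -1024: unary minus applied to the constant 0x0400 */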

Regards,
Steve A.

The Board helps those that help themselves.


Would the outcome be more certain if I used (uint16_t)0xfc00 ?

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


Quote:

Would the outcome be more certain if I used (uint16_t)0xfc00 ?


Do not give in to the repeated radio talk-show rhetoric. ("If I say it loud enough, and often enough, then pretty soon the unwashed masses will believe it is true.")

-- Your chosen toolchain, the Holy Grail for political correctness around here, gave me no warnings for the program I posted. Granted, it was with the default Studio6 new project settings, whatever they are.

And it generated a tight efficient sequence. What IS with this hailstorm? No warnings. Minimal resources. Correct code sequences. WHY are these others proposing solutions in search of a problem? Innuendo about the standard. Weird quote blocks. Claims of mentioned portability/reliability.

Sheesh.

Or is it me? Senility setting in? In a week I'll be 63...

[...and I'll still be cranking out AVR8 production apps, that work and are solid and not a lot of anguish over a working construct.]

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


I never used to think about it until I was reading the MISRA standard; then a day later the TI compiler complained about it. So now I pop a U on the end just to be kosher. As the Irish say - "to be sure, to be sure".

Since you've made a noise about it, it is going to bite you at some time!


Lee, I agree with you, actually.

My big fear is that I will make some small change in the code resulting in unexpected operation. So, I was looking for that magic "insurance policy" that would keep things humming along in the way I expect.

Does that cast result in extra code execution? I don't know but will find out over the next day or so.

Talk about impending senility: I'm your age +10!

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


Technically the result of:

int x = 0xFC00;

is implementation defined. But I know of no implementation (where an int is 16 bits) in which the result would not be -1024. In fact, an implementation would have to go out of its way to do anything else. Where problems would arise is if the code were ported to a device where int is not 16 bits. But using int16_t instead of int would prevent that problem.
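
For comparison, a sketch of width-independent ways to arrive at -1024 (the cast form is still implementation-defined in C, but yields -1024 on twos-complement implementations such as avr-gcc):

#include <stdint.h>

int16_t a = -0x0400;            /* -1024: unary minus on the constant 0x0400 */
int16_t b = (int16_t)0xFC00u;   /* implementation-defined conversion; -1024 on
                                   twos-complement implementations such as avr-gcc */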

Regards,
Steve A.

The Board helps those that help themselves.


Quote:

Technically the result of:
Code:
int x = 0xFC00;
is implementation defined.

As long as we are being pedantic here, (and keeping in mind that figuring out the intent of standards writers is not my idea of light reading), I'd have to do a digging session. Cliff sez 0xfc00 is an int. You didn't say one way or the other, but said "first fit". 0xfc00 is indeed a perfectly good bit pattern to represent a signed value.

Now we are hearing about "implementation defined".

More digging next week with the sample project in question, and a faster internet connection. By that time the crusaders will be off finding other windmills to tilt and crusades to carry out by invading the lands of innocent non-believers.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


ka7ehk wrote:
Is this ONLY a concern on architectures wider than 8 bits?
No. The '8-bit' entity here isn't the architecture, but the unambiguous type uint8_t. In the examples you found the type promotion is to the basic type int. The standard stipulates that int is signed with a width of at least 16 bits. It is the promotion to the wider type first, before the bit-wise complement operation (again stipulated by the standard) which leads to the 'non-compliance', and the deliberate recasting afterwards which resolves the issue.

Coincidentally, I ran across a similar issue a short time ago while revisiting my own software serial code. Reduced for clarity:

#include <stdint.h>

volatile int8_t foo;
volatile uint16_t bar;

int main(void) {
  bar = foo;
  bar = (uint8_t)foo;
  while(1);
}
00000040 <main>:
volatile int8_t foo;
volatile uint16_t bar;

int main(void) {
  bar = foo;
  40:	80 91 60 00 	lds	r24, 0x0060
  44:	99 27       	eor	r25, r25
  46:	87 fd       	sbrc	r24, 7
  48:	90 95       	com	r25
  4a:	90 93 62 00 	sts	0x0062, r25
  4e:	80 93 61 00 	sts	0x0061, r24
  bar = (uint8_t)foo;
  52:	80 91 60 00 	lds	r24, 0x0060
  56:	90 e0       	ldi	r25, 0x00	; 0
  58:	90 93 62 00 	sts	0x0062, r25
  5c:	80 93 61 00 	sts	0x0061, r24
  60:	ff cf       	rjmp	.-2      	; 0x60

As for the excerpt of the standard which Cliff quoted, the part which can bite you is the very last clause:

Quote:
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
All of the previous clauses guarantee that both operands will survive the type promotion intact. This last one does not.

For example, with:

int8_t foo;
uint8_t bar;

... then the expression:

foo + bar

... the last clause applies and the expression undergoes type promotion like this:

(uint8_t)foo + bar

If foo has a value which cannot be represented as uint8_t (for example, -1), then the behaviour would seem to be implementation defined:

#include <stdint.h>

volatile int8_t foo;
volatile uint8_t bar;
volatile int16_t bat;

int main(void) {
  foo = -1;
  bar = 45;
  bat = foo + bar;
  while(1);
}
int main(void) {
  foo = -1;
  40:	8f ef       	ldi	r24, 0xFF	; 255
  42:	80 93 62 00 	sts	0x0062, r24
  bar = 45;
  46:	8d e2       	ldi	r24, 0x2D	; 45
  48:	80 93 63 00 	sts	0x0063, r24
  bat = foo + bar;
  4c:	20 91 62 00 	lds	r18, 0x0062
  50:	80 91 63 00 	lds	r24, 0x0063
  54:	90 e0       	ldi	r25, 0x00	; 0
  56:	82 0f       	add	r24, r18
  58:	91 1d       	adc	r25, r1
  5a:	27 fd       	sbrc	r18, 7
  5c:	9a 95       	dec	r25
  5e:	90 93 61 00 	sts	0x0061, r25
  62:	80 93 60 00 	sts	0x0060, r24
  66:	ff cf       	rjmp	.-2      	; 0x66 

Note the test for the sign bit of foo, and the way the high byte of the result is decremented if foo is negative. The result will accurately reflect the true sum of -1 and 45.

Smarter (and more patient) people than I will know whether this behaviour is stipulated by the standard, but my suspicion is that it is implementation defined.

An examination of the generated code when the operands are either both uint8_t or both int8_t is left as an exercise for the reader.

EDIT: typos

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 

Last Edited: Sun. Jul 13, 2014 - 04:28 PM

Quote:

No. The '8-bit' entity here isn't the architecture, but the unambiguous type uint8_t. In the examples you found the type promotion is to the basic type int. The standard stipulates that int is signed with a width of at least 16 bits. It is the promotion to the wider type first, before the bit-wise complement operation (again stipulated by the standard) which leads to the 'non-compliance', and the deliberate recasting afterwards which resolves the issue.


Please stop making my head hurt!

I told Firefox to look for the first mention of uint8_t in this thread, and it came up with the instance I quoted here.

No wonder I am confused. Tell me more, oh please do: what does uint8_t being an unambiguous type (now, old guys would say unsigned char and uint8_t are the same thing in the toolchain under discussion, but uint8_t requires more use of the Shift key) have to do with Jim's question about an "int" variable?

(I don't see how your foo-bar example has a direct bearing on the topic(s) in this thread.)

Hey, I'm willing to listen and learn but there seem to be moon shots coming from all directions on this one. uint8_t indeed.

Obviously, the problem is that the somatic cell count is too high.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


It applies because I appended a question about this to my post which is about #18 in the thread.

Sorry about that. I thought it was close enough to the original intent to reasonably apply (being about type promotion).

Lee, you will sleep, sleep, sweet sleep!

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


Quote:
0xfc00 is indeed a perfectly good bit pattern to represent a signed value
It is also a perfectly good bit pattern to represent an unsigned number, so that alone does not solve the issue.
Quote:
... the last clause applies and the expression undergoes type promotion like this:

(uint8_t)foo + bar

This is not true. This applies first:
Quote:
Otherwise, the integer promotions are performed on both operands.
So there is no coercion to uint8_t. By the time the "+" happens, both values are already promoted to int.

Regards,
Steve A.

The Board helps those that help themselves.


Koshchi wrote:
Quote:
... the last clause applies and the expression undergoes type promotion like this:

(uint8_t)foo + bar

This is not true. This applies first:
Quote:
Otherwise, the integer promotions are performed on both operands.
So there is no coercion to uint8_t. By the time the "+" happens, both values are already promoted to int.
OK, I'll bite.

How does this:

Quote:
Otherwise, the integer promotions are performed on both operands. Then the following rules are applied to the promoted operands:
... lead to both operands becoming int?

That is, while I understand the English meaning of that first sentence, it's not clear to me that the promotion 'performed on both operands' spoken of is to the basic type of (signed) int. If that were the case, then why do the rules which follow stipulate action based on the signedness of one or both of the operands?

Honest question. Remember:

joeymorin wrote:
Smarter (and more patient) people than I will know whether this behaviour is stipulated by the standard

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


ka7ehk wrote:
My big fear is that I will make some small change in the code resulting in unexpected operation. So, I was looking for that magic "insurance policy" that would keep things humming along in the way I expect.
To me, that is an aspect of reliability.
In any case, here is what I would write:
// extend sign bit in bit position 9 to all higher bit positions
// |= assumes twos complement
// AVRs are twos complement
if(result & (1<<9)) result |= -(1<<10);   // delete me
if(result & 0x0200) result |= -0x0400;   // or me
if(result >= (1<<9)) result -= (1<<10);   // or me
if(result >= 0x0200) result -= 0x0400;   // or me

All quantities are signed ints.
The size of signed int does not matter.

Edit: On further thought, I might change the code to avoid repeated use of magic numbers.

#define SIGN_POS 9
...
// assumes SIGN_POS < 15
if(result & (1<<SIGN_POS)) result |= -(2<<SIGN_POS);  // delete me
if(result >= (1<<SIGN_POS)) result -= 2<<SIGN_POS; // or me

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles

Last Edited: Sun. Jul 13, 2014 - 05:05 PM

theusch wrote:
What is up with you people this weekend? What ARE you going on about? I NEVER WROTE WHAT YOU JUST ATTRIBUTED TO ME, and I don't understand the "from skeeve" part. If you wrote it earlier, why don't you say "skeeve wrote:"?
The point was that you quoted skeeve (yours truly) and SprinterSB, apparently to find fault, without naming either of us and without mentioning that you were quoting two different people.
I wanted to add attributions without changing what I was quoting.
Quote:
Quote:

Not a particular compiler.

Well, what toolchain is OP using for THIS PARTICULAR SITUATION?!? If we are looking for an efficient solution, it will be w.r.t. OP's chosen tool chain. Will other C toolchains generate the same two instruction sequence? If they do or don't does it matter to OP?
What OP is trying to do is not terribly complicated.
A technique that relies on a specific toolchain is probably wrong for him.
Since he did not specify a toolchain,
I expect he does not want a toolchain-specific solution.
Quote:

Quote:

What political correctness?

YOU were the one introducing "correctness" to the thread.
So what is political about it?
Quote:
No, Virginia, OP will never port this app to a different architecture. And certainly not these lines of code, which depend on an AVR8 ADC result setup. Any union solution is out, then, as it won't work big-endian right? Same with a structure of two "int" bitfields?
OP will never port it to another architecture, but he might port it to another toolchain.
One can rely on little-endian or on the organization of bitfields.
These are both examples of implementation-defined behaviour.
By definition, implementation-defined behaviour is documented.
OP can look it up and put a comment in his source.
As simpler techniques are available, there is no reason to use either unions or bitfields.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


Quote:

The point was that you quoted skeeve (yours truly) and SprinterSB, apparently to find fault, without naming either of us and without mentioning that you were quoting two different people.

Say, what? The thread was short at that point, and the two quoted fragments were nearby. But indeed, I was lumping them into the same objectionable bucket.

Quote:

So what is political about it?
I don't think I need to make any apologies for using the term "political correctness" even without an actual bureaucratic entity involved. At least not the way I'm using it. If I am way off base, please further instruct.

Quote:

A technique that relies on a specific toolchain is probably wrong for him.

I mentioned recently that I have to dig a bit further. But you can help to start:
1) What type is 0xfc00? What type is 0x0200?

That will lead me to
2) Since there are no warnings in the two-line solution without casts and such, as-written with "int result;", I can only assume at this point that it is perfectly valid C, follows the rules, and is NOT dependent on a particular toolchain for correct operation.

2a) Now, whether the two-instruction sequence is always generated by the compiler--certainly that might depend on other factors. Get real--the problem being solved is with a particular AVR8 model and a particular toolchain and version. Whether another toolchain generates 2 or 3 or 7 instructions is kind of immaterial, isn't it? Or for another type of micro.

Quote:

One can rely on little-endian or on the organization of bitfields.
These are both examples of implementation-defined behaviour.

I think you are spewing stuff again. Bit-field organization? Sheesh--throwing the little/big endian into this again?

The skeeve poster said earlier: (must attribute; must attribute; can't expect people to look back one or two posts... :twisted: )

Quote:
In any case, here is what I would write:
Code:
// extend sign bit in bit pos...

This is the same poster that said: "What note?" in reply to when I said:
"After verification, and a note in the source, I'd be done and move on."

Am I being what is called "trolled"? Dunno. Tell you what, y'all. Start over, and first answer my question 1.
1) What type is 0xfc00? What type is 0x0200?

Given that answer, I'll then start on 2) by putting GCC and other toolchains in picky mode (and others can do that as well with available toolchains and nitpicking tools).

Then we can see if the toolchain(s) violate standard C or not.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Quote:
... lead to both operands becoming int?
They will become ints of their respective signedness. The point was that there would be no promotions to uint8_t, not that signedness didn't matter.
Quote:
A technique that relies on a specific toolchain
Show me one single C compiler that would not produce the same result as gcc.

Regards,
Steve A.

The Board helps those that help themselves.


Quote:

Show me one single C compiler that would not produce the same result as gcc.

It takes a good optimizer to recognize the single-bit test so that only one of the two bytes needs to be examined. And that byte needs to be in a register (or put there for the test). Then the optimizer must recognize that 0xfc00 being ORed only affects one of the bytes.

CodeVision 2.x and 3.x recognize the optimization opportunities, but of course the resulting sequence depends on where "result" is parked in the first place.

Quote:

The point was that there would be no promotions to uint8_t,

As a [very] side note, can one ever "promote" from int16_t to uint8_t ?

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Quote:
It takes a good optimizer to recognize the single-bit test so that only one of the two bytes needs to be examined
The argument that Michael is making has nothing to do with how efficient the resulting code is. His argument is that the result of the operation might be different on a different compiler.
Quote:
As a [very] side note, can one ever "promote" from int16_t to uint8_t ?
No, promotion only goes from narrower to wider data types. Coercion can of course be done between any intrinsic data types (whether it makes sense to or not).

Regards,
Steve A.

The Board helps those that help themselves.


Now, for us unwashed masses, here is something I did not know. As I said, I am mostly self-taught (a 1-quarter C class dealing mostly with language syntax). And I suspect that I am far from the only one in this situation.

Over the past 2 years or so that I've been aware of such things, I've seen the term "integer promotion" without understanding what is involved or how to find the rules. Now, I do know a LOT more than just a few days ago. For that, I appreciate the help provided.

And, for Lee, I do understand (I think) where you are coming from. Why spend all that mental effort keeping track of this stuff when it will never interfere with what you are doing? Well, this business of going from a signed 10-bit world to signed 16 bits seems to be one place where it can bite you in the butt. Can you imagine the problems if you were to "send" this signed 10-bit value, unmodified, as 2 raw 8-bit bytes to a PC and had to deal with it in a 32-bit world with different endianness?

Pretty sure there are other spots, lurking in the tall weeds, where you can step in the deep doodoo without realizing it until it happens (or maybe never even realizing it). In this case, I don't think that a bit of knowledge will hurt, and even with the decreasing mental capacity with age, there still has to be a bit of available room up there, somewhere.

So, again, I do appreciate the comments and input. Even Lee's! Especially Lee's! Every one has resulted in learning something new.

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


.some. attributions by skeeve.

theusch wrote:
.skeeve. wrote:

So what is political about it?
I don't think I need to make any apologies for using the term "political correctness" even without an actual bureaucratic entity involved. At least not the way I'm using it. If I am way off base, please further instruct.
In context, I took "politically" to mean "in a way that I (theusch) prefer to ignore and wish others would shut up about".
How did I do?

The primary problem with OP's code was that he did not know whether it was correct:

ka7ehk aka OP wrote:
My question is this: do the bitwise operators "care" whether the value being operated on is signed or unsigned?
In response,
skeeve wrote:
The technical problem comes from assigning an unsigned value to a signed integer target.
The value will never be representable and so the result will be implementation-defined.
Perhaps the context made it seem that I thought implementation-defined was wrong or evil.
To be clear: By definition something that is implementation-defined can be found in the documentation for the implementation.
Probably I should have added that pretty much any sensible 16-bit AVR compiler will use twos complement and reinterpret-bits.
The code will therefore perform as desired.
Quote:

.skeeve. wrote:

A technique that relies on a specific toolchain is probably wrong for him.

I mentioned recently that I have to dig a bit further. But you can help to start:
1) What type is 0xfc00? What type is 0x0200?
0x0200 is signed int.
0xfc00 is unsigned int on 16-bit compilers, signed int on others.
-0x0400 is signed int.
Quote:
That will lead me to
2) Since there are no warnings in the two-line solution without casts and such, as-written with "int result;", I can only assume at this point that it is perfectly valid C, follows the rules, and is NOT dependent on a particular toolchain for correct operation.
The logic is incorrect. I can think of at least one valid sensible design choice that would cause OP's code to do other than what he wants.
That said, I know of no compiler for which that choice has been made.
I'd expect OP's code to work on any current AVR C compiler.
emphasis added:
Quote:
.skeeve. wrote:

One can rely on little-endian or on the organization of bitfields.
These are both examples of implementation-defined behaviour.

I think you are spewing stuff again. Bit-field organization? Sheesh--throwing the little/big endian into this again?

The skeeve poster said earlier: (must attribute; must attribute; can't expect people to look back one or two posts... :twisted: )

Quote:
In any case, here is what I would write:
Code:
// extend sign bit in bit pos...

This is the same poster that said: "What note?" in reply to when I said:
"After verification, and a note in the source, I'd be done and move on."

I was asking for information.
Someone already feeling attacked might assume any question is rhetorical.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles

Last Edited: Mon. Jul 14, 2014 - 05:37 AM

Koshchi wrote:
Quote:
... lead to both operands becoming int?
They will become ints of their respective signedness.
Understood.

That should have been obvious to me. I was peripherally aware of it, but am generally paranoid about types and usually explicitly cast operands and include suffixes to constants. How nice to have clarity now.

This was helpful:
http://publications.gbdirect.co....
In particular, the reminder:

Quote:
No arithmetic is done by C at a precision shorter than int

Also:
http://stackoverflow.com/questio...
In particular:

Quote:
  • char or short values (signed or unsigned) are promoted to int (or unsigned) before anything else happens
  • this is done because int is assumed to be the most efficient integral datatype, and it is guaranteed that no information will be lost by going from a smaller datatype to a larger one

And:
https://www.securecoding.cert.or...

Quote:
Integer types smaller than int are promoted when an operation is performed on them.

Thanks for the kick in the pants Steve.

theusch wrote:
As a [very] side note, can one ever "promote" from int16_t to uint8_t ?
Ah, no ;) ... although a casual reading of my post might cause one to think I was saying so. I was referring to a conversion from int8_t to uint8_t under control of the last value-preserving rule posted by Cliff, which I see clearly now would never take place. I was ignoring the initial promotion to int which types narrower than int first undergo prior to the application of those rules. @Koshchi has set me straight.

ka7ehk wrote:
Every one has resulted in learning something new.
The learning has been shared ;)

Thanks to Cliff for dropping in just long enough to make his bombing run ;)

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


Hey, folks -

This should not be a religious war!

I do realize that this has some of the aspects of religion - arcane knowledge, strange and seemingly unexpected behavior if that arcane knowledge is ignored in some very limited (but not uncommon) situations, some "high priests" who seem to have intimate familiarity with said arcane knowledge, and such.

Let's back off! I simply wanted to know more about said arcane knowledge. Those who don't want to know more can ignore the thread.

Cheers and Peace!

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


Quote:

Can you imagine the problems if you were to "send" this signed 10-bit value, unmodified, as 2 raw 8-bit bytes to a PC and had to deal with it in a 32-bit world with different endianness?

Well there are two solutions to that:

1) you convert it to some universal format first that can't be mis-interpreted. The common one is ASCII. Sure it means your 2 bytes become up to 5 but it's often a cost worth paying to guarantee safe delivery of the information and has the added advantage that it's very easy for humans to debug when you are doing it.

2) Handle the 2 bytes "correctly" when they arrive. This relies on both ends knowing the endianness of what's being sent. If you are lucky, both ends have the same endianness (in the case of AVR-PC this is true) and you can use similar constructs to amalgamate the bytes at either end (though "packing" can sometimes get in the way!). But the safe way to reassemble the bytes is to reconstruct the int from the individual bytes using shifts. So either:

short reconstructed = (byte1 << 8) | byte2;

or

short reconstructed = (byte2 << 8) | byte1;

depending on the endianness of what just arrived. (perversely I'm counting from 1 not 0!). This makes no assumptions about the PC's endianness but does rely on you knowing the endianness of the device that created the bytes or rather the order the transmitter actually chose to send them.
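
A hypothetical end-to-end sketch (names invented here) combining the shift reassembly above with the earlier sign extension, assuming the transmitter sends the raw 10-bit reading high byte first, without sign-extending it:

#include <stdint.h>

static int16_t reassemble_msb_first(uint8_t byte1, uint8_t byte2)
{
    uint16_t raw = ((uint16_t)byte1 << 8) | byte2;  /* byte1 = high byte */
    int16_t value = (int16_t)raw;   /* raw is 0..1023 here, so it fits */
    if (value & 0x0200)             /* 10-bit sign bit set? */
        value -= 0x0400;            /* map 512..1023 onto -512..-1 */
    return value;
}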

Cliff


skeeve's response to my question:

Quote:

I mentioned recently that I have to dig a bit further. But you can help to start:
1) What type is 0xfc00? What type is 0x0200?
0x0200 is signed int.
0xfc00 is unsigned int on 16-bit compilers, signed int on others.
-0x0400 is signed int.

In my view, this is the crux of the whole discussion.

I maintain that 0xfc00 is a [signed] int constant on AVR8 C compilers. At least with GCC and CV based on how they are handling them.

I use a freely-available draft C standard: WG14/N1124 Committee Draft — May 6, 2005, ISO/IEC 9899:TC2. Perhaps those that have access to "real" or more recent standards can confirm or refute the below, from 6.4.4.1 Integer constants:

Quote:

Semantics
4 The value of a decimal constant is computed base 10; that of an octal constant, base 8;
that of a hexadecimal constant, base 16. The lexically first digit is the most significant.
5 The type of an integer constant is the first of the corresponding list in which its value can be represented.

The first on the list following is "int". 0xfc00 can be represented in a 16-bit "int". So I say it is an int constant.

Before doing that digging, I took the fact that neither GCC nor CV whined with a warning about the two-line fragment as evidence that indeed it was treating it as an int constant.

The SBRC/ORI sequence generated, to me, doesn't conflict with it being an int constant.

And thus, my questioning of the statements about "with this particular toolchain" and portability and reliability and such.

You can convince me, but you need to start with my question 1) about the type of 0x0200 and 0xfc00.

Consider this:

#include <avr/io.h>

int main(void)
{
int result;

	result = ADCW;
	if (result & 0x0200) result |= 0xfc00;
	PORTD = result;
	PORTD = result>>8;
	if (result & 512) result |= -1024;
	PORTD = result;
	PORTD = result>>8;
}

which generates (in part)

	if (result & 0x0200) result |= 0xfc00;
  4e:	91 fd       	sbrc	r25, 1
  50:	9c 6f       	ori	r25, 0xFC	; 252
...
	if (result & 512) result |= -1024;
  5e:	91 fd       	sbrc	r25, 1
  60:	9c 6f       	ori	r25, 0xFC	; 252

with no errors or warnings. (tell me how to tell GCC in Studio6.1 to be more picky?)

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


I tried the bit-fields approach with my GCC app, and got results I didn't expect. The <0 signed test is ignored no matter how I cast things. (I tried a few different ways to go to/from "frog"...)

#include <avr/io.h>

// Bitfield version
struct adc_signed_struct
{
int bits_adcw:10;
int bits_sex:6;
};
int main(void)
{
int result;
struct adc_signed_struct frog;

	frog.bits_adcw = (int)ADCW;
	if (frog.bits_adcw < 0) 
	{
	frog.bits_sex = -1;
	}
	else
	{
	frog.bits_sex = 0;
	}
	result = *(int*)&frog;
	PORTD = result;
	PORTD = result>>8;
}
	frog.bits_adcw = (int)ADCW;
  46:	20 91 78 00 	lds	r18, 0x0078
  4a:	30 91 79 00 	lds	r19, 0x0079
	}
	else
	{
	frog.bits_sex = 0;
	}
	result = *(int*)&frog;
  4e:	c9 01       	movw	r24, r18
  50:	93 70       	andi	r25, 0x03	; 3
	PORTD = result;
  52:	8b b9       	out	0x0b, r24	; 11
	PORTD = result>>8;
  54:	89 2f       	mov	r24, r25
  56:	99 0f       	add	r25, r25
  58:	99 0b       	sbc	r25, r25
  5a:	8b b9       	out	0x0b, r24	; 11
  5c:	80 e0       	ldi	r24, 0x00	; 0
  5e:	90 e0       	ldi	r25, 0x00	; 0
  60:	08 95       	ret

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


theusch wrote:
skeeve's response to my question:
Quote:

I mentioned recently that I have to dig a bit further. But you can help to start:
1) What type is 0xfc00? What type is 0x0200?
0x0200 is signed int.
0xfc00 is unsigned int on 16-bit compilers, signed int on others.
-0x0400 is signed int.

In my view, this is the crux of the whole discussion.

I maintain that 0xfc00 is a [signed] int constant on AVR8 C compilers. At least with GCC and CV based on how they are handling them.

Integer literals in general and 0xfc00 in particular represent non-negative values.
Look at the assembly from:
long fc00=0xfc00;

Regarding bit-fields:
The standard-writers made what, to me, is a silly mistake:
it is implementation-defined whether an int bit-field is treated as a signed int bit-field or an unsigned int bit-field.
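
A sketch of the bit-field struct from the earlier post with the signedness spelled out, which sidesteps that particular implementation-defined choice (the field layout itself is still implementation-defined):

struct adc_signed_struct
{
    signed int bits_adcw : 10;  /* explicitly signed 10-bit field */
    signed int bits_sex  : 6;
};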

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles

Last Edited: Tue. Jul 15, 2014 - 01:53 AM

Quote:

Integer literals in general and 0xfc00 in particular represent non-negative values.

Again, this is the crux of the matter.
I was told:
SprinterSB wrote:
theusch wrote:
Examine the generated code.

Examine the C language international standard.

I did examine it. My conclusion was that it was treated as an int, given the quote from the standard, lack of warnings, the generated code, and the same operation when decimal literals were used.

Now, you gave a counter-example with an assignment to [signed] long. Indeed, GCC and CV give different code with an assignment of -1024 vs. 0xfc00. Does that have to do with the treatment of the literal, or the promotion to long to do the assignment?

Unfortunately, there is no "i" suffix to force the constant to int for a test.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


ka7ehk wrote:
And, for Lee, I do understand (I think) where you are coming from. Why spend all that mental effort keeping track of this stuff when it will never interfere with what you are doing? Well, this business of going from signed 10-bit world to signed 16-bits seems to be one place where it can bite you in the butt. Can you imagine the problems if you were to "send" this signed 10-bit value, unmodified, as 2 raw 8-bit bytes to a PC and had to deal with it in a 32-bit world with different endedness?
I really do not see the problem.
The important thing is for both sides to know what is being transferred.
The receiver can do whatever arithmetic is required.
E.g. suppose the PC gets two bytes, MSB-first,
and puts them into the lower two bytes of a 16-, 17-, 23- or 32-bit int.

if(received & (1<<SIGN_POS)) received |= -(1<<SIGN_POS);

will do the right thing regardless of whether the value was sign-extended before transmission.

When playing with bit masks, we generally do not have to worry much about signs.
The reason is twos complement.
AVRs and most other machines use twos complement for integer arithmetic.
Compiler writers therefore use twos complement.
Conversion from signed to unsigned is well-defined whether or not one uses twos complement.
If one uses twos complement, no machine code is required.
One can simply reinterpret the former sign bit.
Conversion from unsigned to signed is partly implementation-defined.
Again, no machine code is required.
Reinterpreting the future sign bit is allowed.
In twos complement, the latter conversion is the inverse of the former.

Though I do not know of any examples,
other conversions from unsigned to signed are allowed, even for twos complement.
That there are few, if any, such examples is what allows some people to
ignore little things like signedness and to whine about those who do not.

I generally do not write bit masks as hexadecimal.
They require too much thought.
all bits below POS: ((1<<POS)-1)
all bits POS and above: (-(1<<POS))
Macros can be useful.
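
For instance, a sketch of such macros (hypothetical names; the last one does the sign extension by subtraction, so it does not assume twos complement):

/* All bits below position pos. */
#define BITS_BELOW(pos)      ((1 << (pos)) - 1)

/* All bits at position pos and above (twos complement). */
#define BITS_FROM(pos)       (-(1 << (pos)))

/* Sign-extend a value whose sign bit sits at position pos. */
#define SIGN_EXTEND(x, pos)  (((x) & BITS_BELOW(pos)) - ((x) & (1 << (pos))))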

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


Hmmm... I wrote a response, but somehow the post never went through. I'll try again:

Quote:
0xfc00 can be represented in a 16-bit "int". So I say it is an int constant.
This is incorrect. 0xfc00 is a hex number. There is nothing inherent in it that states or implies that the number represents a negative value. The rule finds the first match in the list that can fit the value. But the value must be evaluated >>before<< the rule is applied. In your interpretation the compiler must assume that a) the value is a 16 bit number and b) that the value is a two's complement representation. But both of these things can only be stated >>after<< the rule is applied.
Quote:
My conclusion was that it was treated as a int
Your conclusion is misguided. Since the "implementation" is using twos complement binary, the result will be the same regardless of whether the values in question were signed or unsigned. Therefore the result alone can not determine how the compiler interpreted the value.
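A small sketch of that point (my own example, twos complement assumed): whichever way the OR is done, the stored bits come out the same, so the output cannot tell you how the constant was typed.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t  s = 0x0300;                 /* some positive 10-bit reading  */
    uint16_t u = 0x0300;

    int16_t  rs = s | (int16_t)0xfc00;   /* OR with a signed constant     */
    uint16_t ru = u | 0xfc00u;           /* OR with an unsigned constant  */

    /* both print ff00: the bit patterns are identical */
    printf("%04x %04x\n", (unsigned)(uint16_t)rs, (unsigned)ru);
    return 0;
}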

Regards,
Steve A.

The Board helps those that help themselves.


On a 32-bit machine

theusch wrote:
I did examine it. My conclusion was that it was treated as a int, given the quote from the standard, lack of warnings, the generated code, and the same operation when decimal literals were used.
Decimal is treated differently from octal and hexadecimal.
The standard provides no hint of a negative integer literal.
From your interpretation of the standard,
can you give us an example of a hexadecimal integer literal
which represents an unsigned int?

BTW the standard uses the term integer constant which
is different from integer constant expression.

Quote:
Now, you gave a counter-example with an assignment to [signed] long. Indeed, GCC and CV give different code with an assignment of -1024 vs. 0xfc00. Does that have to do with the treatment of the literal, or the promotion to long to do the assignment?
As I thought was well known,
the standard requires integer conversions to be value-preserving whenever possible.
The long, at least 32 bits, is big enough to hold 0xfc00 regardless of the latter's sign.
-1024 is wrong.
If gcc gave it to you, the reason is probably related to this:
GNU Coding Standards wrote:
5.6 Portability between CPUs

Even GNU systems will differ because of differences among CPU types—for example, difference in byte ordering and alignment requirements. It is absolutely essential to handle these differences. However, don’t make any effort to cater to the possibility that an int will be less than 32 bits. We don’t support 16-bit machines in GNU.

http://www.gnu.org/prep/standards/html_node/CPU-Portability.html#CPU-Portability

On a 32-bit machine,

long long fffffc00=0xfffffc00;

produces 4294966272, which does not fit into a 32-bit int.

Edit:

Quote:
Semantics
4 The value of a decimal constant is computed base 10; that of an octal constant, base 8;
that of a hexadecimal constant, base 16. The lexically first digit is the most significant.
5 The type of an integer constant is the first of the corresponding list in which its value can be represented.
What is the positional value of the f in 0xfc00?

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


Quote:

Decimal is treated differently from octal and hexadecimal.

Where in the standard does it say that?

Quote:

The standard provides no hint of a negative integer literal.

Aaah, the light is starting to come on. The unary minus >>operator<<!... (or, since it seems to be peeing on me the urinary minus operator?)

Quote:

What is the positional value of the f in 0xfc00?

I assume that question is rhetorical. "Lexically first" means the most significant digit is on the left when coding a constant. In other words, 1234 has 1 thousand, not 4 thousands. Same for hex.

Does your reasoning change if it is 0xfc0 or 0xfc or 0xf ? The lexically first digit is the most significant and is f in all four cases.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Quote:
Where in the standard does it say that?
n1256.pdf, p56?

#include 

volatile long foo;

int main(void) {
  foo = 0xfc00;
  while(1);
}
00000040 <main>:
#include 

volatile long foo;

int main(void) {
  foo = 0xfc00;
  40: 80 e0        ldi  r24, 0x00  ; 0
  42: 9c ef        ldi  r25, 0xFC  ; 252
  44: a0 e0        ldi  r26, 0x00  ; 0
  46: b0 e0        ldi  r27, 0x00  ; 0
  48: 80 93 60 00  sts  0x0060, r24
  4c: 90 93 61 00  sts  0x0061, r25
  50: a0 93 62 00  sts  0x0062, r26
  54: b0 93 63 00  sts  0x0063, r27
  58: ff cf        rjmp .-2        ; 0x58

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


Quote:

n1256.pdf, p56?

You'll have to be a bit more specific.

I guess I'm using n1124 and I don't see anything on p. 55 or 56 there that distinguishes.

OK, I found and downloaded N1256 which is a couple years later. I still don't see anything specific to what you might be referring to.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


theusch wrote:
You'll have to be a bit more specific.
Strange. It is p56 in both n1124.pdf and n1256.pdf. Does one of us have a crappy pdf reader that displays incorrect page numbers? Anyway, it is in the table at the end of section 6.4.4.1.


ezharkov wrote:
theusch wrote:
You'll have to be a bit more specific.
Strange. It is p56 in both n1124.pdf and n1256.pdf. Does one of us have a crappy pdf reader that displays incorrect page numbers? Anyway, it is in the table at the end of section 6.4.4.1.
On this document, evince (under Ubuntu) shows physical page numbers, not 'logical' (i.e. the page number in the header/footer text).

The page with the page number 56 in the footer is actually the 68th page of the PDF.

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


Quote:

The page with the page number 56 in the footer is actually the 68th page of the PDF.

Indeed. But what phrase in particular is ez asking me to look at? I earlier quoted a piece that seems to be the only relevant part.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


theusch wrote:
Quote:

The page with the page number 56 in the footer is actually the 68th page of the PDF.

Indeed. But what phrase in particular is ez asking me to look at? I earlier quoted a piece that seems to be the only relevant part.
OK, I see. I needed to provide a bit more context. Below is your question that I replied to. I didn't have a particular "phrase" in mind. Instead, I meant to point to that table at the end of 6.4.4.1. More specifically, to the two different columns for decimal vs octal&hexadecimal.
Quote:
theusch wrote:
Quote:

Decimal is treated differently from octal and hexadecimal.

Where in the standard does it say that?


Quote:
But what phrase in particular is ez asking me to look at?
I would think the table itself. It has a column for Decimal and a column for Octal/Hexadecimal. That table shows that any Decimal constant that is not explicitly marked as unsigned can only be assigned a signed type, but for Octal or Hexadecimal it could be either signed or unsigned.

As Joey showed, the compiler must be treating 0xfc00 as an unsigned int. If it were treated as a signed int, then the conversion from int to long would have caused sign extension.
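A tiny variation on Joey's test shows the contrast (my own sketch): force the literal to signed int with a cast and the sign extension comes back.

volatile long a;
volatile long b;

int main(void) {
  a = 0xfc00;        /* unsigned int 64512 -> long 64512, upper bytes 0x00 0x00 */
  b = (int)0xfc00;   /* converted to 16-bit signed int first (implementation-   */
                     /* defined; -1024 on GCC) -> long -1024, upper bytes 0xff  */
  while(1);
}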

Regards,
Steve A.

The Board helps those that help themselves.


I was following this thread, but did not particularly care if 0xfc00 was signed or unsigned. Well, I knew that it was unsigned in GCC, but did not particularly care why. Then I got curious. Then I thought about "the type of an integer constant is the first of the corresponding list in which its value can be represented" and got totally confused - what does "can be represented" really mean? But then finally everything fell into place. I believe that "can be represented" means that the value is within the valid range for a particular type. Since the "signed int" range is -32768 to 32767, 0xfc00 cannot possibly be a "signed int". Period.


Quote:

Since the "signed int" range is -32768-32767, 0xfc00 cannot possibly be a "signed int". Period.


From the wording in the standard I don't know if I can agree with the "Period" part. As was pointed out to me, -1024 is not a constant but rather an expression. And 0xfc is a valid encoding for -1024.

In the end, "I don't know". ADCW consists of a low 8 bits and a high 2 bits (assuming no ADLAR, but that just changes the situation). With a signed 10-bit value, neither the 2 bits in ADCH nor the 10 bits in ADCW form a signed int that GCC can handle directly. So some manipulation must be done.

In my experiments the straightforward two-lines-of-source-code naive implementation produces the desired result, and in two AVR8 instructions. And no warnings about any forced coercion or similar.

What do those that protest now recommend as the best politically correct solution?

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


theusch wrote:
From the wording in the standard I don't know if I can agree with the "Period" part.
At the beginning of 6.4.4.1, they define the syntax of an integer constant. I don't see any "minus" sign anywhere there. Therefore, an integer constant is a value in the range 0-32767. 0xfc00 is 64512. Definitely outside the range. Cannot possibly be a "signed int". Period.


Quote:
And 0xfc is a valid encoding for -1024.
Only if you presuppose that it represents a 16 bit signed integer (I presume you meant 0xfc00). Again, the compiler can not do this. It must take the value of the constant >>before<< assigning it a type. Since 0xfc00 by itself is >>NOT<< negative, the compiler >>MUST<< interpret the value as 64512.
Quote:
Therefore, an integer constant is a value in the range 0-32767.
No, integer != int. An integer constant can be in the range from 0 to positive infinity, which is the whole point. The rule is to determine what C type the integer value will be represented by in code.
Quote:
What do those that protest now recommend as the best politically correct solution?
Why, the original code of course. As I said above, the "implementation" is twos complement binary. With that, the "|" will produce the same result whether the constant and/or the variable being affected are signed or unsigned.

Regards,
Steve A.

The Board helps those that help themselves.


theusch wrote:
From the wording in the standard I don't know if I can agree with the "Period" part. As was pointed out to me, -1024 is not a constant but rather an expression. And 0xfc is a valid encoding for -1024.
To use the standard's terminology, -1024 is not an integer constant, what we have been calling an integer literal.
-1024 is an integer constant expression.
0xfc00 represents the same value as 64512 .
64512 would be tagged a signed long, because it is an unsuffixed decimal that will not fit in a signed int and signed long is next in line for unsuffixed decimals.
0xfc00 would be tagged an unsigned int because it is an unsuffixed hexadecimal that will not fit in a signed int and unsigned int is next in line for unsuffixed hexadecimals.
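If the compiler supports C11, one can simply ask it which list entry a literal landed on (a sketch of my own; the comments describe a 16-bit int target such as avr-gcc with -std=c11, while on a 32-bit int host all three come out as int):

#include <stdio.h>

#define TYPE_OF(x) _Generic((x),           \
    int:           "int",                  \
    unsigned int:  "unsigned int",         \
    long:          "long",                 \
    unsigned long: "unsigned long",        \
    default:       "something else")

int main(void)
{
    puts(TYPE_OF(0xfc00));   /* "unsigned int" when int is 16 bits */
    puts(TYPE_OF(64512));    /* "long"         when int is 16 bits */
    puts(TYPE_OF(-1024));    /* "int"          everywhere          */
    return 0;
}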
Quote:
In my experiments the straightforward two-lines-of-source-code naive implementation produces the desired result, and in two AVR8 instructions. And no warnings about any forced coercion or similar.
I am unfond of the just-poke-it-and-see-what-it does method.
That will only tell you what happened.
It will not tell what was supposed to happen.
That it does what one wants might be the result of a bug.
Remember PROGMEM and typedefs?
Quote:
What do those that protest now recommend as the best politically correct solution?
Perhaps you should explain what you mean by politically.
In any case, I've already given several suggestions.
None of them involve unsigned arithmetic or require wrapping one's brain around 0xfc00.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


Quote:

0xfc00 would be tagged an unsigned int because it is an unsuffixed hexadecimal that will not fit in a signed int...

Again, that is the crux. To me it is a perfectly good bit pattern to represent a 16-bit signed value. In my reading, the standard doesn't directly refute that.

Apparently I'm wrong.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Quote:
Again, that is the crux. To me it is a perfectly good bit pattern to represent a 16-bit signed value.
But it is >>NOT<< a bit pattern, it is a hexadecimal number. Hexadecimal numbers know nothing about bits or about twos complement representation. If the compiler were allowed to do what you suggest (i.e. assume the intended type and implementation before assigning the type), it would create inconsistent behavior. That is, on a target where ints are 16-bit twos complement numbers, the value would be -1024, but on a target where an int is 32 or more bits, the value would be 64512. It is even possible to have a 16-bit implementation that does not use twos complement representation. For any such implementation the value would be some other number. This cannot happen. All implementations must treat the value of the number the same way. The implementation can only affect what type the value is assigned to, not the value itself.

Regards,
Steve A.

The Board helps those that help themselves.


theusch wrote:
Quote:

0xfc00 would be tagged an unsigned int because it is an unsuffixed hexadecimal that will not fit in a signed int...

Again, that is the crux. To me it is a perfectly good bit pattern to represent a 16-bit signed value. In my reading, the standard doesn't directly refute that.
the standard wrote:
Semantics

4 The value of a decimal constant is computed base 10; that of an octal constant, base 8;
that of a hexadecimal constant, base 16. The lexically first digit is the most significant.

The implied formulae make no mention of the type with which the integer constant will eventually be tagged.
Again, can you give an example of an unsuffixed hexadecimal integer constant that would be tagged unsigned int?
To you, what would 0xfc00 represent on a 16-bit sign-magnitude implementation?

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


Quote:

Again, can you give an example of an unsuffixed hexadecimal integer constant that would be tagged unsigned int?

Are you talking to me?

If so, the answer is "I don't know". And yet again, that is the crux of the situation in my view. Steve A. objects strongly.

I read the "Semantics 4 ..." section as lexical interpretation. Steve A. says it doesn't represent a bit pattern. If it doesn't I don't know what it represents.

Reminds me of Alice in Wonderland:
(from Wikipedia)

Quote:

The White Knight explains a confusing nomenclature for the song.

The song's name is called Haddocks' Eyes
The song's name is The Aged Aged Man
The song is called Ways and Means
The song is A-sitting on a Gate

The complicated terminology distinguishing between 'the song, the name of the song, and what the name of the song is called' entails the use–mention distinction.

All this for a "6.4.4.1 Integer constants".

Is the constant integer?
Is the constant called integer?
Is the constant's name called integer?
...

;) I don't know; I don't speak standardese.

http://www.alice-in-wonderland.n...

Quote:
Then there is the passage in which the White Knight proposes to comfort Alice by singing her a song:

"Is it very long?" Alice asked, for she had heard a good deal of poetry that day.

"It's long," said the Knight, "but it's very, very beautiful. Everybody that hears me sing it--either it brings the tears into their eyes, or else--"

"Or else what?" said Alice, for the Knight had made a sudden pause.

"Or else it doesn't, you know. The name of the song is called 'Haddock's Eyes'."

"Oh, that's the name of the song, is it?" Alice said, trying to feel interested.

"No, you don't understand," the Knight said, looking a little vexed. "That's what the name is called. The name really is 'The Aged Aged Man'."

"Then I ought to have said 'That's what the song is called?'" Alice corrected herself.

"No, you oughtn't: that's quite another thing! The song is called 'Ways and Means': but that's only what it's called, you know!"

"Well, what is the song, then?" said Alice, who was by this time completely bewildered.

"I was coming to that," the Knight said. "The song really is 'A-sitting on a Gate': and the tune's my own invention."

Now that is formal logic served up with an apple in its mouth! Those familiar with programming computers in higher-level languages will see there a clear delineation of the difference between a datum, the symbolic name of that datum, the address at which the datum is stored, and the symbolic name of that address.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


theusch wrote:
Quote:

Again, can you give an example of an unsuffixed hexadecimal integer constant that would be tagged unsigned int?

Are you talking to me?

If so, the answer is "I don't know". And yet again, that is the crux of the situation in my view. Steve A. objects strongly.

Of course he does.
It's a polynomial with coefficients 15, 12, 0 and 0 and with "variable" 16.
What else does one get out of "The value ... of a hexadecimal constant, base 16. The lexically first digit is the most significant."?

Let's try some easier questions.
To you, what are the types of the following integer literals (integer constants) on a twos complement 16-bit implementation:
0x7666
0x8666
0x9666
0xa666
0xb666
0xc666
0xd666
0xe666
0xf666
Again, to you, what would 0xfc00 represent on a 16-bit sign-magnitude implementation?

Quote:
I read the "Semantics 4 ..." section as lexical interpretation. Steve A. says it doesn't represent a bit pattern. If it doesn't I don't know what it represents.

Reminds me of Alice in Wonderland:

Horse hockey.
The term "integer constant" for what we have been calling "integer literal" is truly a poor choice,
but that does not make it something out of Alice in Wonderland.

What is your basis for claiming 0xfc00 represents a bit pattern?
Note that if it does represent a bit pattern, the only unsuffixed hexadecimal literal that might not be interpretable as an int on a 16-bit twos complement implementation is 0x8000.
C requires that int allow at least 2**16-1 values.
On any 16-bit implementation, at most one 16-bit bit pattern is not interpretable as an int.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


theusch wrote:
In my experiments the straightforward two-lines-of-source-code naive implementation produces the desired result, and in two AVR8 instructions. And no warnings about any forced coercion or similar.
skeeve wrote:
I am unfond of the just-poke-it-and-see-what-it does method.
That will only tell you what happened.
It will not tell what was supposed to happen.
That it does what one wants might be the result of a bug.

'Poking' at code can't tell you the whole story, agreed. However it is important to understand how your toolchain generates code. You can study documentation on the code generating model of your toolchain, and compare that against the written standard to which that toolchain claims to adhere. Both of these are important.

However, there is no substitute for examining the generated code itself. That's where the buck stops.

Surely it is important to have an understanding of all of these aspects of software development, from the standard right through to the generated opcodes, at least to some degree.

Should one rely upon the standard exclusively? Or objdump exclusively? Of course not.

Is it practical to reverse-engineer every generated opcode of your own source? Certainly not.

Is it necessary? Certainly not.

Is it helpful? Absolutely.

This whole debate reminds me of a joke I once heard.

More apropos, it reminds me of an article called A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World, which appeared in CACM in 2010:
http://cacm.acm.org/magazines/20...

A favourite quote:

Quote:
Checking code deeply requires understanding the code's semantics. The most basic requirement is that you parse it. Parsing is considered a solved problem. Unfortunately, this view is naïve, rooted in the widely believed myth that programming languages exist.

The C language does not exist; neither does Java, C++, and C#. While a language may exist as an abstract idea, and even have a pile of paper (a standard) purporting to define it, a standard is not a compiler. What language do people write code in? The character strings accepted by their compiler. Further, they equate compilation with certification. A file their compiler does not reject has been certified as "C code" no matter how blatantly illegal its contents may be to a language scholar.

I have enjoyed an embarrassingly large amount of learnin' throughout this thread, and filled in a couple of gaping holes in my knowledge.

I note also with some amusement:

ka7ehk wrote:
This should not be a religious war!
In his .sig, skeeve wrote:
"Religious obligations are absolute." -- Relg
;)

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


Quote:
If it doesn't I don't know what it represents.
It represents a number. How hard is that to understand? It is a bit pattern only after it is assigned a type and implementation. Constants are not computer representations of numbers, they are mathematical representations of numbers. It is up to the compiler to find the smallest computer representation of that mathematical number.

Regards,
Steve A.

The Board helps those that help themselves.


Have we beaten this to death yet? I'll just make my summary here, and a few comments on skeeve's latest post.

-- OP posed a problem dealing with a signed 10-bit ADC result. Is it agreed that further manipulation is needed to sign-extend the result for further operations as e.g. an int16_t?

-- OP presented a two-line solution, and had questions -- which led to this extensive discussion. Is it fair to summarize it as: "Can this be relied upon to generate proper code?" Given a particular target toolchain, this would most likely be across versions, right? But jumping between AVR8 toolchains would be a legitimate concern as well IMO/IME -- right now, I have both CV and GCC apps.

-- I did the naive test by putting the code fragment into a vanilla Studio6.1 test app that I happened to have open. There were no warnings, and the generated code did what was desired. As GCC is well respected w.r.t. "proper" C compiling, I took that as "all OK; no promotions or truncations or the like".

-- Now, can a vanilla Studio6.1 app have more strict warning capabilities? I haven't yet found where this type of thing can be tweaked. How do you get to makefile and compiler options and linker options?

-- I put the same fragment into CV and got the same results for code generation and no errors as well. Supposedly the 3.x CV has proper "standard C" checking of some generation or another.

-- I found a later test program and results to be interesting. A code fragment:

volatile int wolf;
...
wolf = -1024;
wolf = 0xfc00;
wolf = 64512;

GCC swallowed it whole, no warnings, same value written to wolf in all three cases. In my mind (and certainly I may not be right) the swallowing of 0xfc00 led me to think that it is indeed an integer constant. But what of 64512?!?

That got me curious, so I plugged it into CV and got "constant expression too large" for the last two. NOW I'm indeed leaning the other way, and 0xfc00 only represents the int bit pattern in GCC and not CV.

================
Sorry, skeeve, it still seems like Alice to me.

Quote:

Let's try some easier questions.
To you, what are the types of the following integer literals (integer constants) on a twos complement 16-bit implementation:
0x7666
0x8666
0x9666
0xa666
0xb666
0xc666
0xd666
0xe666
0xf666
Again, to you, what would 0xfc00 represent on a 16-bit sign-magnitude implementation?

See above about my later "experiment". My answer now is: I don't know. I've earlier used "bit pattern", and 0xfc00 >>represents<< [Alice again] to me a valid bit encoding for the equivalent of -1024, which fits into an int16_t, but -1024 is not a constant (whew) so -- I don't know.

An analogy that I thought of early on would be the representation of a floating point constant. Different source code ways of expressing a constant value that ends up with the same bit pattern in the "float" variable receiving it. E.g. 1.23 and 123e-2 .

Quote:

Horse hockey.

I resemble that remark.

Quote:

What is your basis for claiming 0xfc00 represents a bit pattern?

None. But why can't I think of it that way?

Good luck on your endeavours.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Emphasis added:

joeymorin wrote:
theusch wrote:
In my experiments the straightforward two-lines-of-source-code naive implementation produces the desired result, and in two AVR8 instructions. And no warnings about any forced coercion or similar.
skeeve wrote:
I am unfond of the just-poke-it-and-see-what-it does method.
That will only tell you what happened.
It will not tell what was supposed to happen.
That it does what one wants might be the result of a bug.

'Poking' at code can't tell you the whole story, agreed. However it is important to understand how your toolchain generates code. You can study documentation on the code generating model of your toolchain, and compare that against the written standard to which that toolchain claims to adhere. Both of these are important.

Agreed.
That said, lee seemed to be advocating the poking-only method.
He seems quite irate that anyone even mentioned the standard.
Sometimes one must decide whether to use code that seems to work
even though documentation says that it won't or might not.
As a rule, I'd say not.
If one does, I'd say that it was necessary to examine its assembly on every build.
Quote:
However, there is no substitute for examining the generated code itself. That's where the buck stops.
Not quite yet.
Source has at least three audiences: the author, the compiler and the maintainer.
The code that started the discussion seems to be good only for the compiler.
One measure of maintainer-goodness of source is how long a maintainer
would look at it if he were passing through looking for bugs.
By that measure, the legibility of 0xfc00 is more important than its type.
Quote:
This whole debate reminds me of a joke I once heard.
In the version I'd read, there were three separate fires in three separate offices.
Some people are careless.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


I think we are now at the level of angels dancing on pinheads.

A written number only represents some quantity. Only when applied to a computer program does it take on an association with a "bit pattern". A very large number of bit patterns can be used to represent a given number, depending on the base you choose and the representation within the digital device.

But, in the end, it's a distinction that is best left to the philosophers. For all practical purposes, a number is simply whatever we imagine it to be.

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


Quote:

That said, lee seemed to be advocating the poking-only method.
He seems quite irate that anyone even mentioned the standard.

Whoa! Now I MUST chime in again. Why do EITHER of those statements "seem" like anything I have said in this thread?

Yes, I've several times said that I don't speak standardese. Just because I have a hard time following some of the passages, including the one(s) pertinent to this discussion, doesn't in any shape, style, or form indicate any iration (if that is a word).

Are you referring to Sprinter's post and my response?

Quote:
Examine the C language international standard.

That was awfully generic, and I indeed had already gone there. As we've gone back-and-forth on, I don't see any wording that told me that 0xfc00 is a signed int or unsigned int or other.

Re the "poking-only method": I object to that as well. Indeed, the two-line fragment proposed by the OP seems straightforward. A straightforward try is "poking"? Why?

What is wrong with the approach I took? I plugged it into a well-regarded C compiler and it didn't object. As I'd just built the project, I looked at the generated code and saw that the compiler recognized both optimizations and made a tight sequence.

Admirable, right? Didja see the current thread about the "crappy" code generation with aforementioned toolchain?

What is "poking-only" about the approach I took? Should instead a committee have been formed, to schedule a discussion on the matter? Sheesh--if I'm thinking like a microcontroller and had to sign-extend a 10-bit value to a 16-bit value, I'd look at the S bit and if set then set the higher bits, as the value is already in two's complement.

What non-poking approach would you take?

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


theusch wrote:
-- OP posed a problem dealing with a signed 10-bit ADC result. Is it agreed that further manipulation is needed to sign-extend the result for further operations as e.g. an int16_t?

-- OP presented a two-line solution, and had questions -- which led to this extensive discussion. Is it fair to summarize it as: "Can this be relied upon to generate proper code?" Given a particular target toolchain, this would most likely be across versions, right? But jumping between AVR8 toolchains would be a legitimate concern as well IMO/IME -- right now, I have both CV and GCC apps.

The answer given was yes, for pretty much any AVR C compiler.
'Twould be possible for a valid twos complement 16-bit compiler to bite him,
but I do not know or suspect any that would.
The type of 0xfc00 really should not affect code generation.
To me, its evils are that it is a magic number and not terribly legible.
-0x400 would be easier to read.
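Spelled out, that alternative might look like this (just a sketch; the helper name is mine):

#include <stdint.h>

/* Sign-extend the 10-bit bipolar ADC reading. */
static inline int16_t sign_extend10(int16_t result)
{
    if (result & 0x0200)     /* bit 9, the sign bit of the 10-bit value          */
        result |= -0x400;    /* -0x400 == -1024: the same bits as 0xfc00 on twos */
    return result;           /* complement                                       */
}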
Quote:
-- I found a later test program and results to be interesting. A code fragment:

volatile int wolf;
...
wolf = -1024;
wolf = 0xfc00;
wolf = 64512;

GCC swallowed it whole, no warnings, same value written to wolf in all three cases. In my mind (and certainly I may not be right) the swallowing of 0xfc00 led me to think that it is indeed an integer constant. But what of 64512?!?

That got me curious, so I plugged it into CV and got "constant expression too large" for the last two. NOW I'm indeed leaning the other way, and 0xfc00 only represents the int bit pattern in GCC and not CV.

gcc should have emitted the same warnings.
That it did not is probably a bug related to its 32-bit origins.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


skeeve wrote:
Quote:
However, there is no substitute for examining the generated code itself. That's where the buck stops.
Not quite yet.
Source has at least three audiences: the author, the compiler and the maintainer.
I would add a fourth: the hardware.

Inasmuch as the hardware never sees and has no notion of source code, this statement may seem glib (not GLib ;)). However the binary that directs the operation of the hardware is the direct expression of the original source code as overseen by the toolchain. That's what I meant by 'that's where the buck stops'.

As for the other three audiences, the maintainer is not absolved of any of the author's goals and responsibilities including and especially ensuring the correct expression of the ideas embodied in the source code. The remaining audience (the compiler) was conjured into existence at the behest of the other two, merely as a tool in the chain of tools to achieve those goals.

Lee you got me thinking:

volatile int wolf;

int main(void) {
  wolf = -1024;
  wolf = 0xfc00;
  wolf = 64512; 
  while(1);
}
$ avr-gcc -Wpedantic type_promotion_test.c -o type_promotion_test.elf 
type_promotion_test.c: In function 'main':
type_promotion_test.c:6:3: warning: overflow in implicit constant conversion [-Woverflow]
   wolf = 64512; 
   ^

I note with some confusion that the hex literal doesn't generate an error.

What are we to make of this?

Jim, sorry I couldn't help myself... I realise that for you this matter is settled, but this thread has been unexpectedly provocative...

Maybe a moderator should come along to delete my post and lock the thread to give you the last word ;)

EDIT: Ah, too late! Lee and Michael beat me to it...

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


Quote:
To me, its evils are that it is a magic number and not terribly legible.
-0x400 would be easier to read.
Bull. The purpose of the constant was to OR it in with the variable. With 0xfc00 it is perfectly evident what bits are being set. With -0x400 it is entirely non-obvious what bits will be set.

Regards,
Steve A.

The Board helps those that help themselves.


joeymorin wrote:
Lee you got me thinking:
volatile int wolf;

int main(void) {
  wolf = -1024;
  wolf = 0xfc00;
  wolf = 64512; 
  while(1);
}
$ avr-gcc -Wpedantic type_promotion_test.c -o type_promotion_test.elf 
type_promotion_test.c: In function 'main':
type_promotion_test.c:6:3: warning: overflow in implicit constant conversion [-Woverflow]
   wolf = 64512; 
   ^

I note with some confusion that the hex literal doesn't generate an error.

What are we to make of this?

0xfc00 is unsigned int.
64512 is signed long.
Neither conversion to signed int is value-preserving,
but the latter is also narrowing.
To find out what avr-gcc thinks of 0xfc00, initialize a long with it and look at the assembly.
Quote:
Jim, sorry I couldn't help myself... I realise that for you this matter is settled, but this thread has been unexpectedly provocative...
I'll say. Whoda thunk that the type rules were hard?

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


skeeve wrote:
0xfc00 is unsigned int.
64512 is signed long.
Neither conversion to signed int is value-preserving,
but the latter is also narrowing.
So the use of -pedantic exposes the warning for the assignment of a 32-bit signed value into a 16-bit signed variable because the overflow is due to 'narrowing', but doesn't throw a warning for the conversion of a 16-bit unsigned value into a 16-bit signed variable because the overflow is owing to the fact that this 16-bit unsigned value cannot be represented as a 16-bit signed integer?

Why is this distinction important? More importantly why doesn't it throw a warning, even if it's a lexically different warning? Surely the programmer wants to know that an overflow has occurred, regardless of why it occurred.

Quote:
To find out what avr-gcc thinks of 0xfc00, initialize a long with it and look at the assembly.
Yeah, we've covered that. I don't see the relevance w.r.t. an expected warning due to overflow.

Is there perhaps another compiler option which would elicit a warning for non-value-preserving conversions which don't involve narrowing? Or, can you point to the part of the standard which allows non-value-preserving, non-narrowing conversions without complaint?

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


Quote:
Surely the programmer wants to know that an overflow has occurred, regardless of why it occurred.
Well, would you expect the compiler to issue a warning about this line of code?
Quote:
int a, b, c;
a = b * c;
Surely the result of the multiply could be larger than fits in an int, but just as surely you would not expect the compiler to warn you about it. I think it is the difference between what is seen as the compiler's responsibility and what is the programmer's responsibility.

Regards,
Steve A.

The Board helps those that help themselves.


Koshchi wrote:
Quote:
To me, its evils are that it is a magic number and not terribly legible.
-0x400 would be easier to read.
Bull. The purpose of the constant was to OR it in with the variable. With 0xfc00 it is perfectly evident what bits are being set. With -0x400 it is entirely non-obvious what bits will be set.
'Tis obvious to anyone at all familiar with twos complement.
If the coder is not familiar with twos complement, he really should not be writing code that relies on it.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


joeymorin wrote:
skeeve wrote:
0xfc00 is unsigned int.
64512 is signed long.
Neither conversion to signed int is value-preserving,
but the latter is also narrowing.
So the use of -pedantic exposes the warning for the assignment of a 32-bit signed value into a 16-bit signed variable because the overflow is due to 'narrowing', but doesn't throw a warning for the conversion of a 16-bit unsigned value into a 16-bit signed variable because the overflow is owing to the fact that this 16-bit unsigned value cannot be represented as a 16-bit signed integer?

Why is this distinction important? More importantly why doesn't it throw a warning, even if it's a lexically different warning? Surely the programmer wants to know that an overflow has occurred, regardless of why it occurred.

To me, it isn't.
A compiler writer made a poor choice.
This is an example of what is wrong with the just-poke-it-and-see-what-it-does method.
One should do more than poke.
Quote:
Is there perhaps another compiler option which would elicit a warning for non-value-preserving conversions which don't involve narrowing? Or, can you point to the part of the standard which allows non-value-preserving, non-narrowing conversions without complaint?
Even when unwise, such conversions are valid,
hence no diagnostic is required.

Koshchi wrote:
Well, would you expect the compiler to issue a warning about this line of code?
Quote:
int a, b, c;
a = b * c;
Surely the result of the multiply could be larger than fits in an int, but just as surely you would not expect the compiler to warn you about it. I think it is the difference between what is seen as the compiler's responsibility and what is the programmer's responsibility.
b and c are uninitialized.
I'd expect the compiler to warn about that.
Were b and c initialized with constant values,
the compiler could warn about overflow, if appropriate.
Were b or c initialized with non-constant values,
I'd expect the compiler not to warn about an overflow that might not happen.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


I'm not a C expert but I know my numbers!
One has to distinguish between "values" and "storage classes": if you can fit a value into a storage class, the code will work (if you use it correctly!).
The fun starts when:

Quote:
the compiler could warn about overflow, if appropriate

what happens when
int a, b, c;
b=100;
c=500;
a = b * c; 

It stores the correct value in the wrong format!


Quote:
Were b and c initialized with constant values, the compiler could warn about overflow, if appropriate.
And if I were a compiler writer, I would never bother with such a warning. If the values are known at compile time, then the code is unnecessary, so why would I go out of my way to create a warning for it? If compiler writers spent their time creating warnings for every logical mistake that a programmer can make, then no C compiler would ever make it out the door. They need to concentrate on the syntactic mistakes. Again, this is the difference between what the compiler (and the compiler writer) is responsible for and what the programmer is responsible for.

Regards,
Steve A.

The Board helps those that help themselves.


Quote:

then no C compiler would ever make it out the door. They need to concentrate on the syntactic mistakes. Again, this is the difference between what the compiler (and the compiler writer) is responsible for and what the programmer is responsible for.

Those that want (some!) logical checking should take a look at tools such as lint, splint, cppcheck and the like. They will likely spot dangers such as potential integer overflow.


skeeve wrote:
Were b and c initialized with constant values, the compiler could warn about overflow, if appropriate.
But it doesn't:
int a, b, c;

int main(void) {
  b=100;
  c=500;
  a = b * c; 
  while(1);
}

No diagnostic is issued (even with -pedantic) perhaps because the operands of the expression are variables (although better compile-time code analysis might have seen the overflow).

Koshchi wrote:
And if I were a compiler writer, I would never bother with such a warning. If the values are known at compile time, then the code is unnecessary, so why would I go out of my way to create a warning for it.
But GCC does when the operands are constants:
int a;

int main(void) {
  a = 100 * 500; 
  while(1);
}
$ avr-gcc type_promotion_test.c -o type_promotion_test.e
type_promotion_test.c: In function 'main':
type_promotion_test.c:4:11: warning: integer overflow in expression [-Woverflow]
   a = 100 * 500;                                                                                                                              
           ^

Here the compiler can see clearly the overflow.

As expected, the warning is issued without the need for -pedantic. The referenced diagnostic however is still [-Woverflow], same as a previous example:

  wolf = 64512;

However in that case the warning was different:

type_promotion_test.c:6:3: warning: overflow in implicit constant conversion [-Woverflow]

... but the diagnostic rule which was responsible was the same: [-Woverflow]

Why is this so? The overflow in a = 100 * 500; occurs as a result of evaluating the expression, but the result of that expression is still a signed int. The operands undergo neither promotion nor conversion. No narrowing (or widening) is involved.

The overflow in wolf = 64512; is the result of conversion from the type of the integer literal after promotion (long int) to the type required by wolf.

A third example:

int a;

int main(void) {
  a = 100u * 500u; 
  while(1);
}

No diagnostic, even with -pedantic. This is the same behaviour as with the earlier example wolf = 0xfc00;

Why should the compiler's behaviour be different in these three cases?

So far the notion of a difference between 'non-value-preserving' conversions and 'narrowing non-value-preserving' conversions has been suggested as an explanation for the different behaviour, but no part of the standard has been offered up to support this explanation.

I believe the type promotion rules are now clear to me. Certainly clearer than they were. I expect I'll be less paranoid with explicit operand casting in the future as a result. However the type promotion rules have (AFAICS) no direct relevance to the warnings emitted.

I understand that the compiler can't be 'responsible for everything'. However it should be responsible to the standard. I seek to understand which, if any, of these three examples represents standard-compliant behaviour, and to identify and understand the relevant parts of the standard. Are any of these 'missing' warnings an indication that GCC falls short of the standard's requirements w.r.t. diagnostics? Do any of the observed warnings go beyond what the standard requires?

Tilting madly on the head of a windmill... Perhaps it's time to carve off the last 50 or so posts into a new thread? (sorry Jim...)

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


One of the things this thread has revealed is the fact that the C standard is pretty daunting and that there are (a few??) parts that are pretty hard to interpret, especially for those of us who have not been exposed very deeply to some of the details of C. I have a particularly hard time even finding something that I am specifically looking for.

Is there some reference between K&R and the Standard that one can refer to on details that might, on rare occasion, need checking?

Thanks
Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


Quote:
but the result of that expression is still a signed int.
Is it? I would think that the "*" is done by the preprocessor, so the compiler would see 50000 and make it a signed long.
Quote:
This is the same behaviour as with the earlier example wolf = 0xfc00;
Of course it is. Both are unsigned ints.
Quote:
Are any of these 'missing' warnings an indication that GCC falls short of the standard's requirements w.r.t. diagnostics?
Warnings are not mandated by the standard:
Quote:
Annex I:
1 An implementation may generate warnings in many situations, none of which are specified as part of this International Standard.

Regards,
Steve A.

The Board helps those that help themselves.


Quote:

Is there some reference between K&R and the Standard that one can refer to on details that might, on rare occasion, need checking?

Sadly no, not really. But you can find any amount of discussion about the interpretation of various bits of the standard on Stack Overflow which should always be the "goto place" for questions about C programming.


Koshchi wrote:
Quote:
but the result of that expression is still a signed int.
Is it? I would think that the "*" is done by the preprocessor, so the compiler would see 50000 and make it a signed long.
Why would it do that? The preprocessor does token substitution. It doesn't evaluate any expression. That's the job of the compiler.

The only circumstances under which the preprocessor will do any arithmetic is within certain preprocessor directives, such as:

#include <limits.h>

#define FOO 100
#define BAR 500

#if (FOO * BAR) > INT_MAX
#error DOH!
#endif

int main(void) {
}
foo.c:4:2: error: #error DOH!
 #error DOH!
  ^

In these cases the preprocessor will use the widest integer type available to it. In the case of AVR GCC, that is long long int (although strictly speaking the preprocessor otherwise has no notion of type).

From:
https://gcc.gnu.org/onlinedocs/g...

Quote:
The `#if' directive allows you to test the value of an arithmetic expression, rather than the mere existence of one macro. Its syntax is
#if expression

controlled text

#endif /* expression */

.
.
.
The preprocessor calculates the value of expression. It carries out all calculations in the widest integer type known to the compiler; on most machines supported by GCC this is 64 bits. This is not the same rule as the compiler uses to calculate the value of a constant expression, and may give different results in some cases. If the value comes out to be nonzero, the `#if' succeeds and the controlled text is included; otherwise it is skipped.
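A contrived sketch of such a difference (my own example, assuming a 16-bit int target):

#if (100 * 500) > 40000
#define BIG_ENOUGH 1    /* taken: the preprocessor works in at least 64 bits here */
#endif

int a;

int main(void) {
  a = 100 * 500;        /* ordinary 16-bit int arithmetic: overflow, -Woverflow warns */
  while(1);
}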

Koshchi wrote:
Of course it is. Both are unsigned ints.
I wasn't asking a question. I know they are the same. That's why I pointed it out.

Koshchi wrote:
Warnings are not mandated by the standard:
Quote:
Annex I:
1 An implementation may generate warnings in many situations, none of which are specified as part of this International Standard.
Ah.

Hmmm:

Quote:
— An implicit narrowing conversion is encountered, such as the assignment of a long int or a double to an int, or a pointer to void to a pointer to any type other than a character type (6.3).
I shall have to read 6.3 in some detail...

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


theusch wrote:
GCC swallowed it whole, no warnings,

Not all warnings are on by default. With
int i = -32768;

you get no warning, even though 32768 and thus -32768 is long. However, that value fits into our 16-bit int. In

int i = -0x8000;

0x8000 is an unsigned int, and thus so is -0x8000, and you get a warning for the implicit conversion provided -Wsign-conversion is on. Don't ask me why this is not part of -W or -Wall; presumably for historical reasons.

The unsigned-ness of -0x8000 and long-ness of -32768 are the reasons why INT_MIN is usually defined as something like (-0x7fff-1) or (-32767-1) and not as (-0x8000) or (-32768) or similar.
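In other words, something along these lines (a sketch; the exact spelling varies between libraries):

/* -32768 would parse as -(32768): 32768 is a long, and 0x8000 is an      */
/* unsigned int, so neither negation yields a plain signed int constant.  */
#define MY_INT_MIN  (-32767 - 1)   /* every intermediate value fits in int */
#define MY_INT_MAX  ( 32767)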

-Woverflow, which is part of -W and -Wall, reports situations like

int i = -32769;

This is implementation defined.

If the application relies on signed overflow, GCC offers -fwrapv with wrapping overflow similar to the unsigned case. With -fno-wrapv or -fno-strict-overflow (default for non-Java programs), you might get undefined behavior and warnings by means of -Wstrict-overflow or -Wstrict-overflow=, e.g. for constructs like

int y = (x + 1 < x) ? 0x7fff : x + 1;

with int x. This non-portable hack can be handy for saturated arithmetic when there is no wider type or you don't want a wider type for better performance.

avrfreaks does not support Opera. Profile inactive.


No warning is either required or prohibited by the standard. Warnings are glimpses into the minds of the compiler-writers. When it comes to warnings, said writers are not required to think well. On some occasions, e.g. syntax errors, the standard does require a "diagnostic". My recollection is that the GNU guys have a standing disagreement with compiler-testers regarding whether a warning is a diagnostic. I think that that is what -pedantic-errors is for.

C standard wrote:
3.4.3
1 undefined behavior
behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements
2 NOTE Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).
3 EXAMPLE An example of undefined behavior is the behavior on integer overflow.
On a 16-bit implementation, outside of preprocessor statements, in a context where they might be interpreted as integer expressions (1):

50000 is a signed long int

100*500 is undefined (int overflow)

50000u is an unsigned int

100u*500 is an unsigned int

100*500u is an unsigned int

500*500u is an unsigned int

0x7fff is a signed int

0x8000 is an unsigned int (in principle, could be signed int  edit: c89 only)

0x7fffu is an unsigned int

0x8000u is an unsigned int

In no case does the type depend on the type of a variable.

(1) this excludes comments, strings and possibly something else I did not think of.

Note that all inter-integer-type conversions are valid, though some are implementation-defined and might get warnings. Arithmetic is not a conversion.
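To put that conversion-versus-arithmetic distinction in code (my own sketch, 16-bit int assumed):

int main(void) {
  volatile long big = 70000L;

  volatile int a = (int)big;        /* conversion: implementation-defined result */
                                    /* (it wraps on gcc), never undefined        */
  volatile int b = 30000 + 30000;   /* arithmetic: signed int overflow where int */
                                    /* is 16 bits -- undefined behaviour         */
  while(1);
}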

sparrow2 wrote:
The fun starts when:
Quote:
the compiler could warn about overflow, if appropriate
what happens when

int a, b, c;
b=100;
c=500;
a = b * c;

It stores the correct value in the wrong format!

int overflow produces undefined behaviour. Diagnostics and nasal demons are allowed. Neither is required. Emphasis added:
Koshchi wrote:
Quote:
Were b and c initialized with constant values, the compiler could warn about overflow, if appropriate.
And if I were a compiler writer, I would never bother with such a warning. If the values are known at compile time, then the code is unnecessary, so why would I go out of my way to create a warning for it.
'Tis my understanding such a warning might show up if optimization is turned on. While doing constant-propagation or whatever the term is, the compiler encounters a value that does not fit in the place where it belongs. At that point, if it feels like doing so, the compiler issues a diagnostic. My understanding is that one type of speed optimization is to duplicate a statement for different code paths. Possibly one such path would require computing 500*500. As that is undefined behaviour, the compiler could, quietly or otherwise, assume said path is unreachable and perform optimizations accordingly. Note that all questions about warnings are really questions about the minds of compiler-writers. Given a silly enough compiler-writer,
char c=666;
int i=666;

might generate a warning for i, but not for c.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles


SprinterSB wrote:

int y = (x + 1 < x) ? 0x7fff : x + 1;

with int x. This non-portable hack can be handy for saturated arithmetic when there is no wider type or you don't want a wider type for better performance.


int y = (x< INT_MAX) ? x+1 : INT_MAX;

is portable.

Edit: portable code is now correct.
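Wrapped up as a self-contained helper, a sketch of the same portable form:

#include <limits.h>

/* saturating increment: never overflows, sticks at INT_MAX */
static inline int sat_inc(int x)
{
    return (x < INT_MAX) ? x + 1 : INT_MAX;
}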

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles

Last Edited: Thu. Jul 17, 2014 - 04:22 PM
  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

skeeve wrote:
0x8000 is an unsigned int (in principle, could be signed int)
In principle how? Page 56 of n1256.pdf seems to make it pretty clear it would match the second item under "Octal or Hexadecimal Constant" for suffix "none". But I've been misguided before...

skeeve wrote:
Diagnostics and nasal demons are allowed.
:)

skeeve wrote:

int y = (x< INT_MAX) ? x+1 : INT_MAX;

is portable.

Did you perhaps mean?:
int y = (x< INT_MAX) ? x+1 : INT_MIN;

EDIT: corrected to match @skeeve's edit

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 

Last Edited: Thu. Jul 17, 2014 - 04:40 PM
  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

joeymorin wrote:
skeeve wrote:
0x8000 is an unsigned int (in principle, could be signed int)
In principle how? Page 56 of n1256.pdf seems to make it pretty clear it would match the second item under "Octal or Hexadecimal Constant" for suffix "none". But I've been misguided before...
It might match int. -0x7fff..0x8000 is an allowed range for int.
-0x7fff..0x7fff would still be represented as in two's complement.
I do not know why any implementation would do so.
Quote:
skeeve wrote:

int y = (x< INT_MAX) ? INT_MAX : x+1;

is portable.

Did you perhaps mean?:
int y = (x< INT_MAX) ? INT_MIN : x+1;

SprinterSB was going for saturated arithmetic.
0x7fff > -0x7fff, therefore 0x7fff is not a valid INT_MIN.
Note that a compiler could do unsigned to signed conversions with saturated arithmetic.
Doing so would render OP's original code incorrect.

Edit:
My code has been corrected in the quoted post,
though not in the quotation above.
It still uses INT_MAX and not INT_MIN.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

skeeve wrote:
I do not know why any implementation would.
Understood. I had a specific implementation on the brain (AVR-GCC).

Emphasis added:

skeeve wrote:
SprinterSB was going for saturated arithmetic.
That is an important word I missed. Twice.

Thanks all for your patience (especially Jim ;)) and willingness to help correct an old dog's tricks.

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

I am learning an incredible amount of stuff from this. I never formally learned about any of this, and it is both an eye-opener and comforting to be able to see inside things with a little less fog.

Cheers and thanks to EVERYONE!

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

There is a programming trick that can be used to increase the size of the numeric representation using two XOR operations and an add operation. Search for "offset binary" and you will be able to work out the details yourself.
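For anyone who does not want to dig: a sketch of one common formulation of the offset-binary trick for a 10-bit value (one XOR plus a subtraction here; equivalent variants exist with an add). The same idea appears again later in the thread.

/* XOR flips the sign bit, turning the 10-bit two's-complement code into
   offset binary (0..1023); subtracting the bias 512 recenters it on zero */
static inline int sign_extend_10(unsigned raw)   /* raw = 10-bit ADC code */
{
    return (int)(raw ^ 0x200u) - 0x200;
}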

 

Olof

Last Edited: Wed. Oct 8, 2014 - 05:10 PM
  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

http://en.wikipedia.org/wiki/Sig...

 

Regarding the signedness: it is good style to apply the bitwise operators only to unsigned numbers.
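A sketch of that style applied to the 10-bit case: do the masking and OR-ing on unsigned values, and convert to signed only once, at the end.

#include <stdint.h>

static inline int16_t sext10(uint16_t raw)   /* raw = 10-bit ADC result */
{
    raw &= 0x03FFu;                /* keep the 10 valid bits */
    if (raw & 0x0200u)
        raw |= 0xFC00u;            /* extend the sign while still unsigned */
    return (int16_t)raw;           /* single signed conversion at the end
                                      (implementation-defined above INT16_MAX,
                                      but the expected result on avr-gcc) */
}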

In the beginning was the Word, and the Word was with God, and the Word was God.

Last Edited: Thu. Oct 23, 2014 - 09:31 PM
  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

I believe testing the sign bit, making conditional jumps, and shifting in '1's had already been proposed earlier in the thread. Anyway, since most C compilers have no concept of a data type like a 10-bit signed variable, I don't see how good style is violated. If the target data type is a standard signed type, then the first time a signed type exists, as far as a C/C++ compiler is concerned, is after the last XOR, and that operation can be done using a standard C/C++ language statement that will not violate good style.

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

So much fuss over a non-issue. Just use a struct with a bit-field and call it a day!

int x; // convert this from using 10 bits to a full int
int r; // resulting sign extended number goes here
struct {signed int x:10;} s;
r = s.x = x;

(Modified from https://graphics.stanford.edu/~s.... As noted on that page, you need to specify "signed" for a bit-field to be signed; whether a plain int bit-field is signed is implementation-defined.)
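If a reusable form is preferred, the same trick fits in a small helper. A sketch only: storing an out-of-range value into a signed bit-field is itself implementation-defined, though avr-gcc keeps the low 10 bits as one would hope.

#include <stdint.h>

static inline int16_t sext10_bitfield(uint16_t raw)
{
    struct { signed int x:10; } s;
    s.x = raw;       /* store the raw 10-bit code into the signed field
                        (implementation-defined for raw > 511) */
    return s.x;      /* reading the field back sign-extends it to int */
}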

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

So much fuss over a non-issue. Just use a struct with a bit-field and call it a day!

int x; // convert this from using 10 bits to a full int
int r; // resulting sign extended number goes here
struct {signed int x:10;} s;
r = s.x = x;

LOL--indeed straightforward. And violates no "rules".  I've lost track of this debate; I wonder how the compiler carries it out, and how that compares to the "handmade" solutions proposed earlier.

 

CodeVision, with x and r as register variables (in low registers):

                ;int x; // convert this from using 10 bits to a full int
                 ;int r; // resulting sign extended number goes here
                 ;struct {signed int x:10;} s;
                 ;void main()
                 ; 0000 000B {
                 
                 	.CSEG
                 _main:
                 ; .FSTART _main
                 ;0000 000F x = 555;
00003f e2eb      	LDI  R30,LOW(555)
000040 e0f2      	LDI  R31,HIGH(555)
000041 012f      	MOVW R4,R30
                 ;0000 0010 r = s.x = x;
000042 70f3      	ANDI R31,HIGH(0x3FF)
000043 010f      	MOVW R0,R30
000044 e0a0      	LDI  R26,LOW(_s)
000045 e0b2      	LDI  R27,HIGH(_s)
000046 91ed      	LD   R30,X+
000047 91fd      	LD   R31,X+
000048 70e0      	ANDI R30,LOW(0xFC00)
000049 7ffc      	ANDI R31,HIGH(0xFC00)
00004a 29e0      	OR   R30,R0
00004b 29f1      	OR   R31,R1
00004c 93fe      	ST   -X,R31
00004d 93ee      	ST   -X,R30
00004e 013f      	MOVW R6,R30
 

With x and r in high registers:

                 ;	x -> R16,R17
                 ;	r -> R18,R19
                 ;	s -> Y+0
                +
000040 e20b     +LDI R16 , LOW ( 555 )
000041 e012     +LDI R17 , HIGH ( 555 )
                 	__GETWRN 16,17,555
                 ;0000 000E r = s.x = x;
000042 01f8      	MOVW R30,R16
000043 70f3      	ANDI R31,HIGH(0x3FF)
000044 010f      	MOVW R0,R30
000045 01de      	MOVW R26,R28
000046 91ed      	LD   R30,X+
000047 91fd      	LD   R31,X+
000048 70e0      	ANDI R30,LOW(0xFC00)
000049 7ffc      	ANDI R31,HIGH(0xFC00)
00004a 29e0      	OR   R30,R0
00004b 29f1      	OR   R31,R1
00004c 93fe      	ST   -X,R31
00004d 93ee      	ST   -X,R30
00004e 019f      	MOVW R18,R30

 

GCC/Studio 6.1:

00000038 <main>:
#include <avr/io.h>
int main(void)
{
  38:	cf 93       	push	r28
  3a:	df 93       	push	r29
  3c:	00 d0       	rcall	.+0      	; 0x3e <__SP_H__>
  3e:	00 d0       	rcall	.+0      	; 0x40 <__SREG__+0x1>
  40:	cd b7       	in	r28, 0x3d	; 61
  42:	de b7       	in	r29, 0x3e	; 62
volatile int x; // convert this from using 10 bits to a full int
volatile int r; // resulting sign extended number goes here
struct {signed int x:10;} s;

x = 555;
  44:	8b e2       	ldi	r24, 0x2B	; 43
  46:	92 e0       	ldi	r25, 0x02	; 2
  48:	9c 83       	std	Y+4, r25	; 0x04
  4a:	8b 83       	std	Y+3, r24	; 0x03
r = s.x = x;
  4c:	8b 81       	ldd	r24, Y+3	; 0x03
  4e:	9c 81       	ldd	r25, Y+4	; 0x04
  50:	26 e0       	ldi	r18, 0x06	; 6
  52:	88 0f       	add	r24, r24
  54:	99 1f       	adc	r25, r25
  56:	2a 95       	dec	r18
  58:	e1 f7       	brne	.-8      	; 0x52 <__SREG__+0x13>
  5a:	36 e0       	ldi	r19, 0x06	; 6
  5c:	95 95       	asr	r25
  5e:	87 95       	ror	r24
  60:	3a 95       	dec	r19
  62:	e1 f7       	brne	.-8      	; 0x5c <__SREG__+0x1d>
  64:	9a 83       	std	Y+2, r25	; 0x02
  66:	89 83       	std	Y+1, r24	; 0x01

 

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

Compare the "politically correct" bitfield solution with the simple sequence originally proposed by us pagans:

https://www.avrfreaks.net/comment...

	result = ADCW;
  46:	80 91 78 00 	lds	r24, 0x0078
  4a:	90 91 79 00 	lds	r25, 0x0079
	if (result & 0x0200) result |= 0xfc00;
  4e:	91 fd       	sbrc	r25, 1
  50:	9c 6f       	ori	r25, 0xFC	; 252
	PORTD = result;
  52:	8b b9       	out	0x0b, r24	; 11
	PORTD = result>>8;
  54:	89 2f       	mov	r24, r25
...

Same SBRC/ORI in both CodeVision and GCC.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

Last Edited: Fri. Oct 24, 2014 - 07:55 PM
  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

A re-read of the thread was "interesting".  ;)

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

I didn't read all 11 pages of this thread, only the first couple. Here's a portable branch-free approach:

int n = <10-bit input>;
n = (n ^ 0x200) - 0x200;

This converts it to offset binary, then subtracts the zero bias. It is completely portable (to satisfy the purists here) and works regardless of how wide an int is (it will of course work for int16_t, int32_t, or whatever). It should be just as efficient for AVR, since neither operation touches the lower byte.
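Worked through with a few raw codes from the 10-bit range:

/* raw 0x1FF (+511):  0x1FF ^ 0x200 = 0x3FF = 1023;  1023 - 512 = +511 */
/* raw 0x200 (-512):  0x200 ^ 0x200 = 0x000 =    0;     0 - 512 = -512 */
/* raw 0x3FF (  -1):  0x3FF ^ 0x200 = 0x1FF =  511;   511 - 512 =   -1 */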

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

Cool solution. Edit: It seems GCC (also avr-gcc) optimizes my solution and this solution to the same 3 assembler instructions on AVR, or 2 instructions on IA64.

Last Edited: Sat. Nov 1, 2014 - 08:18 PM
  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

Indeed.  Thank you @christop!  This is now and forever in my toolbox.

 

A neat link to other 'bit twiddling' hacks, too...

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

Have to make such binary operations always clear by examples:

 

10-bit signed number is b000000ji hgfedcba

Zero or positive number has j=0, negative number has j=1.

Example for zero or positive number:
value >= 0     b0000000i hgfedcba
constant 0x200 b00000010 00000000
XOR result     b0000001i hgfedcba
constant 0x200 b00000010 00000000
minus result   b0000000i hgfedcba

Example for negative number:
value < 0      b0000001i hgfedcba
constant 0x200 b00000010 00000000
XOR result     b0000000i hgfedcba
constant 0x200 b00000010 00000000
minus result   b1111111i hgfedcba

 

Yea, seems to work great!

In the beginning was the Word, and the Word was with God, and the Word was God.

Last Edited: Sun. Nov 2, 2014 - 08:44 PM
  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

This has been really interesting, folks.

 

Not being steeped in the mysteries of signed binary arithmetic, it has been a great learning event. Appreciate everyone's input.

 

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

So here is an additional bonus weed end problem for you all. Find the optimal solution to compute the average of 10 consecutive samples of the 10-bit signed numbers and give the result as a 16-bit signed number.
 

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

It is not clear which average you mean, for example:

- block average,

- moving average,

- average, where the newer values count more than the older

=> FIR or IIR filter.

So, instead of solving the "weed end" problem ;), I would start with the weekend problem first:

compute the moving average of 64 consecutive samples of 10-bit signed numbers and give the result as a 16-bit signed number.

Samples of 511 should lead to 32704,

samples of -512 should lead to -32768.

In the beginning was the Word, and the Word was with God, and the Word was God.

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

By average I meant the arithmetic mean. Of course the input data should allow arbitrary values, just as if the input were a somewhat noisy, real-time AD-converted signal. Would it be best to convert to the 16-bit signed format first, before adding a new number, or is there some other solution that performs better? For myself, I guess rehab is the best solution.
 

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

Neither problem is terribly difficult.

For the first, the hard part is deciding how to divide by ten.

Should one be fussy about rounding?
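For what it's worth, a sketch of the rounded version (the sum of ten sign-extended samples is at most 10 x 512, so it fits an int16_t comfortably):

#include <stdint.h>

/* arithmetic mean of 10 samples, rounded to nearest instead of truncated */
static inline int16_t mean10(int16_t sum)     /* sum of 10 sign-extended samples */
{
    return (int16_t)((sum >= 0) ? (sum + 5) / 10 : (sum - 5) / 10);
}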

For the second, the main problem is storage.

One will have to store 64 samples.

That is 80 or 128 bytes.

If one has the resources, the problem is not hard.

For such things, I'm more likely to use a decaying average.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

A simple moving average (SMA) and a weighted moving average would need to store a window's worth of samples, but an exponential moving average (EMA) requires only the previous average. I imagine that's what skeeve is referring to by "decaying average".
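A minimal sketch of that idea, with a 1/8 weight and plain division (to sidestep the signed-right-shift question raised elsewhere in the thread):

#include <stdint.h>

static int16_t ema;                            /* previous (decayed) average */

static inline int16_t ema_update(int16_t sample)
{
    ema += (int16_t)((sample - ema) / 8);      /* new = old + (sample - old)/8 */
    return ema;
}

The truncating division leaves a small residual offset, much like the rounding effects noted further down for the decaying filter.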

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

Correct.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

Often a decaying average is enough. But if you do not want samples older than a given window to have any influence at all, the simple moving average is better suited:

 

#include <stdint.h>

#define FILTER_SIZE 64
/* Simple Moving Average filter:
   inData is assumed to be in the range [-512; 511] (= 10 bit, sign extended);
   the return value is scaled by the factor FILTER_SIZE=64 relative to the input data */
int16_t filterSMA(int16_t inData)
{
    static int16_t buffer[FILTER_SIZE];
    static int16_t average;
    static uint16_t bufferIndex;
    static uint8_t startCounter;

    /* add the latest value to the average */
    average += inData;
    /* remove oldest value from the average */
    average -= buffer[bufferIndex];    
    
    /* store to buffer */
    buffer[bufferIndex] = inData;
    bufferIndex++;
    if (bufferIndex>=FILTER_SIZE)
    {
        bufferIndex = 0;
    }
    
    if (startCounter<FILTER_SIZE)
    {   /* special handling, if buffer is not completely filled (not settled) */
        startCounter++;
        return (int16_t)((int32_t)average*FILTER_SIZE/startCounter);  /* 32-bit intermediate: average*64 can exceed an int16_t */
    }
    else
    {
        /* buffer is filled, filter is settled */
        return average;
    }
}

 

A decaying filter would look like this:

 

#include <stdint.h>

#define FILTER_SIZE            64
#define FILTER_STRENGTH        16
/* Decaying filter:
   inData is assumed to be in the range [-512; 511] (= 10 bit, sign extended);
   the return value is scaled by the factor FILTER_SIZE=64 relative to the input data;
   Watch out for remaining offsets! Watch out for overflows, if FILTER_STRENGTH is too big! */
int16_t filterDec(int16_t inData)
{

    static int16_t average;
    static uint8_t startCounter;
    int32_t temp;
    
    if (startCounter<1)
    {   /* first value is the initialization for the average */
        startCounter++;
        average = inData*FILTER_SIZE;
    }
    temp = (int32_t)average*(FILTER_STRENGTH-1)+inData*FILTER_SIZE;
    average = (int16_t)(temp/FILTER_STRENGTH);
    return average;
}

 

The startCounter thing can be omitted if you do not care what happens during start-up.

Due to the integer division, the decaying filter will have some unwanted rounding effects.

If you have the memory, I propose the SMA filter, which needs only addition and subtraction.
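A hypothetical usage sketch, combining filterSMA with one of the sign-extension methods from earlier in the thread (adc_raw is a made-up name for whatever the application reads from the ADC):

int16_t sample     = (int16_t)((int)(adc_raw ^ 0x200u) - 0x200);  /* 10-bit code -> signed */
int16_t filtered64 = filterSMA(sample);                           /* scaled by 64 */
int16_t filtered   = filtered64 / 64;                             /* back to the input scale */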

 

In the beginning was the Word, and the Word was with God, and the Word was God.

Last Edited: Sun. Nov 9, 2014 - 01:17 AM
  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

I skimmed the thread and did not see the obvious way of sign extending in C -- sorry if I missed it.

 

static inline int8_t sign_extend8(uint8_t val, uint8_t bits)
{
    uint8_t shift = 8 - bits;            /* bits = number of valid low bits */
    return (int8_t)(val << shift) >> shift;
}

static inline int16_t sign_extend16(uint16_t val, uint8_t bits)
{
    uint8_t shift = 16 - bits;           /* bits = number of valid low bits */
    return (int16_t)(val << shift) >> shift;
}

 

However, since your value is 10 bits and we're on a little-endian 8-bit MCU, the fastest option is:

  uint8_t *arr = (uint8_t *)&val;
  arr[1] = sign_extend8(arr[1], 2);

Which should take only a few cycles (a shift left followed by an arithmetic shift right on the high byte), assuming the value is already in registers.

 

On your other questions: no, the bitwise ops (|, &, ^) do not care about signed vs unsigned.

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

Some might prove to be worthy of seeing; others might not.

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

LOL -- I thought we killed this pig.

 

 

static inline int16_t sign_extend16(uint16_t val, uint8_t bits)
{
    uint8_t shift = 16 - bits;
    return (int16_t)(val << shift) >> shift;
}

So, let's take OP's case, with the "fixed" problem of an AVR8 10-bit ADC result going to a 16-bit signed int. If the app is like mine, with continuous ADC sampling at, say, 200us each / 5ksps, the sequence I proposed in #3 is less than a microsecond.  How many cycles will your routine take?  Hmmm--with CodeVision and the Studio 6.2 simulator, I get 2506 cycles.  300us at 8MHz. (Remember that I might want to do this in my app every 200us, and do a bunch of other stuff in my app as well...)

 

EDIT:  Making the routine not inline cut the cycle count to a more-expected 150.  20us at 8MHz.  [It could be a simulator artifact but a quick run-through indicated a possible situation with ... wait for it ... SIGN EXTENDING the shift count in the loop.  LOL]

 

The very practical AVR8 solution from #3 is SBRC/ORI.  2 cycles if the operand(s) are in high registers.

 

Thus the objections to the political-correctness.  But, I respect your beliefs. Just sayin'.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

Last Edited: Thu. Jan 8, 2015 - 08:20 PM
  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0
(int8_t)(val << shift) >> shift

According to the C99 standard, the result of the >> operator is implementation-defined for negative signed values.

This means that we cannot rely on an arithmetic shift being used (i.e., on the MSB being copied).

Also, reinterpreting the int16_t value as an array of two uint8_t values makes the code dependent on the endianness of the machine.

 

So, I would prefer the solution of comments #3 or #104.

In the beginning was the Word, and the Word was with God, and the Word was God.

  • 1
  • 2
  • 3
  • 4
  • 5
Total votes: 0

skotti wrote:
According to the C99 standard, the result of the >> operator is implementation-defined for signed negative numbers.

This means, that we can not rely on the experience, that an arithmetic shift must be used (MSB is copied).

static_assert((-1) >> 1 == -1, "arithmetic right shift assumed");

That said, the algorithm is really ugly and a tad obscure.

Quote:
So, I would prefer the solution of comments #3 or #104.
#3 is not as good as the algorithm OP started with. #1's use of 16-bit arithmetic is explicit and twos-complement is pretty much a given for AVRs.

I would change the presentation a little:

#include "20X.h"
...
int16_t result;
...
//#define SIGN_MASK 0x200
//#define SIGN_MASK (1<<9)
#define SIGN_MASK (1<<(SIGN_BITS-1))
if(result & SIGN_MASK) result |= -SIGN_MASK;

#104 would be good on a machine for which the absence of conditional code mattered.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles

Last Edited: Sat. Jan 10, 2015 - 04:04 PM