I'd expected that this topic would've been discussed before, but I can't locate any matching posts.
I fear I made a stupid programming mistake a few years back: I cast an unsigned int variable to a signed long before doing some calculations.
unsigned int SomeVar1; // 16-bit unsigned variable
unsigned int SomeVar2; // 16-bit unsigned variable
signed long  TempVar;  // 32-bit signed variable
...
TempVar = (signed long)SomeVar1 + (signed long)SomeVar2;
This worked when I originally wrote it, and I never thought about the potential hazard.
Now the code has been reused in a new product, and I suspect that it no longer works as I expected. The compiler has changed over the years, and I fear that this is the cause of my problem.
When I cast an unsigned int variable to a signed long, what happens first? Is the value expanded to 32 bits first and then treated as signed, or is it the opposite? Say the original value is 65436 (0xFF9C). If the value is first expanded to 32 bits and afterwards treated as signed, nothing changes. But if the 16-bit value is reinterpreted as signed before it is expanded, it becomes -100.
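To make the two interpretations concrete, here is a small test I could run on a desktop compiler. It uses fixed-width types from <stdint.h> as stand-ins for the 16-bit int / 32-bit long sizes on my original target, so the variable names and sizes here are my assumptions, not the real code:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t u = 65436u;                  /* stands in for the 16-bit unsigned int */

    int32_t widened       = (int32_t)u;   /* expand first: value is preserved */
    int32_t reinterpreted = (int16_t)u;   /* treat the 16 bits as signed first, then
                                             expand; implementation-defined, but -100
                                             on typical two's-complement compilers */

    printf("expand first:      %ld\n", (long)widened);       /* prints 65436 */
    printf("reinterpret first: %ld\n", (long)reinterpreted); /* typically -100 */

    return 0;
}

If the compiler performs the value-preserving conversion for the actual (signed long) cast, the first line is what my original code relied on.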
In the ANSI C book by K&R, I found the following text in A6.2: "When any integer is converted to a signed type, the value is unchanged if it can be represented in the new type and is implementation-defined otherwise." If the cast counts as a single value conversion, then 65436 fits in a 32-bit signed long and should be unchanged; but I can't really tell if that answers my question, because A6.2 doesn't specifically mention variable size expansion. At least, I don't think it does. The wording in the parts of the section that I haven't quoted reminds me of the nearly unreadable FDA standards...
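Reading the quote that way, the two clauses could be tested directly. This sketch is only my guess at what A6.2 means, again using fixed-width stand-ins for the sizes on my target:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* "value is unchanged if it can be represented in the new type":
       every 16-bit unsigned value (0..65535) fits in a 32-bit signed
       long, so this conversion should always preserve the value. */
    uint16_t small = 65436u;
    int32_t a = (int32_t)small;
    printf("fits:         %ld\n", (long)a);   /* 65436 */

    /* "...implementation-defined otherwise": 4294967196 (0xFFFFFF9C)
       does not fit in a 32-bit signed type, so the result is up to
       the compiler; two's-complement targets typically give -100. */
    uint32_t big = 4294967196u;
    int32_t b = (int32_t)big;
    printf("does not fit: %ld\n", (long)b);   /* typically -100 */

    return 0;
}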
Can anyone help?