I have carefully scoured K&R and cannot find anything that explains what happens when one "down-casts" from an int16 or uint16 to the corresponding int8 or uint8. I was quite surprised not to find anything! Likewise for int32 to int16.
Here is the situation:
I have an int32_t on which I have done several operations (specifically ">>16") that guarantee the result fits in 16 bits. Now I want to use it as an int16_t.
Which half of the larger type ends up in the smaller type? I have always assumed it is the low half, but I just realized that I have no basis for that assumption.
I've also looked at a list file, but at the moment it seems pretty incomprehensible to me.
In a similar vein, where can I find the rules for sign extension? K&R uses the term, but I cannot find a definition of it or of the conditions under which it happens.
Thanks for your help,