failure to truncate to 16 bits when -flto is on

void setup() {
  Serial.begin(9600);
}

void loop() {
  int i = 1;
  while (i > 0) {
    Serial.println((int16_t)i);
    i += 10000;
    delay(1000);
  }
  Serial.println(i);
}

 

The above sketch prints ever-increasing 32-bit numbers when compiled with -flto, and the expected wrap-to-negative numbers when compiled without -flto.

Is this considered a bug, or does it fall under the "signed integer overflow is undefined, so we can do whatever we want" clause?

 

(I sort of leaned toward the latter without the (int16_t) cast, but I can't figure out an excuse for the cast not truncating the argument to 16 bits.)

(Note that Serial.println() is defined as having a "long" argument)

 

(Yes, it's an Arduino sketch. Same behavior with 1.8.9 (which has gcc 5.4) and the very recent 1.8.13 (with gcc 7.3.0).)

 

Last Edited: Wed. Oct 21, 2020 - 10:50 PM

Test your theory with a fixed example...
long i = 70000;
Serial.println((int16_t)i);
Serial.println(i);

Last Edited: Thu. Oct 22, 2020 - 05:10 AM
ldi r24,lo8(1)
ldi r25,0
rcall Serial_println(int)
ldi r24,lo8(17)
ldi r25,lo8(39)
rcall Serial_println(int)
ldi r24,lo8(33)
ldi r25,lo8(78)
rcall Serial_println(int)
ldi r24,lo8(49)
ldi r25,lo8(117)
rcall Serial_println(int)

Interesting: so under -Os the loop gets unrolled into straight-line code. (https://godbolt.org/ doesn't seem to support -flto.)

 


I would say the compiler is at fault, although a compiler writer could most likely claim 'not my problem' with what is going on.

 

In the process of going from an int (signed 16 bits), to being cast into a long (signed 32 bits), and ultimately ending up as an unsigned long in ::printNumber, I would guess the fact that 'i' was originally an int is somehow lost in the optimization process. If you look at the generated code, 'int i' is treated as an unsigned long when the += 10000 takes place.


Serial.print((int16_t)70000) also “fails”...  (prints 70000)

 

In the -flto case, Serial.print() gets inlined as well... it ends up being very hard to read.

 (and godbolt would need all that source code as well, I think.)


If we examine the Arduino source code for the Print class: https://github.com/arduino/ArduinoCore-API/blob/master/api/Print.cpp

 

We find this uncontroversial code:

/* Print.h */
size_t print(int, int = DEC);


/* Print.cpp */
size_t Print::print(int n, int base)
{
  return print((long) n, base);
}

So when inlining that single line, we find we can actually pass a long. I wonder if the compiler wrongly assumes it can omit the down-cast to int16_t and also the immediately following up-cast to long.

 


westfw wrote:
it's an Arduino sketch

For an 8-bit or 32-bit Arduino?

 

Does it make any difference?

Top Tips:

  1. How to properly post source code - see: https://www.avrfreaks.net/comment... - also how to properly include images/pictures
  2. "Garbage" characters on a serial terminal are (almost?) invariably due to wrong baud rate - see: https://learn.sparkfun.com/tutorials/serial-communication
  3. Wrong baud rate is usually due to not running at the speed you thought; check by blinking a LED to see if you get the speed you expected
  4. Difference between a crystal, and a crystal oscillator: https://www.avrfreaks.net/comment...
  5. When your question is resolved, mark the solution: https://www.avrfreaks.net/comment...
  6. Beginner's "Getting Started" tips: https://www.avrfreaks.net/comment...

A quick fix to keep the int from getting sucked into the 'long' black hole:

 

[[ gnu::noinline ]]
size_t Print::print(int n, int base)
{
  return print((long) n, base);
}

 

but now you get another problem: your 'while (i > 0)' is always true (according to the asm listing).


I'm convinced you've tickled a bug (or more).  It took me almost 15 minutes to get an example program that will reproduce your problem, and in the process I found another avr-gcc bug.

I'll start with the code that reproduces your bug:

#include <avr/io.h>
#include <stdint.h>

void foo(long l)
{
    PORTB = ((l >> 12) | l) & 0xFF;
}

void bar(int i) { foo ( (long) i); }

int main()
{
    int i = 1;
    while (1) {
        bar((int16_t) i);
        i += 10000;
        PORTB = 0;
    }
}

With avr-gcc 5.4.0, -Os -flto, I get:

 

00000030 <main>:
  30:   81 e0           ldi     r24, 0x01       ; 1
  32:   90 e0           ldi     r25, 0x00       ; 0
  34:   a0 e0           ldi     r26, 0x00       ; 0
  36:   b0 e0           ldi     r27, 0x00       ; 0
  38:   ac 01           movw    r20, r24
  3a:   bd 01           movw    r22, r26
  3c:   2c e0           ldi     r18, 0x0C       ; 12
  3e:   75 95           asr     r23
  40:   67 95           ror     r22
  42:   57 95           ror     r21
  44:   47 95           ror     r20
  46:   2a 95           dec     r18
  48:   d1 f7           brne    .-12            ; 0x3e <__SP_H__>
  4a:   48 2b           or      r20, r24
  4c:   48 bb           out     0x18, r20       ; 24
  4e:   18 ba           out     0x18, r1        ; 24
  50:   80 5f           subi    r24, 0xF0       ; 240
  52:   98 4d           sbci    r25, 0xD8       ; 216
  54:   af 4f           sbci    r26, 0xFF       ; 255
  56:   bf 4f           sbci    r27, 0xFF       ; 255
  58:   ef cf           rjmp    .-34            ; 0x38 <main+0x8>

Most of us are aware of C's annoying int promotion rules, which sometimes cause avr-gcc to use a 16-bit value even though you specified 8-bit.  However, there is no rule that allows the compiler to promote an int to a long.  I suppose one could argue the compiler is free to implement it as 32 bits, as long as it works the same as if it were 16 bits.  But before I get to that, I want to explain the reason for PORTB = 0 in the loop.  If you comment it out, here's what you'll get:

00000030 <main>:
  30:   81 e0           ldi     r24, 0x01       ; 1
  32:   88 bb           out     0x18, r24       ; 24
  34:   83 e1           ldi     r24, 0x13       ; 19
  36:   88 bb           out     0x18, r24       ; 24
  38:   85 e2           ldi     r24, 0x25       ; 37
  3a:   88 bb           out     0x18, r24       ; 24
  3c:   87 e3           ldi     r24, 0x37       ; 55
  3e:   88 bb           out     0x18, r24       ; 24

And no, I didn't make any mistake.  The compiler actually removed the loop!  If I compile without lto, it does the loop.  I still find it hard to believe avr-gcc actually does something so stupid.  I re-compiled this several times, checking timestamps, switching on and off lto because at first I thought I must've made a mistake.  I'm used to finding subtle bugs in gcc, but nothing like this.

 

Going back to the original problem, I wrote a modified test program with i inside a struct, in which case gcc only allocates 2 bytes in the data section.  I used an asm memory clobber in the loop to force gcc to save/restore i on each iteration, and it still uses 4 registers in the loop when lto is enabled.  Only the lower 16 bits are saved and restored.  Without lto, it does the more intelligent thing and uses just 2 registers instead of 4.

 

I suspect what's going on is that gcc internally uses 32-bit values for constant propagation & constant folding for LTO.  I highly doubt avr-gcc will see a patch to improve the lto performance.  I even doubt the serious bug I found with the while(1) loop disappearing will get fixed.  And yet Microchip will continue to market the AVR line to the automotive sector, where you need not only robust and reliable hardware, but also a robust and reliable compiler.

 

p.s. Maybe now more people will understand one of the reasons I like writing in asm for AVR targets.   You don't have to coerce or trick the assembler into generating the code you really want.

 

p.p.s. For anyone trying to test the above code: I only tried it in C++ mode, since Bill's Arduino code would be compiled in C++ mode.  In C mode, gcc sometimes gives different behavior.

 

 

I have no special talents.  I am only passionately curious. - Albert Einstein

 

Last Edited: Fri. Oct 23, 2020 - 11:32 AM

Try this......

Serial.println( ((int16_t)i) );

Notice the extra brackets! That worked for me once in a similar issue. Basically it tells the compiler to cast first!

Last Edited: Thu. Oct 22, 2020 - 02:36 PM

12oclocker wrote:
I suspect what's going on is that gcc internally uses 32-bit values for constant propagation & constant folding for LTO.  I highly doubt avr-gcc will see a patch to improve the lto performance.  I even doubt the serious bug I found with the while(1) loop disappearing will get fixed.  And yet Microchip will continue to market the AVR line to the automotive sector, where you need not only robust and reliable hardware, but also a robust and reliable compiler.
Perhaps if it were publicized on an automotive software website, it would get more attention.

Iluvatar is the better part of Valar.


This is code with undefined behavior, full stop, since signed integer overflow is not defined in C. The compiler is free to do anything it wants, or more likely, whatever it just so happens to produce when it makes optimization assumptions that hold for valid code. For example, the compiler can assume that i > 0 always, since i starts with a positive number and is only incremented, so printing "40001" after i "should have" overflowed is valid (printing "potato" would be equally valid).

 


A real issue is how far back the license from undefined behavior reaches.

Also, what should a compiler do when performing optimizations based on undefined behavior? Methinks that deserves at least a warning.

If the compiler can legitimately remove the entire loop, why not the entire body of main?

 

Somewhere in the bowels of advice for GNU developers was the statement that they need not consider the possibility that int will be smaller than 32 bits. Whether it is still there, I do not know.

Iluvatar is the better part of Valar.


 The compiler is free to do anything it wants

This is what I predicted over in the Arduino forums:

compiler authors seem to delight in allowing "unexpected" behavior for common situations, and then claiming "behavior is undefined!  It's perfectly legal for me to do that!"  Sigh.

To claim that a cast of a supposedly 16-bit quantity to a supposedly 16-bit quantity can yield a 32-bit result is stretching credulity, though.

 

 

Ralphd: thanks for coming up with a simpler demo case!

 

Serial.print((int16_t)70000) also “fails”...  (prints 70000)

I can't reproduce that any more  :-(  Perhaps I was mis-remembering, perhaps something mysterious changed.

 

Last Edited: Fri. Oct 23, 2020 - 02:12 AM

What results do you get if compiled with -fwrapv, thus making signed integer overflow defined?


This problem isn't about signed integer overflow per se.

As I hinted at in #6 and Ralph demonstrated in #9, it's because the link-time optimizer/compiler has chosen to implement an int16_t variable as int32_t.

 

 


MrKendo wrote:

What results do you get if compiled with -fwrapv, thus making signed integer overflow defined?

 

With -fwrapv, the bug I found where gcc silently omits the while(1) loop is "fixed".  Judging by the asm, it looks like it wraps at 16 bits, as expected.  And it only uses 16 bits for i (r19:r18), and casting int to long results in a sign extend of the integer (into r27:r26).

00000030 <main>:
  30:   21 e0           ldi     r18, 0x01       ; 1
  32:   30 e0           ldi     r19, 0x00       ; 0
  34:   c9 01           movw    r24, r18
  36:   03 2e           mov     r0, r19
  38:   00 0c           add     r0, r0
  3a:   aa 0b           sbc     r26, r26
  3c:   bb 0b           sbc     r27, r27
  3e:   4c e0           ldi     r20, 0x0C       ; 12
  40:   b5 95           asr     r27
  42:   a7 95           ror     r26
  44:   97 95           ror     r25
  46:   87 95           ror     r24
  48:   4a 95           dec     r20
  4a:   d1 f7           brne    .-12            ; 0x40 <__SREG__+0x1>
  4c:   82 2b           or      r24, r18
  4e:   88 bb           out     0x18, r24       ; 24
  50:   20 5f           subi    r18, 0xF0       ; 240
  52:   38 4d           sbci    r19, 0xD8       ; 216
  54:   ef cf           rjmp    .-34            ; 0x34 <main+0x4>

N.Winterbottom wrote:

This problem isn't about signed integer overflow per se.

As I hinted at in #6 and Ralph demonstrated in #9, it's because the link-time optimizer/compiler has chosen to implement an int16_t variable as int32_t.

 

Actually, the test above shows it doesn't use 32 bits for i with -flto -fwrapv.  This supports Bill's theory of a snobbish/lazy gcc developer deciding they can do whatever they want, without so much as a warning, if you do something the standard says is undefined.  Note that I tried every flag that should have resulted in a warning: -Wall -Wpedantic -pedantic-errors -Werror=pedantic -Wstrict-overflow -Wconversion -Wsign-conversion

 

I have no special talents.  I am only passionately curious. - Albert Einstein

 


ralphd wrote:

00000030 <main>:
  30:   81 e0           ldi     r24, 0x01       ; 1
  32:   88 bb           out     0x18, r24       ; 24
  34:   83 e1           ldi     r24, 0x13       ; 19
  36:   88 bb           out     0x18, r24       ; 24
  38:   85 e2           ldi     r24, 0x25       ; 37
  3a:   88 bb           out     0x18, r24       ; 24
  3c:   87 e3           ldi     r24, 0x37       ; 55
  3e:   88 bb           out     0x18, r24       ; 24

And no, I didn't make any mistake.  The compiler actually removed the loop!  If I compile without lto, it does the loop.  I still find it hard to believe avr-gcc actually does something so stupid.  I re-compiled this several times, checking timestamps, switching on and off lto because at first I thought I must've made a mistake.  I'm used to finding subtle bugs in gcc, but nothing like this.

 

That's insane; I had to test it for myself to believe it. It seems gcc tried to unroll an infinite loop and failed, outputting semi-random results. The output varies if you use other numbers instead of 10000; the loop comes back once the number is less than 2^13 (<8192).

I'm sure "C standards lawyers" will say "Perfectly legit behaviour, you are lucky the compiler didn't wipe out your OS 🧐" or something like that.

 

 

Edit: well, my conclusion is that you should always(?) use -fwrapv with -flto. And avoid undefined behaviour (I always try to, but sometimes it slips...).

Last Edited: Fri. Oct 23, 2020 - 01:08 PM

El Tangas wrote:

 

That's insane; I had to test it for myself to believe it. It seems gcc tried to unroll an infinite loop and failed, outputting semi-random results. The output varies if you use other numbers instead of 10000; the loop comes back once the number is less than 2^13 (<8192).

 

I'm sure "C standards lawyers" will say "Perfectly legit behaviour, you are lucky the compiler didn't wipe out your OS 🧐" or something like that.

 

That's where I'm convinced the gcc developers have really got it wrong.  No sensible standards organization like ISO would allow "undefined behavior" to mean "all bets are off" for the whole program.  We are talking about a standard that is used for things like medical devices.  Signed integer overflow being undefined means that particular behavior is not defined: it could wrap, it could peg to INT_MAX, etc.  Everything else in the program that is well-defined stays well-defined.  What the holy gnu developers have decided is that one single line of code that the standard says is undefined renders everything else undefined as well.

"In contrast, the C standard says that signed integer overflow leads to undefined behavior where a program can do anything, including dumping core or overrunning a buffer. The misbehavior can even precede the overflow."

https://www.gnu.org/software/aut...

 

I have no special talents.  I am only passionately curious. - Albert Einstein

 

Last Edited: Fri. Oct 23, 2020 - 02:41 PM

Yea ... due to a linter after enabling MISRA C++.

Guidelines for the use of the C++14 language in critical and safety-related systems (AUTOSAR)

[bottom of page 308]

5-2-4 (Required) C-style casts (other than void casts) and functional notation casts (other than explicit constructor calls) shall not be used.

The alternate ways?

Explicit type conversion - cppreference.com

 


PC-lint Plus Online Demo - Gimpel Software - The Leader in Static Analysis for C and C++ with PC-lint Plus (based on Clang)

 

"Dare to be naïve." - Buckminster Fuller


Really nasty behavior. Anyway, I've got something that makes it work even with -flto, and when it's commented out, it stops working again. Also, String::operator+(int) seems to work every time:

 

void setup() {
  Serial.begin(115200);
}
size_t (Stream::*nasty_print)(int,int) = &Stream::print;

void print_z(int16_t value) {
  Serial.print(static_cast<int16_t>(value));
}

void loop() {
  int16_t i = 1;
  while (i > 0) {
    Serial.print(String{} + i);  // works anytime
    Serial.print(' ');

    // comment out next line and rest of it stops working:
    ((&Serial)->*nasty_print)(i,DEC); // works and causes others to work too, but if it's commented, rest stops working
    // (Serial.*nasty_print)(i,DEC);  // bit more clear than previous

    Serial.print(' ');
    Serial.print(i, DEC); // nope
    Serial.print(' ');
    print_z(i);           // nope
    Serial.println();

    i += 10000;
    delay(1000);
  }
}

So it seems to be just selecting the wrong overload of the print/println method.

And maybe the indirect call through a pointer to member function is what kept it from being optimized by -flto...

 

Edit: tried it on ARM aaaand... It is wrong on AVR even without -flto

 

Computers don't make errors - What they do they do on purpose.

Last Edited: Fri. Oct 23, 2020 - 06:19 PM

The way to do what the OP seems to want without undefined behavior is to make i unsigned and cast it to int for the comparison. For values outside the common range, the conversion is implementation-defined.

Iluvatar is the better part of Valar.


So ultimately this is simply about an int16_t to uint32_t conversion, where the compiler looks ahead, and if it can 'see' an int overflow in the future it does the conversion only once and then treats the converted value as a uint32_t from then on. Not quite sure why they chose to treat the int16_t to uint32_t conversion differently from an int8_t to uint16_t conversion (which does seem to take place no matter what). It also appears to be an early optimization, since it shows up at -O1.

 

Other gcc targets like arm/x86 seem to wrap these things by default, and you simply get the conversion where required, each time.

 

avr gives up doing the int16_t to uint32_t conversion, but does the conversion for int8_t to uint16_t:

https://godbolt.org/z/845Mes

 

arm does the conversion each time:

https://godbolt.org/z/Kv7bdf

 


Did anyone try this yet...
Serial.println( ((int16_t)i) );
With double brackets; interested to know the result.


>interested to know the result

 

Doesn't change anything (and I did try it). 

 

Anything that forces the use of the calling convention (a call) will 'fix' the problem (such as turning off lto, marking print(int) as noinline, etc.), since the calling convention does the conversion via the function parameters. But i is still treated as a uint32_t, since the compiler gave up doing any conversion in the loop once it saw the loop has no end (it considers i > 0 always true).

 

Even when the loop can end, e.g. while (i < 31000), the compiler will still only do the conversion once, as there will be no overflow and the uint32_t value will always be correct. This is probably where this 'problem' originates: the one-time conversion is always correct UNLESS there is overflow, and apparently avr-gcc does not consider that its problem and is happy with the one-time conversion. If you throw anything into the mix that keeps the compiler from figuring out what i may be, you get the conversions taking place as expected.


It turns out this is an old gripe.

https://gcc.gnu.org/bugzilla/sho...

 

Back in 2007 Johannes Stezenbach noted the C99 standard says:

"An implementation that defines signed integer types as also being modulo need not detect integer overflow, in which case, only integer divide-by-zero need be detected."

 

I checked, and it's still in Annex H (H.2.2) for C17.

 

This means the standard permits a compiler to implement signed integers to wrap.  And for most modern CPUs that's the simplest thing to do, since integers are implemented in two's complement notation.  But since the standard says the behavior is undefined, one or two people controlling commits to gcc decided to fuck with users.  Early versions of GCC wrapped signed integers, which is permitted (but not required) by the standard.  Many programmers (albeit incorrectly) assumed that signed integers wrap just like unsigned integers.  Then a couple of guys maintaining gcc realized the standard permits them to break programs using signed overflow, so they did it.  When people complained, the gcc dudes, in so many words, said, "Ha! The standard allows me to break your programs, so it's your fault.  Sucks to be you, doesn't it?"

 

I've decided to try to contact the ISO/IEC committee and request that undefined behavior be removed where there is a reasonable and simple implementation option, and that those currently undefined behaviors be changed to "implementation-defined behavior" or "unspecified behavior".  "Undefined behavior" should be reserved for truly dangerous code, such as random pointer access.  The C17 standard has 9 pages of undefined behaviors in Annex J.2, so I'm willing to bet there's a lot more than just signed integer overflow that could be removed from the list.  And to be precise, overflow isn't explicitly mentioned in J.2.  It's in the example in the "undefined behavior" definition in section 3.4.3, where the standard says:

"An example of undefined behavior is the behavior on integer overflow."

And considering that unsigned integer overflow is well-defined (out-of-range results are silently reduced modulo 2^N, per 6.2.5), the example statement is technically wrong.

 

Does anyone know how to contact committee members?  Yesterday I left a comment on Jens Gustedt's blog asking for his email address to provide feedback, but so far have not received a response.

 

I have no special talents.  I am only passionately curious. - Albert Einstein

 


ralphd wrote:
Then a couple of guys maintaining gcc realized the standard permits them to break programs using signed overflow, so they did it.  When people complained, the gcc dudes, in so many words, said, "Ha! The standard allows me to break your programs, so it's your fault.  Sucks to be you, doesn't it?"

You've missed out what comes next

"But if you want signed overflow to be defined to wrap, we give you an option to make our compiler do precisely that"

 

Seems reasonable enough to me.

As you've said, leaving something undefined in the standard doesn't prevent an implementation from making it defined, like with the -fwrapv option for gcc. It just doesn't force an implementation to do so.

 


MrKendo wrote:

ralphd wrote:
Then a couple of guys maintaining gcc realized the standard permits them to break programs using signed overflow, so they did it.  When people complained, the gcc dudes, in so many words, said, "Ha! The standard allows me to break your programs, so it's your fault.  Sucks to be you, doesn't it?"

You've missed out what comes next

"But if you want signed overflow to be defined to wrap, we give you an option to make our compiler do precisely that"

 

Seems reasonable enough to me.

As you've said, leaving something undefined in the standard doesn't prevent an implementation from making it defined, like with the -fwrapv option for gcc. It just doesn't force an implementation to do so.

 

 

No, I didn't miss that.  The point is they could've left it the way it was, with signed ints wrapping, before the -fwrapv option was added.  The change broke existing code, sometimes in subtle ways, and it was done before good ubsan tools were available to help find such breakage in large projects.

If there had been a material performance benefit to changing the behavior to undefined, and the people in charge of gcc had first made the change opt-in (i.e. via a -fno-wrapv flag), that would have been reasonable.

 

 

I have no special talents.  I am only passionately curious. - Albert Einstein

 


ralphd wrote:
No, I didn't miss that.  The point is they could've left it the way it was, with signed ints wrapping, before the -fwrapv option was added.  The change broke existing code, sometimes in subtle ways, and this was done before there was good ubsan tools available to help find it in large projects.

I see what you're saying. Can be argued either way.

I think the lesson is, whenever changing to a new compiler version, you always have to be very paranoid that something in your existing code might break.


MrKendo wrote:
ralphd wrote:
Then a couple of guys maintaining gcc realized the standard permits them to break programs using signed overflow, so they did it.  When people complained, the gcc dudes, in so many words, said, "Ha! The standard allows me to break your programs, so it's your fault.  Sucks to be you, doesn't it?"

You've missed out what comes next

"But if you want signed overflow to be defined to wrap, we give you an option to make our compiler do precisely that"

 

Seems reasonable enough to me.

The real suckage comes from the lack of a warning.

Iluvatar is the better part of Valar.