On Tue, Jun 28, 2005 at 07:02:49PM +0200, Gabriel Dos Reis wrote:
> | Since behavior on integer overflow is undefined, we can optimize
> | assuming that overflow has not occurred.  Then a > c, so the for loop
> | always executes b+1 times, and we end up with
> |
> |     if (b > 0)
> |       some_func(b+1);
> |
> | Any attempt to assign meaning to integer overflow would prevent this
> | optimization.
>
> We document that
>
>     a = (int) ((unsigned) b + c)
>
> is well-defined and given by the wrapping semantics.  Does the current
> optimizer take that into account, or will it assume b+1 execution times?
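The loop being discussed is not quoted in this excerpt; the following is a hypothetical reconstruction in its spirit. `trip_count` and its parameters are made-up names. The point is that, because signed overflow is undefined, the compiler may assume `x + b` does not wrap, so for `b > 0` the trip count is exactly `b + 1` and the whole loop can be collapsed to a single `some_func(b+1)` call:

```cpp
#include <cassert>

// Hypothetical reconstruction of the optimization described above.
// Returns the trip count of a loop bounded by the signed expression x + b.
int trip_count(int x, int b) {
    int n = 0;
    // Signed overflow is undefined, so the compiler may assume x + b
    // does not wrap; under that assumption the loop runs b + 1 times
    // whenever b >= 0, and the counting can be folded away entirely.
    for (int i = x; i <= x + b; ++i)
        ++n;
    return n;
}
```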
C/C++ require unsigned arithmetic to be modulo, and I think it is
perfectly appropriate to define the cast from unsigned to int to assume
two's complement behavior.  But if unsigned variables are involved, in my
example the compiler is forced to produce worse code (it must cover the
case of wraparound).

> If the optimizer takes that into account, then the question becomes
> when do we consider breaking the ABI to switch numeric_limits<signed
> type>::is_modulo back to old behaviour.

I think that defining signed types as is_modulo is broken, but I'm not
sure what consequences follow from this problem (e.g. what kind of user
code is using this feature, and for what).
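A minimal sketch of the well-defined detour through unsigned that the quoted mail describes (`wrapping_add` is a made-up name). The unsigned addition itself can never overflow, since unsigned arithmetic is modulo 2^N by definition; the conversion of an out-of-range result back to `int` is implementation-defined in the standards of the time, and this sketch assumes the two's complement behavior proposed above:

```cpp
#include <climits>
#include <limits>

// The standard guarantees that unsigned arithmetic wraps modulo 2^N,
// i.e. numeric_limits<unsigned>::is_modulo is always true.
static_assert(std::numeric_limits<unsigned>::is_modulo,
              "unsigned arithmetic must be modulo");

// Signed addition with wrapping semantics, expressed via unsigned
// arithmetic.  The addition is well-defined; the cast back to int
// relies on the (assumed) two's complement conversion discussed above.
int wrapping_add(int b, int c) {
    return (int)((unsigned)b + (unsigned)c);
}
```

On a two's complement implementation, `wrapping_add(INT_MAX, 1)` yields `INT_MIN` instead of invoking undefined behavior, which is exactly the case the signed version cannot promise.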