On 29 Dec 2006 07:55:59 -0800, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:
> Paul Eggert <[EMAIL PROTECTED]> writes:
>> * NEWS: AC_PROG_CC, AC_PROG_CXX, and AC_PROG_OBJC now take an
>> optional second argument specifying the default optimization
>> options for GCC. These optimizations now default to "-O2 -fwrapv"
>> instead of to "-O2". This partly attacks the problem reported by
>> Ralf Wildenhues in
>> <http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html>
>> and in <http://gcc.gnu.org/ml/gcc/2006-12/msg00459.html>.
>
> I fully appreciate that there is a real problem here which needs to be
> addressed, but this does not seem like the best solution to me. A
> great number of C programs are built using autoconf. If we make this
> change, then they will all be built with -fwrapv. That will disable
> useful loop optimizations, optimizations which are enabled by default
> by gcc's competitors. The result will be to make gcc look worse than
> it is.
>
> You will recall that the problem with the original code was not in the
> loop optimizers; it was in VRP. I think we would be better served by
> changing VRP to not rely on undefined signed overflow. Or, at least,
> to not rely on it without some additional option.
Actually, I seriously disagree with both patches.
Nobody has yet shown that any significant number of programs actually
rely on this undefined behavior. All anyone has shown is that we have
one program that does, and that some people can come up with loops
that break if you make signed overflow undefined.
OTOH, people who rely on signed overflow being wraparound generally
*know* they are relying on it.
Given this seems to be some small number of people and some small
amount of code (since nobody has produced any examples showing this
problem is rampant, in which case I'm happy to be proven wrong), why
don't they just compile *their* code with -fwrapv?
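A minimal sketch of what "compile *their* code with -fwrapv" looks like in practice (the file and variable names below are illustrative):

```shell
# Add -fwrapv only to the CFLAGS of the one package that needs it,
# rather than changing autoconf's default for everyone:
./configure CFLAGS="-g -O2 -fwrapv"

# Or, in a hand-written GNU Makefile, for a single translation unit:
#   overflow_heavy.o: CFLAGS += -fwrapv
```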
I posted numbers the last time this discussion came up, from both GCC
and XLC, that showed that making signed overflow wraparound can cause
up to a 50% performance regression in *real world* mathematical
Fortran and C codes due to not being able to perform loop
optimizations.
Note that these were not just *my* numbers, this is what the XLC guys
found as well.
In fact, what they told me was that since they made their change in
1991, they have had *1* person who reported a program that didn't
work.
This is just the way the world goes. Wrapping signed overflow
completely ruins dependence analysis, interchange, fusion,
distribution, and just about everything else. Hell, you can't even do
a good job of unrolling, because you can't estimate loop bounds
anymore.
I'll also point out that *none* of the codes that rely on signed
overflow wrapping will work on any *other* compiler either, as they
all optimize on the same assumption.
Most even treat *unsigned* overflow as undefined in loops at high
optimization levels (XLC does it at -O3+), and warn that they are
doing so, because this gives them an additional 20-30% performance
benefit, in particular on 32-bit Fortran codes that are now run on
64-bit computers: the induction variables are usually still 32-bit,
but they have to be widened to 64-bit to index into arrays. Without
assuming that unsigned integer overflow is undefined for ivs, they
can't do *any* iv-related optimization here, because the wraparound
point would change. Since XLC made this change in 1993, they have had
2 bug reports out of hundreds of thousands that were attributable to
doing this.
I believe what we have here is a very vocal minority. I will continue
to believe so until someone provides real-world counterevidence that
people do, and *need to*, rely on signed overflow being wraparound to
a degree that justifies disabling the optimization.
--Dan