Ian Lance Taylor <[EMAIL PROTECTED]> writes:

> Also, it does not make sense to me to lump together all potentially
> troublesome optimizations under a single name.

As a compiler developer, you see the trees.  But most users just see a
forest and want things to be simple.  Even adding a single binary
switch (-fno-risky/-frisky) will be an extra level of complexity that
most users don't particularly want to know about.  Requiring users to
worry about lots of little switches (at least -fwrapv/-fundefinedv/-ftrapv,
-fstrict-signed-overflow/-fno-strict-signed-overflow,
-fstrict-aliasing/-fno-strict-aliasing, and probably more) makes GCC
less convenient to use and makes things more likely to go wrong in
practice.

That being said, I guess I wouldn't mind the extra complexity if
-fno-risky were the default at -O2.  The default would be simple,
which is good enough.

> I don't really see how you move from the needs of "many, many C
> applications" to the autoconf patch.  Many, many C applications do not
> use autoconf at all.

Sure, and from the GCC point of view it would be better to address
this problem at the GCC level, since we want to encourage GCC's use.

However, we will probably want to address this at the Autoconf level
too, in some form, since we also want to encourage GNU software to be
portable to other compilers that share this problem, such as icc and
xlc.

I think we will want to address the problem at the Gnulib level too,
since we want it to be easier to write software that is portable even
to compilers, or compiler options, under which signed overflow has
undefined behavior -- this will help answer the question "What do I do
when -Warnv complains?".
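
For example, the Gnulib advice might look something like this (a
rough sketch; the function names are mine, purely for illustration):
instead of testing for overflow after the fact, which relies on
wraparound, test the operands beforehand so that no signed overflow
ever occurs:

  #include <limits.h>

  /* Not portable: if a + b overflows, behavior is undefined, and
     GCC may optimize the test away entirely.  */
  int
  sum_overflows_unsafe (int a, int b)
  {
    return b > 0 && a + b < a;
  }

  /* Portable: check the operands first, so the addition itself
     never overflows.  */
  int
  sum_overflows (int a, int b)
  {
    return (b > 0 && a > INT_MAX - b)
           || (b < 0 && a < INT_MIN - b);
  }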

> 1) Add an option like -Warnv to issue warnings about cases where gcc
>    implements an optimization which relies on the fact that signed
>    overflow is undefined.
>
> 2) Add an option like -fstrict-signed-overflow which controls those
>    cases which appear to be risky.  Turn on that option at -O2.

This sounds like a good approach overall, but the devil is in the
details.  As you mentioned, (1) might have too many false positives.
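
To illustrate the worry with a sketch of my own (this is not actual
-Warnv output): GCC's loop optimizers routinely assume that a signed
induction variable cannot wrap, e.g. when computing trip counts, so a
naive -Warnv would presumably fire even on obviously safe loops like
this one:

  double
  sum (double const *a, int n)
  {
    double s = 0;
    int i;
    /* i can never overflow here, but the optimizer may still
       invoke the no-wrap assumption when analyzing the loop.  */
    for (i = 0; i < n; i++)
      s += a[i];
    return s;
  }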

More important, we don't yet have an easy way to characterize the
cases where (2) would apply.  For (2), we need a simple, documented
rule that programmers can easily understand, so that they can easily
verify that C code is safe: for most applications this is more
important than squeezing out the last ounce of performance.  Having
the rule merely be "does your version of GCC warn about it on your
platform?" doesn't really cut it.

So far, the only simple rule that has been proposed is -fwrapv, which
many have said is too conservative as it inhibits too many useful
optimizations for some numeric applications.  I'm not yet convinced
that -fwrapv harms performance that much for most real-world
applications, but if we can come up with a less-conservative (but
still simple) rule, that would be fine.  My worry, though, is that the
less-conservative rule will be too complicated.
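
For what it's worth, the optimizations at stake are often small
algebraic simplifications.  A typical example (mine, for
illustration):

  /* With signed overflow undefined, GCC can fold this to
     "return x", since x * 2 cannot overflow in any valid
     execution.  With -fwrapv, x * 2 may wrap (e.g. for
     x = 0x40000000 with 32-bit int), so both operations must
     be performed.  */
  int
  double_then_halve (int x)
  {
    return x * 2 / 2;
  }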

> There is a substantial number of application developers who would
> prefer -failsafe.  There is a substantial number who would prefer
> -frisky.  We don't know which set is larger.

It's a controversial point, true.  To help resolve this difference of
opinion, we could post a call for comments on info-gnu.  The call
should be carefully worded, to avoid unduly biasing the results.

