Andrew Haley wrote:

Is it simply that one error is likely to be more common than another?
Or is there some more fundamental reason?

I think it is more fundamental. Yes, of course any optimization
will change resource utilization (space, time). An optimization
may well make a program larger, so that it no longer fits
in memory, or in some unfortunate case it may slow down the
execution of a program, e.g. due to cache effects from too
much inlining, so that the program misses its deadlines.

But these effects are very different from those of
optimizations which, as a direct result of being applied,
change the results in a way permitted by the standard.

For example if we write

  a = b * c + d;

an optimizer may choose to use a fused multiply-add, with
subtly different rounding effects.

Or, on the x86, keeping intermediate floats in 80-bit
extended precision, certainly permissible under the standard
without LIA, can subtly change the results.
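
To make that concrete, here is a small C sketch (not from the
original message; the values are chosen purely for illustration)
showing how a fused multiply-add, which rounds once, can give a
different answer than a separate multiply and add, which rounds
twice:

  #include <stdio.h>
  #include <float.h>
  #include <math.h>

  int main(void)
  {
    /* Values chosen so the difference is visible: b*c is exactly
       1 + 2^-51 + 2^-104, which rounds to 1 + 2^-51 as a double,
       but is kept exactly inside a fused multiply-add.  */
    double b = 1.0 + DBL_EPSILON;           /* 1 + 2^-52    */
    double c = b;
    double d = -(1.0 + 2.0 * DBL_EPSILON);  /* -(1 + 2^-51) */

    double separate = b * c + d;   /* two roundings; typically 0.0,
                                      unless the compiler already
                                      contracts this into an fma   */
    double fused = fma(b, c, d);   /* one rounding; gives 2^-104   */

    printf("b*c + d      = %g\n", separate);
    printf("fma(b, c, d) = %g\n", fused);
    return 0;
  }

Whether the two printed values agree depends on the compiler and
its flags (for example -ffp-contract in GCC), which is exactly the
point: the standard permits either answer for the unfused
expression.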

I do think this is a useful distinction.

We can't say "don't do any optimization that changes
resource utilization"; that would be absurd. We can't
even say "warn when this optimization might change
resource utilization"; that would be equally absurd.

But when we have an optimization that changes the
operational semantics of the particular operation
being optimized, we can say:

This change in behavior may be unexpected or
undesirable, even if allowed by the standard.
Let's be sure the optimization is worth it before
enabling it by default.
