http://gcc.gnu.org/bugzilla/show_bug.cgi?id=60165

--- Comment #10 from Manuel López-Ibáñez <manu at gcc dot gnu.org> ---
(In reply to Vincent Lefèvre from comment #8)
> Concerning the "if it cannot prove the uninitialized paths are not executed
> at run time" part, GCC should be able to prove more things with -O3 than
> with -O2, meaning that -Wmaybe-uninitialized warnings could disappear with
> -O3 compared to -O2, but generally not the opposite.

Your assumption is mistaken because you seem to have missed something that
Jakub has said repeatedly. GCC does not warn *on purpose* for very common code
such as

{
    int c;
    f(&c);
    return c;
}

if GCC doesn't know what is going on within f(), because that would trigger a
lot of false positives (although it is easy to build testcases where a warning
would have been warranted). At -O3, that code may be converted to

{
    int c;
    if (g())
        c = 3;
    return c;
}

and then, even though g() may never actually return false, if GCC cannot prove
that, GCC will warn, because not warning would cause a lot of false negatives.
Such design decisions are based on experience.
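To make the trade-off concrete, here is a minimal sketch of both shapes in a
single translation unit (the function names and the compile command are my
own, for illustration; compile with something like
"gcc -O2 -Wmaybe-uninitialized -c example.c"):

/* f() and g() are assumed to be defined in another translation unit,
   so GCC cannot see whether they initialize anything. */
void f(int *p);
int g(void);

int silent(void)
{
    int c;
    f(&c);      /* opaque call: GCC assumes f() initialized c */
    return c;   /* no warning, on purpose */
}

int noisy(void)
{
    int c;
    if (g())
        c = 3;  /* conditional initialization is visible here */
    return c;   /* warning: 'c' may be used uninitialized */
}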

Now, I agree that ideally GCC should warn for your last testcase. But I guess
in that case inlining either doesn't happen or happens too late, so GCC only
sees the first form. The analyses that GCC performs are predicated on the
transformations leading to better code; otherwise they are not performed. A
dedicated static-analysis tool would surely inline as much as it could, no
matter whether the resulting code is slower or larger, but that is not how GCC
works, because GCC is not a static-analysis tool.
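As a sketch of that inlining effect (the setup below is hypothetical): when
the conditional store is hidden behind a small static function, whether the
warning fires depends on whether the inliner has exposed the body to the
warning pass by the time it runs:

int g(void);            /* assumed external predicate */

static void f(int *p)
{
    if (g())
        *p = 3;         /* conditionally initializes *p */
}

int use(void)
{
    int c;
    f(&c);              /* after inlining this becomes: if (g()) c = 3; */
    return c;           /* warns only if the inliner ran before the
                           warning pass and exposed the conditional */
}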

On the other hand, GCC has bugs (missed optimizations, PR24639), and it would
be nice if more people worked on those, but this PR doesn't look like one.
