https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89501

--- Comment #8 from Linus Torvalds <torva...@linux-foundation.org> ---
(In reply to Jeffrey A. Law from comment #7)
> It's reliably the case that a false positive uninit warning also represents
> a failure to optimize something.  So we've got significant incentives to
> deeply analyze and look for fixes.  So feel free to pass examples of those
> along.

Well, most of it is due to interactions with *other* issues entirely.

For example, when we enable GCOV for coverage checking, we have to disable
tree-loop-im, because of excessive stack usage due to gcc bugzilla 69702.

And disabling that optimization then leads to bogus "might be used
uninitialized" warnings.

We have a few other cases like that. E.g. we can't use -Os without also disabling
-Wmaybe-uninitialized, etc.

Some of the cases may be historical (i.e. maybe you've fixed the issues that
cause us to do that in newer versions), but for various distro reasons we end
up supporting old compilers for a _looong_ time.

Not that long ago we ended up increasing the minimum required gcc version for
the kernel to gcc-4.6.

So we have this odd "we love using new features" fight with "oops, but we end
up having people using relatively ancient compiler versions".
