Eli Zaretskii <e...@gnu.org> writes:

>> Cc: Jonathan Wakely <jwakely....@gmail.com>, gcc@gcc.gnu.org
>> Date: Tue, 09 May 2023 18:38:05 +0200
>> From: Arsen Arsenović via Gcc <gcc@gcc.gnu.org>
>> 
>> You're actively dismissing the benefit.
>
> Which benefit?
>
> No one has yet explained why a warning about this is not enough, and
> why it must be made an error.  Florian's initial post doesn't explain
> that, and none of the followups did, although questions about whether
> a warning is not already sufficient were asked.

Quite simple: people don't (as easily) choose to ignore errors.

You can see this in any teaching environment.  I've taught in many of
them, so I can say with a very high degree of confidence that people,
by default, do not ignore errors.  A student will see twenty warnings
and brush them off, but will see one error and diligently go back to
fix it.

If we tally up the hypothetical users of a hypothetical -fpermissive
mode for re-enabling these broken constructs, I think that we
(compiler and distro developers) would make up a supermajority of
them, or more.

I am absolutely certain, by virtue of us having this conversation
today, that warnings are not enough.  I am equally certain that an
opt-out error *will* work, as it has before for cases like -fcommon
and the G++ -fpermissive flag (naturally, those problems aren't
magically gone, but they are becoming rarer).  Hell, I've missed
warnings myself, as they do not normally raise a flag during
development (hence -Werror), even though I have many years of
experience dealing with loose toolchain defaults.
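
(For reference, a minimal, contrived sketch of the kind of code the
-fcommon change covered: two translation units sharing a tentative
definition, which the -fno-common default introduced in GCC 10 now
rejects at link time.)

  /* a.c */
  int counter;        /* tentative definition */

  /* b.c */
  int counter;        /* same tentative definition, another TU */

  int main(void) { return counter; }

With 'gcc a.c b.c' on GCC 10 or later, the link fails with a multiple
definition error; adding -fcommon restores the old behaviour of
merging the two symbols into one.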

> That's a simple question, and unless answered with valid arguments,
> the proposal cannot make sense to me, at least.

I'll repeat a few reasons others have cited:
- implicit or misdeclared calls lead to subtly incorrect code;
- implicit calls defeat toolchain hardening features like
  _FORTIFY_SOURCE;
- implicit calls lead to the wrong symbols being chosen, and hence to
  data being silently truncated, which can on occasion hide itself
  for a long time (see the sketch after this list);
- all of these constructs have been unambiguously invalid for
  decades, and, if considered an extension, they're a hideous blot;
- the impact is relatively small (Florian cited a figure of six
  percent, which lines up roughly with my own observations), yet an
  escape hatch for aged code can easily be provided;
- the less the compiler knows about the code it's interfacing with,
  the worse its diagnostics get, alongside producing incorrect code;
- by leaving GCC non-strict *by default*, we encourage using non-GNU
  toolchains because 'they provide better error reporting'.  This
  also applies to other components of the toolchain.  I, for one,
  have little interest in encouraging that when the cost of keeping a
  fast-and-loose compiler mode around for old (read: broken)
  codebases is low.
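
To make the truncation point concrete, a contrived sketch (assuming
an LP64 target, where int is 32 bits wide and pointers are 64):

  /* no #include <stdlib.h>, so the compiler implicitly declares
     malloc as returning int */
  int main(void)
  {
      char *p = malloc(100);  /* the 64-bit pointer is squeezed
                                 through a 32-bit int: the upper
                                 half is silently discarded */
      p[0] = 'x';             /* may crash, or may appear to work
                                 for years if the heap happens to
                                 sit in the low 4GiB */
      return 0;
  }

Whether this "works" depends entirely on where the allocation lands,
which is exactly the sort of bug that hides for a long time.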

It is very much okay (nay, a feature) to be compatible with previous
versions of the compiler, and with a prior status quo, but we should
not let that hold us to the laxer old standards of strictness.

Conversely, I'd like to hear the upsides of keeping these defaults,
besides compatibility, as that one is easily achieved without keeping
horrid defaults.

Have a most lovely evening.
-- 
Arsen Arsenović
