Arsen Arsenović <ar...@aarsen.me> writes:

> Relying on human scrutiny when an algorithm can trivially deduce an
> answer more quickly and more reliably (because even in the case that the
> code you showed compiles, it assumes processor specific facts) has
> historic precedent as being a bad idea.

The algorithm in question is not much of an algorithm at all.  Rather,
it makes the blanket assumption that the lack of an explicit declaration
means the implicit declaration is incorrect, and that there is a correct
declaration in a header file somewhere.  That sounds rather premature,
don't you agree?

> I'm curious, though, about how lint checks for this.

Lint generates a lint library containing (among other things) a
description of each function definition in every translation unit it
checks.  It also
imports pregenerated lint libraries, such as those for the C and math
libraries.  All declarations are then checked against the lint libraries
being used.

> So be it.  The edge case still exists.  The normal case that I expect
> most code is dealing with is much simpler: a missing include.  That case
> is not discounted by the existence of misdeclarations across TUs.

I can't believe that the normal case is a missing include, because I see
`extern' declarations outside header files all the time (just look at
bash, and I believe one such declaration was just installed in Emacs).

And how does that discount what I said people will do?  They will get
into the habit of placing:

  extern int foo ();

above each error.  Such declarations are also much more common than
implicit function declarations, and are more likely to lead to
mistakes, for the simple reason that they are 100% strictly conforming
Standard C: they need not elicit any diagnostics, and thus no extra
scrutiny from the programmer.

The point is, such checks are the job of the linker and lint, because as
you yourself have observed, those are the only places where such bugs
can really be caught:

> There's already -Wlto-type-mismatch.  It has spotted a few bugs in my
> own code.

[...]

> Yes, indeed.  And code should keep working on those machines, because it
> costs little to nothing to keep it working.  And it will if you pass
> -fpermissive.

And what prevents -fpermissive from being removed in the future, once
the folks here decide that -fpermissive must go the way of -traditional?

> It is unfeasible for GCC to entirely stop compiling this code, there's
> nobody that is advocating for doing that; however, the current default
> of accepting this code in C is harmful to those who are writing new
> code, or are learning C.

I expect people who write new code to subject diagnostics from their C
translator to even more scrutiny than usual, so the existing strategy
of issuing warnings should be sufficient.

> Have a lovely day.

You too.
