Roger Leigh via Tiff <[email protected]> writes:

> The warnings are split into two groups: default and extra.  At the moment, 
> you have to opt in to get the extra warnings, and we enable them in the CI 
> build.
>
> https://gitlab.com/rleigh/libtiff/-/blob/warning-additions5/cmake/CompilerChecks.cmake#L68
>  is the state of things after all of the proposed changes are done.
>
> We can look at making more of these the default, so long as we’re sure they 
> aren’t going to erroneously warn on other systems.

I see where you're coming from, but it would be good to get to "always
on" when it is possible.

>>  -Wwarn-foo should only be added if the compiler, already probed, is
>>  known to support it, so that there are no problems with older
>>  compilers and compilers not known to the tiff developers
>
> This is the case: each option is individually tested.  But it only
> tests that the option is available, not whether it works properly.
> In the past, I’ve seen some GCC bugs where warning options were
> erroneously warning, which is why we haven’t yet made them all on by
> default.

Fair enough.

> Yes, but it’s not just down to the language spec.  A fair few changes
> are due to C library implementation differences which are all
> compliant but different.  E.g. MinGW64 uses a 32-bit size_t even
> though it’s a 64-bit platform.  Luckily we already have a TIFFIOSize_t
> typedef for this purpose, but we weren’t using it everywhere and so
> enabling strict warnings required fixing up the call sites using size_t
> to use that instead, as one example.  But increasing the CI test
> matrix to test a lot more combinations has flushed out a lot of
> portability issues we weren’t aware of before, or, if we were once
> aware, we certainly weren’t catching regressions through testing.
> Mostly benign, but some could well be causing obscure problems.
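
A minimal sketch of the call-site mismatch you describe (the
TIFFIOSize_t definition and the read helper below are stand-ins for
illustration, not libtiff's real code):

/* Stand-in types and helper -- assumptions, not libtiff's definitions. */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t TIFFIOSize_t;          /* assumed: always wide enough for I/O sizes */

static TIFFIOSize_t read_block(void *buf, TIFFIOSize_t n)
{
    (void)buf;
    return n;                            /* pretend we read n bytes */
}

int main(void)
{
    size_t want = 4096;                  /* size_t's width differs across C libraries */

    /* Passing size_t straight through compiles silently where the two
     * types happen to match, but trips conversion warnings (and may
     * truncate) where they do not; converting explicitly at the call
     * site keeps the intent visible and reviewable. */
    TIFFIOSize_t got = read_block(NULL, (TIFFIOSize_t)want);

    printf("read %llu bytes\n", (unsigned long long)got);
    return 0;
}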

Your extension is within my intent; code that assumes size_t is any
particular type, beyond what C99/POSIX say it is, is buggy.  I really
meant "there's a good argument the code is wrong" as opposed to "add
random casts to quiet warnings" -- which I don't expect you to do, but
I've seen it done.
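
For concreteness, the distinction in a toy example (the variable name
is invented): the first printf fixes the type usage with the
conversion C99 provides for size_t, while the second merely casts to
silence the warning and can silently truncate.

#include <stdio.h>

int main(void)
{
    size_t nbytes = 123456;

    /* Correct: %zu is the C99 conversion specifier for size_t. */
    printf("wrote %zu bytes\n", nbytes);

    /* "Random cast to quiet the warning": compiles cleanly, but
     * truncates on platforms where size_t is wider than unsigned int. */
    printf("wrote %u bytes\n", (unsigned int)nbytes);
    return 0;
}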

>>  If there's a situation where the code is not wrong, and the compiler
>>  is wrongly complaining, and there's a workaround, that should be
>>  loudly annotated in the checked-in code and the commit message.  And
>>  probably should be raised for discussion.
>
> There’s only a single one that I’m aware of, which looks like a GCC and/or C 
> standard issue.  GCC isn’t technically wrong, but it’s practically wrong.
>
> This is the use of "%"PRIu16 in a format string, with a uint16_t
> argument.  The variadic call promotes the value to int, and GCC then
> complains about the signedness mismatch, IIRC.  Clang and MSVC are fine
> with it.  I ended up switching it back to "%u" which works everywhere
> with no actual behaviour change given that PRIu16 is %u on most
> platforms anyway.  You could claim the GCC behaviour is in one respect
> actually fully standards compliant, but it rather defeats the purpose
> of the inttypes macros and is annoying; the workaround is simple
> enough, though.

If that's the only problem, that's fantastic.

(Interesting that %u is OK, given that uint16_t is promoted to int
because all its values fit....)
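
A small sketch of the situation as I understand it (whether GCC
actually warns depends on the version and on -Wformat-signedness, so
treat that part as an assumption):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint16_t dircount = 42;

    /* Default argument promotion hands printf an int, not a uint16_t;
     * as described above, some GCC versions then report a signedness
     * mismatch against the PRIu16 conversion under -Wformat-signedness. */
    printf("entries: %" PRIu16 "\n", dircount);

    /* The workaround from the thread: plain %u.  Every uint16_t value
     * is representable after promotion to int, and that non-negative
     * value reads back correctly as unsigned int, so behaviour is
     * unchanged. */
    printf("entries: %u\n", dircount);
    return 0;
}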