[Bug c/111808] [C23] constexpr with excess precision
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111808

--- Comment #9 from joseph at codesourcery dot com ---
A portability issue producing a compile failure is often better than one where there is no error but the code misbehaves at runtime on some platforms (a lot of code does not have good testsuites).
--- Comment #8 from Martin Uecker ---
There are certainly other similar portability issues, e.g.:

enum : long { X = 0xUL };

https://godbolt.org/z/hKsqPe9c1

BTW: Are there better examples where we have similar build failures also in pre-C2X? (not counting explicit compile-time tests for sizes or limits) Most simple C expressions do not seem to produce a hard error when switching between 64-bit and 32-bit archs; e.g., exceeding the range in an initializer of an enum does not produce a hard error without -pedantic-errors before C2X. That we now seem to have such issues worries me a little bit.

In any case, I would argue that issues related to the size of integers are much better understood by programmers, while excess precision is rather obscure and also has many more implementation-defined degrees of freedom. The behavior of integers is more or less fixed by their width, but the precision with which 1. / 3. is computed on any specific platform is not restricted. The use of such a thing in a constexpr initializer then makes the program inherently non-portable, and I do not believe programmers are aware of this. Debugging such issues after the fact, because a package fails to build on, for example, 3 of 20 architectures in Debian, is generally a huge pain.

On the other hand, maybe excess precision on i386 is obscure and i386 will go away, so we should not worry?
--- Comment #7 from joseph at codesourcery dot com ---
I think it's reasonable for such a portability issue to be detected only when building for i386, much like a portability issue from code that assumes long is 64-bit would only be detected when building for a 32-bit target. Then adding a note would help the user, seeing an error on i386, to understand the non-obvious reason for the error.

I don't think it's such a good idea to try computing also in hypothetical excess precision, when building for a target that doesn't use excess precision, in an attempt to generate a portability warning there.
--- Comment #6 from Martin Uecker ---
Adding a note is a good idea, but it doesn't really solve the issue. The problem I see is that somebody developing on x86-64 does not get a warning that the code is not strictly conforming, and then it fails to build elsewhere.

Ideally, there would be an algorithm which can decide whether the result is exact in all versions with the same or higher precision. Otherwise, computing it with some higher precision and comparing the results would catch most problems (but not all).

Alternatively, one could downgrade the error to a warning. The code may then be slightly wrong on i386, but at least the build does not fail.
--- Comment #5 from joseph at codesourcery dot com ---
We could add a "note: initializer represented with excess precision" or similar for the case where the required error might be surprising because the semantic types are the same.
Laurent Rineau changed:

           What    |Removed |Added
----------------------------------------------------------------
                CC |        |Laurent.Rineau__gcc@normalesup.org

--- Comment #4 from Laurent Rineau ---
Maybe `constexpr` evaluation of floating-point expressions could be computed using MPFR, instead of using the local hardware.
--- Comment #3 from Jakub Jelinek ---
Excess precision changes behavior in lots of significant ways, and I don't really see how we could warn for this. There are different kinds of excess precision: the i386 one of promoting float/double to long double, the s390{,x} way of promoting float to double, and different arches that, depending on flags, either promote _Float16 operations to float (or double or long double) or don't. So I don't really see how we could try to evaluate expressions 3-6 different ways and warn if there is some difference between those.
--- Comment #2 from Martin Uecker ---
On i386, 1. / 3. is computed with higher precision than double, and then the conversion in the initializer changes the value, which is a constraint violation in C23. But whether this happens or not depends on the architecture, so this code is not portable.
Jakub Jelinek changed:

           What    |Removed |Added
----------------------------------------------------------------
                CC |        |jakub at gcc dot gnu.org

--- Comment #1 from Jakub Jelinek ---
Excess precision?