[Bug middle-end/113082] builtin transforms do not honor errno
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113082 --- Comment #5 from joseph at codesourcery dot com --- I think it would be reasonable for glibc to require that audit modules don't change errno, at least when acting for libc function calls where glibc guarantees not changing errno. I think user-provided IFUNC resolvers are only relevant for user-provided functions and so shouldn't be relevant to this issue (if a user declares their own function with a noerrno attribute, and also has an IFUNC resolver for that function, they need to make sure the IFUNC resolver behaves consistently with the attribute). It would also seem reasonable for glibc to guarantee that most string and memory functions (maybe excluding a few that involve the locale or other external state, such as strcoll or strerror, and definitely excluding those such as strdup that involve memory allocation) don't change errno. We may need to be careful about what "obviously" shouldn't affect errno (consider e.g. the ongoing discussions around qsort - where avoiding memory allocation should as a side effect also avoid affecting errno, but it's unclear how we might simultaneously avoid memory allocation, keep a stable sort, achieve O(n log(n)) worst case performance, and keep good performance for typical inputs).
[Bug target/112762] [14 Regression] Cannot build crosscompilers for some uclinux targets since r14-5791-g24592abd68e6ef
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112762 --- Comment #5 from joseph at codesourcery dot com --- The *-uclinux* targets are generally for systems without an MMU and a corresponding ABI (FLAT, FDPIC, etc.), whereas *-linux-uclibc* targets are for systems with an MMU and an associated conventional ELF ABI.
[Bug middle-end/32667] block copy with exact overlap is expanded as memcpy
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=32667 --- Comment #50 from joseph at codesourcery dot com --- Qualifiers on function parameter types do not affect type compatibility or composite type (see 6.7.6.3#14). I think they're only actually of significance in the definition; in a declaration they effectively serve as documentation.
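For example (C17 6.7.6.3#14; a minimal illustration):

    /* Compatible declarations: the top-level qualifier on the parameter is
       ignored for type compatibility and for the composite type. */
    void f (const int x);
    void f (int x);        /* OK: redeclares f with a compatible type */

    void f (const int x)   /* only here does the const matter: x is
                              read-only within the body */
    {
    }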
[Bug middle-end/112614] Compile-time float-to-_Decimal64 fails for -NAN
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112614 --- Comment #2 from joseph at codesourcery dot com --- The sign of a NaN isn't specified for conversions, only for a few operations such as absolute value, negation, copysign.
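In code terms (a sketch using GCC's __builtin_nanf to construct a quiet NaN):

    void g (void)
    {
      float nf = -__builtin_nanf ("");  /* negation: the sign is specified */
      _Decimal64 d = (_Decimal64) nf;   /* conversion: the sign of the
                                           resulting NaN is unspecified */
      (void) d;
    }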
[Bug tree-optimization/112566] Some ctz/popcount/parity/ffs optimizations
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112566 --- Comment #4 from joseph at codesourcery dot com --- On Thu, 16 Nov 2023, jakub at gcc dot gnu.org via Gcc-bugs wrote:

> ctz(ext(x)) == ctz(x) if UB on zero,

In one direction, this should also be true for a narrowing conversion (changing ctz(narrow(x)) to ctz(x) might remove UB if x is nonzero but narrows to zero, but won't introduce UB, or change the result if narrow(x) is nonzero).
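To make the narrowing direction concrete (a sketch with GCC's __builtin_ctz / __builtin_ctzll, for which a zero argument is undefined):

    #include <stdint.h>

    int before (uint64_t x)
    {
      /* UB if the low 32 bits of x are all zero, even when x != 0. */
      return __builtin_ctz ((uint32_t) x);
    }

    int after (uint64_t x)
    {
      /* Rewriting ctz(narrow(x)) to ctz(x) can only remove that UB: when
         (uint32_t) x is nonzero, both forms count the same trailing
         zeros; when it is zero but x is not, only the original was UB. */
      return __builtin_ctzll (x);
    }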
[Bug c/112556] Null pointer constants with enumeration type are not accepted
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112556 --- Comment #2 from joseph at codesourcery dot com --- Yes, this is a bug; null_pointer_constant_p gets this right, but convert_for_assignment fails to handle enumerations and booleans as possible null pointer constants. Other contexts such as comparisons and conditional expressions appear to be OK (through performing integer promotions so that enumerations and booleans can't appear, for example, or through handling all kinds of integer types together).
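A sketch of the rejected-but-valid cases (an integer constant expression with value 0 is a null pointer constant, and casts to enumeration and boolean types are casts to integer types):

    enum e { A };

    int *p = (enum e) 0;  /* integer constant expression with value 0 of
                             enumeration type: a null pointer constant */
    int *q = (_Bool) 0;   /* boolean case, likewise mishandled */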
[Bug c/111811] [14 Regression] ICE with vector float bitfield after error
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111811 --- Comment #4 from joseph at codesourcery dot com --- The checks are in check_bitfield_type_and_width. I expect the attribute - in this position a declaration attribute - gets applied after that (and while applying it results in a change to the type, and thus in the declaration being laid out again, this check doesn't get repeated). In this case, the existing check is correct but not sufficient. In another case the check is arguably too early:

    struct s { int __attribute__ ((__mode__ (DI))) x : 50; };

Considering int __attribute__ ((__mode__ (DI))) as a DImode integer type, that bit-field width is valid - but it's rejected because the check is carried out on int, before the attribute gets applied. Getting that case to work might require extracting early those declaration attributes that actually affect the type, so they can be applied to the type before the declaration gets constructed and such checks are carried out.
[Bug c/112449] Arithmetic operations can produce signaling NaNs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112449 --- Comment #13 from joseph at codesourcery dot com --- On Fri, 10 Nov 2023, rguenth at gcc dot gnu.org via Gcc-bugs wrote:

> --- Comment #11 from Richard Biener ---
> (In reply to post+gcc from comment #10)
> > The standard says
> >
> > > This annex does not require the full support for signaling NaNs
> > > specified in IEC 60559. This annex uses the term NaN, unless explicitly
> > > qualified, to denote quiet NaNs. Where specification of signaling NaNs
> > > is not provided, the behavior of signaling NaNs is
> > > implementation-defined (either treated as an IEC 60559 quiet NaN or
> > > treated as an IEC 60559 signaling NaN).
>
> I don't see implement-c.texi saying anything about this. Joseph, can you
> improve documentation here?

Updating implement-c.texi for C23 is on my list for after C23 is out and so we have final subclause references (but the list of implementation-defined behavior in J.3 doesn't seem to have that point from Annex F at present).
[Bug c/112449] Arithmetic operations can produce signaling NaNs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112449 --- Comment #9 from joseph at codesourcery dot com --- To quote the C23 DIS, "This annex does not require the full support for signaling NaNs specified in IEC 60559. This annex uses the term NaN, unless explicitly qualified, to denote quiet NaNs.". Support for signaling NaNs is indicated by FE_SNANS_ALWAYS_SIGNAL in <fenv.h>, which glibc makes sure to define only if __SUPPORT_SNAN__ is defined (which GCC does if -fsignaling-nans is used). If -fsignaling-nans is not used, you should not expect consistency in whether a signaling NaN is handled differently from a quiet NaN (including whether optimizations might be applied that result in a signaling NaN result from an operation that can't produce such a result with IEEE signaling NaN semantics).
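A sketch of checking for that guarantee (macro names as above; compile with -fsignaling-nans so that __SUPPORT_SNAN__ is defined):

    #include <fenv.h>
    #include <stdio.h>

    int main (void)
    {
    #ifdef FE_SNANS_ALWAYS_SIGNAL
      puts ("signaling NaNs reliably raise \"invalid\" when operated on");
    #else
      puts ("no quiet/signaling NaN consistency is guaranteed");
    #endif
      return 0;
    }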
[Bug libfortran/112364] calloc used incorrectly
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112364 --- Comment #10 from joseph at codesourcery dot com --- The wording refers to "the size requested", which I consider to be the product of two arguments in the case of calloc - not a particular argument to calloc.
[Bug libfortran/112364] calloc used incorrectly
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112364 --- Comment #7 from joseph at codesourcery dot com --- I believe "size requested" refers to the product nmemb * size in the case of calloc, so getting the arguments the "wrong" way round does not affect the required alignment. The point of the change was to override DR#075 and stop requiring e.g. 1-byte allocations to be suitably aligned for larger types, not to make alignment for calloc depend on more than the product of the two arguments.
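Illustrating that reading (the alignment obligation follows from the product, not from either argument alone):

    #include <stdlib.h>

    void f (void)
    {
      /* Both calls request 8 bytes in total (1 * 8 == 8 * 1), so under the
         reading above both results must be suitably aligned for any type
         of size 8 or less, e.g. double on typical ABIs. */
      double *p = calloc (1, 8);
      double *q = calloc (8, 1);
      free (p);
      free (q);
    }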
[Bug tree-optimization/112296] __builtin_constant_p doesn't propagate through member functions
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112296 --- Comment #12 from joseph at codesourcery dot com --- I agree that the side effects of an argument to __builtin_constant_p must be discarded, for the original macro use case to work properly. There are various constructs with __builtin_* names that, although they look like function calls, in fact have syntactic or semantic differences from what can be done with a normal function call. In the cases of syntactic differences, they are actually keywords and handled specially in the parsers. That's probably not relevant here, because the issue is semantics of the call (argument not evaluated) rather than the syntax, but it does illustrate how it's reasonable to have special handling for some __builtin_* construct when needed for its semantics.
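A sketch of the original macro use case, which depends on the argument never being evaluated:

    extern int log2_runtime (unsigned int x);

    /* Use a cheap compile-time expansion for constants, a call otherwise.
       This only works because __builtin_constant_p does not evaluate x. */
    #define LOG2(x) (__builtin_constant_p (x) && (x) != 0 \
                     ? 31 - __builtin_clz (x)             \
                     : log2_runtime (x))

    int f (void)
    {
      int i = 0;
      int c = __builtin_constant_p (i++);  /* side effect discarded */
      return c + i;                        /* returns 0: i is still 0 */
    }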
[Bug c/111884] unsigned char no longer aliases anything under -std=c2x
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111884 --- Comment #2 from joseph at codesourcery dot com --- I'm going to guess this was introduced by the char8_t changes ("C: Implement C2X N2653 char8_t and UTF-8 string literal changes", commit 703837b2cc8ac03c53ac7cc0fb1327055acaebd2).

      /* Unlike char, char8_t doesn't alias. */
      if (flag_char8_t && t == char8_type_node)
        return -1;

is not correct for C, where char8_t is not a distinct type.
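One possible shape of a fix, assuming the check lives in code shared between the C-family front ends (c_dialect_cxx () is the existing predicate for compiling C++); a sketch, not the committed change:

      /* char8_t is a distinct, non-aliasing type only in C++; in C it is
         a typedef for unsigned char, which aliases everything. */
      if (c_dialect_cxx () && flag_char8_t && t == char8_type_node)
        return -1;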
[Bug c/111808] [C23] constexpr with excess precision
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111808 --- Comment #9 from joseph at codesourcery dot com --- A portability issue producing a compile failure is often better than one where there is no error but the code misbehaves at runtime on some platforms (a lot of code does not have good testsuites).
[Bug c/111808] [C23] constexpr with excess precision
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111808 --- Comment #7 from joseph at codesourcery dot com --- I think it's reasonable for such a portability issue to be detected only when building for i386, much like a portability issue from code that assumes long is 64-bit would only be detected when building for a 32-bit target. Then adding a note would help the user, seeing an error on i386, to understand the non-obvious reason for the error. I don't think it's such a good idea to try also computing in hypothetical excess precision, when building for a target that doesn't use excess precision, in an attempt to generate a portability warning there.
[Bug c/111808] [C23] constexpr with excess precision
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111808 --- Comment #5 from joseph at codesourcery dot com --- We could add a "note: initializer represented with excess precision" or similar for the case where the required error might be surprising because the semantic types are the same.
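A sketch of the surprising case (assuming i386, where FLT_EVAL_METHOD == 2 and constants of type double are represented with long double range and precision):

    /* Both the constant and the object have semantic type double, yet on
       i386 the initializer is represented in long double; converting that
       representation to double changes the value, so the constexpr rules
       require an error - hence the suggested note. */
    constexpr double d = 0.1;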
[Bug regression/111709] [13 Regression] Miscompilation of sysdeps/ieee754/dbl-64/s_fma.c
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111709 --- Comment #8 from joseph at codesourcery dot com --- Typically these sorts of issues result from floating-point operations being moved past environment manipulation (fesetround, feupdateenv, feholdexcept, etc.) - in either direction. This might be a compiler issue, or it might well be a bug in the glibc function implementation (insufficient use of math_opt_barrier / math_force_eval to prevent such movement). If the latter, make sure to fix it in all similar implementations of fma functions, not just the dbl-64 one.
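A sketch of the pattern at issue (math_opt_barrier and math_force_eval are glibc-internal macros from math-barriers.h; the function below is hypothetical and only illustrates pinning operations relative to environment changes):

    #include <fenv.h>

    double scaled_mul (double x, double y)
    {
      fenv_t env;
      feholdexcept (&env);
      fesetround (FE_TOWARDZERO);
      /* The barrier keeps the multiply from being hoisted above the
         rounding-mode change. */
      double t = math_opt_barrier (x) * y;
      /* Force evaluation here so the multiply cannot sink below the
         environment restore. */
      math_force_eval (t);
      feupdateenv (&env);
      return t;
    }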
[Bug target/111506] RISC-V: Failed to vectorize conversion from INT64 -> _Float16
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111506 --- Comment #6 from joseph at codesourcery dot com --- On Mon, 2 Oct 2023, rdapp at gcc dot gnu.org via Gcc-bugs wrote:

> In our case the int64_t -> int32_t conversion is implementation defined
> when the source doesn't fit the target.

GCC documents the implementation-defined semantics it uses for out-of-range conversions from an integer type to a signed integer type. That does not depend on whether the conversion is vectorized or not. And for conversions between floating and integer types in either direction, there is no conversion between two integer types involved in the abstract machine.
[Bug target/111506] RISC-V: Failed to vectorize conversion from INT64 -> _Float16
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111506 --- Comment #4 from joseph at codesourcery dot com --- Conversion from 64-bit integers to _Float16 is fully defined: it produces the correctly rounded result according to the current rounding direction (round-to-nearest may be assumed in the absence of -frounding-math), which may be an infinity (depending on the rounding mode) in case of overflow (and in particular, anything not representable in a 32-bit integer certainly overflows on conversion to _Float16). That's just the same as for any integer-to-floating conversion.
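For instance (a sketch; _Float16 requires a target with _Float16 support, such as AArch64 or SSE2 x86):

    #include <stdint.h>

    _Float16 f (int64_t x)
    {
      /* FLT16_MAX is 65504.  Any x whose rounded value exceeds the type's
         range overflows: with default round-to-nearest the result is
         +/-infinity and "overflow" and "inexact" are raised, exactly as
         for any other integer-to-floating conversion. */
      return (_Float16) x;   /* e.g. x == INT64_MAX yields +infinity */
    }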
[Bug middle-end/51446] -fno-trapping-math generates NaN constant with different sign
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=51446 --- Comment #22 from joseph at codesourcery dot com --- On Mon, 2 Oct 2023, eggert at cs dot ucla.edu via Gcc-bugs wrote:

> --- Comment #20 from Paul Eggert ---
> (In reply to jos...@codesourcery.com from comment #14)
> > This is just the same as other unspecified things like converting an
> > out-of-range value from floating-point to integer.
> No, because when GCC's constant folding disagrees with machine arithmetic,
> GCC can generate code that violates the relevant standards.

The issue you describe is orthogonal to my comment in this bug. The unspecified cases - both the one I mentioned in my comment and the one in the description of this bug - do not require any particular result (choice of quiet NaN, choice of value for out-of-range conversion to integer, etc.), and in particular do not require a result that could be generated by the hardware being used, but they do require that, for each evaluation of such an operation in the abstract machine, the implementation behaves as if some particular valid choice of result was made for that evaluation; wobbly values (some uses of the result behaving as if one choice of value were made and other uses behaving as if some other choice were made) are not permitted. (This is similar to the question of whether use of uninitialized variables (if not undefined behavior) can produce a wobbly value, as such a value naturally results from optimizing a PHI node with one uninitialized operand to the value of the other operand.)
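A sketch of the PHI-node case mentioned at the end (illustration only; whether an uninitialized read here is UB or just an unspecified value depends on the rules in force):

    int g (int c)
    {
      int x;         /* uninitialized */
      if (c)
        x = 1;
      int a = x;
      int b = x;
      /* If x has some particular (if unknown) value, this is always 1.
         Folding the PHI (x_undef, 1) to 1 can make one use behave as 1
         and another use behave as something else - a wobbly value. */
      return a == b;
    }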
[Bug c/111421] constexpr not working with array subscript
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111421 --- Comment #4 from joseph at codesourcery dot com --- The definition of constexpr in C2x is intentionally minimal, with potential for future expansion in subsequent standard revisions. Allowing array element accesses would run into needing an appropriate definition of exactly what cases are allowed and don't count as accessing the value of an object (presumably including requiring the array element index itself to be an integer constant expression within range for the array in question).
[Bug c/111421] constexpr not working with array subscript
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111421 --- Comment #1 from joseph at codesourcery dot com --- See the definitions of "named constant" and "compound literal constant". Array element accesses aren't allowed, and the example you have with "->" shouldn't be accepted either (although the standard rules for implementation-defined constant expressions probably allow implementations to accept such an example if they so choose).
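A sketch of the distinction, assuming the C23 named-constant rules summarized above (under which "." member access of a named constant is itself a named constant, but array element access is not):

    constexpr int a[3] = { 1, 2, 3 };
    constexpr int x = a[0];   /* not accepted: array element access is not
                                 a named-constant form */

    struct s { int i; };
    constexpr struct s v = { 1 };
    constexpr int y = v.i;    /* accepted: "." applied to a named constant
                                 yields a named constant */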
[Bug libstdc++/111390] libstdc++-v3/scripts/check_compile script is not useful
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111390 --- Comment #7 from joseph at codesourcery dot com --- Stubbing out execution of tests can be done with a suitable board file (a board file to stub out linking as well is a bit more complicated). https://gcc.gnu.org/pipermail/gcc/2017-September/224422.html
[Bug c/111309] va_arg alternative for _BitInt
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111309 --- Comment #3 from joseph at codesourcery dot com --- Defined values for 0 are marginally more convenient for implementing the standard operations which have defined results for all arguments, and I think it's appropriate for the type-generic built-in functions to work for all integer types - at least all unsigned integer types (including unsigned __int128) - rather than just _BitInt types. (<stdbit.h> itself - providing both functions and type-generic macros - makes most sense to provide in libc, I think. The type-generic macros there don't actually support bit-precise types whose width doesn't match a standard/extended type, but providing such support, given appropriate built-in functions, certainly makes sense as an extension.)
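For reference, the C23 <stdbit.h> style under discussion (stdc_count_ones is the standard type-generic macro; extending it to arbitrary-width unsigned _BitInt is the extension being suggested):

    #include <stdbit.h>

    unsigned int f (unsigned long long x)
    {
      /* Type-generic over unsigned char ... unsigned long long; the
         extension would also accept e.g. unsigned _BitInt(24). */
      return stdc_count_ones (x);
    }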
[Bug c/111309] va_arg alternative for _BitInt
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111309 --- Comment #1 from joseph at codesourcery dot com --- Yes, we should have APIs for building type-generic _BitInt interfaces (also a width-of operation to give the width in bits of an integer type; also type-generic versions of operations such as clz, ctz, parity, popcount that work to the width in bits of any unsigned operand). Though I suspect any library implementations of printf _BitInt support would end up needing architecture-specific workarounds for a while to avoid depending on having GCC new enough to support _BitInt in order to build a library with that support.
[Bug c/111058] __builtin_nans (and its friends for other floating-point types) compiles to an external call to __builtin_nans for unsupported tag
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111058 --- Comment #7 from joseph at codesourcery dot com --- There shouldn't be such a thing as an unsupported constant payload; both __builtin_nan and __builtin_nans should rather be made consistent with parsing of payloads by glibc's nan functions (that may in some cases mean changing glibc; see https://sourceware.org/bugzilla/show_bug.cgi?id=28322 - glibc's functions don't handle payloads wider than 64 bits for _Float128, but GCC's do).
[Bug c/111058] __builtin_nans (and its friends for other floating-point types) compiles to an external call to __builtin_nans for unsupported tag
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111058 --- Comment #5 from joseph at codesourcery dot com --- We should absolutely *not* generate calls to a non-existent function "nans" based on a long-obsolescent standard proposal. The modern way to generate a signaling NaN with given payload, as specified in C23, is to generate a signaling NaN with one of the *_SNAN macros (FLT128_SNAN in this case) in <float.h>, then use the relevant setpayload function (setpayloadf128 in this case) to set its payload. I don't think there is any bug here at all.
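A sketch of that recipe (FLT128_SNAN requires defining __STDC_WANT_IEC_60559_TYPES_EXT__ before including <float.h>; setpayloadsigf128 is the variant that yields a signaling NaN with the given payload):

    #define __STDC_WANT_IEC_60559_TYPES_EXT__
    #include <float.h>
    #include <math.h>

    _Float128 snan_with_payload (_Float128 payload)
    {
      _Float128 x = FLT128_SNAN;        /* a signaling NaN */
      setpayloadsigf128 (&x, payload);  /* set its payload, keeping it
                                           signaling */
      return x;
    }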
[Bug c/107954] Support -std=c23/gnu23 as aliases of -std=c2x/gnu2x
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107954 --- Comment #5 from joseph at codesourcery dot com --- The straw poll at the June meeting said to keep calling it C23 (votes 4/12/2 for/against/abstain on the question of changing the informal name to C24). Of course the actual standard will be ISO/IEC 9899:2024 (but __STDC_VERSION__ will remain as 202311L, consistent with the informal name rather than the publication date, in the absence of a technical DIS comment requesting a change of version number being accepted, and accepting any technical DIS comments would delay the standard by requiring an FDIS).
[Bug c/110664] -std=c2x -pedantic-errors pedwarns on _Float128
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110664 --- Comment #1 from joseph at codesourcery dot com --- Yes, this would be a bug.
[Bug c/105863] RFE: C23 #embed
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105863 --- Comment #6 from joseph at codesourcery dot com --- The latest version should be taken to be what's in the working draft N3096, plus the editorial fixes from CD2 comments GB-081 through GB-084.
[Bug c/109956] GCC reserves 9 bytes for struct s { int a; char b; char t[]; } x = {1, 2, 3};
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109956 --- Comment #7 from joseph at codesourcery dot com --- I suppose the question is how to interpret "the longest array (with the same element type) that would not make the structure larger than the object being accessed". The difficulty of interpreting "make the structure larger" in terms of including post-array padding in the replacement structure is that there might not be a definition of what that post-array padding should be, given that the offset of the array need not be the same as the offset with literal replacement in the struct definition.
[Bug c/109956] GCC reserves 9 bytes for struct s { int a; char b; char t[]; } x = {1, 2, 3};
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109956 --- Comment #6 from joseph at codesourcery dot com --- For the standard, dynamically allocated case, you should only need to allocate enough memory to contain the initial part of the struct and the array members being accessed - not any padding after that array. (There were wording problems before C99 TC2; see DR#282.)
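Concretely, for the struct in the bug title:

    #include <stddef.h>
    #include <stdlib.h>

    struct s { int a; char b; char t[]; };

    struct s *make (void)
    {
      /* Enough for the fixed part plus 3 elements of t; no padding after
         the array needs to be allocated (DR#282 / C99 TC2). */
      return malloc (offsetof (struct s, t) + 3);
    }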
[Bug c++/109936] error: extended character ≠ is not valid in an identifier
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109936 --- Comment #25 from joseph at codesourcery dot com --- Older versions of C++ - up to C++20 - would reject such characters (not allowed in identifiers based on the list of allowed characters in that standard version) even when not converted to a token, because (a) those older versions had (as-if) conversion of extended characters to UCNs in translation phase 1, and (b) UCNs not permitted in identifiers still matched the syntax for identifier preprocessing tokens ("Otherwise, the next preprocessing token is the longest sequence of characters that matches the syntax of a preprocessing token, even if that would cause further lexical analysis to fail") and then violated a semantic rule on which UCNs are allowed in identifiers. C++23 instead converts UCNs to extended characters in phase 3 rather than doing the reverse conversion, and has (as of N4944, at least), [lex.pptoken], "... single non-whitespace characters that do not lexically match the other preprocessing token categories ... If any character not in the basic character set matches the last category, the program is ill-formed.". That's part of the description of preprocessing tokens, before they get converted to tokens. I think it has the same effect of disallowing the use of such a character (outside contexts such as string literals) - even if a different diagnostic might be better.
[Bug libstdc++/43622] Incomplete C++ library support for __float128
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=43622 --- Comment #31 from joseph at codesourcery dot com --- It can be an extended integer type in C2x, but then stdint.h would be required to have int128_t / uint128_t / int_least128_t / uint_least128_t typedefs, and integer constant suffixes would be needed for the corresponding macros INT128_C / UINT128_C (and the other stdint.h macros for the types would need to be defined as well), and printf/scanf support would be required as well.
[Bug c/102989] Implement C2x's n2763 (_BitInt)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102989 --- Comment #37 from joseph at codesourcery dot com --- If _BitInt constants aren't INTEGER_CST, then all places that expect that any integer constant expression is folded to an INTEGER_CST will need updating to handle whatever tree code is used for _BitInt constants. (In some places that may be needed for correctness, in other places - where a large value wouldn't actually be valid - only for proper diagnostics about an invalid value, if INTEGER_CST is still used for smaller _BitInt constants.)
[Bug c++/52339] using delete ptr1->ptr2 where ptr2 destructor deletes a const ptr1 fails
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=52339 --- Comment #8 from joseph at codesourcery dot com --- I think it's valid C99, yes: the VLA size should be evaluated exactly once, when the declaration is passed in the order of execution.
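For example (the size expression is evaluated exactly once each time the declaration is reached in the order of execution):

    int f (void)
    {
      int n = 0;
      for (int i = 0; i < 3; i++)
        {
          int a[++n];   /* evaluated once per iteration: sizes 1, 2, 3 */
          a[0] = i;
        }
      return n;         /* 3 */
    }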
[Bug c/109412] [13 Regression] ICE in fold_convert_loc, at fold-const.cc:2627
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109412 --- Comment #2 from joseph at codesourcery dot com --- May be related to bug 107682.
[Bug web/109355] Add a text warning to old gcc online manual stating it is out of date
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109355 --- Comment #5 from joseph at codesourcery dot com --- As I mentioned in previous discussions of this idea: any implementation should *not* involve simply editing the old generated files in place; it needs to involve keeping an unmodified copy of those files (which it might not readily be possible to regenerate now with current Texinfo) and having a properly automated process that goes from the unmodified source to the modified version served on the website, with the ability to rerun a new version of that process at any time.
[Bug analyzer/109098] Encoding errors on SARIF output for non-UTF-8 source files
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109098 --- Comment #6 from joseph at codesourcery dot com --- For diagnosis of non-UTF-8 in strings / comments, see commit 0b8c57ed40f19086e30ce54faec3222ac21cc0df, "libcpp: Add -Winvalid-utf8 warning [PR106655]" (implementing a new C++ requirement).
[Bug c/69960] "initializer element is not constant"
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69960 --- Comment #24 from joseph at codesourcery dot com --- On Thu, 23 Feb 2023, daniel.lundin.mail at gmail dot com via Gcc-bugs wrote:

> In this code
>
> static const int y = 1;
> static int x = y;
>
> y is not an integer constant expression, nor is it an integer constant in
> the meaning that ISO 9899 defines it.

Correct, but irrelevant, since nothing in that code example is required by the standard to be an integer constant expression.

> Therefore an initializer was given which is not a constant expression.

No, it's an "other form of constant expression" accepted by GCC.

> "an implementation may accept other forms of constant expressions" does not
> mean that an implementation can throw out any constraints it pleases out
> the window.

Correct. The Constraints on constant expressions say "Constant expressions shall not contain assignment, increment, decrement, function-call, or comma operators, except when they are contained within a subexpression that is not evaluated." and "Each constant expression shall evaluate to a constant that is in the range of representable values for its type.". The initializer is entirely consistent with those Constraints, so it is within the bounds of what an implementation may accept as an "other form of constant expression". Whereas it wouldn't be valid for an implementation to accept f() as a constant expression (contains a function call), for example. Note also that only violations of Syntax and Constraints require diagnostics (and thus -pedantic doesn't claim to ensure diagnostics for code that's not strictly conforming for some other reason than violating Syntax or Constraints).
[Bug c/69960] "initializer element is not constant"
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69960 --- Comment #22 from joseph at codesourcery dot com --- I do however expect there may be cases in GCC 13 where constexpr initializers of floating type are accepted that do not meet the definition of arithmetic constant expressions, since GCC is generally a lot more careful about ensuring things are integer constant expressions when required than it is about doing the same for arithmetic constant expressions (before C2x there weren't any cases that allowed arithmetic constant expressions without also allowing other kinds of constant expressions permitted in initializers).
[Bug c/69960] "initializer element is not constant"
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69960 --- Comment #21 from joseph at codesourcery dot com --- On Wed, 22 Feb 2023, daniel.lundin.mail at gmail dot com via Gcc-bugs wrote:

> First of all, it is questionable if gcc is still conforming after the change
> discussed here and implemented as per gcc 8.0. Yes "an implementation may
> accept other forms of constant expressions" but that doesn't mean that a
> compiler is allowed to ignore the constraints in C17 6.7.9/4 nor the
> definition of an integer constant expression. So this ought to explicitly
> be a compiler extension and we ought to have a way to reliably compile
> strictly conforming programs with gcc without constraint violations
> silently getting ignored.

"integer constant expression" does not mean the same thing as "constant expression of integer type". If you use this expression in a context requiring an integer constant expression (case label, bit-field width, array designator in initializer, enum value, array size at file scope, constexpr initializer for object of integer type, etc.), it's properly rejected as required; in contexts where both integer constant expressions and other expressions are valid but with different semantics (e.g. determining whether something is a null pointer constant, determining whether an array is a VLA in a context where both VLA and non-VLA arrays are valid), again it's treated as non-constant.
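A short illustration of that distinction (the extension applies to the initializer; contexts requiring an integer constant expression still reject y):

    static const int y = 1;

    static int x = y;   /* accepted: an "other form of constant expression" */
    static int a[y];    /* rejected: an array size at file scope must be an
                           integer constant expression */

    void f (int n)
    {
      switch (n)
        {
        case y:         /* rejected: case labels require an integer
                           constant expression */
          break;
        }
    }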
[Bug c/108796] Can't intermix C2x and GNU style attributes
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108796 --- Comment #8 from joseph at codesourcery dot com --- On Thu, 16 Feb 2023, aaron at aaronballman dot com via Gcc-bugs wrote:

> > The logic is that GNU attributes are declaration specifiers (and can mix
> > anywhere with other declaration specifiers), but standard attributes
> > aren't declaration specifiers; rather, they come in specified positions
> > relative to declaration specifiers (the semantics before and after the
> > declaration specifiers are different), and in the middle isn't such a
> > position.
>
> How does that square with:
> ```
> struct __attribute__((packed)) S { ... };
> void func(int *ip) __attribute__((nonnull(1)));
> ```
> where the GNU attribute is not written where a declaration specifier is
> allowed?

GNU attributes are declaration specifiers *in the previous examples given here*, not necessarily in all other cases. The position in relation to other declaration specifiers does not matter in those examples. Whereas a standard attribute at the start of declaration specifiers appertains to the entity declared, while a standard attribute at the end of declaration specifiers appertains to the type in those declaration specifiers. That is,

    [[noreturn]] void f();

declares a non-returning function f, but

    void [[noreturn]] f();

applies the attribute (invalidly) to the type void, not to the function f. While __attribute__((noreturn)) means exactly the same thing in both locations - it appertains to the function (and you could also have it in the middle of other declaration specifiers, with the same meaning). So the two kinds of attributes are not interchangeable, and the semantics for arbitrary mixtures would not be clear.

It might work to have arbitrary mixtures in the struct context. But in the

    void func(int *ip) __attribute__((nonnull(1)));

context you again have attributes appertaining to different things: a GNU attribute in that position is in a particular position *in a declaration* (after any asm ("identifier"), before an initializer), and appertains to the entity declared, whereas a standard attribute in such a position is part of the declarator (immediately following a function-declarator or array-declarator) and appertains to the function type - although they look superficially like the same case in simple examples such as this one, they aren't at all. And so again it would be unclear what attributes in arbitrary mixtures should appertain to.

(There is then logic in GCC to handle __attribute__ that, according to the syntax, should appertain to a particular entity, so that it's instead applied to some other related entity; for example, moving an attribute from a declaration to its type. This is deliberately *not* done for [[]] attribute syntax; those attributes are expected to be written in a correct location for the entity they appertain to.)
[Bug c/108796] Can't intermix C2x and GNU style attributes
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108796 --- Comment #6 from joseph at codesourcery dot com --- The logic is that GNU attributes are declaration specifiers (and can mix anywhere with other declaration specifiers), but standard attributes aren't declaration specifiers; rather, they come in specified positions relative to declaration specifiers (the semantics before and after the declaration specifiers are different), and in the middle isn't such a position.
[Bug target/108742] Incorrect constant folding with (or exposed by) -fexcess-precision=standard
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108742 --- Comment #11 from joseph at codesourcery dot com --- As discussed, FLT_EVAL_METHOD applies to constants as well as to operations. See the example in C17 F.8.5, for example; it shows

    float y = 1.1e75f; // may raise exceptions

since 1.1e75f may be evaluated to a wider range and precision than those of float, in which case the conversion to the range and precision of float occurs at runtime (whereas if there is no excess range and precision for float, the constant is evaluated to positive infinity of type float at translation time).
[Bug middle-end/108623] We need to grow the precision field in tree_type_common for PowerPC
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108623 --- Comment #8 from joseph at codesourcery dot com --- See also bug 102989 (C2x _BitInt) regarding another case for which growing TYPE_PRECISION would be useful.
[Bug c/84764] Wrong warning "so large that it is unsigned" for __int128 constant
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84764 --- Comment #5 from joseph at codesourcery dot com --- Also, for it to become an extended integer type, it would be necessary to define integer constant suffixes and implement printf / scanf support in the library, because <stdint.h> is now required to provide intN_t / uintN_t when there is a matching standard or extended integer type, so would be required to provide int128_t / uint128_t, which in turn would require the corresponding INT128_C and UINT128_C macros, so requiring constant suffixes and printf / scanf support.
[Bug c/108531] Imaginary types are not supported, violating ISO C Annex G
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108531 --- Comment #6 from joseph at codesourcery dot com --- My only real addition to my previous comments in the referenced glibc bug report is that, given we defined _Float32 which has the same "not promoted at language level in variable arguments" property as _Imaginary float and let it have the ABI arising naturally from the back ends in the absence of target maintainers / ABI maintainers choosing something different, it would probably be reasonable to do the same thing for imaginary types; this case is rather different from _BitInt where there are significant ABI choices to be made for each architecture (and I've filed bugs on various ABI repositories to request that such ABIs be defined). It would be good if psABI maintainers kept up more with C standard features, however.
[Bug c/47781] warnings from custom printf format specifiers
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=47781 --- Comment #29 from joseph at codesourcery dot com --- As I said before, the issue is still how to define something general enough to be useful but that doesn't expose too much of the details of GCC's internal data structures for format checking.
[Bug libgcc/108279] Improved speed for float128 routines
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108279 --- Comment #17 from joseph at codesourcery dot com --- It's not part of the ABI for the Arm 32-bit Architecture (AAPCS32). https://github.com/ARM-software/abi-aa/blob/main/aapcs32/aapcs32.rst You can file an issue there if you want, though I don't know how interested the maintainers will be in that optional language feature.
[Bug tree-optimization/108068] [10/11/12/13 Regression] decimal floating point signed zero is not honored
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108068 --- Comment #5 from joseph at codesourcery dot com --- For DFP it's not just zero for which you can't infer an equivalence of values from an equality comparison; any finite value that can be represented with more than one quantum exponent (any value that can be represented with less precision than the type, unless it can only be represented with the largest or smallest possible quantum exponent) has the same property. So handling DFP zero here probably isn't enough to avoid bugs.
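An illustration of that property (a sketch; samequantumd64 is the C23 quantum-comparison function):

    #include <math.h>
    #include <stdbool.h>

    bool f (void)
    {
      _Decimal64 x = 1.0DD;    /* quantum exponent -1 */
      _Decimal64 y = 1.00DD;   /* quantum exponent -2 */
      /* Equal values, but not interchangeable representations: */
      return (x == y) && !samequantumd64 (x, y);   /* true */
    }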
[Bug c/108194] GCC won't treat two compatible function types as compatible if any of them (or both of them) is declared _Noreturn
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108194 --- Comment #3 from joseph at codesourcery dot com --- If you use typeof instead of __typeof, and -std=c2x, these types are treated as compatible. I deliberately kept the existing semantics for __typeof, and for typeof in pre-C2x modes, when implementing C2x typeof; see the commit message for commit fa258f6894801aef6785f0327594dc803da63fbd.
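A sketch of the difference:

    _Noreturn void f (void);
    void g (void);

    /* Accepted with -std=c2x: C2x typeof does not retain the noreturn
       property, so the function types are compatible.  With __typeof (or
       typeof in pre-C2x modes) they are treated as distinct. */
    typeof (f) *p = g;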
[Bug c/108043] [13 Regression] ICE: in fold_convert_loc, at fold-const.cc:2618 on invalid function braced initializer since r13-2205-g14cfa01755a66afb
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108043 --- Comment #3 from joseph at codesourcery dot com --- Probably the same as bug 107682.
[Bug c/108054] C2X auto with struct defined in statement expression
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108054 --- Comment #2 from joseph at codesourcery dot com --- The basic principle is that auto declarations should always be writable in a form without auto, so they should never result in a type escaping to a larger scope (but a rule expressed in that form would be very complicated to check, as discussed in the context of earlier drafts of the auto proposal with a rule expressed in such terms, hence the current version). Thus examples like these, where the type declared in a statement expression does escape via the type of that expression, are entirely appropriately rejected. For cases of non-ordinary identifiers declared in code using only standard constructs (not all of which are properly detected by GCC at present), and related issues for some cases other than auto, I provided BSI with a document c2x-declaration-context.pdf to include when submitting the NB comments I provided on the C2X CD to ISO, which gives 16 examples of code using such corner cases and discusses what general principles might be consistent with the interpretations previously applied by WG14 in some of those cases. So hopefully that will be included with the CD ballot results, if BSI includes that document as requested.
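A sketch of the construct being rejected (the structure type declared inside the statement expression would otherwise escape through the type of x):

    auto x = ({ struct s { int i; } v = { 1 }; v; });
    /* There is no way to write this declaration without auto: struct s
       is not visible at this scope, so the rejection is appropriate. */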
[Bug c/107980] va_start does not warn about an arbitrary number of arguments in C2x mode
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107980 --- Comment #17 from joseph at codesourcery dot com --- The details of not expanding in cases where it matters whether and how many times something is expanded - such as arguments expanding to have unbalanced parentheses - may be a non-obvious consequence that wasn't considered in WG14. The basic definition of ignoring the pp-tokens without converting them to tokens (and thus not requiring them to parse as any particular kind of C language construct) was clear enough from the paper (whether or not anyone felt the need to comment on that aspect of the definition).
[Bug c++/108001] unamed struct extension is documented for C++
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108001 --- Comment #1 from joseph at codesourcery dot com --- At least some cases of this are a standard C++ feature - which ones are still an extension for C++ and so need documenting as such?
[Bug c/107980] va_start does not warn about an arbitrary number of arguments in C2x mode
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107980 --- Comment #12 from joseph at codesourcery dot com --- The standard rule about not using extra arguments means that any warnings would need to avoid even converting those arguments from pp-tokens to tokens; it's OK for them to contain pp-tokens that cannot be converted to tokens. I think the accepted change to the standard was entirely clear about ignoring the extra arguments; it wasn't some obscure non-obvious consequence that such code would become valid.
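For reference, the C23-style usage being discussed (any arguments after ap are ignored without being converted to tokens):

    #include <stdarg.h>

    int sum (int count, ...)
    {
      va_list ap;
      va_start (ap);   /* C23 form: only ap is used */
      int s = 0;
      for (int i = 0; i < count; i++)
        s += va_arg (ap, int);
      va_end (ap);
      return s;
    }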
[Bug c/107954] Support -std=c23/gnu23 as aliases of -std=c2x/gnu2x
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107954 --- Comment #3 from joseph at codesourcery dot com --- Editorial review before the CD ballot slipped the schedule. Second-round editorial review after a huge number of changes in the editorial review slipped the schedule. Getting a final draft with all the changes into the CD ballot process slipped the schedule. The January meeting is two weeks later than originally planned because of the schedule delays. I just sent 206 comments on the CD to BSI for submission to ISO as the UK comments and if other NBs have similar numbers of comments, (a) it wouldn't surprise me if there's difficulty in getting through all of them in a single week's meeting (even if we find a way not to need to discuss all the editorial comments individually in the meeting) and (b) there could easily be further delays getting all the changes into the working draft and reviewed for being correctly applied. So despite the 56-day "ISO editing" period on the schedule before the DIS ballot (which may be meant to deal with all the editorial issues ISO comes up with at the last minute), it's entirely plausible there could be schedule slip for the DIS ballot - even supposing we don't need any extra ballot stages (CD2 or FDIS). So while it's possible that the new standard will be published in 2023 - or with a __STDC_VERSION__ value from 2023 even if published in 2024 - there is plenty of scope for the schedule to slip given the amount of work that still needs to be done on the draft.
[Bug other/55899] GCC should provide built-ins in data types flavor/version/variation
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55899 --- Comment #3 from joseph at codesourcery dot com --- C2x provides type-generic versions of various such operations, in addition to type-specific versions (but the type-specific versions are for unsigned char through unsigned long long, so don't themselves address the issue from this PR, and my view is that type-generic functions are error-prone in cases, such as clz, where the result depends on the type and not just the integer value of the argument). Since <stdbit.h> is a library facility (with functions thus expected to be available with external linkage to link against whether or not the header is included) I expect to implement it in due course in glibc, not GCC, though as usual built-in functions with the standard names would be appropriate in GCC.
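The clz-style pitfall mentioned above, in <stdbit.h> terms (same value, different types, different results):

    #include <stdbit.h>

    void f (void)
    {
      unsigned int a = stdc_leading_zeros ((unsigned char) 1);  /* 7 */
      unsigned int b = stdc_leading_zeros (1u);   /* 31 with 32-bit int */
      (void) a; (void) b;
    }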
[Bug c/107405] [13 Regression] enum change causing Linux kernel to fail to build due to Linux depending on old behavior
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107405 --- Comment #19 from joseph at codesourcery dot com --- On Tue, 22 Nov 2022, macro at orcam dot me.uk via Gcc-bugs wrote:

> Well, according to the assertion triggered `typeof (EM_MAX_SLOTS)' will
> yield a data type of a different width depending on the compiler version.

I don't think typeof(expression) should really be considered part of the ABI. Technically, yes, someone could declare a variable in a header as "extern typeof (EM_MAX_SLOTS) x;", and then the ABI for that variable would change. What hasn't changed is the normal case - where the variable is declared as "extern enum whatever_the_tag_is x;" - the size of the enum type itself is the same as what it was before.
[Bug c/107405] [13 Regression] enum change causing Linux kernel to fail to build due to Linux depending on old behavior
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107405 --- Comment #17 from joseph at codesourcery dot com --- On Sat, 19 Nov 2022, macro at orcam dot me.uk via Gcc-bugs wrote:

> If in older C standard versions such enums are invalid, then I think
> this should be a hard error rather than a silent ABI change for the code
> produced. Not all code out there will have sanity checks such as the

There is no ABI change. The size of the enum type does not change. What changes is the type given to enum constants in such an enum, if the value of the enum constant fits in int (now all enum constants in such an enum have the enum type rather than only those outside the range of int having the enum type).
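A sketch of what did and did not change (assuming 32-bit int):

    enum e { SMALL = 1, BIG = 0x100000000 };

    /* Unchanged: sizeof (enum e) - the enum type is compatible with the
       same 64-bit integer type as before.  Changed: SMALL previously had
       type int, so sizeof (SMALL) was 4; now all the constants have the
       enum type, so sizeof (SMALL) == sizeof (enum e) == 8. */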
[Bug middle-end/107702] {,unsigned} __int128 to _Float16 conversion shouldn't use libgcc routines
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107702 --- Comment #2 from joseph at codesourcery dot com --- (Where "check for any high bits being set" needs appropriate adjustment in the case of negative values for conversion from signed __int128, e.g. "the high 64 bits aren't the sign-extension of the low 64 bits" would be an appropriate condition to know there must be an overflow.)
[Bug middle-end/107702] {,unsigned} __int128 to _Float16 conversion shouldn't use libgcc routines
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107702 --- Comment #1 from joseph at codesourcery dot com --- On Tue, 15 Nov 2022, jakub at gcc dot gnu.org via Gcc-bugs wrote:

> _Float16 f9 (__int128 x) { return x; }
> _Float16 f10 (__int128 x) { return x; }

I suppose one of those is meant to be unsigned __int128?

> verifies that the __floattihf implementation always gives the same answer as
> does signed SImode -> SFmode cast followed by SFmode -> HFmode conversion.
> Isn't a conversion of a value > 65504 && value < -65504 UB in both C and C++?

No, an overflow is defined to produce an appropriately rounded value, either an infinity or the largest finite value with the right sign, depending on the rounding mode, with "overflow" and "inexact" raised (note that the exact threshold for overflow depends on the rounding mode).

> So, can't we just implement the TI -> HF conversions by say ignoring upper 64
> bits of the __int128?

No. You could check for any high bits being set and e.g. use a different path that converts a smaller value of the right sign that's still guaranteed to overflow, if that's beneficial on a particular architecture (it might well be if there's a hardware instruction for converting from 32-bit or 64-bit integers to _Float16, but not one for conversion from 128-bit integers, for example). Or you could go via converting such a saturated value to SFmode if that's beneficial (standard C doesn't provide any way to count the number of times an exception is raised by a single operation, or the order in which they are raised, so it's OK that such an approach may raise "inexact" before "overflow" and possibly more than once).
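A sketch combining this comment with the adjustment in comment #2 above (an illustration, not the actual libgcc routine):

    #include <stdint.h>

    _Float16 titohf (__int128 x)
    {
      int64_t lo = (int64_t) x;
      int64_t hi = (int64_t) (x >> 64);

      /* If the high 64 bits are the sign-extension of the low 64 bits,
         the value fits in int64_t: use the (possibly hardware) DI -> HF
         conversion directly. */
      if (hi == (lo >> 63))
        return (_Float16) lo;

      /* Otherwise overflow is guaranteed: convert a smaller value of the
         right sign that still overflows, producing the correctly rounded
         result and the "overflow"/"inexact" exceptions. */
      return (_Float16) (hi < 0 ? INT64_MIN : INT64_MAX);
    }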
[Bug target/105480] Vectorized `isnan` appears to trigger FPE on ppc64le
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105480 --- Comment #12 from joseph at codesourcery dot com --- __builtin_isnan must not raise "invalid" for signaling NaN arguments. __builtin_isunordered (i.e. UNORDERED / UNORDERED_EXPR; standard macro isunordered) must raise "invalid" for signaling NaN arguments. The -ftrapping-math option (which is on by default) means code transformations that either add or remove exceptions should be avoided (though this isn't implemented very consistently, especially as regards transformations that remove exceptions). Thus, transforming in either direction between __builtin_isnan and UNORDERED_EXPR is undesirable given -ftrapping-math -fsignaling-nans. Given -fno-trapping-math or (default) -fno-signaling-nans, the transformation is OK.
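The distinction in code (per the exception requirements above; with -fsignaling-nans -ftrapping-math the two forms must not be rewritten into each other):

    int classify (double x)
    {
      int a = __builtin_isnan (x);           /* no "invalid", even for a
                                                signaling NaN argument */
      int b = __builtin_isunordered (x, x);  /* raises "invalid" for a
                                                signaling NaN argument */
      return a + b;
    }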
[Bug target/105480] Vectorized `isnan` appears to trigger FPE on ppc64le
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105480 --- Comment #10 from joseph at codesourcery dot com --- For scalar isnan see bug 66462. (The claim in bug 66462 comment 19 that there was ever a working version of that patch ready to go in is incorrect: November 2016 is older than June 2017.)
[Bug c++/107571] Missing fallthrough attribute diagnostics
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107571 --- Comment #3 from joseph at codesourcery dot com --- On Tue, 8 Nov 2022, jakub at gcc dot gnu.org via Gcc-bugs wrote:

> And looking at the C wording in n2596.pdf, seems it is different again:

That's a very old version. N3054 is the most recent public draft (SC22 N5777 is more recent than that and is the actual CD ballot text).

> "The next block item (6.8.2) that would be encountered after a fallthrough
> declaration shall be a case label or default label associated with the
> smallest enclosing switch statement."

It's not exactly clear what "next block item" is for any of the examples you give - next lexically (OK once the current one is exited) or in execution (no good for a Constraint)? And thus not clear that any of these are invalid. I've noted that the inconsistency with C++ should be raised in an NB comment.
[Bug c/102989] Implement C2x's n2763 (_BitInt)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102989 --- Comment #32 from joseph at codesourcery dot com --- On Fri, 28 Oct 2022, jakub at gcc dot gnu.org via Gcc-bugs wrote:

> > That said, if C allows us to limit to 128bits then let's do that for now.
> > 32bit targets will still see all the complication when we give that a stab.
>
> I'm afraid once we define BITINT_MAXWIDTH, it will become part of the ABI,
> so we can't increase it afterwards.

I don't think it's part of the ABI; I think it's always OK to increase BITINT_MAXWIDTH, as long as the wider types don't need more alignment than the previous choice of max_align_t. Thus, starting with a 128-bit limit (or indeed a 64-bit limit on 32-bit platforms, so that all the types fit within existing modes supported for arithmetic), and adding support for wider _BitInt later, would be a reasonable thing to do. (You still have ABI considerations even with such a limit: apart from the padding question, on x86_64 the ABI says _BitInt(128) is 64-bit aligned but __int128 is 128-bit aligned.)

> Anyway, I'm afraid we probably don't have enough time to implement this
> properly in stage1, so might need to target GCC 14 with it. Unless somebody
> spends on it the remaining 2 weeks full time.

I think https://gcc.gnu.org/pipermail/gcc/2022-October/239704.html is still current as a list of C2x language features likely not to make it into GCC 13. (I hope to get auto and constexpr done in the next two weeks, and the other C2x language features not on that list are done.)
[Bug c/102989] Implement C2x's n2763 (_BitInt)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102989 --- Comment #31 from joseph at codesourcery dot com --- On Fri, 28 Oct 2022, rguenth at gcc dot gnu.org via Gcc-bugs wrote:

> I wouldn't go with a new tree code, given semantics are INTEGER_TYPE it
> should be an INTEGER_TYPE.

Implementation note in that case: bit-precise integer types aren't allowed as underlying types for enums, so the code in c-parser.cc:c_parser_enum_specifier checking underlying types:

      else if (TREE_CODE (specs->type) != INTEGER_TYPE
               && TREE_CODE (specs->type) != BOOLEAN_TYPE)
        {
          error_at (enum_loc, "invalid %<enum%> underlying type");

would then need to check that the type isn't a bit-precise type.
[Bug c/107405] [13 Regression] enum change causing Linux kernel to fail to build due to Linux depending on old behavior
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107405 --- Comment #13 from joseph at codesourcery dot com --- If the real issue in a particular place in the kernel is that a single (anonymous) enum type is being used for lots of different kinds of constants, then the appropriate fix in the kernel might be to split up the enum, so that large constants of one kind don't affect the types of small constants of a different kind.
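A sketch of that kind of fix (hypothetical constants):

    /* Before: one anonymous enum, so one large constant forces all of the
       constants to have the wider enum type under the C2x rule. */
    enum { FLAG_A = 1, FLAG_B = 2, BIG_LIMIT = 0x100000000 };

    /* After: the small constants keep type int. */
    enum { FLAG_A2 = 1, FLAG_B2 = 2 };
    enum { BIG_LIMIT2 = 0x100000000 };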
[Bug c/102989] Implement C2x's n2763 (_BitInt)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102989 --- Comment #25 from joseph at codesourcery dot com --- On Wed, 26 Oct 2022, jakub at gcc dot gnu.org via Gcc-bugs wrote:

> Seems LLVM currently only supports _BitInt up to 128, which is kind of
> useless for users, those sizes can be easily handled as bitfields and
> performing normal arithmetics on them.

Well, it would be useful for users of 32-bit targets who want 128-bit arithmetic, since we only support __int128 for 64-bit targets.

> As for implementation, I'd like to brainstorm about it a little bit.
> I'd say we want a new tree code for it, say BITINT_TYPE.

OK. The signed and unsigned types of each precision do need to be distinguished from all the existing kinds of integer types (including the ones used for bit-fields: _BitInt types aren't subject to integer promotions, whereas bit-fields narrower than int are). In general the types operate like integer types (in terms of allowed operations etc.) so INTEGRAL_TYPE_P would be true for them. The main difference at front-end level is the lack of integer promotions, so that arithmetic can be carried out directly on narrower-than-int operands (but a bit-field declared with a _BitInt type gets promoted to that _BitInt type, e.g. unsigned _BitInt(7):2 acts as unsigned _BitInt(7) in arithmetic). Unlike the bit-field types, there's no such thing as a signed _BitInt(1); signed bit-precise integer types must have at least two bits.

> TYPE_PRECISION unfortunately is only 10-bit, that is not enough, so it
> would need the full precision to be specified somewhere else.

That may complicate things because of code expecting TYPE_PRECISION to be meaningful for all integer types. But that could be addressed without needing to review every use of TYPE_PRECISION by e.g. changing TYPE_PRECISION to check wherever the _BitInt precision is specified, and instead using e.g. TYPE_RAW_PRECISION for direct access to the tree field (so only lvalue uses of TYPE_PRECISION would then need updating, other accesses would automatically get the full precision).

> And have targetm specify the ABI details (size of a limb (which would need
> to be exposed to libgcc with -fbuilding-libgcc), unless it is everywhere the
> same whether the limbs are least significant to most significant or vice
> versa, and whether the highest limb is sign/zero extended or unspecified
> beyond the precision.

I haven't seen an ABI specified for any architecture supporting big-endian yet, but I'd tend to expect such architectures to use big-endian ordering for the _BitInt representation to be consistent with existing integer types.

> What about the large ones?

I think we can at least slightly simplify things by assuming for now _BitInt multiplication / division / modulo are unlikely to be used much for arguments large enough that Karatsuba or asymptotically faster algorithms become relevant; that is, that naive quadratic-time algorithms are sufficient for those operations.
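The promotion difference in a nutshell (a sketch of the N2763 semantics summarized above):

    void f (void)
    {
      unsigned char uc = 200;
      unsigned _BitInt(8) ub = 200;

      int r1 = uc + uc;   /* operands promote to int: r1 == 400 */
      int r2 = ub + ub;   /* no promotion: computed in unsigned _BitInt(8),
                             wraps to 144 */
      (void) r1; (void) r2;
    }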
[Bug c/102989] Implement C2x's n2763 (_BitInt)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102989 --- Comment #13 from joseph at codesourcery dot com --- On Tue, 25 Oct 2022, jakub at gcc dot gnu.org via Gcc-bugs wrote: > The x86-64 psABI has been changed for this: > https://gitlab.com/x86-psABIs/x86-64-ABI/-/commit/8ca45392570e96920f8a15d903d6122f6d263cd0 > but the state of the padding bits isn't mentioned there anywhere. I think the words "The value of the unused bits beyond the width of the \texttt{_BitInt(N)} value but within the size of the \texttt{_BitInt(N)} are unspecified when stored in memory or register." are what deals with padding (both padding within sizeof(_BitInt(N)) bytes, and bytes within a register or stack slot used for argument passing / return but outside sizeof(_BitInt(N)) bytes). (Of course different architectures might make different choices for how to handle padding.) I filed https://github.com/riscv-non-isa/riscv-elf-psabi-doc/issues/300 in July to request an ABI for _BitInt on RISC-V. I've just now filed https://github.com/ARM-software/abi-aa/issues/175 to request such an ABI for both 32-bit and 64-bit Arm, and https://gitlab.com/x86-psABIs/i386-ABI/-/issues/5 to request such an ABI for 32-bit x86. I don't know if there are other psABIs with public issue trackers where such issues can be filed (but we'll need some sensible default anyway for architectures where we can't get an ABI properly specified in an upstream-maintained ABI document).
[Bug middle-end/107370] long double sqrtl constant folding is wrong
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107370 --- Comment #8 from joseph at codesourcery dot com --- On Mon, 24 Oct 2022, jacob at jacob dot remcomp.fr via Gcc-bugs wrote:

> --- Comment #3 from jacob navia ---
> 1 trunk gcc:
> 2 .LC1:
> 3    .word 325511829   # 0x1366EA95 <<<--- SHOULD BE 325508205
> 4    .word -922176773  # 0xC908B2FB OK
> 5    .word -429395012  # 0xE667F3BC OK
> 6    .word 1073703433  # 0x3FFF6A09 OK
>
> This data is wrong, I repeat, the first number (line 3) should be 325508205
> or 0x1366DC6D.

Why do you think that number is wrong? If I compute the square root of 2**225 using GMP (so not involving MPFR at all, just integer square root in GMP), I get 1366ea95 as the low 32 bits (and the next bit is a 0, so rounding toward 0 is correct in this case).
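That check can be reproduced with GMP alone (mpz_sqrt computes the truncated integer square root):

    #include <gmp.h>
    #include <stdio.h>

    int main (void)
    {
      mpz_t x;
      mpz_init (x);
      mpz_ui_pow_ui (x, 2, 225);  /* x = 2**225 */
      mpz_sqrt (x, x);            /* x = floor(sqrt(2**225)): the 113-bit
                                     significand, truncated */
      gmp_printf ("%Zx\n", x);    /* ends in ...1366ea95 */
      mpz_clear (x);
      return 0;
    }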
[Bug c/107314] [13 Regression] New -Wsign-compare since r13-3360-g3b3083a598ca3f4b
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107314 --- Comment #1 from joseph at codesourcery dot com --- This is a deliberate change: if any enumerators are outside the range of int, then all enumerators now have the enum type, rather than those outside the range of int having the enum type and those inside the range of int having type int. (The logic to determine the integer type with which the enum type is compatible is unchanged. In the case of this testcase, it produces unsigned int.) While, as noted in the commit message, the change could be made conditional on C2x mode if necessary, I'm doubtful that it would actually help grub; presumably they'd rather change things so they work in C2x mode than keep using an older mode after -std=gnu2x becomes the default, or postpone the fix until then.
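A hypothetical example of the behavior (not the grub testcase itself):

  enum e { E1 = 1, E2 = 0x80000000 };  /* E2 is outside the range of int */

  int
  f (int i)
  {
    /* E1 formerly had type int; with the change it has type enum e
       (compatible with unsigned int here), so this signed/unsigned
       comparison now triggers -Wsign-compare.  */
    return i < E1;
  }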
[Bug bootstrap/107059] [13 regression] bootstrap failure after r13-2887-gb04208895fed34
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107059 --- Comment #22 from joseph at codesourcery dot com --- Even with the fixincluded headers properly being used, the powerpc64le issue still applies because of the issue I noted in https://sourceware.org/pipermail/libc-alpha/2022-September/142259.html with certain required changes to the powerpc version of bits/floatn.h not being covered by the fixincludes fixes added. You get errors such as:

  /scratch/jmyers/glibc-bot/build/compilers/powerpc64le-linux-gnu/gcc/gcc/include-fixed/bits/floatn.h:88:9: error: multiple types in one declaration
     88 | typedef __float128 _Float128;
        |         ^~

while building libstdc++. (Whereas other architectures can build GCC OK but then run into failures building glibc that my glibc patch is intended to address.)
[Bug middle-end/106831] [13 Regression] mpfr-4.1.0 started failing 2 tests: tget_set_d64 and tget_set_d128
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106831 --- Comment #12 from joseph at codesourcery dot com --- The difference with __ibm128 is that in that case there is no semantic significance to whether the low part is +0 or -0, or what the low part is at all when the high part is a NaN. At the C level, such __ibm128 representations should be considered different representations of the same value, not different values, whereas different DFP quantum exponents for the same real number correspond to different values that compare equal. (Noncanonical DFP encodings might be more closely analogous to the __ibm128 variants, except that most operations aren't supposed to return a noncanonical encoding even if inputs have such an encoding.)
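To illustrate the DFP point (a sketch, assuming a target with decimal floating-point support):

  void
  f (void)
  {
    _Decimal64 a = 1.0DD;   /* value 1, quantum exponent -1 */
    _Decimal64 b = 1.00DD;  /* value 1, quantum exponent -2 */
    /* Different values in the DFP sense - their quantum exponents
       differ - yet they compare equal.  */
    int eq = (a == b);      /* 1 */
    (void) eq;
  }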
[Bug c++/106652] [C++23] P1467 - Extended floating-point types and standard names
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106652 --- Comment #4 from joseph at codesourcery dot com --- Regarding mangling: I expect this change should fix bug 85518. General: I expect some glibc header changes might be appropriate, where they currently assume __FloatN keywords aren't supported in C++. And where glibc headers handle type-generic operations for C++ by defining appropriate overloaded functions in the headers, make sure the overloads for _Float128 work with both _Float128 and __float128 where supported and distinct, or otherwise adjust the headers as needed to handle both types. (Also, so far we don't have _Float16 support in glibc, and while it would be a sensible feature in principle, there would be issues to consider with the impact on minimum GCC versions for building glibc on relevant architectures, unless some kind of hack is used to allow _Float16 functions to be built and to get the correct ABI even when built with an older compiler. Requiring GCC 7 to build glibc for AArch64 and Arm might well be reasonable now; requiring GCC 12 for x86/x86_64 or GCC 13 for RISC-V probably not for a few years.)
[Bug target/106574] gcc 12 with O3 leads to failures in glibc's y1f128 tests
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106574 --- Comment #11 from joseph at codesourcery dot com --- On Wed, 10 Aug 2022, michael.hudson at canonical dot com via Gcc-bugs wrote:
> I just changed
>
>     z = xx * xx;
>
> to
>
>     z = math_opt_barrier(xx * xx);
>
> which perhaps isn't sufficient.

That wouldn't prevent the multiplication being moved before SET_RESTORE_ROUNDL, though it should suffice for the later computations as they all depend on z.

> But my reading of the assembly is that the issue is that some of the math
> code is being moved _after_ the restore of the fpu state implied by
> SET_RESTORE_ROUNDL (FE_TONEAREST).

To avoid code being moved after the restore, "math_force_eval (p);" just before the return would be appropriate.
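Combining the two suggestions, the overall shape inside the glibc function would be roughly as follows (a sketch, not the actual y1f128 code; math_opt_barrier and math_force_eval are glibc-internal macros from math-barriers.h):

  SET_RESTORE_ROUNDL (FE_TONEAREST);
  /* Barrier on the input: prevents the computation from being hoisted
     above the rounding-mode change.  */
  xx = math_opt_barrier (xx);
  z = xx * xx;
  /* ... further computations, all depending on z, producing p ... */
  /* Barrier on the result: prevents computations from sinking below
     the restore of the floating-point state at scope exit.  */
  math_force_eval (p);
  return p;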
[Bug target/106574] gcc 12 with O3 leads to failures in glibc's y1f128 tests
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106574 --- Comment #7 from joseph at codesourcery dot com --- I'd suggest looking at the generated assembly. I don't know exactly what you mean by "putting a math_opt_barrier on this line"; it would need to be used in a way that ensures a dependency for all the code after SET_RESTORE_ROUNDL (for example, "xx = math_opt_barrier (xx);").
[Bug target/106574] gcc 12 with O3 leads to failures in glibc's y1f128 tests
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106574 --- Comment #5 from joseph at codesourcery dot com --- It's possible code is being moved across SET_RESTORE_ROUNDL, in which case maybe math_opt_barrier needs to be used in glibc code to prevent that movement.
[Bug c/106117] Use of option -fexcess-precision for operation-by-operation emulation for _Float16 arithmetics.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106117 --- Comment #7 from joseph at codesourcery dot com --- FLT_EVAL_METHOD of 0 gives _Float16 excess precision ("evaluate all operations and constants, whose semantic type comprises a set of values that is a strict subset of the values of float, to the range and precision of float; evaluate all other operations and constants to the range and precision of the semantic type"). See the -fpermitted-flt-eval-methods= option that's used to control whether FLT_EVAL_METHOD may be defined to a value such as 16 that's not part of C11.
[Bug c/106117] Use of option -fexcess-precision for operation-by-operation emulation for _Float16 arithmetics.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106117 --- Comment #5 from joseph at codesourcery dot com --- The idea with "16" is to say that's the exact FLT_EVAL_METHOD value (defined in C23 Annex H) whose semantics should be followed. It would affect float/double promotion on i386 as well (the back end gives an error that the combination of that option with -mfpmath=387 is unsupported).
[Bug c/106117] Use of option -fexcess-precision for operation-by-operation emulation for _Float16 arithmetics.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106117 --- Comment #2 from joseph at codesourcery dot com --- "none" was something I mentioned as a possible future argument when originally posting -fexcess-precision <https://gcc.gnu.org/legacy-ml/gcc-patches/2008-11/msg00105.html>. I still think it's the appropriate name for that case. (Doing +-*/ operations on float and then immediately converting back to _Float16 has exactly the same semantics as direct _Float16 arithmetic; float has sufficient precision that no double rounding issues arise; that doesn't apply to fma, however. The effect of excess precision is that e.g. in "a + b + c", the value of a + b with the range and precision of float is what gets added to c; there's no intermediate truncation of a + b to _Float16. But (_Float16)(a + b) + c would have such a truncation, because casts and conversion as if by assignment remove excess range and precision.)
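Concretely (a sketch; the variable names are mine):

  _Float16
  f (_Float16 a, _Float16 b, _Float16 c)
  {
    /* With FLT_EVAL_METHOD 0 excess precision, a + b is evaluated to
       float range and precision and that float value is added to c;
       only the final result is converted back to _Float16.  */
    _Float16 r1 = a + b + c;

    /* The cast removes excess range and precision: a + b is truncated
       to _Float16 before c is added.  */
    _Float16 r2 = (_Float16) (a + b) + c;

    return r1 - r2;  /* may be nonzero: the intermediate truncation matters */
  }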
[Bug c/105969] [12 Regression] ICE in Floating point exception
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105969 --- Comment #3 from joseph at codesourcery dot com --- Overlapping elements are simply a consequence of the zero-sized-objects extension; I don't see anything invalid here to reject (though there might be undefined behavior at runtime when sprintf accesses bytes beyond the zero-sized object; even if char a[0][0][0] is treated like a flexible array member, it's not clear a flexible array member whose elements themselves have zero size can validly be used to access any bytes).
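For reference, the kind of object under discussion (using GNU C's zero-length-array extension):

  /* Every element a[i][j][k] has size zero and the same address; the
     object as a whole occupies no bytes.  */
  char a[0][0][0];

  /* A call such as sprintf (a[0][0], "x") would then access bytes
     beyond the zero-sized object - the potential runtime undefined
     behavior noted above.  */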
[Bug libquadmath/105101] incorrect rounding for sqrtq
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105101 --- Comment #24 from joseph at codesourcery dot com --- On Mon, 13 Jun 2022, already5chosen at yahoo dot com via Gcc-bugs wrote:
> > For long double it's sysdeps/ieee754/soft-fp/s_fmal.c in glibc - some
> > adjustments would be needed to be able to use that as a version for
> > _Float128 (where sysdeps/ieee754/float128/s_fmaf128.c currently always
> > uses the ldbl-128 version), in appropriate cases.
>
> Way to complicated for mere gcc user like myself.
> Hopefully, Thomas Koenig will understand better.

glibc needs to handle a lot of different configurations with various choices of supported floating-point types - resulting in complexity around how the particular function implementations are chosen for a given system - as well as other portability considerations. There is also complexity resulting from the functions covering many different use cases - and thus needing to follow all the IEEE 754 requirements for those functions although many users may only care about some of those requirements.

> > The underlying arithmetic (in libgcc, not libquadmath) uses the hardware
> > rounding mode and exceptions (if the x87 and SSE rounding modes disagree,
> > things are liable to go wrong), via various macros defined in
> > sfp-machine.h.
>
> Oh, a mess!
> With implementation that is either 99% or 100% integer being controlled by
> SSE control is WRONG. x87 control word, of course, is no better than SSE.
> But BOTH I have no words.

Any given libgcc build will only use one of the rounding modes (SSE for 64-bit, x87 for 32-bit) - but which exception state gets updated in the 32-bit case depends on whether libgcc was built for SSE arithmetic.

As far as IEEE 754 is concerned, there is only one rounding mode for all operations with a binary result (and a separate rounding mode for decimal FP results). As far as the C ABI is concerned, it's not valid for the two rounding modes to be different at any ABI boundary; fesetround will always set both, while fetestexcept etc. handle both sets of exception flags by ORing them together. *But* glibc's internal optimizations for code that saves and restores floating-point state internally try to manipulate only SSE state, or only x87 state, in functions that should only use one of those two sets of state, so code called from within those functions may need to work properly when the two rounding modes are different. Together with the above point about which exception state gets used in libgcc support for _Float128, this results in those optimizations, in the float128 case, sometimes only needing to handle SSE state but sometimes needing to handle both sets of state (see glibc's sysdeps/x86/fpu/fenv_private.h).
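Viewed from standard C, the ABI-level guarantee described above looks like this (a sketch for x86 glibc; link with -lm):

  #include <fenv.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* On x86 glibc, fesetround sets both the x87 control word and the
       SSE MXCSR rounding field, keeping the two in sync.  */
    fesetround (FE_UPWARD);

    feclearexcept (FE_ALL_EXCEPT);
    volatile double d = 1.0;
    d /= 3.0;  /* inexact, whichever unit performs the division */

    /* fetestexcept reports the OR of the x87 and SSE exception flags.  */
    if (fetestexcept (FE_INEXACT))
      puts ("inexact raised");
    return 0;
  }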
[Bug libquadmath/105101] incorrect rounding for sqrtq
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105101 --- Comment #22 from joseph at codesourcery dot com --- On Mon, 13 Jun 2022, already5chosen at yahoo dot com via Gcc-bugs wrote:
> > The function should be sqrtf128 (present in glibc 2.26 and later on
> > x86_64, x86, powerpc64le, ia64). I don't know about support in non-glibc
> > C libraries.
>
> x86-64 gcc on Godbolt does not appear to know about it.
> I think, Godbolt uses rather standard Linux with quite new glibc and headers.
> https://godbolt.org/z/Y4YecvxK6

Make sure to define _GNU_SOURCE or __STDC_WANT_IEC_60559_TYPES_EXT__ to get these declarations.

> May be. I don't know how to get soft-fp version.

For long double it's sysdeps/ieee754/soft-fp/s_fmal.c in glibc - some adjustments would be needed to be able to use that as a version for _Float128 (where sysdeps/ieee754/float128/s_fmaf128.c currently always uses the ldbl-128 version), in appropriate cases.

> It seems, you didn't pay attention that in my later posts I am giving
> implementations of binary128 *division* rather than sqrtq().

Ah - binary128 division is nothing to do with libquadmath at all (the basic arithmetic operations go in libgcc, not libquadmath). Using a PR about one issue as an umbrella discussion of various vaguely related things is generally confusing and unhelpful to tracking the status of what is or is not fixed.

In general, working out how to optimize the format-generic code in soft-fp if possible would be preferred to writing format-specific implementations. Note that for multiplication and division there are already various choices of implementation approaches that can be selected via macros defined in sfp-machine.h.

> BTW, I see no mentioning of rounding control or of any sort of exceptions in
> GCC libquadmath docs. No APIs with names resembling fesetround() or
> mpfr_set_default_rounding_mode().

The underlying arithmetic (in libgcc, not libquadmath) uses the hardware rounding mode and exceptions (if the x87 and SSE rounding modes disagree, things are liable to go wrong), via various macros defined in sfp-machine.h.
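For example, the following should compile and run on glibc 2.26 or later on the listed architectures (a sketch; the macro must be defined before any header is included, and the program links with -lm):

  #define __STDC_WANT_IEC_60559_TYPES_EXT__
  #include <math.h>
  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    _Float128 r = sqrtf128 (2.0f128);
    char buf[64];
    /* strfromf128 (declared in stdlib.h) formats a _Float128.  */
    strfromf128 (buf, sizeof buf, "%.36g", r);
    puts (buf);
    return 0;
  }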
[Bug libquadmath/105101] incorrect rounding for sqrtq
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105101 --- Comment #20 from joseph at codesourcery dot com --- On Sat, 11 Jun 2022, already5chosen at yahoo dot com via Gcc-bugs wrote:
> On MSYS2 _Float128 and __float128 appears to be mostly the same thing,
> mapped to the same library routines with significant difference that
> _Float128 is not accessible from C++. Since all my test benches are written
> in C++ I can't even validate that what I wrote above is 100% true.
>
> Also according to my understanding of glibc docs (not the clearest piece of
> text that I ever read) a relevant square root routine should be named
> sqrtf128().
> Unfortunately, nothing like that appears to be present in either math.h or
> in library. Am I doing something wrong?

The function should be sqrtf128 (present in glibc 2.26 and later on x86_64, x86, powerpc64le, ia64). I don't know about support in non-glibc C libraries.

> Right now, there are only two [gcc] platforms with hw binary128 - IBM POWER
> and IBM z. I am not sure about the later, but the former has xssqrtqp
> instruction which is likely the right way to do sqrtq()/sqrtf128() on this
> platform. If z is the same, which sound likely, then implementation based
> on binary128 mul/add/fma by now has no use cases at all.

That may well be the case for sqrt.

> > fma is a particularly tricky case because it *is* required to be correctly
> > rounding, in all rounding modes, and correct rounding implies correct
> > exceptions, *and* correct exceptions for fma includes getting right the
> > architecture-specific choice of whether tininess is detected before or
> > after rounding.
>
> I suspect that by strict IEEE-754 rules sqrt() is the same as fma(), i.e.
> you have to calculate a root to infinite precision and then to round with
> accordance to current mode.

Yes, but fma has the extra complication of the architecture-specific tininess detection rules (to quote IEEE 754, "The implementer shall choose how tininess is detected [i.e. from the two options listed immediately above this quote in IEEE 754], but shall detect tininess in the same way for all operations in radix two"), which doesn't apply to sqrt because sqrt results can never underflow. (I expect the existing soft-fp version of fma in glibc to be rather better optimized than the soft-fp version of sqrt.)

> I don't quite or understand why you say that. First, I don't remember using
> any double math in the variant of sqrtq() posted here. So, may be, you
> meant division?
> Here I use doable math, but IMO, I use it in a way that never causes
> exceptions.

Yes, the double division. If that division can ever be inexact when the result of the square root itself is exact, you have a problem (which in turn could lead to actual incorrectly rounded results from other functions such as the square root operations rounding to a narrower type, when the round-to-odd versions of those functions are used, because the round-to-odd technique relies on correct "inexact" exceptions).
[Bug libquadmath/105101] incorrect rounding for sqrtq
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105101 --- Comment #18 from joseph at codesourcery dot com --- libquadmath is essentially legacy code. People working directly in C should be using the C23 _Float128 interfaces and *f128 functions, as in current glibc, rather than libquadmath interfaces (unless their code needs to support old glibc or non-glibc C libraries that don't support _Float128 in C23 Annex H). It would be desirable to make GCC generate *f128 calls when appropriate from Fortran code using this format as well; see <https://gcc.gnu.org/pipermail/gcc-patches/2021-September/578937.html> for more discussion of the different cases involved.

Most of libquadmath is derived from code in glibc - some of it can now be updated from the glibc code automatically (see update-quadmath.py), other parts can't (although it would certainly be desirable to extend update-quadmath.py to cover that other code as well). See the commit message for commit 4239f144ce50c94f2c6cc232028f167b6ebfd506 for a more detailed discussion of what code comes from glibc and what is / is not automatically handled by update-quadmath.py. Since update-quadmath.py hasn't been run for a while, it might need changes to work with more recent changes to the glibc code.

sqrtq.c is one of the files not based on glibc code. That's probably because glibc didn't have a convenient generic implementation of binary128 sqrt to use when libquadmath was added - it has soft-fp implementations used for various architectures, but those require sfp-machine.h for each architecture (which maybe we do in fact have in libgcc for each relevant architecture, but it's an extra complication). Certainly making it possible to use code from glibc for binary128 sqrt would be a good idea, but while we aren't doing that, it should also be OK to improve sqrtq locally in libquadmath.

The glibc functions for this format are generally *not* optimized for speed yet (this includes the soft-fp-based versions of sqrt). Note that what's best for speed may depend a lot on whether the architecture has hardware support for binary128 arithmetic; if it has such support, it's more likely an implementation based on binary128 floating-point operations is efficient; if it doesn't, direct use of integer arithmetic, without lots of intermediate packing / unpacking into the binary128 format, is likely to be more efficient. See the discussion starting at <https://sourceware.org/pipermail/libc-alpha/2020-June/thread.html#115229> for more on this - glibc is a better place for working on most optimized function implementations than GCC. See also <https://core-math.gitlabpages.inria.fr/> - those functions are aiming to be correctly rounding, which is *not* a goal for most glibc libm functions, but are still quite likely to be faster than the existing non-optimized functions in glibc.

fma is a particularly tricky case because it *is* required to be correctly rounding, in all rounding modes, and correct rounding implies correct exceptions, *and* correct exceptions for fma includes getting right the architecture-specific choice of whether tininess is detected before or after rounding. Correct exceptions for sqrt are simpler, but to be correct for glibc it still needs to avoid spurious "inexact" exceptions - for example, from the use of double in intermediate computations in your version (see the optimized feholdexcept / fesetenv operations used in glibc for cases where exceptions from intermediate computations are to be discarded).
For functions that aren't required to be correctly rounding, the glibc manual discusses the accuracy goals (including on exceptions, e.g. avoiding spurious "underflow" exceptions from intermediate computations for results where the rounded result returned is not consistent with rounding a tiny, inexact value).
[Bug c/61469] language feature: Support for enum underlying type
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61469 --- Comment #9 from joseph at codesourcery dot com --- N2963 is up for discussion in WG14 tomorrow, but there are still significant issues with the wording to resolve.
[Bug c/105510] error: initializer element is not constant
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105510 --- Comment #4 from joseph at codesourcery dot com --- We have a documented extension:

  As a GNU extension, GCC allows initialization of objects with static
  storage duration by compound literals (which is not possible in ISO
  C99 because the initializer is not a constant). It is handled as if
  the object were initialized only with the brace-enclosed list if the
  types of the compound literal and the object match. The elements of
  the compound literal must be constant. If the object being
  initialized has array type of unknown size, the size is determined by
  the size of the compound literal.

So the question is whether this extension should also allow the case where a compound literal is used to initialize a sub-object.
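Concretely (hypothetical types; the first form is the documented extension, the second is the sub-object case in question):

  struct pair { int a, b; };

  /* Documented extension: a compound literal initializing the whole
     object with static storage duration.  */
  static struct pair p = (struct pair) { 1, 2 };

  /* The open question: a compound literal initializing only a
     sub-object of a static object.  */
  struct outer { struct pair q; };
  static struct outer o = { (struct pair) { 1, 2 } };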
[Bug bootstrap/105487] Sysroots without 32bit components cause mysterious errors
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105487 --- Comment #8 from joseph at codesourcery dot com --- I expect you'd also see this issue with build-many-glibcs.py (from glibc) if you remove the workaround code in that script:

    # GCC uses paths such as lib/../lib64, so make sure lib
    # directories always exist.
    mkdir_cmd = ['mkdir', '-p', os.path.join(policy.installdir, 'lib')]
    if policy.use_usr:
        mkdir_cmd += [os.path.join(policy.installdir, 'usr', 'lib')]
    cmdlist.add_command('mkdir-lib', mkdir_cmd)
[Bug target/105428] compilation never (?) finishes with __builtin_casinl() and __builtin_csqrtl() with -O -mlong-double-128
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105428 --- Comment #4 from joseph at codesourcery dot com --- If you can identify specific arguments passed to mpc_asin for which it is excessively slow, that should be reported as an MPC bug. Computing correctly rounded mpc_asin shouldn't need to be that slow - provided the algorithm used is appropriate to the input value. See for example how glibc implements casin / casinh / cacos / cacosh. Or https://dl.acm.org/doi/10.1145/275323.275324 (Hull et al, Implementing the complex arcsine and arccosine functions using exception handling, ACM TOMS vol. 23 no. 3 (Sep 1997) pp 299-335). That may require several different algorithms to be implemented, but each such algorithm is straightforward. That's different from the case of Bessel functions of high order - for which there is some literature about computational techniques that shouldn't take time proportional to the order, but where the algorithms are certainly a lot more complicated.
[Bug target/105428] compilation never (?) finishes with __builtin_casinl() and __builtin_csqrtl() with -O -mlong-double-128
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105428 --- Comment #1 from joseph at codesourcery dot com --- What MPC version are you using? There have been a few fixes for slowness in the MPC inverse trigonometric and hyperbolic functions over the years, though there may still be scope for substantial further improvements by choosing different algorithms for different ranges of inputs. If you're using current MPC then this case should probably be reported to the MPC maintainers.
[Bug target/103605] [PowerPC] fmin/fmax should be inlined always with xsmindp/xsmaxdp
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103605 --- Comment #4 from joseph at codesourcery dot com --- On Tue, 26 Apr 2022, guihaoc at gcc dot gnu.org via Gcc-bugs wrote:
> C99/11 standard
> If just one argument is a NaN, the fmin functions return the other argument
> (if both arguments are NaNs, the functions return a NaN).
> fmin(NaN, 3.0) = fmin(3.0, NaN) = 3.0

"NaN" here means quiet NaN.

> xsmindp
> The minimum of a QNaN and any value is that value. The minimum of any value
> and an SNaN is that SNaN converted to a QNaN.
> xsmindp(NaN, 3.0) = 3.0
> xsmindp(3.0, NaN) = NaN

That seems right for fmin, provided that (QNaN, SNaN) arguments in either order produce a QNaN result (with "invalid" raised). Note that fmin and fmax follow the old operations from IEEE 754-2008 (that aren't associative in the presence of SNaN), not any of the new operations from IEEE 754-2019.
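For reference, the C99 fmin behavior quoted above amounts to roughly the following (a sketch that ignores the signaling-NaN "invalid" case and the handling of signed zeros):

  #include <math.h>

  static double
  fmin_sketch (double x, double y)
  {
    /* If just one argument is a (quiet) NaN, return the other; a NaN
       results only when both arguments are NaNs.  */
    if (isnan (x))
      return y;
    if (isnan (y))
      return x;
    return x < y ? x : y;
  }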
[Bug tree-optimization/105384] compilation never (?) finishes with __builtin_yn{,f,l} at -O and above
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105384 --- Comment #7 from joseph at codesourcery dot com --- Using host libm routines is a bad idea, that would make the generated code depend on the host libm and processor. Having a cut-off to avoid constant folding these functions for n >= 128 might make sense (that cut-off is chosen as the one beyond which the ISO 24747 versions of the functions, and the versions in the standard C++ library, have implementation-defined behavior).
[Bug rtl-optimization/105376] ICE: in decimal_to_decnumber, at dfp.cc:134 with _Decimal128 at -O -g
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105376 --- Comment #3 from joseph at codesourcery dot com --- For this transformation to be correct for DFP, you need a 2 with quantum exponent 0. Converting from either integer or binary floating-point 2 will work for that. However, I note that decimal_to_decnumber has

    case rvc_normal:
      if (!r->decimal)
        {
          /* dconst{1,2,m1,half} are used in various places in the
             middle-end and optimizers, allow them here as an exception
             by converting them to decimal.  */

so the existing code ought to work as-is. Maybe there is a problem with padding in REAL_VALUE_TYPE meaning the comparisons don't work as intended?
[Bug c/105149] [9/10/11/12 Regression] ICE in verify_ssa, at tree-ssa.cc:1211
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105149 --- Comment #7 from joseph at codesourcery dot com --- I think it's valid to reject this at compile time (rather than just generating a runtime trap): the requirement "such that the type of a pointer to an object that has the specified type can be obtained simply by postfixing a * to type" can never be satisfied for a function type, even if e.g. a typedef name is used so that postfixing '*' produces valid syntax for the corresponding pointer type, because it still wouldn't be "the type of a pointer to an object".
[Bug other/105114] [12 regression] contrib/gcc_update hangs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105114 --- Comment #9 from joseph at codesourcery dot com --- The dependencies in gcc_update refer to gcc/config/loongarch/genopts/loongarch-string, which doesn't exist (it should be loongarch-strings, not loongarch-string, I suppose). Maybe that's causing the problem?
[Bug target/104984] Use hard-fp for libgcc single-floating-point routines
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104984 --- Comment #2 from joseph at codesourcery dot com --- See libgcc/config/rs6000/t-e500v1-fp (which should have been removed, along with the associated configure logic, when the powerpcspe port was removed; the cases using that file should no longer be reachable) for an example of a configuration using hardware floating point for single precision only.
[Bug target/104829] [12 Regression] Pure 32-bit PowerPC build broken
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104829 --- Comment #15 from joseph at codesourcery dot com --- I confirm that the second patch does fix the problem I see.
[Bug target/104829] [12 Regression] Pure 32-bit PowerPC build broken
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104829 --- Comment #12 from joseph at codesourcery dot com --- I still get the same error (and the same ".machine ppc") with that patch applied.