https://gcc.gnu.org/bugzilla/show_bug.cgi?id=122624
--- Comment #13 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
The type_hash_canon calls in build_nonstandard_integer_type and
build_bitint_type certainly look wrong to me; they are using a different hash
than the one the hash table will then use (a standalone sketch of the issue
follows the patch):
--- gcc/tree.cc 2025-11-08 08:27:58.115107963 +0100
+++ gcc/tree.cc 2025-11-22 09:38:03.051016099 +0100
@@ -7346,9 +7346,8 @@ build_nonstandard_integer_type (unsigned
   else
     fixup_signed_type (itype);
 
-  inchash::hash hstate;
-  inchash::add_expr (TYPE_MAX_VALUE (itype), hstate);
-  ret = type_hash_canon (hstate.end (), itype);
+  hashval_t hash = type_hash_canon_hash (itype);
+  ret = type_hash_canon (hash, itype);
 
   if (precision <= MAX_INT_CACHED_PREC)
     nonstandard_integer_type_cache[precision + unsignedp] = ret;
@@ -7414,9 +7413,8 @@ build_bitint_type (unsigned HOST_WIDE_IN
   else
     fixup_signed_type (itype);
 
-  inchash::hash hstate;
-  inchash::add_expr (TYPE_MAX_VALUE (itype), hstate);
-  ret = type_hash_canon (hstate.end (), itype);
+  hashval_t hash = type_hash_canon_hash (itype);
+  ret = type_hash_canon (hash, itype);
 
   if (precision <= MAX_INT_CACHED_PREC)
     (*bitint_type_cache)[precision + unsignedp] = ret;
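
To make the concern concrete, here is a standalone sketch (hypothetical names,
not GCC code): a hash-consing table keyed by a caller-supplied hash, in the
spirit of type_hash_canon, can only unify structurally equal entries if every
caller hashes them the same way.  If one path hashes differently from the
canonical hash, its lookup misses and a duplicate node gets registered.

/* Standalone sketch, hypothetical names throughout.  */
#include <cstdio>
#include <unordered_map>

struct fake_type { unsigned precision; bool unsignedp; };

/* Stands in for type_hash_canon_hash.  */
static size_t canonical_hash (const fake_type &t)
{ return t.precision * 2 + (t.unsignedp ? 1 : 0); }

/* Stands in for hashing TYPE_MAX_VALUE via inchash::add_expr.  */
static size_t adhoc_hash (const fake_type &t)
{ return t.precision * 31 + (t.unsignedp ? 7 : 3); }

static std::unordered_multimap<size_t, fake_type *> table;

/* Stands in for type_hash_canon: reuse an existing structurally equal
   node registered under HASH, otherwise register CAND under HASH.  */
static fake_type *canon (size_t hash, fake_type *cand)
{
  auto range = table.equal_range (hash);
  for (auto it = range.first; it != range.second; ++it)
    if (it->second->precision == cand->precision
        && it->second->unsignedp == cand->unsignedp)
      { delete cand; return it->second; }
  table.emplace (hash, cand);
  return cand;
}

int main ()
{
  fake_type key { 135, false };
  fake_type *a = canon (adhoc_hash (key), new fake_type { 135, false });
  fake_type *b = canon (canonical_hash (key), new fake_type { 135, false });
  /* With mismatched hash functions the second lookup misses, so A and B
     end up as two distinct nodes for the same logical type.  */
  printf ("unified: %s\n", a == b ? "yes" : "no");
}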
but why this would cause code generation differences is unclear to me; two
different BITINT_TYPEs with the same precision and the same signedness should
still be considered uselessly convertible.
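
For reference, a simplified standalone paraphrase of why that should hold
(this is not the actual useless_type_conversion_p code from gimple-expr.cc,
just a sketch of the integral-type case): such checks compare precision and
signedness of the two types, not the identity of the type nodes, so duplicate
BITINT_TYPE nodes with equal precision/signedness should behave the same in
GIMPLE.

/* Standalone paraphrase, not GCC code.  */
#include <cstdio>

struct fake_type { unsigned precision; bool unsignedp; };

/* Simplified stand-in for the integral-type case of a useless-conversion
   check: only precision and signedness matter.  */
static bool useless_conversion_p (const fake_type *outer, const fake_type *inner)
{
  return (outer->precision == inner->precision
          && outer->unsignedp == inner->unsignedp);
}

int main ()
{
  fake_type a { 135, false };  /* one _BitInt(135) node */
  fake_type b { 135, false };  /* a duplicate node for the same type */
  printf ("distinct nodes: %s, uselessly convertible: %s\n",
          &a != &b ? "yes" : "no",
          useless_conversion_p (&a, &b) ? "yes" : "no");
}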