https://gcc.gnu.org/bugzilla/show_bug.cgi?id=122767

Alexandre Oliva <aoliva at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
     Ever confirmed|0                           |1
             Status|UNCONFIRMED                 |ASSIGNED
           Assignee|unassigned at gcc dot gnu.org      |aoliva at gcc dot gnu.org
   Last reconfirmed|                            |2025-11-20

--- Comment #2 from Alexandre Oliva <aoliva at gcc dot gnu.org> ---
Created attachment 62855
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=62855&action=edit
patch that makes HARD_REG_SET hashing insensitive to word size and endianness

This first-cut, blunt patch allows a 64-to-32-bit stage1, 32-bit stage[23]
x86_64 toolchain to pass the bootstrap comparison test, which otherwise fails
for cp/call.o.

If we settled on a canonical endianness (the target's?), normalization could
instead be done on a 32-bit word-by-word basis, possibly involving byte
reversal.

But depending on endianness, we can bypass normalization altogether and simply
hash a prefix of HARD_REG_SET, which might justify making that the normalized
form.  WDYT?
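The byte-wise normalization idea can be sketched as follows.  This is a
hypothetical standalone illustration, not the attached patch or GCC's actual
HARD_REG_SET code: it hashes the same bit set stored either as 64-bit or as
32-bit host words by emitting bytes least-significant first, so the resulting
byte stream (and hence the hash) is identical regardless of host word size or
endianness.  FNV-1a stands in for whatever hash GCC actually uses.

```c
#include <stddef.h>
#include <stdint.h>

/* Fold one byte into an FNV-1a accumulator (placeholder hash).  */
static uint32_t
fnv_byte (uint32_t h, uint8_t b)
{
  return (h ^ b) * 16777619u;
}

/* Hash a register set stored as N 64-bit host words, extracting bytes
   arithmetically, least-significant first, so the result does not
   depend on host endianness.  */
static uint32_t
hash_set_64 (const uint64_t *w, size_t n)
{
  uint32_t h = 2166136261u;
  for (size_t i = 0; i < n; i++)
    for (size_t b = 0; b < sizeof w[i]; b++)
      h = fnv_byte (h, (uint8_t) (w[i] >> (8 * b)));
  return h;
}

/* The same bit set stored as 2*N 32-bit host words yields the same
   byte stream, so the hash matches the 64-bit variant.  */
static uint32_t
hash_set_32 (const uint32_t *w, size_t n)
{
  uint32_t h = 2166136261u;
  for (size_t i = 0; i < n; i++)
    for (size_t b = 0; b < sizeof w[i]; b++)
      h = fnv_byte (h, (uint8_t) (w[i] >> (8 * b)));
  return h;
}
```

Splitting each 64-bit word arithmetically into (low, high) 32-bit halves
reproduces the same least-significant-first byte order, which is effectively
the canonical little-endian form the comment above contemplates.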
