https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95891

Andrew Pinski <pinskia at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Severity|normal                      |enhancement
          Component|rtl-optimization            |tree-optimization

--- Comment #2 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
Confirmed.  Happens on aarch64 too:
        cmp     w0, w1
        beq     .L5
        mov     w0, 0
        ret
        .p2align 2,,3
.L5:
        asr     x0, x0, 32
        asr     x1, x1, 32
        cmp     w0, w1
        cset    w0, eq
        ret
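
A testcase along these lines reproduces it (my reconstruction; the struct
name point is taken from the discussion below, the field names are assumed):

#include <stdbool.h>

struct point { int x; int y; };

bool f(struct point a, struct point b) {
  /* Each struct arrives in a single 64-bit register (x0/x1 above), so
     the two 32-bit field compares could be one 64-bit compare.  */
  return a.x == b.x && a.y == b.y;
}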

I wonder if we could expose at the tree level that point is passed via a
64-bit argument and then use BIT_FIELD_REF to do the extraction, or lower the
field extractions to BIT_FIELD_REF.
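
In C terms, the lowered extraction would be a shift of the argument word
rather than a struct field access; a rough sketch of the intent (plain C,
not actual GIMPLE, names mine):

#include <stdint.h>

/* BIT_FIELD_REF <a_word, 32, 32>, i.e. the 32 bits at bit offset 32,
   is just the upper half of the 64-bit argument register.  */
static inline int32_t extract_y(uint64_t a_word) {
  return (int32_t)(a_word >> 32);
}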

Also, we don't optimize:
bool f1(unsigned long long a, unsigned long long b) {
  return ((int)a == (int)b) && ((int)(a >> 32) == (int)(b >> 32));
}

into just return a == b; either, which is another thing that needs to happen
after the BIT_FIELD_REF change ...
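
For what it's worth, a quick harness (my own, not from the report) checking
that the split form really is equivalent, so the fold to a == b is safe:

#include <assert.h>
#include <stdbool.h>

static bool f1(unsigned long long a, unsigned long long b) {
  return ((int)a == (int)b) && ((int)(a >> 32) == (int)(b >> 32));
}

int main(void) {
  unsigned long long v[] = { 0, 1, 0xffffffffULL, 0x100000000ULL,
                             0x123456789abcdef0ULL, ~0ULL };
  for (unsigned i = 0; i < sizeof v / sizeof v[0]; i++)
    for (unsigned j = 0; j < sizeof v / sizeof v[0]; j++)
      /* The halves are equal pairwise iff the full 64-bit values are.  */
      assert(f1(v[i], v[j]) == (v[i] == v[j]));
  return 0;
}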
