https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94356

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org

--- Comment #2 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
Given
struct S { char a[37]; };
int f1 (struct S *s, int a) { struct S *sa = s + a; return s < sa; }
int f2 (struct S *s, int a, int b) { struct S *sa = s + a; return sa < s + b; }
int f3 (struct S *s, int a) { struct S *sa = s + a; return s == sa; }
int f4 (struct S *s, int a, int b) { struct S *sa = s + a; return sa == s + b; }
f3 is optimized in:
/* X + Y < Y is the same as X < 0 when there is no overflow.  */
...
/* For equality, this is also true with wrapping overflow.  */
...
 (simplify
  (op:c (nop_convert?@3 (pointer_plus@2 (convert1? @0) @1)) (convert2? @0))
  (if (tree_nop_conversion_p (TREE_TYPE (@2), TREE_TYPE (@0))
       && tree_nop_conversion_p (TREE_TYPE (@3), TREE_TYPE (@0))
       && (CONSTANT_CLASS_P (@1) || (single_use (@2) && single_use (@3))))
   (op @1 { build_zero_cst (TREE_TYPE (@1)); }))))
f4 is optimized too, though I'm not exactly sure where; partly e.g. in
/* For integral types with wrapping overflow and C odd fold
   x * C EQ/NE y * C into x EQ/NE y.  */
(for cmp (eq ne)
 (simplify
  (cmp (mult @0 INTEGER_CST@1) (mult @2 @1))
  (if (INTEGRAL_TYPE_P (TREE_TYPE (@1))
       && TYPE_OVERFLOW_WRAPS (TREE_TYPE (@0))
       && (TREE_INT_CST_LOW (@1) & 1) != 0)
   (cmp @0 @2))))
For f1/f2 we likely only optimize the ptr + x < ptr + y into x < y at RTL time,
and in any case, the multiplications are currently done in sizetype, which is
an unsigned type.
