https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85164

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org

--- Comment #5 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
The first above is on:
    case MINUS:
      /* If X is (minus C Y) where C's least set bit is larger than any bit
         in the mask, then we may replace with (neg Y).  */
      if (poly_int_rtx_p (XEXP (x, 0), &const_op0)
          && (unsigned HOST_WIDE_INT) known_alignment (const_op0) > mask)
and
template<unsigned int N, typename Ca>
inline POLY_BINARY_COEFF (Ca, Ca)
known_alignment (const poly_int_pod<N, Ca> &a)
{
  typedef POLY_BINARY_COEFF (Ca, Ca) C;
  C r = a.coeffs[0];
  for (unsigned int i = 1; i < N; ++i)
    r |= a.coeffs[i];
  return r & -r;
}

The poly_int* stuff makes this much harder to fix; it is unclear whether there is
some way to get the unsigned type corresponding to the C type and use that as
r & -(Cuns) r to avoid the UB, and there is no poly_uint_rtx_p or similar to
request a poly_uint64 from the rtx.  Richard?

The second one is
          return (!known_size_p (decl_size) || known_eq (decl_size, 0)
                  ? maybe_ne (offset, 0)
                  : maybe_gt (offset + size, decl_size));
and again, both offset and size are poly_int64; I am not sure how one can
reinterpret-cast that to poly_uint64 for the operation and then cast back to
poly_int64.
But in that case there is also the question whether we shouldn't punt on the
overflow somehow.
