Hi,

on riscv, gcc.dg/pr70252.c ICEs at gimple-isel.cc:283.  This is because
we created the gimple statement

  mask__7.36_170 = VEC_COND_EXPR <mask__47.20_148, mask__5.33_165, { -1, ... }>;

during vrp2.  What happens is that, starting with

  maskdest = (vec_cond mask1 1 0) >= (vec_cond mask2 1 0)

we fold to

  maskdest = mask1 >= (vec_cond (mask2 1 0))

and then sink the "mask1 >=" into the vec_cond, so we end up with

  maskdest = vec_cond (mask2 ? mask1 : 0),

i.e. a vec_cond with a mask "data mode".  In gimple-isel, when the
target does not provide a vcond_mask implementation for that (which
none does), we fail the assertion that the mask mode be
MODE_VECTOR_INT.

To prevent this, this patch restricts the match.pd sinking pattern to
non-mask types.  I also considered restricting the types of the
operands instead, wondering whether that would be less intrusive.

Bootstrapped and regression-tested on x86 and aarch64.

Regards
 Robin

gcc/ChangeLog:

	PR target/111337
	* match.pd: Do not sink comparisons into vec_conds when the
	type is a vector mask.
---
 gcc/match.pd | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/gcc/match.pd b/gcc/match.pd
index 8c24dae71cd..db3e698f471 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -4856,7 +4856,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
    (vec_cond @0 (view_convert! @1) (view_convert! @2))))
 
 /* Sink binary operation to branches, but only if we can fold it.  */
-(for op (tcc_comparison plus minus mult bit_and bit_ior bit_xor
+(for op (plus minus mult bit_and bit_ior bit_xor
	 lshift rshift rdiv trunc_div ceil_div floor_div round_div
	 trunc_mod ceil_mod floor_mod round_mod min max)
 /* (c ? a : b) op (c ? d : e)  -->  c ? (a op d) : (b op e) */
@@ -4872,6 +4872,28 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
   (op @3 (vec_cond:s @0 @1 @2))
   (vec_cond @0 (op! @3 @1) (op! @3 @2))))
 
+/* Comparison sinks might be folded into vector masks which could
+   end up as "data" operand of a vec_cond,
+   e.g. (vec_cond @0 (mask1) (...)).
+   gimple-isel does not handle such cases if the target does not provide
+   a vcond_mask.  Therefore, restrict the operands to non-mask classes.  */
+(for op (tcc_comparison)
+/* (c ? a : b) op (c ? d : e)  -->  c ? (a op d) : (b op e) */
+ (simplify
+  (op (vec_cond:s @0 @1 @2) (vec_cond:s @0 @3 @4))
+  (if (GET_MODE_CLASS (TYPE_MODE (type)) != MODE_VECTOR_BOOL)
+   (vec_cond @0 (op! @1 @3) (op! @2 @4))))
+
+/* (c ? a : b) op d  -->  c ? (a op d) : (b op d) */
+ (simplify
+  (op (vec_cond:s @0 @1 @2) @3)
+  (if (GET_MODE_CLASS (TYPE_MODE (type)) != MODE_VECTOR_BOOL)
+   (vec_cond @0 (op! @1 @3) (op! @2 @3))))
+ (simplify
+  (op @3 (vec_cond:s @0 @1 @2))
+  (if (GET_MODE_CLASS (TYPE_MODE (type)) != MODE_VECTOR_BOOL)
+   (vec_cond @0 (op! @3 @1) (op! @3 @2)))))
+
 #if GIMPLE
 (match (nop_atomic_bit_test_and_p @0 @1 @4)
  (bit_and (convert?@4 (ATOMIC_FETCH_OR_XOR_N @2 INTEGER_CST@0 @3))
-- 
2.41.0
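P.S.: For readers unfamiliar with the shape of code involved, here is a hypothetical reduced sketch (NOT the actual gcc.dg/pr70252.c testcase) of the kind of source that produces the pattern described above: comparing two vector selections whose conditions are vector masks. It uses GCC's generic vector extensions; `m & one` stands in for (vec_cond m 1 0), since vector comparison results have elements -1 (true) or 0 (false).

```c
/* Hypothetical reduced sketch, not the actual gcc.dg/pr70252.c.
   Requires GCC/Clang generic vector extensions.  */
typedef int v4si __attribute__ ((vector_size (16)));

v4si
f (v4si a, v4si b, v4si c, v4si d)
{
  v4si one = { 1, 1, 1, 1 };
  v4si m1 = a > b;   /* mask1: each element is -1 (true) or 0 (false).  */
  v4si m2 = c > d;   /* mask2 */
  /* Comparing the two 1/0 selections, elementwise.  A fold chain like
     the one described above can turn such a comparison of selections
     into a vec_cond whose data operands are themselves masks.  */
  return (m1 & one) >= (m2 & one);
}
```

The result is again a mask vector: each element is -1 where the comparison holds and 0 where it does not.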