https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115092
--- Comment #8 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
(In reply to Segher Boessenkool from comment #7)
> (In reply to Jakub Jelinek from comment #5)
> > I think the bug is in simplify_comparison.
> > We have there
> > GE (sign_extract:SI (reg/v:SI 101 [ g ]) (const_int 1 [0x1])
> >     (const_int 0 [0])) (const_int -1 [0xffffffffffffffff])
> > That is first changed into
> > GE (ashiftrt:SI (ashift:SI (reg/v:SI 101 [ g ]) (const_int 31 [0x1f]))
> >     (const_int 31 [0x1f])) (const_int -1 [0xffffffffffffffff])
> > Both are always true.
> > But then the
> >         /* FALLTHROUGH */
> >     case LSHIFTRT:
> >       /* If we have (compare (xshiftrt FOO N) (const_int C)) and
> >          the low order N bits of FOO are known to be zero, we can do this
> >          by comparing FOO with C shifted left N bits so long as no
> >          overflow occurs.  Even if the low order N bits of FOO aren't known
> >          to be zero, if the comparison is >= or < we can use the same
> >          optimization and for > or <= by setting all the low
> >          order N bits in the comparison constant.  */
> > optimization triggers and optimizes it into
> > GE (ashift:SI (reg/v:SI 101 [ g ]) (const_int 31 [0x1f]))
> >    (const_int -2147483648 [0xffffffff80000000])
> > I think that is ok too.
> > But then
> >   code = simplify_compare_const (code, raw_mode, &op0, &op1);
> > simplifies that to NE, and I think that step is wrong, because GE of
> > anything >= INT_MIN is true.
> >
> > So, I think
> >   /* If we are comparing against a constant power of two and the value
> >      being compared can only have that single bit nonzero (e.g., it was
> >      `and'ed with that bit), we can replace this with a comparison
> >      with zero.  */
> >   if (const_op
> >       && (code == EQ || code == NE || code == GE || code == GEU
> >           || code == LT || code == LTU)
> >       && is_a <scalar_int_mode> (mode, &int_mode)
> >       && GET_MODE_PRECISION (int_mode) - 1 < HOST_BITS_PER_WIDE_INT
> >       && pow2p_hwi (const_op & GET_MODE_MASK (int_mode))
> >       && (nonzero_bits (op0, int_mode)
> >           == (unsigned HOST_WIDE_INT) (const_op & GET_MODE_MASK (int_mode))))
> >     {
> >       code = (code == EQ || code == GE || code == GEU ? NE : EQ);
> >       const_op = 0;
> >     }
> > in simplify_compare_const is wrong if const_op is the most significant bit
> > of int_mode.
>
> Yeah, that looks like it is missing some test.

I'd go with
--- gcc/combine.cc.jj   2024-05-07 18:10:10.415874636 +0200
+++ gcc/combine.cc      2024-05-15 13:33:26.555081215 +0200
@@ -11852,8 +11852,10 @@ simplify_compare_const (enum rtx_code co
          `and'ed with that bit), we can replace this with a comparison
          with zero.  */
       if (const_op
-         && (code == EQ || code == NE || code == GE || code == GEU
-             || code == LT || code == LTU)
+         && (code == EQ || code == NE || code == GEU || code == LTU
+             /* This optimization is incorrect for signed >= INT_MIN or
+                < INT_MIN, those are always true or always false.  */
+             || ((code == GE || code == LT) && const_op > 0))
          && is_a <scalar_int_mode> (mode, &int_mode)
          && GET_MODE_PRECISION (int_mode) - 1 < HOST_BITS_PER_WIDE_INT
          && pow2p_hwi (const_op & GET_MODE_MASK (int_mode))
There seems to be no canonical way to return "this is always true" or "this is
always false"; sure, we could make up something like NE 1 0 or EQ 1 0 or
similar, but it likely wouldn't match and the question is whether it would
simplify.
The const_op == -1 handling below this looks correct to me.
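To make the failure mode concrete in C terms (this is just an illustration,
not the PR's testcase): when nonzero_bits (op0) is exactly the sign bit, op0
can only be 0 or INT_MIN, so the signed op0 >= INT_MIN is always true, while
the rewritten op0 != 0 is false for 0.  A minimal standalone sketch that
enumerates those two values:

/* Not part of combine.cc; just compares the original condition with the
   rewritten one for the only two values op0 can take.  */
#include <limits.h>
#include <stdio.h>

int
main (void)
{
  int candidates[] = { 0, INT_MIN };

  for (int i = 0; i < 2; i++)
    {
      int x = candidates[i];
      int ge_int_min = x >= INT_MIN;  /* GE op0 (const_int -2147483648), always 1  */
      int ne_zero = x != 0;           /* what the power-of-two rewrite turns it into  */
      printf ("x = %d: x >= INT_MIN -> %d, x != 0 -> %d%s\n",
              x, ge_int_min, ne_zero,
              ge_int_min != ne_zero ? "  <-- mismatch" : "");
    }
  return 0;
}

The mismatch only shows up for x == 0, and if I read the patch right, that is
exactly the case the const_op > 0 guard excludes, since const_op is
-2147483648 here.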
> That needs to be fixed of course, but independent of that, this should
> really have been completely folded away earlier already?

It would be, if one didn't carefully disable tons of optimizations (say -O1,
so no (significant) VRP, dom* disabled, fre disabled).
Furthermore, at least in the optimized dump it is obfuscated through:
  _22 = _21 & 1;
  # RANGE [irange] int [0, 1] MASK 0x1 VALUE 0x0
  _24 = 1 >> _22;
  _26 = -_24;

  <bb 4> :
  # prephitmp_27 = PHI <_26(3), -1(2)>
  if (prephitmp_27 < -1)
Sure, VRP could see that _26 has a [-1, 0] range, which unioned with [-1, -1]
is still [-1, 0], and that is never < -1.
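For what it's worth, a rough C rendering of that dataflow (SSA names taken
from the dump, the control flow feeding the PHI simplified to a flag) just to
spell out the range argument: 1 >> _22 with _22 in [0, 1] stays in [0, 1], so
_26 = -_24 is in [-1, 0], and merging in the -1 from the other edge keeps
[-1, 0], which is never < -1.

/* A minimal sketch, not the PR's testcase: 'other_path' stands in for
   reaching bb 4 via the edge that feeds the constant -1 into the PHI.  */
#include <stdio.h>

static int
phi_value (int g, int other_path)
{
  if (other_path)
    return -1;          /* the -1(2) argument of the PHI  */
  int t22 = g & 1;      /* _22 = _21 & 1;   range [0, 1]  */
  int t24 = 1 >> t22;   /* _24 = 1 >> _22;  range [0, 1]  */
  int t26 = -t24;       /* _26 = -_24;      range [-1, 0]  */
  return t26;           /* prephitmp_27 = PHI <_26(3), -1(2)>  */
}

int
main (void)
{
  for (int other = 0; other <= 1; other++)
    for (int g = -4; g <= 4; g++)
      if (phi_value (g, other) < -1)
        printf ("prephitmp_27 < -1 for g = %d\n", g);
  printf ("done, the branch condition never held\n");
  return 0;
}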