https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70941
--- Comment #2 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
I'd say this is already wrong in the original dump, the narrowing is done
wrongly.
*.original has:
  a = (char) (((unsigned char) -((signed char) (d != 0 && c != 0) ^ -128(OVF)) - (unsigned char) b) + 19);
The (OVF) looks fishy, and especially the negation performed in signed char
instead of unsigned.  So we then have in signed char types:
  _1 = _2 ^ -128;
  _3 = -_1;
and VRP figures out that for a valid program this is only possible if _2 is
non-zero (i.e. if d and c are both non-zero).  But ((d && c) ^ 2040097152) is
in the source subtracted in int type rather than signed char due to promotion,
so we need to use unsigned char arithmetic if we want to narrow it.
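
A minimal sketch of the promotion/narrowing point above (not the PR's
testcase; the loop variable and helper names are made up, and the
out-of-range (signed char) conversions rely on GCC's modulo-2^8 behaviour):
the source expression is evaluated in int, where negating (0 or 1) ^
2040097152 never overflows; the low byte of 2040097152 is 0x80, so a
narrowed XOR constant becomes -128, and negating -128 in signed char
overflows.  Doing the narrowed arithmetic in unsigned char instead wraps
and reproduces the int result's low byte for both boolean values:

  #include <stdio.h>

  int
  main (void)
  {
    for (int b01 = 0; b01 <= 1; b01++)  /* possible values of (d && c) */
      {
        /* Reference: full int arithmetic as in the source, then truncate.  */
        signed char ref = (signed char) -(b01 ^ 2040097152);

        /* Narrowed variant in unsigned char: wraps instead of overflowing.
           (unsigned char) 2040097152 is 0x80.  */
        unsigned char ux = (unsigned char) b01 ^ (unsigned char) 2040097152;
        signed char narrowed = (signed char) (unsigned char) -ux;

        printf ("(d && c) = %d: int result %d, unsigned char narrowing %d\n",
                b01, ref, narrowed);
      }
    return 0;
  }

For (d && c) == 0 both print -128, for (d && c) == 1 both print 127; a
signed char negation of -128 would instead be the overflow that lets VRP
wrongly conclude the boolean must be non-zero.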