https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89536

--- Comment #18 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
We do take the range for granted in both cases.  If for BIT_NOT_EXPR on, say,
int the result is -2 or -1, then your TREE_INT_CST_LOW fix would DTRT, sure.
If the result is any other value, then we run into impossible territory: if
previous optimizations did a good job, that comparison should likely have been
folded away already, but what if it appeared only after the last pass that
could have optimized it away?  By that I mean something like ending up with
the following before dom2:
  a.0_1 = a;
  _2 = a.0_1 != 0;
  # RANGE [0, 1] NONZERO 1
  _3 = (int) _2;
  # RANGE [-2, -1]
  _4 = ~_3;
  # .MEM_8 = VDEF <.MEM_7(D)>
  a = _4;
  if (_4 == -5) // on the testcase from this PR it is -2 here
    goto <bb 3>; [34.00%]
  else
    goto <bb 4>; [66.00%]

  <bb 3> [local count: 365072220]:
  # .MEM_9 = VDEF <.MEM_8>
  a = _3;
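
To spell out the arithmetic behind the RANGE [-2, -1] annotation, here is a
standalone C sketch (not GCC internals, just the two's complement identity):
~x for x in {0, 1} can only be -1 or -2, so a comparison like _4 == -5 can
never be true:

  #include <assert.h>
  #include <stdio.h>

  /* Standalone illustration, not GCC code: the GIMPLE above narrows
     a.0_1 != 0 to a 0/1 boolean and then applies BIT_NOT_EXPR.  In two's
     complement ~x == -x - 1, so a 0/1 input can only produce -1 or -2,
     which is exactly the RANGE [-2, -1] on _4.  */
  int
  main (void)
  {
    for (int b = 0; b <= 1; b++)
      {
        int n = ~b;                     /* corresponds to _4 = ~_3 */
        printf ("~%d = %d\n", b, n);
        assert (n == -1 || n == -2);    /* within RANGE [-2, -1] */
        assert (n != -5);               /* so _4 == -5 is unsatisfiable */
      }
    return 0;
  }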

Normally I'd say the _4 == -5 comparison would be optimized to 0 because _4
has RANGE [-2, -1].  If for whatever reason it is not, then I think it is
better to keep the status quo in the dominated code.  I guess your
TREE_INT_CST_LOW & 1 hunk is probably still desirable for the Ada boolean case
(or we fix match.pd and drop this altogether at some point).  As the patch has
been applied to 8.3.1, we have a serious regression and need a fix fast,
though.
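
For illustration only, a simplified standalone C model of what the & 1 masking
buys; the function name boolean_rhs_from_not is made up and this is not the
actual hunk, just the idea: given lhs = ~rhs with rhs known to be a 0/1
boolean, masking the recovered value with & 1 keeps it in the boolean range
even when the dominating condition supplies an impossible constant like -5:

  #include <assert.h>

  /* Simplified standalone model, not the actual GCC patch: recover the
     boolean rhs from a known constant value of lhs, where lhs = ~rhs and
     rhs has range [0, 1].  The & 1 mask keeps the result in the boolean
     range even for an "impossible" lhs constant such as -5.  */
  static int
  boolean_rhs_from_not (int lhs_cst)
  {
    return ~lhs_cst & 1;      /* analogue of the TREE_INT_CST_LOW & 1 hunk */
  }

  int
  main (void)
  {
    assert (boolean_rhs_from_not (-1) == 0);  /* lhs == -1  =>  rhs == 0 */
    assert (boolean_rhs_from_not (-2) == 1);  /* lhs == -2  =>  rhs == 1 */
    assert (boolean_rhs_from_not (-5) == 0);  /* ~-5 == 4, masked back to 0/1 */
    return 0;
  }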
