https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88425

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2018-12-10
                 CC|                            |jakub at gcc dot gnu.org,
                   |                            |uros at gcc dot gnu.org
     Ever confirmed|0                           |1

--- Comment #1 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
That is because the middle-end canonicalizes the a < 123 comparison to a <= 122,
and for this exact optimization we need LTU rather than LEU, where the former
can be matched by the *x86_movdicc_0_m1_neg pattern (or its si counterpart if -m32).
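For illustration (hand-written here, not taken from this PR's RTL dumps), the two
forms differ only in the comparison code and the constant:

        (ltu:DI (reg:SI 89) (const_int 123 [0x7b]))  ;; strict unsigned "<", maps onto the carry flag
        (leu:DI (reg:SI 89) (const_int 122 [0x7a]))  ;; canonical form the middle-end actually produces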
So, in order to solve this, we'd need a define_insn_and_split (well, a set of
them) that would catch:
(parallel [
        (set (reg:DI 86)
            (neg:DI (leu:DI (reg:SI 89)
                    (const_int 122 [0x7a]))))
        (clobber (reg:CC 17 flags))
    ])
in this case, with both the SWI48 iterator (the mode of the result/neg/leu) and
the SWI iterator (the mode of the comparison operands), and split that back into
a lt:CC comparison and the *x86_mov<mode>_0_m1_neg insn.
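
For concreteness, here is a rough, untested sketch of what one such
define_insn_and_split might look like; the pattern name, predicates, guarding
condition and the exact split are illustrative guesses rather than code taken
from i386.md:

(define_insn_and_split "*x86_mov<SWI48:mode>cc_0_m1_neg_leu<SWI:mode>"
  [(set (match_operand:SWI48 0 "register_operand" "=r")
        (neg:SWI48
          (leu:SWI48 (match_operand:SWI 1 "register_operand" "r")
                     (match_operand:SWI 2 "const_int_operand" "n"))))
   (clobber (reg:CC FLAGS_REG))]
  ;; Only handle constants where C + 1 still fits in the comparison mode,
  ;; i.e. where x <= C is not trivially always true.
  "IN_RANGE (INTVAL (operands[2]), 0, GET_MODE_MASK (<SWI:MODE>mode) - 1)"
  "#"
  "&& 1"
  [(set (reg:CC FLAGS_REG)
        (compare:CC (match_dup 1) (match_dup 2)))
   (parallel [(set (match_dup 0)
                   (neg:SWI48 (ltu:SWI48 (reg:CC FLAGS_REG)
                                         (const_int 0))))
              (clobber (reg:CC FLAGS_REG))])]
{
  /* Undo the canonicalization: turn x <= C back into x < C + 1 so the
     comparison sets the carry flag the sbb-style pattern wants.  */
  operands[2] = gen_int_mode (INTVAL (operands[2]) + 1, <SWI:MODE>mode);
})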
