https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92953
--- Comment #5 from Andrew Pinski ---
On x86_64 the flags get clobbered by almost all instructions, so either
you do the subtraction twice or you use a set instruction. GCC chooses the latter
... I suspect that is a general issue that shows up m
--- Comment #4 from Alexander Monakov ---
At least then GCC should try to use cmovno instead of seto-test-cmove for
if-conversion:
foo:
	movl	%edi, %eax
	subl	%esi, %eax
	notl	%eax
	orl	$1, %eax
s
--- Comment #3 from Andrew Pinski ---
(In reply to Alexander Monakov from comment #2)
> Well, the aarch64 backend does not implement the subv<mode>4 pattern in the
> first place, which would be required for efficient branchy code:
>
> foo:
> subs
--- Comment #2 from Alexander Monakov ---
Well, the aarch64 backend does not implement the subv<mode>4 pattern in the first
place, which would be required for efficient branchy code:
foo:
	subs	w0, w0, w1
	b.vc	.LBB0_2
	mvn
Andrew Pinski changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Keywords|                            |missed-optimization
             Target|