[Bug target/98060] Failure to optimize cmp+setnb+add to cmp+sbb
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98060

Uroš Bizjak changed:

           What    |Removed     |Added
----------------------------------------
         Status    |ASSIGNED    |RESOLVED
     Resolution    |---         |FIXED

--- Comment #6 from Uroš Bizjak ---
Implemented in gcc-12.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98060

--- Comment #5 from CVS Commits ---
The master branch has been updated by Uros Bizjak:

https://gcc.gnu.org/g:c111f6066043d3b7bc4141ca0411eae9294aa6c5

commit r12-311-gc111f6066043d3b7bc4141ca0411eae9294aa6c5
Author: Uros Bizjak
Date:   Fri Apr 30 10:15:26 2021 +0200

    i386: Introduce reversed ADC and SBB patterns [PR98060]

    The compiler is able to merge LTU comparisons with a PLUS or MINUS
    pattern to form addition-with-carry (ADC) and subtraction-with-borrow
    (SBB) instructions:

        op = op + carry     [ADC $0, op]
        op = op - carry     [SBB $0, op]

    The patch introduces reversed ADC and SBB insn patterns:

        op = op + !carry    [SBB $-1, op]
        op = op - !carry    [ADC $-1, op]

    allowing the compiler to also merge GEU comparisons.

    2021-04-30  Uroš Bizjak

    gcc/
        PR target/98060
        * config/i386/i386.md (*add<mode>3_carry_0r): New insn pattern.
        (*addsi3_carry_zext_0r): Ditto.
        (*sub<mode>3_carry_0): Ditto.
        (*subsi3_carry_zext_0r): Ditto.
        * config/i386/predicates.md (ix86_carry_flag_unset_operator):
        New predicate.
        * config/i386/i386.c (ix86_rtx_costs): Also consider
        ix86_carry_flag_unset_operator when calculating the cost of an
        adc/sbb insn.

    gcc/testsuite/
        PR target/98060
        * gcc.target/i386/pr98060.c: New test.
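The identity behind the reversed patterns can be checked in plain C: since !carry = 1 - carry, op + !carry equals op - (-1) - carry, which is exactly what `sbb $-1, op` computes; symmetrically, op - !carry equals op + (-1) + carry, i.e. `adc $-1, op`. A minimal sketch under those assumptions (the function names here are illustrative, not taken from the patch):

```c
#include <assert.h>

/* After comparing the unsigned values a and b, the x86 carry flag
   holds (a < b); !carry is therefore (a >= b).  */

/* op + !carry, the new SBB $-1 form:
   op - (-1) - carry == op + 1 - carry == op + !carry */
int add_not_carry(unsigned a, unsigned b, int op)
{
    int carry = a < b;
    return op - (-1) - carry;
}

/* op - !carry, the new ADC $-1 form:
   op + (-1) + carry == op - 1 + carry == op - !carry */
int sub_not_carry(unsigned a, unsigned b, int op)
{
    int carry = a < b;
    return op + (-1) + carry;
}
```

Both forms let a GEU comparison be folded into a single flag-consuming instruction instead of a setcc followed by an add.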
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98060

Uroš Bizjak changed:

           What    |Removed    |Added
---------------------------------------
   Target Milestone|---        |12.0
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98060

Uroš Bizjak changed:

           What               |Removed    |Added
-------------------------------------------------
Attachment #49663 is obsolete|0          |1

--- Comment #4 from Uroš Bizjak ---
Created attachment 49666
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=49666&action=edit
Updated patch
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98060

Uroš Bizjak changed:

           What    |Removed                       |Added
---------------------------------------------------------
           Assignee|unassigned at gcc dot gnu.org |ubizjak at gmail dot com
             Status|NEW                           |ASSIGNED

--- Comment #3 from Uroš Bizjak ---
Created attachment 49663
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=49663&action=edit
Proposed patch

The proposed patch does several things:

a) introduces reversed adc and sbb insn patterns
b) introduces carry_flag_unset_operator
c) substantially cleans up the carry flag checking predicates
d) rearranges integer compares to allow more combines with the carry flag
e) enhances ix86_unary_operator_ok to allow an input memory operand for adc
   and sbb

I don't think this patch is appropriate for stage-3+; let's wait for stage-1
to reopen.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98060

--- Comment #2 from Uroš Bizjak ---
Created attachment 49662
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=49662&action=edit
Testcases
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98060

Uroš Bizjak changed:

           What    |Removed     |Added
----------------------------------------
           Severity|normal      |enhancement
   Last reconfirmed|            |2020-11-30
             Status|UNCONFIRMED |NEW
     Ever confirmed|0           |1

--- Comment #1 from Uroš Bizjak ---
There are several other cases where sbb/adc can be used:

--cut here--
int r1 (unsigned v0, unsigned v1, int v2)
{
  return (v0 >= v1) + v2;
}

int r2 (unsigned v0, unsigned v1, int v2)
{
  return (v1 > v0) + v2;
}

int r3 (unsigned v0, unsigned v1, int v2)
{
  return (v0 <= v1) + v2;
}

int r4 (unsigned v0, unsigned v1, int v2)
{
  return (v1 < v0) + v2;
}
--cut here--

r1 and r3 can be implemented with "sbb $-1, reg", r2 and r4 with
"adc $0, reg". gcc currently converts only r4.
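The claim about which instruction fits each testcase can be verified without looking at assembly, by comparing each rN against the arithmetic its flag-based instruction performs: the adc $0 form adds the borrow of the compare (v2 + CF), and the sbb $-1 form adds its negation (v2 - (-1) - CF). The check below is a hypothetical self-test, not part of the attached testcases (check_all is an invented helper):

```c
#include <assert.h>

int r1 (unsigned v0, unsigned v1, int v2) { return (v0 >= v1) + v2; }
int r2 (unsigned v0, unsigned v1, int v2) { return (v1 > v0) + v2; }
int r3 (unsigned v0, unsigned v1, int v2) { return (v0 <= v1) + v2; }
int r4 (unsigned v0, unsigned v1, int v2) { return (v1 < v0) + v2; }

/* Exhaustively compare each rN with the arithmetic of its claimed
   instruction over a small input domain.  CF is the borrow of the
   unsigned compare feeding the adc/sbb.  Returns 1 on success.  */
int check_all (void)
{
    for (unsigned v0 = 0; v0 < 4; v0++)
        for (unsigned v1 = 0; v1 < 4; v1++) {
            int v2 = 7;
            /* sbb $-1 forms: v2 - (-1) - CF */
            if (r1 (v0, v1, v2) != v2 - (-1) - (v0 < v1)) return 0;
            if (r3 (v0, v1, v2) != v2 - (-1) - (v1 < v0)) return 0;
            /* adc $0 forms: v2 + CF */
            if (r2 (v0, v1, v2) != v2 + (v0 < v1)) return 0;
            if (r4 (v0, v1, v2) != v2 + (v1 < v0)) return 0;
        }
    return 1;
}
```

r1 and r3 differ only in which operand feeds the borrow, which is why both map onto the same sbb $-1 shape; likewise for r2 and r4 with adc $0.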