On Tue, Feb 04, 2020 at 02:15:04PM +0100, Uros Bizjak wrote:
> On Tue, Feb 4, 2020 at 2:13 PM Jakub Jelinek <ja...@redhat.com> wrote:
> >
> > On Tue, Feb 04, 2020 at 01:38:51PM +0100, Uros Bizjak wrote:
> > > As Richard advised, let's put this safety stuff back. Usually, in
> > > i386.md, these kind of splitters are implemented as two patterns, one
> > > (define_insn_and_split) having "#", and the other (define_insn) with a
> > > real insn. My opinion is, that this separation avoids confusion as
> > > much as possible.
> >
> > Okay.  So like this if it passes bootstrap/regtest then?
>
> Yes.
>
> > 2020-02-04  Jakub Jelinek  <ja...@redhat.com>
> >
> > 	PR target/92190
> > 	* config/i386/i386-features.c (ix86_add_reg_usage_to_vzeroupper): Only
> > 	include sets and not clobbers in the vzeroupper pattern.
> > 	* config/i386/sse.md (*avx_vzeroupper): Require in insn condition that
> > 	the parallel has 17 (64-bit) or 9 (32-bit) elts.
> > 	(*avx_vzeroupper_1): New define_insn_and_split.
> >
> > 	* gcc.target/i386/pr92190.c: New test.
>
> OK.
Unfortunately, it breaks

+FAIL: gcc.target/i386/avx-2.c (internal compiler error)
+FAIL: gcc.target/i386/avx-2.c (test for excess errors)
+FAIL: gcc.target/i386/avx-vzeroupper-10.c (internal compiler error)
+FAIL: gcc.target/i386/avx-vzeroupper-10.c (test for excess errors)
+UNRESOLVED: gcc.target/i386/avx-vzeroupper-10.c scan-assembler-times avx_vzeroupper 3
+FAIL: gcc.target/i386/avx-vzeroupper-11.c (internal compiler error)
+FAIL: gcc.target/i386/avx-vzeroupper-11.c (test for excess errors)
+UNRESOLVED: gcc.target/i386/avx-vzeroupper-11.c scan-assembler-times \\*avx_vzeroall 1
+UNRESOLVED: gcc.target/i386/avx-vzeroupper-11.c scan-assembler-times avx_vzeroupper 3
+FAIL: gcc.target/i386/avx-vzeroupper-12.c (internal compiler error)
+FAIL: gcc.target/i386/avx-vzeroupper-12.c (test for excess errors)
+UNRESOLVED: gcc.target/i386/avx-vzeroupper-12.c scan-assembler-times \\*avx_vzeroall 1
+UNRESOLVED: gcc.target/i386/avx-vzeroupper-12.c scan-assembler-times avx_vzeroupper 4
+FAIL: gcc.target/i386/avx-vzeroupper-5.c (internal compiler error)
+FAIL: gcc.target/i386/avx-vzeroupper-5.c (test for excess errors)
+UNRESOLVED: gcc.target/i386/avx-vzeroupper-5.c scan-assembler-times avx_vzeroupper 1
+FAIL: gcc.target/i386/avx-vzeroupper-7.c (internal compiler error)
+FAIL: gcc.target/i386/avx-vzeroupper-7.c (test for excess errors)
+UNRESOLVED: gcc.target/i386/avx-vzeroupper-7.c scan-assembler-times avx_vzeroupper 1
+FAIL: gcc.target/i386/avx-vzeroupper-8.c (internal compiler error)
+FAIL: gcc.target/i386/avx-vzeroupper-8.c (test for excess errors)
+UNRESOLVED: gcc.target/i386/avx-vzeroupper-8.c scan-assembler-times avx_vzeroupper 1
+FAIL: gcc.target/i386/avx-vzeroupper-9.c (internal compiler error)
+FAIL: gcc.target/i386/avx-vzeroupper-9.c (test for excess errors)
+UNRESOLVED: gcc.target/i386/avx-vzeroupper-9.c scan-assembler-times avx_vzeroupper 4
+FAIL: gcc.target/i386/sse-14.c (internal compiler error)
+FAIL: gcc.target/i386/sse-14.c (test for excess errors)
+FAIL: gcc.target/i386/sse-22.c (internal compiler error)
+FAIL: gcc.target/i386/sse-22.c (test for excess errors)
+FAIL: gcc.target/i386/sse-22a.c (internal compiler error)
+FAIL: gcc.target/i386/sse-22a.c (test for excess errors)

The problem is that x86 is the only target that defines HAVE_ATTR_length
and doesn't schedule any splitting pass at -O0 after pro_and_epilogue.
So, either we go back to handling this during vzeroupper output
(unconditionally, rather than flag_ipa_ra guarded), or we need to tweak
the split* passes for x86.

Seems there are 5 split passes:
split1 is run unconditionally before reload,
split2 is run for optimize > 0 or STACK_REGS (x86) after ra but before
epilogue_completed,
split3 is run before regstack for STACK_REGS and optimize and
-fno-schedule-insns2,
split4 is run before sched2 if sched2 is run, and
split5 is run before shorten_branches if HAVE_ATTR_length and not
STACK_REGS.

Attached are 3 possible incremental patches for recog.c, all of them fix
all the above regressions, but haven't fully bootstrapped/regtested any
of them yet.  My preference would be the last one, which for -O0 and x86
disables split2 and enables split3, as it doesn't add any extra passes.
The first one just enables split3 for -O0 on x86, the second one enables
split5 for -O0 on x86.

	Jakub
2020-02-05  Jakub Jelinek  <ja...@redhat.com>

	PR target/92190
	* recog.c (pass_split_before_regstack::gate): For STACK_REGS targets,
	run even when !optimize.

--- gcc/recog.c.jj	2020-01-12 11:54:36.918405790 +0100
+++ gcc/recog.c	2020-02-05 10:13:20.236989567 +0100
@@ -3991,12 +3991,12 @@ pass_split_before_regstack::gate (functi
      split until final which doesn't allow splitting
      if HAVE_ATTR_length.  */
 # ifdef INSN_SCHEDULING
-  return (optimize && !flag_schedule_insns_after_reload);
+  return !optimize || !flag_schedule_insns_after_reload;
 # else
-  return (optimize);
+  return true;
 # endif
 #else
-  return 0;
+  return false;
 #endif
 }
2020-02-05  Jakub Jelinek  <ja...@redhat.com>

	PR target/92190
	* recog.c (pass_split_for_shorten_branches::gate): For STACK_REGS
	targets, return !optimize.

--- gcc/recog.c.jj	2020-01-12 11:54:36.918405790 +0100
+++ gcc/recog.c	2020-02-05 10:24:34.506773724 +0100
@@ -4091,8 +4091,12 @@ public:
   {
     /* The placement of the splitting that we do for shorten_branches
       depends on whether regstack is used by the target or not.  */
-#if HAVE_ATTR_length && !defined (STACK_REGS)
+#if HAVE_ATTR_length
+# ifndef STACK_REGS
     return true;
+# else
+    return !optimize;
+# endif
 #else
     return false;
 #endif
2020-02-05  Jakub Jelinek  <ja...@redhat.com>

	PR target/92190
	* recog.c (pass_split_after_reload::gate): For STACK_REGS targets,
	don't run when !optimize.
	(pass_split_before_regstack::gate): For STACK_REGS targets, run even
	when !optimize.

--- gcc/recog.c.jj	2020-01-12 11:54:36.918405790 +0100
+++ gcc/recog.c	2020-02-05 10:45:19.125938154 +0100
@@ -3924,14 +3924,7 @@ public:
   virtual bool gate (function *)
     {
       /* If optimizing, then go ahead and split insns now.  */
-      if (optimize > 0)
-	return true;
-
-#ifdef STACK_REGS
-      return true;
-#else
-      return false;
-#endif
+      return optimize > 0;
     }

   virtual unsigned int execute (function *)
@@ -3991,12 +3984,12 @@ pass_split_before_regstack::gate (functi
      split until final which doesn't allow splitting
      if HAVE_ATTR_length.  */
 # ifdef INSN_SCHEDULING
-  return (optimize && !flag_schedule_insns_after_reload);
+  return !optimize || !flag_schedule_insns_after_reload;
 # else
-  return (optimize);
+  return true;
 # endif
 #else
-  return 0;
+  return false;
 #endif
 }