> When was STMT_VINFO_REDUC_DEF empty?  I just want to make sure that we're
> not papering over an issue elsewhere.
Yes, I also wonder if this is an issue in vectorizable_reduction.  Below is
the gimple of "gcc.target/aarch64/sve/cost_model_13.c":

  <bb 3>:
    # res_18 = PHI <res_15(7), 0(6)>
    # i_20 = PHI <i_16(7), 0(6)>
    _1 = (long unsigned int) i_20;
    _2 = _1 * 2;
    _3 = x_14(D) + _2;
    _4 = *_3;
    _5 = (unsigned short) _4;
    res.0_6 = (unsigned short) res_18;
    _7 = _5 + res.0_6;            <-- The current stmt_info
    res_15 = (short int) _7;
    i_16 = i_20 + 1;
    if (n_11(D) > i_16)
      goto <bb 7>;
    else
      goto <bb 4>;

  <bb 7>:
    goto <bb 3>;

It looks like STMT_VINFO_REDUC_DEF should be "res_18 = PHI <res_15(7), 0(6)>"?
The status here is:

  STMT_VINFO_REDUC_IDX (stmt_info): 1
  STMT_VINFO_REDUC_TYPE (stmt_info): TREE_CODE_REDUCTION
  STMT_VINFO_REDUC_VECTYPE (stmt_info): 0x0

Thanks,
Hao

________________________________________
From: Richard Sandiford <richard.sandif...@arm.com>
Sent: Tuesday, July 25, 2023 17:44
To: Hao Liu OS
Cc: GCC-patches@gcc.gnu.org
Subject: Re: [PATCH] AArch64: Do not increase the vect reduction latency by multiplying count [PR110625]

Hao Liu OS <h...@os.amperecomputing.com> writes:
> Hi,
>
> Thanks for the suggestion.  I tested it and found a gcc_assert failure:
> gcc.target/aarch64/sve/cost_model_13.c (internal compiler error: in
> info_for_reduction, at tree-vect-loop.cc:5473)
>
> It is caused by empty STMT_VINFO_REDUC_DEF.

When was STMT_VINFO_REDUC_DEF empty?  I just want to make sure that we're
not papering over an issue elsewhere.

Thanks,
Richard

> So, I added an extra check before checking single_defuse_cycle.  The
> updated patch is below.  Is it OK for trunk?
>
> ---
>
> The new costs should only count reduction latency by multiplying count
> for single_defuse_cycle.  For other situations, this will increase the
> reduction latency a lot and miss vectorization opportunities.
>
> Tested on aarch64-linux-gnu.
>
> gcc/ChangeLog:
>
>	PR target/110625
>	* config/aarch64/aarch64.cc (count_ops): Only '* count' for
>	single_defuse_cycle while counting reduction_latency.
>
> gcc/testsuite/ChangeLog:
>
>	* gcc.target/aarch64/pr110625_1.c: New testcase.
>	* gcc.target/aarch64/pr110625_2.c: New testcase.
> ---
>  gcc/config/aarch64/aarch64.cc                 | 13 ++++--
>  gcc/testsuite/gcc.target/aarch64/pr110625_1.c | 46 +++++++++++++++++++
>  gcc/testsuite/gcc.target/aarch64/pr110625_2.c | 14 ++++++
>  3 files changed, 69 insertions(+), 4 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.target/aarch64/pr110625_1.c
>  create mode 100644 gcc/testsuite/gcc.target/aarch64/pr110625_2.c
>
> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> index 560e5431636..478a4e00110 100644
> --- a/gcc/config/aarch64/aarch64.cc
> +++ b/gcc/config/aarch64/aarch64.cc
> @@ -16788,10 +16788,15 @@ aarch64_vector_costs::count_ops (unsigned int count, vect_cost_for_stmt kind,
>      {
>        unsigned int base
>          = aarch64_in_loop_reduction_latency (m_vinfo, stmt_info, m_vec_flags);
> -
> -      /* ??? Ideally we'd do COUNT reductions in parallel, but unfortunately
> -         that's not yet the case.  */
> -      ops->reduction_latency = MAX (ops->reduction_latency, base * count);
> +      if (STMT_VINFO_REDUC_DEF (stmt_info)
> +          && STMT_VINFO_FORCE_SINGLE_CYCLE (
> +               info_for_reduction (m_vinfo, stmt_info)))
> +        /* ??? Ideally we'd use a tree to reduce the copies down to 1 vector,
> +           and then accumulate that, but at the moment the loop-carried
> +           dependency includes all copies.  */
> +        ops->reduction_latency = MAX (ops->reduction_latency, base * count);
> +      else
> +        ops->reduction_latency = MAX (ops->reduction_latency, base);
>      }
>
>    /* Assume that multiply-adds will become a single operation.  */
> diff --git a/gcc/testsuite/gcc.target/aarch64/pr110625_1.c b/gcc/testsuite/gcc.target/aarch64/pr110625_1.c
> new file mode 100644
> index 00000000000..0965cac33a0
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/pr110625_1.c
> @@ -0,0 +1,46 @@
> +/* { dg-do compile } */
> +/* { dg-options "-Ofast -mcpu=neoverse-n2 -fdump-tree-vect-details -fno-tree-slp-vectorize" } */
> +/* { dg-final { scan-tree-dump-not "reduction latency = 8" "vect" } } */
> +
> +/* Do not increase the vector body cost due to the incorrect reduction latency
> +   Original vector body cost = 51
> +   Scalar issue estimate:
> +     ...
> +     reduction latency = 2
> +     estimated min cycles per iteration = 2.000000
> +     estimated cycles per vector iteration (for VF 2) = 4.000000
> +   Vector issue estimate:
> +     ...
> +     reduction latency = 8      <-- Too large
> +     estimated min cycles per iteration = 8.000000
> +   Increasing body cost to 102 because scalar code would issue more quickly
> +   ...
> +   missed: cost model: the vector iteration cost = 102 divided by the scalar
> +   iteration cost = 44 is greater or equal to the vectorization factor = 2.
> +   missed: not vectorized: vectorization not profitable.  */
> +
> +typedef struct
> +{
> +  unsigned short m1, m2, m3, m4;
> +} the_struct_t;
> +typedef struct
> +{
> +  double m1, m2, m3, m4, m5;
> +} the_struct2_t;
> +
> +double
> +bar (the_struct2_t *);
> +
> +double
> +foo (double *k, unsigned int n, the_struct_t *the_struct)
> +{
> +  unsigned int u;
> +  the_struct2_t result;
> +  for (u = 0; u < n; u++, k--)
> +    {
> +      result.m1 += (*k) * the_struct[u].m1;
> +      result.m2 += (*k) * the_struct[u].m2;
> +      result.m3 += (*k) * the_struct[u].m3;
> +      result.m4 += (*k) * the_struct[u].m4;
> +    }
> +  return bar (&result);
> +}
> diff --git a/gcc/testsuite/gcc.target/aarch64/pr110625_2.c b/gcc/testsuite/gcc.target/aarch64/pr110625_2.c
> new file mode 100644
> index 00000000000..7a84aa8355e
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/pr110625_2.c
> @@ -0,0 +1,14 @@
> +/* { dg-do compile } */
> +/* { dg-options "-Ofast -mcpu=neoverse-n2 -fdump-tree-vect-details -fno-tree-slp-vectorize" } */
> +/* { dg-final { scan-tree-dump "reduction latency = 8" "vect" } } */
> +
> +/* The reduction latency should be multiplied by the count for
> +   single_defuse_cycle.  */
> +
> +long
> +f (long res, short *ptr1, short *ptr2, int n)
> +{
> +  for (int i = 0; i < n; ++i)
> +    res += (long) ptr1[i] << ptr2[i];
> +  return res;
> +}
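
P.S. For readers following the discussion about the gimple dump above: the
loop in question corresponds roughly to a source loop of the following
shape.  This is a reconstruction from the dump, not the actual contents of
cost_model_13.c, and the function name `reduc` is made up; the point is
that the accumulator is narrower than int, so the body carries casts
around the add and the reduction chain runs res_18 -> res.0_6 -> _7 ->
res_15 rather than through a single plain add.

```c
/* Hypothetical reconstruction of the reduction loop whose gimple is
   dumped above.  'res' is a short, so C integer promotion plus GCC's
   narrowing produce the (unsigned short) casts seen in the dump.  */
short
reduc (short *x, unsigned int n)
{
  short res = 0;                /* res_18 = PHI <res_15, 0>  */
  for (unsigned int i = 0; i < n; i++)
    res += x[i];                /* _7 = _5 + res.0_6; res_15 = (short) _7  */
  return res;
}
```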