Re: [PATCH 4/9] vect: Adjust vectorizable_load costing on VMAT_ELEMENTWISE and VMAT_STRIDED_SLP

2023-07-23 Thread Kewen.Lin via Gcc-patches
Hi Iain,

on 2023/7/22 23:58, Iain Sandoe wrote:
> Hi Kewen,
> 
> This patch breaks bootstrap on powerpc-darwin (which has Altivec, but not 
> VSX) while building libgfortran.
> 
>> On 3 Jul 2023, at 04:19, Kewen.Lin via Gcc-patches  
>> wrote:
> 
> Please see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110776
> thanks
> Iain
> 

Thanks for reporting!  I'll have a look at it.


BR,
Kewen


Re: [PATCH 4/9] vect: Adjust vectorizable_load costing on VMAT_ELEMENTWISE and VMAT_STRIDED_SLP

2023-07-22 Thread Iain Sandoe
Hi Kewen,

This patch breaks bootstrap on powerpc-darwin (which has Altivec, but not VSX) 
while building libgfortran.

> On 3 Jul 2023, at 04:19, Kewen.Lin via Gcc-patches  
> wrote:

Please see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110776
thanks
Iain



Re: [PATCH 4/9] vect: Adjust vectorizable_load costing on VMAT_ELEMENTWISE and VMAT_STRIDED_SLP

2023-07-02 Thread Kewen.Lin via Gcc-patches
Hi Richard,

on 2023/7/2 16:58, Richard Sandiford wrote:
> Kewen Lin  writes:
>> @@ -9743,11 +9739,23 @@ vectorizable_load (vec_info *vinfo,
>>unsigned int n_groups = 0;
>>for (j = 0; j < ncopies; j++)
>>  {
>> -  if (nloads > 1)
>> +  if (nloads > 1 && !costing_p)
>>  vec_alloc (v, nloads);
>>gimple *new_stmt = NULL;
>>for (i = 0; i < nloads; i++)
>>  {
>> +  if (costing_p)
>> +{
>> +  if (VECTOR_TYPE_P (ltype))
>> +vect_get_load_cost (vinfo, stmt_info, 1,
>> +alignment_support_scheme, misalignment,
>> +false, &inside_cost, nullptr, cost_vec,
>> +cost_vec, true);
>> +  else
>> +inside_cost += record_stmt_cost (cost_vec, 1, scalar_load,
>> + stmt_info, 0, vect_body);
>> +  continue;
>> +}
> 
> Just a note that this might make life harder for AArch64 costing.
> Strided SLP loads are still equivalent to vector loads for AArch64,
> since they happen on the FPR/vector side even if they have integral
> modes.
> 
> But I agree this is more accurate from a general target-independent POV,
> especially given the relatively coarse-grain costing enum.  So I think
> that's just something AArch64 will need to account for.

Sorry for the inconvenience.  I hope accounting for it in the target hook
for vect costing isn't too complicated.

BR,
Kewen


Re: [PATCH 4/9] vect: Adjust vectorizable_load costing on VMAT_ELEMENTWISE and VMAT_STRIDED_SLP

2023-07-02 Thread Richard Sandiford via Gcc-patches
Kewen Lin  writes:
> @@ -9743,11 +9739,23 @@ vectorizable_load (vec_info *vinfo,
>unsigned int n_groups = 0;
>for (j = 0; j < ncopies; j++)
>   {
> -   if (nloads > 1)
> +   if (nloads > 1 && !costing_p)
>   vec_alloc (v, nloads);
> gimple *new_stmt = NULL;
> for (i = 0; i < nloads; i++)
>   {
> +   if (costing_p)
> + {
> +   if (VECTOR_TYPE_P (ltype))
> + vect_get_load_cost (vinfo, stmt_info, 1,
> + alignment_support_scheme, misalignment,
> + false, &inside_cost, nullptr, cost_vec,
> + cost_vec, true);
> +   else
> + inside_cost += record_stmt_cost (cost_vec, 1, scalar_load,
> +  stmt_info, 0, vect_body);
> +   continue;
> + }

Just a note that this might make life harder for AArch64 costing.
Strided SLP loads are still equivalent to vector loads for AArch64,
since they happen on the FPR/vector side even if they have integral
modes.

But I agree this is more accurate from a general target-independent POV,
especially given the relatively coarse-grain costing enum.  So I think
that's just something AArch64 will need to account for.

Thanks,
Richard


[PATCH 4/9] vect: Adjust vectorizable_load costing on VMAT_ELEMENTWISE and VMAT_STRIDED_SLP

2023-06-12 Thread Kewen Lin via Gcc-patches
This patch adjusts the cost handling on VMAT_ELEMENTWISE
and VMAT_STRIDED_SLP in function vectorizable_load.  We
no longer call function vect_model_load_cost for them.

As PR82255 shows, we don't always need a vector construction
there; moving the costing next to the transform code lets us
cost the vector construction only when it's actually needed.
Besides, it counts the number of loads consistently for
some cases.
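The access pattern at issue can be sketched in plain C (a simplified,
hypothetical variant of the PR82255 testcase below; the function name
and shape are illustrative, not part of the patch): each inner row reads
16 contiguous bytes while the row pointers advance by a runtime stride,
so the vectorizer can use one vector load per row and needs no
element-by-element vec_construct.

```c
#include <stdlib.h>

/* Sum of absolute differences over a 16x16 block with strided rows.
   Each inner loop touches 16 contiguous bytes, so after unrolling a
   vectorizer can emit one vector load per row rather than 16 scalar
   loads followed by a vec_construct.  */
static int
sad_16x16 (const unsigned char *w, int wstride,
           const unsigned char *x, int xstride)
{
  int tot = 0;
  for (int a = 0; a < 16; a++)
    {
      for (int b = 0; b < 16; b++)
        tot += abs (w[b] - x[b]);
      w += wstride;  /* Row stride is a runtime value.  */
      x += xstride;
    }
  return tot;
}
```

With the old costing, such a loop could be charged for a vec_construct
it never emits; the patch charges that cost only when the transform
actually builds a vector from scalars.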

 PR tree-optimization/82255

gcc/ChangeLog:

* tree-vect-stmts.cc (vectorizable_load): Adjust the cost handling
on VMAT_ELEMENTWISE and VMAT_STRIDED_SLP without calling
vect_model_load_cost.
(vect_model_load_cost): Assert it won't get VMAT_ELEMENTWISE and
VMAT_STRIDED_SLP any more, and remove their related handlings.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/costmodel/ppc/costmodel-pr82255.c: New test.

2023-06-13  Bill Schmidt  
Kewen Lin  
---
 .../vect/costmodel/ppc/costmodel-pr82255.c|  31 
 gcc/tree-vect-stmts.cc| 170 +++---
 2 files changed, 134 insertions(+), 67 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/vect/costmodel/ppc/costmodel-pr82255.c

diff --git a/gcc/testsuite/gcc.dg/vect/costmodel/ppc/costmodel-pr82255.c 
b/gcc/testsuite/gcc.dg/vect/costmodel/ppc/costmodel-pr82255.c
new file mode 100644
index 000..9317ee2e15b
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/costmodel/ppc/costmodel-pr82255.c
@@ -0,0 +1,31 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target vect_int } */
+
+/* PR82255: Ensure we don't require a vec_construct cost when we aren't
+   going to generate a strided load.  */
+
+extern int abs (int __x) __attribute__ ((__nothrow__, __leaf__))
+__attribute__ ((__const__));
+
+static int
+foo (unsigned char *w, int i, unsigned char *x, int j)
+{
+  int tot = 0;
+  for (int a = 0; a < 16; a++)
+{
+#pragma GCC unroll 16
+  for (int b = 0; b < 16; b++)
+   tot += abs (w[b] - x[b]);
+  w += i;
+  x += j;
+}
+  return tot;
+}
+
+void
+bar (unsigned char *w, unsigned char *x, int i, int *result)
+{
+  *result = foo (w, 16, x, i);
+}
+
+/* { dg-final { scan-tree-dump-times "vec_construct" 0 "vect" } } */
diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 19c61d703c8..651dc800380 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -1136,7 +1136,9 @@ vect_model_load_cost (vec_info *vinfo,
  stmt_vector_for_cost *cost_vec)
 {
   gcc_assert ((memory_access_type != VMAT_GATHER_SCATTER || !gs_info->decl)
- && memory_access_type != VMAT_INVARIANT);
+ && memory_access_type != VMAT_INVARIANT
+ && memory_access_type != VMAT_ELEMENTWISE
+ && memory_access_type != VMAT_STRIDED_SLP);
 
   unsigned int inside_cost = 0, prologue_cost = 0;
   bool grouped_access_p = STMT_VINFO_GROUPED_ACCESS (stmt_info);
@@ -1221,8 +1223,7 @@ vect_model_load_cost (vec_info *vinfo,
 }
 
   /* The loads themselves.  */
-  if (memory_access_type == VMAT_ELEMENTWISE
-  || memory_access_type == VMAT_GATHER_SCATTER)
+  if (memory_access_type == VMAT_GATHER_SCATTER)
 {
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   unsigned int assumed_nunits = vect_nunits_for_cost (vectype);
@@ -1244,10 +1245,10 @@ vect_model_load_cost (vec_info *vinfo,
alignment_support_scheme, misalignment, first_stmt_p,
&inside_cost, &prologue_cost,
cost_vec, cost_vec, true);
-  if (memory_access_type == VMAT_ELEMENTWISE
-  || memory_access_type == VMAT_STRIDED_SLP
-  || (memory_access_type == VMAT_GATHER_SCATTER
- && gs_info->ifn == IFN_LAST && !gs_info->decl))
+
+  if (memory_access_type == VMAT_GATHER_SCATTER
+  && gs_info->ifn == IFN_LAST
+  && !gs_info->decl)
 inside_cost += record_stmt_cost (cost_vec, ncopies, vec_construct,
 stmt_info, 0, vect_body);
 
@@ -9591,14 +9592,6 @@ vectorizable_load (vec_info *vinfo,
   if (memory_access_type == VMAT_ELEMENTWISE
   || memory_access_type == VMAT_STRIDED_SLP)
 {
-  if (costing_p)
-   {
- vect_model_load_cost (vinfo, stmt_info, ncopies, vf,
-   memory_access_type, alignment_support_scheme,
- misalignment, &gs_info, slp_node, cost_vec);
- return true;
-   }
-
   gimple_stmt_iterator incr_gsi;
   bool insert_after;
   tree offvar;
@@ -9610,6 +9603,7 @@ vectorizable_load (vec_info *vinfo,
   unsigned int const_nunits = nunits.to_constant ();
   unsigned HOST_WIDE_INT cst_offset = 0;
   tree dr_offset;
+  unsigned int inside_cost = 0;
 
   gcc_assert (!LOOP_VINFO_USING_PARTIAL_VECTORS_P (loop_vinfo));
   gcc_assert (!nested_in_vect_loop);
@@ -9624,6 +9618,7 @@ vectorizable_load (vec_info *vinfo,
  first_stmt_info = stmt_info;
  first_dr_info = dr_info;