Re: [PATCH 1/2] Add emulated gather capability to the vectorizer

2021-08-05 Thread Richard Biener via Gcc-patches
On Thu, Aug 5, 2021 at 3:26 PM Christophe Lyon via Gcc-patches
 wrote:
>
> On Thu, Aug 5, 2021 at 11:53 AM Richard Biener  wrote:
>
> > On Thu, 5 Aug 2021, Christophe Lyon wrote:
> >
> > > On Wed, Aug 4, 2021 at 2:08 PM Richard Biener  wrote:
> > >
> > > > On Wed, 4 Aug 2021, Richard Sandiford wrote:
> > > >
> > > > > Richard Biener  writes:
> > > > > > This adds a gather vectorization capability to the vectorizer
> > > > > > without target support by decomposing the offset vector, doing
> > > > > > scalar loads and then building a vector from the result.  This
> > > > > > is aimed mainly at cases where vectorizing the rest of the loop
> > > > > > offsets the cost of vectorizing the gather.
> > > > > >
> > > > > > Note it's difficult to avoid vectorizing the offset load, but in
> > > > > > some cases later passes can turn the vector load + extract into
> > > > > > scalar loads, see the followup patch.
> > > > > >
> > > > > > On SPEC CPU 2017 510.parest_r this improves runtime from 250s
> > > > > > to 219s on a Zen2 CPU which has its native gather instructions
> > > > > > disabled (using those the runtime instead increases to 254s)
> > > > > > using -Ofast -march=znver2 [-flto].  It turns out the critical
> > > > > > loops in this benchmark all perform gather operations.
> > > > > >
> > > > > > Bootstrapped and tested on x86_64-unknown-linux-gnu.
> > > > > >
> > > > > > 2021-07-30  Richard Biener  
> > > > > >
> > > > > > * tree-vect-data-refs.c (vect_check_gather_scatter):
> > > > > > Include widening conversions only when the result is
> > > > > > still handled by native gather or the current offset
> > > > > > size does not already match the data size.
> > > > > > Also succeed analysis in case there's no native support,
> > > > > > noted by an IFN_LAST ifn and a NULL decl.
> > > > > > (vect_analyze_data_refs): Always consider gathers.
> > > > > > * tree-vect-patterns.c (vect_recog_gather_scatter_pattern):
> > > > > > Test for no IFN gather rather than decl gather.
> > > > > > * tree-vect-stmts.c (vect_model_load_cost): Pass in the
> > > > > > gather-scatter info and cost emulated gathers accordingly.
> > > > > > (vect_truncate_gather_scatter_offset): Properly test for
> > > > > > no IFN gather.
> > > > > > (vect_use_strided_gather_scatters_p): Likewise.
> > > > > > (get_load_store_type): Handle emulated gathers and their
> > > > > > restrictions.
> > > > > > (vectorizable_load): Likewise.  Emulate them by extracting
> > > > > > scalar offsets, doing scalar loads and a vector construct.
> > > > > >
> > > > > > * gcc.target/i386/vect-gather-1.c: New testcase.
> > > > > > * gfortran.dg/vect/vect-8.f90: Adjust.
> > > >
> > >
> > > Hi,
> > >
> > > The adjusted testcase now fails on aarch64:
> > > FAIL:  gfortran.dg/vect/vect-8.f90   -O   scan-tree-dump-times vect
> > > "vectorized 23 loops" 1
> >
> > That likely means it needs adjustment for the aarch64 case as well
> > which I didn't touch.  I suppose it's now vectorizing 24 loops?
> > And 24 with SVE as well, so we might be able to merge the
> > aarch64_sve and aarch64 && ! aarch64_sve cases?
> >
> > Like with
> >
> > diff --git a/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> > b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> > index cc1aebfbd84..c8a7d896bac 100644
> > --- a/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> > +++ b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> > @@ -704,7 +704,6 @@ CALL track('KERNEL  ')
> >  RETURN
> >  END SUBROUTINE kernel
> >
> > -! { dg-final { scan-tree-dump-times "vectorized 24 loops" 1 "vect" {
> > target aarch64_sve } } }
> > -! { dg-final { scan-tree-dump-times "vectorized 23 loops" 1 "vect" {
> > target { aarch64*-*-* && { ! aarch64_sve } } } } }
> > +! { dg-final { scan-tree-dump-times "vectorized 24 loops" 1 "vect" {
> > target aarch64*-*-* } } }
> >  ! { dg-final { scan-tree-dump-times "vectorized 2\[234\] loops" 1 "vect"
> > { target { vect_intdouble_cvt && { ! aarch64*-*-* } } } } }
> >  ! { dg-final { scan-tree-dump-times "vectorized 17 loops" 1 "vect" {
> > target { { ! vect_intdouble_cvt } && { ! aarch64*-*-* } } } } }
> >
> > f951 vect.exp testing with and without -march=armv8.3-a+sve shows
> > this might work, but if you can double-check that would be nice.
> >
> >
> Indeed LGTM, thanks

Pushed.

>
> > Richard.
> >


Re: [PATCH 1/2] Add emulated gather capability to the vectorizer

2021-08-05 Thread Christophe Lyon via Gcc-patches
On Thu, Aug 5, 2021 at 11:53 AM Richard Biener  wrote:

> On Thu, 5 Aug 2021, Christophe Lyon wrote:
>
> > On Wed, Aug 4, 2021 at 2:08 PM Richard Biener  wrote:
> >
> > > On Wed, 4 Aug 2021, Richard Sandiford wrote:
> > >
> > > > Richard Biener  writes:
> > > > > This adds a gather vectorization capability to the vectorizer
> > > > > without target support by decomposing the offset vector, doing
> > > > > scalar loads and then building a vector from the result.  This
> > > > > is aimed mainly at cases where vectorizing the rest of the loop
> > > > > offsets the cost of vectorizing the gather.
> > > > >
> > > > > Note it's difficult to avoid vectorizing the offset load, but in
> > > > > some cases later passes can turn the vector load + extract into
> > > > > scalar loads, see the followup patch.
> > > > >
> > > > > On SPEC CPU 2017 510.parest_r this improves runtime from 250s
> > > > > to 219s on a Zen2 CPU which has its native gather instructions
> > > > > disabled (using those the runtime instead increases to 254s)
> > > > > using -Ofast -march=znver2 [-flto].  It turns out the critical
> > > > > loops in this benchmark all perform gather operations.
> > > > >
> > > > > Bootstrapped and tested on x86_64-unknown-linux-gnu.
> > > > >
> > > > > 2021-07-30  Richard Biener  
> > > > >
> > > > > * tree-vect-data-refs.c (vect_check_gather_scatter):
> > > > > Include widening conversions only when the result is
> > > > > still handled by native gather or the current offset
> > > > > size does not already match the data size.
> > > > > Also succeed analysis in case there's no native support,
> > > > > noted by an IFN_LAST ifn and a NULL decl.
> > > > > (vect_analyze_data_refs): Always consider gathers.
> > > > > * tree-vect-patterns.c (vect_recog_gather_scatter_pattern):
> > > > > Test for no IFN gather rather than decl gather.
> > > > > * tree-vect-stmts.c (vect_model_load_cost): Pass in the
> > > > > gather-scatter info and cost emulated gathers accordingly.
> > > > > (vect_truncate_gather_scatter_offset): Properly test for
> > > > > no IFN gather.
> > > > > (vect_use_strided_gather_scatters_p): Likewise.
> > > > > (get_load_store_type): Handle emulated gathers and their
> > > > > restrictions.
> > > > > (vectorizable_load): Likewise.  Emulate them by extracting
> > > > > scalar offsets, doing scalar loads and a vector construct.
> > > > >
> > > > > * gcc.target/i386/vect-gather-1.c: New testcase.
> > > > > * gfortran.dg/vect/vect-8.f90: Adjust.
> > >
> >
> > Hi,
> >
> > The adjusted testcase now fails on aarch64:
> > FAIL:  gfortran.dg/vect/vect-8.f90   -O   scan-tree-dump-times vect
> > "vectorized 23 loops" 1
>
> That likely means it needs adjustment for the aarch64 case as well
> which I didn't touch.  I suppose it's now vectorizing 24 loops?
> And 24 with SVE as well, so we might be able to merge the
> aarch64_sve and aarch64 && ! aarch64_sve cases?
>
> Like with
>
> diff --git a/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> index cc1aebfbd84..c8a7d896bac 100644
> --- a/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> +++ b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> @@ -704,7 +704,6 @@ CALL track('KERNEL  ')
>  RETURN
>  END SUBROUTINE kernel
>
> -! { dg-final { scan-tree-dump-times "vectorized 24 loops" 1 "vect" {
> target aarch64_sve } } }
> -! { dg-final { scan-tree-dump-times "vectorized 23 loops" 1 "vect" {
> target { aarch64*-*-* && { ! aarch64_sve } } } } }
> +! { dg-final { scan-tree-dump-times "vectorized 24 loops" 1 "vect" {
> target aarch64*-*-* } } }
>  ! { dg-final { scan-tree-dump-times "vectorized 2\[234\] loops" 1 "vect"
> { target { vect_intdouble_cvt && { ! aarch64*-*-* } } } } }
>  ! { dg-final { scan-tree-dump-times "vectorized 17 loops" 1 "vect" {
> target { { ! vect_intdouble_cvt } && { ! aarch64*-*-* } } } } }
>
> f951 vect.exp testing with and without -march=armv8.3-a+sve shows
> this might work, but if you can double-check that would be nice.
>
>
Indeed LGTM, thanks


> Richard.
>


Re: [PATCH 1/2] Add emulated gather capability to the vectorizer

2021-08-05 Thread Richard Biener
On Thu, 5 Aug 2021, Christophe Lyon wrote:

> On Wed, Aug 4, 2021 at 2:08 PM Richard Biener  wrote:
> 
> > On Wed, 4 Aug 2021, Richard Sandiford wrote:
> >
> > > Richard Biener  writes:
> > > > This adds a gather vectorization capability to the vectorizer
> > > > without target support by decomposing the offset vector, doing
> > > > scalar loads and then building a vector from the result.  This
> > > > is aimed mainly at cases where vectorizing the rest of the loop
> > > > offsets the cost of vectorizing the gather.
> > > >
> > > > Note it's difficult to avoid vectorizing the offset load, but in
> > > > some cases later passes can turn the vector load + extract into
> > > > scalar loads, see the followup patch.
> > > >
> > > > On SPEC CPU 2017 510.parest_r this improves runtime from 250s
> > > > to 219s on a Zen2 CPU which has its native gather instructions
> > > > disabled (using those the runtime instead increases to 254s)
> > > > using -Ofast -march=znver2 [-flto].  It turns out the critical
> > > > loops in this benchmark all perform gather operations.
> > > >
> > > > Bootstrapped and tested on x86_64-unknown-linux-gnu.
> > > >
> > > > 2021-07-30  Richard Biener  
> > > >
> > > > * tree-vect-data-refs.c (vect_check_gather_scatter):
> > > > Include widening conversions only when the result is
> > > > still handled by native gather or the current offset
> > > > size does not already match the data size.
> > > > Also succeed analysis in case there's no native support,
> > > > noted by an IFN_LAST ifn and a NULL decl.
> > > > (vect_analyze_data_refs): Always consider gathers.
> > > > * tree-vect-patterns.c (vect_recog_gather_scatter_pattern):
> > > > Test for no IFN gather rather than decl gather.
> > > > * tree-vect-stmts.c (vect_model_load_cost): Pass in the
> > > > gather-scatter info and cost emulated gathers accordingly.
> > > > (vect_truncate_gather_scatter_offset): Properly test for
> > > > no IFN gather.
> > > > (vect_use_strided_gather_scatters_p): Likewise.
> > > > (get_load_store_type): Handle emulated gathers and their
> > > > restrictions.
> > > > (vectorizable_load): Likewise.  Emulate them by extracting
> > > > scalar offsets, doing scalar loads and a vector construct.
> > > >
> > > > * gcc.target/i386/vect-gather-1.c: New testcase.
> > > > * gfortran.dg/vect/vect-8.f90: Adjust.
> >
> 
> Hi,
> 
> The adjusted testcase now fails on aarch64:
> FAIL:  gfortran.dg/vect/vect-8.f90   -O   scan-tree-dump-times vect
> "vectorized 23 loops" 1

That likely means it needs adjustment for the aarch64 case as well
which I didn't touch.  I suppose it's now vectorizing 24 loops?
And 24 with SVE as well, so we might be able to merge the
aarch64_sve and aarch64 && ! aarch64_sve cases?

Like with

diff --git a/gcc/testsuite/gfortran.dg/vect/vect-8.f90 
b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
index cc1aebfbd84..c8a7d896bac 100644
--- a/gcc/testsuite/gfortran.dg/vect/vect-8.f90
+++ b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
@@ -704,7 +704,6 @@ CALL track('KERNEL  ')
 RETURN
 END SUBROUTINE kernel
 
-! { dg-final { scan-tree-dump-times "vectorized 24 loops" 1 "vect" { 
target aarch64_sve } } }
-! { dg-final { scan-tree-dump-times "vectorized 23 loops" 1 "vect" { 
target { aarch64*-*-* && { ! aarch64_sve } } } } }
+! { dg-final { scan-tree-dump-times "vectorized 24 loops" 1 "vect" { 
target aarch64*-*-* } } }
 ! { dg-final { scan-tree-dump-times "vectorized 2\[234\] loops" 1 "vect" 
{ target { vect_intdouble_cvt && { ! aarch64*-*-* } } } } }
 ! { dg-final { scan-tree-dump-times "vectorized 17 loops" 1 "vect" { 
target { { ! vect_intdouble_cvt } && { ! aarch64*-*-* } } } } }

f951 vect.exp testing with and without -march=armv8.3-a+sve shows
this might work, but if you can double-check that would be nice.

Richard.
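
[Editor's note, for reproducing the check above: one conventional way to
rerun just this test from a built GCC tree is

  make -C gcc check-fortran RUNTESTFLAGS="vect.exp=vect-8.f90"

optionally adding --target_board=unix/-march=armv8.3-a+sve to
RUNTESTFLAGS on an aarch64 build to cover the SVE configuration.  The
exact make target and board string are assumptions that depend on the
local setup.]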


Re: [PATCH 1/2] Add emulated gather capability to the vectorizer

2021-08-05 Thread Christophe Lyon via Gcc-patches
On Wed, Aug 4, 2021 at 2:08 PM Richard Biener  wrote:

> On Wed, 4 Aug 2021, Richard Sandiford wrote:
>
> > Richard Biener  writes:
> > > This adds a gather vectorization capability to the vectorizer
> > > without target support by decomposing the offset vector, doing
> > > scalar loads and then building a vector from the result.  This
> > > is aimed mainly at cases where vectorizing the rest of the loop
> > > offsets the cost of vectorizing the gather.
> > >
> > > Note it's difficult to avoid vectorizing the offset load, but in
> > > some cases later passes can turn the vector load + extract into
> > > scalar loads, see the followup patch.
> > >
> > > On SPEC CPU 2017 510.parest_r this improves runtime from 250s
> > > to 219s on a Zen2 CPU which has its native gather instructions
> > > disabled (using those the runtime instead increases to 254s)
> > > using -Ofast -march=znver2 [-flto].  It turns out the critical
> > > loops in this benchmark all perform gather operations.
> > >
> > > Bootstrapped and tested on x86_64-unknown-linux-gnu.
> > >
> > > 2021-07-30  Richard Biener  
> > >
> > > * tree-vect-data-refs.c (vect_check_gather_scatter):
> > > Include widening conversions only when the result is
> > > still handled by native gather or the current offset
> > > size does not already match the data size.
> > > Also succeed analysis in case there's no native support,
> > > noted by an IFN_LAST ifn and a NULL decl.
> > > (vect_analyze_data_refs): Always consider gathers.
> > > * tree-vect-patterns.c (vect_recog_gather_scatter_pattern):
> > > Test for no IFN gather rather than decl gather.
> > > * tree-vect-stmts.c (vect_model_load_cost): Pass in the
> > > gather-scatter info and cost emulated gathers accordingly.
> > > (vect_truncate_gather_scatter_offset): Properly test for
> > > no IFN gather.
> > > (vect_use_strided_gather_scatters_p): Likewise.
> > > (get_load_store_type): Handle emulated gathers and their
> > > restrictions.
> > > (vectorizable_load): Likewise.  Emulate them by extracting
> > > scalar offsets, doing scalar loads and a vector construct.
> > >
> > > * gcc.target/i386/vect-gather-1.c: New testcase.
> > > * gfortran.dg/vect/vect-8.f90: Adjust.
>

Hi,

The adjusted testcase now fails on aarch64:
FAIL:  gfortran.dg/vect/vect-8.f90   -O   scan-tree-dump-times vect
"vectorized 23 loops" 1


Christophe

> > ---
> > >  gcc/testsuite/gcc.target/i386/vect-gather-1.c |  18 
> > >  gcc/testsuite/gfortran.dg/vect/vect-8.f90 |   2 +-
> > >  gcc/tree-vect-data-refs.c |  34 --
> > >  gcc/tree-vect-patterns.c  |   2 +-
> > >  gcc/tree-vect-stmts.c | 100 --
> > >  5 files changed, 138 insertions(+), 18 deletions(-)
> > >  create mode 100644 gcc/testsuite/gcc.target/i386/vect-gather-1.c
> > >
> > > diff --git a/gcc/testsuite/gcc.target/i386/vect-gather-1.c
> b/gcc/testsuite/gcc.target/i386/vect-gather-1.c
> > > new file mode 100644
> > > index 000..134aef39666
> > > --- /dev/null
> > > +++ b/gcc/testsuite/gcc.target/i386/vect-gather-1.c
> > > @@ -0,0 +1,18 @@
> > > +/* { dg-do compile } */
> > > +/* { dg-options "-Ofast -msse2 -fdump-tree-vect-details" } */
> > > +
> > > +#ifndef INDEXTYPE
> > > +#define INDEXTYPE int
> > > +#endif
> > > +double vmul(INDEXTYPE *rowstart, INDEXTYPE *rowend,
> > > +   double *luval, double *dst)
> > > +{
> > > +  double res = 0;
> > > +  for (const INDEXTYPE * col = rowstart; col != rowend; ++col,
> ++luval)
> > > +res += *luval * dst[*col];
> > > +  return res;
> > > +}
> > > +
> > > +/* With gather emulation this should be profitable to vectorize
> > > +   even with plain SSE2.  */
> > > +/* { dg-final { scan-tree-dump "loop vectorized" "vect" } } */
> > > diff --git a/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> > > index 9994805d77f..cc1aebfbd84 100644
> > > --- a/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> > > +++ b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> > > @@ -706,5 +706,5 @@ END SUBROUTINE kernel
> > >
> > >  ! { dg-final { scan-tree-dump-times "vectorized 24 loops" 1 "vect" {
> target aarch64_sve } } }
> > >  ! { dg-final { scan-tree-dump-times "vectorized 23 loops" 1 "vect" {
> target { aarch64*-*-* && { ! aarch64_sve } } } } }
> > > -! { dg-final { scan-tree-dump-times "vectorized 2\[23\] loops" 1
> "vect" { target { vect_intdouble_cvt && { ! aarch64*-*-* } } } } }
> > > +! { dg-final { scan-tree-dump-times "vectorized 2\[234\] loops" 1
> "vect" { target { vect_intdouble_cvt && { ! aarch64*-*-* } } } } }
> > >  ! { dg-final { scan-tree-dump-times "vectorized 17 loops" 1 "vect" {
> target { { ! vect_intdouble_cvt } && { ! aarch64*-*-* } } } } }
> > > diff --git a/gcc/tree-vect-data-refs.c b/gcc/tree-vect-data-refs.c
> > > index 6995efba899..3c29ff04fd8 100644
> > > --- a/gcc/tree-vect-data-refs.c
> > > +++ b/gcc/tree-vect-data-refs

Re: [PATCH 1/2] Add emulated gather capability to the vectorizer

2021-08-04 Thread Richard Biener
On Wed, 4 Aug 2021, Richard Sandiford wrote:

> Richard Biener  writes:
> > This adds a gather vectorization capability to the vectorizer
> > without target support by decomposing the offset vector, doing
> > scalar loads and then building a vector from the result.  This
> > is aimed mainly at cases where vectorizing the rest of the loop
> > offsets the cost of vectorizing the gather.
> >
> > Note it's difficult to avoid vectorizing the offset load, but in
> > some cases later passes can turn the vector load + extract into
> > scalar loads, see the followup patch.
> >
> > On SPEC CPU 2017 510.parest_r this improves runtime from 250s
> > to 219s on a Zen2 CPU which has its native gather instructions
> > disabled (using those the runtime instead increases to 254s)
> > using -Ofast -march=znver2 [-flto].  It turns out the critical
> > loops in this benchmark all perform gather operations.
> >
> > Bootstrapped and tested on x86_64-unknown-linux-gnu.
> >
> > 2021-07-30  Richard Biener  
> >
> > * tree-vect-data-refs.c (vect_check_gather_scatter):
> > Include widening conversions only when the result is
> > still handled by native gather or the current offset
> > size does not already match the data size.
> > Also succeed analysis in case there's no native support,
> > noted by an IFN_LAST ifn and a NULL decl.
> > (vect_analyze_data_refs): Always consider gathers.
> > * tree-vect-patterns.c (vect_recog_gather_scatter_pattern):
> > Test for no IFN gather rather than decl gather.
> > * tree-vect-stmts.c (vect_model_load_cost): Pass in the
> > gather-scatter info and cost emulated gathers accordingly.
> > (vect_truncate_gather_scatter_offset): Properly test for
> > no IFN gather.
> > (vect_use_strided_gather_scatters_p): Likewise.
> > (get_load_store_type): Handle emulated gathers and their
> > restrictions.
> > (vectorizable_load): Likewise.  Emulate them by extracting
> > scalar offsets, doing scalar loads and a vector construct.
> >
> > * gcc.target/i386/vect-gather-1.c: New testcase.
> > * gfortran.dg/vect/vect-8.f90: Adjust.
> > ---
> >  gcc/testsuite/gcc.target/i386/vect-gather-1.c |  18 
> >  gcc/testsuite/gfortran.dg/vect/vect-8.f90 |   2 +-
> >  gcc/tree-vect-data-refs.c |  34 --
> >  gcc/tree-vect-patterns.c  |   2 +-
> >  gcc/tree-vect-stmts.c | 100 --
> >  5 files changed, 138 insertions(+), 18 deletions(-)
> >  create mode 100644 gcc/testsuite/gcc.target/i386/vect-gather-1.c
> >
> > diff --git a/gcc/testsuite/gcc.target/i386/vect-gather-1.c 
> > b/gcc/testsuite/gcc.target/i386/vect-gather-1.c
> > new file mode 100644
> > index 000..134aef39666
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/i386/vect-gather-1.c
> > @@ -0,0 +1,18 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-Ofast -msse2 -fdump-tree-vect-details" } */
> > +
> > +#ifndef INDEXTYPE
> > +#define INDEXTYPE int
> > +#endif
> > +double vmul(INDEXTYPE *rowstart, INDEXTYPE *rowend,
> > +   double *luval, double *dst)
> > +{
> > +  double res = 0;
> > +  for (const INDEXTYPE * col = rowstart; col != rowend; ++col, ++luval)
> > +res += *luval * dst[*col];
> > +  return res;
> > +}
> > +
> > +/* With gather emulation this should be profitable to vectorize
> > +   even with plain SSE2.  */
> > +/* { dg-final { scan-tree-dump "loop vectorized" "vect" } } */
> > diff --git a/gcc/testsuite/gfortran.dg/vect/vect-8.f90 
> > b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> > index 9994805d77f..cc1aebfbd84 100644
> > --- a/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> > +++ b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> > @@ -706,5 +706,5 @@ END SUBROUTINE kernel
> >  
> >  ! { dg-final { scan-tree-dump-times "vectorized 24 loops" 1 "vect" { 
> > target aarch64_sve } } }
> >  ! { dg-final { scan-tree-dump-times "vectorized 23 loops" 1 "vect" { 
> > target { aarch64*-*-* && { ! aarch64_sve } } } } }
> > -! { dg-final { scan-tree-dump-times "vectorized 2\[23\] loops" 1 "vect" { 
> > target { vect_intdouble_cvt && { ! aarch64*-*-* } } } } }
> > +! { dg-final { scan-tree-dump-times "vectorized 2\[234\] loops" 1 "vect" { 
> > target { vect_intdouble_cvt && { ! aarch64*-*-* } } } } }
> >  ! { dg-final { scan-tree-dump-times "vectorized 17 loops" 1 "vect" { 
> > target { { ! vect_intdouble_cvt } && { ! aarch64*-*-* } } } } }
> > diff --git a/gcc/tree-vect-data-refs.c b/gcc/tree-vect-data-refs.c
> > index 6995efba899..3c29ff04fd8 100644
> > --- a/gcc/tree-vect-data-refs.c
> > +++ b/gcc/tree-vect-data-refs.c
> > @@ -4007,8 +4007,27 @@ vect_check_gather_scatter (stmt_vec_info stmt_info, 
> > loop_vec_info loop_vinfo,
> >   continue;
> > }
> >  
> > - if (TYPE_PRECISION (TREE_TYPE (op0))
> > - < TYPE_PRECISION (TREE_TYPE (off)))
> > + /* Include the conversion if it is widening and we're using
> > +the IFN path or the target can handl

Re: [PATCH 1/2] Add emulated gather capability to the vectorizer

2021-08-04 Thread Richard Sandiford via Gcc-patches
Richard Biener  writes:
> This adds a gather vectorization capability to the vectorizer
> without target support by decomposing the offset vector, doing
> scalar loads and then building a vector from the result.  This
> is aimed mainly at cases where vectorizing the rest of the loop
> offsets the cost of vectorizing the gather.
>
> Note it's difficult to avoid vectorizing the offset load, but in
> some cases later passes can turn the vector load + extract into
> scalar loads, see the followup patch.
>
> On SPEC CPU 2017 510.parest_r this improves runtime from 250s
> to 219s on a Zen2 CPU which has its native gather instructions
> disabled (using those the runtime instead increases to 254s)
> using -Ofast -march=znver2 [-flto].  It turns out the critical
> loops in this benchmark all perform gather operations.
>
> Bootstrapped and tested on x86_64-unknown-linux-gnu.
>
> 2021-07-30  Richard Biener  
>
>   * tree-vect-data-refs.c (vect_check_gather_scatter):
>   Include widening conversions only when the result is
>   still handled by native gather or the current offset
>   size does not already match the data size.
>   Also succeed analysis in case there's no native support,
>   noted by an IFN_LAST ifn and a NULL decl.
>   (vect_analyze_data_refs): Always consider gathers.
>   * tree-vect-patterns.c (vect_recog_gather_scatter_pattern):
>   Test for no IFN gather rather than decl gather.
>   * tree-vect-stmts.c (vect_model_load_cost): Pass in the
>   gather-scatter info and cost emulated gathers accordingly.
>   (vect_truncate_gather_scatter_offset): Properly test for
>   no IFN gather.
>   (vect_use_strided_gather_scatters_p): Likewise.
>   (get_load_store_type): Handle emulated gathers and their
>   restrictions.
>   (vectorizable_load): Likewise.  Emulate them by extracting
> scalar offsets, doing scalar loads and a vector construct.
>
>   * gcc.target/i386/vect-gather-1.c: New testcase.
>   * gfortran.dg/vect/vect-8.f90: Adjust.
> ---
>  gcc/testsuite/gcc.target/i386/vect-gather-1.c |  18 
>  gcc/testsuite/gfortran.dg/vect/vect-8.f90 |   2 +-
>  gcc/tree-vect-data-refs.c |  34 --
>  gcc/tree-vect-patterns.c  |   2 +-
>  gcc/tree-vect-stmts.c | 100 --
>  5 files changed, 138 insertions(+), 18 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.target/i386/vect-gather-1.c
>
> diff --git a/gcc/testsuite/gcc.target/i386/vect-gather-1.c 
> b/gcc/testsuite/gcc.target/i386/vect-gather-1.c
> new file mode 100644
> index 000..134aef39666
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/vect-gather-1.c
> @@ -0,0 +1,18 @@
> +/* { dg-do compile } */
> +/* { dg-options "-Ofast -msse2 -fdump-tree-vect-details" } */
> +
> +#ifndef INDEXTYPE
> +#define INDEXTYPE int
> +#endif
> +double vmul(INDEXTYPE *rowstart, INDEXTYPE *rowend,
> + double *luval, double *dst)
> +{
> +  double res = 0;
> +  for (const INDEXTYPE * col = rowstart; col != rowend; ++col, ++luval)
> +res += *luval * dst[*col];
> +  return res;
> +}
> +
> +/* With gather emulation this should be profitable to vectorize
> +   even with plain SSE2.  */
> +/* { dg-final { scan-tree-dump "loop vectorized" "vect" } } */
> diff --git a/gcc/testsuite/gfortran.dg/vect/vect-8.f90 
> b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> index 9994805d77f..cc1aebfbd84 100644
> --- a/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> +++ b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> @@ -706,5 +706,5 @@ END SUBROUTINE kernel
>  
>  ! { dg-final { scan-tree-dump-times "vectorized 24 loops" 1 "vect" { target 
> aarch64_sve } } }
>  ! { dg-final { scan-tree-dump-times "vectorized 23 loops" 1 "vect" { target 
> { aarch64*-*-* && { ! aarch64_sve } } } } }
> -! { dg-final { scan-tree-dump-times "vectorized 2\[23\] loops" 1 "vect" { 
> target { vect_intdouble_cvt && { ! aarch64*-*-* } } } } }
> +! { dg-final { scan-tree-dump-times "vectorized 2\[234\] loops" 1 "vect" { 
> target { vect_intdouble_cvt && { ! aarch64*-*-* } } } } }
>  ! { dg-final { scan-tree-dump-times "vectorized 17 loops" 1 "vect" { target 
> { { ! vect_intdouble_cvt } && { ! aarch64*-*-* } } } } }
> diff --git a/gcc/tree-vect-data-refs.c b/gcc/tree-vect-data-refs.c
> index 6995efba899..3c29ff04fd8 100644
> --- a/gcc/tree-vect-data-refs.c
> +++ b/gcc/tree-vect-data-refs.c
> @@ -4007,8 +4007,27 @@ vect_check_gather_scatter (stmt_vec_info stmt_info, 
> loop_vec_info loop_vinfo,
> continue;
>   }
>  
> -   if (TYPE_PRECISION (TREE_TYPE (op0))
> -   < TYPE_PRECISION (TREE_TYPE (off)))
> +   /* Include the conversion if it is widening and we're using
> +  the IFN path or the target can handle the converted from
> +  offset or the current size is not already the same as the
> +  data vector element size.  */
> +   if ((TYPE_PRECISION (TREE_TYPE (op0))
> +< TYPE_P

[PATCH 1/2] Add emulated gather capability to the vectorizer

2021-08-02 Thread Richard Biener
This adds a gather vectorization capability to the vectorizer
without target support by decomposing the offset vector, doing
scalar loads and then building a vector from the result.  This
is aimed mainly at cases where vectorizing the rest of the loop
offsets the cost of vectorizing the gather.

Note it's difficult to avoid vectorizing the offset load, but in
some cases later passes can turn the vector load + extract into
scalar loads, see the followup patch.
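
[Editor's illustration, not part of the patch: a rough C sketch of what
the emulation amounts to for a two-lane double gather under plain SSE2,
as in the testcase below.  The intrinsics and lane count are assumptions
chosen for the example; the vectorizer emits equivalent GIMPLE, not
these calls.

  #include <emmintrin.h>

  /* Sketch: emulate a 2-lane double gather.  The offset vector is
     decomposed into scalar elements, each element drives a scalar
     load, and the results are packed back into a vector (the
     vec_construct the cost model has to account for).  */
  static __m128d
  emulated_gather_pd (const double *base, __m128i idx)
  {
    int i0 = _mm_cvtsi128_si32 (idx);                      /* extract lane 0 */
    int i1 = _mm_cvtsi128_si32 (_mm_srli_si128 (idx, 4));  /* extract lane 1 */
    double d0 = base[i0];                                  /* scalar load */
    double d1 = base[i1];                                  /* scalar load */
    return _mm_set_pd (d1, d0);                            /* build vector */
  }

The offset extraction above is what later passes can sometimes fold
back into scalar loads of the offsets themselves.]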

On SPEC CPU 2017 510.parest_r this improves runtime from 250s
to 219s on a Zen2 CPU which has its native gather instructions
disabled (using those the runtime instead increases to 254s)
using -Ofast -march=znver2 [-flto].  It turns out the critical
loops in this benchmark all perform gather operations.

Bootstrapped and tested on x86_64-unknown-linux-gnu.

2021-07-30  Richard Biener  

* tree-vect-data-refs.c (vect_check_gather_scatter):
Include widening conversions only when the result is
still handled by native gather or the current offset
size does not already match the data size.
Also succeed analysis in case there's no native support,
noted by an IFN_LAST ifn and a NULL decl.
(vect_analyze_data_refs): Always consider gathers.
* tree-vect-patterns.c (vect_recog_gather_scatter_pattern):
Test for no IFN gather rather than decl gather.
* tree-vect-stmts.c (vect_model_load_cost): Pass in the
gather-scatter info and cost emulated gathers accordingly.
(vect_truncate_gather_scatter_offset): Properly test for
no IFN gather.
(vect_use_strided_gather_scatters_p): Likewise.
(get_load_store_type): Handle emulated gathers and their
restrictions.
(vectorizable_load): Likewise.  Emulate them by extracting
scalar offsets, doing scalar loads and a vector construct.

* gcc.target/i386/vect-gather-1.c: New testcase.
* gfortran.dg/vect/vect-8.f90: Adjust.
---
 gcc/testsuite/gcc.target/i386/vect-gather-1.c |  18 
 gcc/testsuite/gfortran.dg/vect/vect-8.f90 |   2 +-
 gcc/tree-vect-data-refs.c |  34 --
 gcc/tree-vect-patterns.c  |   2 +-
 gcc/tree-vect-stmts.c | 100 --
 5 files changed, 138 insertions(+), 18 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/i386/vect-gather-1.c

diff --git a/gcc/testsuite/gcc.target/i386/vect-gather-1.c 
b/gcc/testsuite/gcc.target/i386/vect-gather-1.c
new file mode 100644
index 000..134aef39666
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/vect-gather-1.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-options "-Ofast -msse2 -fdump-tree-vect-details" } */
+
+#ifndef INDEXTYPE
+#define INDEXTYPE int
+#endif
+double vmul(INDEXTYPE *rowstart, INDEXTYPE *rowend,
+   double *luval, double *dst)
+{
+  double res = 0;
+  for (const INDEXTYPE * col = rowstart; col != rowend; ++col, ++luval)
+res += *luval * dst[*col];
+  return res;
+}
+
+/* With gather emulation this should be profitable to vectorize
+   even with plain SSE2.  */
+/* { dg-final { scan-tree-dump "loop vectorized" "vect" } } */
diff --git a/gcc/testsuite/gfortran.dg/vect/vect-8.f90 
b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
index 9994805d77f..cc1aebfbd84 100644
--- a/gcc/testsuite/gfortran.dg/vect/vect-8.f90
+++ b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
@@ -706,5 +706,5 @@ END SUBROUTINE kernel
 
 ! { dg-final { scan-tree-dump-times "vectorized 24 loops" 1 "vect" { target 
aarch64_sve } } }
 ! { dg-final { scan-tree-dump-times "vectorized 23 loops" 1 "vect" { target { 
aarch64*-*-* && { ! aarch64_sve } } } } }
-! { dg-final { scan-tree-dump-times "vectorized 2\[23\] loops" 1 "vect" { 
target { vect_intdouble_cvt && { ! aarch64*-*-* } } } } }
+! { dg-final { scan-tree-dump-times "vectorized 2\[234\] loops" 1 "vect" { 
target { vect_intdouble_cvt && { ! aarch64*-*-* } } } } }
 ! { dg-final { scan-tree-dump-times "vectorized 17 loops" 1 "vect" { target { 
{ ! vect_intdouble_cvt } && { ! aarch64*-*-* } } } } }
diff --git a/gcc/tree-vect-data-refs.c b/gcc/tree-vect-data-refs.c
index 6995efba899..3c29ff04fd8 100644
--- a/gcc/tree-vect-data-refs.c
+++ b/gcc/tree-vect-data-refs.c
@@ -4007,8 +4007,27 @@ vect_check_gather_scatter (stmt_vec_info stmt_info, 
loop_vec_info loop_vinfo,
  continue;
}
 
- if (TYPE_PRECISION (TREE_TYPE (op0))
- < TYPE_PRECISION (TREE_TYPE (off)))
+ /* Include the conversion if it is widening and we're using
+the IFN path or the target can handle the converted from
+offset or the current size is not already the same as the
+data vector element size.  */
+ if ((TYPE_PRECISION (TREE_TYPE (op0))
+  < TYPE_PRECISION (TREE_TYPE (off)))
+ && ((!use_ifn_p
+  && (DR_IS_READ (dr)
+  ? (targetm.vectorize.builtin_gather
+