Re: [PATCH v2] [PR96339] Optimise svlast[ab]

2023-06-14 Thread Tejas Belagod via Gcc-patches



From: Kyrylo Tkachov
Date: Wednesday, June 14, 2023 at 10:11 PM
To: Prathamesh Kulkarni, Tejas Belagod
Cc: Richard Sandiford, gcc-patches@gcc.gnu.org
Subject: RE: [PATCH v2] [PR96339] Optimise svlast[ab]


> -----Original Message-----
> From: Prathamesh Kulkarni via Gcc-patches
> Sent: Wednesday, June 14, 2023 8:13 AM
> Subject: Re: [PATCH v2] [PR96339] Optimise svlast[ab]
>
> [quoted patch description, ChangeLogs and bootstrap-failure report
> snipped; see Prathamesh's message further down the thread]

Fixed thusly in trunk.
Thanks,
Kyrill

gcc/ChangeLog:

* config/aarch64/aarch64-sve-builtins-base.cc (svlast_impl::fold):
Fix signed comparison warning in loop from npats to enelts.


Ah, sorry for breaking bootstrap, and thanks Kyrill for the fix.

Tejas.



RE: [PATCH v2] [PR96339] Optimise svlast[ab]

2023-06-14 Thread Kyrylo Tkachov via Gcc-patches


> -----Original Message-----
> From: Prathamesh Kulkarni via Gcc-patches
> Sent: Wednesday, June 14, 2023 8:13 AM
> Subject: Re: [PATCH v2] [PR96339] Optimise svlast[ab]
>
> [quoted patch description, ChangeLogs and bootstrap-failure report
> snipped; see Prathamesh's message below]

Fixed thusly in trunk.
Thanks,
Kyrill

gcc/ChangeLog:

* config/aarch64/aarch64-sve-builtins-base.cc (svlast_impl::fold):
Fix signed comparison warning in loop from npats to enelts.
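
The actual fix is in the attached boot.patch, which the archive does not
inline, so the following is only a hedged sketch of the usual shape of such
a fix: make the two operands of the comparison agree in signedness, here by
casting the unsigned bound (variable names are taken from the diagnostic;
the real patch may differ):

  /* Sketch, not boot.patch: 'enelts' is unsigned (the diagnostic says
     'long unsigned int') while 'i' is 'int', so compare like with like.  */
  for (i = npats; i < (int) enelts; i += step_1)
    ...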



[Attachment: boot.patch]


Re: [PATCH v2] [PR96339] Optimise svlast[ab]

2023-06-14 Thread Prathamesh Kulkarni via Gcc-patches
On Tue, 13 Jun 2023 at 12:38, Tejas Belagod via Gcc-patches wrote:
>
> [patch description and ChangeLogs snipped; see the original posting at
> the bottom of the thread]
>
> OK, thanks.
>
> Applied on master, thanks.
Hi Tejas,
This seems to break the aarch64 bootstrap build with the following error,
due to a -Wsign-compare diagnostic:
00:18:19 
/home/tcwg-buildslave/workspace/tcwg_gnu_6/abe/snapshots/gcc.git~master/gcc/config/aarch64/aarch64-sve-builtins-base.cc:1133:35:
error: comparison of integer expressions of different signedness:
‘int’ and ‘long unsigned int’ [-Werror=sign-compare]
00:18:19  1133 |     for (i = npats; i < enelts; i += step_1)
00:18:19       |                     ~~^~~~~~~~
00:30:46 abe-debug-build: cc1plus: all warnings being treated as errors
00:30:46 abe-debug-build: make[3]: ***
[/home/tcwg-buildslave/workspace/tcwg_gnu_6/abe/snapshots/gcc.git~master/gcc/config/aarch64/t-aarch64:96:
aarch64-sve-builtins-base.o] Error 1
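
For context, -Wsign-compare fires whenever operands of different signedness
meet in a comparison. A minimal standalone reproduction (hypothetical code,
not the GCC sources):

  /* repro.cc -- compile with: g++ -c -Wsign-compare -Werror repro.cc */
  #include <cstddef>

  int count_below (const int *a, std::size_t n, int key)
  {
    int hits = 0;
    for (int i = 0; i < n; i++)   /* 'int' vs 'std::size_t': warns here */
      if (a[i] < key)
        hits++;
    return hits;
  }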

Thanks,
Prathamesh
>
> Tejas.
>
>
> Richard


Re: [PATCH v2] [PR96339] Optimise svlast[ab]

2023-06-13 Thread Tejas Belagod via Gcc-patches



From: Richard Sandiford
Date: Monday, June 12, 2023 at 2:15 PM
To: Tejas Belagod
Cc: gcc-patches@gcc.gnu.org, Tejas Belagod
Subject: Re: [PATCH v2] [PR96339] Optimise svlast[ab]
Tejas Belagod writes:
> [patch description and ChangeLogs snipped; see the original posting at
> the bottom of the thread]

OK, thanks.

Applied on master, thanks.

Tejas.


Richard


Re: [PATCH v2] [PR96339] Optimise svlast[ab]

2023-06-12 Thread Richard Sandiford via Gcc-patches
Tejas Belagod writes:
> [patch description and ChangeLogs snipped; see the original posting
> below]

OK, thanks.

Richard


[PATCH v2] [PR96339] Optimise svlast[ab]

2023-06-12 Thread Tejas Belagod via Gcc-patches
From: Tejas Belagod 

  This PR optimizes an SVE intrinsics sequence where a scalar is selected
  based on a constant predicate and a variable vector. Such a sequence is
  optimized to return the corresponding element of a NEON vector. For
  example,
    svlasta (svptrue_pat_b8 (SV_VL1), x)
  returns
    umov    w0, v0.b[1]
  Likewise,
    svlastb (svptrue_pat_b8 (SV_VL1), x)
  returns
    umov    w0, v0.b[0]
  This optimization only works provided the constant predicate maps to a
  range that is within the bounds of a 128-bit NEON register.
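
For illustration, a minimal standalone sketch of the kind of source this
fold targets (function names are hypothetical; the intrinsics are the ACLE
ones above, and SVE support is assumed, e.g. -march=armv8.2-a+sve):

  #include <arm_sve.h>
  #include <stdint.h>

  /* With the constant VL1 predicate only element 0 is active, so the
     relevant element sits at a fixed Advanced SIMD lane and each call
     can fold to a single UMOV.  */
  int8_t last_active (svint8_t x)
  {
    return svlastb (svptrue_pat_b8 (SV_VL1), x);  /* umov w0, v0.b[0] */
  }

  int8_t after_last_active (svint8_t x)
  {
    return svlasta (svptrue_pat_b8 (SV_VL1), x);  /* umov w0, v0.b[1] */
  }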

gcc/ChangeLog:

PR target/96339
* config/aarch64/aarch64-sve-builtins-base.cc (svlast_impl::fold): Fold
SVE calls that have a constant input predicate vector.
(svlast_impl::is_lasta): Query to check if intrinsic is svlasta.
(svlast_impl::is_lastb): Query to check if intrinsic is svlastb.
(svlast_impl::vect_all_same): Check if all vector elements are equal.

gcc/testsuite/ChangeLog:

PR target/96339
* gcc.target/aarch64/sve/acle/general-c/svlast.c: New.
* gcc.target/aarch64/sve/acle/general-c/svlast128_run.c: New.
* gcc.target/aarch64/sve/acle/general-c/svlast256_run.c: New.
* gcc.target/aarch64/sve/pcs/return_4.c (caller_bf16): Fix asm
to expect optimized code for function body.
* gcc.target/aarch64/sve/pcs/return_4_128.c (caller_bf16): Likewise.
* gcc.target/aarch64/sve/pcs/return_4_256.c (caller_bf16): Likewise.
* gcc.target/aarch64/sve/pcs/return_4_512.c (caller_bf16): Likewise.
* gcc.target/aarch64/sve/pcs/return_4_1024.c (caller_bf16): Likewise.
* gcc.target/aarch64/sve/pcs/return_4_2048.c (caller_bf16): Likewise.
* gcc.target/aarch64/sve/pcs/return_5.c (caller_bf16): Likewise.
* gcc.target/aarch64/sve/pcs/return_5_128.c (caller_bf16): Likewise.
* gcc.target/aarch64/sve/pcs/return_5_256.c (caller_bf16): Likewise.
* gcc.target/aarch64/sve/pcs/return_5_512.c (caller_bf16): Likewise.
* gcc.target/aarch64/sve/pcs/return_5_1024.c (caller_bf16): Likewise.
* gcc.target/aarch64/sve/pcs/return_5_2048.c (caller_bf16): Likewise.
---
 .../aarch64/aarch64-sve-builtins-base.cc      | 133 ++++
 .../aarch64/sve/acle/general-c/svlast.c       |  63 ++
 .../sve/acle/general-c/svlast128_run.c        | 313 +++++++++
 .../sve/acle/general-c/svlast256_run.c        | 314 +++++++++
 .../gcc.target/aarch64/sve/pcs/return_4.c |   2 -
 .../aarch64/sve/pcs/return_4_1024.c   |   2 -
 .../gcc.target/aarch64/sve/pcs/return_4_128.c |   2 -
 .../aarch64/sve/pcs/return_4_2048.c   |   2 -
 .../gcc.target/aarch64/sve/pcs/return_4_256.c |   2 -
 .../gcc.target/aarch64/sve/pcs/return_4_512.c |   2 -
 .../gcc.target/aarch64/sve/pcs/return_5.c |   2 -
 .../aarch64/sve/pcs/return_5_1024.c   |   2 -
 .../gcc.target/aarch64/sve/pcs/return_5_128.c |   2 -
 .../aarch64/sve/pcs/return_5_2048.c   |   2 -
 .../gcc.target/aarch64/sve/pcs/return_5_256.c |   2 -
 .../gcc.target/aarch64/sve/pcs/return_5_512.c |   2 -
 16 files changed, 823 insertions(+), 24 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast128_run.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast256_run.c

diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
index cd9cace3c9b..9b766ffa817 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc
+++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
@@ -1056,6 +1056,139 @@ class svlast_impl : public quiet<function_base>
 public:
   CONSTEXPR svlast_impl (int unspec) : m_unspec (unspec) {}
 
+  bool is_lasta () const { return m_unspec == UNSPEC_LASTA; }
+  bool is_lastb () const { return m_unspec == UNSPEC_LASTB; }
+
+  bool vect_all_same (tree v, int step) const
+  {
+    int i;
+    int nelts = vector_cst_encoded_nelts (v);
+    tree first_el = VECTOR_CST_ENCODED_ELT (v, 0);
+
+    for (i = 0; i < nelts; i += step)
+      if (!operand_equal_p (VECTOR_CST_ENCODED_ELT (v, i), first_el, 0))
+	return false;
+
+    return true;
+  }
+
+  /* Fold a svlast{a/b} call with constant predicate to a BIT_FIELD_REF.
+     BIT_FIELD_REF lowers to an Advanced SIMD element extract, so we have
+     to ensure the index of the element being accessed is in the range of
+     an Advanced SIMD vector width.  */
+  gimple *fold (gimple_folder &f) const override
+  {
+    tree pred = gimple_call_arg (f.call, 0);
+    tree val = gimple_call_arg (f.call, 1);
+
+    if (TREE_CODE (pred) == VECTOR_CST)
+      {
+	HOST_WIDE_INT pos;
+	int i = 0;
+	int step = f.type_suffix (0).element_bytes;
+	int step_1 = gcd (step, VECTOR_CST_NPATTERNS (pred));
+	int npats = VECTOR_CST_NPATTERNS (pred);
+	unsigned HOST_WIDE_INT e