Hi Srinath,

> -----Original Message-----
> From: Srinath Parvathaneni <srinath.parvathan...@arm.com>
> Sent: 20 March 2020 15:51
> To: gcc-patches@gcc.gnu.org
> Cc: Kyrylo Tkachov <kyrylo.tkac...@arm.com>
> Subject: [PATCH v2][ARM][GCC][10x]: MVE ACLE intrinsics "add with carry
> across beats" and "beat-wise subtract".
> 
> Hello Kyrill,
> 
> The following patch is the rebased version of v1.
> (version v1) https://gcc.gnu.org/pipermail/gcc-patches/2019-
> November/534348.html
> 
> ####
> 
> Hello,
> 
> This patch adds support for the following MVE ACLE "add with carry across
> beats" intrinsics and "beat-wise subtract" intrinsics:
> 
> vadciq_s32, vadciq_u32, vadciq_m_s32, vadciq_m_u32, vadcq_s32,
> vadcq_u32, vadcq_m_s32, vadcq_m_u32, vsbciq_s32, vsbciq_u32,
> vsbciq_m_s32, vsbciq_m_u32, vsbcq_s32, vsbcq_u32, vsbcq_m_s32,
> vsbcq_m_u32.
> 
> Please refer to the M-profile Vector Extension (MVE) intrinsics
> documentation [1] for more details.
> [1] https://developer.arm.com/architectures/instruction-sets/simd-
> isas/helium/mve-intrinsics
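The "add with carry across beats" behaviour can be pictured with a small scalar model. This is an illustrative sketch only, not the intrinsic implementation: the model function name and the plain-C arrays are assumptions, and on real hardware VADC/VADCI operate on a single 128-bit Q register.

```c
#include <stdint.h>

/* Scalar model of MVE VADC.I32 semantics: the four 32-bit beats of a
   Q register are added with the carry out of each beat feeding the
   next, so one instruction performs a 128-bit addition.  vadciq_*
   uses a carry-in of 0; vadcq_* takes its carry-in from FPSCR.C.  */
static unsigned
model_vadcq_u32 (uint32_t res[4], const uint32_t a[4],
                 const uint32_t b[4], unsigned carry_in)
{
  unsigned carry = carry_in & 1u;
  for (int beat = 0; beat < 4; beat++)
    {
      uint64_t sum = (uint64_t) a[beat] + b[beat] + carry;
      res[beat] = (uint32_t) sum;     /* low 32 bits stay in the lane */
      carry = (unsigned) (sum >> 32); /* carry ripples to the next beat */
    }
  return carry;                       /* final carry -> *__carry_out */
}
```

Chaining two such operations, seeding the second with the first one's carry out, extends this to a 256-bit add, which is the intended use of the vadciq/vadcq pair.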
> 
> Regression tested on arm-none-eabi and found no regressions.
> 
> Ok for trunk?

Thanks, I've pushed this patch to trunk.
Kyrill

> 
> Thanks,
> Srinath.
> 
> gcc/ChangeLog:
> 
> 2020-03-20  Srinath Parvathaneni  <srinath.parvathan...@arm.com>
>           Andre Vieira  <andre.simoesdiasvie...@arm.com>
>           Mihail Ionescu  <mihail.ione...@arm.com>
> 
>       * config/arm/arm-builtins.c (ARM_BUILTIN_GET_FPSCR_NZCVQC):
>       Define.
>       (ARM_BUILTIN_SET_FPSCR_NZCVQC): Likewise.
>       (arm_init_mve_builtins): Add "__builtin_arm_get_fpscr_nzcvqc" and
>       "__builtin_arm_set_fpscr_nzcvqc" to arm_builtin_decls array.
>       (arm_expand_builtin): Define case ARM_BUILTIN_GET_FPSCR_NZCVQC
>       and ARM_BUILTIN_SET_FPSCR_NZCVQC.
>       * config/arm/arm_mve.h (vadciq_s32): Define macro.
>       (vadciq_u32): Likewise.
>       (vadciq_m_s32): Likewise.
>       (vadciq_m_u32): Likewise.
>       (vadcq_s32): Likewise.
>       (vadcq_u32): Likewise.
>       (vadcq_m_s32): Likewise.
>       (vadcq_m_u32): Likewise.
>       (vsbciq_s32): Likewise.
>       (vsbciq_u32): Likewise.
>       (vsbciq_m_s32): Likewise.
>       (vsbciq_m_u32): Likewise.
>       (vsbcq_s32): Likewise.
>       (vsbcq_u32): Likewise.
>       (vsbcq_m_s32): Likewise.
>       (vsbcq_m_u32): Likewise.
>       (__arm_vadciq_s32): Define intrinsic.
>       (__arm_vadciq_u32): Likewise.
>       (__arm_vadciq_m_s32): Likewise.
>       (__arm_vadciq_m_u32): Likewise.
>       (__arm_vadcq_s32): Likewise.
>       (__arm_vadcq_u32): Likewise.
>       (__arm_vadcq_m_s32): Likewise.
>       (__arm_vadcq_m_u32): Likewise.
>       (__arm_vsbciq_s32): Likewise.
>       (__arm_vsbciq_u32): Likewise.
>       (__arm_vsbciq_m_s32): Likewise.
>       (__arm_vsbciq_m_u32): Likewise.
>       (__arm_vsbcq_s32): Likewise.
>       (__arm_vsbcq_u32): Likewise.
>       (__arm_vsbcq_m_s32): Likewise.
>       (__arm_vsbcq_m_u32): Likewise.
>       (vadciq_m): Define polymorphic variant.
>       (vadciq): Likewise.
>       (vadcq_m): Likewise.
>       (vadcq): Likewise.
>       (vsbciq_m): Likewise.
>       (vsbciq): Likewise.
>       (vsbcq_m): Likewise.
>       (vsbcq): Likewise.
>       * config/arm/arm_mve_builtins.def (BINOP_NONE_NONE_NONE): Use
>       builtin qualifier.
>       (BINOP_UNONE_UNONE_UNONE): Likewise.
>       (QUADOP_NONE_NONE_NONE_NONE_UNONE): Likewise.
>       (QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE): Likewise.
>       * config/arm/mve.md (VADCIQ): Define iterator.
>       (VADCIQ_M): Likewise.
>       (VSBCQ): Likewise.
>       (VSBCQ_M): Likewise.
>       (VSBCIQ): Likewise.
>       (VSBCIQ_M): Likewise.
>       (VADCQ): Likewise.
>       (VADCQ_M): Likewise.
>       (mve_vadciq_m_<supf>v4si): Define RTL pattern.
>       (mve_vadciq_<supf>v4si): Likewise.
>       (mve_vadcq_m_<supf>v4si): Likewise.
>       (mve_vadcq_<supf>v4si): Likewise.
>       (mve_vsbciq_m_<supf>v4si): Likewise.
>       (mve_vsbciq_<supf>v4si): Likewise.
>       (mve_vsbcq_m_<supf>v4si): Likewise.
>       (mve_vsbcq_<supf>v4si): Likewise.
>       (get_fpscr_nzcvqc): Define insn.
>       (set_fpscr_nzcvqc): Likewise.
>       * config/arm/unspecs.md (UNSPEC_GET_FPSCR_NZCVQC): Define.
>       (VUNSPEC_SET_FPSCR_NZCVQC): Likewise.
> 
> gcc/testsuite/ChangeLog:
> 
> 2020-03-20  Srinath Parvathaneni  <srinath.parvathan...@arm.com>
>           Andre Vieira  <andre.simoesdiasvie...@arm.com>
>           Mihail Ionescu  <mihail.ione...@arm.com>
> 
>       * gcc.target/arm/mve/intrinsics/vadciq_m_s32.c: New test.
>       * gcc.target/arm/mve/intrinsics/vadciq_m_u32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vadciq_s32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vadciq_u32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vadcq_m_s32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vadcq_m_u32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vadcq_s32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vadcq_u32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vsbciq_m_s32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vsbciq_m_u32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vsbciq_s32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vsbciq_u32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vsbcq_m_s32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vsbcq_m_u32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vsbcq_s32.c: Likewise.
>       * gcc.target/arm/mve/intrinsics/vsbcq_u32.c: Likewise.
> 
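The header wrappers in the patch below route the carry through bit 29 (the C flag) of the new FPSCR_nzcvqc view of the status register. A host-side sketch of that bit manipulation follows; the `fake_fpscr_nzcvqc` variable is an assumption standing in for the register state that the real code reaches through `__builtin_arm_get_fpscr_nzcvqc`/`__builtin_arm_set_fpscr_nzcvqc`.

```c
/* Stand-in for the FPSCR_nzcvqc register; on the target this state is
   read and written with the new builtins instead.  */
static unsigned fake_fpscr_nzcvqc;

/* The vadcq/vsbcq wrappers seed FPSCR.C (bit 29) from *carry before
   the instruction, leaving the N, Z, V and QC bits untouched.  */
static void
seed_carry (const unsigned *carry)
{
  fake_fpscr_nzcvqc = (fake_fpscr_nzcvqc & ~0x20000000u) | (*carry << 29);
}

/* All of the wrappers read the carry back out of bit 29 afterwards.  */
static void
extract_carry (unsigned *carry)
{
  *carry = (fake_fpscr_nzcvqc >> 29) & 0x1u;
}
```

The read-modify-write in `seed_carry` matches the `& ~0x20000000u | (*__carry << 29)` expression used by every non-initializing (vadcq/vsbcq) wrapper below.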
> 
> ###############     Attachment also inlined for ease of reply     ###############
> 
> 
> diff --git a/gcc/config/arm/arm-builtins.c b/gcc/config/arm/arm-builtins.c
> index ecdd95fdb753be0c53f568b036df1396a8d8f485..96d8adcd37eb3caf51c71d66af0331f9d1924b92 100644
> --- a/gcc/config/arm/arm-builtins.c
> +++ b/gcc/config/arm/arm-builtins.c
> @@ -1151,6 +1151,8 @@ enum arm_builtins
> 
>    ARM_BUILTIN_GET_FPSCR,
>    ARM_BUILTIN_SET_FPSCR,
> +  ARM_BUILTIN_GET_FPSCR_NZCVQC,
> +  ARM_BUILTIN_SET_FPSCR_NZCVQC,
> 
>    ARM_BUILTIN_CMSE_NONSECURE_CALLER,
>    ARM_BUILTIN_SIMD_LANE_CHECK,
> @@ -1752,6 +1754,22 @@ arm_init_mve_builtins (void)
>    arm_init_simd_builtin_scalar_types ();
>    arm_init_simd_builtin_types ();
> 
> +  /* Add support for __builtin_{get,set}_fpscr_nzcvqc, used by MVE intrinsics
> +     that read and/or write the carry bit.  */
> +  tree get_fpscr_nzcvqc = build_function_type_list (intSI_type_node, NULL);
> +  tree set_fpscr_nzcvqc = build_function_type_list (void_type_node,
> +						    intSI_type_node, NULL);
> +  arm_builtin_decls[ARM_BUILTIN_GET_FPSCR_NZCVQC]
> +    = add_builtin_function ("__builtin_arm_get_fpscr_nzcvqc", get_fpscr_nzcvqc,
> +			    ARM_BUILTIN_GET_FPSCR_NZCVQC, BUILT_IN_MD, NULL,
> +			    NULL_TREE);
> +  arm_builtin_decls[ARM_BUILTIN_SET_FPSCR_NZCVQC]
> +    = add_builtin_function ("__builtin_arm_set_fpscr_nzcvqc", set_fpscr_nzcvqc,
> +			    ARM_BUILTIN_SET_FPSCR_NZCVQC, BUILT_IN_MD, NULL,
> +			    NULL_TREE);
> +
>    for (i = 0; i < ARRAY_SIZE (mve_builtin_data); i++, fcode++)
>      {
>        arm_builtin_datum *d = &mve_builtin_data[i];
> @@ -3289,6 +3307,23 @@ arm_expand_builtin (tree exp,
> 
>    switch (fcode)
>      {
> +    case ARM_BUILTIN_GET_FPSCR_NZCVQC:
> +    case ARM_BUILTIN_SET_FPSCR_NZCVQC:
> +      if (fcode == ARM_BUILTIN_GET_FPSCR_NZCVQC)
> +     {
> +       icode = CODE_FOR_get_fpscr_nzcvqc;
> +       target = gen_reg_rtx (SImode);
> +       emit_insn (GEN_FCN (icode) (target));
> +       return target;
> +     }
> +      else
> +     {
> +       icode = CODE_FOR_set_fpscr_nzcvqc;
> +       op0 = expand_normal (CALL_EXPR_ARG (exp, 0));
> +       emit_insn (GEN_FCN (icode) (force_reg (SImode, op0)));
> +       return NULL_RTX;
> +     }
> +
>      case ARM_BUILTIN_GET_FPSCR:
>      case ARM_BUILTIN_SET_FPSCR:
>        if (fcode == ARM_BUILTIN_GET_FPSCR)
> diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h
> index 77df7c75c3f76066d37f879e01b6d8df6c7394ae..220319cffd711323e5f72ba49407f4237f70ebf3 100644
> --- a/gcc/config/arm/arm_mve.h
> +++ b/gcc/config/arm/arm_mve.h
> @@ -2450,6 +2450,22 @@ typedef struct { uint8x16_t val[4]; } uint8x16x4_t;
>  #define vrev32q_x_f16(__a, __p) __arm_vrev32q_x_f16(__a, __p)
>  #define vrev64q_x_f16(__a, __p) __arm_vrev64q_x_f16(__a, __p)
>  #define vrev64q_x_f32(__a, __p) __arm_vrev64q_x_f32(__a, __p)
> +#define vadciq_s32(__a, __b,  __carry_out) __arm_vadciq_s32(__a, __b,  __carry_out)
> +#define vadciq_u32(__a, __b,  __carry_out) __arm_vadciq_u32(__a, __b,  __carry_out)
> +#define vadciq_m_s32(__inactive, __a, __b,  __carry_out, __p) __arm_vadciq_m_s32(__inactive, __a, __b,  __carry_out, __p)
> +#define vadciq_m_u32(__inactive, __a, __b,  __carry_out, __p) __arm_vadciq_m_u32(__inactive, __a, __b,  __carry_out, __p)
> +#define vadcq_s32(__a, __b,  __carry) __arm_vadcq_s32(__a, __b,  __carry)
> +#define vadcq_u32(__a, __b,  __carry) __arm_vadcq_u32(__a, __b,  __carry)
> +#define vadcq_m_s32(__inactive, __a, __b,  __carry, __p) __arm_vadcq_m_s32(__inactive, __a, __b,  __carry, __p)
> +#define vadcq_m_u32(__inactive, __a, __b,  __carry, __p) __arm_vadcq_m_u32(__inactive, __a, __b,  __carry, __p)
> +#define vsbciq_s32(__a, __b,  __carry_out) __arm_vsbciq_s32(__a, __b,  __carry_out)
> +#define vsbciq_u32(__a, __b,  __carry_out) __arm_vsbciq_u32(__a, __b,  __carry_out)
> +#define vsbciq_m_s32(__inactive, __a, __b,  __carry_out, __p) __arm_vsbciq_m_s32(__inactive, __a, __b,  __carry_out, __p)
> +#define vsbciq_m_u32(__inactive, __a, __b,  __carry_out, __p) __arm_vsbciq_m_u32(__inactive, __a, __b,  __carry_out, __p)
> +#define vsbcq_s32(__a, __b,  __carry) __arm_vsbcq_s32(__a, __b,  __carry)
> +#define vsbcq_u32(__a, __b,  __carry) __arm_vsbcq_u32(__a, __b,  __carry)
> +#define vsbcq_m_s32(__inactive, __a, __b,  __carry, __p) __arm_vsbcq_m_s32(__inactive, __a, __b,  __carry, __p)
> +#define vsbcq_m_u32(__inactive, __a, __b,  __carry, __p) __arm_vsbcq_m_u32(__inactive, __a, __b,  __carry, __p)
>  #endif
> 
>  __extension__ extern __inline void
> @@ -15917,6 +15933,158 @@ __arm_vshrq_x_n_u32 (uint32x4_t __a, const int __imm, mve_pred16_t __p)
>    return __builtin_mve_vshrq_m_n_uv4si (vuninitializedq_u32 (), __a, __imm, __p);
>  }
> 
> +__extension__ extern __inline int32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vadciq_s32 (int32x4_t __a, int32x4_t __b, unsigned * __carry_out)
> +{
> +  int32x4_t __res = __builtin_mve_vadciq_sv4si (__a, __b);
> +  *__carry_out = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline uint32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vadciq_u32 (uint32x4_t __a, uint32x4_t __b, unsigned * __carry_out)
> +{
> +  uint32x4_t __res = __builtin_mve_vadciq_uv4si (__a, __b);
> +  *__carry_out = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline int32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vadciq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, unsigned * __carry_out, mve_pred16_t __p)
> +{
> +  int32x4_t __res = __builtin_mve_vadciq_m_sv4si (__inactive, __a, __b, __p);
> +  *__carry_out = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline uint32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vadciq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, unsigned * __carry_out, mve_pred16_t __p)
> +{
> +  uint32x4_t __res = __builtin_mve_vadciq_m_uv4si (__inactive, __a, __b, __p);
> +  *__carry_out = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline int32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vadcq_s32 (int32x4_t __a, int32x4_t __b, unsigned * __carry)
> +{
> +  __builtin_arm_set_fpscr_nzcvqc((__builtin_arm_get_fpscr_nzcvqc () & ~0x20000000u) | (*__carry << 29));
> +  int32x4_t __res = __builtin_mve_vadcq_sv4si (__a, __b);
> +  *__carry = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline uint32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vadcq_u32 (uint32x4_t __a, uint32x4_t __b, unsigned * __carry)
> +{
> +  __builtin_arm_set_fpscr_nzcvqc((__builtin_arm_get_fpscr_nzcvqc () & ~0x20000000u) | (*__carry << 29));
> +  uint32x4_t __res = __builtin_mve_vadcq_uv4si (__a, __b);
> +  *__carry = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline int32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vadcq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, unsigned * __carry, mve_pred16_t __p)
> +{
> +  __builtin_arm_set_fpscr_nzcvqc((__builtin_arm_get_fpscr_nzcvqc () & ~0x20000000u) | (*__carry << 29));
> +  int32x4_t __res = __builtin_mve_vadcq_m_sv4si (__inactive, __a, __b, __p);
> +  *__carry = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline uint32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vadcq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, unsigned * __carry, mve_pred16_t __p)
> +{
> +  __builtin_arm_set_fpscr_nzcvqc((__builtin_arm_get_fpscr_nzcvqc () & ~0x20000000u) | (*__carry << 29));
> +  uint32x4_t __res = __builtin_mve_vadcq_m_uv4si (__inactive, __a, __b, __p);
> +  *__carry = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline int32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vsbciq_s32 (int32x4_t __a, int32x4_t __b, unsigned * __carry_out)
> +{
> +  int32x4_t __res = __builtin_mve_vsbciq_sv4si (__a, __b);
> +  *__carry_out = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline uint32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vsbciq_u32 (uint32x4_t __a, uint32x4_t __b, unsigned * __carry_out)
> +{
> +  uint32x4_t __res = __builtin_mve_vsbciq_uv4si (__a, __b);
> +  *__carry_out = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline int32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vsbciq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, unsigned * __carry_out, mve_pred16_t __p)
> +{
> +  int32x4_t __res = __builtin_mve_vsbciq_m_sv4si (__inactive, __a, __b, __p);
> +  *__carry_out = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline uint32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vsbciq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, unsigned * __carry_out, mve_pred16_t __p)
> +{
> +  uint32x4_t __res = __builtin_mve_vsbciq_m_uv4si (__inactive, __a, __b, __p);
> +  *__carry_out = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline int32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vsbcq_s32 (int32x4_t __a, int32x4_t __b, unsigned * __carry)
> +{
> +  __builtin_arm_set_fpscr_nzcvqc((__builtin_arm_get_fpscr_nzcvqc () & ~0x20000000u) | (*__carry << 29));
> +  int32x4_t __res = __builtin_mve_vsbcq_sv4si (__a, __b);
> +  *__carry = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline uint32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vsbcq_u32 (uint32x4_t __a, uint32x4_t __b, unsigned * __carry)
> +{
> +  __builtin_arm_set_fpscr_nzcvqc((__builtin_arm_get_fpscr_nzcvqc () & ~0x20000000u) | (*__carry << 29));
> +  uint32x4_t __res = __builtin_mve_vsbcq_uv4si (__a, __b);
> +  *__carry = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline int32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vsbcq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, unsigned * __carry, mve_pred16_t __p)
> +{
> +  __builtin_arm_set_fpscr_nzcvqc((__builtin_arm_get_fpscr_nzcvqc () & ~0x20000000u) | (*__carry << 29));
> +  int32x4_t __res = __builtin_mve_vsbcq_m_sv4si (__inactive, __a, __b, __p);
> +  *__carry = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
> +__extension__ extern __inline uint32x4_t
> +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
> +__arm_vsbcq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, unsigned * __carry, mve_pred16_t __p)
> +{
> +  __builtin_arm_set_fpscr_nzcvqc((__builtin_arm_get_fpscr_nzcvqc () & ~0x20000000u) | (*__carry << 29));
> +  uint32x4_t __res = __builtin_mve_vsbcq_m_uv4si (__inactive, __a, __b, __p);
> +  *__carry = (__builtin_arm_get_fpscr_nzcvqc () >> 29) & 0x1u;
> +  return __res;
> +}
> +
>  #if (__ARM_FEATURE_MVE & 2) /* MVE Floating point.  */
> 
>  __extension__ extern __inline void
> @@ -25525,6 +25693,65 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_int64_t_const_ptr]: __arm_vldrdq_gather_shifted_offset_z_s64 (__ARM_mve_coerce(__p0, int64_t const *), p1, p2), \
>    int (*)[__ARM_mve_type_uint64_t_const_ptr]: __arm_vldrdq_gather_shifted_offset_z_u64 (__ARM_mve_coerce(__p0, uint64_t const *), p1, p2));})
> 
> +#define vadciq_m(p0,p1,p2,p3,p4) __arm_vadciq_m(p0,p1,p2,p3,p4)
> +#define __arm_vadciq_m(p0,p1,p2,p3,p4) ({ __typeof(p0) __p0 = (p0); \
> +  __typeof(p1) __p1 = (p1); \
> +  __typeof(p2) __p2 = (p2); \
> +  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
> +  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vadciq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3, p4), \
> +  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vadciq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3, p4));})
> +
> +#define vadciq(p0,p1,p2) __arm_vadciq(p0,p1,p2)
> +#define __arm_vadciq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> +  __typeof(p1) __p1 = (p1); \
> +  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> +  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vadciq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \
> +  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vadciq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> +
> +#define vadcq_m(p0,p1,p2,p3,p4) __arm_vadcq_m(p0,p1,p2,p3,p4)
> +#define __arm_vadcq_m(p0,p1,p2,p3,p4) ({ __typeof(p0) __p0 = (p0); \
> +  __typeof(p1) __p1 = (p1); \
> +  __typeof(p2) __p2 = (p2); \
> +  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
> +  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vadcq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3, p4), \
> +  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vadcq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3, p4));})
> +
> +#define vadcq(p0,p1,p2) __arm_vadcq(p0,p1,p2)
> +#define __arm_vadcq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> +  __typeof(p1) __p1 = (p1); \
> +  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> +  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vadcq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \
> +  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vadcq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> +
> +#define vsbciq_m(p0,p1,p2,p3,p4) __arm_vsbciq_m(p0,p1,p2,p3,p4)
> +#define __arm_vsbciq_m(p0,p1,p2,p3,p4) ({ __typeof(p0) __p0 = (p0); \
> +  __typeof(p1) __p1 = (p1); \
> +  __typeof(p2) __p2 = (p2); \
> +  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
> +  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vsbciq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3, p4), \
> +  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vsbciq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3, p4));})
> +
> +#define vsbciq(p0,p1,p2) __arm_vsbciq(p0,p1,p2)
> +#define __arm_vsbciq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> +  __typeof(p1) __p1 = (p1); \
> +  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> +  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vsbciq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \
> +  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vsbciq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> +
> +#define vsbcq_m(p0,p1,p2,p3,p4) __arm_vsbcq_m(p0,p1,p2,p3,p4)
> +#define __arm_vsbcq_m(p0,p1,p2,p3,p4) ({ __typeof(p0) __p0 = (p0); \
> +  __typeof(p1) __p1 = (p1); \
> +  __typeof(p2) __p2 = (p2); \
> +  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
> +  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vsbcq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3, p4), \
> +  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vsbcq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3, p4));})
> +
> +#define vsbcq(p0,p1,p2) __arm_vsbcq(p0,p1,p2)
> +#define __arm_vsbcq(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> +  __typeof(p1) __p1 = (p1); \
> +  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> +  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vsbcq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), p2), \
> +  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vsbcq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), p2));})
> 
>  #define vldrbq_gather_offset_z(p0,p1,p2) __arm_vldrbq_gather_offset_z(p0,p1,p2)
>  #define __arm_vldrbq_gather_offset_z(p0,p1,p2) ({ __typeof(p0) __p0 = (p0); \
> diff --git a/gcc/config/arm/arm_mve_builtins.def b/gcc/config/arm/arm_mve_builtins.def
> index 9fc0a8a0c62b22cfd6d37658831cd91704f79885..38f46beb76a3068dcb8dd97e3ee8dbe2707dd72e 100644
> --- a/gcc/config/arm/arm_mve_builtins.def
> +++ b/gcc/config/arm/arm_mve_builtins.def
> @@ -857,3 +857,19 @@ VAR1 (LDRGBWBS_Z, vldrdq_gather_base_wb_z_s, v2di)
>  VAR1 (LDRGBWBS, vldrwq_gather_base_wb_s, v4si)
>  VAR1 (LDRGBWBS, vldrwq_gather_base_wb_f, v4sf)
>  VAR1 (LDRGBWBS, vldrdq_gather_base_wb_s, v2di)
> +VAR1 (BINOP_NONE_NONE_NONE, vadciq_s, v4si)
> +VAR1 (BINOP_UNONE_UNONE_UNONE, vadciq_u, v4si)
> +VAR1 (BINOP_NONE_NONE_NONE, vadcq_s, v4si)
> +VAR1 (BINOP_UNONE_UNONE_UNONE, vadcq_u, v4si)
> +VAR1 (BINOP_NONE_NONE_NONE, vsbciq_s, v4si)
> +VAR1 (BINOP_UNONE_UNONE_UNONE, vsbciq_u, v4si)
> +VAR1 (BINOP_NONE_NONE_NONE, vsbcq_s, v4si)
> +VAR1 (BINOP_UNONE_UNONE_UNONE, vsbcq_u, v4si)
> +VAR1 (QUADOP_NONE_NONE_NONE_NONE_UNONE, vadciq_m_s, v4si)
> +VAR1 (QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE, vadciq_m_u, v4si)
> +VAR1 (QUADOP_NONE_NONE_NONE_NONE_UNONE, vadcq_m_s, v4si)
> +VAR1 (QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE, vadcq_m_u, v4si)
> +VAR1 (QUADOP_NONE_NONE_NONE_NONE_UNONE, vsbciq_m_s, v4si)
> +VAR1 (QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE, vsbciq_m_u, v4si)
> +VAR1 (QUADOP_NONE_NONE_NONE_NONE_UNONE, vsbcq_m_s, v4si)
> +VAR1 (QUADOP_UNONE_UNONE_UNONE_UNONE_UNONE, vsbcq_m_u, v4si)
> diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md
> index 2573cbb719e24f257ac5c8cde4c3aafee6c527a6..55596cf54dbf055a96662257101181f6901a3cfb 100644
> --- a/gcc/config/arm/mve.md
> +++ b/gcc/config/arm/mve.md
> @@ -211,7 +211,10 @@
>                        VDWDUPQ_M VIDUPQ VIDUPQ_M VIWDUPQ VIWDUPQ_M
>                        VSTRWQSBWB_S VSTRWQSBWB_U VLDRWQGBWB_S VLDRWQGBWB_U
>                        VSTRWQSBWB_F VLDRWQGBWB_F VSTRDQSBWB_S VSTRDQSBWB_U
> -                      VLDRDQGBWB_S VLDRDQGBWB_U])
> +                      VLDRDQGBWB_S VLDRDQGBWB_U VADCQ_U VADCQ_M_U VADCQ_S
> +                      VADCQ_M_S VSBCIQ_U VSBCIQ_S VSBCIQ_M_U VSBCIQ_M_S
> +                      VSBCQ_U VSBCQ_S VSBCQ_M_U VSBCQ_M_S VADCIQ_U VADCIQ_M_U
> +                      VADCIQ_S VADCIQ_M_S])
> 
>  (define_mode_attr MVE_CNVT [(V8HI "V8HF") (V4SI "V4SF") (V8HF "V8HI")
>                           (V4SF "V4SI")])
> @@ -382,8 +385,13 @@
>                      (VSTRWQSO_U "u") (VSTRWQSO_S "s") (VSTRWQSSO_U "u")
>                      (VSTRWQSSO_S "s") (VSTRWQSBWB_S "s") (VSTRWQSBWB_U "u")
>                      (VLDRWQGBWB_S "s") (VLDRWQGBWB_U "u") (VLDRDQGBWB_S "s")
> -                    (VLDRDQGBWB_U "u") (VSTRDQSBWB_S "s")
> -                    (VSTRDQSBWB_U "u")])
> +                    (VLDRDQGBWB_U "u") (VSTRDQSBWB_S "s") (VADCQ_M_S "s")
> +                    (VSTRDQSBWB_U "u") (VSBCQ_U "u")  (VSBCQ_M_U "u")
> +                    (VSBCQ_S "s")  (VSBCQ_M_S "s") (VSBCIQ_U "u")
> +                    (VSBCIQ_M_U "u") (VSBCIQ_S "s") (VSBCIQ_M_S "s")
> +                    (VADCQ_U "u")  (VADCQ_M_U "u") (VADCQ_S "s")
> +                    (VADCIQ_U "u") (VADCIQ_M_U "u") (VADCIQ_S "s")
> +                    (VADCIQ_M_S "s")])
> 
>  (define_int_attr mode1 [(VCTP8Q "8") (VCTP16Q "16") (VCTP32Q "32")
>                       (VCTP64Q "64") (VCTP8Q_M "8") (VCTP16Q_M "16")
> @@ -636,6 +644,15 @@
>  (define_int_iterator VLDRWGBWBQ [VLDRWQGBWB_S VLDRWQGBWB_U])
>  (define_int_iterator VSTRDSBWBQ [VSTRDQSBWB_S VSTRDQSBWB_U])
>  (define_int_iterator VLDRDGBWBQ [VLDRDQGBWB_S VLDRDQGBWB_U])
> +(define_int_iterator VADCIQ [VADCIQ_U VADCIQ_S])
> +(define_int_iterator VADCIQ_M [VADCIQ_M_U VADCIQ_M_S])
> +(define_int_iterator VSBCQ [VSBCQ_U VSBCQ_S])
> +(define_int_iterator VSBCQ_M [VSBCQ_M_U VSBCQ_M_S])
> +(define_int_iterator VSBCIQ [VSBCIQ_U VSBCIQ_S])
> +(define_int_iterator VSBCIQ_M [VSBCIQ_M_U VSBCIQ_M_S])
> +(define_int_iterator VADCQ [VADCQ_U VADCQ_S])
> +(define_int_iterator VADCQ_M [VADCQ_M_U VADCQ_M_S])
> +
> 
>  (define_insn "*mve_mov<mode>"
>    [(set (match_operand:MVE_types 0 "nonimmediate_operand" "=w,w,r,w,w,r,w,Us")
> @@ -10597,6 +10614,21 @@
>    DONE;
>  })
> 
> +(define_insn "get_fpscr_nzcvqc"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> +   (unspec:SI [(reg:SI VFPCC_REGNUM)] UNSPEC_GET_FPSCR_NZCVQC))]
> +"TARGET_HAVE_MVE"
> + "vmrs\\t%0, FPSCR_nzcvqc"
> +  [(set_attr "type" "mve_move")])
> +
> +(define_insn "set_fpscr_nzcvqc"
> + [(set (reg:SI VFPCC_REGNUM)
> +   (unspec_volatile:SI [(match_operand:SI 0 "register_operand" "r")]
> +    VUNSPEC_SET_FPSCR_NZCVQC))]
> + "TARGET_HAVE_MVE"
> + "vmsr\\tFPSCR_nzcvqc, %0"
> +  [(set_attr "type" "mve_move")])
> +
>  ;;
>  ;; [vldrdq_gather_base_wb_z_s vldrdq_gather_base_wb_z_u]
>  ;;
> @@ -10621,3 +10653,147 @@
>     return "";
>  }
>    [(set_attr "length" "8")])
> +;;
> +;; [vadciq_m_s, vadciq_m_u])
> +;;
> +(define_insn "mve_vadciq_m_<supf>v4si"
> +  [(set (match_operand:V4SI 0 "s_register_operand" "=w")
> +     (unspec:V4SI [(match_operand:V4SI 1 "s_register_operand" "0")
> +                   (match_operand:V4SI 2 "s_register_operand" "w")
> +                   (match_operand:V4SI 3 "s_register_operand" "w")
> +                   (match_operand:HI 4 "vpr_register_operand" "Up")]
> +      VADCIQ_M))
> +   (set (reg:SI VFPCC_REGNUM)
> +     (unspec:SI [(const_int 0)]
> +      VADCIQ_M))
> +  ]
> +  "TARGET_HAVE_MVE"
> +  "vpst\;vadcit.i32\t%q0, %q2, %q3"
> +  [(set_attr "type" "mve_move")
> +   (set_attr "length" "8")])
> +
> +;;
> +;; [vadciq_u, vadciq_s])
> +;;
> +(define_insn "mve_vadciq_<supf>v4si"
> +  [(set (match_operand:V4SI 0 "s_register_operand" "=w")
> +     (unspec:V4SI [(match_operand:V4SI 1 "s_register_operand" "w")
> +                   (match_operand:V4SI 2 "s_register_operand" "w")]
> +      VADCIQ))
> +   (set (reg:SI VFPCC_REGNUM)
> +     (unspec:SI [(const_int 0)]
> +      VADCIQ))
> +  ]
> +  "TARGET_HAVE_MVE"
> +  "vadci.i32\t%q0, %q1, %q2"
> +  [(set_attr "type" "mve_move")
> +   (set_attr "length" "4")])
> +
> +;;
> +;; [vadcq_m_s, vadcq_m_u])
> +;;
> +(define_insn "mve_vadcq_m_<supf>v4si"
> +  [(set (match_operand:V4SI 0 "s_register_operand" "=w")
> +     (unspec:V4SI [(match_operand:V4SI 1 "s_register_operand" "0")
> +                   (match_operand:V4SI 2 "s_register_operand" "w")
> +                   (match_operand:V4SI 3 "s_register_operand" "w")
> +                   (match_operand:HI 4 "vpr_register_operand" "Up")]
> +      VADCQ_M))
> +   (set (reg:SI VFPCC_REGNUM)
> +     (unspec:SI [(reg:SI VFPCC_REGNUM)]
> +      VADCQ_M))
> +  ]
> +  "TARGET_HAVE_MVE"
> +  "vpst\;vadct.i32\t%q0, %q2, %q3"
> +  [(set_attr "type" "mve_move")
> +   (set_attr "length" "8")])
> +
> +;;
> +;; [vadcq_u, vadcq_s])
> +;;
> +(define_insn "mve_vadcq_<supf>v4si"
> +  [(set (match_operand:V4SI 0 "s_register_operand" "=w")
> +     (unspec:V4SI [(match_operand:V4SI 1 "s_register_operand" "w")
> +                    (match_operand:V4SI 2 "s_register_operand" "w")]
> +      VADCQ))
> +   (set (reg:SI VFPCC_REGNUM)
> +     (unspec:SI [(reg:SI VFPCC_REGNUM)]
> +      VADCQ))
> +  ]
> +  "TARGET_HAVE_MVE"
> +  "vadc.i32\t%q0, %q1, %q2"
> +  [(set_attr "type" "mve_move")
> +   (set_attr "length" "4")
> +   (set_attr "conds" "set")])
> +
> +;;
> +;; [vsbciq_m_u, vsbciq_m_s])
> +;;
> +(define_insn "mve_vsbciq_m_<supf>v4si"
> +  [(set (match_operand:V4SI 0 "s_register_operand" "=w")
> +     (unspec:V4SI [(match_operand:V4SI 1 "s_register_operand" "w")
> +                   (match_operand:V4SI 2 "s_register_operand" "w")
> +                   (match_operand:V4SI 3 "s_register_operand" "w")
> +                   (match_operand:HI 4 "vpr_register_operand" "Up")]
> +      VSBCIQ_M))
> +   (set (reg:SI VFPCC_REGNUM)
> +     (unspec:SI [(const_int 0)]
> +      VSBCIQ_M))
> +  ]
> +  "TARGET_HAVE_MVE"
> +  "vpst\;vsbcit.i32\t%q0, %q2, %q3"
> +  [(set_attr "type" "mve_move")
> +   (set_attr "length" "8")])
> +
> +;;
> +;; [vsbciq_s, vsbciq_u])
> +;;
> +(define_insn "mve_vsbciq_<supf>v4si"
> +  [(set (match_operand:V4SI 0 "s_register_operand" "=w")
> +     (unspec:V4SI [(match_operand:V4SI 1 "s_register_operand" "w")
> +                   (match_operand:V4SI 2 "s_register_operand" "w")]
> +      VSBCIQ))
> +   (set (reg:SI VFPCC_REGNUM)
> +     (unspec:SI [(const_int 0)]
> +      VSBCIQ))
> +  ]
> +  "TARGET_HAVE_MVE"
> +  "vsbci.i32\t%q0, %q1, %q2"
> +  [(set_attr "type" "mve_move")
> +   (set_attr "length" "4")])
> +
> +;;
> +;; [vsbcq_m_u, vsbcq_m_s])
> +;;
> +(define_insn "mve_vsbcq_m_<supf>v4si"
> +  [(set (match_operand:V4SI 0 "s_register_operand" "=w")
> +     (unspec:V4SI [(match_operand:V4SI 1 "s_register_operand" "w")
> +                   (match_operand:V4SI 2 "s_register_operand" "w")
> +                   (match_operand:V4SI 3 "s_register_operand" "w")
> +                   (match_operand:HI 4 "vpr_register_operand" "Up")]
> +      VSBCQ_M))
> +   (set (reg:SI VFPCC_REGNUM)
> +     (unspec:SI [(reg:SI VFPCC_REGNUM)]
> +      VSBCQ_M))
> +  ]
> +  "TARGET_HAVE_MVE"
> +  "vpst\;vsbct.i32\t%q0, %q2, %q3"
> +  [(set_attr "type" "mve_move")
> +   (set_attr "length" "8")])
> +
> +;;
> +;; [vsbcq_s, vsbcq_u])
> +;;
> +(define_insn "mve_vsbcq_<supf>v4si"
> +  [(set (match_operand:V4SI 0 "s_register_operand" "=w")
> +     (unspec:V4SI [(match_operand:V4SI 1 "s_register_operand" "w")
> +                   (match_operand:V4SI 2 "s_register_operand" "w")]
> +      VSBCQ))
> +   (set (reg:SI VFPCC_REGNUM)
> +     (unspec:SI [(reg:SI VFPCC_REGNUM)]
> +      VSBCQ))
> +  ]
> +  "TARGET_HAVE_MVE"
> +  "vsbc.i32\t%q0, %q1, %q2"
> +  [(set_attr "type" "mve_move")
> +   (set_attr "length" "4")])
> diff --git a/gcc/config/arm/unspecs.md b/gcc/config/arm/unspecs.md
> index f0b1f465de4b63d624510783576700519044717d..a7575871da7bf123f9e2d693815147fa60e1e914 100644
> --- a/gcc/config/arm/unspecs.md
> +++ b/gcc/config/arm/unspecs.md
> @@ -170,6 +170,7 @@
>    UNSPEC_TORC                ; Used by the intrinsic form of the iWMMXt TORC instruction.
>    UNSPEC_TORVSC              ; Used by the intrinsic form of the iWMMXt TORVSC instruction.
>    UNSPEC_TEXTRC              ; Used by the intrinsic form of the iWMMXt TEXTRC instruction.
> +  UNSPEC_GET_FPSCR_NZCVQC    ; Represent fetch of FPSCR_nzcvqc content.
>  ])
> 
> 
> @@ -218,6 +219,7 @@
>    VUNSPEC_STL                ; Represent a store-register-release.
>    VUNSPEC_GET_FPSCR  ; Represent fetch of FPSCR content.
>    VUNSPEC_SET_FPSCR  ; Represent assign of FPSCR content.
> +  VUNSPEC_SET_FPSCR_NZCVQC   ; Represent assign of FPSCR_nzcvqc content.
>    VUNSPEC_PROBE_STACK_RANGE ; Represent stack range probing.
>    VUNSPEC_CDP                ; Represent the coprocessor cdp instruction.
>    VUNSPEC_CDP2               ; Represent the coprocessor cdp2 instruction.
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadciq_m_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadciq_m_s32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..3b4019b6abadd72eaba23c2787442d3d86efbf85
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadciq_m_s32.c
> @@ -0,0 +1,24 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +int32x4_t
> +foo (int32x4_t inactive, int32x4_t a, int32x4_t b, unsigned * carry_out, mve_pred16_t p)
> +{
> +  return vadciq_m_s32 (inactive, a, b, carry_out, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vadcit.i32"  }  } */
> +
> +int32x4_t
> +foo1 (int32x4_t inactive, int32x4_t a, int32x4_t b, unsigned * carry_out, mve_pred16_t p)
> +{
> +  return vadciq_m (inactive, a, b, carry_out, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vadcit.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadciq_m_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadciq_m_u32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..a69039d23925da758a36c7481479f0f3ae7b29a2
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadciq_m_u32.c
> @@ -0,0 +1,24 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +uint32x4_t
> +foo (uint32x4_t inactive, uint32x4_t a, uint32x4_t b, unsigned * carry_out, mve_pred16_t p)
> +{
> +  return vadciq_m_u32 (inactive, a, b, carry_out, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vadcit.i32"  }  } */
> +
> +uint32x4_t
> +foo1 (uint32x4_t inactive, uint32x4_t a, uint32x4_t b, unsigned * carry_out, mve_pred16_t p)
> +{
> +  return vadciq_m (inactive, a, b, carry_out, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vadcit.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadciq_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadciq_s32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..3b7623ce8374c449447a893030afb9dd5e197fd4
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadciq_s32.c
> @@ -0,0 +1,22 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +int32x4_t
> +foo (int32x4_t a, int32x4_t b, unsigned * carry_out)
> +{
> +  return vadciq_s32 (a, b, carry_out);
> +}
> +
> +/* { dg-final { scan-assembler "vadci.i32"  }  } */
> +
> +int32x4_t
> +foo1 (int32x4_t a, int32x4_t b, unsigned * carry_out)
> +{
> +  return vadciq (a, b, carry_out);
> +}
> +
> +/* { dg-final { scan-assembler "vadci.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadciq_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadciq_u32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..07eb9d8017a8501184ad6fa02bd1268a72bb7e98
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadciq_u32.c
> @@ -0,0 +1,22 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +uint32x4_t
> +foo (uint32x4_t a, uint32x4_t b, unsigned * carry_out)
> +{
> +  return vadciq_u32 (a, b, carry_out);
> +}
> +
> +/* { dg-final { scan-assembler "vadci.i32"  }  } */
> +
> +uint32x4_t
> +foo1 (uint32x4_t a, uint32x4_t b, unsigned * carry_out)
> +{
> +  return vadciq (a, b, carry_out);
> +}
> +
> +/* { dg-final { scan-assembler "vadci.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadcq_m_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadcq_m_s32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..8c6f2319ead91fde51c5a709fc7a2fc22c58017e
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadcq_m_s32.c
> @@ -0,0 +1,24 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +int32x4_t
> +foo (int32x4_t inactive, int32x4_t a, int32x4_t b, unsigned * carry, mve_pred16_t p)
> +{
> +  return vadcq_m_s32 (inactive, a, b, carry, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vadct.i32"  }  } */
> +
> +int32x4_t
> +foo1 (int32x4_t inactive, int32x4_t a, int32x4_t b, unsigned * carry, mve_pred16_t p)
> +{
> +  return vadcq_m (inactive, a, b, carry, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vadct.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadcq_m_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadcq_m_u32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..0747fee0e297ad00d2772f3691ca39d52c8eec8a
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadcq_m_u32.c
> @@ -0,0 +1,24 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +uint32x4_t
> +foo (uint32x4_t inactive, uint32x4_t a, uint32x4_t b, unsigned * carry, mve_pred16_t p)
> +{
> +  return vadcq_m_u32 (inactive, a, b, carry, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vadct.i32"  }  } */
> +
> +uint32x4_t
> +foo1 (uint32x4_t inactive, uint32x4_t a, uint32x4_t b, unsigned * carry, mve_pred16_t p)
> +{
> +  return vadcq_m (inactive, a, b, carry, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vadct.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadcq_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadcq_s32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..07830070ea09cdc7499c0d1cb8bccd47e6858692
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadcq_s32.c
> @@ -0,0 +1,22 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +int32x4_t
> +foo (int32x4_t a, int32x4_t b, unsigned * carry)
> +{
> +  return vadcq_s32 (a, b, carry);
> +}
> +
> +/* { dg-final { scan-assembler "vadc.i32"  }  } */
> +
> +int32x4_t
> +foo1 (int32x4_t a, int32x4_t b, unsigned * carry)
> +{
> +  return vadcq (a, b, carry);
> +}
> +
> +/* { dg-final { scan-assembler "vadc.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadcq_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadcq_u32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..479db3a6e93d26717bea110d40443d10ac6f5eda
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vadcq_u32.c
> @@ -0,0 +1,22 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +uint32x4_t
> +foo (uint32x4_t a, uint32x4_t b, unsigned * carry)
> +{
> +  return vadcq_u32 (a, b, carry);
> +}
> +
> +/* { dg-final { scan-assembler "vadc.i32"  }  } */
> +
> +uint32x4_t
> +foo1 (uint32x4_t a, uint32x4_t b, unsigned * carry)
> +{
> +  return vadcq (a, b, carry);
> +}
> +
> +/* { dg-final { scan-assembler "vadc.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbciq_m_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbciq_m_s32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..11e5b4011dc6373ae7c23cc03fc7f1d625d10b58
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbciq_m_s32.c
> @@ -0,0 +1,24 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +int32x4_t
> +foo (int32x4_t inactive, int32x4_t a, int32x4_t b, unsigned * carry_out, mve_pred16_t p)
> +{
> +  return vsbciq_m_s32 (inactive, a, b, carry_out, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vsbcit.i32"  } } */
> +
> +int32x4_t
> +foo1 (int32x4_t inactive, int32x4_t a, int32x4_t b, unsigned * carry_out, mve_pred16_t p)
> +{
> +  return vsbciq_m (inactive, a, b, carry_out, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vsbcit.i32"  } } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbciq_m_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbciq_m_u32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..df638bc31493b5818d9516815026b7852777f067
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbciq_m_u32.c
> @@ -0,0 +1,24 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +uint32x4_t
> +foo (uint32x4_t inactive, uint32x4_t a, uint32x4_t b, unsigned * carry_out, mve_pred16_t p)
> +{
> +  return vsbciq_m_u32 (inactive, a, b, carry_out, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vsbcit.i32"  }  } */
> +
> +uint32x4_t
> +foo1 (uint32x4_t inactive, uint32x4_t a, uint32x4_t b, unsigned * carry_out, mve_pred16_t p)
> +{
> +  return vsbciq_m (inactive, a, b, carry_out, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vsbcit.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbciq_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbciq_s32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..6f0f4dd3aec9d382761d206a3119d2cfc39d7a21
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbciq_s32.c
> @@ -0,0 +1,22 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +int32x4_t
> +foo (int32x4_t a, int32x4_t b, unsigned * carry_out)
> +{
> +  return vsbciq_s32 (a, b, carry_out);
> +}
> +
> +/* { dg-final { scan-assembler "vsbci.i32"  }  } */
> +
> +int32x4_t
> +foo1 (int32x4_t a, int32x4_t b, unsigned * carry_out)
> +{
> +  return vsbciq (a, b, carry_out);
> +}
> +
> +/* { dg-final { scan-assembler "vsbci.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbciq_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbciq_u32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..e68eaa367e9ee2e6f5ef0de476aa36e1bde9ad85
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbciq_u32.c
> @@ -0,0 +1,22 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +uint32x4_t
> +foo (uint32x4_t a, uint32x4_t b, unsigned * carry_out)
> +{
> +  return vsbciq_u32 (a, b, carry_out);
> +}
> +
> +/* { dg-final { scan-assembler "vsbci.i32"  }  } */
> +
> +uint32x4_t
> +foo1 (uint32x4_t a, uint32x4_t b, unsigned * carry_out)
> +{
> +  return vsbciq (a, b, carry_out);
> +}
> +
> +/* { dg-final { scan-assembler "vsbci.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbcq_m_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbcq_m_s32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..0f9b9b188dc5f515d69356aab21c4b95cca06aa5
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbcq_m_s32.c
> @@ -0,0 +1,24 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +int32x4_t
> +foo (int32x4_t inactive, int32x4_t a, int32x4_t b, unsigned * carry, mve_pred16_t p)
> +{
> +    return vsbcq_m_s32 (inactive, a, b, carry, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vsbct.i32"  }  } */
> +
> +int32x4_t
> +foo1 (int32x4_t inactive, int32x4_t a, int32x4_t b, unsigned * carry, mve_pred16_t p)
> +{
> +    return vsbcq_m (inactive, a, b, carry, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vsbct.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbcq_m_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbcq_m_u32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..fb62c26d8c952c61c3466e231880fcf7f28077d8
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbcq_m_u32.c
> @@ -0,0 +1,23 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +uint32x4_t
> +foo (uint32x4_t inactive, uint32x4_t a, uint32x4_t b, unsigned * carry, mve_pred16_t p)
> +{
> +    return vsbcq_m_u32 (inactive, a, b, carry, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vsbct.i32"  }  } */
> +uint32x4_t
> +foo1 (uint32x4_t inactive, uint32x4_t a, uint32x4_t b, unsigned * carry, mve_pred16_t p)
> +{
> +    return vsbcq_m (inactive, a, b, carry, p);
> +}
> +
> +/* { dg-final { scan-assembler "vpst" } } */
> +/* { dg-final { scan-assembler "vsbct.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbcq_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbcq_s32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..fbbda5c9df35bfbb943dcd0d017ec2c92e7ecc1f
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbcq_s32.c
> @@ -0,0 +1,22 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +int32x4_t
> +foo (int32x4_t a, int32x4_t b, unsigned * carry)
> +{
> +  return vsbcq_s32 (a, b, carry);
> +}
> +
> +/* { dg-final { scan-assembler "vsbc.i32"  }  } */
> +
> +int32x4_t
> +foo1 (int32x4_t a, int32x4_t b, unsigned * carry)
> +{
> +  return vsbcq (a, b, carry);
> +}
> +
> +/* { dg-final { scan-assembler "vsbc.i32"  }  } */
> diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbcq_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbcq_u32.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..286345336cb3f7f07f92827c94ac152859fc3c55
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vsbcq_u32.c
> @@ -0,0 +1,22 @@
> +/* { dg-do compile  } */
> +/* { dg-require-effective-target arm_v8_1m_mve_ok } */
> +/* { dg-add-options arm_v8_1m_mve } */
> +/* { dg-additional-options "-O2" } */
> +
> +#include "arm_mve.h"
> +
> +uint32x4_t
> +foo (uint32x4_t a, uint32x4_t b, unsigned * carry)
> +{
> +  return vsbcq_u32 (a, b, carry);
> +}
> +
> +/* { dg-final { scan-assembler "vsbc.i32"  }  } */
> +
> +uint32x4_t
> +foo1 (uint32x4_t a, uint32x4_t b, unsigned * carry)
> +{
> +  return vsbcq (a, b, carry);
> +}
> +
> +/* { dg-final { scan-assembler "vsbc.i32"  }  } */
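A side note for readers who haven't met the "across beats" semantics before: the patterns above implement a 128-bit ripple-carry add/subtract across the four 32-bit beats of a Q register. Here is a rough portable model of what the vadci/vsbci forms compute (plain C for illustration only; the function names are invented and nothing below comes from arm_mve.h):

```c
#include <stdint.h>

/* Model of vadciq_u32: the four 32-bit lanes form one 128-bit
   little-endian value; the carry ripples from beat 0 up to beat 3
   and the final carry-out is written through *carry_out.  VADCI
   starts with the carry clear.  */
void
model_vadciq_u32 (uint32_t r[4], const uint32_t a[4],
                  const uint32_t b[4], unsigned *carry_out)
{
  uint64_t carry = 0;                     /* carry-in fixed at 0 */
  for (int lane = 0; lane < 4; lane++)
    {
      uint64_t sum = (uint64_t) a[lane] + b[lane] + carry;
      r[lane] = (uint32_t) sum;
      carry = sum >> 32;                  /* propagate to the next beat */
    }
  *carry_out = (unsigned) carry;
}

/* Model of vsbciq_u32: the same ripple with b inverted and the carry
   fixed at 1 on entry, i.e. a 128-bit subtract a - b; carry_out == 1
   means no borrow occurred.  */
void
model_vsbciq_u32 (uint32_t r[4], const uint32_t a[4],
                  const uint32_t b[4], unsigned *carry_out)
{
  uint64_t carry = 1;                     /* carry-in fixed at 1 */
  for (int lane = 0; lane < 4; lane++)
    {
      uint64_t sum = (uint64_t) a[lane] + (uint32_t) ~b[lane] + carry;
      r[lane] = (uint32_t) sum;
      carry = sum >> 32;
    }
  *carry_out = (unsigned) carry;
}
```

The non-"i" forms (vadcq/vsbcq) differ only in taking the initial carry from FPSCR.C (fed in through the *carry argument of the intrinsic) instead of a fixed 0 or 1, which is why their patterns read VFPCC_REGNUM in the unspec while the vadci/vsbci patterns use (const_int 0).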
