Re: [PATCH][AArch64] Vector shift by 64 fix

2014-01-23 Thread Marcus Shawcroft
On 6 January 2014 11:52, Alex Velenko alex.vele...@arm.com wrote:
> Hi,
>
> This patch fixes vector shift by 64 behavior to meet reference
> manual expectations. Testcase included to check that expectations
> are now met. No regressions found.
>
> Is patch OK?

OK
/Marcus


Re: [PATCH][AArch64] Vector shift by 64 fix

2014-01-23 Thread Alex Velenko

Hi,
Could someone, please, commit this patch, as I do not have permissions 
to do so.

Kind regards,
Alex

On 23/01/14 12:04, Marcus Shawcroft wrote:

> On 6 January 2014 11:52, Alex Velenko alex.vele...@arm.com wrote:
>
>> Hi,
>>
>> This patch fixes vector shift by 64 behavior to meet reference
>> manual expectations. Testcase included to check that expectations
>> are now met. No regressions found.
>>
>> Is patch OK?
>
> OK
> /Marcus





Re: [PATCH][AArch64] Vector shift by 64 fix

2014-01-23 Thread James Greenhalgh
On Thu, Jan 23, 2014 at 03:57:33PM +, Alex Velenko wrote:
> Hi,
> Could someone, please, commit this patch, as I do not have permissions
> to do so.

Please don't top-post on this list.

> On 23/01/14 12:04, Marcus Shawcroft wrote:
>> On 6 January 2014 11:52, Alex Velenko alex.vele...@arm.com wrote:
>>> Hi,
>>>
>>> This patch fixes vector shift by 64 behavior to meet reference
>>> manual expectations. Testcase included to check that expectations
>>> are now met. No regressions found.
>>>
>>> Is patch OK?
>>
>> OK
>> /Marcus

I've committed this on Alex's behalf as revision 206978.

Thanks,
James



Re: [PATCH][AArch64] Vector shift by 64 fix

2014-01-22 Thread Alex Velenko

On 06/01/14 11:52, Alex Velenko wrote:

> Hi,
>
> This patch fixes vector shift by 64 behavior to meet reference
> manual expectations. Testcase included to check that expectations
> are now met. No regressions found.
>
> Is patch OK?
>
> Thanks,
> Alex
>
> 2014-01-06  Alex Velenko  alex.vele...@arm.com
>
> gcc/
>
>   * config/aarch64/aarch64-simd-builtins.def (ashr): DI mode removed.
>   (ashr_simd): New builtin handling DI mode.
>   * config/aarch64/aarch64-simd.md (aarch64_ashr_simddi): New pattern.
>   (aarch64_sshr_simddi): New match pattern.
>   * config/aarch64/arm_neon.h (vshr_n_s64): Builtin call modified.
>   (vshrd_n_s64): Likewise.
>   * config/aarch64/predicates.md (aarch64_shift_imm64_di): New predicate.
>
> gcc/testsuite/
>
>   * gcc.target/aarch64/sshr64_1.c: New testcase.


Ping!

Hi,
Can someone, please, review the patch.
Kind regards,
Alex


[PATCH][AArch64] Vector shift by 64 fix

2014-01-06 Thread Alex Velenko

Hi,

This patch fixes vector shift by 64 behavior to meet reference
manual expectations. Testcase included to check that expectations
are now met. No regressions found.

Is patch OK?

Thanks,
Alex

2014-01-06  Alex Velenko  alex.vele...@arm.com

gcc/

* config/aarch64/aarch64-simd-builtins.def (ashr): DI mode removed.
(ashr_simd): New builtin handling DI mode.
* config/aarch64/aarch64-simd.md (aarch64_ashr_simddi): New pattern.
(aarch64_sshr_simddi): New match pattern.
* config/aarch64/arm_neon.h (vshr_n_s32): Builtin call modified.
(vshrd_n_s64): Likewise.
* config/aarch64/predicates.md (aarch64_shift_imm64_di): New predicate.

gcc/testsuite/

* gcc.target/aarch64/sshr64_1.c: New testcase.
diff --git a/gcc/config/aarch64/aarch64-simd-builtins.def b/gcc/config/aarch64/aarch64-simd-builtins.def
index 1dc3c1fe33fdb8148d2ff9c7198e4d85d5dac5d7..1e88661fd2f0f756ce1427681c843fc0783ab6a2 100644
--- a/gcc/config/aarch64/aarch64-simd-builtins.def
+++ b/gcc/config/aarch64/aarch64-simd-builtins.def
@@ -189,7 +189,8 @@
   BUILTIN_VSDQ_I_DI (BINOP, srshl, 0)
   BUILTIN_VSDQ_I_DI (BINOP, urshl, 0)
 
-  BUILTIN_VSDQ_I_DI (SHIFTIMM, ashr, 3)
+  BUILTIN_VDQ_I (SHIFTIMM, ashr, 3)
+  VAR1 (SHIFTIMM, ashr_simd, 0, di)
   BUILTIN_VSDQ_I_DI (SHIFTIMM, lshr, 3)
   /* Implemented by aarch64_<sur>shr_n<mode>.  */
   BUILTIN_VSDQ_I_DI (SHIFTIMM, srshr_n, 0)
diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 158b3dca6da12322de0af80d35f593039d716de6..839186a5e3e3363973186d68aeed6fbaf7f0dfea 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -668,6 +668,32 @@
   DONE;
 })
 
+;; DI vector shift
+(define_expand "aarch64_ashr_simddi"
+  [(match_operand:DI 0 "register_operand" "=w")
+   (match_operand:DI 1 "register_operand" "w")
+   (match_operand:QI 2 "aarch64_shift_imm64_di" "")]
+  "TARGET_SIMD"
+  {
+    if (INTVAL (operands[2]) == 64)
+      emit_insn (gen_aarch64_sshr_simddi (operands[0], operands[1]));
+    else
+      emit_insn (gen_ashrdi3 (operands[0], operands[1], operands[2]));
+    DONE;
+  }
+)
+
+;; SIMD shift by 64.  This pattern is a special case as standard pattern does
+;; not handle NEON shifts by 64.
+(define_insn "aarch64_sshr_simddi"
+  [(set (match_operand:DI 0 "register_operand" "=w")
+        (unspec:DI
+          [(match_operand:DI 1 "register_operand" "w")] UNSPEC_SSHR64))]
+  "TARGET_SIMD"
+  "sshr\t%d0, %d1, 64"
+  [(set_attr "type" "neon_shift_imm")]
+)
+
 (define_expand "vlshr<mode>3"
  [(match_operand:VQ_S 0 "register_operand" "")
   (match_operand:VQ_S 1 "register_operand" "")
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 8b3dbd7550e8e9037de1a1384276bee28d21cb3d..130a11c0231c32440573276fd78e62b6f019d302 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -92,6 +92,7 @@
 UNSPEC_SISD_SSHL
 UNSPEC_SISD_USHL
 UNSPEC_SSHL_2S
+UNSPEC_SSHR64
 UNSPEC_ST2
 UNSPEC_ST3
 UNSPEC_ST4
diff --git a/gcc/config/aarch64/arm_neon.h b/gcc/config/aarch64/arm_neon.h
index 03549bd7a27cccb14ed8cdce91cbd4e4278c273f..64012775b3fa7d174af1472f73aadf4174d0d291 100644
--- a/gcc/config/aarch64/arm_neon.h
+++ b/gcc/config/aarch64/arm_neon.h
@@ -23235,7 +23235,7 @@ vshr_n_s32 (int32x2_t __a, const int __b)
 __extension__ static __inline int64x1_t __attribute__ ((__always_inline__))
 vshr_n_s64 (int64x1_t __a, const int __b)
 {
-  return (int64x1_t) __builtin_aarch64_ashrdi (__a, __b);
+  return (int64x1_t) __builtin_aarch64_ashr_simddi (__a, __b);
 }
 
 __extension__ static __inline uint8x8_t __attribute__ ((__always_inline__))
@@ -23313,7 +23313,7 @@ vshrq_n_u64 (uint64x2_t __a, const int __b)
 __extension__ static __inline int64x1_t __attribute__ ((__always_inline__))
 vshrd_n_s64 (int64x1_t __a, const int __b)
 {
-  return (int64x1_t) __builtin_aarch64_ashrdi (__a, __b);
+  return (int64x1_t) __builtin_aarch64_ashr_simddi (__a, __b);
 }
 
 __extension__ static __inline uint64x1_t __attribute__ ((__always_inline__))
diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
index dbc90826665d19a6ac6131918efb2c8a32bd1f04..9538107a5c148408f5c6e8e37aeef92aa5be0856 100644
--- a/gcc/config/aarch64/predicates.md
+++ b/gcc/config/aarch64/predicates.md
@@ -86,6 +86,10 @@
   (and (match_code "const_int")
        (match_test "(unsigned HOST_WIDE_INT) INTVAL (op) < 64")))
 
+(define_predicate "aarch64_shift_imm64_di"
+  (and (match_code "const_int")
+       (match_test "(unsigned HOST_WIDE_INT) INTVAL (op) <= 64")))
+
 (define_predicate "aarch64_reg_or_shift_imm_si"
   (ior (match_operand 0 "register_operand")
        (match_operand 0 "aarch64_shift_imm_si")))
diff --git a/gcc/testsuite/gcc.target/aarch64/sshr64_1.c b/gcc/testsuite/gcc.target/aarch64/sshr64_1.c
new file mode 100644
index 0000000000000000000000000000000000000000..89c6096ad3934d1c42fac2c8fba6eba6170762da
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sshr64_1.c
@@ -0,0 +1,115 @@
+/*