Similarly, the ushr_n_u64 and ushrd_n_u64 intrinsics allow performing an unsigned shift right of a 64-bit value by 64 places. This is not supported by the standard lshr pattern, which masks the shift amount with 63. However, a shift by 64 always produces zero, so this patch emits a move of constant 0 rather than a shift instruction.

Cross-tested on aarch64-none-elf and aarch64_be-none-elf, with test coverage provided by gcc.target/aarch64/ushr64_1.c.

gcc/ChangeLog:

        * config/aarch64/aarch64-simd.md (aarch64_lshr_simddi): Handle shift
        by 64 by moving const0_rtx.
        (aarch64_ushr_simddi): Delete.

        * config/aarch64/aarch64.md (enum unspec): Delete UNSPEC_USHR64.

Alan Lawrence wrote:
The sshr_n_64 intrinsics allow performing a signed shift right by 64 places. The standard ashrdi3 pattern masks the shift amount with 63, so it cannot be used. However, such a shift fills the result with copies of the sign bit, which is identical to shifting right by 63. This patch simply shifts by 63 instead, which allows removing an UNSPEC and insn previously dedicated to this case.

Cross-tested on aarch64-none-elf and aarch64_be-none-elf, with test coverage provided by gcc.target/aarch64/sshr64_1.c.

gcc/ChangeLog:

        * config/aarch64/aarch64.md (enum "unspec"): Remove UNSPEC_SSHR64.

        * config/aarch64/aarch64-simd.md (aarch64_ashr_simddi): Change shift
        amount to 63 if it was 64.
        (aarch64_sshr_simddi): Remove.




diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index f9ace6d195a7f7db74751c79c316fdce3696abf6..d493054b703c5698b771ce32d613e28d7c7b7b49 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -739,24 +739,13 @@
   "TARGET_SIMD"
   {
     if (INTVAL (operands[2]) == 64)
-      emit_insn (gen_aarch64_ushr_simddi (operands[0], operands[1]));
+      emit_move_insn (operands[0], const0_rtx);
     else
       emit_insn (gen_lshrdi3 (operands[0], operands[1], operands[2]));
     DONE;
   }
 )
 
-;; SIMD shift by 64.  This pattern is a special case as standard pattern does
-;; not handle NEON shifts by 64.
-(define_insn "aarch64_ushr_simddi"
-  [(set (match_operand:DI 0 "register_operand" "=w")
-        (unspec:DI
-          [(match_operand:DI 1 "register_operand" "w")] UNSPEC_USHR64))]
-  "TARGET_SIMD"
-  "ushr\t%d0, %d1, 64"
-  [(set_attr "type" "neon_shift_imm")]
-)
-
 (define_expand "vec_set<mode>"
   [(match_operand:VQ_S 0 "register_operand")
    (match_operand:<VEL> 1 "register_operand")
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 3c51fd367e954d513aac1180ec4025f15d46c87e..736da80b2705d02793433e6cdbcd422bfecc76f4 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -111,7 +111,6 @@
     UNSPEC_TLS
     UNSPEC_TLSDESC
     UNSPEC_USHL_2S
-    UNSPEC_USHR64
     UNSPEC_VSTRUCTDUMMY
     UNSPEC_SP_SET
     UNSPEC_SP_TEST
diff --git a/gcc/testsuite/gcc.target/aarch64/ushr64_1.c b/gcc/testsuite/gcc.target/aarch64/ushr64_1.c
index b1c741dac3125d97ca3440329ecb32c7d2889d81..ee494894f6fb6f9cb354d836121a7bc6d0d2cdb6 100644
--- a/gcc/testsuite/gcc.target/aarch64/ushr64_1.c
+++ b/gcc/testsuite/gcc.target/aarch64/ushr64_1.c
@@ -42,7 +42,6 @@ test_vshrd_n_u64_0 (uint64_t passed, uint64_t expected)
   return vshrd_n_u64 (passed, 0) != expected;
 }
 
-/* { dg-final { scan-assembler-times "ushr\\td\[0-9\]+, d\[0-9\]+, 64" 2 } } */
 /* { dg-final { (scan-assembler-times "ushr\\td\[0-9\]+, d\[0-9\]+, 4" 2)  || \
    (scan-assembler-times "lsr\\tx\[0-9\]+, x\[0-9\]+, 4" 2) } } */
 /* { dg-final { scan-assembler-not "ushr\\td\[0-9\]+, d\[0-9\]+, 0" } } */
