Hi,

The aarch64_vldX/aarch64_vstX expanders used for the vldX/vstX AdvSIMD
intrinsics in Q modes call vec_load_lanes/vec_store_lanes, which
shuffle the vectors to match the layout expected by the vectorizer.

We do not want this shuffling to happen when the intrinsics are called
directly by end-user code.

This patch fixes that by calling
gen_aarch64_simd_ldX/gen_aarch64_simd_stX instead.
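
For illustration only (not part of the patch, and the function name is
made up), here is a minimal sketch of the kind of user-level code
affected: a Q-mode vld2/vst2 round trip, which must see the
architectural lane layout, so an extra vectorizer-style permutation
would break it on big-endian:

  #include <arm_neon.h>

  /* Load pairs of int32 elements de-interleaved, then store them back
     re-interleaved.  These intrinsics expand via the Q-mode
     aarch64_ld2/aarch64_st2 expanders touched by this patch.  */
  void
  copy_pairs (const int32_t *in, int32_t *out)
  {
    int32x4x2_t pairs = vld2q_s32 (in);
    vst2q_s32 (out, pairs);
  }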

With this patch, the following advsimd-intrinsics tests now pass on
aarch64_be: vldX_lane.c, vtrn, vuzp and vzip, as well as
aarch64/vldN_1.c and aarch64/vstN_1.c.

It fixes PRs 59810, 63652 and 63653.

Tested on aarch64 and aarch64_be using the Foundation Model, with no
regressions.

OK for trunk?

Christophe.
2015-09-02  Christophe Lyon  <christophe.l...@linaro.org>

	PR target/59810
	PR target/63652
	PR target/63653
	* config/aarch64/aarch64-simd.md
	(aarch64_ld<VSTRUCT:nregs><VQ:mode>): Call
	gen_aarch64_simd_ld<VSTRUCT:nregs><VQ:mode>.
	(aarch64_st<VSTRUCT:nregs><VQ:mode>): Call
	gen_aarch64_simd_st<VSTRUCT:nregs><VQ:mode>.

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 9777418..75fa0ab 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -4566,7 +4566,7 @@
   machine_mode mode = <VSTRUCT:MODE>mode;
   rtx mem = gen_rtx_MEM (mode, operands[1]);
 
-  emit_insn (gen_vec_load_lanes<VSTRUCT:mode><VQ:mode> (operands[0], mem));
+  emit_insn (gen_aarch64_simd_ld<VSTRUCT:nregs><VQ:mode> (operands[0], mem));
   DONE;
 })
 
@@ -4849,7 +4849,7 @@
   machine_mode mode = <VSTRUCT:MODE>mode;
   rtx mem = gen_rtx_MEM (mode, operands[0]);
 
-  emit_insn (gen_vec_store_lanes<VSTRUCT:mode><VQ:mode> (mem, operands[1]));
+  emit_insn (gen_aarch64_simd_st<VSTRUCT:nregs><VQ:mode> (mem, operands[1]));
   DONE;
 })
 
