Hi,

As the subject suggests, this patch first splits the aarch64_<su>qmovn<mode>
pattern into separate scalar and vector variants. It then further splits
the vector RTL pattern into big- and little-endian variants that model the
zero-high-half semantics of the underlying instruction. Modeling
these semantics allows for better RTL combinations while also
removing some register-allocation issues, as the compiler now knows
that the operation is totally destructive (the instruction writes the
whole destination register, zeroing the high half).
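For illustration, the little-endian insn variant would take roughly the
following shape. This is only a sketch of the usual aarch64-simd.md style
(a vec_concat of the saturating-narrowed source with a zero vector to model
the zeroed high half); the exact iterator, predicate, and operand-modifier
names come from the patch itself and may differ from what is shown here:

```
;; Sketch, not the patch text: the vec_concat with a zero operand tells
;; the RTL optimizers that the high half of the result is zero.
(define_insn "aarch64_<su>qmovn<mode>_insn_le"
  [(set (match_operand:<VNARROWQ2> 0 "register_operand" "=w")
	(vec_concat:<VNARROWQ2>
	  (SAT_TRUNC:<VNARROWQ>
	    (match_operand:VQN 1 "register_operand" "w"))
	  (match_operand:<VNARROWQ> 2 "aarch64_simd_or_scalar_imm_zero")))]
  "TARGET_SIMD && !BYTES_BIG_ENDIAN"
  "<su>qxtn\t%<vn2>0<Vmntype>, %<v>1<Vmtype>"
)
```

The big-endian variant would mirror this with the operands of the
vec_concat swapped, and the expander would choose between the two based
on BYTES_BIG_ENDIAN.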

Regression tested and bootstrapped on aarch64-none-linux-gnu - no
issues found.

Ok for master?

Thanks,
Jonathan

---

gcc/ChangeLog:

2021-06-14  Jonathan Wright  <jonathan.wri...@arm.com>

        * config/aarch64/aarch64-simd-builtins.def: Split generator
        for aarch64_<su>qmovn builtins into scalar and vector
        variants.
        * config/aarch64/aarch64-simd.md (aarch64_<su>qmovn<mode>_insn_le):
        Define.
        (aarch64_<su>qmovn<mode>_insn_be): Define.
        (aarch64_<su>qmovn<mode>): Split into scalar and vector
        variants. Change vector variant to an expander that emits the
        correct instruction depending on endianness.

Attachment: rb14565.patch