Hi,

This patch teaches the AArch64 backend that the AESE and AESD unspecs
(which correspond to the vaeseq_u8 and vaesdq_u8 intrinsics) are
commutative. This improves register allocation around the corresponding
instructions, avoiding unnecessary moves.
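
The operands commute because AESE/AESD begin with AddRoundKey, i.e. an
XOR of the two inputs, and XOR is commutative. The attached patch is
authoritative; as a minimal sketch, one way to express this in
aarch64-simd.md is to prefix the tied operand's constraint with '%',
GCC's marker that an operand and the one following it may be
interchanged:

;; Sketch only: the guard shown is the 2018-era one and details may
;; differ from the attached patch.  The '%' in "%0" declares operands
;; 1 and 2 commutative, so the register allocator may tie either input
;; to the destination instead of always copying into operand 1 first.
(define_insn "aarch64_crypto_aes<aes_op>v16qi"
  [(set (match_operand:V16QI 0 "register_operand" "=w")
        (unspec:V16QI [(match_operand:V16QI 1 "register_operand" "%0")
                       (match_operand:V16QI 2 "register_operand" "w")]
         CRYPTO_AES))]
  "TARGET_SIMD && TARGET_CRYPTO"
  "aes<aes_op>\\t%0.16b, %2.16b"
)

Since aese/aesd overwrite their first register operand, the old pattern
always forced operand 1 into the destination register; with the pair
marked commutative the allocator can reuse whichever input is cheaper,
which is what removes the moves below.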

For instance, with the old patterns, code such as:

uint8x16_t
test0 (uint8x16_t a, uint8x16_t b)
{
  uint8x16_t result;
  result = vaeseq_u8 (a, b);
  result = vaeseq_u8 (result, a);
  return result;
}

would lead to suboptimal register allocation such as:
test0:
        mov     v2.16b, v0.16b
        aese    v2.16b, v1.16b
        mov     v1.16b, v2.16b
        aese    v1.16b, v0.16b
        mov     v0.16b, v1.16b
        ret

whereas with the new patterns we see:
        aese    v1.16b, v0.16b
        aese    v0.16b, v1.16b
        ret


Bootstrapped and tested on aarch64-none-linux-gnu.

Is this OK for trunk?

Cheers,
Andre


gcc
2018-06-18  Andre Vieira  <andre.simoesdiasvie...@arm.com>
        * config/aarch64/aarch64-simd.md (aarch64_crypto_aes<aes_op>v16qi):
        Make operands of the unspec commutative.

gcc/testsuite
2018-06-18  Andre Vieira  <andre.simoesdiasvie...@arm.com>

        * gcc.target/aarch64/aes_2.c: New test.

Attachment: aes-1.patch
