This patch implements constant folding of binary operations for SVE intrinsics
by reusing the middle-end's constant-folding machinery for a given tree_code.
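
As a motivating example (mine, not taken from the patch; the exact
predication and type requirements are as implemented in the patch), a call
whose operands are both constant vectors can now be folded away entirely:

  #include <arm_sve.h>

  svint64_t
  f (void)
  {
    /* Both operands fold to constant vectors, so the addition can now
       be folded at gimple time to the equivalent of svdup_s64 (8).  */
    return svadd_x (svptrue_b64 (), svdup_s64 (5), svdup_s64 (3));
  }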
In fold-const.cc, the code for folding vector constants was moved from
const_binop to a new function, vector_const_binop, which takes a function
pointer specifying how to fold the individual vector elements. The code for
folding operations whose first operand is a vector constant and whose second
operand is an integer constant was also moved into vector_const_binop, so
that binary SVE intrinsics taking a scalar second operand (the _n forms) can
be folded as well. A sketch of the new function follows.
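
The resulting structure is roughly as follows (a condensed sketch, not the
patch text; the element-count check and the stepped-encoding handling are
elided):

  tree
  vector_const_binop (tree_code code, tree arg1, tree arg2,
                      tree (*elt_const_binop) (enum tree_code, tree, tree))
  {
    if (TREE_CODE (arg1) == VECTOR_CST && TREE_CODE (arg2) == VECTOR_CST)
      {
        tree_vector_builder elts;
        /* ... verify matching element counts and set up the builder
           for a binary operation ...  */
        for (unsigned int i = 0; i < elts.encoded_nelts (); ++i)
          {
            /* Fold one pair of elements; give up if the callback
               cannot handle CODE.  */
            tree elt = elt_const_binop (code, VECTOR_CST_ELT (arg1, i),
                                        VECTOR_CST_ELT (arg2, i));
            if (elt == NULL_TREE)
              return NULL_TREE;
            elts.quick_push (elt);
          }
        return elts.build ();
      }
    /* ... followed by the VECTOR_CST op INTEGER_CST case moved from
       const_binop, which applies the scalar constant to every element
       (this is what handles the _n intrinsics) ...  */
    return NULL_TREE;
  }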
In the aarch64 backend, a new function, aarch64_const_binop, was added. In
contrast to int_const_binop, it does not treat operations as overflowing. It
is passed as a callback to vector_const_binop during gimple folding in the
intrinsic implementations.
Because aarch64_const_binop calls poly_int_binop, the latter was made public.
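
In sketch form (again condensed, not the exact patch text):

  tree
  aarch64_const_binop (enum tree_code code, tree arg1, tree arg2)
  {
    if (poly_int_tree_p (arg1) && poly_int_tree_p (arg2))
      {
        poly_wide_int poly_res;
        tree type = TREE_TYPE (arg1);
        signop sign = TYPE_SIGN (type);
        wi::overflow_type overflow = wi::OVF_NONE;

        /* Fold the scalar operation.  The overflow status is computed,
           but unlike in int_const_binop it is not used to reject the
           result: the operation is simply not treated as overflowing.  */
        if (!poly_int_binop (poly_res, code, arg1, arg2, sign, &overflow))
          return NULL_TREE;
        return force_fit_type (type, poly_res, false,
                               TREE_OVERFLOW (arg1) | TREE_OVERFLOW (arg2));
      }
    return NULL_TREE;
  }

An intrinsic's gimple folder can then fold two constant operands with, for
example, vector_const_binop (PLUS_EXPR, op1, op2, aarch64_const_binop).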

The patch was bootstrapped and regtested on aarch64-linux-gnu with no
regressions.
OK for mainline?

Signed-off-by: Jennifer Schmitz <jschm...@nvidia.com>

gcc/
        * config/aarch64/aarch64-sve-builtins.cc (aarch64_const_binop):
        New function to fold binary SVE intrinsics without overflow.
        * config/aarch64/aarch64-sve-builtins.h: Declare aarch64_const_binop.
        * fold-const.h: Declare vector_const_binop and poly_int_binop.
        * fold-const.cc (const_binop): Remove cases for vector constants.
        (vector_const_binop): New function that folds vector constants
        element-wise.
        (int_const_binop): Remove call to wide_int_binop.
        (poly_int_binop): Add call to wide_int_binop.
