On Fri, Mar 10, 2023 at 3:37 AM Tamar Christina via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> Hi All,
>
> The testcase
>
> typedef unsigned int vec __attribute__((vector_size(32)));
> vec
> f3 (vec a, vec b, vec c)
> {
>   vec d = a * b;
>   return d + ((c + d) >> 1);
> }
>
> shows a case where we don't want to form an FMA because the MUL is not
> single use.  To form an FMA here we would have to redo the MUL, since we
> no longer have its result to share.
>
> As such, forming an FMA here would be a de-optimization.
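>
> Concretely (an illustrative sketch of the operations involved, not an
> actual compiler dump): keeping the multiply shared needs a single MUL,
> whereas converting every use of it into an FMA repeats the multiply:
>
>   /* Multiply kept shared (one MUL):
>        d   = a * b;
>        t   = (c + d) >> 1;
>        res = d + t;
>
>      Every use rewritten as an FMA (the MUL is done twice):
>        t   = FMA (a, b, c) >> 1;
>        res = FMA (a, b, t);  */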
>
> Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.
>
> Ok for master?
>
> Thanks,
> Tamar
>
> gcc/ChangeLog:
>
>         PR target/108583
>         * tree-ssa-math-opts.cc (convert_mult_to_fma): Inhibit FMA formation
>         when the multiplication is not single use.
>
> gcc/testsuite/ChangeLog:
>
>         PR target/108583
>         * gcc.dg/mla_1.c: New test.
>
> Co-Authored-By: Richard Sandiford <richard.sandif...@arm.com>
>
> --- inline copy of patch --
> diff --git a/gcc/testsuite/gcc.dg/mla_1.c b/gcc/testsuite/gcc.dg/mla_1.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..a92ecf248116d89b1bc4207a907ea5ed95728a28
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/mla_1.c
> @@ -0,0 +1,40 @@
> +/* { dg-do compile } */
> +/* { dg-require-effective-target vect_int } */
> +/* { dg-options "-O2 -msve-vector-bits=256 -march=armv8.2-a+sve -fdump-tree-optimized" } */
> +
> +unsigned int
> +f1 (unsigned int a, unsigned int b, unsigned int c) {
> +  unsigned int d = a * b;
> +  return d + ((c + d) >> 1);
> +}
> +
> +unsigned int
> +g1 (unsigned int a, unsigned int b, unsigned int c) {
> +  return a * b + c;
> +}
> +
> +__Uint32x4_t
> +f2 (__Uint32x4_t a, __Uint32x4_t b, __Uint32x4_t c) {
> +  __Uint32x4_t d = a * b;
> +  return d + ((c + d) >> 1);
> +}
> +
> +__Uint32x4_t
> +g2 (__Uint32x4_t a, __Uint32x4_t b, __Uint32x4_t c) {
> +  return a * b + c;
> +}
> +
> +typedef unsigned int vec __attribute__((vector_size(32)));
> +vec
> +f3 (vec a, vec b, vec c)
> +{
> +  vec d = a * b;
> +  return d + ((c + d) >> 1);
> +}
> +
> +vec
> +g3 (vec a, vec b, vec c)
> +{
> +  return a * b + c;
> +}
> +
> +/* { dg-final { scan-tree-dump-times {\.FMA } 1 "optimized" { target aarch64*-*-* } } } */
> diff --git a/gcc/tree-ssa-math-opts.cc b/gcc/tree-ssa-math-opts.cc
> index 5ab5b944a573ad24ce8427aff24fc5215bf05dac..26ed91d58fa4709a67c903ad446d267a3113c172 100644
> --- a/gcc/tree-ssa-math-opts.cc
> +++ b/gcc/tree-ssa-math-opts.cc
> @@ -3346,6 +3346,20 @@ convert_mult_to_fma (gimple *mul_stmt, tree op1, tree op2,
>                     param_avoid_fma_max_bits));
>    bool defer = check_defer;
>    bool seen_negate_p = false;
> +
> +  /* There is no numerical difference between fused and unfused integer FMAs,
> +     and the assumption below that FMA is as cheap as addition is unlikely
> +     to be true, especially if the multiplication occurs multiple times on
> +     the same chain.  E.g., for something like:
> +
> +        (((a * b) + c) >> 1) + (a * b)
> +
> +     we do not want to duplicate the a * b into two additions, not least
> +     because the result is not a natural FMA chain.  */
> +  if (ANY_INTEGRAL_TYPE_P (type)
> +      && !has_single_use (mul_result))
What about floating point?  (A sketch of what I mean is below, after the patch.)
> +    return false;
> +
>    /* Make sure that the multiplication statement becomes dead after
>       the transformation, thus that all uses are transformed to FMAs.
>       This means we assume that an FMA operation has the same cost
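
To make the floating-point question above concrete, here is a hypothetical
float analogue of f3 (shifts don't exist for floats, so it scales by 0.5f
instead):

typedef float fvec __attribute__((vector_size(32)));
fvec
f3f (fvec a, fvec b, fvec c)
{
  fvec d = a * b;
  return d + (c + d) * 0.5f;
}

The multiplication is not single use here either, so the same duplication
concern would seem to apply, although for floating point the fused and
unfused forms do differ numerically.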



-- 
BR,
Hongtao
