On Wed, Jul 13, 2022 at 09:10:25AM +0100, Richard Sandiford wrote:
> Richard Biener via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> > On Tue, Jul 12, 2022 at 4:38 PM Andrew Carlotti <andrew.carlo...@arm.com> wrote:
> >>
> >> aarch64_general_gimple_fold_builtin doesn't check whether the LHS of a
> >> function call is null before converting it to an assign statement. To avoid
> >> returning an invalid GIMPLE statement in this case, we instead assign the
> >> expression result to a new (unused) variable.
> >>
> >> This change only affects code that:
> >> 1) Calls an intrinsic function that has no side effects;
> >> 2) Does not use or store the value returned by the intrinsic;
> >> 3) Uses parameters that prevent the front-end eliminating the call prior to
> >> gimplification.
> >>
> >> The ICE is unlikely to have occurred in the wild, as it relies on
> >> the presence of a redundant intrinsic call.
> >
> > Other targets usually simply refrain from folding intrinsic calls with no LHS.
> > Another option is to just drop it on the floor if it does not have any
> > side-effects which for the gimple_fold_builtin hook means folding it to
> > a GIMPLE_NOP (gimple_build_nop ()).
> 
> Sorry, I just pushed the patch before seeing this.
> 
> I guess the problem with refraining from folding calls with no lhs
> is that it has to be done on a per-function basis.  (E.g. stores
> should still be folded.)  It then becomes something that we need
> to remember for each individual call.  E.g. ix86_gimple_fold_builtin
> seems to have three different pieces of code for handling null lhses,
> even with its heavy use of gotos.
> 
> So a nice thing about the current patch is that it handles all this
> in one place only.
> 
> Thanks,
> Richard

I specifically wanted to avoid leaving the call unfolded, because always
folding means that the builtin doesn't need to be implemented anywhere
else (which isn't relevant here, but may become relevant when folding
newly defined builtins in the future).

I considered dropping the statement, but I wasn't sure at the time that
I could do it safely. I could send a patch to instead replace new_stmt
with a GIMPLE_NOP.
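
Roughly, as an untested sketch (reusing the same null-lhs check as the
current fixup, and assuming nothing later in the hook inspects new_stmt):

  /* If folding produced an assign with no lhs, the call's result is
     unused, so fold the call to a GIMPLE_NOP instead of keeping a
     dead assignment.  */
  if (new_stmt && is_gimple_assign (new_stmt) && !gimple_assign_lhs (new_stmt))
    new_stmt = gimple_build_nop ();

That would still keep the handling in one place, while removing the dead
statement outright rather than assigning to an unused variable.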

> >> gcc/ChangeLog:
> >>
> >>  * config/aarch64/aarch64-builtins.cc
> >>  (aarch64_general_gimple_fold_builtin): Add fixup for invalid GIMPLE.
> >>
> >> gcc/testsuite/ChangeLog:
> >>
> >>  * gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c: New test.
> >>
> >> ---
> >>
> >> diff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc
> >> index e0a741ac663188713e21f457affa57217d074783..5753988a9964967c27a03aca5fddb9025fd8ed6e 100644
> >> --- a/gcc/config/aarch64/aarch64-builtins.cc
> >> +++ b/gcc/config/aarch64/aarch64-builtins.cc
> >> @@ -3022,6 +3022,16 @@ aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt,
> >>      default:
> >>        break;
> >>      }
> >> +
> >> +  /* GIMPLE assign statements (unlike calls) require a non-null lhs. If we
> >> +     created an assign statement with a null lhs, then fix this by assigning
> >> +     to a new (and subsequently unused) variable. */
> >> +  if (new_stmt && is_gimple_assign (new_stmt) && !gimple_assign_lhs (new_stmt))
> >> +    {
> >> +      tree new_lhs = make_ssa_name (gimple_call_return_type (stmt));
> >> +      gimple_assign_set_lhs (new_stmt, new_lhs);
> >> +    }
> >> +
> >>    return new_stmt;
> >>  }
> >>
> >> diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
> >> new file mode 100644
> >> index 0000000000000000000000000000000000000000..345307456b175307f5cb22de5e59cfc6254f2737
> >> --- /dev/null
> >> +++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
> >> @@ -0,0 +1,9 @@
> >> +/* { dg-do compile { target { aarch64*-*-* } } } */
> >> +
> >> +#include <arm_neon.h>
> >> +
> >> +int8_t *bar();
> >> +
> >> +void foo() {
> >> +  __builtin_aarch64_ld1v16qi(bar());
> >> +}
