Re: Fold VEC_COND_EXPRs to IFN_COND_* where possible

2018-05-25 Thread Richard Biener
On Thu, May 24, 2018 at 2:21 PM Richard Sandiford <richard.sandif...@linaro.org> wrote:

> Richard Biener  writes:
> >> 2018-05-24  Richard Sandiford  
> >
> >> gcc/
> >>  * doc/sourcebuild.texi (vect_double_cond_arith): Document.
> >>  * gimple-match.h (gimple_match_op::MAX_NUM_OPS): Bump to 4.
> >>  (gimple_match_op::gimple_match_op): Add an overload for 4 operands.
> >>  (gimple_match_op::set_op): Likewise.
> >>  (gimple_resimplify4): Declare.
> >>  * genmatch.c (commutative_op): Handle CFN_COND_* functions.
> >
> > ^^^  you don't seem to use that and I don't see how those are commutative
> > in operands 1 and 2 without inverting operand 0.  So w/o adjusting the
> > parsing part I think that people can write (cond_foo:c ...) and likely
> > be surprised that it isn't rejected.  It is of course required to make :C
> > work.

> Operands 1 and 2 are the operands of the binary operation, and now
> that the else value is specified separately, the functions are
> commutative in those operands if the binary operation is commutative.
> Of course, CFN_COND_SUB shouldn't have been there, sigh...

Ah - that was what confused me ;)

> > The patch is ok if you drop this hunk for now.  You can re-introduce it
> > as followup if you make sure to make :c error on those IFNs.

> OK, thanks.  I'm happy to drop it for now.

Thanks,
Richard.


> Richard


Re: Fold VEC_COND_EXPRs to IFN_COND_* where possible

2018-05-24 Thread Richard Sandiford
Richard Biener  writes:
>> 2018-05-24  Richard Sandiford  
>
>> gcc/
>>  * doc/sourcebuild.texi (vect_double_cond_arith): Document.
>>  * gimple-match.h (gimple_match_op::MAX_NUM_OPS): Bump to 4.
>>  (gimple_match_op::gimple_match_op): Add an overload for 4 operands.
>>  (gimple_match_op::set_op): Likewise.
>>  (gimple_resimplify4): Declare.
>>  * genmatch.c (commutative_op): Handle CFN_COND_* functions.
>
> ^^^  you don't seem to use that and I don't see how those are commutative
> in operands 1 and 2 without inverting operand 0.  So w/o adjusting the
> parsing part I think that people can write (cond_foo:c ...) and likely
> be surprised that it isn't rejected.  It is of course required to make :C
> work.

Operands 1 and 2 are the operands of the binary operation, and now
that the else value is specified separately, the functions are
commutative in those operands if the binary operation is commutative.
Of course, CFN_COND_SUB shouldn't have been there, sigh...
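E.g. the intended equivalence is:

  IFN_COND_ADD (COND, A, B, ELSE) == IFN_COND_ADD (COND, B, A, ELSE)

Only the inputs of the underlying ADD commute; COND and ELSE stay in
place, so operand 0 never needs to be inverted.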

> The patch is ok if you drop this hunk for now.  You can re-introduce it
> as followup if you make sure to make :c error on those IFNs.

OK, thanks.  I'm happy to drop it for now.

Richard


Re: Fold VEC_COND_EXPRs to IFN_COND_* where possible

2018-05-24 Thread Richard Biener
On Thu, May 24, 2018 at 11:28 AM Richard Sandiford <richard.sandif...@linaro.org> wrote:

> This patch adds the folds:

>(vec_cond COND (foo A B) C) -> (IFN_COND_FOO COND A B C)
>(vec_cond COND C (foo A B)) -> (IFN_COND_FOO (!COND) A B C)

> with the usual implicit restriction that the target must support
> the produced IFN_COND_FOO.

> The results of these folds don't have identical semantics, since
> the reverse transform would be invalid if (FOO A[i] B[i]) faults when
> COND[i] is false.  But this direction is OK since we're simply dropping
> faults for operations whose results aren't needed.

> The new gimple_resimplify4 doesn't try to do any constant folding
> on the IFN_COND_*s.  This is because a later patch will handle it
> by folding the associated unconditional operation.

> Doing this in gimple is better than doing it in .md patterns,
> since the second form (with the inverted condition) is much more
> common than the first, and it's better to fold away the inversion
> in gimple and optimise the result before entering expand.

> Tested on aarch64-linux-gnu (with and without SVE), aarch64_be-elf
> and x86_64-linux-gnu.  OK to install?

> Richard


> 2018-05-24  Richard Sandiford  

> gcc/
>  * doc/sourcebuild.texi (vect_double_cond_arith): Document.
>  * gimple-match.h (gimple_match_op::MAX_NUM_OPS): Bump to 4.
>  (gimple_match_op::gimple_match_op): Add an overload for 4 operands.
>  (gimple_match_op::set_op): Likewise.
>  (gimple_resimplify4): Declare.
>  * genmatch.c (commutative_op): Handle CFN_COND_* functions.

^^^  you don't seem to use that and I don't see how those are commutative
in operands 1 and 2 without inverting operand 0.  So w/o adjusting the
parsing part I think that people can write (cond_foo:c ...) and likely
be surprised that it isn't rejected.  It is of course required to make :C
work.
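(For concreteness, the kind of pattern at issue would be a
hypothetical match.pd fragment like

  (simplify
   (IFN_COND_ADD:c @0 @1 @2 @3)
   ...)

where :c is only correct if genmatch knows that the commutative pair
is operands 1 and 2, not the first two.)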

The patch is ok if you drop this hunk for now.  You can re-introduce it
as followup if you make sure to make :c error on those IFNs.

Thanks,
Richard.

>  (get_operand_type, expr::gen_transform): Likewise.
>  (decision_tree::gen): Generate a simplification routine for 4 operands.
>  * gimple-match-head.c (gimple_simplify): Add an overload for
>  4 operands.  In the top-level function, handle up to 4 call
>  arguments and call gimple_resimplify4.
>  (gimple_resimplify4): New function.
>  (build_call_internal): Pass a fourth operand.
>  (maybe_push_to_seq): Likewise.
>  * match.pd (UNCOND_BINARY, COND_BINARY): New operator lists.
>  Fold VEC_COND_EXPRs of an operation and a default value into
>  an IFN_COND_* function if possible.
>  * config/aarch64/iterators.md (UNSPEC_COND_MAX, UNSPEC_COND_MIN):
>  New unspecs.
>  (SVE_COND_FP_BINARY): Include them.
>  (optab, sve_fp_op): Handle them.
>  (SVE_INT_BINARY_REV): New code iterator.
>  (SVE_COND_FP_BINARY_REV): New int iterator.
>  (commutative): New int attribute.
>  * config/aarch64/aarch64-protos.h (aarch64_sve_prepare_conditional_op):
>  Declare.
>  * config/aarch64/aarch64.c (aarch64_sve_prepare_conditional_op): New
>  function.
>  * config/aarch64/aarch64-sve.md (cond_<optab><mode>): Use it.
>  (*cond_<optab><mode>): New patterns for reversed operands.

> gcc/testsuite/
>  * lib/target-supports.exp
>  (check_effective_target_vect_double_cond_arith): New proc.
>  * gcc.dg/vect/vect-cond-arith-1.c: New test.
>  * gcc.target/aarch64/sve/vcond_8.c: Likewise.
>  * gcc.target/aarch64/sve/vcond_8_run.c: Likewise.
>  * gcc.target/aarch64/sve/vcond_9.c: Likewise.
>  * gcc.target/aarch64/sve/vcond_9_run.c: Likewise.
>  * gcc.target/aarch64/sve/vcond_12.c: Likewise.
>  * gcc.target/aarch64/sve/vcond_12_run.c: Likewise.

> Index: gcc/doc/sourcebuild.texi
> ===================================================================
> --- gcc/doc/sourcebuild.texi	2018-05-24 09:02:24.987538940 +0100
> +++ gcc/doc/sourcebuild.texi	2018-05-24 09:54:37.508451387 +0100
> @@ -1425,6 +1425,10 @@ have different type from the value opera
>   @item vect_double
>   Target supports hardware vectors of @code{double}.

> +@item vect_double_cond_arith
> +Target supports conditional addition, subtraction, minimum and maximum
> +on vectors of @code{double}, via the @code{cond_} optabs.
> +
>   @item vect_element_align_preferred
>   The target's preferred vector alignment is the same as the element
>   alignment.
> Index: gcc/gimple-match.h
> ===================================================================
> --- gcc/gimple-match.h	2018-05-24 09:02:28.764328414 +0100
> +++ gcc/gimple-match.h	2018-05-24 09:54:37.509451356 +0100
> @@ -49,17 +49,19 @@ struct gimple_match_op
> gimple_match_op (code_helper, tree, tree);
> gimple_match_op (code_helper, tree, tree, tree);
> gim

Fold VEC_COND_EXPRs to IFN_COND_* where possible

2018-05-24 Thread Richard Sandiford
This patch adds the folds:

  (vec_cond COND (foo A B) C) -> (IFN_COND_FOO COND A B C)
  (vec_cond COND C (foo A B)) -> (IFN_COND_FOO (!COND) A B C)

with the usual implicit restriction that the target must support
the produced IFN_COND_FOO.

The results of these folds don't have identical semantics, since
the reverse transform would be invalid if (FOO A[i] B[i]) faults when
COND[i] is false.  But this direction is OK since we're simply dropping
faults for operations whose results aren't needed.
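As a concrete example (a sketch of the before-and-after gimple; the
exact spelling in the dumps may differ):

  t = a + b;                        <-- unconditional, all lanes
  r = VEC_COND_EXPR <mask, t, c>;

becomes, when the target provides cond_add for the mode:

  r = .COND_ADD (mask, a, b, c);    <-- a + b where mask, else c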

The new gimple_resimplify4 doesn't try to do any constant folding
on the IFN_COND_*s.  This is because a later patch will handle it
by folding the associated unconditional operation.

Doing this in gimple is better than doing it in .md patterns,
since the second form (with the inverted condition) is much more
common than the first, and it's better to fold away the inversion
in gimple and optimise the result before entering expand.
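In the second form the fold has to invert the mask, roughly:

  t = a + b;
  r = VEC_COND_EXPR <mask, c, t>;

  -->  r = .COND_ADD (~mask, a, b, c);

and doing that in gimple means the inversion can usually be folded
away (e.g. against the comparison that produces the mask) before
entering expand.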

Tested on aarch64-linux-gnu (with and without SVE), aarch64_be-elf
and x86_64-linux-gnu.  OK to install?

Richard


2018-05-24  Richard Sandiford  

gcc/
* doc/sourcebuild.texi (vect_double_cond_arith): Document.
* gimple-match.h (gimple_match_op::MAX_NUM_OPS): Bump to 4.
(gimple_match_op::gimple_match_op): Add an overload for 4 operands.
(gimple_match_op::set_op): Likewise.
(gimple_resimplify4): Declare.
* genmatch.c (commutative_op): Handle CFN_COND_* functions.
(get_operand_type, expr::gen_transform): Likewise.
(decision_tree::gen): Generate a simplification routine for 4 operands.
* gimple-match-head.c (gimple_simplify): Add an overload for
4 operands.  In the top-level function, handle up to 4 call
arguments and call gimple_resimplify4.
(gimple_resimplify4): New function.
(build_call_internal): Pass a fourth operand.
(maybe_push_to_seq): Likewise.
* match.pd (UNCOND_BINARY, COND_BINARY): New operator lists.
Fold VEC_COND_EXPRs of an operation and a default value into
an IFN_COND_* function if possible.
* config/aarch64/iterators.md (UNSPEC_COND_MAX, UNSPEC_COND_MIN):
New unspecs.
(SVE_COND_FP_BINARY): Include them.
(optab, sve_fp_op): Handle them.
(SVE_INT_BINARY_REV): New code iterator.
(SVE_COND_FP_BINARY_REV): New int iterator.
(commutative): New int attribute.
* config/aarch64/aarch64-protos.h (aarch64_sve_prepare_conditional_op):
Declare.
* config/aarch64/aarch64.c (aarch64_sve_prepare_conditional_op): New
function.
* config/aarch64/aarch64-sve.md (cond_<optab><mode>): Use it.
(*cond_<optab><mode>): New patterns for reversed operands.

gcc/testsuite/
* lib/target-supports.exp
(check_effective_target_vect_double_cond_arith): New proc.
* gcc.dg/vect/vect-cond-arith-1.c: New test.
* gcc.target/aarch64/sve/vcond_8.c: Likewise.
* gcc.target/aarch64/sve/vcond_8_run.c: Likewise.
* gcc.target/aarch64/sve/vcond_9.c: Likewise.
* gcc.target/aarch64/sve/vcond_9_run.c: Likewise.
* gcc.target/aarch64/sve/vcond_12.c: Likewise.
* gcc.target/aarch64/sve/vcond_12_run.c: Likewise.
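(Tests that rely on the new optabs can then be guarded in the usual
way, e.g. with:

  /* { dg-require-effective-target vect_double_cond_arith } */

so that they are skipped on targets without the support.)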

Index: gcc/doc/sourcebuild.texi
===================================================================
--- gcc/doc/sourcebuild.texi	2018-05-24 09:02:24.987538940 +0100
+++ gcc/doc/sourcebuild.texi	2018-05-24 09:54:37.508451387 +0100
@@ -1425,6 +1425,10 @@ have different type from the value opera
 @item vect_double
 Target supports hardware vectors of @code{double}.
 
+@item vect_double_cond_arith
+Target supports conditional addition, subtraction, minimum and maximum
+on vectors of @code{double}, via the @code{cond_} optabs.
+
 @item vect_element_align_preferred
 The target's preferred vector alignment is the same as the element
 alignment.
Index: gcc/gimple-match.h
===================================================================
--- gcc/gimple-match.h	2018-05-24 09:02:28.764328414 +0100
+++ gcc/gimple-match.h	2018-05-24 09:54:37.509451356 +0100
@@ -49,17 +49,19 @@ struct gimple_match_op
   gimple_match_op (code_helper, tree, tree);
   gimple_match_op (code_helper, tree, tree, tree);
   gimple_match_op (code_helper, tree, tree, tree, tree);
+  gimple_match_op (code_helper, tree, tree, tree, tree, tree);
 
   void set_op (code_helper, tree, unsigned int);
   void set_op (code_helper, tree, tree);
   void set_op (code_helper, tree, tree, tree);
   void set_op (code_helper, tree, tree, tree, tree);
+  void set_op (code_helper, tree, tree, tree, tree, tree);
   void set_value (tree);
 
   tree op_or_null (unsigned int) const;
 
   /* The maximum value of NUM_OPS.  */
-  static const unsigned int MAX_NUM_OPS = 3;
+  static const unsigned int MAX_NUM_OPS = 4;
 
   /* The operation being performed.  */
   code_helper code;
@@ -113,6 +115,17 @@ gimple_match_op::gimple_match_op (code_h
   ops[2] = op2;
 }
 
+inline
+gimple_match_op::gimple