On Thu, Jun 27, 2024 at 4:45 PM Li, Pan2 <pan2...@intel.com> wrote:
>
> Hi Richard,
>
> As Tamar mentioned previously, I would like to try a further optimization 
> on top of this patch.
> Taking the zip benchmark as an example, we may have gimple similar to the below:
>
> unsigned int _1, _2;
> unsigned short int _9;
>
> _9 = (unsigned short int).SAT_SUB (_1, _2);
>
> If we can determine that _1 is in the range of unsigned short, we can 
> distribute the convert into the .SAT_SUB, aka:
>
> From:
> _1 = (unsigned short int)_other;
> _9 = (unsigned short int).SAT_SUB (_1, _2);
>
> To:
> _9 = .SAT_SUB ((unsigned short int)_1, (unsigned short int)MIN_EXPR (_2, 65535));
>
> Unfortunately, it fails to vectorize when I try to perform the above 
> changes.  vectorizable_conversion considers it not a simple use and then 
> returns failure to vect_analyze_loop_2.
>
> zip.test.c:15:12: note:   ==> examining pattern def statement: patt_42 = (short unsigned int) MIN_EXPR <b_12(D), b_12(D)>;
> zip.test.c:15:12: note:   ==> examining statement: patt_42 = (short unsigned int) MIN_EXPR <b_12(D), b_12(D)>;
> zip.test.c:15:12: note:   vect_is_simple_use: operand MIN_EXPR <b_12(D), b_12(D)>, type of def: unknown
> zip.test.c:15:12: missed:   Unsupported pattern.
> zip.test.c:15:12: missed:   use not simple.
> zip.test.c:15:12: note:   vect_is_simple_use: operand MIN_EXPR <b_12(D), b_12(D)>, type of def: unknown
> zip.test.c:15:12: missed:   Unsupported pattern.
> zip.test.c:15:12: missed:   use not simple.
> zip.test.c:15:12: note:   vect_is_simple_use: operand MIN_EXPR <b_12(D), b_12(D)>, type of def: unknown
> zip.test.c:15:12: missed:   Unsupported pattern.
> zip.test.c:15:12: missed:   use not simple.
> zip.test.c:7:6: missed:   not vectorized: relevant stmt not supported: patt_42 = (short unsigned int) MIN_EXPR <b_12(D), b_12(D)>;
> zip.test.c:15:12: missed:  bad operation or unsupported loop bound.
>
> I tried COND_EXPR here instead of MIN_EXPR, with almost the same 
> behavior.  I am not sure whether we can unblock this in 
> vectorizable_conversion or whether we need improvements in some other pass.

I think you're doing the MIN_EXPR wrong - the above says MIN_EXPR
<b_12(D), b_12(D)>, which doesn't make sense anyway.  I suspect you
failed to put the MIN_EXPR into a separate statement?

> Thanks a lot.
>
> Pan
>
> -----Original Message-----
> From: Li, Pan2
> Sent: Thursday, June 27, 2024 2:14 PM
> To: Richard Biener <richard.guent...@gmail.com>
> Cc: gcc-patches@gcc.gnu.org; juzhe.zh...@rivai.ai; kito.ch...@gmail.com; 
> jeffreya...@gmail.com; rdapp....@gmail.com
> Subject: RE: [PATCH v3] Vect: Support truncate after .SAT_SUB pattern in zip
>
> > OK
>
> Committed, thanks Richard.
>
> Pan
>
> -----Original Message-----
> From: Richard Biener <richard.guent...@gmail.com>
> Sent: Thursday, June 27, 2024 2:04 PM
> To: Li, Pan2 <pan2...@intel.com>
> Cc: gcc-patches@gcc.gnu.org; juzhe.zh...@rivai.ai; kito.ch...@gmail.com; 
> jeffreya...@gmail.com; rdapp....@gmail.com
> Subject: Re: [PATCH v3] Vect: Support truncate after .SAT_SUB pattern in zip
>
> On Thu, Jun 27, 2024 at 3:31 AM <pan2...@intel.com> wrote:
> >
> > From: Pan Li <pan2...@intel.com>
>
> OK
>
> > The zip benchmark of coremark-pro has one SAT_SUB-like pattern, but
> > truncated, as below:
> >
> > void test (uint16_t *x, unsigned b, unsigned n)
> > {
> >   unsigned a = 0;
> >   register uint16_t *p = x;
> >
> >   do {
> >     a = *--p;
> >     *p = (uint16_t)(a >= b ? a - b : 0); // Truncate after .SAT_SUB
> >   } while (--n);
> > }
> >
> > Before the vect pass it has the gimple below, which cannot hit any
> > pattern of SAT_SUB and thus cannot be vectorized to SAT_SUB.
> >
> > _2 = a_11 - b_12(D);
> > iftmp.0_13 = (short unsigned int) _2;
> > _18 = a_11 >= b_12(D);
> > iftmp.0_5 = _18 ? iftmp.0_13 : 0;
> >
> > This patch improves the pattern match to recognize the above as a
> > truncate after .SAT_SUB pattern.  Then we will have the pattern similar
> > to the below, as well as eliminate the first 3 dead stmts.
> >
> > _2 = a_11 - b_12(D);
> > iftmp.0_13 = (short unsigned int) _2;
> > _18 = a_11 >= b_12(D);
> > iftmp.0_5 = (short unsigned int).SAT_SUB (a_11, b_12(D));
> >
> > The below tests are passed for this patch.
> > 1. The rv64gcv fully regression tests.
> > 2. The rv64gcv build with glibc.
> > 3. The x86 bootstrap tests.
> > 4. The x86 fully regression tests.
> >
> > gcc/ChangeLog:
> >
> >         * match.pd: Add convert description for minus and capture.
> >         * tree-vect-patterns.cc (vect_recog_build_binary_gimple_call): Add
> >         new logic to handle in_type being incompatible with out_type, as
> >         well as rename from.
> >         (vect_recog_build_binary_gimple_stmt): Rename to.
> >         (vect_recog_sat_add_pattern): Leverage above renamed func.
> >         (vect_recog_sat_sub_pattern): Ditto.
> >
> > Signed-off-by: Pan Li <pan2...@intel.com>
> > ---
> >  gcc/match.pd              |  4 +--
> >  gcc/tree-vect-patterns.cc | 51 ++++++++++++++++++++++++---------------
> >  2 files changed, 33 insertions(+), 22 deletions(-)
> >
> > diff --git a/gcc/match.pd b/gcc/match.pd
> > index cf8a399a744..820591a36b3 100644
> > --- a/gcc/match.pd
> > +++ b/gcc/match.pd
> > @@ -3164,9 +3164,9 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
> >  /* Unsigned saturation sub, case 2 (branch with ge):
> >     SAT_U_SUB = X >= Y ? X - Y : 0.  */
> >  (match (unsigned_integer_sat_sub @0 @1)
> > - (cond^ (ge @0 @1) (minus @0 @1) integer_zerop)
> > + (cond^ (ge @0 @1) (convert? (minus (convert1? @0) (convert1? @1))) integer_zerop)
> >   (if (INTEGRAL_TYPE_P (type) && TYPE_UNSIGNED (type)
> > -      && types_match (type, @0, @1))))
> > +      && TYPE_UNSIGNED (TREE_TYPE (@0)) && types_match (@0, @1))))
> >
> >  /* Unsigned saturation sub, case 3 (branchless with gt):
> >     SAT_U_SUB = (X - Y) * (X > Y).  */
> > diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
> > index cef901808eb..519d15f2a43 100644
> > --- a/gcc/tree-vect-patterns.cc
> > +++ b/gcc/tree-vect-patterns.cc
> > @@ -4490,26 +4490,37 @@ vect_recog_mult_pattern (vec_info *vinfo,
> >  extern bool gimple_unsigned_integer_sat_add (tree, tree*, tree (*)(tree));
> >  extern bool gimple_unsigned_integer_sat_sub (tree, tree*, tree (*)(tree));
> >
> > -static gcall *
> > -vect_recog_build_binary_gimple_call (vec_info *vinfo, gimple *stmt,
> > +static gimple *
> > +vect_recog_build_binary_gimple_stmt (vec_info *vinfo, stmt_vec_info stmt_info,
> >                                      internal_fn fn, tree *type_out,
> > -                                    tree op_0, tree op_1)
> > +                                    tree lhs, tree op_0, tree op_1)
> >  {
> >    tree itype = TREE_TYPE (op_0);
> > -  tree vtype = get_vectype_for_scalar_type (vinfo, itype);
> > +  tree otype = TREE_TYPE (lhs);
> > +  tree v_itype = get_vectype_for_scalar_type (vinfo, itype);
> > +  tree v_otype = get_vectype_for_scalar_type (vinfo, otype);
> >
> > -  if (vtype != NULL_TREE
> > -    && direct_internal_fn_supported_p (fn, vtype, OPTIMIZE_FOR_BOTH))
> > +  if (v_itype != NULL_TREE && v_otype != NULL_TREE
> > +    && direct_internal_fn_supported_p (fn, v_itype, OPTIMIZE_FOR_BOTH))
> >      {
> >        gcall *call = gimple_build_call_internal (fn, 2, op_0, op_1);
> > +      tree in_ssa = vect_recog_temp_ssa_var (itype, NULL);
> >
> > -      gimple_call_set_lhs (call, vect_recog_temp_ssa_var (itype, NULL));
> > +      gimple_call_set_lhs (call, in_ssa);
> >        gimple_call_set_nothrow (call, /* nothrow_p */ false);
> > -      gimple_set_location (call, gimple_location (stmt));
> > +      gimple_set_location (call, gimple_location (STMT_VINFO_STMT (stmt_info)));
> > +
> > +      *type_out = v_otype;
> >
> > -      *type_out = vtype;
> > +      if (types_compatible_p (itype, otype))
> > +       return call;
> > +      else
> > +       {
> > +         append_pattern_def_seq (vinfo, stmt_info, call, v_itype);
> > +         tree out_ssa = vect_recog_temp_ssa_var (otype, NULL);
> >
> > -      return call;
> > +         return gimple_build_assign (out_ssa, NOP_EXPR, in_ssa);
> > +       }
> >      }
> >
> >    return NULL;
> > @@ -4541,13 +4552,13 @@ vect_recog_sat_add_pattern (vec_info *vinfo, stmt_vec_info stmt_vinfo,
> >
> >    if (gimple_unsigned_integer_sat_add (lhs, ops, NULL))
> >      {
> > -      gcall *call = vect_recog_build_binary_gimple_call (vinfo, last_stmt,
> > -                                                        IFN_SAT_ADD, type_out,
> > -                                                        ops[0], ops[1]);
> > -      if (call)
> > +      gimple *stmt = vect_recog_build_binary_gimple_stmt (vinfo, stmt_vinfo,
> > +                                                         IFN_SAT_ADD, type_out,
> > +                                                         lhs, ops[0], ops[1]);
> > +      if (stmt)
> >         {
> >           vect_pattern_detected ("vect_recog_sat_add_pattern", last_stmt);
> > -         return call;
> > +         return stmt;
> >         }
> >      }
> >
> > @@ -4579,13 +4590,13 @@ vect_recog_sat_sub_pattern (vec_info *vinfo, stmt_vec_info stmt_vinfo,
> >
> >    if (gimple_unsigned_integer_sat_sub (lhs, ops, NULL))
> >      {
> > -      gcall *call = vect_recog_build_binary_gimple_call (vinfo, last_stmt,
> > -                                                        IFN_SAT_SUB, type_out,
> > -                                                        ops[0], ops[1]);
> > -      if (call)
> > +      gimple *stmt = vect_recog_build_binary_gimple_stmt (vinfo, stmt_vinfo,
> > +                                                         IFN_SAT_SUB, type_out,
> > +                                                         lhs, ops[0], ops[1]);
> > +      if (stmt)
> >         {
> >           vect_pattern_detected ("vect_recog_sat_sub_pattern", last_stmt);
> > -         return call;
> > +         return stmt;
> >         }
> >      }
> >
> > --
> > 2.34.1
> >
