Thanks Richard for the comments. I will merge the rest of the .SAT_ADD forms into one
middle-end patch to give the full picture, and address the comments there as well.

Pan

-----Original Message-----
From: Richard Biener <richard.guent...@gmail.com> 
Sent: Wednesday, May 22, 2024 9:16 PM
To: Li, Pan2 <pan2...@intel.com>
Cc: gcc-patches@gcc.gnu.org; juzhe.zh...@rivai.ai; kito.ch...@gmail.com; 
tamar.christ...@arm.com
Subject: Re: [PATCH v1 1/2] Match: Support __builtin_add_overflow for 
branchless unsigned SAT_ADD

On Sun, May 19, 2024 at 8:37 AM <pan2...@intel.com> wrote:
>
> From: Pan Li <pan2...@intel.com>
>
> This patch would like to support the branchless form of unsigned
> SAT_ADD when leveraging __builtin_add_overflow.  For example as below:
>
> uint64_t sat_add_u(uint64_t x, uint64_t y)
> {
>   uint64_t ret;
>   uint64_t overflow = __builtin_add_overflow (x, y, &ret);
>
>   return (uint64_t)(-overflow) | ret;
> }
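
For reference, the same saturation semantics can also be written with a plain
compare instead of the builtin.  The snippet below is only a minimal standalone
sketch of the intended behavior (clamp to UINT64_MAX on overflow); it is not
taken from the patch:

#include <stdint.h>
#include <stdio.h>

static uint64_t sat_add_u_cmp (uint64_t x, uint64_t y)
{
  uint64_t sum = x + y;               /* wraps around on overflow      */
  return sum | -(uint64_t)(sum < x);  /* all-ones mask when it wrapped */
}

int main (void)
{
  /* Prints 18446744073709551615, i.e. UINT64_MAX.  */
  printf ("%llu\n", (unsigned long long) sat_add_u_cmp (UINT64_MAX, 1));
  return 0;
}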
>
> Before this patch:
>
> uint64_t sat_add_u (uint64_t x, uint64_t y)
> {
>   long unsigned int _1;
>   long unsigned int _2;
>   long unsigned int _3;
>   __complex__ long unsigned int _6;
>   uint64_t _8;
>
> ;;   basic block 2, loop depth 0
> ;;    pred:       ENTRY
>   _6 = .ADD_OVERFLOW (x_4(D), y_5(D));
>   _1 = REALPART_EXPR <_6>;
>   _2 = IMAGPART_EXPR <_6>;
>   _3 = -_2;
>   _8 = _1 | _3;
>   return _8;
> ;;    succ:       EXIT
>
> }
>
> After this patch:
>
> uint64_t sat_add_u (uint64_t x, uint64_t y)
> {
>   uint64_t _8;
>
> ;;   basic block 2, loop depth 0
> ;;    pred:       ENTRY
>   _8 = .SAT_ADD (x_4(D), y_5(D)); [tail call]
>   return _8;
> ;;    succ:       EXIT
>
> }
>
> The below test suites are passed for this patch.
> * The rv64gcv fully regression test.
> * The x86 bootstrap test.
> * The x86 fully regression test.
>
> gcc/ChangeLog:
>
>         * match.pd: Add SAT_ADD right part 2 for __builtin_add_overflow.
>
> Signed-off-by: Pan Li <pan2...@intel.com>
> ---
>  gcc/match.pd | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/gcc/match.pd b/gcc/match.pd
> index b291e34bbe4..5328e846aff 100644
> --- a/gcc/match.pd
> +++ b/gcc/match.pd
> @@ -3064,6 +3064,10 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>   (negate (convert (ne (imagpart (IFN_ADD_OVERFLOW:c @0 @1)) integer_zerop)))
>   (if (TYPE_UNSIGNED (type) && integer_types_ternary_match (type, @0, @1))))
>
> +(match (usadd_right_part_2 @0 @1)
> + (negate (imagpart (IFN_ADD_OVERFLOW:c @0 @1)))
> + (if (TYPE_UNSIGNED (type) && integer_types_ternary_match (type, @0, @1))))
> +

Can you merge this with the patch that makes use of the
usadd_right_part_2 match?
It's difficult to review on its own.
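
For readability, here is a rough C sketch (not the actual match.pd code) of the
shape usadd_right_part_2 is meant to recognize, mirroring the .ADD_OVERFLOW
GIMPLE shown above; "right" refers to the second operand of the final bitwise OR:

#include <stdint.h>

static uint64_t sat_add_shape (uint64_t x, uint64_t y)
{
  uint64_t sum;                                        /* REALPART_EXPR of .ADD_OVERFLOW */
  uint64_t ovf = __builtin_add_overflow (x, y, &sum);  /* IMAGPART_EXPR of .ADD_OVERFLOW */
  uint64_t right = -ovf;  /* usadd_right_part_2: (negate (imagpart ...)) */
  return sum | right;     /* OR'ed with the sum to form the full SAT_ADD */
}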

>  /* We cannot merge or overload usadd_left_part_1 and usadd_left_part_2
>     because the sub part of left_part_2 cannot work with right_part_1.
>     For example, left_part_2 pattern focus one .ADD_OVERFLOW but the
> --
> 2.34.1
>
