RE: [PATCH][Aarch64] v2: Arithmetic overflow common functions [Patch 1/4]

2018-06-08 Thread Michael Collison
Patch updated as requested:

- name changed from 'aarch64_add_128bit_scratch_regs' to 
'aarch64_addti_scratch_regs'
- name changed from 'aarch64_subv_128bit_scratch_regs' to 
'aarch64_subvti_scratch_regs'

I did not find any helper function to replace 'aarch64_gen_unlikely_cbranch'.

Okay for trunk?


-----Original Message-----
From: James Greenhalgh  
Sent: Thursday, June 7, 2018 5:19 PM
To: Michael Collison 
Cc: GCC Patches ; nd 
Subject: Re: [PATCH][Aarch64] v2: Arithmetic overflow common functions [Patch 
1/4]

On Wed, Jun 06, 2018 at 12:14:03PM -0500, Michael Collison wrote:
> This is a respin of an AArch64 patch that adds support for builtin arithmetic 
> overflow operations. This update separates the patch into multiple pieces and 
> addresses comments made by Richard Earnshaw here:
> 
> https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00249.html
> 
> Original patch and motivation for patch here:
> 
> https://gcc.gnu.org/ml/gcc-patches/2017-05/msg01512.html
> 
> This patch primarily contains common functions in aarch64.c for 
> generating TImode scratch registers, and common rtl functions utilized by the 
> overflow patterns in aarch64.md. In addition, a new mode representing overflow, 
> CC_Vmode, is introduced.
> 
> Bootstrapped and tested on aarch64-linux-gnu. Okay for trunk?

Normally it is preferred that each patch in a series stands independent of the 
others. So if I apply just 1/4 I should get a working toolchain. You have some 
dependencies here between 1/4 and 3/4.

Rather than ask you to rework these patches, I think I'll instead ask you to 
squash them all to a single commit after we're done with review. That will save 
you some rebase work and maintain the property that trunk can be built at most 
revisions.


> (aarch64_add_128bit_scratch_regs): Declare.
> (aarch64_subv_128bit_scratch_regs): Declare.

Why use 128bit in the function name rather than call it 
aarch64_subvti_scratch_regs?


> @@ -16337,6 +16353,131 @@ aarch64_split_dimode_const_store (rtx dst, rtx src)
>return true;
>  }
>  
> +/* Generate RTL for a conditional branch with rtx comparison CODE in
> +   mode CC_MODE.  The destination of the unlikely conditional branch
> +   is LABEL_REF.  */
> +
> +void
> +aarch64_gen_unlikely_cbranch (enum rtx_code code, machine_mode cc_mode,
> +                              rtx label_ref)
> +{
> +  rtx x;
> +  x = gen_rtx_fmt_ee (code, VOIDmode,
> +                      gen_rtx_REG (cc_mode, CC_REGNUM),
> +                      const0_rtx);
> +
> +  x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
> +                            gen_rtx_LABEL_REF (VOIDmode, label_ref),
> +                            pc_rtx);
> +  aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
> +}
> +

I'm a bit surprised this is AArch64 specific and there are no helper functions 
to get you here. Not that it should block the patch, but if we can reuse 
something I'd prefer we did.
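[To make the concern concrete: at the source level, what aarch64_gen_unlikely_cbranch models at RTL is simply an overflow check whose taken branch is hinted as cold. A portable sketch (an analogy, not the RTL helper itself — the function name `checked_add` is illustrative) using `__builtin_expect`:]

```c
#include <limits.h>
#include <stdio.h>

/* Overflow is expected to be rare, so the branch to the handler is
   marked unlikely -- the same hint aarch64_gen_unlikely_cbranch
   attaches to the conditional jump it emits at the RTL level.  */
long
checked_add (long a, long b)
{
  long sum;
  if (__builtin_expect (__builtin_add_overflow (a, b, &sum), 0))
    {
      fprintf (stderr, "overflow\n");
      return a > 0 ? LONG_MAX : LONG_MIN; /* saturate on overflow */
    }
  return sum;
}
```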

> +void
> +aarch64_expand_subvti (rtx op0, rtx low_dest, rtx low_in1,
> +                       rtx low_in2, rtx high_dest, rtx high_in1,
> +                       rtx high_in2)
> +{
> +  if (low_in2 == const0_rtx)
> +    {
> +      low_dest = low_in1;
> +      emit_insn (gen_subdi3_compare1 (high_dest, high_in1,
> +                                      force_reg (DImode, high_in2)));
> +    }
> +  else
> +    {
> +      if (CONST_INT_P (low_in2))
> +        {
> +          low_in2 = force_reg (DImode, GEN_INT (-UINTVAL (low_in2)));
> +          high_in2 = force_reg (DImode, high_in2);
> +          emit_insn (gen_adddi3_compareC (low_dest, low_in1, low_in2));
> +        }
> +      else
> +        emit_insn (gen_subdi3_compare1 (low_dest, low_in1, low_in2));
> +      emit_insn (gen_subdi3_carryinCV (high_dest,
> +                                       force_reg (DImode, high_in1),
> +                                       high_in2));

This is where we'd break the build. gen_subdi3_carryinCV isn't defined until 
3/4.

The above points are minor.

This patch is OK with them cleaned up, once I've reviewed the other 3 parts to 
this series.

James

> 
> 2018-05-31  Michael Collison  
> Richard Henderson 
> 
> * config/aarch64/aarch64-modes.def (CC_V): New.
> * config/aarch64/aarch64-protos.h
> (aarch64_add_128bit_scratch_regs): Declare.
> (aarch64_subv_128bit_scratch_regs): Declare.
> (aarch64_expand_subvti): Declare.
> (aarch64_gen_unlikely_cbranch): Declare.
> * config/aarch64/aarch64.c (aarch64_select_cc_mode): Test for signed 
> overflow using CC_Vmode.
> (aarch64_get_condition_code_1): Handle CC_Vmode.
> (aarch64_gen_unlikely_cbranch): New function.
> (aarch64_add_128bit_scratch_regs): New function.
> (aarch64_subv_128bit_scratch_regs): New function.
> (aarch64_expand_subvti): New function.




Attachment: gnutools-6308-pt1.patch

