Cool, thank you!

Pan

-----Original Message-----
From: Kito Cheng <kito.ch...@gmail.com> 
Sent: Monday, April 17, 2023 9:52 AM
To: Li, Pan2 <pan2...@intel.com>
Cc: juzhe.zh...@rivai.ai; gcc-patches <gcc-patches@gcc.gnu.org>; Kito.cheng 
<kito.ch...@sifive.com>; Wang, Yanzhang <yanzhang.w...@intel.com>
Subject: Re: [PATCH v2] RISC-V: Add test cases for the RVV mask insn shortcut.

Pushed to trunk :)

On Mon, Apr 17, 2023 at 9:47 AM Li, Pan2 via Gcc-patches 
<gcc-patches@gcc.gnu.org> wrote:
>
> BTW, will this patch land in GCC 13 or master? The follow-up patches may 
> depend on these tests for ensuring correctness.
>
> Pan
>
> -----Original Message-----
> From: Li, Pan2
> Sent: Friday, April 14, 2023 2:47 PM
> To: Kito Cheng <kito.ch...@gmail.com>
> Cc: juzhe.zh...@rivai.ai; gcc-patches <gcc-patches@gcc.gnu.org>; 
> Kito.cheng <kito.ch...@sifive.com>; Wang, Yanzhang 
> <yanzhang.w...@intel.com>
> Subject: RE: [PATCH v2] RISC-V: Add test cases for the RVV mask insn shortcut.
>
> You're very welcome!
>
> It looks like vmorn(v, v) doesn't perform any shortcut, while vmandn(v, v) 
> is converted to vmclr upstream. As I understand it, there should be no 
> difference between vmORn and vmANDn except for the operator; I will take a 
> look at the RTL CSE pass for more details, 😊!
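>
> A minimal sketch for comparing the two (my own reduced example, not part 
> of the patch, built with the same -march=rv64gcv -mabi=lp64 -O3 options 
> as the test below; logically v & ~v is all zeros and v | ~v is all ones, 
> so both could fold to a constant):
>
> #include "riscv_vector.h"
>
> /* v & ~v == 0: currently folds to vmclr.m.  */
> vbool8_t andn_same (vbool8_t v1, size_t vl) {
>   return __riscv_vmandn_mm_b8 (v1, v1, vl);
> }
>
> /* v | ~v == all ones: could fold to vmset.m, but today the
>    vmorn.mm instruction survives.  */
> vbool8_t orn_same (vbool8_t v1, size_t vl) {
>   return __riscv_vmorn_mm_b8 (v1, v1, vl);
> }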
>
> Pan
>
> -----Original Message-----
> From: Kito Cheng <kito.ch...@gmail.com>
> Sent: Friday, April 14, 2023 2:42 PM
> To: Li, Pan2 <pan2...@intel.com>
> Cc: juzhe.zh...@rivai.ai; gcc-patches <gcc-patches@gcc.gnu.org>; 
> Kito.cheng <kito.ch...@sifive.com>; Wang, Yanzhang 
> <yanzhang.w...@intel.com>
> Subject: Re: [PATCH v2] RISC-V: Add test cases for the RVV mask insn shortcut.
>
> OK, thanks for the patch :)
>
> On Fri, Apr 14, 2023 at 11:27 AM Li, Pan2 via Gcc-patches 
> <gcc-patches@gcc.gnu.org> wrote:
> >
> > Thanks Juzhe, updated to a new version, [PATCH v3], with even more checks.
> >
> > Pan
> >
> > From: juzhe.zh...@rivai.ai <juzhe.zh...@rivai.ai>
> > Sent: Friday, April 14, 2023 10:46 AM
> > To: Li, Pan2 <pan2...@intel.com>; gcc-patches 
> > <gcc-patches@gcc.gnu.org>
> > Cc: Kito.cheng <kito.ch...@sifive.com>; Wang, Yanzhang 
> > <yanzhang.w...@intel.com>; Li, Pan2 <pan2...@intel.com>
> > Subject: Re: [PATCH v2] RISC-V: Add test cases for the RVV mask insn 
> > shortcut.
> >
> > LGTM. Wait for Kito more comments.
> >
> > ________________________________
> > juzhe.zh...@rivai.ai
> >
> > From: pan2.li <pan2...@intel.com>
> > Date: 2023-04-14 10:45
> > To: gcc-patches <gcc-patches@gcc.gnu.org>
> > CC: juzhe.zhong <juzhe.zh...@rivai.ai>;
> > kito.cheng <kito.ch...@sifive.com>;
> > yanzhang.wang <yanzhang.w...@intel.com>;
> > pan2.li <pan2...@intel.com>
> > Subject: [PATCH v2] RISC-V: Add test cases for the RVV mask insn shortcut.
> > From: Pan Li <pan2...@intel.com>
> >
> > There are several kinds of shortcut codegen for the RVV mask insns. For 
> > example:
> >
> > vmxor vd, va, va => vmclr vd
> >
> > We would like to add more optimizations like this, but first of all we 
> > must add tests for the existing shortcut optimizations, to ensure we 
> > don't break them when adding new ones.
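> >
> > As a concrete sketch (reduced from the full test below, which covers 
> > all seven mask types per insn), the xor-with-self case is:
> >
> >   vbool8_t xor_same (vbool8_t v1, size_t vl) {
> >     return __riscv_vmxor_mm_b8 (v1, v1, vl);  /* expected: vmclr.m  */
> >   }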
> >
> > gcc/testsuite/ChangeLog:
> >
> > * gcc.target/riscv/rvv/base/mask_insn_shortcut.c: New test.
> >
> > Signed-off-by: Pan Li <pan2...@intel.com>
> > ---
> > .../riscv/rvv/base/mask_insn_shortcut.c       | 239 ++++++++++++++++++
> > 1 file changed, 239 insertions(+)
> > create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/mask_insn_shortcut.c
> >
> > diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/mask_insn_shortcut.c b/gcc/testsuite/gcc.target/riscv/rvv/base/mask_insn_shortcut.c
> > new file mode 100644
> > index 00000000000..efc3af39fc3
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/mask_insn_shortcut.c
> > @@ -0,0 +1,239 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-march=rv64gcv -mabi=lp64 -O3" } */
> > +
> > +#include "riscv_vector.h"
> > +
> > +vbool1_t test_shortcut_for_riscv_vmand_case_0(vbool1_t v1, size_t vl) {
> > +  return __riscv_vmand_mm_b1(v1, v1, vl);
> > +}
> > +
> > +vbool2_t test_shortcut_for_riscv_vmand_case_1(vbool2_t v1, size_t vl) {
> > +  return __riscv_vmand_mm_b2(v1, v1, vl);
> > +}
> > +
> > +vbool4_t test_shortcut_for_riscv_vmand_case_2(vbool4_t v1, size_t vl) {
> > +  return __riscv_vmand_mm_b4(v1, v1, vl);
> > +}
> > +
> > +vbool8_t test_shortcut_for_riscv_vmand_case_3(vbool8_t v1, size_t vl) {
> > +  return __riscv_vmand_mm_b8(v1, v1, vl);
> > +}
> > +
> > +vbool16_t test_shortcut_for_riscv_vmand_case_4(vbool16_t v1, size_t vl) {
> > +  return __riscv_vmand_mm_b16(v1, v1, vl);
> > +}
> > +
> > +vbool32_t test_shortcut_for_riscv_vmand_case_5(vbool32_t v1, size_t vl) {
> > +  return __riscv_vmand_mm_b32(v1, v1, vl);
> > +}
> > +
> > +vbool64_t test_shortcut_for_riscv_vmand_case_6(vbool64_t v1, size_t vl) {
> > +  return __riscv_vmand_mm_b64(v1, v1, vl);
> > +}
> > +
> > +vbool1_t test_shortcut_for_riscv_vmnand_case_0(vbool1_t v1, size_t vl) {
> > +  return __riscv_vmnand_mm_b1(v1, v1, vl);
> > +}
> > +
> > +vbool2_t test_shortcut_for_riscv_vmnand_case_1(vbool2_t v1, size_t vl) {
> > +  return __riscv_vmnand_mm_b2(v1, v1, vl);
> > +}
> > +
> > +vbool4_t test_shortcut_for_riscv_vmnand_case_2(vbool4_t v1, size_t vl) {
> > +  return __riscv_vmnand_mm_b4(v1, v1, vl);
> > +}
> > +
> > +vbool8_t test_shortcut_for_riscv_vmnand_case_3(vbool8_t v1, size_t vl) {
> > +  return __riscv_vmnand_mm_b8(v1, v1, vl);
> > +}
> > +
> > +vbool16_t test_shortcut_for_riscv_vmnand_case_4(vbool16_t v1, size_t vl) {
> > +  return __riscv_vmnand_mm_b16(v1, v1, vl);
> > +}
> > +
> > +vbool32_t test_shortcut_for_riscv_vmnand_case_5(vbool32_t v1, size_t vl) {
> > +  return __riscv_vmnand_mm_b32(v1, v1, vl);
> > +}
> > +
> > +vbool64_t test_shortcut_for_riscv_vmnand_case_6(vbool64_t v1, size_t vl) {
> > +  return __riscv_vmnand_mm_b64(v1, v1, vl);
> > +}
> > +
> > +vbool1_t test_shortcut_for_riscv_vmandn_case_0(vbool1_t v1, size_t vl) {
> > +  return __riscv_vmandn_mm_b1(v1, v1, vl);
> > +}
> > +
> > +vbool2_t test_shortcut_for_riscv_vmandn_case_1(vbool2_t v1, size_t vl) {
> > +  return __riscv_vmandn_mm_b2(v1, v1, vl);
> > +}
> > +
> > +vbool4_t test_shortcut_for_riscv_vmandn_case_2(vbool4_t v1, size_t vl) {
> > +  return __riscv_vmandn_mm_b4(v1, v1, vl);
> > +}
> > +
> > +vbool8_t test_shortcut_for_riscv_vmandn_case_3(vbool8_t v1, size_t vl) {
> > +  return __riscv_vmandn_mm_b8(v1, v1, vl);
> > +}
> > +
> > +vbool16_t test_shortcut_for_riscv_vmandn_case_4(vbool16_t v1, size_t vl) {
> > +  return __riscv_vmandn_mm_b16(v1, v1, vl);
> > +}
> > +
> > +vbool32_t test_shortcut_for_riscv_vmandn_case_5(vbool32_t v1, size_t vl) {
> > +  return __riscv_vmandn_mm_b32(v1, v1, vl);
> > +}
> > +
> > +vbool64_t test_shortcut_for_riscv_vmandn_case_6(vbool64_t v1, size_t vl) {
> > +  return __riscv_vmandn_mm_b64(v1, v1, vl);
> > +}
> > +
> > +vbool1_t test_shortcut_for_riscv_vmxor_case_0(vbool1_t v1, size_t vl) {
> > +  return __riscv_vmxor_mm_b1(v1, v1, vl);
> > +}
> > +
> > +vbool2_t test_shortcut_for_riscv_vmxor_case_1(vbool2_t v1, size_t vl) {
> > +  return __riscv_vmxor_mm_b2(v1, v1, vl);
> > +}
> > +
> > +vbool4_t test_shortcut_for_riscv_vmxor_case_2(vbool4_t v1, size_t vl) {
> > +  return __riscv_vmxor_mm_b4(v1, v1, vl);
> > +}
> > +
> > +vbool8_t test_shortcut_for_riscv_vmxor_case_3(vbool8_t v1, size_t vl) {
> > +  return __riscv_vmxor_mm_b8(v1, v1, vl);
> > +}
> > +
> > +vbool16_t test_shortcut_for_riscv_vmxor_case_4(vbool16_t v1, size_t vl) {
> > +  return __riscv_vmxor_mm_b16(v1, v1, vl);
> > +}
> > +
> > +vbool32_t test_shortcut_for_riscv_vmxor_case_5(vbool32_t v1, size_t vl) {
> > +  return __riscv_vmxor_mm_b32(v1, v1, vl);
> > +}
> > +
> > +vbool64_t test_shortcut_for_riscv_vmxor_case_6(vbool64_t v1, size_t vl) {
> > +  return __riscv_vmxor_mm_b64(v1, v1, vl);
> > +}
> > +
> > +vbool1_t test_shortcut_for_riscv_vmor_case_0(vbool1_t v1, size_t vl) {
> > +  return __riscv_vmor_mm_b1(v1, v1, vl);
> > +}
> > +
> > +vbool2_t test_shortcut_for_riscv_vmor_case_1(vbool2_t v1, size_t vl) {
> > +  return __riscv_vmor_mm_b2(v1, v1, vl);
> > +}
> > +
> > +vbool4_t test_shortcut_for_riscv_vmor_case_2(vbool4_t v1, size_t vl) {
> > +  return __riscv_vmor_mm_b4(v1, v1, vl);
> > +}
> > +
> > +vbool8_t test_shortcut_for_riscv_vmor_case_3(vbool8_t v1, size_t vl) {
> > +  return __riscv_vmor_mm_b8(v1, v1, vl);
> > +}
> > +
> > +vbool16_t test_shortcut_for_riscv_vmor_case_4(vbool16_t v1, size_t vl) {
> > +  return __riscv_vmor_mm_b16(v1, v1, vl);
> > +}
> > +
> > +vbool32_t test_shortcut_for_riscv_vmor_case_5(vbool32_t v1, size_t vl) {
> > +  return __riscv_vmor_mm_b32(v1, v1, vl);
> > +}
> > +
> > +vbool64_t test_shortcut_for_riscv_vmor_case_6(vbool64_t v1, size_t vl) {
> > +  return __riscv_vmor_mm_b64(v1, v1, vl);
> > +}
> > +
> > +vbool1_t test_shortcut_for_riscv_vmnor_case_0(vbool1_t v1, size_t vl) {
> > +  return __riscv_vmnor_mm_b1(v1, v1, vl);
> > +}
> > +
> > +vbool2_t test_shortcut_for_riscv_vmnor_case_1(vbool2_t v1, size_t vl) {
> > +  return __riscv_vmnor_mm_b2(v1, v1, vl);
> > +}
> > +
> > +vbool4_t test_shortcut_for_riscv_vmnor_case_2(vbool4_t v1, size_t vl) {
> > +  return __riscv_vmnor_mm_b4(v1, v1, vl);
> > +}
> > +
> > +vbool8_t test_shortcut_for_riscv_vmnor_case_3(vbool8_t v1, size_t vl) {
> > +  return __riscv_vmnor_mm_b8(v1, v1, vl);
> > +}
> > +
> > +vbool16_t test_shortcut_for_riscv_vmnor_case_4(vbool16_t v1, size_t vl) {
> > +  return __riscv_vmnor_mm_b16(v1, v1, vl);
> > +}
> > +
> > +vbool32_t test_shortcut_for_riscv_vmnor_case_5(vbool32_t v1, size_t vl) {
> > +  return __riscv_vmnor_mm_b32(v1, v1, vl);
> > +}
> > +
> > +vbool64_t test_shortcut_for_riscv_vmnor_case_6(vbool64_t v1, size_t vl) {
> > +  return __riscv_vmnor_mm_b64(v1, v1, vl);
> > +}
> > +
> > +vbool1_t test_shortcut_for_riscv_vmorn_case_0(vbool1_t v1, size_t vl) {
> > +  return __riscv_vmorn_mm_b1(v1, v1, vl);
> > +}
> > +
> > +vbool2_t test_shortcut_for_riscv_vmorn_case_1(vbool2_t v1, size_t vl) {
> > +  return __riscv_vmorn_mm_b2(v1, v1, vl);
> > +}
> > +
> > +vbool4_t test_shortcut_for_riscv_vmorn_case_2(vbool4_t v1, size_t vl) {
> > +  return __riscv_vmorn_mm_b4(v1, v1, vl);
> > +}
> > +
> > +vbool8_t test_shortcut_for_riscv_vmorn_case_3(vbool8_t v1, size_t vl) {
> > +  return __riscv_vmorn_mm_b8(v1, v1, vl);
> > +}
> > +
> > +vbool16_t test_shortcut_for_riscv_vmorn_case_4(vbool16_t v1, size_t vl) {
> > +  return __riscv_vmorn_mm_b16(v1, v1, vl);
> > +}
> > +
> > +vbool32_t test_shortcut_for_riscv_vmorn_case_5(vbool32_t v1, size_t vl) {
> > +  return __riscv_vmorn_mm_b32(v1, v1, vl);
> > +}
> > +
> > +vbool64_t test_shortcut_for_riscv_vmorn_case_6(vbool64_t v1, size_t vl) {
> > +  return __riscv_vmorn_mm_b64(v1, v1, vl);
> > +}
> > +
> > +vbool1_t test_shortcut_for_riscv_vmxnor_case_0(vbool1_t v1, size_t vl) {
> > +  return __riscv_vmxnor_mm_b1(v1, v1, vl);
> > +}
> > +
> > +vbool2_t test_shortcut_for_riscv_vmxnor_case_1(vbool2_t v1, size_t vl) {
> > +  return __riscv_vmxnor_mm_b2(v1, v1, vl);
> > +}
> > +
> > +vbool4_t test_shortcut_for_riscv_vmxnor_case_2(vbool4_t v1, size_t vl) {
> > +  return __riscv_vmxnor_mm_b4(v1, v1, vl);
> > +}
> > +
> > +vbool8_t test_shortcut_for_riscv_vmxnor_case_3(vbool8_t v1, size_t vl) {
> > +  return __riscv_vmxnor_mm_b8(v1, v1, vl);
> > +}
> > +
> > +vbool16_t test_shortcut_for_riscv_vmxnor_case_4(vbool16_t v1, size_t vl) {
> > +  return __riscv_vmxnor_mm_b16(v1, v1, vl);
> > +}
> > +
> > +vbool32_t test_shortcut_for_riscv_vmxnor_case_5(vbool32_t v1, size_t vl) {
> > +  return __riscv_vmxnor_mm_b32(v1, v1, vl);
> > +}
> > +
> > +vbool64_t test_shortcut_for_riscv_vmxnor_case_6(vbool64_t v1, size_t vl) {
> > +  return __riscv_vmxnor_mm_b64(v1, v1, vl);
> > +}
> > +
> > +/* { dg-final { scan-assembler-not {vmand\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> > +/* { dg-final { scan-assembler-not {vmnand\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> > +/* { dg-final { scan-assembler-not {vmandn\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> > +/* { dg-final { scan-assembler-not {vmxor\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> > +/* { dg-final { scan-assembler-not {vmor\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> > +/* { dg-final { scan-assembler-not {vmnor\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> > +/* { dg-final { scan-assembler-times {vmorn\.mm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 7 } } */
> > +/* { dg-final { scan-assembler-not {vmxnor\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> > +/* { dg-final { scan-assembler-times {vmclr\.m\s+v[0-9]+} 14 } } */
> > +/* { dg-final { scan-assembler-times {vmset\.m\s+v[0-9]+} 7 } } */
> > --
> > 2.34.1
> >
> >
