> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula
> [mailto:pbhagavat...@caviumnetworks.com]
> Sent: Sunday, December 3, 2017 22:21
> To: Herbert Guan <herbert.g...@arm.com>; Jianbo Liu
> <jianbo....@arm.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] arch/arm: optimization for memcpy on
> AArch64
>
> On Sun, Dec 03, 2017 at 12:38:35PM +0000, Herbert Guan wrote:
> > Pavan,
> >
> > Thanks for review and comments.  Please find my comments inline below.
> >
> > Best regards,
> > Herbert
> >
> <snip>
> > > There is an existing flag for arm32 to enable neon based memcpy,
> > > RTE_ARCH_ARM_NEON_MEMCPY; we could reuse that here as restrict does
> > > the same.
> > >
> > This implementation is actually not using ARM NEON instructions, so the
> > existing flag does not describe the option exactly.  It would be good if
> > the existing flag were "RTE_ARCH_ARM_MEMCPY", but unfortunately it might
> > be too late now to get the flags aligned.
> >
>
> Correct me if I'm wrong, but doesn't restrict tell the compiler to do SIMD
> optimization?
> Anyway, can we put RTE_ARCH_ARM64_MEMCPY into config/common_base
> as CONFIG_RTE_ARCH_ARM64_MEMCPY=n so that it would be easier to
> enable/disable?
>

Using 'restrict' leads the compiler to generate code with ldp/stp 
instructions.  These instructions actually belong to the "data transfer 
instructions", though they load/store a pair of registers.  'ld1/st1' are 
SIMD (NEON) instructions.

I can add CONFIG_RTE_ARCH_ARM64_MEMCPY=n into common_armv8a_linuxapp in the 
new version, as you've suggested.
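The effect above can be sketched with a minimal, hypothetical example 
(copy32 is my name for illustration, not a function from the patch).  With 
restrict-qualified pointers the compiler may assume the regions do not 
overlap, so GCC on AArch64 typically lowers each fixed 16-byte copy below 
to an ldp/stp register pair rather than SIMD ld1/st1:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: 'restrict' promises no overlap between dst and src,
 * which lets the compiler emit paired loads/stores (ldp/stp) on AArch64
 * instead of a conservative byte-wise copy. */
static inline void
copy32(uint8_t *restrict dst, const uint8_t *restrict src)
{
	memcpy(dst, src, 16);            /* typically one ldp + one stp */
	memcpy(dst + 16, src + 16, 16);  /* and a second pair */
}
```

The generated instructions can be confirmed with objdump on a cross-compiled
object file.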

> > > > +#include <rte_common.h>
> > > > +#include <rte_branch_prediction.h>
> > > > +
> > > >
> > > > +/**********************************************************************
> > > > + * The memory copy performance differs on different AArch64
> > > > + * micro-architectures. And the most recent glibc (e.g. 2.23 or
> > > > + * later) can provide a better memcpy() performance compared to old
> > > > + * glibc versions. It's always suggested to use a more recent glibc
> > > > + * if possible, from which the entire system can get benefit.
> > > > + *
> > > > + * This implementation improves memory copy on some aarch64
> > > > + * micro-architectures, when an old glibc (e.g. 2.19, 2.17...) is
> > > > + * being used. It is disabled by default and needs
> > > > + * "RTE_ARCH_ARM64_MEMCPY" defined to activate. It's not always
> > > > + * providing better performance than memcpy() so users need to run
> > > > + * unit test "memcpy_perf_autotest" and customize parameters in
> > > > + * customization section below for best performance.
> > > > + *
> > > > + * Compiler version will also impact the rte_memcpy() performance.
> > > > + * It's observed on some platforms and with the same code, GCC 7.2.0
> > > > + * compiled binaries can provide better performance than GCC 4.8.5
> > > > + * compiled binaries.
> > > > + *
> > > > +**********************************************************************/
> > > > +
> > > > +/**************************************
> > > > + * Beginning of customization section
> > > > +**************************************/
> > > > +#define ALIGNMENT_MASK 0x0F
> > > > +#ifndef RTE_ARCH_ARM64_MEMCPY_STRICT_ALIGN
> > > > +// Only src unalignment will be treated as unaligned copy
> > > > +#define IS_UNALIGNED_COPY(dst, src) ((uintptr_t)(dst) & ALIGNMENT_MASK)
> > >
> > > We can use existing `rte_is_aligned` function instead.
> >
> > The exising 'rte_is_aligned()' inline function is defined in a relatively
> complex way, and there will be more instructions generated (using GCC
> 7.2.0):
> >
> > 0000000000000000 <align_check_rte>:   // using rte_is_aligned()
> >    0:   91003c01        add     x1, x0, #0xf
> >    4:   927cec21        and     x1, x1, #0xfffffffffffffff0
> >    8:   eb01001f        cmp     x0, x1
> >    c:   1a9f07e0        cset    w0, ne  // ne = any
> >   10:   d65f03c0        ret
> >   14:   d503201f        nop
> >
> > 0000000000000018 <align_check_simp>:   // using above expression
> >   18:   12000c00        and     w0, w0, #0xf
> >   1c:   d65f03c0        ret
> >
> > So to get better performance, it's better to use the simple logic.
>
> Agreed, I have noticed that too; maybe we could change rte_is_aligned to
> be simpler (not in this patch).
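For reference, the simple check under discussion is just a mask test, which 
can be tried standalone (constants as shown in the patch hunk above):

```c
#include <assert.h>
#include <stdint.h>

#define ALIGNMENT_MASK 0x0F
/* Nonzero (the low bits of dst) when dst is not 16-byte aligned.
 * Note src alignment is ignored in this non-strict variant. */
#define IS_UNALIGNED_COPY(dst, src) ((uintptr_t)(dst) & ALIGNMENT_MASK)
```

A single and instruction is enough to evaluate it, matching the disassembly 
of align_check_simp above.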
>
> <snip>
> > > Would doing this still benifit if size is compile time constant?
> > > i.e. when
> > > __builtin_constant_p(n) is true.
> > >
> > Yes, performance margin is observed if size is compile time constant on
> some tested platforms.
> >
>
> Sorry, I didn't get you; which is better? When the size is a compile-time
> constant, is using libc memcpy better, or is going with the restrict
> implementation better?
>
> If the former then we could do what 32bit rte_memcpy is using i.e.
>
> #define rte_memcpy(dst, src, n)              \
>         __extension__ ({                     \
>         (__builtin_constant_p(n)) ?          \
>         memcpy((dst), (src), (n)) :          \
>         rte_memcpy_func((dst), (src), (n)); })
>
Per my test, it usually goes in the same direction: if the variable-size case 
gets improved performance, then hopefully the compile-time-constant case will 
be improved as well, and vice versa.  The percentage might be different.  So 
in this patch, the property of the size parameter (variable or compile-time 
constant) is not checked.
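For completeness, here is a self-contained sketch of the 32-bit style 
dispatch Pavan quotes above, were it adopted; rte_memcpy_func below is only 
a stand-in for the hand-tuned variable-size path:

```c
#include <assert.h>
#include <string.h>

/* Stand-in for the hand-tuned variable-size copy routine. */
static void *
rte_memcpy_func(void *dst, const void *src, size_t n)
{
	return memcpy(dst, src, n);
}

/* Compile-time-constant sizes go straight to libc memcpy, which the
 * compiler can inline and unroll; runtime sizes take the custom path.
 * Uses a GCC statement expression, as in the arm32 implementation. */
#define rte_memcpy(dst, src, n)              \
	__extension__ ({                     \
	(__builtin_constant_p(n)) ?          \
	memcpy((dst), (src), (n)) :          \
	rte_memcpy_func((dst), (src), (n)); })
```

Both branches return the destination pointer, so the macro keeps the 
memcpy() return convention either way.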

> Regards,
> Pavan.

Thanks,
Herbert
