On Tue, Nov 03, 2015 at 03:57:24PM +0000, Ananyev, Konstantin wrote:
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jerin Jacob
> > Sent: Tuesday, November 03, 2015 3:52 PM
> > To: dev at dpdk.org
> > Subject: [dpdk-dev] [RFC ][PATCH] Introduce
> > RTE_ARCH_STRONGLY_ORDERED_MEM_OPS configuration parameter
> >
> > rte_ring implementation needs explicit memory barrier
> > in weakly ordered architecture like ARM unlike
> > strongly ordered architecture like X86
> >
> > Introducing RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> > configuration to abstract such dependency so that other
> > weakly ordered architectures can reuse this infrastructure.
>
> Looks a bit clumsy.
> Please try to follow this suggestion instead:
> http://dpdk.org/ml/archives/dev/2015-October/025505.html
Makes sense. Do we agree on a macro, defined based upon
RTE_ARCH_STRONGLY_ORDERED_MEM_OPS, to remove the clumsy #ifdefs
everywhere?

Jerin

>
> Konstantin
>
> >
> > Signed-off-by: Jerin Jacob <jerin.jacob at caviumnetworks.com>
> > ---
> >  config/common_bsdapp                         |  5 +++++
> >  config/common_linuxapp                       |  5 +++++
> >  config/defconfig_arm64-armv8a-linuxapp-gcc   |  1 +
> >  config/defconfig_arm64-thunderx-linuxapp-gcc |  1 +
> >  lib/librte_ring/rte_ring.h                   | 20 ++++++++++++++++++++
> >  5 files changed, 32 insertions(+)
> >
> > diff --git a/config/common_bsdapp b/config/common_bsdapp
> > index b37dcf4..c8d1f63 100644
> > --- a/config/common_bsdapp
> > +++ b/config/common_bsdapp
> > @@ -79,6 +79,11 @@ CONFIG_RTE_FORCE_INTRINSICS=n
> >  CONFIG_RTE_ARCH_STRICT_ALIGN=n
> >
> >  #
> > +# Machine has strongly-ordered memory operations on normal memory like x86
> > +#
> > +CONFIG_RTE_ARCH_STRONGLY_ORDERED_MEM_OPS=y
> > +
> > +#
> >  # Compile to share library
> >  #
> >  CONFIG_RTE_BUILD_SHARED_LIB=n
> > diff --git a/config/common_linuxapp b/config/common_linuxapp
> > index 0de43d5..d040a74 100644
> > --- a/config/common_linuxapp
> > +++ b/config/common_linuxapp
> > @@ -79,6 +79,11 @@ CONFIG_RTE_FORCE_INTRINSICS=n
> >  CONFIG_RTE_ARCH_STRICT_ALIGN=n
> >
> >  #
> > +# Machine has strongly-ordered memory operations on normal memory like x86
> > +#
> > +CONFIG_RTE_ARCH_STRONGLY_ORDERED_MEM_OPS=y
> > +
> > +#
> >  # Compile to share library
> >  #
> >  CONFIG_RTE_BUILD_SHARED_LIB=n
> > diff --git a/config/defconfig_arm64-armv8a-linuxapp-gcc b/config/defconfig_arm64-armv8a-linuxapp-gcc
> > index 6ea38a5..5289152 100644
> > --- a/config/defconfig_arm64-armv8a-linuxapp-gcc
> > +++ b/config/defconfig_arm64-armv8a-linuxapp-gcc
> > @@ -37,6 +37,7 @@ CONFIG_RTE_ARCH="arm64"
> >  CONFIG_RTE_ARCH_ARM64=y
> >  CONFIG_RTE_ARCH_64=y
> >  CONFIG_RTE_ARCH_ARM_NEON=y
> > +CONFIG_RTE_ARCH_STRONGLY_ORDERED_MEM_OPS=n
> >
> >  CONFIG_RTE_FORCE_INTRINSICS=y
> >
> > diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
> > index e8fccc7..79fa9e6 100644
> > --- a/config/defconfig_arm64-thunderx-linuxapp-gcc
> > +++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
> > @@ -37,6 +37,7 @@ CONFIG_RTE_ARCH="arm64"
> >  CONFIG_RTE_ARCH_ARM64=y
> >  CONFIG_RTE_ARCH_64=y
> >  CONFIG_RTE_ARCH_ARM_NEON=y
> > +CONFIG_RTE_ARCH_STRONGLY_ORDERED_MEM_OPS=n
> >
> >  CONFIG_RTE_FORCE_INTRINSICS=y
> >
> > diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
> > index af68888..1ccd186 100644
> > --- a/lib/librte_ring/rte_ring.h
> > +++ b/lib/librte_ring/rte_ring.h
> > @@ -457,7 +457,12 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> >
> >  	/* write entries in ring */
> >  	ENQUEUE_PTRS();
> > +
> > +#ifdef RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> >  	rte_compiler_barrier();
> > +#else
> > +	rte_wmb();
> > +#endif
> >
> >  	/* if we exceed the watermark */
> >  	if (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
> > @@ -552,7 +557,12 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> >
> >  	/* write entries in ring */
> >  	ENQUEUE_PTRS();
> > +
> > +#ifdef RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> >  	rte_compiler_barrier();
> > +#else
> > +	rte_wmb();
> > +#endif
> >
> >  	/* if we exceed the watermark */
> >  	if (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
> > @@ -643,7 +653,12 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
> >
> >  	/* copy in table */
> >  	DEQUEUE_PTRS();
> > +
> > +#ifdef RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> >  	rte_compiler_barrier();
> > +#else
> > +	rte_rmb();
> > +#endif
> >
> >  	/*
> >  	 * If there are other dequeues in progress that preceded us,
> > @@ -727,7 +742,12 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
> >
> >  	/* copy in table */
> >  	DEQUEUE_PTRS();
> > +
> > +#ifdef RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> >  	rte_compiler_barrier();
> > +#else
> > +	rte_rmb();
> > +#endif
> >
> >  	__RING_STAT_ADD(r, deq_success, n);
> >  	r->cons.tail = cons_next;
> > --
> > 2.1.0
>