On Sat, Mar 7, 2020 at 8:54 PM Andrew Rybchenko <arybche...@solarflare.com> wrote:
>
> On 3/7/20 3:51 PM, Andrew Rybchenko wrote:
> > On 3/6/20 4:37 PM, Jerin Jacob wrote:
> >> On Fri, Mar 6, 2020 at 7:06 PM <xiangxia.m....@gmail.com> wrote:
> >>> From: Tonghao Zhang <xiangxia.m....@gmail.com>
> >>>
> >>> The order of mempool initialization affects the mempool index in
> >>> rte_mempool_ops_table. For example, when building apps with:
> >>>
> >>> $ gcc -lrte_mempool_bucket -lrte_mempool_ring ...
> >>>
> >>> the "bucket" mempool will be registered first, and its index in the
> >>> table is 0, while the index of the "ring" mempool is 1. DPDK uses
> >>> mk/rte.app.mk to build apps, while others, for example Open vSwitch,
> >>> use libdpdk.a or libdpdk.so to build. The mempool link order in DPDK
> >>> and Open vSwitch therefore differs.
> >>>
> >>> A mempool can be shared between primary and secondary processes,
> >>> such as dpdk-pdump and pdump-pmd/Open vSwitch (pdump enabled).
> >>> A crash occurs because dpdk-pdump creates the "ring_mp_mc" ring,
> >>> whose index in its table is 0, while in Open vSwitch index 0 is
> >>> the "bucket" ops. If Open vSwitch uses index 0 to look up the
> >>> mempool ops and allocate memory from the mempool, the crash occurs:
> >>>
> >>> bucket_dequeue (access null and crash)
> >>> rte_mempool_get_ops (should get "ring_mp_mc",
> >>>                      but gets "bucket" mempool)
> >>> rte_mempool_ops_dequeue_bulk
> >>> ...
> >>> rte_pktmbuf_alloc
> >>> rte_pktmbuf_copy
> >>> pdump_copy
> >>> pdump_rx
> >>> rte_eth_rx_burst
> >>>
> >>> To avoid the crash, there are several possible solutions:
> >>>
> >>> * constructor priority: different mempools use different
> >>>   priorities in RTE_INIT, but that is not easy to maintain.
> >>>
> >>> * change mk/rte.app.mk: change the order in mk/rte.app.mk to
> >>>   match libdpdk.a/libdpdk.so, but when adding a new mempool
> >>>   driver in the future, we must keep the order in sync.
> >>>
> >>> * register mempools in sorted order: sort the mempool ops when
> >>>   registering, so the libraries linked will not affect the index
> >>>   in the mempool ops table.
> >>>
> >>> Signed-off-by: Tonghao Zhang <xiangxia.m....@gmail.com>
> >>> Acked-by: Olivier Matz <olivier.m...@6wind.com>
> >>
> >> Acked-by: Jerin Jacob <jer...@marvell.com>
> >
> > The patch is OK, but the fact that the ops index changes during the
> > mempool driver lifetime is frightening. In fact it breaks the
> > rte_mempool_register_ops() return value semantics (read as an API
> > break). The return value is not used in DPDK, but it is a public
> > function. If I'm not mistaken, that should be taken into account.

Yes, we should update the doc. How about this:
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index c90cf31..5a9c8a7 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -904,7 +904,9 @@ int rte_mempool_ops_get_info(const struct rte_mempool *mp,
  * @param ops
  *   Pointer to an ops structure to register.
  * @return
- *   - >=0: Success; return the index of the ops struct in the table.
+ *   - >=0: Success; return the index of the last ops struct in the table.
+ *          The number of ops structs registered is equal to the index
+ *          returned + 1.
  *   - -EINVAL - some missing callbacks while registering ops struct.
  *   - -ENOSPC - the maximum number of ops structs has been reached.
  */
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index b0da096..053f340 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -26,7 +26,11 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
 	return strcmp(m_a->name, m_b->name);
 }

-/* add a new ops struct in rte_mempool_ops_table, return its index. */
+/*
+ * add a new ops struct in rte_mempool_ops_table.
+ * on success, return the index of the last ops
+ * struct in the table.
+ */
 int
 rte_mempool_register_ops(const struct rte_mempool_ops *h)
 {

> > Also I remember patches which warn about the above behaviour in the
> > documentation. If the behaviour changes, the corresponding
> > documentation must be updated.
>
> One more point. If the patch is finally accepted, it definitely
> deserves a few lines in the release notes.

OK. Should a separate patch be sent before the DPDK 20.05 release?

--
Thanks,
Tonghao