On Tue, Jan 26, 2016 at 05:25:50PM +0000, David Hunt wrote:
> Hi all on the list.
>
> Here's a proposed patch for an external mempool manager.
>
> The External Mempool Manager is an extension to the mempool API that allows
> users to add and use an external mempool manager, which allows external
> memory subsystems such as external hardware memory management systems and
> software-based memory allocators to be used with DPDK.
I like this approach. It will be useful for external hardware memory pool
managers. BTW, did you encounter any performance impact from changing to the
function-pointer-based approach?

> The existing API to the internal DPDK mempool manager will remain unchanged
> and will be backward compatible.
>
> There are two aspects to the external mempool manager:
> 1. Adding the code for your new mempool handler. This is achieved by adding
>    a new mempool handler source file into the librte_mempool library, and
>    using the REGISTER_MEMPOOL_HANDLER macro.
> 2. Using the new API to call rte_mempool_create_ext to create a new mempool,
>    using the name parameter to identify which handler to use.
>
> New API calls added:
> 1. A new mempool 'create' function which accepts a mempool handler name.
> 2. A new mempool 'rte_get_mempool_handler' function which accepts a mempool
>    handler name, and returns the index of the relevant set of callbacks for
>    that mempool handler.
>
> Several external mempool managers may be used in the same application. A new
> mempool can then be created by using the new 'create' function, providing
> the mempool handler name to point the mempool to the relevant mempool
> manager callback structure.
>
> The old 'create' function can still be called by legacy programs, and will
> internally work out the mempool handler based on the flags provided (single
> producer, single consumer, etc.). By default, handlers are created
> internally to implement the built-in DPDK mempool manager and mempool types.
>
> The external mempool manager needs to provide the following functions:
> 1. alloc - allocates the mempool memory, and adds each object onto a ring
> 2. put - puts an object back into the mempool once an application has
>    finished with it
> 3. get - gets an object from the mempool for use by the application
> 4. get_count - gets the number of available objects in the mempool
> 5. free - frees the mempool memory
>
> Every time a get/put/get_count is called from the application/PMD, the
> callback for that mempool is called. These functions are in the fast path,
> and any unoptimised handlers may limit performance.
>
> The new APIs are as follows:
>
> 1. rte_mempool_create_ext
>
>    struct rte_mempool *
>    rte_mempool_create_ext(const char *name, unsigned n,
>            unsigned cache_size, unsigned private_data_size,
>            int socket_id, unsigned flags,
>            const char *handler_name);
>
> 2. rte_get_mempool_handler
>
>    int16_t
>    rte_get_mempool_handler(const char *name);

Do we need the above public API? In any case we need the rte_mempool*
pointer to operate on mempools (which has the index anyway). Maybe a similar
functional API with a different name/return type would be better, to figure
out whether a given "name" is registered or not, for an ethernet driver which
has a dependency on a particular HW pool manager.

> Please see rte_mempool.h for further information on the parameters.
>
> The important thing to note is that the mempool handler is passed by name
> to rte_mempool_create_ext, and that in turn calls rte_get_mempool_handler
> to get the handler index, which is stored in the rte_mempool structure.
> This allows multiple processes to use the same mempool, as the function
> pointers are accessed via the handler index.
> The mempool handler structure contains callbacks to the implementation of
> the handler, and is set up for registration as follows:
>
>    static struct rte_mempool_handler handler_sp_mc = {
>        .name = "ring_sp_mc",
>        .alloc = rte_mempool_common_ring_alloc,
>        .put = common_ring_sp_put,
>        .get = common_ring_mc_get,
>        .get_count = common_ring_get_count,
>        .free = common_ring_free,
>    };
>
> And then the following macro will register the handler in the array of
> handlers:
>
>    REGISTER_MEMPOOL_HANDLER(handler_mp_mc);
>
> For an example of a simple malloc-based mempool manager, see
> lib/librte_mempool/custom_mempool.c
>
> For an example of API usage, please see app/test/test_ext_mempool.c, which
> implements a rudimentary mempool manager using simple mallocs for each
> mempool object (custom_mempool.c).
>
> David Hunt (5):
>   mempool: add external mempool manager support
>   mempool: add stack (lifo) based external mempool handler
>   mempool: add custom external mempool handler example
>   mempool: add autotest for external mempool custom example
>   mempool: allow rte_pktmbuf_pool_create switch between mempool handlers
>
>  app/test/Makefile                         |   1 +
>  app/test/test_ext_mempool.c               | 470 ++++++++++++++++++++++++++++++
>  app/test/test_mempool_perf.c              |   2 -
>  lib/librte_mbuf/rte_mbuf.c                |  11 +
>  lib/librte_mempool/Makefile               |   3 +
>  lib/librte_mempool/custom_mempool.c       | 158 ++++++++++
>  lib/librte_mempool/rte_mempool.c          | 208 +++++++++----
>  lib/librte_mempool/rte_mempool.h          | 205 +++++++++++--
>  lib/librte_mempool/rte_mempool_default.c  | 229 +++++++++++++++
>  lib/librte_mempool/rte_mempool_internal.h |  70 +++++
>  lib/librte_mempool/rte_mempool_stack.c    | 162 ++++++++++
>  11 files changed, 1430 insertions(+), 89 deletions(-)
>  create mode 100644 app/test/test_ext_mempool.c
>  create mode 100644 lib/librte_mempool/custom_mempool.c
>  create mode 100644 lib/librte_mempool/rte_mempool_default.c
>  create mode 100644 lib/librte_mempool/rte_mempool_internal.h
>  create mode 100644 lib/librte_mempool/rte_mempool_stack.c
>
> --
> 1.9.3