Here's the latest version of the Mempool Handler feature (previously known
as the External Mempool Manager).
It's rebased on top of the latest head as of 17/6/2016, including Olivier's
35-part patch series on mempool re-org [1]

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

v14 changes:

 * set MEMPOOL_F_RING_CREATED flag after rte_mempool_ring_create() is
   called.
 * Changed name of feature from "external mempool manager" to "mempool
   handler" and updated comments and release notes accordingly.
 * Added a comment for the newly added pool_config param in
   rte_mempool_set_ops_byname.

v13 changes:

 * Added extra opaque data (pool_config) to the mempool struct for mempool
   configuration by the ops functions. For example, this can be used to
   pass device names or device flags to the underlying alloc function.
 * Added pool_config param to rte_mempool_set_ops_byname()

v12 changes:

 * Fixed a comment (function param h -> ops)
 * fixed a typo (callbacki)

v11 changes:

 * Fixed comments (added '.' where needed for consistency)
 * removed ABI breakage notice for mempool manager in deprecation.rst
 * Added description of the external mempool manager functionality to
   doc/guides/prog_guide/mempool_lib.rst (John Mc reviewed)
 * renamed rte_mempool_default.c to rte_mempool_ring.c

v10 changes:

 * changed the _put/_get op names to _enqueue/_dequeue to be consistent
   with the function names
 * some rte_errno cleanup
 * comment tweaks about when to set pool_data
 * removed an un-needed check for ops->alloc == NULL

v9 changes:

 * added a check for NULL alloc in rte_mempool_ops_register
 * rte_mempool_alloc_t now returns int instead of void*
 * fixed some comment typos
 * removed some unneeded typecasts
 * changed a return NULL to return -EEXIST in rte_mempool_ops_register
 * fixed the rte_mempool_version.map file so it builds OK as shared libs
 * moved flags check from rte_mempool_create_empty to rte_mempool_create

v8 changes:

 * merged the first three patches in the series into one.
 * changed parameters to the ops callbacks to all be an rte_mempool pointer
   rather than a pointer to opaque data or a uint64.
 * comment fixes.
 * fixed parameter to the _free function (was inconsistent).
 * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED

v7 changes:

 * Changed rte_mempool_handler_table to rte_mempool_ops_table
 * Changed handler_idx to ops_index in the rte_mempool struct
 * Reworked comments in rte_mempool.h around the ops functions
 * Changed rte_mempool_handler.c to rte_mempool_ops.c
 * Changed all functions containing _handler_ to _ops_
 * Now there is no mention of 'handler' left
 * Other small changes out of review on the mailing list

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * renamed functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they contain
   function pointers (security)
 * split out the default external mempool handler into a separate patch
   for easier review

v5 changes:

 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):

 * remove the rte_mempool_create_ext() function.
   To change the handler, the user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments,
   duplicating the doxygen comments
 * change the API of rte_mempool_alloc_t: only the mempool pointer is
   required, as all information is available in it
 * change the API of rte_mempool_free_t: remove the return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having one header file (rte_mempool.h), or it
   would have generated cross-dependency issues.
 * remove the now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway
   due to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (the global handler has to be declared
   in the .map file)
 * rationalize #include order
 * remove the unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq
   (comment from Yuanhan)
 * test the ext mempool handler in the same file as the standard mempool
   tests, avoiding code duplication
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting the default mbuf pool handler
 * fix some doxygen comments

v3 changes:

 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached
   operation
 * removed the stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:

 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is hopefully
   cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library in
   a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The Mempool Handler feature is an extension to the mempool API that allows
users to add and use an alternative mempool handler, which allows external
memory subsystems such as external hardware memory management systems and
software-based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool handler will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing.

There are two aspects to mempool handlers.

 1. Adding the code for your new mempool operations (ops). This is achieved
    by adding a new mempool ops source file into the librte_mempool
    library, and using the REGISTER_MEMPOOL_OPS macro.
 2. Using the new API to call rte_mempool_create_empty and
    rte_mempool_set_ops_byname to create a new mempool, using the name
    parameter to identify which ops to use.

New API calls added:

 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_ops_byname(), which sets the mempool's ops (functions)
 3. rte_mempool_populate_default() and rte_mempool_populate_anon()
    functions, which populate the mempool using the relevant ops

Several mempool handlers may be used in the same application. A new mempool
can then be created by using the new rte_mempool_create_empty function,
then calling rte_mempool_set_ops_byname to point the mempool to the
relevant mempool handler callback (ops) structure; a minimal usage sketch
is shown below.
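For illustration only, a sketch of that flow might look as follows (not
part of the patch set; it assumes EAL is already initialised, and the pool
name, object size/count, cache size and the ops name passed in are just
example values):

    #include <rte_mempool.h>
    #include <rte_lcore.h>

    /* Example values: 8192 objects of 2048 bytes, 256-deep per-lcore cache. */
    static struct rte_mempool *
    create_pool_with_handler(const char *ops_name)
    {
        struct rte_mempool *mp;

        mp = rte_mempool_create_empty("example_pool", 8192, 2048,
                                      256, 0, rte_socket_id(), 0);
        if (mp == NULL)
            return NULL;

        /* Select the handler by name; pool_config is NULL here as the
         * ring-based handlers take no extra configuration. */
        if (rte_mempool_set_ops_byname(mp, ops_name, NULL) < 0) {
            rte_mempool_free(mp);
            return NULL;
        }

        /* Populate the mempool; this invokes the handler's alloc callback. */
        if (rte_mempool_populate_default(mp) < 0) {
            rte_mempool_free(mp);
            return NULL;
        }
        return mp;
    }

Passing e.g. "ring_sp_mc" (the name used in the registration example later
in this mail) would select that handler's ops for the new mempool.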
Legacy applications will continue to use the old rte_mempool_create API
call, which uses a ring-based mempool handler by default. These
applications will need to be modified to use a new mempool handler.

A mempool handler needs to provide the following functions:

 1. alloc     - allocates the mempool memory, and adds each object onto a
                ring
 2. enqueue   - puts an object back into the mempool once an application
                has finished with it
 3. dequeue   - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time an enqueue/dequeue/get_count is called from the application/PMD,
the callback for that mempool is called. These functions are in the fast
path, and any unoptimised ops may limit performance.

The new APIs are as follows:

1. rte_mempool_create_empty()

   struct rte_mempool *
   rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
                            unsigned cache_size, unsigned private_data_size,
                            int socket_id, unsigned flags);

2. rte_mempool_set_ops_byname()

   int
   rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
                              void *pool_config);

3. rte_mempool_populate_default()

   int
   rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

   int
   rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.

The important thing to note is that the mempool ops struct is passed by
name to rte_mempool_set_ops_byname, which looks through the ops struct
array to get the ops_index, which is then stored in the rte_mempool
structure. This allows multiple processes to use the same mempool, as the
function pointers are accessed via the ops index.

The mempool ops structure contains callbacks to the implementation of the
ops functions, and is set up for registration as follows:

    static const struct rte_mempool_ops ops_sp_mc = {
        .name = "ring_sp_mc",
        .alloc = rte_mempool_common_ring_alloc,
        .enqueue = common_ring_sp_enqueue,
        .dequeue = common_ring_mc_dequeue,
        .get_count = common_ring_get_count,
        .free = common_ring_free,
    };

The following macro will then register the ops in the array of ops
structures:

    REGISTER_MEMPOOL_OPS(ops_sp_mc);

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool handler using simple
mallocs for each mempool object. This file also contains the callbacks and
self-registration for the new handler; an illustrative skeleton of a custom
handler is sketched after the patch summary below.

David Hunt (2):
  mempool: support mempool handler operations
  mbuf: make default mempool ops configurable at build

Olivier Matz (1):
  app/test: test mempool handler
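For reference, below is a minimal, illustrative skeleton of a custom
handler. It is not taken from the patch set: the "custom_example" name, the
struct layout and the exact callback prototypes are assumptions based on
the descriptions above and on rte_mempool.h, so they may need adjusting.
It keeps free objects in a spinlock-protected array whose pointer is stored
in mp->pool_data by the alloc callback:

    #include <stdlib.h>
    #include <errno.h>
    #include <rte_mempool.h>
    #include <rte_spinlock.h>

    /* Handler-private object store: a locked array of object pointers. */
    struct custom_pool {
        rte_spinlock_t lock;
        unsigned count;
        unsigned size;
        void *objs[];           /* room for mp->size pointers */
    };

    static int
    custom_alloc(struct rte_mempool *mp)
    {
        struct custom_pool *p;

        p = malloc(sizeof(*p) + mp->size * sizeof(void *));
        if (p == NULL)
            return -ENOMEM;
        rte_spinlock_init(&p->lock);
        p->count = 0;
        p->size = mp->size;
        mp->pool_data = p;      /* private data lives in pool_data */
        return 0;
    }

    static void
    custom_free(struct rte_mempool *mp)
    {
        free(mp->pool_data);
    }

    static int
    custom_enqueue(struct rte_mempool *mp, void * const *obj_table,
                   unsigned n)
    {
        struct custom_pool *p = mp->pool_data;
        unsigned i;

        rte_spinlock_lock(&p->lock);
        if (p->count + n > p->size) {
            rte_spinlock_unlock(&p->lock);
            return -ENOBUFS;
        }
        for (i = 0; i < n; i++)
            p->objs[p->count++] = obj_table[i];
        rte_spinlock_unlock(&p->lock);
        return 0;
    }

    static int
    custom_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
    {
        struct custom_pool *p = mp->pool_data;
        unsigned i;

        rte_spinlock_lock(&p->lock);
        if (p->count < n) {
            rte_spinlock_unlock(&p->lock);
            return -ENOENT;
        }
        for (i = 0; i < n; i++)
            obj_table[i] = p->objs[--p->count];
        rte_spinlock_unlock(&p->lock);
        return 0;
    }

    static unsigned
    custom_get_count(const struct rte_mempool *mp)
    {
        const struct custom_pool *p = mp->pool_data;

        return p->count;
    }

    static const struct rte_mempool_ops custom_ops = {
        .name = "custom_example",
        .alloc = custom_alloc,
        .free = custom_free,
        .enqueue = custom_enqueue,
        .dequeue = custom_dequeue,
        .get_count = custom_get_count,
    };

    REGISTER_MEMPOOL_OPS(custom_ops);

An application would then select it with
rte_mempool_set_ops_byname(mp, "custom_example", NULL) before populating
the mempool.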