[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-16 Thread Hunt, David


On 16/6/2016 9:58 AM, Olivier MATZ wrote:
>>>
>>> So I don't think we should have more cache misses whether it's
>>> placed at the beginning or at the end. Maybe I'm missing something...
>>>
>>> I still believe it's better to group the 2 fields as they are
>>> tightly linked together. It could be at the end if you see better
>>> performance.
>>>
>>
>> OK, I'll leave at the end because of the performance hit.
>
> Sorry, my message was not clear.
> I mean, having both at the end. Do you see a performance
> impact in that case?
>

I ran several more tests, and the average drop I'm seeing on an older 
server has reduced to 1% (local cache use-case), with 0% change on 
a newer Haswell server, so I think at this stage we're safe to put it up 
alongside pool_data. There was also a 0% reduction when I moved both to the 
bottom of the struct. So on the Haswell, it seems to have minimal impact 
regardless of where they go.

I'll post the patch up soon.

Regards,
Dave.


[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-16 Thread Olivier MATZ
>>
>> So I don't think we should have more cache misses whether it's
>> placed at the beginning or at the end. Maybe I'm missing something...
>>
>> I still believe it's better to group the 2 fields as they are
>> tightly linked together. It could be at the end if you see better
>> performance.
>>
>
> OK, I'll leave at the end because of the performance hit.

Sorry, my message was not clear.
I mean, having both at the end. Do you see a performance
impact in that case?


Regards
Olivier


[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-16 Thread Olivier MATZ


On 06/16/2016 09:47 AM, Hunt, David wrote:
>
>
> On 15/6/2016 5:40 PM, Olivier MATZ wrote:
>>
>>
>> On 06/15/2016 06:34 PM, Hunt, David wrote:
>>>
>>>
>>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
 [...]

 The opaque pointer would be saved in mempool structure, and used
 when the mempool is populated (calling mempool_ops_alloc).
 The type of the structure pointed to by the opaque pointer has to be defined
 (and documented) in each mempool_ops manager.


 Olivier
>>>
>>>
>>> OK, just to be sure before I post another patchset.
>>>
>>> For the rte_mempool_struct:
>>>  struct rte_mempool_memhdr_list mem_list; /**< List of memory
>>> chunks */
>>> +   void *ops_args;  /**< optional args for ops
>>> alloc. */
>>>
>>> (at the end of the struct, as it's just on the control path, not to
>>> affect fast path)
>>
>> Hmm, I would put it just after pool_data.
>>
>
> When I move it to just after pool data, the performance of the
> mempool_perf_autotest drops by 2% on my machine for the local cache tests.
> I think I should leave it where I suggested.

I don't really see what you call control path and data path here.
For me, all the fields in mempool structure are not modified once
the mempool is initialized.

http://dpdk.org/browse/dpdk/tree/lib/librte_mempool/rte_mempool.h?id=ce94a51ff05c0a4b63177f8a314feb5d19992036#n201

So I don't think we should have more cache misses whether it's
placed at the beginning or at the end. Maybe I'm missing something...

I still believe it's better to group the 2 fields as they are
tightly linked together. It could be at the end if you see better
performance.



[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-16 Thread Hunt, David


On 16/6/2016 9:47 AM, Olivier MATZ wrote:
>
>
> On 06/16/2016 09:47 AM, Hunt, David wrote:
>>
>>
>> On 15/6/2016 5:40 PM, Olivier MATZ wrote:
>>>
>>>
>>> On 06/15/2016 06:34 PM, Hunt, David wrote:


 On 15/6/2016 1:03 PM, Olivier MATZ wrote:
> [...]
>
> The opaque pointer would be saved in mempool structure, and used
> when the mempool is populated (calling mempool_ops_alloc).
> The type of the structure pointed to by the opaque pointer has to be defined
> (and documented) in each mempool_ops manager.
>
>
> Olivier


 OK, just to be sure before I post another patchset.

 For the rte_mempool_struct:
  struct rte_mempool_memhdr_list mem_list; /**< List of memory
 chunks */
 +   void *ops_args;  /**< optional args for ops
 alloc. */

 (at the end of the struct, as it's just on the control path, not to
 affect fast path)
>>>
>>> Hmm, I would put it just after pool_data.
>>>
>>
>> When I move it to just after pool data, the performance of the
>> mempool_perf_autotest drops by 2% on my machine for the local cache 
>> tests.
>> I think I should leave it where I suggested.
>
> I don't really see what you call control path and data path here.
> For me, all the fields in mempool structure are not modified once
> the mempool is initialized.
>
> http://dpdk.org/browse/dpdk/tree/lib/librte_mempool/rte_mempool.h?id=ce94a51ff05c0a4b63177f8a314feb5d19992036#n201
>  
>
>
> So I don't think we should have more cache misses whether it's
> placed at the beginning or at the end. Maybe I'm missing something...
>
> I still believe it's better to group the 2 fields as they are
> tightly linked together. It could be at the end if you see better
> performance.
>

OK, I'll leave at the end because of the performance hit.

Regards,
David.


[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-16 Thread Hunt, David


On 15/6/2016 5:40 PM, Olivier MATZ wrote:
>
>
> On 06/15/2016 06:34 PM, Hunt, David wrote:
>>
>>
>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>>> [...]
>>>
>>> The opaque pointer would be saved in mempool structure, and used
>>> when the mempool is populated (calling mempool_ops_alloc).
>>> The type of the structure pointed to by the opaque pointer has to be defined
>>> (and documented) in each mempool_ops manager.
>>>
>>>
>>> Olivier
>>
>>
>> OK, just to be sure before I post another patchset.
>>
>> For the rte_mempool_struct:
>>  struct rte_mempool_memhdr_list mem_list; /**< List of memory
>> chunks */
>> +   void *ops_args;  /**< optional args for ops
>> alloc. */
>>
>> (at the end of the struct, as it's just on the control path, not to
>> affect fast path)
>
> Hmm, I would put it just after pool_data.
>

When I move it to just after pool data, the performance of the 
mempool_perf_autotest drops by 2% on my machine for the local cache tests.
I think I should leave it where I suggested.

Regards,
David.



[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-16 Thread Hunt, David
Hi Shreyansh,

On 16/6/2016 5:35 AM, Shreyansh Jain wrote:
> Though I am late to the discussion...
>
>> -----Original Message-----
>> From: Olivier MATZ [mailto:olivier.matz at 6wind.com]
>> Sent: Wednesday, June 15, 2016 10:10 PM
>> To: Hunt, David ; Jan Viktorin
>> 
>> Cc: dev at dpdk.org; jerin.jacob at caviumnetworks.com; Shreyansh Jain
>> 
>> Subject: Re: [PATCH v12 0/3] mempool: add external mempool manager
>>
>>
>>
>> On 06/15/2016 06:34 PM, Hunt, David wrote:
>>>
>>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
 [...]

 The opaque pointer would be saved in mempool structure, and used
 when the mempool is populated (calling mempool_ops_alloc).
 The type of the structure pointed to by the opaque pointer has to be defined
 (and documented) in each mempool_ops manager.


 Olivier
>>>
>>> OK, just to be sure before I post another patchset.
>>>
>>> For the rte_mempool_struct:
>>>   struct rte_mempool_memhdr_list mem_list; /**< List of memory
>>> chunks */
>>> +   void *ops_args;  /**< optional args for ops
>>> alloc. */
>>>
>>> (at the end of the struct, as it's just on the control path, not to
>>> affect fast path)
>> Hmm, I would put it just after pool_data.
> +1
> And, would 'pool_config' (picked from a previous email from David) be a better
> name?
>
> From a user perspective, the application is passing a configuration item to
> the pool to work on. Only the application and the mempool allocator understand
> it (opaque).
> As for 'ops_arg', it would be to control 'assignment-of-operations' to the 
> framework.
>
> Maybe just my point of view.

I agree. I was originally happy with pool_config; it sits well with 
pool_data, and it's data for configuring the pool during allocation. 
I'll go with that, then.


>>> Then change function params:
>>>int
>>> -rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
>>> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
>>> +   void *ops_args);
>>>
>>> And (almost) finally in the rte_mempool_set_ops_byname function:
>>>   mp->ops_index = i;
>>> +   mp->ops_args = ops_args;
>>>   return 0;
>>>
>>> Then (actually) finally, add a null to all the calls to
>>> rte_mempool_set_ops_byname.
>>>
>>> OK? :)
>>>
>> Else, looks good to me! Thanks David.
> Me too. Though I would like to clarify something for my understanding:
>
> Mempool->pool_data => Used by allocator to store private data
> Mempool->pool_config => (or ops_arg) used by allocator to access user/app 
> provided value.
>
> Is that correct?

Yes, that's correct.
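A minimal sketch of that contract, using stand-in types (`my_ctx` and `my_alloc` are hypothetical names for illustration; only `pool_data` and `pool_config` correspond to the fields being discussed, and the real `struct rte_mempool` lives in `rte_mempool.h`):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the two fields under discussion. */
struct rte_mempool {
    void *pool_data;   /* written by the allocator: its private state */
    void *pool_config; /* written by the application: opaque config   */
};

/* Hypothetical user-supplied configuration. */
struct my_ctx {
    int ring_size;
};

/* Hypothetical allocator alloc callback: it only reads pool_config,
 * and stores whatever private state it needs in pool_data. */
static int
my_alloc(struct rte_mempool *mp)
{
    struct my_ctx *cfg = mp->pool_config; /* app-provided, may be NULL */
    if (cfg == NULL || cfg->ring_size <= 0)
        return -1;
    mp->pool_data = cfg; /* a real allocator would build its own state */
    return 0;
}
```

So the direction of data flow differs: the application writes pool_config and only reads pool_data, while the allocator does the reverse.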

Regards,
David.


[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-16 Thread Shreyansh Jain
Though I am late to the discussion...

> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz at 6wind.com]
> Sent: Wednesday, June 15, 2016 10:10 PM
> To: Hunt, David ; Jan Viktorin
> 
> Cc: dev at dpdk.org; jerin.jacob at caviumnetworks.com; Shreyansh Jain
> 
> Subject: Re: [PATCH v12 0/3] mempool: add external mempool manager
> 
> 
> 
> On 06/15/2016 06:34 PM, Hunt, David wrote:
> >
> >
> > On 15/6/2016 1:03 PM, Olivier MATZ wrote:
> >> [...]
> >>
> >> The opaque pointer would be saved in mempool structure, and used
> >> when the mempool is populated (calling mempool_ops_alloc).
> >> The type of the structure pointed to by the opaque pointer has to be defined
> >> (and documented) in each mempool_ops manager.
> >>
> >>
> >> Olivier
> >
> >
> > OK, just to be sure before I post another patchset.
> >
> > For the rte_mempool_struct:
> >  struct rte_mempool_memhdr_list mem_list; /**< List of memory
> > chunks */
> > +   void *ops_args;  /**< optional args for ops
> > alloc. */
> >
> > (at the end of the struct, as it's just on the control path, not to
> > affect fast path)
> 
> Hmm, I would put it just after pool_data.

+1
And, would 'pool_config' (picked from a previous email from David) be a better
name?

From a user perspective, the application is passing a configuration item to
the pool to work on. Only the application and the mempool allocator understand it
(opaque).
As for 'ops_arg', it would be to control 'assignment-of-operations' to the 
framework.

Maybe just my point of view.

> 
> 
> >
> > Then change function params:
> >   int
> > -rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
> > +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
> > +   void *ops_args);
> >
> > And (almost) finally in the rte_mempool_set_ops_byname function:
> >  mp->ops_index = i;
> > +   mp->ops_args = ops_args;
> >  return 0;
> >
> > Then (actually) finally, add a null to all the calls to
> > rte_mempool_set_ops_byname.
> >
> > OK? :)
> >
> 
> Else, looks good to me! Thanks David.

Me too. Though I would like to clarify something for my understanding:

Mempool->pool_data => Used by allocator to store private data
Mempool->pool_config => (or ops_arg) used by allocator to access user/app 
provided value.

Is that correct?

-
Shreyansh


[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-15 Thread Olivier MATZ


On 06/15/2016 06:34 PM, Hunt, David wrote:
>
>
> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>> [...]
>>
>> The opaque pointer would be saved in mempool structure, and used
>> when the mempool is populated (calling mempool_ops_alloc).
>> The type of the structure pointed to by the opaque pointer has to be defined
>> (and documented) in each mempool_ops manager.
>>
>>
>> Olivier
>
>
> OK, just to be sure before I post another patchset.
>
> For the rte_mempool_struct:
>  struct rte_mempool_memhdr_list mem_list; /**< List of memory
> chunks */
> +   void *ops_args;  /**< optional args for ops
> alloc. */
>
> (at the end of the struct, as it's just on the control path, not to
> affect fast path)

Hmm, I would put it just after pool_data.


>
> Then change function params:
>   int
> -rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
> +   void *ops_args);
>
> And (almost) finally in the rte_mempool_set_ops_byname function:
>  mp->ops_index = i;
> +   mp->ops_args = ops_args;
>  return 0;
>
> Then (actually) finally, add a null to all the calls to
> rte_mempool_set_ops_byname.
>
> OK? :)
>

Else, looks good to me! Thanks David.



[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-15 Thread Hunt, David


On 15/6/2016 1:03 PM, Olivier MATZ wrote:
> [...]
>
> The opaque pointer would be saved in mempool structure, and used
> when the mempool is populated (calling mempool_ops_alloc).
> The type of the structure pointed to by the opaque pointer has to be defined
> (and documented) in each mempool_ops manager.
>
>
> Olivier


OK, just to be sure before I post another patchset.

For the rte_mempool_struct:
 struct rte_mempool_memhdr_list mem_list; /**< List of memory 
chunks */
+   void *ops_args;  /**< optional args for ops 
alloc. */

(at the end of the struct, as it's just on the control path, not to 
affect fast path)

Then change function params:
  int
-rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
+   void *ops_args);

And (almost) finally in the rte_mempool_set_ops_byname function:
 mp->ops_index = i;
+   mp->ops_args = ops_args;
 return 0;

Then (actually) finally, add a null to all the calls to 
rte_mempool_set_ops_byname.

OK? :)
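For reference, the three changes above can be sketched together with stand-in types (this is a simplified mock, not the real `rte_mempool.h` definitions; `ring_mp_mc` stands in for whatever ops name has been registered):

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for struct rte_mempool -- illustration only. */
struct rte_mempool {
    void *pool_data; /* allocator's private data, set by ops->alloc() */
    int ops_index;   /* index into the registered ops table */
    void *ops_args;  /* opaque user argument, handed to ops->alloc() */
};

/* Sketch of the extended setter: besides resolving the ops by name,
 * it now also stores the caller's opaque pointer in the mempool. */
static int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
                           void *ops_args)
{
    /* The real code searches rte_mempool_ops_table for a match. */
    if (strcmp(name, "ring_mp_mc") != 0)
        return -1;
    mp->ops_index = 0;
    mp->ops_args = ops_args;
    return 0;
}
```

Existing callers that have nothing to pass would simply supply NULL for the new argument, which is the "add a null to all the calls" step.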

Regards,
Dave.


[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-15 Thread Hunt, David


On 15/6/2016 3:47 PM, Jan Viktorin wrote:
> On Wed, 15 Jun 2016 16:10:13 +0200
> Olivier MATZ  wrote:
>
>> On 06/15/2016 04:02 PM, Hunt, David wrote:
>>>
>>> On 15/6/2016 2:50 PM, Olivier MATZ wrote:
 Hi David,

 On 06/15/2016 02:38 PM, Hunt, David wrote:
>
> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>> Hi,
>>
>> On 06/15/2016 01:47 PM, Hunt, David wrote:
>>>
>>> On 15/6/2016 11:13 AM, Jan Viktorin wrote:
 Hi,

 I've got one last question. Initially, I was interested in creating
 my own external memory provider based on a Linux Kernel driver.
 So, I've got an opened file descriptor that points to a device which
 can mmap a memory region for me.

 ...
 int fd = open("/dev/uio0" ...);
 ...
 rte_mempool *pool = rte_mempool_create_empty(...);
 rte_mempool_set_ops_byname(pool, "uio_allocator_ops");

 I am not sure how to pass the file descriptor pointer. I thought it
 would
 be possible by the rte_mempool_alloc but it's not... Is it possible
 to solve this case?

 The allocator is device-specific.

 Regards
 Jan
>>> This particular use case is not covered.
>>>
>>> We did discuss this before, and an opaque pointer was proposed, but
>>> did
>>> not make it in.
>>> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
>>> (and following emails in that thread)
>>>
>>> So, the options for this use case are as follows:
>>> 1. Use the pool_data to pass data in to the alloc, then set the
>>> pool_data pointer before coming back from alloc. (It's a bit of a
>>> hack,
>>> but means no code change).
>>> 2. Add an extra parameter to the alloc function. The simplest way I
>>> can
>>> think of doing this is to
>>> take the *opaque passed into rte_mempool_populate_phys, and pass it on
>>> into the alloc function.
>>> This will have minimal impact on the public API, as there is
>>> already an
>>> opaque there in the _populate_ funcs, we're just
>>> reusing it for the alloc.
>>>
>>> Do others think option 2 is OK to add this at this late stage? Even if
>>> the patch set has already been ACK'd?
>> Jan's use-case looks to be relevant.
>>
>> What about changing:
>>
>>rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>>
>> into:
>>
>>   rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
>>  void *opaque)
> Or a third function?
>
> rte_mempool_set_ops_arg(struct rte_mempool, *mp, void *arg)

I think if we tried to add another function, there would be some 
opposition to that.
I think it's reasonable to add it to set_ops_byname, as we're setting 
the ops and
the ops_args in the same call. We use them later in the alloc.

>
> Or isn't it really a task for a kind of rte_mempool_populate_*?

I was leaning towards that, but a different proposal was suggested. I'm 
OK with
adding the *opaque to set_ops_byname

> This is a part of mempool I am not involved in yet.
>
>> ?
>>
>> The opaque pointer would be saved in mempool structure, and used
>> when the mempool is populated (calling mempool_ops_alloc).
>> The type of the structure pointed to by the opaque pointer has to be defined
>> (and documented) in each mempool_ops manager.
>>   
> Yes, that was another option, which has the additional impact of
> needing an
> opaque added to the mempool struct. If we use the opaque from the
> _populate_
> function, we use it straight away in the alloc, no storage needed.
>
> Also, do you think we need to go ahead with this change, or can we add
> it later as an
> improvement?
 The opaque in populate_phys() is already used for something else
 (i.e. the argument for the free callback of the memory chunk).
 I'm afraid it could cause confusion to have it used for 2 different
 things.

 About the change, I think it could be good to have it in 16.11,
 because it will probably change the API, and we should avoid to
 change it each version ;)

 So I'd vote to have it in the patchset for consistency.
>>> Sure, we should avoid breaking API just after we created it. :)
>>>
>>> OK, here's a slightly different proposal.
>>>
>>> If we add a string, to the _ops_byname, yes, that will work for Jan's case.
> A string? No, I needed to pass a file descriptor or a pointer to some 
> rte_device
> or something like that. So a void * is a way to go.

Apologies, I misread. *opaque it is.

>>> However, if we add a void*, that allows us the flexibility of passing
>>> anything we
>>> want. We can then store the void* in the mempool struct as void
>>> *pool_config,
> void *ops_context, ops_args, ops_data, ...

I think I'll go with ops_args

>>> so that when the alloc gets called, it can access whatever is stored at
>>> *pool_config and do the correct initialisation/allocation.

[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-15 Thread Jan Viktorin
On Wed, 15 Jun 2016 16:10:13 +0200
Olivier MATZ  wrote:

> On 06/15/2016 04:02 PM, Hunt, David wrote:
> >
> >
> > On 15/6/2016 2:50 PM, Olivier MATZ wrote:  
> >> Hi David,
> >>
> >> On 06/15/2016 02:38 PM, Hunt, David wrote:  
> >>>
> >>>
> >>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:  
>  Hi,
> 
>  On 06/15/2016 01:47 PM, Hunt, David wrote:  
> >
> >
> > On 15/6/2016 11:13 AM, Jan Viktorin wrote:  
> >> Hi,
> >>
> >> I've got one last question. Initially, I was interested in creating
> >> my own external memory provider based on a Linux Kernel driver.
> >> So, I've got an opened file descriptor that points to a device which
> >> can mmap a memory region for me.
> >>
> >> ...
> >> int fd = open("/dev/uio0" ...);
> >> ...
> >> rte_mempool *pool = rte_mempool_create_empty(...);
> >> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
> >>
> >> I am not sure how to pass the file descriptor pointer. I thought it
> >> would
> >> be possible by the rte_mempool_alloc but it's not... Is it possible
> >> to solve this case?
> >>
> >> The allocator is device-specific.
> >>
> >> Regards
> >> Jan  
> >
> > This particular use case is not covered.
> >
> > We did discuss this before, and an opaque pointer was proposed, but
> > did
> > not make it in.
> > http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
> > (and following emails in that thread)
> >
> > So, the options for this use case are as follows:
> > 1. Use the pool_data to pass data in to the alloc, then set the
> > pool_data pointer before coming back from alloc. (It's a bit of a
> > hack,
> > but means no code change).
> > 2. Add an extra parameter to the alloc function. The simplest way I
> > can
> > think of doing this is to
> > take the *opaque passed into rte_mempool_populate_phys, and pass it on
> > into the alloc function.
> > This will have minimal impact on the public API, as there is
> > already an
> > opaque there in the _populate_ funcs, we're just
> > reusing it for the alloc.
> >
> > Do others think option 2 is OK to add this at this late stage? Even if
> > the patch set has already been ACK'd?  
> 
>  Jan's use-case looks to be relevant.
> 
>  What about changing:
> 
>    rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
> 
>  into:
> 
>   rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
>  void *opaque)

Or a third function?

rte_mempool_set_ops_arg(struct rte_mempool, *mp, void *arg)

Or isn't it really a task for a kind of rte_mempool_populate_*?

This is a part of mempool I am not involved in yet.

> 
>  ?
> 
>  The opaque pointer would be saved in mempool structure, and used
>  when the mempool is populated (calling mempool_ops_alloc).
>  The type of the structure pointed to by the opaque pointer has to be defined
>  (and documented) in each mempool_ops manager.
>   
> >>>
> >>> Yes, that was another option, which has the additional impact of
> >>> needing an
> >>> opaque added to the mempool struct. If we use the opaque from the
> >>> _populate_
> >>> function, we use it straight away in the alloc, no storage needed.
> >>>
> >>> Also, do you think we need to go ahead with this change, or can we add
> >>> it later as an
> >>> improvement?  
> >>
> >> The opaque in populate_phys() is already used for something else
> >> (i.e. the argument for the free callback of the memory chunk).
> >> I'm afraid it could cause confusion to have it used for 2 different
> >> things.
> >>
> >> About the change, I think it could be good to have it in 16.11,
> >> because it will probably change the API, and we should avoid to
> >> change it each version ;)
> >>
> >> So I'd vote to have it in the patchset for consistency.  
> >
> > Sure, we should avoid breaking API just after we created it. :)
> >
> > OK, here's a slightly different proposal.
> >
> > If we add a string, to the _ops_byname, yes, that will work for Jan's case.

A string? No, I needed to pass a file descriptor or a pointer to some rte_device
or something like that. So a void * is a way to go.

> > However, if we add a void*, that allows us the flexibility of passing
> > anything we
> > want. We can then store the void* in the mempool struct as void
> > *pool_config,

void *ops_context, ops_args, ops_data, ...

> > so that when the alloc gets called, it can access whatever is stored at
> > *pool_config
> > and do the correct initialisation/allocation. In Jan's use case, this
> > can simply be typecast
> > to a string. In future cases, it can be a struct, which could include
> > new flags.

New flags? Does it mean an API extension?

> 
> Yep, agree. But not sure I'm seeing the difference with what I
> proposed.

Me neither... I think it is exactly the same :).

Jan
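Jan's UIO use case would then be served by the agreed `void *`: the application stores its device context before populating the pool, and the device-specific alloc callback reads it back. A sketch under those assumptions (`uio_allocator_alloc` and `struct uio_ctx` are hypothetical names, and the stand-in mempool struct is heavily simplified):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the mempool; only the opaque field matters here. */
struct rte_mempool {
    void *pool_data;
    void *pool_config; /* set by the app via the extended setter */
};

/* Hypothetical context for a UIO-backed allocator. */
struct uio_ctx {
    int fd; /* from open("/dev/uio0"); the driver mmaps through it */
};

/* Sketch of a device-specific alloc callback: the file descriptor
 * travels from the application to the allocator via pool_config. */
static int
uio_allocator_alloc(struct rte_mempool *mp)
{
    struct uio_ctx *ctx = mp->pool_config;
    if (ctx == NULL || ctx->fd < 0)
        return -1; /* the app did not pass its device context */
    mp->pool_data = ctx; /* real code would mmap via ctx->fd here */
    return 0;
}
```

Casting the opaque pointer back to a known context type is exactly the "documented per mempool_ops manager" convention discussed above.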


[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-15 Thread Olivier MATZ


On 06/15/2016 04:02 PM, Hunt, David wrote:
>
>
> On 15/6/2016 2:50 PM, Olivier MATZ wrote:
>> Hi David,
>>
>> On 06/15/2016 02:38 PM, Hunt, David wrote:
>>>
>>>
>>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
 Hi,

 On 06/15/2016 01:47 PM, Hunt, David wrote:
>
>
> On 15/6/2016 11:13 AM, Jan Viktorin wrote:
>> Hi,
>>
>> I've got one last question. Initially, I was interested in creating
>> my own external memory provider based on a Linux Kernel driver.
>> So, I've got an opened file descriptor that points to a device which
>> can mmap a memory region for me.
>>
>> ...
>> int fd = open("/dev/uio0" ...);
>> ...
>> rte_mempool *pool = rte_mempool_create_empty(...);
>> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>>
>> I am not sure how to pass the file descriptor pointer. I thought it
>> would
>> be possible by the rte_mempool_alloc but it's not... Is it possible
>> to solve this case?
>>
>> The allocator is device-specific.
>>
>> Regards
>> Jan
>
> This particular use case is not covered.
>
> We did discuss this before, and an opaque pointer was proposed, but
> did
> not make it in.
> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
> (and following emails in that thread)
>
> So, the options for this use case are as follows:
> 1. Use the pool_data to pass data in to the alloc, then set the
> pool_data pointer before coming back from alloc. (It's a bit of a
> hack,
> but means no code change).
> 2. Add an extra parameter to the alloc function. The simplest way I
> can
> think of doing this is to
> take the *opaque passed into rte_mempool_populate_phys, and pass it on
> into the alloc function.
> This will have minimal impact on the public API, as there is
> already an
> opaque there in the _populate_ funcs, we're just
> reusing it for the alloc.
>
> Do others think option 2 is OK to add this at this late stage? Even if
> the patch set has already been ACK'd?

 Jan's use-case looks to be relevant.

 What about changing:

   rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)

 into:

  rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
 void *opaque)

 ?

 The opaque pointer would be saved in mempool structure, and used
 when the mempool is populated (calling mempool_ops_alloc).
 The type of the structure pointed to by the opaque pointer has to be defined
 (and documented) in each mempool_ops manager.

>>>
>>> Yes, that was another option, which has the additional impact of
>>> needing an
>>> opaque added to the mempool struct. If we use the opaque from the
>>> _populate_
>>> function, we use it straight away in the alloc, no storage needed.
>>>
>>> Also, do you think we need to go ahead with this change, or can we add
>>> it later as an
>>> improvement?
>>
>> The opaque in populate_phys() is already used for something else
>> (i.e. the argument for the free callback of the memory chunk).
>> I'm afraid it could cause confusion to have it used for 2 different
>> things.
>>
>> About the change, I think it could be good to have it in 16.11,
>> because it will probably change the API, and we should avoid to
>> change it each version ;)
>>
>> So I'd vote to have it in the patchset for consistency.
>
> Sure, we should avoid breaking API just after we created it. :)
>
> OK, here's a slightly different proposal.
>
> If we add a string, to the _ops_byname, yes, that will work for Jan's case.
> However, if we add a void*, that allows us the flexibility of passing
> anything we
> want. We can then store the void* in the mempool struct as void
> *pool_config,
> so that when the alloc gets called, it can access whatever is stored at
> *pool_config
> and do the correct initialisation/allocation. In Jan's use case, this
> can simply be typecast
> to a string. In future cases, it can be a struct, which could include
> new flags.

Yep, agree. But not sure I'm seeing the difference with what I
proposed.

>
> I think that change is minimal enough to be low risk at this stage.
>
> Thoughts?

Agree. Thanks!


Olivier


[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-15 Thread Olivier MATZ
Hi David,

On 06/15/2016 02:38 PM, Hunt, David wrote:
>
>
> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>> Hi,
>>
>> On 06/15/2016 01:47 PM, Hunt, David wrote:
>>>
>>>
>>> On 15/6/2016 11:13 AM, Jan Viktorin wrote:
 Hi,

 I've got one last question. Initially, I was interested in creating
 my own external memory provider based on a Linux Kernel driver.
 So, I've got an opened file descriptor that points to a device which
 can mmap a memory region for me.

 ...
 int fd = open("/dev/uio0" ...);
 ...
 rte_mempool *pool = rte_mempool_create_empty(...);
 rte_mempool_set_ops_byname(pool, "uio_allocator_ops");

 I am not sure how to pass the file descriptor pointer. I thought it
 would
 be possible by the rte_mempool_alloc but it's not... Is it possible
 to solve this case?

 The allocator is device-specific.

 Regards
 Jan
>>>
>>> This particular use case is not covered.
>>>
>>> We did discuss this before, and an opaque pointer was proposed, but did
>>> not make it in.
>>> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
>>> (and following emails in that thread)
>>>
>>> So, the options for this use case are as follows:
>>> 1. Use the pool_data to pass data in to the alloc, then set the
>>> pool_data pointer before coming back from alloc. (It's a bit of a hack,
>>> but means no code change).
>>> 2. Add an extra parameter to the alloc function. The simplest way I can
>>> think of doing this is to
>>> take the *opaque passed into rte_mempool_populate_phys, and pass it on
>>> into the alloc function.
>>> This will have minimal impact on the public API, as there is already an
>>> opaque there in the _populate_ funcs, we're just
>>> reusing it for the alloc.
>>>
>>> Do others think option 2 is OK to add this at this late stage? Even if
>>> the patch set has already been ACK'd?
>>
>> Jan's use-case looks to be relevant.
>>
>> What about changing:
>>
>>   rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>>
>> into:
>>
>>  rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
>> void *opaque)
>>
>> ?
>>
>> The opaque pointer would be saved in mempool structure, and used
>> when the mempool is populated (calling mempool_ops_alloc).
>> The type of the structure pointed to by the opaque pointer has to be defined
>> (and documented) in each mempool_ops manager.
>>
>
> Yes, that was another option, which has the additional impact of needing an
> opaque added to the mempool struct. If we use the opaque from the
> _populate_
> function, we use it straight away in the alloc, no storage needed.
>
> Also, do you think we need to go ahead with this change, or can we add
> it later as an
> improvement?

The opaque in populate_phys() is already used for something else
(i.e. the argument for the free callback of the memory chunk).
I'm afraid it could cause confusion to have it used for 2 different
things.

About the change, I think it could be good to have it in 16.11,
because it will probably change the API, and we should avoid to
change it each version ;)

So I'd vote to have it in the patchset for consistency.


Olivier


[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-15 Thread Hunt, David


On 15/6/2016 2:50 PM, Olivier MATZ wrote:
> Hi David,
>
> On 06/15/2016 02:38 PM, Hunt, David wrote:
>>
>>
>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>>> Hi,
>>>
>>> On 06/15/2016 01:47 PM, Hunt, David wrote:


 On 15/6/2016 11:13 AM, Jan Viktorin wrote:
> Hi,
>
> I've got one last question. Initially, I was interested in creating
> my own external memory provider based on a Linux Kernel driver.
> So, I've got an opened file descriptor that points to a device which
> can mmap a memory region for me.
>
> ...
> int fd = open("/dev/uio0" ...);
> ...
> rte_mempool *pool = rte_mempool_create_empty(...);
> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>
> I am not sure how to pass the file descriptor pointer. I thought it
> would
> be possible by the rte_mempool_alloc but it's not... Is it possible
> to solve this case?
>
> The allocator is device-specific.
>
> Regards
> Jan

 This particular use case is not covered.

 We did discuss this before, and an opaque pointer was proposed, but 
 did
 not make it in.
 http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
 (and following emails in that thread)

 So, the options for this use case are as follows:
 1. Use the pool_data to pass data in to the alloc, then set the
 pool_data pointer before coming back from alloc. (It's a bit of a 
 hack,
 but means no code change).
 2. Add an extra parameter to the alloc function. The simplest way I 
 can
 think of doing this is to
 take the *opaque passed into rte_mempool_populate_phys, and pass it on
 into the alloc function.
This will have minimal impact on the public APIs, as there is
 already an
 opaque there in the _populate_ funcs, we're just
 reusing it for the alloc.

 Do others think option 2 is OK to add this at this late stage? Even if
 the patch set has already been ACK'd?
>>>
>>> Jan's use-case looks to be relevant.
>>>
>>> What about changing:
>>>
>>>   rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>>>
>>> into:
>>>
>>>  rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
>>> void *opaque)
>>>
>>> ?
>>>
>>> The opaque pointer would be saved in mempool structure, and used
>>> when the mempool is populated (calling mempool_ops_alloc).
>>> The type of the structure pointed by the opaque has to be defined
>>> (and documented) into each mempool_ops manager.
>>>
>>
>> Yes, that was another option, which has the additional impact of 
>> needing an
>> opaque added to the mempool struct. If we use the opaque from the
>> _populate_
>> function, we use it straight away in the alloc, no storage needed.
>>
>> Also, do you think we need to go ahead with this change, or can we add
>> it later as an
>> improvement?
>
> The opaque in populate_phys() is already used for something else
> (i.e. the argument for the free callback of the memory chunk).
> I'm afraid it could cause confusion to have it used for 2 different
> things.
>
> About the change, I think it could be good to have it in 16.11,
> because it will probably change the API, and we should avoid to
> change it each version ;)
>
> So I'd vote to have it in the patchset for consistency.

Sure, we should avoid breaking API just after we created it. :)

OK, here's a slightly different proposal.

If we add a string to _ops_byname, yes, that will work for Jan's case.
However, if we add a void*, that allows us the flexibility of passing
anything we want. We can then store the void* in the mempool struct as
void *pool_config, so that when the alloc gets called, it can access
whatever is stored at *pool_config and do the correct
initialisation/allocation. In Jan's use case, this can simply be typecast
to a string. In future cases, it can be a struct, which could include
new flags.

I think that change is minimal enough to be low risk at this stage.

Thoughts?

Dave.

[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-15 Thread Olivier MATZ
Hi,

On 06/15/2016 01:47 PM, Hunt, David wrote:
>
>
> On 15/6/2016 11:13 AM, Jan Viktorin wrote:
>> Hi,
>>
>> I've got one last question. Initially, I was interested in creating
>> my own external memory provider based on a Linux Kernel driver.
>> So, I've got an opened file descriptor that points to a device which
>> can mmap memory regions for me.
>>
>> ...
>> int fd = open("/dev/uio0" ...);
>> ...
>> rte_mempool *pool = rte_mempool_create_empty(...);
>> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>>
>> I am not sure how to pass the file descriptor pointer. I thought it would
>> be possible by the rte_mempool_alloc but it's not... Is it possible
>> to solve this case?
>>
>> The allocator is device-specific.
>>
>> Regards
>> Jan
>
> This particular use case is not covered.
>
> We did discuss this before, and an opaque pointer was proposed, but did
> not make it in.
> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
> (and following emails in that thread)
>
> So, the options for this use case are as follows:
> 1. Use the pool_data to pass data in to the alloc, then set the
> pool_data pointer before coming back from alloc. (It's a bit of a hack,
> but means no code change).
> 2. Add an extra parameter to the alloc function. The simplest way I can
> think of doing this is to
> take the *opaque passed into rte_mempool_populate_phys, and pass it on
> into the alloc function.
> This will have minimal impact on the public APIs, as there is already an
> opaque there in the _populate_ funcs, we're just
> reusing it for the alloc.
>
> Do others think option 2 is OK to add this at this late stage? Even if
> the patch set has already been ACK'd?

Jan's use-case looks to be relevant.

What about changing:

   rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)

into:

  rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
 void *opaque)

?

The opaque pointer would be saved in mempool structure, and used
when the mempool is populated (calling mempool_ops_alloc).
The type of the structure pointed by the opaque has to be defined
(and documented) into each mempool_ops manager.


Olivier


[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-15 Thread Hunt, David


On 15/6/2016 1:03 PM, Olivier MATZ wrote:
> Hi,
>
> On 06/15/2016 01:47 PM, Hunt, David wrote:
>>
>>
>> On 15/6/2016 11:13 AM, Jan Viktorin wrote:
>>> Hi,
>>>
>>> I've got one last question. Initially, I was interested in creating
>>> my own external memory provider based on a Linux Kernel driver.
>>> So, I've got an opened file descriptor that points to a device which
>>> can mmap memory regions for me.
>>>
>>> ...
>>> int fd = open("/dev/uio0" ...);
>>> ...
>>> rte_mempool *pool = rte_mempool_create_empty(...);
>>> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>>>
>>> I am not sure how to pass the file descriptor pointer. I thought it 
>>> would
>>> be possible by the rte_mempool_alloc but it's not... Is it possible
>>> to solve this case?
>>>
>>> The allocator is device-specific.
>>>
>>> Regards
>>> Jan
>>
>> This particular use case is not covered.
>>
>> We did discuss this before, and an opaque pointer was proposed, but did
>> not make it in.
>> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
>> (and following emails in that thread)
>>
>> So, the options for this use case are as follows:
>> 1. Use the pool_data to pass data in to the alloc, then set the
>> pool_data pointer before coming back from alloc. (It's a bit of a hack,
>> but means no code change).
>> 2. Add an extra parameter to the alloc function. The simplest way I can
>> think of doing this is to
>> take the *opaque passed into rte_mempool_populate_phys, and pass it on
>> into the alloc function.
>> This will have minimal impact on the public APIs, as there is already an
>> opaque there in the _populate_ funcs, we're just
>> reusing it for the alloc.
>>
>> Do others think option 2 is OK to add this at this late stage? Even if
>> the patch set has already been ACK'd?
>
> Jan's use-case looks to be relevant.
>
> What about changing:
>
>   rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>
> into:
>
>  rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
> void *opaque)
>
> ?
>
> The opaque pointer would be saved in mempool structure, and used
> when the mempool is populated (calling mempool_ops_alloc).
> The type of the structure pointed by the opaque has to be defined
> (and documented) into each mempool_ops manager.
>

Yes, that was another option, which has the additional impact of needing an
opaque added to the mempool struct. If we use the opaque from the _populate_
function, we use it straight away in the alloc, no storage needed.

Also, do you think we need to go ahead with this change, or can we add 
it later as an
improvement?

Regards,
Dave.

[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-15 Thread Hunt, David


On 15/6/2016 11:13 AM, Jan Viktorin wrote:
> Hi,
>
> I've got one last question. Initially, I was interested in creating
> my own external memory provider based on a Linux Kernel driver.
> So, I've got an opened file descriptor that points to a device which
> can mmap memory regions for me.
>
> ...
> int fd = open("/dev/uio0" ...);
> ...
> rte_mempool *pool = rte_mempool_create_empty(...);
> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>
> I am not sure how to pass the file descriptor pointer. I thought it would
> be possible by the rte_mempool_alloc but it's not... Is it possible
> to solve this case?
>
> The allocator is device-specific.
>
> Regards
> Jan

This particular use case is not covered.

We did discuss this before, and an opaque pointer was proposed, but did 
not make it in.
http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
(and following emails in that thread)

So, the options for this use case are as follows:
1. Use the pool_data to pass data in to the alloc, then set the 
pool_data pointer before coming back from alloc. (It's a bit of a hack, 
but means no code change).
2. Add an extra parameter to the alloc function. The simplest way I can 
think of doing this is to
take the *opaque passed into rte_mempool_populate_phys, and pass it on 
into the alloc function.
This will have minimal impact on the public APIs, as there is already an
opaque there in the _populate_ funcs, we're just
reusing it for the alloc.

Do others think option 2 is OK to add this at this late stage? Even if 
the patch set has already been ACK'd?

Regards,
Dave.

[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-15 Thread Jan Viktorin
Hi,

I've got one last question. Initially, I was interested in creating
my own external memory provider based on a Linux Kernel driver.
So, I've got an opened file descriptor that points to a device which
can mmap memory regions for me.

...
int fd = open("/dev/uio0" ...);
...
rte_mempool *pool = rte_mempool_create_empty(...);
rte_mempool_set_ops_byname(pool, "uio_allocator_ops");

I am not sure how to pass the file descriptor pointer. I thought it would
be possible by the rte_mempool_alloc but it's not... Is it possible
to solve this case?

The allocator is device-specific.

Regards
Jan

On Wed, 15 Jun 2016 08:47:01 +0100
David Hunt  wrote:

> Here's the latest version of the External Mempool Manager patchset.
> It's re-based on top of the latest head as of 14/6/2016, including
> Olivier's 35-part patch series on mempool re-org [1]
> 
> [1] http://dpdk.org/ml/archives/dev/2016-May/039229.html
> 
> v12 changes:
> 
>  * Fixed a comment (function pram h -> ops)
>  * Fixed a typo in mempool docs (callbacki)
> 
> v11 changes:
> 
>  * Fixed comments (added '.' where needed for consistency)
>  * removed ABI breakage notice for mempool manager in deprecation.rst
>  * Added description of the external mempool manager functionality to
>doc/guides/prog_guide/mempool_lib.rst (John Mc reviewed)
>  * renamed rte_mempool_default.c to rte_mempool_ring.c
> 
> v10 changes:
> 
>  * changed the _put/_get op names to _enqueue/_dequeue to be consistent
>with the function names
>  * some rte_errno cleanup
>  * comment tweaks about when to set pool_data
>  * removed an un-needed check for ops->alloc == NULL
> 
> v9 changes:
> 
>  * added a check for NULL alloc in rte_mempool_ops_register
>  * rte_mempool_alloc_t now returns int instead of void*
>  * fixed some comment typos
>  * removed some unneeded typecasts
>  * changed a return NULL to return -EEXIST in rte_mempool_ops_register
>  * fixed rte_mempool_version.map file so builds ok as shared libs
>  * moved flags check from rte_mempool_create_empty to rte_mempool_create
> 
> v8 changes:
> 
>  * merged first three patches in the series into one.
>  * changed parameters to ops callback to all be rte_mempool pointer
>rather than a pointer to opaque data or uint64.
>  * comment fixes.
>  * fixed parameter to _free function (was inconsistent).
>  * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED
> 
> v7 changes:
> 
>  * Changed rte_mempool_handler_table to rte_mempool_ops_table
>  * Changed handler_idx to ops_index in rte_mempool struct
>  * Reworked comments in rte_mempool.h around ops functions
>  * Changed rte_mempool_handler.c to rte_mempool_ops.c
>  * Changed all functions containing _handler_ to _ops_
>  * Now there is no mention of 'handler' left
>  * Other small changes out of review of mailing list
> 
> v6 changes:
> 
>  * Moved the flags handling from rte_mempool_create_empty to
>rte_mempool_create, as it's only there for backward compatibility
>  * Various comment additions and cleanup
>  * Renamed rte_mempool_handler to rte_mempool_ops
>  * Added a union for *pool and u64 pool_id in struct rte_mempool
>  * split the original patch into a few parts for easier review.
>  * rename functions with _ext_ to _ops_.
>  * addressed review comments
>  * renamed put and get functions to enqueue and dequeue
>  * changed occurrences of rte_mempool_ops to const, as they
>contain function pointers (security)
>  * split out the default external mempool handler into a separate
>patch for easier review
> 
> v5 changes:
>  * rebasing, as it is dependent on another patch series [1]
> 
> v4 changes (Olivier Matz):
>  * remove the rte_mempool_create_ext() function. To change the handler, the
>user has to do the following:
>- mp = rte_mempool_create_empty()
>- rte_mempool_set_handler(mp, "my_handler")
>- rte_mempool_populate_default(mp)
>This avoids adding another function with more than 10 arguments, 
> duplicating
>the doxygen comments
>  * change the api of rte_mempool_alloc_t: only the mempool pointer is required
>as all information is available in it
>  * change the api of rte_mempool_free_t: remove return value
>  * move inline wrapper functions from the .c to the .h (else they won't be
>inlined). This implies having one header file (rte_mempool.h), or it
>would generate cross-dependency issues.
>  * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
>to the use of && instead of &)
>  * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
>  * fix build with shared libraries (global handler has to be declared in
>the .map file)
>  * rationalize #include order
>  * remove unused function rte_mempool_get_handler_name()
>  * rename some structures, fields, functions
>  * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
>from Yuanhan)
>  * test the ext mempool handler in the same file as standard mempool tests,
>avoiding to 

[dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager

2016-06-15 Thread David Hunt
Here's the latest version of the External Mempool Manager patchset.
It's re-based on top of the latest head as of 14/6/2016, including
Olivier's 35-part patch series on mempool re-org [1]

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

v12 changes:

 * Fixed a comment (function pram h -> ops)
 * Fixed a typo in mempool docs (callbacki)

v11 changes:

 * Fixed comments (added '.' where needed for consistency)
 * removed ABI breakage notice for mempool manager in deprecation.rst
 * Added description of the external mempool manager functionality to
   doc/guides/prog_guide/mempool_lib.rst (John Mc reviewed)
 * renamed rte_mempool_default.c to rte_mempool_ring.c

v10 changes:

 * changed the _put/_get op names to _enqueue/_dequeue to be consistent
   with the function names
 * some rte_errno cleanup
 * comment tweaks about when to set pool_data
 * removed an un-needed check for ops->alloc == NULL

v9 changes:

 * added a check for NULL alloc in rte_mempool_ops_register
 * rte_mempool_alloc_t now returns int instead of void*
 * fixed some comment typos
 * removed some unneeded typecasts
 * changed a return NULL to return -EEXIST in rte_mempool_ops_register
 * fixed rte_mempool_version.map file so builds ok as shared libs
 * moved flags check from rte_mempool_create_empty to rte_mempool_create

v8 changes:

 * merged first three patches in the series into one.
 * changed parameters to ops callback to all be rte_mempool pointer
   rather than a pointer to opaque data or uint64.
 * comment fixes.
 * fixed parameter to _free function (was inconsistent).
 * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED

v7 changes:

 * Changed rte_mempool_handler_table to rte_mempool_ops_table
 * Changed handler_idx to ops_index in rte_mempool struct
 * Reworked comments in rte_mempool.h around ops functions
 * Changed rte_mempool_handler.c to rte_mempool_ops.c
 * Changed all functions containing _handler_ to _ops_
 * Now there is no mention of 'handler' left
 * Other small changes out of review of mailing list

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * rename functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they
   contain function pointers (security)
 * split out the default external mempool handler into a separate
   patch for easier review

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments, duplicating
   the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having one header file (rte_mempool.h), or it
   would generate cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file as standard mempool tests,
   avoiding duplicating the code
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.