On 09/20/2013 07:34 PM, Robert Sanford wrote:
> One more point, if you're not doing this already: Allocate 2^N-1
> mbufs, not 2^N. According to the code and comments: "The optimum size
> (in terms of memory usage) for a mempool is when n is a power of two
> minus one: n = (2^q - 1)."
>
Many thanks!
One more point, if you're not doing this already: Allocate 2^N-1 mbufs, not
2^N. According to the code and comments: "The optimum size (in terms of
memory usage) for a mempool is when n is a power of two minus one: n = (2^q
- 1)."
The reason: rte_mempool_create(... n ...) invokes rte_ring_create(...,
rte_align32pow2(n+1), ...). Ring sizes must be powers of two and a ring holds
one less object than its size, so asking for exactly 2^q objects rounds the
underlying ring up to 2^(q+1) entries.
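For illustration, a pool sized this way might be created like so (a rough,
untested sketch in the style of the DPDK 1.x examples; the pool name, element
size, and cache depth are placeholders):

    #include <rte_mempool.h>
    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    #define NB_MBUF   ((1 << 20) - 1)  /* 2^q - 1 keeps the ring at 2^q entries */
    #define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

    struct rte_mempool *mp = rte_mempool_create(
            "MBUF_POOL", NB_MBUF, MBUF_SIZE,
            32,                                      /* per-lcore cache */
            sizeof(struct rte_pktmbuf_pool_private),
            rte_pktmbuf_pool_init, NULL,             /* pool constructor */
            rte_pktmbuf_init, NULL,                  /* per-mbuf constructor */
            rte_socket_id(), 0);

Sizing the pool at 2^20 instead of 2^20 - 1 would round the underlying ring up
to 2^21 entries, doubling the ring's footprint for one extra mbuf.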
On 09/19/2013 11:43 PM, Venkatesan, Venky wrote:
> Dmitry,
> One other question - what version of DPDK are you running on?
> -Venky
>
It's DPDK-1.3.1-7 downloaded from intel.com. Should I try upgrading?
On 09/19/2013 11:39 PM, Robert Sanford wrote:
> Hi Dmitry,
>
> The biggest drop-off seems to be from size 128K to 256K. Are you using
> 1GB huge pages already (rather than 2MB)?
>
> I would think that it would not use over 1GB until you ask for 512K
> mbufs or more.
>
Hi Robert,
Yes, I've been using 1GB huge pages.
It might be interesting to see the start/end addresses of the 256K-item (256
* 1024 * 2240 = 560 MB) mbuf memory pool. Maybe it's the first size that
straddles two 1GB pages.
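If it helps, here is one way to eyeball that range (a rough, untested sketch;
print_pool_range is a hypothetical helper, and it assumes the pool is idle so
that every object can be drained and handed back):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <rte_mempool.h>

    /* Sketch: drain an idle pool, record the lowest and highest object
     * addresses, then return every object. Only virtual addresses are
     * inspected, but for a hugepage-backed pool this still hints at
     * whether the pool straddles a 1GB boundary. */
    static void print_pool_range(struct rte_mempool *mp)
    {
        void **objs = malloc(mp->size * sizeof(*objs));
        uintptr_t lo = UINTPTR_MAX, hi = 0;
        unsigned n = 0;

        while (n < mp->size && rte_mempool_get(mp, &objs[n]) == 0) {
            uintptr_t a = (uintptr_t)objs[n];
            if (a < lo) lo = a;
            if (a > hi) hi = a;
            n++;
        }
        rte_mempool_put_bulk(mp, objs, n);   /* give everything back */
        free(objs);

        if (n > 0)
            printf("pool spans %p .. %p (%s a 1GB-aligned boundary)\n",
                   (void *)lo, (void *)hi,
                   (lo >> 30) == (hi >> 30) ? "within" : "crosses");
    }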
Perhaps you should try a tool that reports cache misses, TLB misses, and
related statistics. I don't know much about this area.
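Something like Linux perf might do it (untested; event availability varies by
CPU, and the process name below is a placeholder):

    perf stat -e dTLB-load-misses,dTLB-store-misses,cache-misses \
        -p $(pidof your_dpdk_app) -- sleep 10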
Hi Dmitry,
The biggest drop-off seems to be from size 128K to 256K. Are you using 1GB
huge pages already (rather than 2MB)?
I would think that it would not use over 1GB until you ask for 512K mbufs
or more.
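In case it helps, 1GB pages are typically reserved on the kernel command line
(the counts here are illustrative) and then verified in /proc/meminfo:

    default_hugepagesz=1G hugepagesz=1G hugepages=4   (kernel boot parameters)

    grep Huge /proc/meminfo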
--
Regards,
Robert
On Thu, Sep 19, 2013 at 3:50 AM, Dmitry Vyal wrote:
> Good day everyone, [...]
Good day everyone,
While working on IP packet defragmentation I had to enlarge the mempool size
to provide a large enough time window for assembling a fragment sequence.
Unfortunately, I got a performance regression: if I enlarge the mempool size
from 2**12 to 2**20 MBufs, packet forwarding performance drops sharply.
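Back-of-the-envelope, assuming roughly 2240 bytes per mbuf (the figure used
earlier in the thread):

    2**12 mbufs * 2240 B ≈   9 MB  (fits comfortably within one 1GB page)
    2**20 mbufs * 2240 B ≈ 2.3 GB  (necessarily spans several 1GB pages)

which would be consistent with the TLB-miss theory discussed above.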