Hey Folks,
I ran into the same issue that Alex is describing here, and I wanted to
expand just a little bit on his comments, as the documentation isn't very
clear.
Per the documentation, the two arguments to rte_pktmbuf_pool_init() are a
pointer to the memory pool that contains the newly-allocated mbufs and an
opaque pointer. What it doesn't make obvious is that the opaque pointer,
when non-NULL, is taken as the mbuf data room size; if you leave it NULL
the default of 2048 + RTE_PKTMBUF_HEADROOM is used, which is why large
frames end up scattered across 2K buffers.
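To make that concrete, here's a minimal sketch of creating a pool sized for
the 4.5K jumbo frames discussed below. The pool name, mbuf count, cache size
and the 5K data room are my own illustrative values, not something from this
thread:

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_lcore.h>

/* Illustrative values: 5K of usable buffer per mbuf so a 4.5K frame
 * still fits after the NIC rounds the size down to a 1K multiple. */
#define JUMBO_DATA_ROOM  (5120 + RTE_PKTMBUF_HEADROOM)
#define MBUF_ELT_SIZE    (sizeof(struct rte_mbuf) + JUMBO_DATA_ROOM)
#define NB_MBUF          8192

static struct rte_mempool *
jumbo_pool_create(void)
{
        /* The opaque ctor argument is what rte_pktmbuf_pool_init()
         * records as the per-mbuf data room size; leaving it NULL falls
         * back to the 2K default, which is what makes the VF scatter
         * large frames across several mbufs. */
        return rte_mempool_create("jumbo_mbuf_pool", NB_MBUF,
                        MBUF_ELT_SIZE, 256,
                        sizeof(struct rte_pktmbuf_pool_private),
                        rte_pktmbuf_pool_init,
                        (void *)(uintptr_t)JUMBO_DATA_ROOM,
                        rte_pktmbuf_init, NULL,
                        rte_socket_id(), 0);
}

The only part that matters for this thread is the ctor argument; everything
else is the usual mempool boilerplate.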
For posterity.
1. When using an MTU larger than 2K, it's advised to provide the value (the
data room size) to rte_pktmbuf_pool_init().
2. ixgbevf rounds ("mbuf size" - RTE_PKTMBUF_HEADROOM) down to the
nearest 1K multiple when deciding on the receive capability [buffer
size] of the buffers in the pool.
The per-queue SRRCTL register's BSIZEPACKET field expresses the receive
buffer size in whole 1 KB units, which is where the rounding comes from.
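As an illustration of point 2 above, the round-down works out to roughly the
following (a sketch under my own naming, not the actual PMD code; the shift
value mirrors the driver's IXGBE_SRRCTL_BSIZEPKT_SHIFT):

#include <stdint.h>
#include <rte_mbuf.h>

#define BSIZEPKT_SHIFT 10  /* SRRCTL.BSIZEPACKET counts whole kilobytes */

/* Effective RX buffer size the VF ends up with for a given mbuf data
 * room size (illustrative helper, not a DPDK API). */
static uint16_t
effective_rx_buf_size(uint16_t mbuf_data_room)
{
        uint16_t avail = (uint16_t)(mbuf_data_room - RTE_PKTMBUF_HEADROOM);

        /* Round down to the nearest 1K:
         * the 2K default room  -> 2048 usable, so a 4.5K frame needs 3 mbufs;
         * a 5K + headroom room -> 5120 usable, so it fits in a single mbuf. */
        return (uint16_t)((avail >> BSIZEPKT_SHIFT) << BSIZEPKT_SHIFT);
}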
On Thu, Oct 30, 2014 at 02:48:42PM +0200, Alex Markuze wrote:
> For posterity.
>
> 1. When using an MTU larger than 2K, it's advised to provide the value (the
> data room size) to rte_pktmbuf_pool_init().
> 2. ixgbevf rounds ("mbuf size" - RTE_PKTMBUF_HEADROOM) down to the
> nearest 1K multiple when deciding on the receive capability [buffer
> size] of the buffers in the pool.
Hi,
I'm seeing unwanted behaviour in the receive flow of ixgbevf. While
using jumbo frames and sending 4K+ bytes, the receive side breaks the
packets up into 2K buffers, and I receive 3 mbufs per packet.
I'm setting .max_rx_pkt_len to 4.5K and the mempool has 5K-sized
elements.
Anything else I need to configure?
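For context, the setup being described is roughly this (a sketch only; 4608
stands in for "4.5K", and any field not mentioned in the question is simply
left at its default):

#include <rte_ethdev.h>

static const struct rte_eth_conf port_conf = {
        .rxmode = {
                .jumbo_frame    = 1,      /* accept frames above the standard MTU */
                .max_rx_pkt_len = 4608,   /* ~4.5K, as described above */
        },
};

With the mempool's data room left at the 2K default, this is exactly the
combination that yields three ~2K segments per 4.5K frame.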
On Thu, Oct 30, 2014 at 12:23:09PM +0200, Alex Markuze wrote:
> Hi,
> I'm seeing unwanted behaviour in the receive flow of ixgbevf. While
> using jumbo frames and sending 4K+ bytes, the receive side breaks the
> packets up into 2K buffers, and I receive 3 mbufs per packet.
>
> I'm setting .max_rx_pkt_len to 4.5K and the mempool has 5K-sized
> elements.