On Tue, 1 Mar 2022 13:37:07 -0800 Cliff Burdick <shakl...@gmail.com> wrote:
> Can you verify how many buffers you're allocating? I don't see how many
> you're allocating in this thread.
>
> On Tue, Mar 1, 2022 at 1:30 PM Lombardo, Ed <ed.lomba...@netscout.com>
> wrote:
>
> > Hi Stephen,
> > The VM is configured to have 32 GB of memory.
> > Will dpdk consume the 2GB of hugepage memory for the mbufs?
> > I don't mind having fewer mbufs with an mbuf size of 16K vs. the
> > original mbuf size of 2K.
> >
> > Thanks,
> > Ed
> >
> > -----Original Message-----
> > From: Stephen Hemminger <step...@networkplumber.org>
> > Sent: Tuesday, March 1, 2022 2:57 PM
> > To: Lombardo, Ed <ed.lomba...@netscout.com>
> > Cc: users@dpdk.org
> > Subject: Re: How to increase mbuf size in dpdk version 17.11
> >
> > On Tue, 1 Mar 2022 18:34:22 +0000
> > "Lombardo, Ed" <ed.lomba...@netscout.com> wrote:
> >
> > > Hi,
> > > I have an application built with dpdk 17.11.
> > > During initialization I want to change the mbuf size from 2K to 16K.
> > > I want to receive packet sizes of 8K or more in one mbuf.
> > >
> > > The VM running the application is configured to have 2G hugepages.
> > >
> > > I tried many things, and I get an error when a packet arrives.
> > >
> > > I read online that there is a #define DEFAULT_MBUF_DATA_SIZE, which I
> > > changed from 2176 to ((2048*8)+128), where 128 is for headroom.
> > > The call to rte_pktmbuf_pool_create() returns success with my changes.
> > > From rte_mempool_dump(): "rx_nombuf" - total number of Rx mbuf
> > > allocation failures. This value increments each time a packet arrives.
> > >
> > > Is there any reference document explaining what causes this error?
> > > Is there a user guide I should follow to make the mbuf size change,
> > > starting with the hugepage value?
> > > Thanks,
> > > Ed
> >
> > Did you check that you have enough memory in the system for the larger
> > footprint? Using 16K per mbuf is going to cause lots of memory to be
> > consumed.

A little maths; you can fill in your own values. Assuming you want 16K of
data, you need at a minimum [1]:

  num_rxq := total number of receive queues
  num_rxd := number of receive descriptors per receive queue
  num_txq := total number of transmit queues (assume all can be full)
  num_txd := number of transmit descriptors per transmit queue

  num_mbufs = num_rxq * num_rxd + num_txq * num_txd + num_cores * burst_size

Assuming you are using code copy/pasted from an example like l3fwd, with
4 Rx queues:

  num_mbufs = 4 * 1024 + 4 * 1024 + 4 * 32 = 8320

Each mbuf element requires [2]:

  elt_size = sizeof(struct rte_mbuf) + HEADROOM + mbuf_size
           = 128 + 128 + 16K = 16640
  obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL) = 16832

So the total pool is:

  num_mbufs * obj_size = 8320 * 16832 = 140,042,240 bytes, or about 134 MiB.

[1] Some devices, like bnxt, need multiple buffers per packet.
[2] Applications often want additional space per mbuf for meta-data.