On Sun, 08 Jan 2023 16:23:13 +0300
Ruslan R. Laishev <[email protected]> wrote:

> Hello!
>  
> Following the advice from previous mails, I reworked my little app to use
> several lcore-queue pairs to generate traffic. Thanks, it works fine; I now
> see 8Gbps+ with 2 workers.
> But now I have another situation that I cannot resolve. With 2 workers
> (each worker runs on its assigned lcore and puts packets into a separate tx
> queue): after the app starts, both workers run for some time, but at "some
> moment" one worker can no longer get mbufs from rte_pktmbuf_alloc_bulk().
> Just for demonstration, a piece of the stats:
>  
> At start :
> 08-01-2023 16:03:20.065  58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I:  
> [LCore:#001] TX/NoMbufs/Flush:1981397/0/1981397
> 08-01-2023 16:03:20.065  58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I:  
> [LCore:#002] TX/NoMbufs/Flush:1989108/0/1989108
>  
> Since "some moment"
> 08-01-2023 16:15:20.110  58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I:  
> [LCore:#001] TX/NoMbufs/Flush:2197615/5778976181/2197631
> 08-01-2023 16:15:20.110  58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I:  
> [LCore:#002] TX/NoMbufs/Flush:3952732/0/3952732
>  
> 08-01-2023 16:15:30.111  58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I:  
> [LCore:#001] TX/NoMbufs/Flush:2197615/5869388078/2197631
> 08-01-2023 16:15:30.111  58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I:  
> [LCore:#002] TX/NoMbufs/Flush:3980054/0/3980054
>  
> 08-01-2023 16:15:40.111  58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I:  
> [LCore:#001] TX/NoMbufs/Flush:2197615/5959777107/2197631
> 08-01-2023 16:15:40.111  58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I:  
> [LCore:#002] TX/NoMbufs/Flush:4007378/0/4007378
>  
> 08-01-2023 16:15:50.112  58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I:  
> [LCore:#001] TX/NoMbufs/Flush:2197615/6050173812/2197631
> 08-01-2023 16:15:50.112  58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I:  
> [LCore:#002] TX/NoMbufs/Flush:4034699/0/4034699
>  
> 08-01-2023 16:16:00.112  58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I:  
> [LCore:#001] TX/NoMbufs/Flush:2197615/6140583818/2197631
> 08-01-2023 16:16:00.112  58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I:  
> [LCore:#002] TX/NoMbufs/Flush:4062021/0/4062021
>  
> So one worker works fine and as expected, while the second worker is
> permanently unable to get mbufs.
> What should I check?
> Thanks in advance!
>  
> --- 
> Best regards,
> Ruslan R. Laishev
> OpenVMS bigot, natural born system/network progger, C contractor.
> +79013163222
> +79910009922
>  
> 

Two things to look at. First, is the allocated mbuf pool big enough to handle
the maximum number of mbufs in flight in your application? For Tx, that is the
number of transmit queues multiplied by the number of transmit descriptors per
ring, plus some additional buffers for staging. Size the receive side similarly.
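
For illustration, a minimal sizing sketch; the queue counts, ring sizes, burst
size and cache size below are placeholders, not values taken from your app.

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* All of these counts are placeholders -- substitute whatever your
 * application actually configures. */
#define NB_TXQ     2     /* Tx queues (one per worker lcore) */
#define NB_TXD     1024  /* descriptors per Tx ring          */
#define NB_RXQ     1     /* Rx queues                        */
#define NB_RXD     1024  /* descriptors per Rx ring          */
#define NB_LCORES  2     /* lcores touching the pool         */
#define BURST      32    /* per-lcore staging burst          */
#define MBUF_CACHE 256   /* per-lcore mempool cache          */

static struct rte_mempool *
create_pool(void)
{
        /* Worst case in flight: every Tx and Rx descriptor holds an mbuf,
         * plus what each lcore keeps in its staging burst and in its
         * per-lcore mempool cache. */
        unsigned int nb_mbufs = NB_TXQ * NB_TXD + NB_RXQ * NB_RXD +
                                NB_LCORES * (BURST + MBUF_CACHE);

        /* Returns NULL on failure; the caller should check. */
        return rte_pktmbuf_pool_create("mbuf_pool", nb_mbufs, MBUF_CACHE,
                                       0, RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
}

At run time, rte_mempool_avail_count() and rte_mempool_in_use_count() will tell
you whether the pool really is exhausted when rte_pktmbuf_alloc_bulk() starts
failing.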

Second, transmit mbufs need to get cleaned up by the device driver after they
are sent. Depending on the device, this may be triggered by the receive path,
so polling for receive data may be needed even if you aren't doing any receives.
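
If the pool is sized generously and one queue still starves, its Tx completions
are probably not being reclaimed. A rough recovery sketch; the port/queue
numbers and burst size are assumptions, and rte_eth_tx_done_cleanup() is
optional per PMD:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST 32  /* placeholder burst size */

/* Called from a worker when rte_pktmbuf_alloc_bulk() fails. */
static void
reclaim_mbufs(uint16_t port, uint16_t txq, uint16_t rxq)
{
        struct rte_mbuf *rx[BURST];
        uint16_t i, n;

        /* Ask the PMD to free already-transmitted mbufs on this Tx queue.
         * Drivers that do not implement it return -ENOTSUP. */
        rte_eth_tx_done_cleanup(port, txq, 0);

        /* Some devices only release Tx mbufs when the Rx path is serviced,
         * so poll Rx and drop whatever shows up. */
        n = rte_eth_rx_burst(port, rxq, rx, BURST);
        for (i = 0; i < n; i++)
                rte_pktmbuf_free(rx[i]);
}

The worker can then retry the allocation on its next loop iteration.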
