Did you see my other suggestion about increasing buffers-per-numa? Your
NICs were reporting RX misses because no free buffers were available.
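
For example, a minimal buffers stanza in /etc/vpp/startup.conf could look
like this (the count below is just an illustrative value; size it for your
traffic and available memory):

buffers {
  # one buffer pool per NUMA node; raise this if RX misses keep showing up
  buffers-per-numa 128000
}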

Setting the main heap page size to 2M or 1G hugepages instead of the
default 4K pages will probably help too.
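
Roughly like this, assuming your VPP version supports main-heap-page-size
and the hugepages are already reserved on the host:

memory {
  # back the main heap with 1G hugepages instead of the default 4K pages
  main-heap-size 2G
  main-heap-page-size 1G
}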

-Matt


On Tue, Jan 3, 2023 at 1:37 PM <r...@gmx.net> wrote:

> Hi Matt,
>
> thanks. The *no-multi-seg* option is actually dropping the performance
> even further: once enabled, throughput falls from 5 Mpps (out of the
> expected 10 Mpps) to less than 1 Mpps, so I disabled the option again.
>
> The DPDK l2fwd application forwards the full 10 Mpps without any devargs:
>
> ./dpdk-l2fwd -n 4 -l 6  -a 0000:4b:00.0 -a 0000:4b:00.1   -- -q 2 -p 0x3
>
> and adding those devargs also confirms the full 10 Mpps. Either way,
> *l2fwd* does not drop any packets and sustains the full load.
>
> ./dpdk-l2fwd -n 4 -l 6  -a
> 0000:4b:00.0,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128,rxq_pkt_pad_en=1,dv_flow_en=0
> -a
> 0000:4b:00.1,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128,rxq_pkt_pad_en=1,dv_flow_en=0
>  -- -q 2 -p 0x3
>
>
> In contrast, with the default DPDK options in */etc/vpp/startup.conf* I
> only get 4 Mpps out of the expected 10 Mpps.
> Once I added the devargs
> *mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128,rxq_pkt_pad_en=1,dv_flow_en=0*
> it gained another 20% (as recommended in the DPDK Mellanox performance
> report).
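>
> For reference, those devargs can also be attached per interface in the
> dpdk stanza of /etc/vpp/startup.conf. A rough sketch, untested, with the
> PCI addresses taken from the l2fwd command above:
>
> dpdk {
>   dev 0000:4b:00.0 {
>     devargs mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128,rxq_pkt_pad_en=1,dv_flow_en=0
>   }
>   dev 0000:4b:00.1 {
>     devargs mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128,rxq_pkt_pad_en=1,dv_flow_en=0
>   }
> }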
> 
>
>