Thanks, I knew that document and we've implemented many of those
settings/rules, but perhaps there's a crucial one I've forgotten? I wonder
which one.
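One thing from that guide I keep re-checking is NUMA placement: every RX
worker should sit on the same node as the NIC. A minimal sketch of the check
(assuming the E810 is port 0; the rest is the usual EAL boilerplate):

#include <stdio.h>
#include <stdlib.h>

#include <rte_debug.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    /* Usual EAL init; the core list etc. come from the command line. */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    uint16_t port_id = 0; /* assumption: the E810 is port 0 */
    int nic_socket = rte_eth_dev_socket_id(port_id);
    printf("port %u is on NUMA node %d\n", port_id, nic_socket);

    /* A worker on another node reads the NIC across the fabric,
     * which costs much more on EPYC than on Xeon. */
    unsigned int lcore_id;
    RTE_LCORE_FOREACH_WORKER(lcore_id) {
        unsigned int node = rte_lcore_to_socket_id(lcore_id);
        printf("lcore %u -> node %u%s\n", lcore_id, node,
               (int)node == nic_socket ? "" : "  <-- remote to the NIC");
    }

    rte_eal_cleanup();
    return 0;
}

With NPS set above 1 on these Milan boxes, a "node" here is finer-grained
than a socket, so cores can be remote to the NIC even within one socket.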
Anyway, increasing the number of queues hurts performance here. Sending 250M
packets over a 100GbE link to an Intel E810-CQDA2 NIC mounted on the EPYC
Milan server, I see:

1 queue,  30Gbps, ~45Mpps, 64B frames = imiss: 54,590,111
2 queues, 30Gbps, ~45Mpps, 64B frames = imiss: 79,394,138
4 queues, 30Gbps, ~45Mpps, 64B frames = imiss: 87,414,030

This is with DPDK 21.02 on RHEL 8.4. I don't observe this on my Intel server,
where increasing the queue count leads to better performance: with the same
test input set I drop with one queue, but no longer drop with two.

A customer with a brand-new EPYC Milan server in his lab has observed this
scenario as well, which is a bit of a worry, but then again it might be some
config/compilation issue we need to deal with?

BTW, the same issue can be reproduced with testpmd, using 4 queues and the
same input data set (250M 64-byte frames at 30Gbps):

testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
  RX-packets: 41762999       TX-packets: 0             TX-dropped: 0

  ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1 -------
  RX-packets: 40152306       TX-packets: 0             TX-dropped: 0

  ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 0/Queue= 2 -------
  RX-packets: 41153402       TX-packets: 0             TX-dropped: 0

  ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 0/Queue= 3 -------
  RX-packets: 38341370       TX-packets: 0             TX-dropped: 0

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 161410077      RX-dropped: 88589923      RX-total: 250000000
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 161410081
tx_good_packets: 0
rx_good_bytes: 9684605284
tx_good_bytes: 0
rx_missed_errors: 88589923

I can't figure out what's wrong here.
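In case it matters, our queue setup is essentially the standard DPDK pattern
(mbuf pool and RX descriptor ring allocated on the NIC's node), along these
lines; the pool name and sizes below are illustrative, not our real values:

#include <stdio.h>
#include <stdlib.h>

#include <rte_debug.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static struct rte_mempool *
setup_rx_queue(uint16_t port_id, uint16_t queue_id)
{
    char name[RTE_MEMPOOL_NAMESIZE];
    snprintf(name, sizeof(name), "rx_pool_%u", queue_id);

    /* Put the mbuf pool on the NIC's NUMA node, not on whatever node
     * the main lcore happens to run on. Sizes are placeholders. */
    int nic_socket = rte_eth_dev_socket_id(port_id);
    struct rte_mempool *mp = rte_pktmbuf_pool_create(name,
            262143,                    /* mbufs: 2^18 - 1 */
            512,                       /* per-lcore cache */
            0,                         /* private area */
            RTE_MBUF_DEFAULT_BUF_SIZE,
            nic_socket);
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool allocation failed\n");

    /* The RX descriptor ring goes on the NIC's node as well. */
    if (rte_eth_rx_queue_setup(port_id, queue_id, 4096,
            nic_socket, NULL, mp) < 0)
        rte_exit(EXIT_FAILURE, "rx queue setup failed\n");

    return mp;
}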
On 9/11/21 12:20 PM, Steffen Weise wrote:
> Hi Filip,
>
> I have not seen the same issues.
> Are you aware of this tuning guide? I applied it and had no issues with
> an Intel 100G NIC.
>
> HPC Tuning Guide for AMD EPYC Processors
> http://developer.amd.com/wp-content/resources/56420.pdf
>
> Hope it helps.
>
> Cheers,
> Steffen Weise
>
>
>> On 11.09.2021 at 10:56, Filip Janiszewski
>> <cont...@filipjaniszewski.com> wrote:
>>
>> I ran more tests.
>>
>> This AMD server is a bit confusing: I can tune it to capture 28Mpps
>> (64-byte frames) on a single core, so I would assume that using one more
>> core would at least increase the capture capability a bit, but it
>> doesn't; I get 1% more speed, and it drops regardless of how many queues
>> are configured. I've not observed this on the Intel server, where adding
>> more queues/cores scales to higher throughput.
>>
>> This issue has now been verified with both Mellanox and Intel (810
>> series, 100GbE) NICs.
>>
>> Anybody encountered anything similar?
>>
>> Thanks
>>
>> On 9/10/21 3:34 PM, Filip Janiszewski wrote:
>>> Hi,
>>>
>>> I've switched a 100GbE MLX ConnectX-4 card from an Intel Xeon server
>>> to an AMD EPYC server (running a 75F3 CPU, 256GiB of RAM and PCIe4
>>> lanes), and using the same capture software we can't get any faster
>>> than 10Gbps: when exceeding that speed, regardless of the number of
>>> queues configured, the rx_discards_phy counter starts to rise and
>>> packets are lost in huge amounts.
>>>
>>> On the Xeon machine, I was easily able to get to 50Gbps with 4 queues.
>>>
>>> Is there any specific DPDK configuration that we might want to set up
>>> for those AMD servers? The software is DPDK-based, so I wonder if some
>>> build option is missing somewhere.
>>>
>>> What else might I want to look at to investigate this issue?
>>>
>>> Thanks
>>
>> --
>> BR, Filip
>> +48 666 369 823

--
BR, Filip
+48 666 369 823