Re: [vpp-dev] vpp+dpdk

2022-12-18 Thread first_semon
Can anyone answer my question? Someone official, perhaps? I am using a PCI NIC called N10.

Re: [vpp-dev] vpp+dpdk

2022-12-18 Thread first_semon
Can anyone answer my question? Someone official, perhaps?

Re: [vpp-dev] mellanox mlx5 + rdma + lcpng + bond - performance (tuning ? or just FIB/RIB processing limit) (max performance pps about 2Mpps when packet drops starts)

2022-12-18 Thread Paweł Staszewski
Switched to native Linux kernel IP forwarding and can now do 12 Mpps with 12 cores at about 80 Gbit/s. Basically it looks like an lcp and routing problem - I don't know how these tests are done for lcp, but it looks like the tests go roughly like this:
1. load 900k routes
2. connect to traffic generator -
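For reference, a minimal sketch of the "load 900k routes" step against a plain Linux kernel FIB (matching the native-forwarding setup mentioned above) could look like the Python snippet below. This is not from the thread: the prefix range, next hop and interface name are placeholders, and when testing the VPP/linux-cp path the routes would instead have to be learned through the lcp interface pairs (e.g. via a routing daemon).

#!/usr/bin/env python3
# Sketch only: generate a large synthetic route table and push it into the
# Linux kernel FIB with `ip -batch`, roughly mirroring the "load 900k routes"
# step. Prefix range, next hop and interface name are lab placeholders.
import ipaddress
import subprocess
import tempfile

NUM_ROUTES = 900_000        # route count quoted in the test description
NEXT_HOP = "192.0.2.1"      # hypothetical next hop (TEST-NET-1)
DEVICE = "enp3s0f0"         # hypothetical egress interface

def main() -> None:
    # /32 host routes drawn from 10.0.0.0/8 (~16M addresses, so 900k fits).
    hosts = ipaddress.ip_network("10.0.0.0/8").hosts()
    with tempfile.NamedTemporaryFile("w", suffix=".batch", delete=False) as f:
        for _ in range(NUM_ROUTES):
            f.write(f"route add {next(hosts)}/32 via {NEXT_HOP} dev {DEVICE}\n")
        batch_file = f.name
    # One iproute2 process for the whole batch is far faster than spawning
    # `ip route add` 900k times.
    subprocess.run(["ip", "-batch", batch_file], check=True)

if __name__ == "__main__":
    main()

After loading, the traffic generator would then be pointed at prefixes inside 10.0.0.0/8 to exercise FIB lookups across the full table.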