Hi all, Hi Dariusz,




Thanks a lot for your help so far. We really appreciate it.

I just want to follow up on the question my colleague Tao asked a while back.



Our question is actually quite simple: issuing the commands listed below on a ConnectX-6 Dx card breaks the bifurcated nature of the mlx5 driver in the Linux kernel for PF1 (no traffic is forwarded to the Linux kernel on PF1 anymore).

No testpmd or other DPDK application needs to be started; issuing the following commands is already enough to break PF1 in the Linux kernel (a minimal sketch of how we observe this follows the commands):



sudo devlink dev eswitch set pci/0000:8a:00.0 mode switchdev

sudo devlink dev eswitch set pci/0000:8a:00.1 mode switchdev

sudo devlink dev param set pci/0000:8a:00.0 name esw_multiport value true cmode runtime

sudo devlink dev param set pci/0000:8a:00.1 name esw_multiport value true cmode runtime
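
For completeness, here is a minimal sketch of how we observe the breakage (the interface name enp138s0f1np1 and the address are placeholders from our setup; substitute your own):

# on the host: watch PF1's kernel netdev (placeholder name)
sudo tcpdump -i enp138s0f1np1 icmp

# on the directly connected peer: ping the address configured on that netdev
ping 192.168.100.2

Before the devlink commands above, the ICMP requests are visible in tcpdump; after them, nothing reaches the kernel netdev anymore.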





----<test environment>-----

pci/0000:8a:00.0:

  driver mlx5_core

  versions:

      fixed:

        fw.psid MT_0000000359

      running:

        fw.version 22.39.2048

        fw 22.39.2048

Linux kernel version: 6.6.16

DPDK: 23.11 (not actually needed to reproduce the issue)

----</test environment>------
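
For reference, the firmware details above can be re-queried at any time with:

sudo devlink dev info pci/0000:8a:00.0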





This makes the e-switch multiport feature unusable for us, and we are really keen to use it. Could you please advise whether we are missing something here?



Thanks & Regards





Guvenc Gulce

-------------------- previous email exchange -------------------------------



Hi Dariusz,



Thank you very much for taking a look at the issue and providing suggestions. This time, we again performed tests using two directly connected machines and focused on ICMP (IPv4) packets in addition to the ICMPv6 packets mentioned in the original problem description. The issue remains the same. I would like to highlight two points about our setup:





  1.  ICMP packets can no longer be captured on PF1 immediately after setting the NIC into multiport e-switch mode. If I switch multiport e-switch mode off again using the following two commands, ICMP communication resumes immediately, which shows that the system configuration (e.g. the firewall) is correct. I would also assume this has little to do with a running DPDK application, as communication is already broken before starting an application like testpmd (a sketch for reading the parameter state back follows after this list).



sudo devlink dev param set pci/0000:3b:00.0 name esw_multiport value false cmode runtime

sudo devlink dev param set pci/0000:3b:00.1 name esw_multiport value false cmode runtime





  2.  In this setup, we do not use the MLNX_OFED drivers but rely on the upstream mlx5 drivers from Linux kernel 6.5.0 (newer than the suggested kernel version 6.3). Would that make a difference? Could you share some more detailed information about the environment setup on your side? The firmware version we are using for the Mellanox ConnectX-6 is 22.39.1002.
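
As a side note, the current state of the parameter can be read back with devlink (a minimal sketch, using the same device addresses as above):

sudo devlink dev param show pci/0000:3b:00.0 name esw_multiport
sudo devlink dev param show pci/0000:3b:00.1 name esw_multiport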



Looking forward to your further reply. Thanks in advance.



Best regards,

Tao Li



From: Dariusz Sosnowski <dsosnowski at nvidia.com>

Date: Friday, 19. April 2024 at 19:30

To: Tao Li <byteocean at hotmail.com>, users at dpdk.org

Cc: tao.li06 at sap.com

Subject: RE: Packets cannot reach host's kernel in multiport e-switch mode (mlx5 driver)

Hi,



I could not reproduce the issue locally with testpmd when flow isolation is enabled. I can see ICMP packets passing both ways to the kernel interfaces of PF0 and PF1.

Without flow isolation, it is expected that traffic coming to the host is hijacked by DPDK (depending on the MAC address, multicast configuration and promiscuous mode).
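
As a side note, flow isolation can also be toggled per port from the testpmd prompt while the port is stopped (a sketch; the --flow-isolate-all option below simply enables it for all ports at startup):

        flow isolate 0 1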



Could you please run testpmd with the following command line parameters and 
execute the following commands?



Testpmd command line:

        dpdk-testpmd -a 3b:00.0,dv_flow_en=2,representor=pf0-1vf0 -- --flow-isolate-all -i



Testpmd commands:

        port stop all

        flow configure 0 queues_number 4 queues_size 64

        flow configure 1 queues_number 4 queues_size 64

        flow configure 2 queues_number 4 queues_size 64

        flow configure 3 queues_number 4 queues_size 64

        port start 0

        port start 1

        port start 2

        port start 3

        set verbose 1

        set fwd rxonly

        start



With this testpmd running, could you please test if both PF0 and PF1 kernel 
interfaces are reachable and all packets pass?
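
For example, watching the RX counters of the kernel netdevs while pinging from the peer will show whether packets reach the kernel (the interface name is a placeholder):

        ip -s link show enp59s0f1np1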



Best regards,

Dariusz Sosnowski
