[dpdk-users] VMware i40en SR-IOV VLAN
Does anyone use DPDK with VLANs on an SR-IOV VF with the i40en NIC driver under VMware? It seems the new 'security' in the Intel i40en driver broke the ability for DPDK to set VLAN filters on SR-IOV. Specifically, the VF has to be manually set to 'trusted mode' via a CLI command to allow more than 8 VLAN filters to be added on a VF with DEV_RX_OFFLOAD_VLAN_FILTER set using rte_eth_dev_vlan_filter. Has anyone had a similar experience, or found a workaround?
Re: [dpdk-users] Support for more RSS hash types in vmxnet3
On 8/21/2018 6:57 PM, Jay Miller wrote:
> It's clear that the vmxnet3 driver (even as of 18.08) supports just a
> subset of RSS hash types:
>
> #define VMXNET3_RSS_OFFLOAD_ALL ( \
>         ETH_RSS_IPV4 | \
>         ETH_RSS_NONFRAG_IPV4_TCP | \
>         ETH_RSS_IPV6 | \
>         ETH_RSS_NONFRAG_IPV6_TCP)
>
> Are there plans to add support for other hash types (like
> ETH_RSS_NONFRAG_IPV4_UDP), or is this an architectural limitation of
> vmxnet3?

On August 22, 2018 at 2:55 AM, Ferruh Yigit wrote:
> Hi Yong,
>
> Can you please double-check that the driver reports all supported hash
> functions correctly?
>
> In v18.08, the RSS hf request from the application changed from best
> effort to a strict requirement: if an application requests a hash
> function that the driver doesn't report as supported, the API returns an
> error. That is why it is important for the PMD to report supported hf
> properly.
>
> Thanks,
> ferruh

On September 13, 2018 6:44 PM, Yong Wang wrote:
> That's pretty much all the hash types supported by vmxnet3 by default up
> to version 3. With version 4, UDP RSS will be supported, but only on
> certain versions of ESX. Since the v4 driver is not out yet, the current
> VMXNET3_RSS_OFFLOAD_ALL should be good.

Ferruh/Yong,

I notice that ESXi 6.7 has been released for some months now, with support for VMXNET3 version 4 including RSS for UDP:

https://docs.vmware.com/en/vSphere/6.7/solutions/vSphere-6.7.2cd6d2a77980cc623caa6062f3c89362/GUID-C500585C0560D28B71180A40A4767C57.html

I'm surprised this wasn't already present in DPDK 19.02 given how long 6.7 has been available. Is it just a matter of changing the definition of VMXNET3_RSS_OFFLOAD_ALL in the PMD to support this, or are other changes required?

thanks,
Iain
Re: [dpdk-users] Outer VLAN stripping X710 NIC
I posted my observations on the same issue a year or so ago. See my follow-ups on this thread for a patch to work around the problem: http://mails.dpdk.org/archives/users/2017-October/002530.html

Note that the patch adds real-time overhead, so it was rejected upstream. But we are using it to good effect in our production systems.
[dpdk-users] Wrong OFED version for DPDK 17.11.1?
Hi all,

I'm trying to build DPDK 17.11.1 against libibverbs from OFED 4.2-1.0.0.0 but am hitting API compatibility issues. According to the DPDK docs, that is the correct version for DPDK 17.11:

  "Mellanox OFED version: 4.2" in https://dpdk.org/doc/guides-17.11/nics/mlx5.html

But I get multiple errors which suggest the API versions are incompatible. For example:

  priv->hw_csum = !!(device_attr_ex.device_cap_flags_ex & IBV_DEVICE_RAW_IP_CSUM);

  drivers/net/mlx5/mlx5.c:794:21: error: 'struct ibv_device_attr_ex' has no member named 'device_cap_flags_ex'

MLNX_OFED_SRC-4.2-1.0.0.0 contains libibverbs-41mlnx1, and its include/infiniband/verbs.h declares:

  struct ibv_device_attr_ex {
          struct ibv_device_attr  orig_attr;
          uint32_t                comp_mask;
          struct ibv_odp_caps     odp_caps;
  };

So the docs appear to be wrong; there is no way the DPDK 17.11 mlx5.c is going to compile against that header. What version of libibverbs should I actually be using for DPDK 17.11.1?

thanks,
Iain
Re: [dpdk-users] VLAN tags always stripped on i40evf [VMware SR-IOV]
Original message from Iain Barker:
> --- drivers/net/i40e/i40e_rxtx.c.orig 2016-11-30 04:28:48.0 -0500
> +++ drivers/net/i40e/i40e_rxtx.c 2017-10-10 15:07:10.851398087 -0400
> @@ -93,6 +93,8 @@
>    rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
>    PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
>       rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
> + // vlan got stripped. Re-inject vlan from tci
> + rte_vlan_insert(&mb);
>   } else {
>    mb->vlan_tci = 0;
>   }

From: Xing, Beilei [mailto:beilei.x...@intel.com]:
> NACK this patch as it will impact performance.
>
> VLAN strip is supported by the latest i40e kernel driver; it works well
> in kernel PF + DPDK VF mode with i40e kernel driver 2.1.26 and DPDK
> 17.08. Please refer to the latest kernel driver.
>
> Beilei

Thanks Beilei. I wasn't proposing to submit the patch; I was just using it as a demonstration of the problem, i.e. that the TCI has the required data, but the tag was stripped before the frame arrives at the app.

In this case the host for the PF is not a Linux kernel, it is VMware ESXi. The latest Intel PF driver I see for i40e is version 2.0.6. Will that work?
Re: [dpdk-users] VLAN tags always stripped on i40evf [VMware SR-IOV]
On Tuesday, October 10, 2017 9:49 AM (EST), Iain Barker wrote:
> I have a problem trying to get VLAN tagged frames to be received at the
> i40evf PMD.

With more debugging enabled, I can see that this appears to be a compatibility problem between DPDK and i40evf related to VLAN hardware stripping. Specifically, when DPDK requests that VLAN stripping be disabled on the VF, but the PF policy doesn't allow it to be disabled (as is the case for VMware SR-IOV), the API returns an error:

  testpmd> vlan set strip off 0
  i40evf_execute_vf_cmd(): No response for 28
  i40evf_disable_vlan_strip(): Failed to execute command of VIRTCHNL_OP_DISABLE_VLAN_STRIPPING

In that case, received frames with VLAN headers are still stripped at the PF, and the TCI records the missing VLAN details when the frame is handed up to the DPDK driver. With i40e debug enabled, the difference is clear to see in i40e_rxd_to_vlan_tci.

Example using VLAN on i40e PCI (vlan works):

  PMD: i40e_rxd_to_vlan_tci(): Mbuf vlan_tci: 0, vlan_tci_outer: 0
  Port 0 pkt-len=102 nb-segs=1
  ETH: src=00:10:E0:8D:A7:52 dst=00:10:E0:8A:86:8A [vlan id=8] type=0x0800
  IPV4: src=8.8.8.102 dst=8.8.8.3 proto=1 (ICMP)
  ICMP: echo request seq id=1

Example using VLAN on i40evf SR-IOV (vlan fails):

  PMD: i40e_rxd_to_vlan_tci(): Mbuf vlan_tci: 8, vlan_tci_outer: 0
  Port 0 pkt-len=60 nb-segs=1
  ETH: src=00:10:E0:8D:A7:52 dst=FF:FF:FF:FF:FF:FF type=0x0806
  ARP: hrd=1 proto=0x0800 hln=6 pln=4 op=1 (ARP Request)
  sha=00:10:E0:8D:A7:52 sip=8.8.8.102 tha=00:00:00:00:00:00 tip=8.8.8.3

Since the application requested that tags not be stripped, and the hardware driver was not able to disable stripping, in my opinion DPDK should emulate the requested behavior by re-adding the missing VLAN header in the RX thread before it passes the mbuf to the application. I'm guessing that the native Linux driver is smart enough to do something like this automatically in software, but DPDK is not.
Adding a call to rte_vlan_insert() to reinstate the VLAN header using the data from the TCI is sufficient to avoid the problem in a quick test:

  --- drivers/net/i40e/i40e_rxtx.c.orig 2016-11-30 04:28:48.0 -0500
  +++ drivers/net/i40e/i40e_rxtx.c 2017-10-10 15:07:10.851398087 -0400
  @@ -93,6 +93,8 @@
     rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
     PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
        rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
  + /* vlan got stripped; re-inject vlan from tci */
  + rte_vlan_insert(&mb);
    } else {
     mb->vlan_tci = 0;
    }

For a proper solution, this would need to be made selective based on whether the port config originally asked for VLANs to be stripped or not. But I'm not sure that rte_vlan_insert() has enough context to be able to access that data, as it's stored in the driver/hw struct, not the rx buffer. The same change would obviously be required in the vector rxtx and similar data paths for other drivers, if they are affected by the same shortcoming. I don't have other combinations available that I could test with, and I guess VMware i40evf SR-IOV VLAN isn't part of the DPDK release test suite either.

cc: d...@dpdk.org for comment, as this is getting beyond my level of knowledge as a DPDK user.

thanks,
Iain
[dpdk-users] VLAN tags always stripped on i40evf [VMware SR-IOV]
I have a problem trying to get VLAN tagged frames received by the i40evf PMD. Specifically, there is no connectivity on a VMware i40e media interface configured as SR-IOV when VLANs are used, but it works fine for non-VLAN-tagged frames. It seems the VLAN headers are being stripped before they reach the DPDK application, yet the exact same configuration works fine with the native Linux driver and VLANs.

Host is VMware ESXi 6.5, XL710 interfaces (i40e) with SR-IOV enabled at the vswitch PF, configured in VLAN trunking mode (VLAN tags pass through to the guest). An external system sends VLAN 8 tagged frames to the guest VM interface; the trivial testcase is to use ICMP 'ping' across the VLAN from eth1.8 on another Linux system.

With the Linux 3.10 native i40evf driver in the guest VM, the VLAN 8 tagged frames are received on i40evf, and the outgoing ICMP reply is tagged correctly as VLAN 8 (verified with wireshark that the VLAN 8 header is present on the frames).

With the DPDK PMD (tried DPDK 16.07, 16.11, 17.08) running testpmd icmpecho, the frames are received without any VLAN tag, so the reply is sent untagged and the 'ping' fails (verified using dpdk-pdump that the frames have the VLAN stripped).

Switching from i40evf to the vmxnet3 PV interface, testpmd works correctly with VLAN tagged frames. Also, sending the ICMP ping without VLAN headers on the i40evf, testpmd works correctly for the untagged case. So the problem seems to be specific to the DPDK i40e PMD when using VLANs in SR-IOV mode.

My first thought was that maybe VMware was configuring the PF to force VLAN stripping, but since the Linux native i40evf driver receives the tags correctly, the VLANs must be able to pass from PF to VF. Maybe DPDK is doing something 'different' compared to Linux when setting up the VF, and breaking VLAN tagging as a result? But I don't know what that might be. Any ideas what I am doing wrong? I've included the test sequence below, which trivially reproduces the difference in behavior.

PS. I also tried different firmware and driver versions on the host interface, including the versions listed in the release notes for each DPDK release, with exactly the same behavior.

---
Steps to configure the Linux environment:

  ifconfig eth0 2.2.2.3
  vconfig add eth0 8
  ifconfig eth0.8 8.8.8.3
  dumpcap -i eth0

Ping the 2.2.2.3 and 8.8.8.3 IP addresses from the far-end device. Observe that packets in the pcap (viewed using wireshark) are untagged for 2.2.2.3 and tagged VLAN 8 for 8.8.8.3, for both the ICMP echo and reply.

Steps to configure the DPDK environment:

  ifconfig eth0 down
  dpdk-devbind.py --bind=igb_uio eth0
  testpmd --log-level=8 --pci-whitelist :03:00.0 -- --interactive --forward-mode=icmpecho

Then in testpmd:

  set verbose 1
  set promisc 0 off
  set fwd icmpecho
  vlan set strip off 0
  vlan set filter off 0
  start

Note: Verify that vlan strip and vlan filter are both disabled in 'show port info all'.

From another session on the Linux host, run the dpdk-pdump tool to capture the trace:

  dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap,tx-dev=/tmp/capture.pcap'

Ping the 2.2.2.3 and 8.8.8.3 IP addresses from the far-end device. Observe that all packets in the pcap appear untagged, for both the ICMP echo and reply.
---

thanks,
Iain