Update from upstream:

Please check out the other patches below that were sent together with the one
patch I listed in my previous reply; a rough sketch of applying them in order
follows the list. Cherry-picking patches from later kernels can be challenging
with the ice driver, because new features and bug fixes land continuously in
every kernel release and the dependencies between them are not always obvious.
 
5951a2b9812d    v5.16-rc6 iavf: Fix VLAN feature flags after VFR
e6ba5273d4ed    v5.16-rc6 ice: Fix race conditions between virtchnl handling and VF ndo ops
b385cca47363    v5.16-rc6 ice: Fix not stopping Tx queues for VFs
0299faeaf8eb    v5.16-rc6 ice: Remove toggling of antispoof for VF trusted promiscuous mode
1a8c7778bcde    v5.16-rc6 ice: Fix VF true promiscuous mode
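
A rough sketch of pulling these into a 5.15-based tree follows; the branch name
is hypothetical, and the pick order should be confirmed with git log, since the
later fixes may depend on the earlier ones:

  # in a kernel checkout that also has the upstream (mainline) history fetched
  git checkout -b lp1983656-vf-fixes
  # apply oldest first; -x records the upstream SHA in each backported commit
  git cherry-pick -x 1a8c7778bcde 0299faeaf8eb b385cca47363 e6ba5273d4ed 5951a2b9812d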
 
Regarding the MTU change issue: it is a known problem in the ice driver and the
team is actively working on a fix. I will share it once it is ready to be sent
to Linux upstream.
 
One more note: for E810 NICs, please recommend that customers update the NVM
image on the NIC to the latest release where possible. The latest release is
available here:
https://www.intel.com/content/www/us/en/download/19626/non-volatile-memory-nvm-update-utility-for-intel-ethernet-network-adapters-e810-series-linux.html
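
If it helps, here is a rough sketch of running the updater from that package.
The archive and directory names below are assumptions based on earlier Intel
NVM update packages, so please check the README inside the download:

  tar xf E810_NVMUpdatePackage_*_Linux.tar.gz   # archive name is an assumption
  cd E810/Linux_x64                             # directory layout is an assumption
  sudo ./nvmupdate64e                           # interactive inventory and update
  # a reboot (often a full power cycle) is needed for the new image to take effect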

-- 
https://bugs.launchpad.net/bugs/1983656

Title:
  SR-IOV VFs no traffic flow and error on Intel E810 (ice / iavf)

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Virtual Machines with SR-IOV VFs from an Intel E810-XXV [8086:159b]
  get no traffic flow and produce error messages in both the host and
  guest during network configuration.

  Environment: Ubuntu OpenStack Focal-Ussuri with OVN
  Host Kernel: v5.15.0-41-generic 20.04 Focal-HWE
  Guest Kernels: v5.4.x Focal, v5.15.0-41-generic Jammy

  Host Error Messages:
  ice 0000:98:00.1: VF 7 failed opcode 6, retval: -5

  Guest Error Messages:
  iavf 0000:00:05.0: PF returned error -5 (IAVF_ERR_PARAM) to our request 6

  In the context of these errors, "6" refers to the value of
  VIRTCHNL_OP_CONFIG_VSI_QUEUES.
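
  For reference, both values can be decoded against the shared virtchnl header
  in a kernel source tree (path relative to the top of the tree):

    # opcode 6 is VIRTCHNL_OP_CONFIG_VSI_QUEUES; the PF's -5 is
    # VIRTCHNL_STATUS_ERR_PARAM, which iavf reports as IAVF_ERR_PARAM
    grep -nE 'VIRTCHNL_OP_CONFIG_VSI_QUEUES|VIRTCHNL_STATUS_ERR_PARAM' \
        include/linux/avf/virtchnl.h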

  It was found in these cases that the VM is able to transmit packets
  successfully but never receives any, and the RX packet drop counter for the
  VF shown by "ip link" on the host increases in step with the RX packet count.
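
  A rough way to watch this on the host (the PF interface name below is
  hypothetical; per-VF counters in ip link require a reasonably recent
  iproute2):

    ip -s link show ens1f1   # each "vf N" entry includes an RX "dropped" counter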

  
  A prior commit, e6ba5273d4ede03d075d7a116b8edad1f6115f4d, claims to resolve
  this error in some cases. It is already included in the test kernel
  v5.15.0-41 and did not resolve the issue.

  These Virtual Machines do work with the mainline v5.19 build on the host,
  which includes the following two VIRTCHNL_OP_CONFIG_VSI_QUEUES-related
  commits that are not currently backported to v5.15 or any upstream stable
  kernel:

  6096dae926a22e2892ef9169f582589c16d39639 ice: clear stale Tx queue settings before configuring [v5.18]
  be2af71496a54a7195ac62caba6fab49cfe5006c ice: Fix queue config fail handling [v5.19]

  Additionally, during testing, if we link down an interface and/or use
  "netplan apply" to start DHCP instead of manual configuration, we trigger the
  memory corruption bug fixed by:
  efe41860008e57fb6b69855b4b93fdf34bc42798 ice: Fix memory corruption in VF driver [v5.19]
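
  A reproduction sketch inside the guest (the interface name is hypothetical):

    ip link set ens5 down
    # or switch the interface from static addressing to DHCP in the netplan
    # configuration and re-apply it
    netplan apply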

  
  It appears that the ice/iavf driver is still quite immature: many
  significant SR-IOV-related fixes have landed in each of the recent kernel
  releases, and we may need to consider proactively backporting more of them.


