This bug is missing log files that will aid in diagnosing the problem.
While running an Ubuntu kernel (not a mainline or third-party kernel),
please enter the following command in a terminal window:

apport-collect 1960826

and then change the status of the bug to 'Confirmed'.

If, due to the nature of the issue you have encountered, you are unable
to run this command, please add a comment stating that fact and change
the bug status to 'Confirmed'.

This change has been made by an automated script, maintained by the
Ubuntu Kernel Team.

** Changed in: linux (Ubuntu)
       Status: New => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1960826

Title:
  NFSv4 performance problem with newer kernels

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  I am seeing an issue with NFSv4 performance on Ubuntu 20.04.3 LTS on
  both clients and server (tested kernels 5.4 to 5.16), where the server
  is connected by 10Gbit Ethernet and multiple clients are connected by
  1Gbit. I am reading a large file from an NFS mount via "dd" with
  bs=4096, sending the output to /dev/null. Using default sysctl and
  mount options I see speeds maxing out below 1Gbit/sec. If I force
  NFSv3 I see speeds close to 10Gbit/sec with sufficient clients
  connected. I also see no issue with Ubuntu 16.04 (used for both server
  and clients) in conjunction with NFSv4. I have attached the output of
  two iftop runs showing the traffic when using NFSv4 and when using
  NFSv3; in the NFSv4 case one client can clearly be seen reading at
  maximum speed while all the others apparently throttle back to
  practically nothing.
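
  For reference, the read test is essentially the following; the server
  name, export path and mount point below are illustrative. With default
  options the mount negotiates NFSv4:

  mount -t nfs server:/export /mnt/nfs
  dd if=/mnt/nfs/largefile of=/dev/null bs=4096

  Forcing NFSv3 for comparison only changes the mount options:

  mount -t nfs -o vers=3 server:/export /mnt/nfs
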
  I have additionally tested a range of mount options, BBR congestion
  control, and the following kernel settings, none of which made any
  difference:

  net.core.netdev_max_backlog=250000
  net.core.rmem_max=4194304
  net.core.wmem_max=4194304
  net.core.rmem_default=4194304
  net.core.wmem_default=4194304
  net.core.optmem_max=4194304
  net.ipv4.tcp_rmem=4096 87380 4194304
  net.ipv4.tcp_wmem=4096 65536 4194304
  net.ipv4.tcp_mem=786432 1048576 26777216
  net.ipv4.udp_mem=1529892 2039859 3059784
  net.ipv4.udp_rmem_min=16384
  net.ipv4.udp_wmem_min=16384
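
  For completeness, these settings were applied with sysctl roughly as
  follows and verified before retesting (quoting is needed for the
  multi-value keys):

  sysctl -w net.core.rmem_max=4194304
  sysctl -w "net.ipv4.tcp_rmem=4096 87380 4194304"
  sysctl net.ipv4.tcp_rmem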

  The problem is seen on dissimilar hardware, i.e. it exists when
  testing with an HP DL380 G10 with Mellanox 10Gbit Ethernet connected
  to a Cisco switch, and also on a Dell R430 with Broadcom 10Gbit
  Ethernet connected to a Netgear switch (to name two of several
  configurations tested). The clients also vary in each test case, but
  are desktop PCs and laptops.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1960826/+subscriptions

