________________________________________
From: Dale <rdalek1...@gmail.com>
Sent: Sunday, October 1, 2023 1:29 PM
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Network throughput from main Gentoo rig to NAS box.

Tsukasa Mcp_Reznor wrote:
> There are quite a few things to tweak that can lead to much smoother 
> transfers, so I'll make an unordered list to help.
>
> mount -o nocto,nolock,async,nconnect=4,rsize=1048576,wsize=1048576
> rsize and wsize are very important for max bandwidth; worth checking with 
> mount once it's mounted to confirm they actually took effect
> nocto helps a bit, the man page has more info
> nconnect helps reach higher throughput by using more threads on the pipe
> async might actually be your main issue; nfs does a lot of sync writes, which 
> would explain the gaps in your chart, since data has to be written to physical 
> media before the server replies that it's been committed and more can be sent.
>
> sysctl.conf mods
> net.ipv4.tcp_mtu_probing = 2
> net.ipv4.tcp_base_mss = 1024
>
> if you use jumbo frames, that'll let TCP probe its way up to the larger packet sizes.
>
> fs.nfs.nfs_congestion_kb = 524288
>
> that controls how much NFS data can be in flight waiting for responses; if it's 
> too small, that will also lead to the gaps you see.
>
> subjective part incoming lol
>
> net.core.rmem_default = 1048576
> net.core.rmem_max = 16777216
> net.core.wmem_default = 1048576
> net.core.wmem_max = 16777216
>
> net.ipv4.tcp_mem = 4096 131072 262144
> net.ipv4.tcp_rmem = 4096 1048576 16777216
> net.ipv4.tcp_wmem = 4096 1048576 16777216
>
> net.core.netdev_max_backlog = 10000
> net.ipv4.tcp_fin_timeout = 15
> net.ipv4.tcp_limit_output_bytes = 262144
> net.ipv4.tcp_max_tw_buckets = 262144
>
> you can find your own numbers based on RAM size.  Basically those control how 
> much data can be buffered PER socket.  Big buffers improve bandwidth usage up 
> to a point; past that point they can start adding latency.  If most of your 
> communication is with that NAS, you basically ping the NAS to get the average 
> round-trip latency, then multiply your wire speed by it to see how much data it 
> takes to keep the link full.  Also, being per socket means you can use lower 
> numbers than I do for sure; I do a lot of single-file copies, so my workload 
> isn't the normal usage.
> .
>
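
Just to make sure I follow that buffer math, here's the arithmetic with
made-up numbers (a 1 Gbit/s link and a 0.5 ms average ping; I'd plug in my
real wire speed and ping time):

  wire speed        1 Gbit/s  ~= 125 MB/s
  average ping      0.5 ms     = 0.0005 s
  data in flight    125 MB/s x 0.0005 s ~= 62 KB per socket

So on a low-latency gigabit LAN it only takes a few tens of KB in flight per
connection to keep the wire full; the much bigger maximums above mostly
matter for faster or higher-latency links.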

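If I end up trying the sysctl values, my understanding is they can be tested
without a reboot, either one at a time on the fly or by reloading the file
after adding the lines to /etc/sysctl.conf:

  # set a single value immediately
  sysctl -w net.core.rmem_max=16777216

  # or reload /etc/sysctl.conf after editing it
  sysctl -p /etc/sysctl.conf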

I finished my OS updates and started my weekly backup updates.  I mounted
using your options and it's a decent improvement.  I'm not sure which option
made it faster, but it is faster, almost double.  A few examples, using fairly
large files to get a good measurement:


3,519,790,127 100%   51.46MB/s    0:01:05
3,519,632,300 100%   51.97MB/s    0:01:04
3,518,456,042 100%   51.20MB/s    0:01:05


It may not look like much, and it's still slower than a straight copy with no
encryption, but given previous speeds, this is a nice improvement.  I think I
was getting about 25 to 30MB/s before.  These are the settings shown by the
mount command now, which should be what it is actually using.


root@fireball / # mount | grep TV
10.0.0.7:/mnt/backup on /mnt/TV_Backup type nfs4
(rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,nocto,proto=tcp,nconnect=4,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.4,local_lock=none,addr=10.0.0.7)
root@fireball / #
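
If this keeps working well, I'll probably put roughly the same options in
/etc/fstab so they stick across reboots; something like this, built from the
mount output above (untested):

  10.0.0.7:/mnt/backup  /mnt/TV_Backup  nfs4  nocto,nconnect=4,rsize=1048576,wsize=1048576  0 0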


I think it took all your options and is using them.  If you have ideas that
would speed things up more, I'm open to them, but this is a nice improvement.
I still think the encryption slows things down some, especially on the NAS
end, which is a much older machine, so the encryption is likely fairly CPU
intensive there.  A newer CPU with the same clock speed and number of cores
would likely do much better, newer instruction support and all.  I think I
read somewhere that newer CPUs have extra instructions to speed encryption
up.  I might be wrong on that.
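
I can at least check whether the NAS CPU has hardware AES support, since it
shows up as the "aes" flag in /proc/cpuinfo, with something like:

  grep -m1 -ow aes /proc/cpuinfo || echo "no hardware AES on this CPU"

If the old CPU in the NAS doesn't show it, that would fit with the encryption
eating CPU over there.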

Thanks much.  Any additional ideas are welcome, from anyone who has
them.  If it matters, both rigs are on UPSs.

Dale

:-)  :-)

P. S.  Those who know I garden, my turnip and mustard greens are popping
up.  My kale and collards are not up yet.  I watered them again to help
them pop up.  Kinda dry here and no rain until the end of the next week,
they think.  They never really know what the weather is going to do
anyway.


----------------------------------------------------------------------------

Glad you got some decent benefits.  I just now realized that the "async" setting 
is supposed to be set on the server in /etc/exports, not in the client mount 
options.  Please give that a shot; I think it'll make a big difference for you.
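
In case it saves a lookup, a sketch of what that might look like on the NAS,
using the export path and client address from your mount output; the other
options are just common ones, so adjust to whatever the export already has:

  # /etc/exports on the NAS (10.0.0.7)
  /mnt/backup    10.0.0.4(rw,async,no_subtree_check)

  # then reload the export list without restarting NFS
  exportfs -ra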
