There are quite a few things to tweak that can lead to much smoother transfers, 
so I'll make an unordered list to help.

mount -o nocto,nolock,async,nconnect=4,rsize=1048576,wsize=1048576
rsize and wsize are very important for max bandwidth; it's worth checking them 
with mount after the share is linked up.
nocto helps a bit; the nfs(5) man page has more info.
nconnect helps reach higher throughput by opening multiple TCP connections to 
the server instead of funneling everything down one pipe.
async might actually be your main issue: NFS does a lot of sync writes, which 
would explain the gaps in your chart. Each write has to hit physical media 
before the server replies that it's been committed and more data can be sent.
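
For reference, that option string in a full mount command looks something like 
this (nas:/export and /mnt/nas are just placeholders for your own server path 
and mountpoint):

mount -t nfs -o nocto,nolock,async,nconnect=4,rsize=1048576,wsize=1048576 nas:/export /mnt/nas

Afterwards, mount | grep nfs (or nfsstat -m) will show the rsize/wsize the 
server actually agreed to, since it can quietly cap them below what you asked 
for.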

sysctl.conf mods
net.ipv4.tcp_mtu_probing = 2
net.ipv4.tcp_base_mss = 1024

if you use jumbo frames, those two let TCP probe its way up to the larger 
packet sizes.
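
If you want to sanity-check that jumbo frames actually make it end to end, a 
quick test (assuming a 9000-byte MTU and that the NAS answers to the hostname 
nas, swap in your own) is:

ping -M do -s 8972 nas

-M do forbids fragmentation, and 8972 bytes of payload plus 28 bytes of IP/ICMP 
headers comes out to exactly 9000, so if that pings cleanly the whole path 
handles jumbo frames. Any of these sysctls can also be tried live with 
sysctl -w before you commit them to sysctl.conf.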

fs.nfs.nfs_congestion_kb = 524288

that controls how much data can be in flight waiting for responses; if it's too 
small, that will also produce the gaps you see.
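
Side note on that one: fs.nfs.nfs_congestion_kb may not exist until the NFS 
client module is loaded, so if you want to check or poke at it live before 
editing sysctl.conf (the value below is just the one from above):

sysctl fs.nfs.nfs_congestion_kb
sysctl -w fs.nfs.nfs_congestion_kb=524288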

subjective part incoming lol

net.core.rmem_default = 1048576
net.core.rmem_max = 16777216
net.core.wmem_default = 1048576
net.core.wmem_max = 16777216

net.ipv4.tcp_mem = 4096 131072 262144
net.ipv4.tcp_rmem = 4096 1048576 16777216
net.ipv4.tcp_wmem = 4096 1048576 16777216

net.core.netdev_max_backlog = 10000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_limit_output_bytes = 262144
net.ipv4.tcp_max_tw_buckets = 262144

you can find your own numbers based on RAM size.  Basically those control how 
much data can be buffered PER socket. Big buffers improve bandwidth usage up to 
a point; past that point they just add latency. If most of your communication 
is with that NAS, ping the NAS to get the average round-trip latency, then 
multiply your wire speed by it (the bandwidth-delay product) to see how much 
data has to be in flight to max out the link.  Also, being per socket means you 
can probably get away with lower numbers than I use; I do a lot of single file 
copies, so my workload isn't the normal usage.
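
To make that math concrete with made-up numbers: a 10 Gbit link is roughly 
1250 MB/s, so with a 0.5 ms average ping the bandwidth-delay product is about 
1250 MB/s x 0.0005 s, call it 600 KB in flight per stream, and the 1 MB 
defaults above already cover that with headroom; at 1 Gbit or with higher 
latency the numbers scale accordingly. Once you've settled on values, drop them 
into /etc/sysctl.conf and load them with sysctl -p.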
