On Nov 18, 2012, at 10:34 PM, Simon Wilkinson <s...@your-file-system.com> wrote:

> 
> On 9 Oct 2012, at 10:24, Dan Van Der Ster wrote:
>> We currently run fileservers with udpsize=2MB, and at that size we have a 30 
>> client limit in our test environment. With a buffer size=8MB (increased 
>> kernel max with sysctl and fileserver option), we don't see any dropped UDP 
>> packets during our client-reading stress test, but still get some dropped 
>> packets if all clients write to the server. With a 16MB buffer we don't see 
>> any dropped packets at all in reading or writing.
> 
> This was discussed in Edinburgh as part of the CERN site report (which I'd 
> recommend to anyone interested in AFS server performance), 
...
> Converting that number of packets into a buffer size is a bit of a dark art.

One of our colleagues pointed out an error in our slides showing how to 
increase the max buffer size to 16MBytes with sysctl. We had published this 
recipe:

    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w net.core.rmem_default=65536
    sysctl -w net.core.wmem_default=65536
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
    sysctl -w net.ipv4.tcp_mem="16777216 16777216 16777216"
    sysctl -w net.ipv4.udp_mem="16777216 16777216 16777216"
    sysctl -w net.ipv4.udp_rmem_min=65536
    sysctl -w net.ipv4.udp_wmem_min=65536
    sysctl -w net.ipv4.route.flush=1

The problem is that net.ipv4.tcp_mem and net.ipv4.udp_mem are (a) system-wide 
totals covering all buffers and (b) written as a number of 4kByte pages, not 
bytes. In fact, the defaults for net.ipv4.tcp_mem and net.ipv4.udp_mem on most 
systems should already be large enough for 16MByte buffers (the default is 
tuned to roughly 75% of the system's memory). So the key sysctl to set to 
enable large receive buffers is net.core.rmem_max.
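A corrected, minimal recipe might therefore look like the sketch below (only 
the per-socket maximums are raised; tcp_mem and udp_mem are left at their 
memory-derived defaults, and would have to be given in pages if you did set 
them):

    # Raise only the per-socket maximums to 16MBytes; the system-wide
    # totals (net.ipv4.tcp_mem / net.ipv4.udp_mem, counted in pages)
    # keep their defaults.
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    # To persist across reboots, add the equivalent lines to /etc/sysctl.conf:
    #   net.core.rmem_max = 16777216
    #   net.core.wmem_max = 16777216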

> So, setting a UDP buffer of 8Mbytes from user space is _just_ enough to 
> handle 4096 incoming RX packets on a standard ethernet. However, it doesn't 
> give you enough overhead to handle pings and other management packets. 
> 16Mbytes should be plenty providing that you don't
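
As a rough sanity check on those numbers (the ~2kBytes-per-packet figure below 
is an assumption about the kernel's per-skb buffer accounting for a standard 
~1500-byte frame, not something stated in the thread):

    # Back-of-the-envelope buffer sizing (assumes ~2048 bytes charged
    # against SO_RCVBUF per ~1500-byte Ethernet frame, due to skb overhead)
    PACKETS=4096        # RX packets the server may need to absorb at once
    PER_PACKET=2048     # assumed accounting cost per packet, in bytes
    echo $(( PACKETS * PER_PACKET ))       # 8388608  = 8MBytes, no headroom
    echo $(( 2 * PACKETS * PER_PACKET ))   # 16777216 = 16MBytes, with headroom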

We use 256 server threads, and found experimentally in our environment that we 
need around 12MByte buffers to achieve zero packet loss. So we went with 
16MBytes to leave a little extra headroom.
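
For completeness, the kernel limit only matters if the fileserver actually 
requests a buffer that large; in our case that is the udpsize fileserver 
option mentioned above (the exact spelling below is from memory), and the drop 
counters can be watched while the stress test runs:

    # Request a 16MByte UDP socket buffer from the fileserver
    # (assumes the -udpsize option takes a value in bytes):
    #   fileserver ... -udpsize 16777216 ...
    # Check for UDP receive drops / buffer errors on the server:
    netstat -su | grep -i errors
    grep Udp: /proc/net/snmp    # the RcvbufErrors column counts buffer overruns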

Thanks for following up on this thread.
--
Dan van der Ster
CERN IT-DSS