Re: socket buffers
On Sat, Jul 03, 2010 at 11:54:17AM +0100, Stuart Henderson wrote:
> Does anyone know offhand the reason why network connections fail if
> socket buffers are set above 256k?

You might have to patch sb_max for that.

Joerg
Re: socket buffers
On Sat, Jul 03, 2010 at 11:54:17AM +0100, Stuart Henderson wrote:
> Does anyone know offhand the reason why network connections fail if
> socket buffers are set above 256k?

There is this magical define in uipc_socket2.c called SB_MAX that
limits the socket buffers to 256k. Going over that limit makes the
initial scaling fail and you end up with no buffer at all.

> # sysctl net.inet.tcp.sendspace=262145
> # telnet naiad 80
> Trying 2a01:348:108:108:a00:20ff:feda:88b6...
> Trying 195.95.187.35...
> #
>
> I was thinking of looking into it, but before going down that rabbit
> hole I thought I'd ask in case there's a quick answer that somebody
> already knows...
>
> (yes, people do use buffers much bigger than this, I looked at some
> of the academic ftp mirror sites - looks like mirrorservice.org will
> negotiate 3MB buffers, aarnet 35MB, if you let them - presumably they
> try to avoid buffers being a bottleneck for clients reaching them
> over a national network of at least 1Gb/s end-to-end).

35M, that is insane. Either they have machines with infinite memory or
you can kill the boxes easily.

-- 
:wq Claudio
Re: socket buffers
On Sat, Jul 03, 2010 at 05:40:45PM +0200, Claudio Jeker wrote:
> 35M, that is insane. Either they have machines with infinite memory
> or you can kill the boxes easily.

You don't need 35MB per client connection if interfaces like
sendfile(2) are used. All the kernel has to guarantee in that case is
copy-on-write for the file content as far as it has been sent already.
Media distribution servers normally don't change files in place, so
the only backpressure this creates is on the VFS cache.

Let's assume the server is busy due to a new OpenBSD/Linux/Firefox/
whatever release. A lot of clients will try to fetch a small number of
distinct files. The memory the kernel has to commit is limited by the
size of that active set, not by the number of clients.

Joerg
Re: socket buffers
On 2010/07/03 18:17, Joerg Sonnenberger wrote:
> On Sat, Jul 03, 2010 at 05:40:45PM +0200, Claudio Jeker wrote:
> > 35M, that is insane. Either they have machines with infinite memory
> > or you can kill the boxes easily.

some would also say that 16K is insane ;-)

> You don't need 35MB per client connection if interfaces like
> sendfile(2) are used. All the kernel has to guarantee in that case is
> copy-on-write for the file content as far as it has been sent
> already. Media distribution servers normally don't change files in
> place, so the only backpressure this creates is on the VFS cache.
>
> Let's assume the server is busy due to a new OpenBSD/Linux/Firefox/
> whatever release. A lot of clients will try to fetch a small number
> of distinct files. The memory the kernel has to commit is limited by
> the size of that active set, not by the number of clients.

there is some pretty serious hardware behind it...
http://mirror.aarnet.edu.au/indexabout.html
Re: socket buffers
On Sat, 3 Jul 2010 17:46:22 +0100, Stuart Henderson wrote:
> there is some pretty serious hardware behind it...
> http://mirror.aarnet.edu.au/indexabout.html

Those guys have some serious uses for that equipment in addition to
being a great source of ftp mirrors. They are ready (or very close) to
handling data from the Australia and New Zealand SKA sites. (Square
Kilometer Array, http://www.skatelescope.org/) The data is measured in
terabytes per day.

BTW their OpenBSD mirror currently has 4.7 pkgs for some archs (I did
not check all, but amd64 is there) but not i386. Weird.

*** NOTE *** Please DO NOT CC me. I am subscribed to the list. Mail to
the sender address that does not originate at the list server is
tarpitted. The reply-to: address is provided for those who feel
compelled to reply off list. Thank you.

Rod/
---
This life is not the real thing. It is not even in Beta. If it was,
then OpenBSD would already have a man page for it.