On Jan 4, 2012, at 12:44 PM, Dan The Man wrote:
>> Even a backlog of 1000 is large compared to the default listen queue size 
>> of around 50 or 128.  And if you can drain 1000 connections per second, a 
>> 65K backlog is big enough that plenty of clients (I'm thinking web-browsers 
>> here in particular) will have given up and maybe retried rather than waiting 
>> for 60+ seconds just to exchange data.
> 
> For web browsers that makes sense, but if you're coding your own server 
> application, it's only a matter of increasing the read and write timeouts 
> to fill the queue that high and still process them.

Sure, agreed.

> Of course you wouldn't need anything that high, but for benchmarking how 
> many connections you can toss into the listen queue, then writing something 
> to each socket after the connection is established to see how fast the 
> application can finish them all, I think it's relevant.
> 
> This linux box I have no issues:
> cappy:~# /sbin/sysctl -w net.core.somaxconn=200000
> net.core.somaxconn = 200000
> cappy:~# sysctl -w net.ipv4.tcp_max_syn_backlog=200000
> net.ipv4.tcp_max_syn_backlog = 200000
> cappy:~#
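
The queue-then-drain benchmark described above can be sketched roughly as follows. This is a minimal illustration in Python, not anything from the thread; `queue_and_drain` is a hypothetical helper, and note that whatever value is passed to listen() is still clamped by the kernel's somaxconn:

```python
import socket

def queue_and_drain(n_clients=5, backlog=128):
    """Queue n_clients connections in the listen queue before accepting
    any, then drain them one by one, writing a byte to each.  Returns
    the number of connections served."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))
    srv.listen(backlog)          # kernel clamps this to somaxconn
    addr = srv.getsockname()

    # Fill the queue: complete the handshakes without accepting.
    clients = [socket.create_connection(addr) for _ in range(n_clients)]

    # Drain: accept each queued connection and write something to it.
    served = 0
    for _ in range(n_clients):
        conn, _ = srv.accept()
        conn.sendall(b"x")
        conn.close()
        served += 1

    for c in clients:
        c.close()
    srv.close()
    return served
```

Timing the drain loop against the number of queued connections gives the drain rate the earlier message was estimating (e.g. a 65K backlog at 1000 connections/second means the last client waits roughly 65 seconds).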

However, I'm not convinced that it is useful to do this.  At some point, you 
are better off timing out and retrying via exponential backoff than you are 
queuing hundreds of thousands of connections in the hopes that they will 
eventually be serviced by something sometime considerably later.
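
For comparison, the client-side alternative might look something like this sketch (again hypothetical, just to illustrate the shape of exponential backoff):

```python
import socket
import time

def connect_with_backoff(addr, attempts=5, base_delay=0.1):
    """Try to connect, sleeping base_delay * 2**i between failed
    attempts, instead of parking indefinitely in a huge server-side
    listen queue.  Raises the last OSError if all attempts fail."""
    for i in range(attempts):
        try:
            return socket.create_connection(addr, timeout=1.0)
        except OSError:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```

The point is that the waiting, and the decision to give up, moves to the client, where it belongs once queueing delays reach tens of seconds.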

Regards,
-- 
-Chuck

_______________________________________________
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
