On Jun 21, 2010, at 5:16 PM, Eric Wong wrote:

>> overloading the server or hitting memory bandwidth issues.  The
>> backlog is at the somaxconn default of 128, I'm still not sure if we
>> will bump that up or not.
> 
> The default backlog we try to specify is actually 1024 (same as
> Mongrel).  But it's always a murky value anyways, as it's
> kernel/sysctl-dependent.  With Unix domain sockets, some folks use
> crazy values like 2048 to look better on synthetic benchmarks :)

Somewhat related -- I've been meaning to discuss the finer points of backlog 
tuning.

I've been experimenting with the multi-server socket+TCP megaunicorn 
configuration from your CDT:
http://rubyforge.org/pipermail/mongrel-unicorn/2009-September/000033.html

I think that's what this sentence from TUNING is talking about:

        "Setting a very low value for the :backlog parameter in “listen” 
directives can allow failover to happen more quickly if your cluster is 
configured for it."

Our app occasionally catches a batch of slow requests (1-3s), and these can
pile up on one individual server in our load-balanced EC2 cluster -- exactly
the case the multi-server failover setup is meant for.

I've put this into production under a healthy load (5000+ RPM) and it appears
to work really well!  It produces very high requests/s at significantly higher
concurrency than without the failover, and serves zero 502 errors (part of the
goal).

I currently have the unix socket set to a backlog of 64, failing over to a
TCP listener with a backlog of 1024 (so that overflow is queued rather than
502'd).
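
For reference, the listen directives look roughly like this (socket path and
port here are placeholders, not our real values):

        # nginx hits the unix socket first; once its 64-deep queue fills,
        # the kernel refuses new connections and nginx retries the TCP
        # listener, whose much larger backlog queues the overflow instead
        # of turning it into 502s
        listen "/var/run/unicorn.sock", :backlog => 64

        # in my setup this second listener actually lives on the dedicated
        # "backup" instance mentioned below
        listen "0.0.0.0:8080", :backlog => 1024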

I can imagine there might be a case for keeping the TCP backlog low as well &
serving errors when overloaded, rather than getting caught in an unrecoverable
back-queue tarpit.
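
Concretely that would just mean shrinking the failover listener's backlog too,
something like (again, the address is a placeholder):

        # let the failover tier refuse connections quickly as well, trading
        # 502s under extreme load for a bounded queueing delay
        listen "0.0.0.0:8080", :backlog => 64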

I'm currently failing over to a dedicated "backup" instance, so that I can
measure exactly how much traffic is being offloaded. This means my benchmarks
without failover use 1 server, while the ones with failover actually use 2
servers. We're reconfiguring to something closer to the original diagram, at
which point I'll do some cluster-wide stress tests & share
data/scripts/process.

BTW, this configuration needs a cool name!

-jamie
http://jamiedubs.com
http://fffff.at

