Thanks Peres,

I had tried that already; http-server-close actually took my sites and games
down to a crawl.

I tried pretend keep-alives (option http-pretend-keepalive) as well, and
removing httpclose altogether; same result both times. It seems to perform
best with httpclose enabled.
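
For reference, the variants I tried were roughly these, one at a time in the
defaults section (from memory, so the exact combination may be a bit off):

    defaults
        mode http
        option httpclose                # current setup; works best so far
        #option http-server-close       # tried: slowed everything to a crawl
        #option http-pretend-keepalive  # tried alongside httpclose: same result
        # (also tried with none of the above enabled: same result)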

Any other ideas/suggestions?

On another note, I notice that haproxy strips out/decompresses any gzip
compression the server replies with. Could this be related?

Cheers,

David



________________________________
From: Peres Levente <sheri...@sheridan.hu>
To: David Tosoff <dtos...@yahoo.com>
Sent: Wed, February 2, 2011 3:00:16 AM
Subject: Re: HAproxy Tuning - HTTP requests loading serially

Hi David,

Just a blind guess, but...

I don't know how your app handles keepalive, but maybe try losing "option
httpclose" and using "option http-server-close" instead, thus allowing proper
keepalive. This gave me an immediate performance boost for a webfarm that
serves literally millions of requests per minute - albeit at the cost of
having to deal with about a hundred thousand improperly-closed stale
connections simultaneously on the gateways - which can be taken care of with
timeout and sysctl settings.
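
Roughly what I mean (the timeout values and sysctls below are just what
happened to work on our gateways, so treat them as a starting point rather
than a recommendation):

    defaults
        option http-server-close
        timeout http-keep-alive 10s
        timeout client 50s
        timeout server 50s

    # /etc/sysctl.conf on the load balancers, to cope with the piles of
    # half-closed and TIME_WAIT connections:
    net.ipv4.tcp_fin_timeout = 15
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.ip_local_port_range = 1024 65000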

Levente

On 2011.02.02. 0:30, David Tosoff wrote:
>Hello All,
>
>
>I've recently set up a pair of haproxy VMs (running in Hyper-V) over Ubuntu
>10.10 w/ keepalived.
>
>
>Things seem to be working pretty well. We're moving 35-50Mbps (1442
>concurrent sessions avg) thru the primary node all day, but we're noticing
>that multiple concurrent http requests from the client seem like they're
>being responded to serially.
>
>
>For example, we run a 3D game that issues http requests for in-world
>resources (textures, maps, images) from the client to the web servers
>through HAproxy. When we log into the game, we see multiple blank areas on
>the walls that load one-by-one, slowly, serially. When we bypass HAproxy,
>everything loads immediately. Oddly enough, individual requests thru
>haproxy are very fast: a 65K resource file downloads in 0.17 seconds; but
>the next resource doesn't load until the previous one is complete...
>
>
>Is there a limit on how many concurrent (http or otherwise) connections a
>client can have to haproxy/linux?
>Can you point me to any performance tweaks I can apply in Ubuntu or Haproxy
>that will help with this?
>
>
>Thanks in advance!
>David
>
>
>[CURRENT CONFIG]
>    global
>        daemon
>        user haproxy
>        maxconn 100000
>        pidfile /etc/haproxy/haproxy.pid
>        stats socket /tmp/haproxy.stat level admin
>
>
>    defaults
>        mode http
>        timeout connect 5000ms
>        timeout client 50000ms
>        timeout server 50000ms
>        retries 3
>        option redispatch
>        option httpclose
>        option forwardfor
>
>
>    backend WEBFARM
>        balance leastconn
>        cookie HAP-SID insert indirect maxidle 120m
>        option httpchk GET /check.aspx HTTP/1.0
>        http-check expect string SUCCESS
>        server TC-IIS-2 10.4.1.22:80 cookie TI2 check
>        server TC-IIS-3 10.4.1.23:80 cookie TI3 check
>        server TC-IIS-4 10.4.1.24:80 cookie TI4 check
>        server TC-IIS-5 10.4.1.25:80 cookie TI5 check
>        server TC-IIS-6 10.4.1.26:80 cookie TI6 check
>        server TC-IIS-7 10.4.1.27:80 cookie TI7 check
>
>
>    frontend HYBRID_WEBS
>        default_backend WEBFARM
>        bind 127.0.0.1:80 name LOCAL
>        bind 10.4.0.10:80 name HYBRID_WEBS
>
>

