Hi,

On Tue, Jan 11, 2011 at 01:25:43PM +0530, Paras Fadte wrote:
> Hi,
> 
> In the attached screenshot ,

Please avoid JPEG screenshots; this one is barely readable. You can save
your stats page as HTML, which is entirely self-contained. And if you
prefer to hide some info from it, then save it in a lossless image format
such as PNG or GIF.

> wanted to know If the backend current queue  of
>  "3069" shown under "log_servers" section is caused due to saturation of the
> defined backend machines logserver4/5/6/7/8.

Yes, only a situation where the servers are saturated can cause the backend
to queue requests. In your case, since we're seeing a current connection
count around 500 with a limit at 20000, I suspect that you have enabled
dynamic limits with a minconn and a maxconn on each server. It seems to
protect your servers well, because you have 50% of the requests served
immediately and 50% waiting in the queue. Assuming the requests are drained
equally by all servers, you'd have 3069/5 = ~614 requests to be processed
by each server before a new incoming request gets served. At 790 req/s per
server, that's around 780 ms spent in the queue waiting for a server to
become available, so the total response time seen by the client is around
780 + 650 = 1430 ms, or about 1.4s.
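
If you want to redo this quick estimate with your own figures, here is a
small back-of-the-envelope sketch (Python). The numbers are the ones quoted
above, and the 650 ms service time is only my guess from your stats page,
so adjust them to what you really observe:

    # rough queueing estimate from the stats figures quoted above
    queued = 3069.0          # backend current queue
    servers = 5              # logserver4..8
    rate_per_server = 790.0  # requests/s drained by each server
    service_time = 0.650     # seconds spent on the server itself (guessed)

    per_server_queue = queued / servers              # ~614 requests ahead of you
    queue_wait = per_server_queue / rate_per_server  # ~0.78 s waiting in the queue
    total = queue_wait + service_time                # ~1.4 s seen by the client

    print("queue wait: %.0f ms, total: %.1f s" % (queue_wait * 1000, total))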

You should try to play with those parameters to see whether your servers
respond faster with more concurrent requests or with fewer. Quite often
you'll notice that they respond faster with lower concurrency, in which
case it makes sense to lower the limits so that requests are drained
faster.
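
For reference, the dynamic limiting I'm talking about is configured with
minconn/maxconn on the server lines plus fullconn on the backend. This is
just a sketch: the addresses, ports and values below are made up, so adapt
them to what your servers really sustain:

    backend log_servers
        balance roundrobin
        fullconn 20000     # backend load at which servers reach their maxconn
        timeout queue 5s   # give up on requests queued for too long
        server logserver4 192.0.2.4:80 check minconn 50 maxconn 200
        server logserver5 192.0.2.5:80 check minconn 50 maxconn 200
        server logserver6 192.0.2.6:80 check minconn 50 maxconn 200
        server logserver7 192.0.2.7:80 check minconn 50 maxconn 200
        server logserver8 192.0.2.8:80 check minconn 50 maxconn 200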

> How is this queue different
> from the backend queue shown under "HA-Proxy_LB" ?

I suspect that HA-Proxy_LB is just a "listen" instance you have declared to
accept the traffic and to forward it to "log_servers". Only its frontend
part is used in that case, and since it has no servers of its own, it will
never queue anything. Queueing only happens in the backend.
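
In other words, I imagine that part of your configuration roughly looks
like this (the bind port is made up). A "listen" section may use the
frontend keyword "default_backend", and since it declares no server of its
own, nothing can ever be queued there:

    listen HA-Proxy_LB
        bind :10080                  # made-up port
        default_backend log_servers  # all the queueing happens in this backend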

Hoping this helps,
Willy

