I've discovered a setting in mod_proxy_balancer that prevents the Mongrel/Rails request-queuing and accept/close problems from ever being reached.

For each BalancerMember:

- max=1 -- caps the number of connections Apache will open to that BalancerMember at 1
- acquire=N -- the maximum time Apache will wait to acquire a connection to that BalancerMember (the mod_proxy docs give N in milliseconds)

So, at a minimum:

  BalancerMember http://foo max=1 acquire=1

and I'm using:

  BalancerMember http://127.0.0.1:9000 max=1 keepalive=on acquire=1 timeout=1
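
For context, here is how those lines sit inside a complete balancer definition. This is a minimal sketch assuming Apache 2.2's mod_proxy_balancer; the cluster name "mongrelcluster" and the ProxyPass path are made up:

  <Proxy balancer://mongrelcluster>
    BalancerMember http://127.0.0.1:9000 max=1 keepalive=on acquire=1 timeout=1
    BalancerMember http://127.0.0.1:9001 max=1 keepalive=on acquire=1 timeout=1
  </Proxy>

  ProxyPass / balancer://mongrelcluster/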

=====

I experimented with three Mongrel servers and tied one up for 60 seconds at a time by calling "sleep" in a handler.
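
For reference, a handler along these lines is enough to tie one up -- just a minimal sketch against Mongrel's HttpHandler API, with a made-up port and URI:

  require 'mongrel'

  # Holds one of Mongrel's worker threads for 60 seconds per request.
  class SleepHandler < Mongrel::HttpHandler
    def process(request, response)
      sleep 60
      response.start(200) do |head, out|
        head["Content-Type"] = "text/plain"
        out.write("done\n")
      end
    end
  end

  server = Mongrel::HttpServer.new("127.0.0.1", 9000)
  server.register("/sleep", SleepHandler.new)
  server.run.join   # run returns the acceptor thread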

Without the "acquire" parameter, mod_proxy_balancer's simple round-robin scheme blocked when it reached a busy BalancerMember, effectively queuing the request. With "acquire" set, the balancer stepped over the busy BalancerMember and continued searching through its round-robin cycle.

So, whether or not Mongrel's accept/close and request queuing are issues, there is a setting in mod_proxy_balancer that prevents either problem from being triggered.

At a bare minimum, for a single-threaded process running in Mongrel:

  BalancerMember http://127.0.0.1:9000 max=1 acquire=1
  BalancerMember http://127.0.0.1:9001 max=1 acquire=1
  ...

With all BalancerMembers busy, Apache returns a 503 Server Busy, which is a heck of a lot more appropriate than a 502 Proxy Error.

=====

It turns out that having Mongrel reap threads before calling accept both prevents queuing in Mongrel and prevents Mongrel's accept/close behavior.
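
In sketch form the idea looks like the loop below. This is a rough illustration, not Mongrel's actual code -- reap_dead_workers, @workers, @num_processors, and process_client are stand-ins loosely modeled on Mongrel's internals:

  loop do
    reap_dead_workers                 # clear out finished worker threads
    while @workers.list.length >= @num_processors
      sleep 0.1                       # at the limit: don't accept yet
      reap_dead_workers
    end
    # Accepting only when a worker is free means Mongrel never takes a
    # connection it can't serve (no accept/close) and never queues one
    # internally -- waiting connections sit in the kernel's listen queue.
    client = @socket.accept
    @workers.add(Thread.new { process_client(client) })
  end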

But BalancerMembers in mod_proxy_balancer will still need "acquire" set -- otherwise Apache's proxy client threads will sit around waiting for Mongrel to call accept, effectively queuing requests in Apache.

Since max=1 acquire=1 steps around the queuing problem altogether, the reap-before-accept fix, though more correct, is of no practical benefit.

=====

With the current Mongrel code, BalancerMember max > 1 with Mongrel num_processors == 1 triggers the accept/close bug.

Likewise, BalancerMember max > 1 with Mongrel num_processors > 1 runs into Mongrel's request queuing.
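
For reference, num_processors caps Mongrel's concurrent worker threads. It can be passed as the third argument to HttpServer.new (mongrel_rails exposes it as --num-procs, if I remember right); made-up values below:

  # One worker: connections beyond the first are queued or accept/closed,
  # depending on which behavior is in play.
  server = Mongrel::HttpServer.new("127.0.0.1", 9000, 1)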

=====

Conclusion ---

I'd like to see Mongrel return a 503 Server Busy when an incoming request hits the num_processors limit.
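
Something like the following, purely as a sketch of the desired behavior -- not actual or proposed Mongrel code, and the names are stand-ins:

  if @workers.list.length >= @num_processors
    # Tell the client (and mod_proxy_balancer) we're busy, rather than
    # silently closing the connection.
    client.write("HTTP/1.1 503 Service Unavailable\r\n" +
                 "Content-Length: 0\r\nConnection: close\r\n\r\n")
    client.close
  end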

For practical use, though, the fix is to configure mod_proxy_balancer so that it shields Mongrel from ever encountering either issue.
