Garrett Rooney wrote:
> On 3/6/06, William A. Rowe, Jr. <[EMAIL PROTECTED]> wrote:
>> Garrett Rooney wrote:
>>> On 3/6/06, William A. Rowe, Jr. <[EMAIL PROTECTED]> wrote:
>>>> Jim Jagielski wrote:
>>>
>>> See, the issue for fastcgi isn't controlling persistence; persistent
>>> connections are fine as long as you're actually making use of the
>>> backend process.  The problem is avoiding more than one connection
>>> to a backend process that simply cannot handle multiple concurrent
>>> connections.
>>>
>>> This seems to be a problem unique (so far anyway) to fastcgi.
>>
>> So the issue is that mod_proxy_fastcgi needs to create a pool of single
>> process workers, and ensure that each has only one concurrent request,
>> right?  That's an issue for the proxy_fastcgi module, to mutex them all.
>
> The problem is that with the way mod_proxy currently works, there isn't
> any way to do that, at least as far as I can tell.  It seems like it
> will require us to move away from having mod_proxy manage the back-end
> connections, and if we do that then we're back to the "what exactly is
> the advantage to using mod_proxy again?" question.
Hmmm... it really doesn't seem that it's mod_proxy's responsibility to
decide to make only one request to a backend at a time.
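
Just so we're talking about the same thing, here is roughly what such a
1-request-per-backend guard would look like if a module enforced it itself;
this is a minimal sketch, fcgi_backend_t and send_fcgi_request() are
invented names rather than existing mod_proxy symbols, and the per-child
apr_global_mutex_child_init() re-attach step is omitted:

/*
 * Sketch only: one cross-process lock per backend process, held for the
 * full FastCGI exchange, so at most one request is ever in flight to it.
 */
#include "httpd.h"
#include "apr_pools.h"
#include "apr_global_mutex.h"

typedef struct {
    const char         *lock_file;  /* per-backend lock file */
    apr_global_mutex_t *lock;       /* guards the single connection */
} fcgi_backend_t;

apr_status_t send_fcgi_request(fcgi_backend_t *be, request_rec *r); /* hypothetical */

static apr_status_t fcgi_backend_init(fcgi_backend_t *be, apr_pool_t *p)
{
    /* Created once in the parent, one mutex per backend process. */
    return apr_global_mutex_create(&be->lock, be->lock_file,
                                   APR_LOCK_DEFAULT, p);
}

static apr_status_t fcgi_handle_request(fcgi_backend_t *be, request_rec *r)
{
    apr_status_t rv;

    /* Serialize across all httpd children: only the lock holder may
     * talk to this backend. */
    if ((rv = apr_global_mutex_lock(be->lock)) != APR_SUCCESS)
        return rv;

    rv = send_fcgi_request(be, r);

    apr_global_mutex_unlock(be->lock);
    return rv;
}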
But I agree with you that it's valuable to a -subset- of proxy backends
to have some generic request-pool, 1-at-a-time service.  Perhaps we also
drop in mod_proxy_dequeue or something like that to provide such a service,
and put generic create-workers code at the parent level?  The module would
still be responsible for implementing the create-worker callback from this
generic service, but it would then be possible for mod_proxy_dequeue to
manage such a pool.
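
To sketch what I mean (all of these names, including mod_proxy_dequeue, the
"proxy_dequeue" provider group and dequeue_backend_provider, are invented
here for illustration): the protocol module registers its create-worker
callback as a provider, and the dequeue layer looks it up whenever its pool
needs to grow.

#include "httpd.h"
#include "apr_pools.h"
#include "ap_provider.h"

typedef struct dequeue_worker dequeue_worker;   /* opaque handle owned by the dequeue layer */

/* The callback table a protocol module would hand to mod_proxy_dequeue. */
typedef struct {
    /* Spawn one backend process and return a handle for it. */
    apr_status_t (*create_worker)(dequeue_worker **w,
                                  const char *backend_url, apr_pool_t *p);
} dequeue_backend_provider;

/* mod_proxy_fastcgi's side; the actual spawning is elided in this sketch. */
static apr_status_t fcgi_create_worker(dequeue_worker **w,
                                       const char *backend_url, apr_pool_t *p)
{
    (void)w; (void)backend_url; (void)p;
    return APR_ENOTIMPL;            /* sketch only */
}

static const dequeue_backend_provider fcgi_provider = { fcgi_create_worker };

static void fcgi_register_hooks(apr_pool_t *p)
{
    /* "proxy_dequeue" is an invented provider group name. */
    ap_register_provider(p, "proxy_dequeue", "fcgi", "0", &fcgi_provider);
}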
If you want this to be robust, mod_proxy_dequeue would probably need to run
in a child of the parent process to handle all of this delegation, replace
lost children, and deal with cases where a long-lost child has exited or
returned.  A 'do me next' pipe back to the parent would be needed to
serialize the requests and hand each one back a worker.
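
To make the mechanics concrete, the pipe traffic could be as dumb as
single-byte tokens.  The following is only a sketch of the idea, not code
that exists anywhere; pick_or_create_idle_worker(), returning a worker to
the idle set when a request finishes, and all error recovery are hand-waved
away, and worker ids as single bytes are a simplification:

/*
 * Request-serving children write one byte into req_wr to ask for a backend;
 * the managing child reads those bytes one at a time and answers each by
 * writing the id of a free worker into ans_wr, where any waiting child
 * picks it up.  All waiters are interchangeable, so it doesn't matter which
 * one reads which answer, and one-byte pipe I/O keeps the hand-off atomic.
 */
#include "apr_pools.h"
#include "apr_file_io.h"

static apr_file_t *req_rd, *req_wr;   /* "do me next" requests -> manager */
static apr_file_t *ans_rd, *ans_wr;   /* worker ids handed back <- manager */

unsigned char pick_or_create_idle_worker(void);   /* hypothetical helper, not shown */

static apr_status_t dequeue_pipes_init(apr_pool_t *p)
{
    apr_status_t rv = apr_file_pipe_create(&req_rd, &req_wr, p);
    if (rv != APR_SUCCESS)
        return rv;
    return apr_file_pipe_create(&ans_rd, &ans_wr, p);
}

/* Request-serving child: ask for a backend and block until one is granted. */
static apr_status_t claim_worker(unsigned char *worker_id)
{
    apr_size_t n;
    unsigned char token = 1;
    apr_status_t rv = apr_file_write_full(req_wr, &token, 1, &n);
    if (rv != APR_SUCCESS)
        return rv;
    return apr_file_read_full(ans_rd, worker_id, 1, &n);
}

/* Managing child: serve the queue strictly one request at a time. */
static apr_status_t dequeue_loop(void)
{
    apr_size_t n;
    unsigned char token, worker_id;

    for (;;) {
        apr_status_t rv = apr_file_read_full(req_rd, &token, 1, &n);
        if (rv != APR_SUCCESS)
            return rv;
        worker_id = pick_or_create_idle_worker();
        rv = apr_file_write_full(ans_wr, &worker_id, 1, &n);
        if (rv != APR_SUCCESS)
            return rv;
    }
}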
Bill