----- Original Message -----
From: "Perrin Harkins" <[EMAIL PROTECTED]>
>
>
> Here's what I recall Theo saying (relative to mod_perl):
>
> - Don't use a proxy server for doling out bytes to slow clients; just set
> the buffer on your sockets high enough to allow the server to dump the
> page and move on.  This has been discussed here before, notably in this
> post:
>
>
http://forum.swarthmore.edu/epigone/modperl/grerdbrerdwul/20000811200559.B17
[EMAIL PROTECTED]
>
> The conclusion was that you could end up paying dearly for the lingering
> close on the socket.
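The socket-buffer trick Theo describes can be sketched at the socket level. This is a toy illustration, not mod_perl or Apache configuration; the buffer size is an arbitrary assumption, and the OS may round or double the value you request:

```python
import socket

# Enlarge the send buffer so a whole response page fits in kernel
# buffer space; the kernel then drains it to a slow client while the
# server process moves on to the next request.
REQUESTED = 64 * 1024  # illustrative size, not a recommended value

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, REQUESTED)
srv.bind(("127.0.0.1", 0))
srv.listen(5)

# Read back what the OS actually granted (Linux doubles the request):
effective = srv.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(effective)
srv.close()
```

The lingering-close cost discussed in the thread is the flip side: the kernel still has to hold the connection until that buffer drains.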

In practice, on a site where most hits end up being proxied, I see a fairly
consistent ratio of about 10 front-end proxies running per back end, so
the lingering close is a real cost.

> Ultimately, I don't see any way around the fact that proxying from one
> server to another ties up two processes for that time rather than one, so
> if your bottleneck is the number of processes you can run before running
> out of RAM, this is not a good approach.

The point is that you only tie up the back end for the time it takes to
deliver the response to the proxy; it then moves on to another request while
the proxy dribbles the content back to the client.  Plus, of course, the
proxy doesn't have to be on the same machine.
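The hand-off being described can be modeled with a toy in-process relay: a "backend" pushes the whole page to a "proxy" at local speed and is freed immediately, while the proxy drains it to a slow client. Sizes and delays below are made-up numbers for illustration:

```python
import queue
import threading
import time

page = [b"x" * 1024] * 50   # a 50 KB page in 1 KB chunks
handoff = queue.Queue()     # stands in for the fast backend-to-proxy link
elapsed = {}

def backend():
    # The backend's only job: hand the full response to the proxy.
    start = time.monotonic()
    for chunk in page:
        handoff.put(chunk)
    handoff.put(None)       # end-of-response marker
    elapsed["backend"] = time.monotonic() - start

def proxy_to_slow_client():
    # The proxy absorbs the dribble to the slow client instead.
    start = time.monotonic()
    while handoff.get() is not None:
        time.sleep(0.001)   # simulated slow client drain, per chunk
    elapsed["proxy"] = time.monotonic() - start

t1 = threading.Thread(target=backend)
t2 = threading.Thread(target=proxy_to_slow_client)
t1.start(); t2.start()
t1.join(); t2.join()
print(elapsed)
```

The backend is freed in roughly the time of a local memory copy, while the proxy spends the full client-drain time, which is the asymmetry that lets a few heavy back ends serve many slow clients.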

> If your bottleneck is CPU or
> disk access, then it might be useful.  I guess that means this is not so
> hot for the folks who are mostly bottlenecked by an RDBMS, but might be
> good for XML'ers running CPU hungry transformations.  (Yes, I saw Matt's
> talk on AxKit's cache...)

Spreading requests over multiple back ends is the fix for this.  There is
also some gain in efficiency if you dedicate certain back-end servers to
certain tasks, since each server will then tend to have the right things in
its cache buffers.
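Dedicating back ends to tasks usually comes down to prefix-based routing at the front end. A minimal sketch, with entirely hypothetical backend names and paths:

```python
# Toy routing table: each task type gets its own backend pool so that
# pool's caches stay warm for one kind of work. All names are invented
# for illustration.
BACKENDS = {
    "/images/": ["img-1:8080", "img-2:8080"],
    "/search/": ["search-1:8080"],
}
DEFAULT = ["app-1:8080", "app-2:8080"]

def pick_backend(path, n=0):
    """Return a backend for `path`; `n` round-robins within a pool."""
    for prefix, pool in BACKENDS.items():
        if path.startswith(prefix):
            return pool[n % len(pool)]
    return DEFAULT[n % len(DEFAULT)]

print(pick_backend("/images/logo.gif"))   # -> img-1:8080
print(pick_backend("/cgi-bin/form", 1))   # -> app-2:8080
```

In practice this role would be played by the front-end proxy's own configuration (e.g. mod_proxy mappings) rather than application code.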

  Les Mikesell
     [EMAIL PROTECTED]
