According to Michael Blakeley:

> >  > I'm not following.  Everyone agrees that we don't want to have big
> >  > mod_perl processes waiting on slow clients.  The question is whether
> >  > tuning your socket buffer can provide the same benefits as a proxy
> >  > server, and the conclusion so far is that it can't, because of the
> >  > lingering-close problem.  Are you saying something different?
> >
> >  A TCP close is supposed to require an acknowledgement from the
> >  other end or a fairly long timeout.  I don't see how a socket buffer
> >  alone can change this.  Likewise for any of the load-balancer
> >  front ends that work at the TCP connection level (but I'd like to
> >  be proven wrong about this).
> 
> Solaris lets a user-level application close() a socket immediately
> and go on to do other work. The sockets layer (the TCP/IP stack) will
> continue to keep that socket open while it delivers any buffered
> sends - but the user application doesn't need to know this (and
> naturally won't be able to read any incoming data if it arrives).
> When the TCP send buffer is empty, the socket will truly close, with
> all the usual FIN et al. dialogue.
> 
> Anyway, since the socket is closed from the mod_perl point of view, 
> the heavyweight mod_perl process is no longer tied up. I don't know 
> if this holds true for Linux as well, but if it doesn't, there's 
> always the source code.

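For concreteness, here is a minimal sketch of the close() semantics
described above, in plain BSD-sockets C (the function name and buffer
are made up for illustration, and error handling is omitted):

  #include <sys/socket.h>
  #include <unistd.h>

  void finish_response(int client_fd, const char *buf, size_t len)
  {
      /* The write may only land in the kernel's send buffer if the
       * client is slow to read. */
      (void) write(client_fd, buf, len);

      /* With SO_LINGER left at its default, close() returns at once;
       * the TCP stack keeps delivering the buffered bytes and then does
       * the FIN dialogue in the background, so the heavyweight server
       * process is free to go handle another request. */
      close(client_fd);

      /* To make close() block instead until the data is acknowledged
       * (or a timeout expires), you would set, before the close():
       *   struct linger lg = { 1, 30 };   // l_onoff, l_linger seconds
       *   setsockopt(client_fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
       */
  }
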
I still like the idea of having mod_rewrite in a lightweight
front end, and if the request turns out to be static at that
point there isn't much point in dealing with proxying.  Has
anyone tried putting software load balancing behind the front-end
proxy with something like Eddieware, Balance, or Ultra Monkey?
In that scheme the front ends might use IP-takeover failover
and/or DNS load balancing and would proxy to what they think is
a single back-end server - this would actually hit a TCP-level
balancer instead.
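
Roughly, the front-end piece of that might look like the httpd.conf
sketch below (hostnames, ports, and paths are made up; backend.internal
would really be the address the TCP-level balancer answers on, and
mod_proxy has to be loaded for the [P] flag to work):

  # Thin front end: serve static files directly.
  DocumentRoot /var/www/htdocs
  RewriteEngine On

  # Anything that isn't a file on disk gets proxied to what looks like
  # a single back-end server - in practice the balancer's address.
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteRule ^/(.*)$ http://backend.internal:8080/$1 [P,L]
  ProxyPassReverse / http://backend.internal:8080/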

  Les Mikesell
    [EMAIL PROTECTED]
