>  From: "Perrin Harkins" <[EMAIL PROTECTED]>
>  To: "Ask Bjoern Hansen" <[EMAIL PROTECTED]>
>  Cc: <[EMAIL PROTECTED]>
>  Sent: Tuesday, October 31, 2000 8:47 PM
>  Subject: Re: ApacheCon report
>
>  > > Mr. Llima must do something I don't, because with real world
>  > > requests I see a 15-20 to 1 ratio of mod_proxy/mod_perl processes at
>  > > "my" site. And that is serving <500byte stuff.
>  >
>  > I'm not following.  Everyone agrees that we don't want to have big
>  > mod_perl processes waiting on slow clients.  The question is whether
>  > tuning your socket buffer can provide the same benefits as a proxy server
>  > and the conclusion so far is that it can't because of the lingering close
>  > problem.  Are you saying something different?
>
>  A tcp close is supposed to require an acknowledgement from the
>  other end or a fairly long timeout.  I don't see how a socket buffer
>  alone can change this.    Likewise for any of the load balancer
>  front ends that work on the tcp connection level (but I'd like to
>  be proven wrong about this).

Solaris lets a user-level application close() a socket immediately 
and go on to do other work. The sockets layer (the TCP/IP stack) 
keeps that socket open behind the scenes while it delivers any 
buffered sends - but the user application doesn't need to know this 
(and naturally won't be able to read any incoming data that arrives 
in the meantime). Once the TCP send buffer is empty, the socket 
truly closes, with the usual FIN exchange and so on.
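
Roughly, in C it looks like this (just an illustrative sketch, not 
from real code; the helper name is made up, and it assumes SO_LINGER 
is left at its default of off, which is what gives you the 
non-blocking close):

/* Sketch: with SO_LINGER off (the default), close() returns at
 * once and the TCP stack keeps delivering whatever is still in the
 * send buffer before doing the FIN handshake on its own. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

void send_and_close(int sock, const char *buf, size_t len)
{
    struct linger lg;
    socklen_t optlen = sizeof(lg);

    /* Default: l_onoff == 0, so close() won't block waiting for
     * the peer's ACKs; the kernel drains the buffer by itself. */
    getsockopt(sock, SOL_SOCKET, SO_LINGER, &lg, &optlen);
    printf("SO_LINGER: onoff=%d linger=%d\n", lg.l_onoff, lg.l_linger);

    write(sock, buf, len);   /* may just copy into the kernel buffer */
    close(sock);             /* returns immediately; FIN happens later */
}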

Anyway, since the socket is closed from the mod_perl point of view, 
the heavyweight mod_perl process is no longer tied up. I don't know 
if this holds true for Linux as well, but if it doesn't, there's 
always the source code.

The socket send buffers on most Unix and Unix-like OSes tend to be 
32 kB to 64 kB. Some OSes let you tune them (with ndd on Solaris, 
for example).
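
The same thing can also be done per socket with SO_SNDBUF (again 
just a sketch - the function name is invented, and the kernel may 
round or clamp whatever value you ask for):

/* Sketch: bump the per-socket send buffer so a small (<64 kB)
 * response fits entirely in the kernel and the write()/close()
 * pair returns right away.  ndd tunes the system-wide defaults;
 * this is the per-socket equivalent. */
#include <stdio.h>
#include <sys/socket.h>

void bump_send_buffer(int sock)
{
    int size = 64 * 1024;            /* 64 kB, for example */
    socklen_t len = sizeof(size);

    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));

    /* Read it back - some kernels round up or clamp the value. */
    getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &size, &len);
    printf("effective SO_SNDBUF: %d bytes\n", size);
}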

-- Mike
-- 
Michael Blakeley       [EMAIL PROTECTED]     <http://www.blakeley.com/>
             Performance Analysis for Internet Technologies
