On 10/19/2008 01:21 PM, Ruediger Pluem wrote:
> 
> On 10/18/2008 10:22 PM, Graham Leggett wrote:
>> Ruediger Pluem wrote:
> 
>>>    Plus the default socket and TCP buffers on most OS should already be
>>>    larger than this. So in order to profit from the optimization, the time
>>>    the client needs to consume the ProxyIOBufferSize needs to be
>>> considerable.
>> It makes no difference how large the TCP buffers are, the backend will
>> only be released for reuse when the frontend has completely flushed and
> 
> Sorry, I may be wrong here, but I don't think this is the case. If there is
> space left in the TCP buffer, the write to the socket is non-blocking and the
> data appears to the sending process as already processed. It does not block
> until the data is sent by the OS. And even flush buckets do not seem to cause
> any special processing like a blocking flush. So returning to your CNN
> example, I am pretty sure that if the TCP buffer for the socket of the
> client connection holds 92k plus the header overhead, your connection to the
> backend will be released almost instantly.
> I don't even think that a close or shutdown on the socket will block until
> all data in the TCP buffer is sent. But this wouldn't matter on a keepalive
> connection to the client anyway, since the connection isn't closed.
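
As a side note to the point above: that a write to the socket returns as soon
as the data fits into the local TCP send buffer, regardless of whether the
peer has read anything, is easy to see in isolation. A minimal sketch, not
part of the test setup below, using a loopback listener that never reads:

#!/usr/bin/perl -w
# Sketch: syswrite() returns as soon as the data fits into the local TCP
# send buffer, even though the peer never reads a single byte.

use strict;
use IO::Socket::INET;
use Time::HiRes qw(time);

# Listener that accepts the connection but never reads - a stand-in for a
# client that is slow to consume the response.
my $listener = IO::Socket::INET->new(Listen => 1, LocalAddr => '127.0.0.1',
                                     Proto => 'tcp') or die "listen: $!";
my $sender = IO::Socket::INET->new(PeerAddr => '127.0.0.1',
                                   PeerPort => $listener->sockport,
                                   Proto    => 'tcp') or die "connect: $!";
my $peer = $listener->accept or die "accept: $!";

my $data  = 'x' x 8192;    # comfortably below any default send buffer
my $start = time;
my $n     = syswrite($sender, $data);
defined $n or die "syswrite: $!";
printf("wrote %d bytes in %.6f s although the peer read nothing\n",
       $n, time - $start);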

I did some further investigation in the meantime. Using the following 
configuration

ProxyPass /cnn/ http://www.cnn.com/
SendBufferSize 285168
ProxyIOBufferSize 131072

and the following "slow" test client

#!/usr/bin/perl -w

use strict;
use Socket;

my $proto;
my $port;
my $sin;
my $addr;
my $url;

my $oldfh;

$proto = getprotobyname('tcp');
$port = getservbyname('http','tcp');
$addr = "192.168.2.4";
$url = "/cnn/";

socket(SOCKET_H,PF_INET,SOCK_STREAM,$proto);
$sin = sockaddr_in($port,inet_aton($addr));
# Tiny receive buffer so the client consumes the response very slowly.
setsockopt(SOCKET_H, SOL_SOCKET, SO_RCVBUF, 1);
connect(SOCKET_H,$sin);
# Turn on autoflush for the socket so the request goes out immediately.
$oldfh = select SOCKET_H;
$| = 1;
select $oldfh;
print SOCKET_H "GET $url HTTP/1.0\r\n\r\n";
# Sit idle without reading the response, then close.
sleep(500);
close SOCKET_H;

I was able to have the connection to www.cnn.com returned to the pool
immediately.
The strange thing that remains is that I needed to set the SendBufferSize to
about three times the actual size of the page to get this done. I currently
do not know why this is the case.
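
A side note for anyone who wants to dig into this: one way to check what send
buffer the kernel actually grants for a given request is simply to ask it back
via getsockopt. This is only a sketch (the 285168 is just the SendBufferSize
from the configuration above), and I don't know whether it explains the factor
of three:

#!/usr/bin/perl -w
# Ask the kernel what it really allocates for a requested SO_SNDBUF.
# On Linux the kernel doubles the requested value (to leave room for
# bookkeeping overhead) and getsockopt() reports that doubled value.

use strict;
use Socket;

socket(my $sock, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
    or die "socket: $!";
setsockopt($sock, SOL_SOCKET, SO_SNDBUF, 285168)
    or die "setsockopt: $!";
my $granted = unpack('i', getsockopt($sock, SOL_SOCKET, SO_SNDBUF));
print "requested 285168 bytes, kernel reports $granted\n";
close($sock);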

Another perhaps amusing side note: because of the way the read method on
socket buckets works and the way the core input filter works, the
ap_get_brigade call that processes the HTTP body of the backend response in
mod_proxy_http never returns a brigade that contains more than 8k of data,
no matter what you set for ProxyIOBufferSize.
And this has been the case since the 2.0.x days. So the optimization was
always limited to sending at most 8k, and in this case the TCP buffer (the
send buffer) should have fixed this in many cases.

Regards

Rüdiger
