Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Greg Ames
Jeff Trawick wrote:
Please review the proxy-reqbody branch for proposed improvements to
2.1-dev.  There is a 2.0.x equivalent of the patch at
http://httpd.apache.org/~trawick/20proxyreqbody.txt.
+1 (reviewed, not tested)
certainly an improvement over what we have today.  The brains (decisions about 
what to do with a body) are nicely isolated to a few lines of code in 
send_request_body().  The rest of it is the muscle, and that shouldn't be 
controversial...either it works or it doesn't, and I can't spot any problems.

Greg


Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Ronald Park
I was recently considering a similar patch for mod_proxy along the lines
of spool_reqbody_cl() method but it would go one step further: spawning
off a thread to asynchronously read the request into a temp file (or
files) while the initial thread would continue to stream the io_bufsize
chunks down the filter chain. This would 'untie' the original client
and the proxy server in cases where they ran at different speeds (more
a problem for *large* proxy files where one side or the other could be
tied up waiting for the slower side for long periods of time... and
poor Apache caught in the middle).

I hadn't gotten too far along with my patch but with this, it's about
90% of the way done. :)

Ron

On Wed, 2005-02-02 at 09:56 -0500, Greg Ames wrote:
> Jeff Trawick wrote:
> > Please review the proxy-reqbody branch for proposed improvements to
> > 2.1-dev.  There is a 2.0.x equivalent of the patch at
> > http://httpd.apache.org/~trawick/20proxyreqbody.txt.
> 
> +1 (reviewed, not tested)
> 
> certainly an improvement over what we have today.  The brains (decisions 
> about 
> what to do with a body) are nicely isolated to a few lines of code in 
> send_request_body().  The rest of it is the muscle, and that shouldn't be 
> controversial...either it works or it doesn't, and I can't spot any problems.
> 
> Greg
> 
-- 
Ronald Park <[EMAIL PROTECTED]>



Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Jeff Trawick
On Wed, 02 Feb 2005 12:32:39 -0500, Ronald Park <[EMAIL PROTECTED]> wrote:
> I was recently considering a similar patch for mod_proxy along the lines
> of spool_reqbody_cl() method but it would go one step further: spawning
> off a thread to asynchronously read the request into a temp file (or
> files) while the initial thread would continue to stream the io_bufsize
> chunks down the filter chain.

stray thoughts...

one thread per proxy request seems pretty heavy weight...  perhaps
cool for small number of clients, perhaps a gratuitous use of
resources on a busy server...

and if threads are shared, then generic event handling apparatus to be
used for other non-proxy code seems more appropriate

...

>This would 'untie' the original client
>and the proxy server in cases where they ran at different speeds (more
>a problem for *large* proxy files where one side or the other could be
>tied up waiting for the slower side for long periods of time... and
>poor Apache caught in the middle).

the client and proxy server are necessarily tied up until origin
server reads the request and writes the response...  why do we care if
they are tied up writing the request out vs. waiting for the response?
 it seems like we could burn a lot of resources but get only marginal
response time improvement as payback


Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Ronald Park
True, the client and the proxying server are tied up intrinsically.
I used the wrong wording to name who would benefit.  I meant that
the server providing the proxied content (the 'other' server; the
one not directly talking to the client) could be done and on its
way doing other work while the first two finish their dance.

In this picture:

  C <-->  A  <-->  P

C, the client, makes a request to A which proxies it to P.  A & P
can do their exchange independent of C & A. If A-P can be done fast,
but C-A is slow, then P is wasting resources, no?

Ron

On Wed, 2005-02-02 at 13:26 -0500, Jeff Trawick wrote:
> On Wed, 02 Feb 2005 12:32:39 -0500, Ronald Park <[EMAIL PROTECTED]> wrote:
> > I was recently considering a similar patch for mod_proxy along the lines
> > of spool_reqbody_cl() method but it would go one step further: spawning
> > off a thread to asynchronously read the request into a temp file (or
> > files) while the initial thread would continue to stream the io_bufsize
> > chunks down the filter chain.
> 
> stray thoughts...
> 
> one thread per proxy request seems pretty heavy weight...  perhaps
> cool for small number of clients, perhaps a gratuitous use of
> resources on a busy server...
> 
> and if threads are shared, then generic event handling apparatus to be
> used for other non-proxy code seems more appropriate
> 
> ...
> 
> >This would 'untie' the original client
> >and the proxy server in cases where they ran at different speeds (more
> >a problem for *large* proxy files where one side or the other could be
> >tied up waiting for the slower side for long periods of time... and
> >poor Apache caught in the middle).
> 
> the client and proxy server are necessarily tied up until origin
> server reads the request and writes the response...  why do we care if
> they are tied up writing the request out vs. waiting for the response?
>  it seems like we could burn a lot of resources but get only marginal
> response time improvement as payback
-- 
Ronald Park <[EMAIL PROTECTED]>



Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Graham Leggett
Ronald Park wrote:
I was recently considering a similar patch for mod_proxy along the lines
of spool_reqbody_cl() method but it would go one step further: spawning
off a thread to asynchronously read the request into a temp file (or
files) while the initial thread would continue to stream the io_bufsize
chunks down the filter chain. This would 'untie' the original client
and the proxy server in cases where they ran at different speeds (more
a problem for *large* proxy files where one side or the other could be
tied up waiting for the slower side for long periods of time... and
poor Apache caught in the middle).
I hadn't gotten too far along with my patch but with this, it's about
90% of the way done. :)
This is a job for mod_cache, not mod_proxy.
Mod_cache already supports the concept of spooling files to disk (or 
memory, or shared memory), and can be taught how to serve an 
incompletely downloaded file to other clients (apparently it cannot at 
the moment...?).

Adding this capability to proxy is making the proxy code unnecessarily 
complex, and leaves this real problem unsolved for other parts of httpd, 
like CGIs.

Regards,
Graham
--




Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Graham Leggett
Ronald Park wrote:
In this picture:
  C <-->  A  <-->  P
C, the client, makes a request to A which proxies it to P.  A & P
can do their exchange independent of C & A. If A-P can be done fast,
but C-A is slow, then P is wasting resources, no?
If you configured mod_cache, my understanding is that P would serve to A 
as much as A was willing to handle, while A could serve C as slowly as 
it liked, freeing up P.

Regards,
Graham
--




Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Justin Erenkrantz
--On Wednesday, February 2, 2005 8:49 PM +0200 Graham Leggett 
<[EMAIL PROTECTED]> wrote:

Mod_cache already supports the concept of spooling files to disk (or
memory, or shared memory), and can be taught how to serve an incompletely
downloaded file to other clients (apparently it cannot at the moment...?).
I don't understand the purpose of serving incomplete files from a cache. 
Can you please elaborate on what you think mod_cache should do?  -- justin


Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Graham Leggett
Justin Erenkrantz wrote:
I don't understand the purpose of serving incomplete files from a cache. 
Can you please elaborate on what you think mod_cache should do?  -- justin
Since the early days of mod_proxy, there has been a race condition in 
the caching code which has been carried over to mod_cache.

After an URL has been invalidated, and before the new version of that 
URL has been downloaded completely from the backend server, any attempts 
by other clients to fetch the same URL are passed to the backend server 
directly. This results in a brief load spike on the backend server 
while the replacement cache file is downloaded.

If mod_cache was taught to serve a being-cached URL directly from the 
cache (shadowing the real download), there would be no need for parallel 
connections to the backend server while the file is being cached, and no 
load spike.

Regards,
Graham
--




RE: re-do of proxy request body handling - ready for review

2005-02-02 Thread Sander Striker


-Original Message-
From: Graham Leggett [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 02, 2005 10:39 PM
To: dev@httpd.apache.org
Subject: Re: re-do of proxy request body handling - ready for review

Justin Erenkrantz wrote:

> I don't understand the purpose of serving incomplete files from a cache. 
> Can you please elaborate on what you think mod_cache should do?  -- 
> justin

Since the early days of mod_proxy, there has been a race condition in the 
caching code which has been carried over to mod_cache.

After an URL has been invalidated, and before the new version of that URL has 
been downloaded completely from the backend server,
any attempts by other clients to fetch the same URL are passed to the backend 
server directly. This results in the load spike on the
backend server briefly while the replacement cache file is downloaded.

If mod_cache was taught to serve a being-cached URL directly from the cache 
(shadowing the real download), there would be no need
for parallel connections to the backend server while the file is being cached, 
and no load spike.

Regards,
Graham
--



Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Justin Erenkrantz
--On Wednesday, February 2, 2005 11:38 PM +0200 Graham Leggett 
<[EMAIL PROTECTED]> wrote:

If mod_cache was taught to serve a being-cached URL directly from the
cache (shadowing the real download), there would be no need for parallel
connections to the backend server while the file is being cached, and no
load spike.
I don't see any way to implement that cleanly and without lots of undue 
complexity.  Many dragons lie in that direction.

How do we know when another worker has already started to fetch a page?
How do we even know if the response is cacheable at all?
How do we know when the content is completed?
For example, if the response is chunked, there is no way to know what the 
final length is ahead of time.

If we're still waiting for the initial response (i.e. request has already 
been issued but no data received back yet), then we don't know if the 
origin server will tack on a Cache-Control: no-store or Vary or there is 
some other server-driven reason that it won't be cached or acceptable to 
this client.

Additionally, with this strategy, if the first client to request a page is 
on a slow link, then other clients who are on faster links will be stalled 
while the cached content is stored and then served.

The downside of stalling in the hope that we'll be able to actually serve 
from our cache because another process has made the same request seems much 
worse to me than our current approach.  We could end up making the client 
wait an indefinite amount of time for little advantage.

The downside of the current approach is the additional bandwidth toward the 
origin server, but it introduces no performance penalty for users: we 
essentially act as if there was no cache present at all.

I'm also unsure that this strategy would mesh well with mod_disk_cache.  I 
think an entirely new and different provider would have to be written 
(assuming we could surmount the above challenges, which I believe are much 
harder than they look).  mod_disk_cache deliberately doesn't use shared 
memory because it introduces unnecessary complexity to the code. 
mod_disk_cache also delays any indication that it has started to fetch the 
page until content has been received.  In fact, the way mod_disk_cache 
works right now is we have an acceptable race condition in that the last 
one to finish will store the data and overwrite all the instances that came 
before.

I would rather focus on getting mod_cache reliable than rewriting it all 
over again to minimize a relatively rare issue.  If it's that much of a 
problem, many pre-caching/priming strategies are also available.  -- justin


Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Justin Erenkrantz
--On Wednesday, February 2, 2005 8:40 AM -0500 Jeff Trawick 
<[EMAIL PROTECTED]> wrote:

Please review the proxy-reqbody branch for proposed improvements to
2.1-dev.  There is a 2.0.x equivalent of the patch at
http://httpd.apache.org/~trawick/20proxyreqbody.txt.
To help people review Jeff's changes:
svn diff -r 124193:HEAD 
http://svn.apache.org/repos/asf/httpd/httpd/branches/proxy-reqbody/modules/proxy/proxy_http.c

Anyhow, only one really minor nit: in spool_reqbody_cl, I think it'd be 
good to add a comment that we're intentionally leaving the first 16k (or 
so) of content in our memory buffer.  That took me a little bit to catch on 
to.

+1 to merge into 2.1.  Looks good.  =)  -- justin


Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Graham Leggett
Justin Erenkrantz wrote:
I don't see any way to implement that cleanly and without lots of undue 
complexity.  Many dragons lay in that direction.
When I put together the initial framework of mod_cache, solving this 
problem was one of my goals.

How do we know when another worker has already started to fetch a page?
Because there is an (incomplete) entry in the cache.
How do we even know if the response is even cacheable at all?
RFC2616.
How do we know when the content is completed?
Because of a flag in the cache entry telling us.
For example, if the response is chunked, there is no way to know what 
the final length is ahead of time.
We have no need to know. The "in progress" cache flag is only going to 
be marked as "complete" when the request is complete. Whether that request 
was chunked or streamed makes no difference.

If we're still waiting for the initial response (i.e. request has 
already been issued but no data received back yet), then we don't know 
if the origin server will tack on a Cache-Control: no-store or Vary or 
there is some other server-driven reason that it won't be cached or 
acceptable to this client.
As the cache was designed to cache multiple variants of the same URL, 
Vary should not be a problem. If we are still waiting for the initial 
response, then we have no cache object yet - the race condition is still 
there, but a few orders of magnitude shorter in duration.

Additionally, with this strategy, if the first client to request a page 
is on a slow link, then other clients who are on faster links will be 
stalled while the cached content is stored and then served.
If this is happening now then it's a design flaw in mod_cache.
Cache should fill as fast as the sender will go, and the client should 
be able to read as slow as it likes.

This is important to ensure backend servers are not left hanging around 
waiting for slow frontend clients.

The downside of stalling in the hope that we'll be able to actually 
serve from our cache because another process has made the same request 
seems much worse to me than our current approach.  We could end up 
making the client wait an indefinite amount of time for little advantage.
There have been bugs outstanding in mod_proxy v1.3 complaining about 
this issue - the advantage to fixing this is real.

The downside of the current approach is that we introduce no performance 
penalty to the users at the expense of additional bandwidth towards the 
origin server: we essentially act as if there was no cache present at all.
But we introduce a performance penalty to the backend server, which must 
now handle load spikes from clients. This problem can have (and has in 
the past been reported to have) a significant impact on big sites.

I would rather focus on getting mod_cache reliable than rewriting it all 
over again to minimize a relatively rare issue.  If it's that much of a 
problem, many pre-caching/priming strategies are also available.  -- justin
Nobody is expecting a rewrite of the cache, and this issue is definitely 
not rare. I'll start looking at this when I've finished getting the LDAP 
stuff done.

Regards,
Graham
--




Re: re-do of proxy request body handling - ready for review

2005-02-03 Thread Jeff Trawick
On Wed, 02 Feb 2005 14:39:37 -0800, Justin Erenkrantz
<[EMAIL PROTECTED]> wrote:
> Anyhow, only one really minor nit: in spool_reqbody_cl, I think it'd be
> good to add a comment that we're intentionally leaving the first 16k (or
> so) of content in our memory buffer.  That took me a little bit to catch on
> to.

comment added, and thanks for your review!


Re: re-do of proxy request body handling - ready for review

2005-02-03 Thread Jim Jagielski
On Feb 2, 2005, at 8:40 AM, Jeff Trawick wrote:
Please review the proxy-reqbody branch for proposed improvements to
2.1-dev.  There is a 2.0.x equivalent of the patch at
http://httpd.apache.org/~trawick/20proxyreqbody.txt.
2.1 version reviewed and tested. Wow. V. nice!
+1
--
===
 Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
  "There are 10 types of people: those who read binary and everyone else."


Multi-threaded proxy? was Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Paul Querna
Ronald Park wrote:
I was recently considering a similar patch for mod_proxy along the lines
of spool_reqbody_cl() method but it would go one step further: spawning
off a thread to asynchronously read the request into a temp file (or
files) while the initial thread would continue to stream the io_bufsize
chunks down the filter chain. This would 'untie' the original client
and the proxy server in cases where they ran at different speeds (more
a problem for *large* proxy files where one side or the other could be
tied up waiting for the slower side for long periods of time... and
poor Apache caught in the middle).
One thought I have been tossing around for a long time is tying the 
proxy code into the Event MPM.  Instead of a thread blocking while it 
waits for a backend reply, the backend server's FD would be added to the 
Event Thread, and then when the reply is ready, any available worker 
thread would handle it, like they do new requests.

This would work well for backend servers that might take a second or two 
 for a reply, but it does add at least 3 context switches.  (in some 
use cases this would work great, in others, it would hurt performance...)

-Paul


Caching incomplete responses was Re: re-do of proxy request body handling - ready for review

2005-02-03 Thread Justin Erenkrantz
On Thu, Feb 03, 2005 at 09:06:04AM +0200, Graham Leggett wrote:
> Justin Erenkrantz wrote:
> 
> >I don't see any way to implement that cleanly and without lots of undue 
> >complexity.  Many dragons lay in that direction.
> 
> When I put together the initial framework of mod_cache, solving this 
> problem was one of my goals.

While this may indeed be a worthy goal, the code that has been in mod_cache
could not do this.

> >How do we know when another worker has already started to fetch a page?
> 
> Because there is an (incomplete) entry in the cache.

How?  Can we do this without tying the cache indexing to shared memory?
Remember that mod_disk_cache currently has no list of entries: it is either
cached or it isn't.  (The lookup is done via a file open() call.)

If we add shared memory, I fail to see the benefit in this corner case
outweighing the cost incurred in the common case.  We would now have to
introduce shared memory and locking through mod_(disk)_cache just to handle
this one case.  This will unnecessarily slow down the overall operation of the
cache for everything else.  A fair portion of the speed of mod_cache and
mod_disk_cache comes from the fact that there are no locks or shared memory
involved (and partially why mod_mem_cache is often worse than no caching at
all!).

This doesn't start to hit upon the issues with shared memory in general for
this particular problem.  For example, the shared memory cache index would be
lost on power or system crash.  One way to address this would be to introduce
paging on disk of the central index - but I believe that is *way* too complex
for httpd.

> >How do we even know if the response is even cacheable at all?
> 
> RFC2616.

Yes, I know the RFC.  But, as I said in my earlier reply, the RFC does not
help when we don't have the response yet!

This comes back to the following race condition:

- We have no valid, cached representation of resource Z on origin server D.
- Client A makes a request for resource Z on origin server D.
- Client B makes a request for resource Z on origin server D.
- Representation of Z is served at some later time by origin server D.

What should Client B do?  There is no response to Client A's request yet.
Should Client B block until we know whether the response from Client A is
cacheable?  Then, if it turns out that it wasn't cacheable, then we have to
request the representation of Z after we found out that it didn't apply.  Or,
should it make a duplicate request under the pessimistic assumption that the
response will be non-cacheable?

At what point in the process should Client B block on Client A?  Should Client
B block only once a portion of the body (not just the headers) has been received?

The issue I have is that optimistically assuming we can cache a response
without seeing that response in its entirety is dangerous.  I think the safe
(and prudent) behavior is to assume that any non-complete response isn't
cacheable: we should immediately issue an additional request for resource Z.

> >How do we know when the content is completed?
> 
> Because of a flag in the cache entry telling us.

Without the introduction of shared memory, I don't believe this is a realistic
strategy.

> >For example, if the response is chunked, there is no way to know what 
> >the final length is ahead of time.
> 
> We have no need to know. The "in progress" cache flag is only going to 
> be marked as "complete" when the request is complete. If that request 
> was chunked, streamed, whatever makes no difference.

Actually, yes, mod_disk_cache needs to know the full length ahead of time.
mod_disk_cache never actually 'reads' the file.  It relies on sendfile() doing
zero-copy between the network card and the hard drive.  Successful zero-copy
necessarily requires that the complete length of the file be known ahead of
it can't be used.

Furthermore, the APR file buckets do not know how to handle multiple
EOF-seeing buckets.  There is no clean way to say, "Hey, read to the end of
the file.  Then pick up where you left off, then, oh, read again to EOF."
When do you stop?  How do you know when to stop?  Can you ever know?  (A
shared memory count that says how much data is available?  Oh joy.)

What happens if the origin server disconnects our request for Client A?  What
should we do to Client B then?  How does Client B's httpd instance even know
that it disconnected - instead of Origin D just being stalled temporarily?
Can we now abort Client B's connection?  Should we issue a new request to
Origin D on behalf of Client B?  (Perhaps use byterange?)

How do we handle a crash in the httpd instance responding to Client A while it
is storing an incomplete response?  Can we detect this?  (A crash in any
program attached to a shmem segment may corrupt the entire segment.)

What if we can't serve a chunked response back to Client B?  (It could be an
HTTP/1.0 client.)

> As the cache was designed to cache multiple variants of the same URL, 
> Vary should not be a problem.

Re: Multi-threaded proxy? was Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Ronald Park
Hmm, don't know enough about Event MPM to comment on that part of the
message, but with regards to the performance for small requests, in my
original 'design' for this, I figured it would do one synchronous read
first, pass that through the filter chain and, if '(!seen_eos)', then
it would pay the cost to set up the asynchronous details.  Hopefully
then most sites could alleviate performance issues by tuning the
ProxyIOBufferSize directive so that most requests fit within this value,
while large requests would span many multiples of this size.

Ron

On Wed, 2005-02-02 at 09:37 -0800, Paul Querna wrote:
> Ronald Park wrote:
> > I was recently considering a similar patch for mod_proxy along the lines
> > of spool_reqbody_cl() method but it would go one step further: spawning
> > off a thread to asynchronously read the request into a temp file (or
> > files) while the initial thread would continue to stream the io_bufsize
> > chunks down the filter chain. This would 'untie' the original client
> > and the proxy server in cases where they ran at different speeds (more
> > a problem for *large* proxy files where one side or the other could be
> > tied up waiting for the slower side for long periods of time... and
> > poor Apache caught in the middle).
> > 
> 
> One thought I have been tossing around for a long time is tying the 
> proxy code into the Event MPM.  Instead of a thread blocking while it 
> waits for a backend reply, the backend server's FD would be added to the 
> Event Thread, and then when the reply is ready, any available worker 
> thread would handle it, like they do new requests.
> 
> This would work well for backend servers that might take a second or two 
>   for a reply, but it does add at least 3 context switches.  (in some 
> use cases this would work great, in others, it would hurt performance...)
> 
> -Paul
-- 
Ronald Park <[EMAIL PROTECTED]>



Re: Multi-threaded proxy? was Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Mladen Turk
Paul Querna wrote:
One thought I have been tossing around for a long time is tying the 
proxy code into the Event MPM.  Instead of a thread blocking while it 
waits for a backend reply, the backend server's FD would be added to the 
Event Thread, and then when the reply is ready, any available worker 
thread would handle it, like they do new requests.

This would work well for backend servers that might take a second or two 
 for a reply, but it does add at least 3 context switches.  (in some use 
cases this would work great, in others, it would hurt performance...)

I don't think it would give any benefit. Well, perhaps on
forward proxies it could spare some keep-alive connections.
Regards,
Mladen.


Re: Multi-threaded proxy? was Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Ronald Park
Imagine, just as a wild theoretical scenario (:D), that you have
the following setup:

Apache -> (proxy) -> Squid -> (cache miss) -> Apache -> (docroot)

Where the back-end Apache serves up large files (in the 2G range)
(and, yes, there are far more files than can be effectively cached).
Now imagine you have thousands of clients trying to get those files
some of which have very slow connections.  And also imagine that
there are more front-end Apache instances than back-ends.

The backend Apache could quickly deliver the file through to
the frontend Apache's mod_proxy if it wasn't held up by waiting
for each chunk to be spoonfed over to the slow client.  Even for
relatively good clients, it's likely a number of them are going
to tie up a thread in the back-end for longer than it would if
the front-end gobbled up the proxy response faster.

The problem with the 'gobble up the whole proxy response' all at
once though is that for these huge files, the original client might
not get any response for a noticeable amount of time.  Further, an
impatient client might give up and reissue the request again,
tying up another set of threads (and internal bandwidth). :(

Ron

On Wed, 2005-02-02 at 18:51 +0100, Mladen Turk wrote:
> Paul Querna wrote:
> > 
> > One thought I have been tossing around for a long time is tying the 
> > proxy code into the Event MPM.  Instead of a thread blocking while it 
> > waits for a backend reply, the backend server's FD would be added to the 
> > Event Thread, and then when the reply is ready, any available worker 
> > thread would handle it, like they do new requests.
> > 
> > This would work well for backend servers that might take a second or two 
> >  for a reply, but it does add at least 3 context switches.  (in some use 
> > cases this would work great, in others, it would hurt performance...)
> >
> 
> I don't think it would give any benefit. Well perhaps only on
> forward proxies it could spare some keep-alive connections.
> 
> Regards,
> Mladen.
-- 
Ronald Park <[EMAIL PROTECTED]>