Keep-alive has been replaced with persistent connections. This works well, since it makes no sense to open and close a connection for every object on a page.
Pipelining, as I understand it, opens a connection from a client and sends request1, request2, request3... end. The problem as I see it is that the server services these requests sequentially, in the order received. So in HTTP 1.0 a client could open a connection and send request1, then open a second connection and send request2, and so on. The server can then process these multiple connections and requests in a multi-threaded scenario. The bad side of this is the opening/closing of connections. The good side is the independent handling of those requests.

Regards,
Tim Greenwald
"You can't manage what you're not measuring"

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Ian Clelland
Sent: Saturday, August 02, 2003 6:35 PM
To: Mike Dierken
Cc: [EMAIL PROTECTED]
Subject: Re: HTTP 1.1 Pipelining

On Fri, Aug 01, 2003 at 09:30:59PM -0700, Mike Dierken wrote:
> Is pipelining the same as 'keep-alive'?
> I thought it was something different...
>
> ----- Original Message -----
> From: "Tim.Greenwald" <[EMAIL PROTECTED]>
> > I would be interested in comments regarding the trade-off of using
> > pipelining. My thoughts on this are that it is great because it
> > relieves the burden on the server of having to open a TCP connection
> > for each request for an object, but the trade-off is that the requests
> > have to be sent/received in order. This seems to me to override the
> > benefits of multi-threading. If I were to send multiple GETs at the
> > same time I would open multiple connections, but the requests could
> > be processed concurrently.

It's a similar concept -- Keep-Alive was an artifact of HTTP/1.0 which was removed in HTTP/1.1. In 1.1, the default behaviour is to keep connections open for some time to accept further requests, unless a 'Connection: close' header is sent.
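The behaviour described above can be sketched in a few lines of Python. This is only an illustration, not a real client: it spins up a throwaway local server (the `Handler` class and the `/a`, `/b` paths are invented for the demo), writes two GET requests back-to-back on one TCP connection before reading anything, and then observes that the two responses come back sequentially, in request order.

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # persistent connections, so pipelined
                                   # requests on one socket are honoured

    def do_GET(self):
        body = f"hello from {self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One connection; both requests are sent before any response is read.
sock = socket.create_connection(("127.0.0.1", port))
sock.sendall(
    f"GET /a HTTP/1.1\r\nHost: localhost:{port}\r\n\r\n"
    f"GET /b HTTP/1.1\r\nHost: localhost:{port}\r\n\r\n".encode()
)

# Read until both response bodies have arrived.
data = b""
while data.count(b"hello from") < 2:
    chunk = sock.recv(4096)
    if not chunk:
        break
    data += chunk
sock.close()
server.shutdown()

assert data.count(b"HTTP/1.1 200") == 2          # two responses, one socket
assert data.index(b"/a") < data.index(b"/b")     # served in request order
```

The second assertion is the sequential-service point from the post: even though both requests are on the wire immediately, the response for /b cannot arrive before the response for /a.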
In response to the original question, I don't think it's really a tradeoff at all -- you can optimise a client's performance by using request pipelining in conjunction with multiple connections. I don't think it's uncommon for a web client to open one connection to a server to get a document, then open half a dozen more connections to get the related images / style sheet, and then keep all of those connections open. When the user clicks a link to another document on the same server, the connections are all still open, so the second document can be retrieved even faster.

Ian Clelland <[EMAIL PROTECTED]>
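The "half a dozen parallel connections" pattern can be sketched the same way. Again this is just an illustration with a throwaway local server and invented paths; each worker thread opens its own persistent connection, as a browser would when fetching a page's images concurrently.

```python
import threading
from concurrent.futures import ThreadPoolExecutor
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = self.path.encode()  # echo the path as the body
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fetch(path):
    # A separate connection per worker: requests are processed
    # concurrently by the threaded server, independently of each other.
    conn = HTTPConnection("127.0.0.1", port)
    conn.request("GET", path)
    body = conn.getresponse().read().decode()
    conn.close()
    return body

paths = ["/img1.gif", "/img2.gif", "/style.css"]
with ThreadPoolExecutor(max_workers=3) as pool:
    bodies = list(pool.map(fetch, paths))
server.shutdown()

assert bodies == paths  # all fetched, concurrently, no ordering constraint
```

Unlike the pipelined case, no response here has to wait for an earlier request to finish; the cost is one extra TCP handshake per connection, which is exactly the trade-off the thread is discussing.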