Hi Oleg, I'll see if I can take a look at these later today.
Regarding the SizeLimitedResponseReader: the main reason we currently read things into an in-memory buffer is so that we can correctly handle a streamed response with chunked transfer encoding that ends up being too big. We can't tell ahead of time whether we will be able to cache it, since it doesn't have an explicit Content-Length, so we optimistically read the response into a buffer. If the response ends before we reach "too big", we just pass the buffer to the cache. If the response still hasn't ended when we reach "too big", we reconstruct the original response to pass on to our client by replaying the original stream out of the buffer and then continuing with the rest of the (unconsumed) response stream.

If you're just talking about an optimization where we explicitly know we'll be constructing a cache entry from the response before starting to consume the response body, then I think that's probably fine.

Jon
........
Jon Moore
Comcast Interactive Media

-----Original Message-----
From: Oleg Kalnichevski [mailto:[email protected]]
Sent: Wed 8/4/2010 9:51 AM
To: [email protected]
Cc: Moore, Jonathan
Subject: HTTP cache API changes

Jon et al,

I made some changes to the HTTP cache API over the course of the last few days, mainly intended to allow for file-system-based cache implementations. Please feel free to review the changes and complain if you find anything disagreeable. The new code is largely untested but should be enough to get a feel for the API.

There is one extra change I would like to make. I would like to refactor the SizeLimitedResponseReader in order to make it possible to stream the response content directly to a file without buffering it in an intermediate in-memory buffer. Once that is done we can proceed with cutting a new release of HttpClient 4.1.

Cheers
Oleg

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
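[Editor's note: the buffer-then-replay behavior Jon describes could be sketched roughly as below. This is a minimal illustration, not the actual SizeLimitedResponseReader code; the class and field names (LimitedBufferingReader, Result, cacheableBody, passThrough) are invented for the example, and the real implementation works in terms of HttpClient's response objects rather than bare streams.]

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.SequenceInputStream;

// Hypothetical sketch of the optimistic buffering strategy: read the body
// into memory until it either ends (cacheable) or exceeds the size limit
// (too big to cache, so replay the buffered prefix plus the unread tail).
class LimitedBufferingReader {
    private final int maxSize;

    LimitedBufferingReader(int maxSize) {
        this.maxSize = maxSize;
    }

    // Exactly one of the two fields is non-null.
    static final class Result {
        final byte[] cacheableBody;    // set when the whole body fit under the limit
        final InputStream passThrough; // set when the body was too big to cache
        Result(byte[] body, InputStream stream) {
            this.cacheableBody = body;
            this.passThrough = stream;
        }
    }

    Result read(InputStream responseBody) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[2048];
        int n;
        // Keep buffering while we are at or under the limit and data remains.
        while (buf.size() <= maxSize && (n = responseBody.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        if (buf.size() <= maxSize) {
            // The stream ended before we hit the limit: safe to hand to the cache.
            return new Result(buf.toByteArray(), null);
        }
        // Too big: reconstruct the response body for the client by replaying
        // the buffered bytes followed by the unconsumed remainder of the stream.
        InputStream replay = new SequenceInputStream(
                new ByteArrayInputStream(buf.toByteArray()), responseBody);
        return new Result(null, replay);
    }
}
```

The key point of the reconstruction step is that the client still sees the complete, byte-identical body even though the reader has already consumed a prefix of it.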
