On Wed, September 20, 2006 9:50 pm, Ruediger Pluem wrote:

> You can set a max cache file size (CacheMaxFileSize) which prevents
> caching files that are larger than a specific size. This is checked
> after each bucket is written to the disk. If the stream is larger
> than the max file size, the file gets deleted and caching of this
> request is stopped. So this also works with chunked responses.
Hmmm - this affects the case where another process/thread is delivering
from a still-being-cached entity. If the lead thread decides to stop
caching while other threads are following, those following threads will
have delivered only CacheMaxFileSize bytes of the entity, and the
request will be cut short.

One workaround for this problem is to have following threads ignore the
cached entity if the entity does not have a content length - something
the entity will only have once caching is complete. This means the
backend server will still see a spike of traffic while the object is
being cached, but the cache will not try to cache multiple copies of
the entity in parallel until the first one wins, which is what happens
now.
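Something like the following is what I have in mind - the helper name
is mine, just to illustrate the check, not actual mod_disk_cache code:

#include "apr_tables.h"

/* Sketch only: a cache provider could call this from its open path
 * before serving a cached entity to a following thread. A completely
 * cached entity carries a Content-Length header; one that the lead
 * thread is still writing (or has abandoned) does not. */
static int entity_is_complete(const apr_table_t *resp_hdrs)
{
    return apr_table_get(resp_hdrs, "Content-Length") != NULL;
}

When this returns false, the provider would decline to serve from the
cache, and the follower would go to the backend instead of being cut
short at CacheMaxFileSize.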
Regards,
Graham

--