At 05:22 AM 7/13/2004, Graham Leggett wrote:

>The problem arises when large data sizes (say a 650MB CD ISO) are stored in a 
>multi-tier webserver architecture (mod_proxy in front of a backend, for example), and 
>somebody comes along and tries to download it using a download accelerator, or they 
>simply try to resume a failed download.
>
>The full 650MB CD ISO is then transferred from the backend to the frontend, which 
>then pulls out the bits it needs, dumping the rest. And this happens once for 
>every single byte range request.

The solution to this problem is *not* to become tightly coupled to the
placement of filters, to handle file streams directly, etc.

The clean solution is a new forward-space semantic for the filter or 
brigade, which would allow you to skip n bytes.  Filters which know their
transformation (1:1 mappings, etc.) could simply pass this request
forward, until it ultimately hits the core filesystem filter, which can
satisfy it with a real apr_file_seek(fd, APR_CUR, &n) -- see the sketch
below.

Those filters which cannot know their transformation without fully
reprocessing the stream (e.g. includes) would have to reprocess the 
data, no surprise there.  

Bill


