Justin Erenkrantz wrote:
> But I'm not convinced that the benefit gained by allowing some byte-range optimization is going to be worth it. As soon as you stick in mod_include and/or mod_deflate, you're going to have the ability to have arbitrary content transformation. Even EBCDIC character conversions are not one-to-one. In fact, I bet the number of filters that do a 1:1 transformation (that aren't logging) is small. So the number of cases where it isn't arbitrary would be minuscule.
The byte ranges aren't done for the benefit of the httpd itself, but rather for a potential multi-tier backend supported by mod_proxy or mod_backhand.
Right now, if you make a range request for a big file, it will work - but not before the entire file has been passed from the backend application server to the frontend httpd over the backend network. If you think of files the size of CD ISOs (650MB) or DVD ISOs (4GB), this backend transfer is not trivial. Add a download accelerator to the equation (likely on a big file like a CD image) and suddenly the entire file is transferred once for each range request - ouch.
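For illustration (the URL and byte offsets here are made up), an accelerator splits a single download into parallel requests, each carrying its own Range header:

    GET /isos/cd.iso HTTP/1.1
    Host: download.example.com
    Range: bytes=0-162499999

    GET /isos/cd.iso HTTP/1.1
    Host: download.example.com
    Range: bytes=162500000-324999999

If the frontend can't push those Range headers through to the backend, each request drags the full 650MB across the backend network before the client sees its slice.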
The idea is to allow the input and output filter stacks to react intelligently when given a byte-ranged response from a content handler such as proxy. The Content-Length (CL) and range information can either be parsed directly by the filters ("I am mod_include, I change CL and I don't allow Ranges, so let me strip Range from the input headers and CL from the output headers"), or CL and Range can be encoded into metadata that isn't header specific, for the same purpose.
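To make the first approach concrete, here is a rough sketch of the header handling such a filter might do. The filter itself is hypothetical and the actual content transformation is omitted; only the Range/CL stripping described above is shown:

    #include "httpd.h"
    #include "apr_buckets.h"
    #include "apr_tables.h"
    #include "util_filter.h"

    /* Sketch: a content-transforming output filter declares that byte
     * ranges computed against the original representation no longer
     * apply to its output. */
    static apr_status_t transform_range_fixup(ap_filter_t *f,
                                              apr_bucket_brigade *bb)
    {
        request_rec *r = f->r;

        /* "I don't allow Ranges": stop the client's range request from
         * being honoured against the untransformed resource. */
        apr_table_unset(r->headers_in, "Range");

        /* "I change CL": the original Content-Length no longer holds
         * once the body is transformed. */
        apr_table_unset(r->headers_out, "Content-Length");

        /* The header surgery only needs to happen once; remove the
         * filter and pass the data through untouched. */
        ap_remove_output_filter(f);
        return ap_pass_brigade(f->next, bb);
    }

The metadata variant would carry the same two facts in the brigade rather than in the header tables, so that filters which know nothing about HTTP headers can still react to them.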
Regards,
Graham
