Ruediger Pluem wrote:
> 
> On 09/26/2006 01:00 PM, Joe Orton wrote:
>> On Tue, Sep 26, 2006 at 10:52:18AM +0200, Niklas Edmundsson wrote:
>>
>>> This patch depends on "mod_disk_cache LFS-aware config" submitted 
>>> earlier and is for trunk.
>>>
>>> It makes caching of large files possible on 32bit machines by:
>>>
>>> * Realising that a file is a file and can be copied as such, without
>>>  reading the whole thing into memory first.
>>
>> This was discussed a while back.  I think this is an API problem which 
>> needs to be fixed at API level, not something which should be worked 
>> around by adding bucket-type-specific hacks.  The "store_body" callback 
>> needs to be able to operate like a real output filter so it can avoid 
>> the buffering-buckets-into-memory problem; as in:
>>
>> while (brigade not empty)
>> 1. read data from bucket
>> 2. write data to disk
>> 3. pass bucket up output filter chain
>> 4. delete bucket
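(A rough sketch of what such a streaming store_body loop could look like with
the APR brigade API. cache_file, out and f are placeholders here; the current
store_body provider does not actually see the filter chain, which is exactly
the API change being described.)

apr_status_t rv;
apr_bucket *e;

while (!APR_BRIGADE_EMPTY(bb)) {
    const char *data;
    apr_size_t len;

    e = APR_BRIGADE_FIRST(bb);

    if (APR_BUCKET_IS_EOS(e)) {
        /* metadata bucket: nothing to store, just pass it on */
        APR_BUCKET_REMOVE(e);
        APR_BRIGADE_INSERT_TAIL(out, e);
        return ap_pass_brigade(f->next, out);
    }

    /* 1. read data from the bucket; a file bucket morphs into a small
     *    heap/mmap bucket here, so only one chunk is in memory at a time */
    rv = apr_bucket_read(e, &data, &len, APR_BLOCK_READ);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    /* 2. write that chunk to the cache file on disk */
    rv = apr_file_write_full(cache_file, data, len, NULL);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    /* 3. pass the bucket on down the output filter chain */
    APR_BUCKET_REMOVE(e);
    APR_BRIGADE_INSERT_TAIL(out, e);
    rv = ap_pass_brigade(f->next, out);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    /* 4. the bucket has been handed off; empty the pass brigade for reuse */
    apr_brigade_cleanup(out);
}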
> 
> I agree. But in the case where we want to store a file bucket, couldn't we
> use sendfile? Something like an ap(r)_store_file_bucket function, which would
> store it via sendfile or splice on Linux, would be really neat. Of course,
> a lot more details would need to be worked out, e.g. whether and how this
> function changes the file bucket and what happens to the positions of the
> source and target file pointers.

On Linux, sendfile() only works from an mmap-able fd to a socket. With
splice(), one of the two fds must refer to a pipe (AFAIK), and with tee()
both fds must refer to pipes.
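
For what it's worth, copying file to file with splice() then has to bounce
through a pipe. A minimal sketch (Linux >= 2.6.17, error handling trimmed,
the function name is illustrative):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Copy len bytes from src_fd to dst_fd. Each splice() call needs a pipe on
 * one side, so the data moves file -> pipe -> file without touching
 * userspace buffers. */
static int copy_via_splice(int src_fd, int dst_fd, size_t len)
{
    int p[2];

    if (pipe(p) < 0)
        return -1;

    while (len > 0) {
        /* file -> pipe (at most one pipe buffer's worth) */
        ssize_t in = splice(src_fd, NULL, p[1], NULL, len, SPLICE_F_MOVE);
        if (in <= 0)
            goto fail;
        len -= (size_t)in;

        /* pipe -> file: drain everything that was just spliced in */
        while (in > 0) {
            ssize_t out = splice(p[0], NULL, dst_fd, NULL, (size_t)in,
                                 SPLICE_F_MOVE);
            if (out <= 0)
                goto fail;
            in -= out;
        }
    }

    close(p[0]);
    close(p[1]);
    return 0;

fail:
    close(p[0]);
    close(p[1]);
    return -1;
}

So it avoids copies to and from userspace, but it is not a single-call
file-to-file primitive the way a hypothetical ap(r)_store_file_bucket would
want.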

--
Davi Arnaut
