On 06 Jul 2020, at 09:28, Ruediger Pluem <rpl...@apache.org> wrote:

>> +    apr_sha1_init(&context);
>> +    nbytes = sizeof(buf);
>> +    while ((status = apr_file_read(fd, buf, &nbytes)) == APR_SUCCESS) {
> 
> Why do we still use apr_file_read when MMAP is available? IMHO we should use
> apr_mmap_offset. But for large files we would need to limit how much memory we
> map at a time, to avoid exhausting memory. So I would propose that we create a
> brigade with one file bucket and do bucket reads, along the lines of the
> following code from the default handler (excluding the reading part):
> 
>         bb = apr_brigade_create(r->pool, c->bucket_alloc);
> 
>         e = apr_brigade_insert_file(bb, fd, 0, r->finfo.size, r->pool);
> 
> #if APR_HAS_MMAP
>         if (d->enable_mmap == ENABLE_MMAP_OFF) {
>             (void)apr_bucket_file_enable_mmap(e, 0);
>         }
> #endif
> 
> Of course that would require getting the finfo if we open the file
> descriptor ourselves.
> 
> The reading would then be something like (without error handling):
> 
> while (!APR_BRIGADE_EMPTY(bb))
> {
>      e = APR_BUCKET_FIRST(bb);
>      apr_bucket_read(e, &str, &len, APR_BLOCK_READ);
>      apr_sha1_update_binary(&context, str, len);
>      apr_bucket_delete(e);
> }

Makes much more sense - r1879541.
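
For the archives, here is a rough self-contained sketch of that approach outside
of a request context. The file_sha1 helper name, the enable_mmap flag and the
pool/bucket-allocator handling are illustrative only, not necessarily what
r1879541 committed:

/* Sketch: hash a file via a one-bucket brigade so that large files are
 * read (or mmap'd) a bounded chunk at a time instead of all at once.
 * Helper name, flag and pool handling are hypothetical. */
#include "apr_buckets.h"
#include "apr_file_io.h"
#include "apr_sha1.h"

static apr_status_t file_sha1(unsigned char digest[APR_SHA1_DIGESTSIZE],
                              apr_file_t *fd, int enable_mmap,
                              apr_pool_t *pool)
{
    apr_finfo_t finfo;
    apr_bucket_alloc_t *ba;
    apr_bucket_brigade *bb;
    apr_bucket *e;
    apr_sha1_ctx_t context;
    apr_status_t status;

    /* We opened the descriptor ourselves, so fetch the size here. */
    status = apr_file_info_get(&finfo, APR_FINFO_SIZE, fd);
    if (status != APR_SUCCESS) {
        return status;
    }

    ba = apr_bucket_alloc_create(pool);
    bb = apr_brigade_create(pool, ba);
    e = apr_brigade_insert_file(bb, fd, 0, finfo.size, pool);

#if APR_HAS_MMAP
    if (!enable_mmap) {
        (void)apr_bucket_file_enable_mmap(e, 0);
    }
#endif

    apr_sha1_init(&context);

    /* Reading a file bucket splits off one chunk at a time; deleting the
     * bucket we just read keeps the brigade (and memory use) small. */
    while (!APR_BRIGADE_EMPTY(bb)) {
        const char *str;
        apr_size_t len;

        e = APR_BUCKET_FIRST(bb);
        status = apr_bucket_read(e, &str, &len, APR_BLOCK_READ);
        if (status != APR_SUCCESS) {
            apr_brigade_destroy(bb);
            return status;
        }
        apr_sha1_update_binary(&context, (const unsigned char *)str,
                               (unsigned int)len);
        apr_bucket_delete(e);
    }

    apr_sha1_final(digest, &context);
    apr_brigade_destroy(bb);

    return APR_SUCCESS;
}

The point of the one-bucket brigade is that apr_bucket_read on a file bucket
only materialises one chunk (or a bounded mmap window) at a time, so deleting
each bucket after hashing keeps memory use flat regardless of file size.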

Regards,
Graham