On Thu, Feb 23, 2017 at 5:34 PM, Yann Ylavic <ylavic....@gmail.com> wrote:
> On Thu, Feb 23, 2017 at 4:58 PM, Stefan Eissing
> <stefan.eiss...@greenbytes.de> wrote:
>>
>>> Am 23.02.2017 um 16:38 schrieb Yann Ylavic <ylavic....@gmail.com>:
>>>
>>> On Wed, Feb 22, 2017 at 6:36 PM, Jacob Champion <champio...@gmail.com> 
>>> wrote:
>>>> On 02/22/2017 12:00 AM, Stefan Eissing wrote:
>>>>>
>>>>> Just so I do not misunderstand:
>>>>>
>>>>> you increased BUCKET_BUFF_SIZE in APR from 8000 to 64K? That is what you
>>>>> are testing?
>>>>
>>>>
>>>> Essentially, yes, *and* turning off mmap and sendfile. My hope is to
>>>> disable the mmap optimization by default while still improving
>>>> overall performance for most users.
>>>>
>>>> Technically, Yann's patch doesn't redefine APR_BUCKET_BUFF_SIZE; it just
>>>> defines a new buffer size for use with the file bucket. It's a little less
>>>> than 64K, I assume to make room for an allocation header:
>>>>
>>>>    #define FILE_BUCKET_BUFF_SIZE (64 * 1024 - 64)
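
FWIW, my reading of the -64 (an assumption on my side, the patch
doesn't document it): the allocator prefixes each allocation with an
apr_memnode_t header, so requesting a full 64K would push the
underlying allocation past the 64K boundary. Roughly:

    /* Assumed arithmetic, not from the patch itself:
     *   65472 (buffer) + up to 64 (apr_memnode_t header) = 65536 = 64K
     * i.e. the reserved 64 bytes keep header + buffer within a single
     * 64K allocation instead of spilling into the next size class. */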
>>>
>>> Actually I'm not very pleased with this solution (or the final one
>>> that would make this size open/configurable).
>>> The issue is the potentially huge (order big-n) allocations, which
>>> may end up hurting the system (fragmentation, OOM...).
>>>
>>> So I'm thinking of another way to achieve the same with the current
>>> APR_BUCKET_BUFF_SIZE (2 pages) per alloc.
>>>
>>> The idea is to have a new apr_allocator_allocv() function which would
>>> fill an iovec with buffers available in the allocator's freelist
>>> (i.e. spare apr_memnodes), totalling at least the given min_size
>>> bytes (possibly a max too, but I don't see the need for now) and up
>>> to the size of the given iovec.
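
For concreteness, the kind of signature I have in mind (name,
parameters and semantics are all tentative, nothing is written yet):

    /* Tentative sketch only.  Fill 'vec' with up to '*nvec' buffers
     * taken from the allocator's freelist (spare apr_memnode_t's),
     * totalling at least 'min_size' bytes; a fresh node would be
     * allocated only for whatever the freelist cannot provide.  On
     * return, '*nvec' is the number of entries actually filled. */
    APR_DECLARE(apr_status_t) apr_allocator_allocv(apr_allocator_t *allocator,
                                                   apr_size_t min_size,
                                                   struct iovec *vec,
                                                   int *nvec);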
>>
>> Interesting. Not only for pure files maybe.
>>
>> It would be great if there were an SSL_writev()...
>
> Indeed, openssl would fully fill the TLS records.
>
>> but until there is, the TLS case will either turn every iovec into
>> its own TLS record
>
> Yes, but dealing with these records is ultimately more of a client
> issue (from there on it's all networking, and coalescing will happen
> in the core output filter at the socket level).
>
> Anyway, mod_proxy(s) could be the client, so...
>
> I've no idea how much it costs to have 8K vs 16K records, though.
> Maybe in the mod_ssl case we'd want 16K buffers, still reasonable?
>
>> or one needs another copy before that. This last strategy is used by
>> mod_http2. Since there are 9 header bytes per frame, copying data
>> into a right-sized buffer gives better performance. So it would be
>> nice to read n bytes from a bucket brigade and get iovecs back,
>> which, as I understand it, is what you propose?
>
> I didn't think of apr_bucket_readv(); I was more focused on having
> file_bucket_read() do this internally/transparently. But once file
> buckets can do that, I think we can generalize the concept, at worst
> filling only iovec[0] when it's ENOTIMPL and/or makes no sense...
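
By analogy with apr_bucket_read(), the hypothetical readv could look
something like this (purely to fix ideas, not a committed interface):

    /* Hypothetical, mirroring apr_bucket_read(): read at least
     * 'min_size' bytes (when available) from the bucket into up to
     * '*nvec' iovec entries.  Bucket types that can't vectorize would
     * fill vec[0] only and set *nvec = 1, i.e. behave exactly like
     * apr_bucket_read() does today. */
    apr_status_t apr_bucket_readv(apr_bucket *b,
                                  struct iovec *vec, int *nvec,
                                  apr_size_t min_size,
                                  apr_read_type_e block);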
>
> That'd help the mod_ssl case with something like
> apr_bucket_readv(min_size=16K); I'll try to think about it once/if I
> can get the simpler file_bucket_read()-only version working ;)

Hm no, this needs to happen on the producer side, not at the final
apr_bucket_read().
So for the mod_ssl case I think we could have a simple (new)
apr_bucket_alloc_set_size(16K) in the first place if it helps more
than hurts.
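
Hypothetically, something like this (again nothing implemented, just
the shape of it):

    /* Hypothetical knob: let whoever creates the bucket allocator pick
     * the heap buffer size (e.g. 16K for mod_ssl, so one read lines up
     * with a full TLS record) instead of the hardcoded
     * APR_BUCKET_BUFF_SIZE. */
    APU_DECLARE(void) apr_bucket_alloc_set_size(apr_bucket_alloc_t *list,
                                                apr_size_t size);

    /* e.g. somewhere in mod_ssl's connection setup: */
    apr_bucket_alloc_set_size(c->bucket_alloc, 16 * 1024);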

As for your question about "iovecs back" (which I didn't actually
answer), it all happens at the bucket alloc level, when buckets are
deleted.
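
Roughly, from memory of the existing code (so take with a grain of
salt):

    /* Existing flow, approximately:
     *   apr_bucket_delete(b)
     *     -> heap_bucket_destroy() -> apr_bucket_free(h->base)
     *       -> buffer back on the apr_bucket_alloc_t freelist
     *         -> (eventually) back to the apr_allocator_t's spare
     *            memnodes, i.e. exactly what the proposed allocv()
     *            would harvest into an iovec. */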
