Why would high-order memory allocations be a problem in userspace code,
which uses virtual memory?  I thought high-order allocations are a big
problem in kernel space, which has to deal with contiguous physical pages.

But when you write to a socket, doesn't the kernel scatter the userspace
buffer across multiple SKBs, on order-0 pages allocated by the kernel?
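
For concreteness, here is a minimal sketch of the scenario I mean (plain
blocking TCP, connect() setup and error handling elided; the 256 KB size is
just an arbitrary illustration, not anything httpd actually does):

#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

#define BIG (256 * 1024)   /* "order big-n" only in virtual address space */

static void send_big(int sock)
{
    /* malloc() gives us virtually contiguous memory; the kernel backs it
     * with ordinary order-0 pages on demand, so this costs us little. */
    char *buf = malloc(BIG);
    memset(buf, 'x', BIG);

    /* One write of the whole buffer.  AIUI the kernel then copies the data
     * into a chain of MSS-sized SKBs, each from small kernel allocations,
     * so any high-order allocation pressure lives on the kernel side. */
    for (size_t off = 0; off < BIG; ) {
        ssize_t n = send(sock, buf + off, BIG - off, 0);
        if (n <= 0)
            break;             /* real code would check errno/EINTR here */
        off += (size_t) n;
    }

    free(buf);
}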


On Thu, Feb 23, 2017 at 1:16 PM, Jacob Champion <champio...@gmail.com>
wrote:

> On 02/23/2017 08:34 AM, Yann Ylavic wrote:
> > Actually I'm not very pleased with this solution (or the final one
> > that would make this size open / configurable).
> > The issue is the potentially huge (order big-n) allocations, which
> > may ultimately hurt the system (fragmentation, OOM...).
>
> Power users can break the system, and this is a power tool, right? And we
> have HugeTLB kernels and filesystems to play with, with 2MB and bigger
> pages... Making all these assumptions for 90% of users is perfect for the
> out-of-the-box experience, but hardcoding them so that no one can fix
> broken assumptions seems Bad.
>
> (And don't get me wrong, I think applying vectored I/O to the brigade
> would be a great thing to try out and benchmark. I just think it's a
> long-term and heavily architectural fix, when a short-term change to get
> rid of some #defined constants could have immediate benefits.)
>
>> I've no idea how much it costs to have 8K vs 16K records, though.
>> Maybe in the mod_ssl case we'd want 16K buffers, still reasonable?
>>
>
> We especially can't/shouldn't hardcode this. People who want maximum
> throughput may want nice big records, but IIRC users who want progressive
> rendering need smaller records so that they don't have to wait as long for
> the first decrypted chunk. It needs to be tunable, possibly per-location.
>
> --Jacob
>
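
For what it's worth, the "vectored I/O on the brigade" idea above would look
roughly like this at the syscall level.  This is only a minimal POSIX
writev(2) sketch, with made-up strings standing in for bucket data rather
than real httpd code; in httpd the equivalent would presumably be
apr_socket_sendv() over the brigade's buckets:

#include <sys/types.h>
#include <sys/uio.h>

/* Instead of copying N small buckets into one big contiguous buffer (the
 * large allocation being discussed), hand the kernel an array of pointers
 * and let it gather the pieces itself. */
static ssize_t send_buckets(int sock)
{
    /* Stand-ins for bucket data; real code would walk the brigade and
     * fill the iovec from each bucket without an intermediate copy. */
    static const char part1[] = "HTTP/1.1 200 OK\r\n";
    static const char part2[] = "Content-Length: 5\r\n\r\n";
    static const char part3[] = "hello";

    struct iovec vec[3] = {
        { .iov_base = (void *) part1, .iov_len = sizeof(part1) - 1 },
        { .iov_base = (void *) part2, .iov_len = sizeof(part2) - 1 },
        { .iov_base = (void *) part3, .iov_len = sizeof(part3) - 1 },
    };

    /* One syscall, no order-n staging buffer in userspace. */
    return writev(sock, vec, 3);
}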
