On Mon, Jan 9, 2017 at 9:45 AM, Michal Hocko wrote:
> What about those non-default configurations? Do they really want to
> invoke the OOM killer rather than fall back to vmalloc?
In our case, we use 4096 slots per fq, so that is a 16KB memory allocation.
And these allocations happen right af
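As a rough aid for the size argument above: the fq hash table is one struct rb_root per slot, so the request scales linearly with the configured slot count. The helper below is made up for illustration only, not taken from sch_fq.c:

#include <linux/rbtree.h>

/* Illustration only: one rb_root (a single pointer) per hash slot, so a
 * non-default slot count quickly turns this into a multi-page request. */
static inline size_t fq_table_bytes(unsigned int slots)
{
	return (size_t)slots * sizeof(struct rb_root);
}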
On Mon 09-01-17 08:00:16, Eric Dumazet wrote:
> On Mon, Jan 9, 2017 at 2:22 AM, Michal Hocko wrote:
> >
> > the changelog doesn't mention it but this, unlike other kvmalloc
> > conversions, is not without functional changes. The kmalloc part
> > will be weaker than it is with the original code for
On Mon, Jan 9, 2017 at 2:22 AM, Michal Hocko wrote:
>
> the changelog doesn't mention it but this, unlike other kvmalloc
> conversions, is not without functional changes. The kmalloc part
> will be weaker than it is with the original code for !costly (<64kB)
> requests, because we are enforcing __G
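To make the functional change concrete, here is a simplified sketch of the point being made. This is my own illustration, not the actual kvmalloc code, and the exact flags are an assumption: when a vmalloc fallback exists, the physically contiguous attempt is made with flags that give up early instead of retrying or invoking the OOM killer.

#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Illustrative only: the real helper has more cases (size thresholds,
 * GFP_KERNEL checks, NUMA node handling, and so on). */
static void *kvmalloc_like(size_t size, gfp_t flags)
{
	/* Fail fast and quietly on the physically contiguous attempt... */
	void *p = kmalloc(size, flags | __GFP_NOWARN | __GFP_NORETRY);

	if (p)
		return p;
	/* ...because a virtually contiguous fallback is available anyway. */
	return vmalloc(size);
}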
On Fri 06-01-17 17:19:44, Michal Hocko wrote:
[...]
> From 8eadf8774daecdd6c4de37339216282a16920458 Mon Sep 17 00:00:00 2001
> From: Michal Hocko
> Date: Fri, 6 Jan 2017 17:03:31 +0100
> Subject: [PATCH] net: use kvmalloc rather than open coded variant
>
> fq_alloc_node, alloc_netdev_mqs and neti
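For readers without the patch at hand, the conversion named in the subject looks roughly like the sketch below. This is an approximation based on the discussion in this thread, not the literal diff: __GFP_REPEAT comes from the fq_alloc_node question further down, while the __GFP_NOWARN in the open-coded variant is an assumption, and the function names are made up.

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Before: the open-coded kmalloc-with-vmalloc-fallback pattern. */
static void *alloc_open_coded(size_t sz, int node)
{
	void *ptr = kmalloc_node(sz, GFP_KERNEL | __GFP_REPEAT | __GFP_NOWARN,
				 node);

	if (!ptr)
		ptr = vmalloc_node(sz, node);
	return ptr;
}

/* After: let the common helper try kmalloc first and fall back to
 * vmalloc itself; the result is freed with kvfree() in both cases. */
static void *alloc_with_helper(size_t sz, int node)
{
	return kvmalloc_node(sz, GFP_KERNEL | __GFP_REPEAT, node);
}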
On 01/06/2017 06:08 PM, Eric Dumazet wrote:
> On Fri, Jan 6, 2017 at 8:55 AM, Vlastimil Babka wrote:
>> On 01/06/2017 05:48 PM, Eric Dumazet wrote:
>>> On Fri, Jan 6, 2017 at 8:31 AM, Vlastimil Babka wrote:
>>>
>>>> I wonder what's the cause of the penalty (when accessing the vmapped
>>>> area I suppose?) Is it higher risk of collision cache misses within the
>>>> area, compared to consecutive physical addresses?
On Fri, Jan 6, 2017 at 8:55 AM, Vlastimil Babka wrote:
> On 01/06/2017 05:48 PM, Eric Dumazet wrote:
>> On Fri, Jan 6, 2017 at 8:31 AM, Vlastimil Babka wrote:
>>
>>>
>>> I wonder what's the cause of the penalty (when accessing the vmapped
>>> area I suppose?) Is it higher risk of collision cache misses within the
>>> area, compared to consecutive physical addresses?
On 01/06/2017 05:48 PM, Eric Dumazet wrote:
> On Fri, Jan 6, 2017 at 8:31 AM, Vlastimil Babka wrote:
>
>>
>> I wonder what's the cause of the penalty (when accessing the vmapped
>> area I suppose?) Is it higher risk of collision cache misses within the
>> area, compared to consecutive physical addresses?
On Fri, Jan 6, 2017 at 8:48 AM, Eric Dumazet wrote:
> On Fri, Jan 6, 2017 at 8:31 AM, Vlastimil Babka wrote:
>
>>
>> I wonder what's the cause of the penalty (when accessing the vmapped
>> area I suppose?) Is it higher risk of collision cache misses within the
>> area, compared to consecutive physical addresses?
On Fri, Jan 6, 2017 at 8:31 AM, Vlastimil Babka wrote:
>
> I wonder what's the cause of the penalty (when accessing the vmapped
> area I suppose?) Is it higher risk of collision cache misses within the
> area, compared to consecutive physical addresses?
I believe tests were done with 48 fq qdiscs
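On the penalty question above, one commonly cited cause is TLB pressure rather than cache collisions per se: kmalloc memory is reached through the kernel's direct mapping, which is typically backed by large pages, while a vmapped area is assembled from individual 4KB page mappings. A hypothetical measurement sketch along those lines (not something posted in this thread; the function names are made up):

#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/rbtree.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

/* Time a linear walk over a table of rb_root slots. */
static u64 walk_ns(struct rb_root *tbl, unsigned long slots)
{
	volatile struct rb_node *sink = NULL;
	unsigned long i;
	u64 t0 = ktime_get_ns();

	for (i = 0; i < slots; i++)
		sink = tbl[i].rb_node;	/* touch every slot */
	(void)sink;
	return ktime_get_ns() - t0;
}

/* Compare the same walk over a kmalloc'ed and a vmalloc'ed table. */
static void compare_access_cost(unsigned long slots)
{
	size_t sz = slots * sizeof(struct rb_root);
	struct rb_root *k = kmalloc(sz, GFP_KERNEL);
	struct rb_root *v = vmalloc(sz);

	if (k && v) {
		memset(k, 0, sz);
		memset(v, 0, sz);
		pr_info("kmalloc walk: %llu ns, vmalloc walk: %llu ns\n",
			walk_ns(k, slots), walk_ns(v, slots));
	}
	kfree(k);
	vfree(v);
}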
On 01/06/2017 04:39 PM, Eric Dumazet wrote:
> On Fri, Jan 6, 2017 at 7:20 AM, Michal Hocko wrote:
>>
>> Hi Eric,
>> I am currently checking kmalloc with vmalloc fallback users and converting
>> them to a new kvmalloc helper [1]. While I am adding support for
>> __GFP_REPEAT to kvmalloc [2] I was wondering what is the reason to use
>> __GFP_REPEAT in fq_alloc_node in the first place.
On Fri 06-01-17 17:07:43, Michal Hocko wrote:
> On Fri 06-01-17 07:39:14, Eric Dumazet wrote:
> > On Fri, Jan 6, 2017 at 7:20 AM, Michal Hocko wrote:
> > >
> > > Hi Eric,
> > > I am currently checking kmalloc with vmalloc fallback users and converting
> > > them to a new kvmalloc helper [1]. While I am adding support for
> > > __GFP_REPEAT to kvmalloc [2] I was wondering what is the reason to use
> > > __GFP_REPEAT in fq_alloc_node in the first place.
On Fri 06-01-17 07:39:14, Eric Dumazet wrote:
> On Fri, Jan 6, 2017 at 7:20 AM, Michal Hocko wrote:
> >
> > Hi Eric,
> > I am currently checking kmalloc with vmalloc fallback users and converting
> > them to a new kvmalloc helper [1]. While I am adding support for
> > __GFP_REPEAT to kvmalloc [2] I was wondering what is the reason to use
> > __GFP_REPEAT in fq_alloc_node in the first place.
On Fri, Jan 6, 2017 at 7:20 AM, Michal Hocko wrote:
>
> Hi Eric,
> I am currently checking kmalloc with vmalloc fallback users and converting
> them to a new kvmalloc helper [1]. While I am adding support for
> __GFP_REPEAT to kvmalloc [2] I was wondering what is the reason to use
> __GFP_REPEAT in fq_alloc_node in the first place.
Hi Eric,
I am currently checking kmalloc with vmalloc fallback users and converting
them to a new kvmalloc helper [1]. While I am adding support for
__GFP_REPEAT to kvmalloc [2] I was wondering what is the reason to use
__GFP_REPEAT in fq_alloc_node in the first place. c3bd85495aef
("pkt_sched: fq: