On Fri, 2012-10-19 at 09:03 +0900, JoonSoo Kim wrote:
> Hello, Eric.
> Thank you very much for a kind comment about my question.
> I have one more question related to network subsystem.
> Please let me know what I misunderstand.
>
> 2012/10/14 Eric Dumazet :
> > In latest kernels, skb->head no lon
Hello, Eric.
Thank you very much for a kind comment about my question.
I have one more question related to network subsystem.
Please let me know what I misunderstand.
2012/10/14 Eric Dumazet :
> In latest kernels, skb->head no longer uses kmalloc()/kfree(), so SLAB vs
> SLUB is less a concern for n
On Wed, Oct 17, 2012 at 1:33 PM, Tim Bird wrote:
> On 10/17/2012 12:20 PM, Shentino wrote:
>> Potentially stupid question
>>
>> But is SLAB the one where all objects per cache have a fixed size and
>> thus you don't have any bookkeeping overhead for the actual
>> allocations?
>>
>> I remember some
On Wed, Oct 17, 2012 at 5:58 PM, Tim Bird wrote:
> On 10/17/2012 12:13 PM, Eric Dumazet wrote:
>> On Wed, 2012-10-17 at 11:45 -0700, Tim Bird wrote:
>>
>>> 8G is a small web server? The RAM budget for Linux on one of
>>> Sony's cameras was 10M. We're not merely not in the same ballpark -
>>> you
On 10/17/2012 12:13 PM, Eric Dumazet wrote:
> On Wed, 2012-10-17 at 11:45 -0700, Tim Bird wrote:
>
>> 8G is a small web server? The RAM budget for Linux on one of
>> Sony's cameras was 10M. We're not merely not in the same ballpark -
>> you're in a ballpark and I'm trimming bonsai trees... :-)
>
On 10/17/2012 12:20 PM, Shentino wrote:
> Potentially stupid question
>
> But is SLAB the one where all objects per cache have a fixed size and
> thus you don't have any bookkeeping overhead for the actual
> allocations?
>
> I remember something about one of the allocation mechanisms being
> desi
On Wed, Oct 17, 2012 at 12:13 PM, Eric Dumazet wrote:
> On Wed, 2012-10-17 at 11:45 -0700, Tim Bird wrote:
>
>> 8G is a small web server? The RAM budget for Linux on one of
>> Sony's cameras was 10M. We're not merely not in the same ballpark -
>> you're in a ballpark and I'm trimming bonsai trees... :-)
On Wed, 2012-10-17 at 11:45 -0700, Tim Bird wrote:
> 8G is a small web server? The RAM budget for Linux on one of
> Sony's cameras was 10M. We're not merely not in the same ballpark -
> you're in a ballpark and I'm trimming bonsai trees... :-)
>
Even laptops in 2012 have 4GB+ of RAM.
(Maybe n
On 10/16/2012 12:16 PM, Eric Dumazet wrote:
> On Tue, 2012-10-16 at 15:27 -0300, Ezequiel Garcia wrote:
>
>> Yes, we have some numbers:
>>
>> http://elinux.org/Kernel_dynamic_memory_analysis#Kmalloc_objects
>>
>> Are they too informal? I can add some details...
>>
>> They've been measured on a **v
On Tue, 2012-10-16 at 15:27 -0300, Ezequiel Garcia wrote:
> Yes, we have some numbers:
>
> http://elinux.org/Kernel_dynamic_memory_analysis#Kmalloc_objects
>
> Are they too informal? I can add some details...
>
> They've been measured on a **very** minimal setup, almost every option
> is stripp
On Thu, 11 Oct 2012, Ezequiel Garcia wrote:
> * Is SLAB a proper choice? or is it just historical and never been
> re-evaluated?
> * Does the average embedded guy knows which allocator to choose
> and what's the impact on his platform?
My current ideas on this subject are to get to a poin
On Tue, 16 Oct 2012, Ezequiel Garcia wrote:
> It might be worth reminding that very small systems can use SLOB
> allocator, which does not suffer from this kind of fragmentation.
Well, I have never seen non-experimental systems that use SLOB. Others
have claimed they exist.
On Mon, 15 Oct 2012, David Rientjes wrote:
> This type of workload that really exhibits the problem with remote freeing
> would suggest that the design of slub itself is the problem here.
There is a tradeoff here between spatial data locality and temporal
locality. Slub always frees to the queue
On Tue, Oct 16, 2012 at 3:44 PM, Tim Bird wrote:
> On 10/16/2012 11:27 AM, Ezequiel Garcia wrote:
>> On Tue, Oct 16, 2012 at 3:07 PM, Tim Bird wrote:
>>> On 10/16/2012 05:56 AM, Eric Dumazet wrote:
On Tue, 2012-10-16 at 09:35 -0300, Ezequiel Garcia wrote:
> Now, returning to the fra
On 10/16/2012 11:27 AM, Ezequiel Garcia wrote:
> On Tue, Oct 16, 2012 at 3:07 PM, Tim Bird wrote:
>> On 10/16/2012 05:56 AM, Eric Dumazet wrote:
>>> On Tue, 2012-10-16 at 09:35 -0300, Ezequiel Garcia wrote:
>>>
Now, returning to the fragmentation. The problem with SLAB is that
its smallest cache available for kmalloced objects is 32 bytes;
while SLUB allows 8, 16, 24 ...
On Tue, Oct 16, 2012 at 3:07 PM, Tim Bird wrote:
> On 10/16/2012 05:56 AM, Eric Dumazet wrote:
>> On Tue, 2012-10-16 at 09:35 -0300, Ezequiel Garcia wrote:
>>
>>> Now, returning to the fragmentation. The problem with SLAB is that
>>> its smallest cache available for kmalloced objects is 32 bytes;
>
On 10/16/2012 05:56 AM, Eric Dumazet wrote:
> On Tue, 2012-10-16 at 09:35 -0300, Ezequiel Garcia wrote:
>
>> Now, returning to the fragmentation. The problem with SLAB is that
>> its smallest cache available for kmalloced objects is 32 bytes;
>> while SLUB allows 8, 16, 24 ...
>>
>> Perhaps adding
On Tue, 2012-10-16 at 09:35 -0300, Ezequiel Garcia wrote:
> Now, returning to the fragmentation. The problem with SLAB is that
> its smallest cache available for kmalloced objects is 32 bytes;
> while SLUB allows 8, 16, 24 ...
>
> Perhaps adding smaller caches to SLAB might make sense?
> Is there
David,
On Mon, Oct 15, 2012 at 9:46 PM, David Rientjes wrote:
> On Sat, 13 Oct 2012, Ezequiel Garcia wrote:
>
>> But SLAB suffers from a lot more internal fragmentation than SLUB,
>> which I guess is a known fact. So memory-constrained devices
>> would waste more memory by using SLAB.
>
> Even wi
On Tue, 2012-10-16 at 10:28 +0900, JoonSoo Kim wrote:
> Hello, Eric.
>
> 2012/10/14 Eric Dumazet :
> > SLUB was really bad in the common workload you describe (allocations
> > done by one cpu, freeing done by other cpus), because all kfree() hit
> > the slow path and cpus contend in __slab_free()
Hello, Eric.
2012/10/14 Eric Dumazet :
> SLUB was really bad in the common workload you describe (allocations
> done by one cpu, freeing done by other cpus), because all kfree() hit
> the slow path and cpus contend in __slab_free() in the loop guarded by
> cmpxchg_double_slab(). SLAB has a cache f
On Sat, 13 Oct 2012, Ezequiel Garcia wrote:
> But SLAB suffers from a lot more internal fragmentation than SLUB,
> which I guess is a known fact. So memory-constrained devices
> would waste more memory by using SLAB.
Even with slub's per-cpu partial lists?
On Sat, 13 Oct 2012, David Rientjes wrote:
> This was in August when preparing for LinuxCon, I tested netperf TCP_RR on
> two 64GB machines (one client, one server), four nodes each, with thread
> counts in multiples of the number of cores. SLUB does a comparable job,
> but once we have the th
On Sat, 2012-10-13 at 02:51 -0700, David Rientjes wrote:
> On Thu, 11 Oct 2012, Andi Kleen wrote:
>
> > When did you last test? Our regressions had disappeared a few kernels
> > ago.
> >
>
> This was in August when preparing for LinuxCon, I tested netperf TCP_RR on
> two 64GB machines (one clie
Hi David,
On Sat, Oct 13, 2012 at 6:54 AM, David Rientjes wrote:
> On Fri, 12 Oct 2012, Ezequiel Garcia wrote:
>
>> >> SLUB is a non-starter for us and incurs a >10% performance degradation in
>> >> netperf TCP_RR.
>> >
>>
>> Where are you seeing that?
>>
>
> In my benchmarking results.
>
>> Noti
On Fri, 12 Oct 2012, Ezequiel Garcia wrote:
> >> SLUB is a non-starter for us and incurs a >10% performance degradation in
> >> netperf TCP_RR.
> >
>
> Where are you seeing that?
>
In my benchmarking results.
> Notice that many defconfigs are for embedded devices,
> and many of them say "use S
On Thu, 11 Oct 2012, Andi Kleen wrote:
> When did you last test? Our regressions had disappeared a few kernels
> ago.
>
This was in August when preparing for LinuxCon, I tested netperf TCP_RR on
two 64GB machines (one client, one server), four nodes each, with thread
counts in multiples of the
Hi,
On Thu, Oct 11, 2012 at 8:10 PM, Andi Kleen wrote:
> David Rientjes writes:
>
>> On Thu, 11 Oct 2012, Andi Kleen wrote:
>>
>>> > While I've always thought SLUB was the default and recommended allocator,
>>> > I'm surprised to find that it's not always the case:
>>>
>>> iirc the main performan
David Rientjes writes:
> On Thu, 11 Oct 2012, Andi Kleen wrote:
>
>> > While I've always thought SLUB was the default and recommended allocator,
>> > I'm surprised to find that it's not always the case:
>>
>> iirc the main performance reasons for slab over slub have mostly
>> disappeared, so in t
On Thu, 11 Oct 2012, Andi Kleen wrote:
> > While I've always thought SLUB was the default and recommended allocator,
> > I'm surprised to find that it's not always the case:
>
> iirc the main performance reasons for slab over slub have mostly
> disappeared, so in theory slab could be finally depre
Ezequiel Garcia writes:
> Hello,
>
> While I've always thought SLUB was the default and recommended allocator,
> I'm surprised to find that it's not always the case:
iirc the main performance reasons for slab over slub have mostly
disappeared, so in theory slab could be finally deprecated now.
Hello,
While I've always thought SLUB was the default and recommended allocator,
I'm surprised to find that it's not always the case:
$ find arch/*/configs -name "*defconfig" | wc -l
452
$ grep -r "SLOB=y" arch/*/configs/ | wc -l
11
$ grep -r "SLAB=y" arch/*/configs/ | wc -l
245
This shows that