[PATCH] SLUB The unqueued slab allocator v6
Note that the definition of the return type of ksize() is currently
different between mm and Linus' tree. Patch is conforming to mm.
This patch also needs sprint_symbol() support from mm.
V5->V6:
- Straighten out various coding issues, among other things in response to
On Sat, 10 Mar 2007, Andrew Morton wrote:
> Is this safe to think about applying yet?
It's safe. By default kernels will be built with SLAB. SLUB is only a
selectable alternative. It should not become the primary slab allocator
until we know that it's really superior overall and have thoroughly tested it.
We lost the leak detector feature.
It might be nice to create synonyms for PageActive, PageReferenced and
PageError, to make things clearer in the slub core. At the expense of
making things less clear globally. Am unsure.
[PATCH] SLUB The unqueued slab allocator v4
V4->V5:
- Single object slabs only for slabs > slub_max_order; otherwise generate
sufficient objects to avoid frequent use of the page allocator. This is
necessary to compensate for fragmentation caused by frequent uses of
the page allocator.
On Fri, 9 Mar 2007, Mel Gorman wrote:
> The results without slub_debug were not good except for IA64. x86_64 and ppc64
> both blew up for a variety of reasons. The IA64 results were
Yuck that is the dst issue that Adrian is also looking at. Likely an issue
with slab merging and RCU frees.
On Fri, 9 Mar 2007, Mel Gorman wrote:
> I'm not sure what you mean by per-order queues. The buddy allocator already
> has per-order lists.
Somehow they do not seem to work right. SLAB (and now SLUB too) can avoid
(or defer) fragmentation by keeping its own queues.
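The queueing idea above can be sketched in plain C (a toy model, not kernel code; the names, the malloc fallback, and the fixed cache depth are all assumptions): a short per-order freelist lets the allocator reuse higher order blocks instead of immediately returning them to the page allocator, deferring the splits that cause fragmentation.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy per-order cache of free blocks.  Instead of handing order-N
 * blocks straight back to the "page allocator" (plain malloc here),
 * keep a short per-order freelist and reuse blocks from it first. */

#define MAX_ORDER   4
#define CACHE_DEPTH 8

struct page_block { struct page_block *next; };

static struct page_block *cache[MAX_ORDER];
static int depth[MAX_ORDER];

static void *order_alloc(int order)
{
    if (cache[order]) {                 /* reuse a cached block */
        struct page_block *b = cache[order];
        cache[order] = b->next;
        depth[order]--;
        return b;
    }
    return malloc((size_t)4096 << order);  /* fall back to the allocator */
}

static void order_free(void *p, int order)
{
    if (depth[order] < CACHE_DEPTH) {   /* defer: keep the block cached */
        struct page_block *b = p;
        b->next = cache[order];
        cache[order] = b;
        depth[order]++;
    } else {
        free(p);                        /* cache full: really release it */
    }
}
```

The point of the sketch is only the deferral: a freed order-1 block comes back on the next order-1 allocation without a round trip through the underlying allocator.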
Note that I am amazed that the kernbench even worked.
The results without slub_debug were not good except for IA64. x86_64 and
ppc64 both blew up for a variety of reasons. The IA64 results were
KernBench Comparison: results table truncated in this snippet (first column 2.6.21-rc2-mm2-clean).
On Thu, 8 Mar 2007, Christoph Lameter wrote:
Note that I am amazed that the kernbench even worked. On small machines
How small? The machines I am testing on aren't "big" but they aren't
miserable either.
I seem to be getting into trouble with order 1 allocations.
That in itself is pretty
On Thu, 8 Mar 2007, Christoph Lameter wrote:
On Thu, 8 Mar 2007, Mel Gorman wrote:
Note that the 16kb page size has a major
impact on SLUB performance. On IA64 slub will use only 1/4th the locking
overhead as on 4kb platforms.
It'll be interesting to see the kernbench tests then with debugging disabled.
Note that I am amazed that the kernbench even worked. On small machines I
seem to be getting into trouble with order 1 allocations. SLAB seems to be
able to avoid the situation by keeping higher order pages on a freelist,
reducing the alloc/frees of higher order pages that the page allocator
has to perform.
On Thu, 8 Mar 2007, Mel Gorman wrote:
> > Note that the 16kb page size has a major
> > impact on SLUB performance. On IA64 slub will use only 1/4th the locking
> > overhead as on 4kb platforms.
> It'll be interesting to see the kernbench tests then with debugging
> disabled.
You can get a simil
On Thu, 8 Mar 2007, Mel Gorman wrote:
> Brought up 4 CPUs
> Node 0 CPUs: 0-3
> mm/memory.c:111: bad pud c50e4480.
Lower bits must be clear, right? Looks like the pud was released
and then reused for a 64 byte cache or so. This is likely a freelist
pointer that slub put there after allocation.
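A toy model of why freed-and-reused memory ends up holding pointer values (assumed names, not actual SLUB code): SLUB-style allocators thread the freelist through the free objects themselves, so the first word of every free object is the address of the next free object. A stale pud entry pointing at a page that was recycled into a 64 byte cache would therefore read back as a pointer-looking value.

```c
#include <assert.h>

/* Illustration: a slab of small objects with the freelist threaded
 * through the objects.  Each free object's first word stores the
 * address of the next free object. */

#define OBJ_SIZE 64
#define NR_OBJS  4

static unsigned char slab[OBJ_SIZE * NR_OBJS];
static void *freelist;

static void slab_init(void)
{
    /* push objects onto the freelist, last one first */
    for (int i = NR_OBJS - 1; i >= 0; i--) {
        void *obj = slab + i * OBJ_SIZE;
        *(void **)obj = freelist;   /* write next-pointer into the object */
        freelist = obj;
    }
}

static void *slab_alloc(void)
{
    void *obj = freelist;
    if (obj)
        freelist = *(void **)obj;   /* follow the embedded pointer */
    return obj;
}
```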
On (08/03/07 08:48), Christoph Lameter didst pronounce:
> On Thu, 8 Mar 2007, Mel Gorman wrote:
>
> > On x86_64, it completed successfully and looked reliable. There was a 5%
> > performance loss on kernbench and aim9 figures were way down. However, with
> > slub_debug enabled, I would expect that
On Thu, 8 Mar 2007, Mel Gorman wrote:
> On x86_64, it completed successfully and looked reliable. There was a 5%
> performance loss on kernbench and aim9 figures were way down. However, with
> slub_debug enabled, I would expect that so it's not a fair comparison
> performance wise. I'll rerun the
On Tue, 6 Mar 2007, Christoph Lameter wrote:
[PATCH] SLUB The unqueued slab allocator v4
Hi Christoph,
I shoved these patches through a few tests on x86, x86_64, ia64 and ppc64
last night to see how they got on. I enabled slub_debug to catch any
surprises that may be creeping about.
The
[PATCH] SLUB The unqueued slab allocator v4
V3->V4
- Rename /proc/slabinfo to /proc/slubinfo. We have a different format after
all.
- More bug fixes and stabilization of diagnostic functions. This seems
to be finally something that works wherever we test it.
- Serialize kmem_cache_create
From: Christoph Lameter <[EMAIL PROTECTED]>
Date: Wed, 28 Feb 2007 17:06:19 -0800 (PST)
> On Wed, 28 Feb 2007, David Miller wrote:
>
> > Arguably SLAB_HWCACHE_ALIGN and SLAB_MUST_HWCACHE_ALIGN should
> > not be set here, but SLUBs change in semantics in this area
> > could cause similar grief in
On Wed, 28 Feb 2007, David Miller wrote:
> Maybe if you managed your individual changes in GIT or similar
> this could be debugged very quickly. :-)
I think once things calm down and the changes become smaller it's going
to be easier. Likely the case after V4.
> Meanwhile I noticed that you
From: David Miller <[EMAIL PROTECTED]>
Date: Wed, 28 Feb 2007 14:00:22 -0800 (PST)
> V3 doesn't boot successfully on sparc64
False alarm!
This crash was actually due to an unrelated problem in the parport_pc
driver on my machine.
Slub v3 boots up and seems to work fine so far on sparc64.
From: Christoph Lameter <[EMAIL PROTECTED]>
Date: Wed, 28 Feb 2007 11:20:44 -0800 (PST)
> V2->V3
> - Debugging and diagnostic support. This is runtime enabled and not compile
> time enabled. Runtime debugging can be controlled via kernel boot options
> on an individual slab cache basis or globally.
On Sat, 24 February 2007 16:14:48 -0800, Christoph Lameter wrote:
>
> It eliminates 50% of the slab caches. Thus it reduces the management
> overhead by half.
How much management overhead is there left with SLUB? Is it just the
one per-node slab? Is there runtime overhead as well?
In a slight
From: Christoph Lameter <[EMAIL PROTECTED]>
Date: Sat, 24 Feb 2007 09:32:49 -0800 (PST)
> On Fri, 23 Feb 2007, David Miller wrote:
>
> > I also agree with Andi in that merging could mess up how object type
> > local lifetimes help reduce fragmentation in object pools.
>
> If that is a problem for particular object pools then we may be able to
> except those from the merging.
On Sat, 24 Feb 2007, Jörn Engel wrote:
> How much of a gain is the merging anyway? Once you start having
> explicit whitelists or blacklists of pools that can be merged, one can
> start to wonder if the result is worth the effort.
It eliminates 50% of the slab caches. Thus it reduces the management
overhead by half.
On Sat, 24 February 2007 09:32:49 -0800, Christoph Lameter wrote:
>
> If that is a problem for particular object pools then we may be able to
> except those from the merging.
How much of a gain is the merging anyway? Once you start having
explicit whitelists or blacklists of pools that can be merged, one can
start to wonder if the result is worth the effort.
On Fri, 23 Feb 2007, David Miller wrote:
> > The general caches already merge lots of users depending on their sizes.
> > So we already have the situation and we have tools to deal with it.
>
> But this doesn't happen for things like biovecs, and that will
> make debugging painful.
>
> If a cra
From: Christoph Lameter <[EMAIL PROTECTED]>
Date: Fri, 23 Feb 2007 21:47:36 -0800 (PST)
> On Sat, 24 Feb 2007, KAMEZAWA Hiroyuki wrote:
>
> > From a viewpoint of a crash dump user, this merging will make crash dump
> > investigation very very very difficult.
>
> The general caches already merge lots of users depending on their sizes.
On Sat, 24 Feb 2007, KAMEZAWA Hiroyuki wrote:
> From a viewpoint of a crash dump user, this merging will make crash dump
> investigation very very very difficult.
The general caches already merge lots of users depending on their sizes.
So we already have the situation and we have tools to deal with it.
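A minimal sketch of what such a merge check might look like (field names and flag semantics are assumptions, not SLUB's actual code): caches whose object size, alignment and flags match can share one underlying cache, while a special flag, such as RCU-freeing in the dst case mentioned elsewhere in the thread, keeps a cache separate.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical merge test: two caches are candidates for sharing one
 * underlying slab cache when their parameters match exactly. */

struct kmem_cache {
    const char   *name;
    size_t        size;
    size_t        align;
    unsigned long flags;
};

static int caches_mergeable(const struct kmem_cache *a,
                            const struct kmem_cache *b)
{
    return a->size  == b->size  &&
           a->align == b->align &&
           a->flags == b->flags;
}
```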
On Thu, 22 Feb 2007 10:42:23 -0800 (PST)
Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > > G. Slab merging
> > >
> > >We often have slab caches with similar parameters. SLUB detects those
> > >on bootup and merges them into the corresponding general caches. This
> > >leads to more ef
On Fri, 23 Feb 2007, Andi Kleen wrote:
> If you don't cache constructed but free objects then there is no cache
> advantage of constructors/destructors and they would be useless.
SLUB caches those objects as long as they are part of a partially
allocated slab. If all objects in the slab are free
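The lifecycle described above can be sketched as follows (a hypothetical model, not SLUB's code): the constructor runs once when the slab is populated, and freeing an object back into a partial slab leaves it constructed, so the next allocation can skip re-initialisation.

```c
#include <assert.h>
#include <stddef.h>

/* One slab of NR_OBJS objects with a constructor that runs only when
 * the slab is populated.  Freed objects stay constructed. */

#define NR_OBJS 4

struct obj { int initialised; };

static struct obj objects[NR_OBJS];
static int free_stack[NR_OBJS];
static int top;

static void ctor(struct obj *o) { o->initialised = 1; }

static void slab_populate(void)
{
    for (int i = 0; i < NR_OBJS; i++) {
        ctor(&objects[i]);              /* constructor runs once, here */
        free_stack[top++] = i;
    }
}

static struct obj *obj_alloc(void)
{
    return top ? &objects[free_stack[--top]] : NULL;
}

static void obj_free(struct obj *o)
{
    /* no destructor: the object stays constructed in the partial slab */
    free_stack[top++] = (int)(o - objects);
}
```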
On Thu, Feb 22, 2007 at 10:42:23AM -0800, Christoph Lameter wrote:
> On Thu, 22 Feb 2007, Andi Kleen wrote:
>
> > >SLUB does not need a cache reaper for UP systems.
> >
> > This means constructors/destructors are becoming worthless?
> > Can you describe your rationale why you think they don't make
> > sense on UP?
On Thu, 22 Feb 2007, Andi Kleen wrote:
> >SLUB does not need a cache reaper for UP systems.
>
> This means constructors/destructors are becoming worthless?
> Can you describe your rationale why you think they don't make
> sense on UP?
Cache reaping has nothing to do with constructors and destructors.
Christoph Lameter <[EMAIL PROTECTED]> writes:
> This is a new slab allocator which was motivated by the complexity of the
> existing code in mm/slab.c. It attempts to address a variety of concerns
> with the existing implementation.
Thanks for doing that work. It certainly was long overdue.
> D. SLAB has a complex cache reaper
>
>SLUB does not need a cache reaper for UP systems.
On Thu, 22 Feb 2007, David Miller wrote:
> All of that logic needs to be protected by CONFIG_ZONE_DMA too.
Right. Will fix that in the next release.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http
On Thu, 22 Feb 2007, Peter Zijlstra wrote:
> On Wed, 2007-02-21 at 23:00 -0800, Christoph Lameter wrote:
>
> > +/*
> > + * Lock order:
> > + * 1. slab_lock(page)
> > + * 2. slab->list_lock
> > + *
>
> That seems to contradict this:
This is a trylock. If it fails then we can compensate by al
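The trylock compensation pattern can be illustrated with pthreads (an assumed sketch, not the kernel's spinlock code): the documented order is slab_lock then list_lock, so a path that already holds list_lock may only try-acquire slab_lock and must back off on failure, for example by skipping the page, rather than block and risk deadlock.

```c
#include <assert.h>
#include <pthread.h>

/* Documented order: slab_lock, then list_lock.  A path that already
 * holds list_lock therefore uses trylock on slab_lock and compensates
 * on failure instead of violating the order. */

static pthread_mutex_t slab_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* returns 1 if the page was removed, 0 if we had to skip it */
static int remove_partial(void)
{
    int removed = 0;

    pthread_mutex_lock(&list_lock);
    if (pthread_mutex_trylock(&slab_lock) == 0) {
        /* both locks held: safe to unlink the page */
        removed = 1;
        pthread_mutex_unlock(&slab_lock);
    }
    /* on trylock failure we simply give up instead of deadlocking */
    pthread_mutex_unlock(&list_lock);
    return removed;
}
```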
On Thu, 22 Feb 2007, Pekka Enberg wrote:
> On 2/22/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > This is a new slab allocator which was motivated by the complexity of the
> > existing code in mm/slab.c. It attempts to address a variety of concerns
> > with the existing implementation.
Hi Christoph,
On 2/22/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
This is a new slab allocator which was motivated by the complexity of the
existing code in mm/slab.c. It attempts to address a variety of concerns
with the existing implementation.
So do you want to add a new allocator or r
From: Christoph Lameter <[EMAIL PROTECTED]>
Date: Wed, 21 Feb 2007 23:00:30 -0800 (PST)
> +#ifdef CONFIG_ZONE_DMA
> +static struct kmem_cache *kmalloc_caches_dma[KMALLOC_NR_CACHES];
> +#endif
Therefore.
> +static struct kmem_cache *get_slab(size_t size, gfp_t flags)
> +{
...
> + s = kmalloc
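The quoted fragment suggests get_slab() indexes a per-size array of kmalloc caches and switches to the DMA array when the caller passes a DMA flag. A hedged sketch of that selection logic follows; the size classes, the index function, and the flag value are simplified assumptions, not the patch's actual definitions.

```c
#include <assert.h>
#include <stddef.h>

#define KMALLOC_NR_CACHES 12
#define GFP_DMA_FLAG 0x01u          /* stand-in for __GFP_DMA */

struct kmem_cache { size_t size; };

static struct kmem_cache kmalloc_caches[KMALLOC_NR_CACHES];
static struct kmem_cache kmalloc_caches_dma[KMALLOC_NR_CACHES];

/* map a request size to a power-of-two size class, smallest 8 bytes */
static int kmalloc_index(size_t size)
{
    int i = 0;
    size_t s = 8;
    while (s < size) { s <<= 1; i++; }
    return i;
}

static struct kmem_cache *get_slab(size_t size, unsigned flags)
{
    int idx = kmalloc_index(size);

    if (flags & GFP_DMA_FLAG)       /* DMA request: use the DMA array */
        return &kmalloc_caches_dma[idx];
    return &kmalloc_caches[idx];
}
```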
On Wed, 2007-02-21 at 23:00 -0800, Christoph Lameter wrote:
> +/*
> + * Lock order:
> + * 1. slab_lock(page)
> + * 2. slab->list_lock
> + *
That seems to contradict this:
> +/*
> + * Lock page and remove it from the partial list
> + *
> + * Must hold list_lock
> + */
> +static __always_inline
This is a new slab allocator which was motivated by the complexity of the
existing code in mm/slab.c. It attempts to address a variety of concerns
with the existing implementation.
A. Management of object queues
A particular concern was the complex management of the numerous object
queues