Re: [patch 09/10] SLUB: Do our own locking via slab_lock and slab_unlock.

2007-10-30 Thread Nick Piggin
On Wednesday 31 October 2007 05:32, Christoph Lameter wrote:
> On Tue, 30 Oct 2007, Nick Piggin wrote:
> > Is this actually a speedup on any architecture to roll your own locking
> > rather than using bit spinlock?
>
> It avoids one load from memory when allocating and the release is simply
> writing the page->flags back. Fewer instructions.

OK, but it probably isn't a measurable speedup, even on microbenchmarks,
right? And many architectures need heavier barriers around cmpxchg
than around a test_and_set_bit_lock, so it may even be slower
on some.
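
(For illustration, the two acquire paths being compared; an editor's
sketch, not from the patch, with made-up function names and the patch's
LOCKED mask:)

	/* Existing scheme: acquire through the bit spinlock. */
	static __always_inline void slab_lock_bitlock(struct page *page)
	{
		bit_spin_lock(PG_locked, &page->flags);	/* acquire only */
	}

	/* Proposed scheme: acquire via cmpxchg, returning the old flags. */
	static __always_inline unsigned long slab_lock_cmpxchg(struct page *page)
	{
		unsigned long state;

		do {
			state = page->flags & ~LOCKED;
		} while (cmpxchg(&page->flags, state, state | LOCKED) != state);
		return state;	/* cmpxchg implies a full barrier on many arches */
	}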


> > I am not exactly convinced that smp_wmb() is a good idea to have in your
> > unlock, rather than the normally required smp_mb() that every other open
> > coded lock in the kernel is using today. If you comment every code path
> > where a load leaking out of the critical section would not be a problem,
> > then OK it may be correct, but I still don't think it is worth the
> > maintenance overhead.
>
> I thought you agreed that release semantics only require a write barrier?

Not in general.


> The issue here is that other processors must see the updates before the
> update to page->flags.
>
> A load leaking out of a critical section would require that the result of
> the load is not used to update other information before the slab_unlock
> and that the source of the load is not overwritten in the critical
> section. That does not happen in slub.

That may be the case, but I don't think there is enough performance
justification to add a hack like this. ia64, for example, is going to
do an mf for smp_wmb, so I doubt it is a clear win.


Re: [patch 09/10] SLUB: Do our own locking via slab_lock and slab_unlock.

2007-10-30 Thread Christoph Lameter
On Tue, 30 Oct 2007, Nick Piggin wrote:

> Is this actually a speedup on any architecture to roll your own locking
> rather than using bit spinlock?

It avoids one load from memory when allocating and the release is simply 
writing the page->flags back. Fewer instructions.

> I am not exactly convinced that smp_wmb() is a good idea to have in your
> unlock, rather than the normally required smp_mb() that every other open
> coded lock in the kernel is using today. If you comment every code path
> where a load leaking out of the critical section would not be a problem,
> then OK it may be correct, but I still don't think it is worth the
> maintenance overhead.

I thought you agreed that release semantics only require a write barrier? 
The issue here is that other processors must see the updates before the
update to page->flags.

A load leaking out of a critical section would require that the result of 
the load is not used to update other information before the slab_unlock 
and that the source of the load is not overwritten in the critical 
section. That does not happen in slub.
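
(For reference, a sketch of the "leaking load" in question, added by the
editor: smp_wmb() orders only stores, so a load issued inside the
critical section may complete after the store that releases the lock.)

	state = slab_lock(page);	/* acquire the slab lock */
	object = page->freelist;	/* load inside the critical section */
	/* ... */
	smp_wmb();			/* orders the stores above, ... */
	page->flags = state;		/* ... but the load may still be
					 * reordered past this unlock store */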


Re: [patch 09/10] SLUB: Do our own locking via slab_lock and slab_unlock.

2007-10-29 Thread Nick Piggin
On Sunday 28 October 2007 14:32, Christoph Lameter wrote:
> Too many troubles with the bitlocks and we really do not need
> to do any bitops. Bitops do not effectively retrieve the old
> value which we want. So use a cmpxchg instead on the arches
> that allow it.
>
> Instead of modifying the page->flags with fake atomic operations
> we pass the page state as a parameter to functions. No function uses
> the slab state if the page lock is held. Functions must wait until the
> lock is cleared. Thus we can defer the update of page->flags until slab
> processing is complete. The "unlock" operation is then simply updating
> page->flags.
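
(In outline, the SMP variant of the scheme described above; an editor's
sketch assembled from the hunks quoted later in the thread:)

	static __always_inline unsigned long slab_lock(struct page *page)
	{
		unsigned long state;

		preempt_disable();
		do {
			state = page->flags & ~LOCKED;
		} while (cmpxchg(&page->flags, state, state | LOCKED) != state);
		return state;		/* caller receives the page state */
	}

	static __always_inline void slab_unlock(struct page *page,
						unsigned long state)
	{
		smp_wmb();		/* make prior stores visible first */
		page->flags = state;	/* unlock by writing the state back */
		preempt_enable();
	}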

Is this actually a speedup on any architecture to roll your own locking
rather than using bit spinlock?

I am not exactly convinced that smp_wmb() is a good idea to have in your
unlock, rather than the normally required smp_mb() that every other open
coded lock in the kernel is using today. If you comment every code path
where a load leaking out of the critical section would not be a problem,
then OK it may be correct, but I still don't think it is worth the
maintenance overhead.

>
> Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
>
>
> ---
>  mm/slub.c |  324 +++++++++++++++++++++++++++++++++++++++-----------------------
>  1 file changed, 187 insertions(+), 137 deletions(-)
>
> Index: linux-2.6/mm/slub.c
> ===================================================================
> --- linux-2.6.orig/mm/slub.c	2007-10-27 07:56:03.000000000 -0700
> +++ linux-2.6/mm/slub.c	2007-10-27 07:56:52.000000000 -0700
> @@ -101,6 +101,7 @@
>   */
>
>  #define FROZEN (1 << PG_active)
> +#define LOCKED (1 << PG_locked)
>
>  #ifdef CONFIG_SLUB_DEBUG
>  #define SLABDEBUG (1 << PG_error)
> @@ -108,36 +109,6 @@
>  #define SLABDEBUG 0
>  #endif
>
> -static inline int SlabFrozen(struct page *page)
> -{
> - return page->flags & FROZEN;
> -}
> -
> -static inline void SetSlabFrozen(struct page *page)
> -{
> - page->flags |= FROZEN;
> -}
> -
> -static inline void ClearSlabFrozen(struct page *page)
> -{
> - page->flags &= ~FROZEN;
> -}
> -
> -static inline int SlabDebug(struct page *page)
> -{
> - return page->flags & SLABDEBUG;
> -}
> -
> -static inline void SetSlabDebug(struct page *page)
> -{
> - page->flags |= SLABDEBUG;
> -}
> -
> -static inline void ClearSlabDebug(struct page *page)
> -{
> - page->flags &= ~SLABDEBUG;
> -}
> -
>  /*
>   * Issues still to be resolved:
>   *
> @@ -818,11 +789,12 @@ static void trace(struct kmem_cache *s,
>  /*
>   * Tracking of fully allocated slabs for debugging purposes.
>   */
> -static void add_full(struct kmem_cache *s, struct page *page)
> +static void add_full(struct kmem_cache *s, struct page *page,
> + unsigned long state)
>  {
>   struct kmem_cache_node *n = get_node(s, page_to_nid(page));
>
> - if (!SlabDebug(page) || !(s->flags & SLAB_STORE_USER))
> + if (!(state & SLABDEBUG) || !(s->flags & SLAB_STORE_USER))
>   return;
>   spin_lock(&n->list_lock);
>   list_add(&page->lru, &n->full);
> @@ -894,7 +866,7 @@ bad:
>  }
>
>  static noinline int free_debug_processing(struct kmem_cache *s,
> - struct page *page, void *object, void *addr)
> + struct page *page, void *object, void *addr, unsigned long state)
>  {
>   if (!check_slab(s, page))
>   goto fail;
> @@ -930,7 +902,7 @@ static noinline int free_debug_processin
>   }
>
>   /* Special debug activities for freeing objects */
> - if (!SlabFrozen(page) && page->freelist == page->end)
> + if (!(state & FROZEN) && page->freelist == page->end)
>   remove_full(s, page);
>   if (s->flags & SLAB_STORE_USER)
>   set_track(s, object, TRACK_FREE, addr);
> @@ -1047,7 +1019,8 @@ static inline int slab_pad_check(struct
>   { return 1; }
>  static inline int check_object(struct kmem_cache *s, struct page *page,
>   void *object, int active) { return 1; }
> -static inline void add_full(struct kmem_cache *s, struct page *page) {}
> +static inline void add_full(struct kmem_cache *s, struct page *page,
> + unsigned long state) {}
>  static inline unsigned long kmem_cache_flags(unsigned long objsize,
>   unsigned long flags, const char *name,
>   void (*ctor)(struct kmem_cache *, void *))
> @@ -1105,6 +1078,7 @@ static noinline struct page *new_slab(st
>   void *start;
>   void *last;
>   void *p;
> + unsigned long state;
>
>   BUG_ON(flags & GFP_SLAB_BUG_MASK);
>
> @@ -1117,11 +1091,12 @@ static noinline struct page *new_slab(st
>   if (n)
>   atomic_long_inc(&n->nr_slabs);
>   page->slab = s;
> - page->flags |= 1 << PG_slab;
> + state = 1 << PG_slab;
>   if (s->flags & (SLAB_DEBUG_FREE | SLAB_RED_ZONE | SLAB_POISON |
>   SLAB_STORE_USER | SLAB_TRACE))
> - SetSlabDebug(page);
> + state |= SLABDEBUG;
>
> + page->flags |= state;
>   start = page_address(page);
>   page->end = start + 1;
>
> @@ -1147,13 +1122,13 @@ static void __free_slab(struct kmem_cach
>  {
>   int 


Re: [patch 09/10] SLUB: Do our own locking via slab_lock and slab_unlock.

2007-10-28 Thread Pekka Enberg
Hi,

On 10/29/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > We don't need preempt_enable for CONFIG_SMP, right?
>
> preempt_enable is needed if preemption is enabled.

Disabled? But yeah, I see that slab_trylock() can leave preemption
disabled if cmpxchg fails. Was confused by the #ifdefs... :-)
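
(For reference, how preemption pairs up in the quoted code, sketched by
the editor: slab_lock() and a successful slab_trylock() leave preemption
disabled until slab_unlock(), while a failed trylock re-enables it
itself:)

	state = slab_lock(page);	/* preempt_disable() inside */
	/* ... critical section ... */
	slab_unlock(page, state);	/* preempt_enable() inside */

	if (!(state = slab_trylock(page)))
		return;			/* failure path has already
					 * called preempt_enable() */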


Re: [patch 09/10] SLUB: Do our own locking via slab_lock and slab_unlock.

2007-10-28 Thread Christoph Lameter
On Sun, 28 Oct 2007, Pekka J Enberg wrote:

> It would be easier to review the actual locking changes if you did the 
> SlabXXX removal in a separate patch.

There are no locking changes.

> > -static __always_inline void slab_lock(struct page *page)
> > +static __always_inline void slab_unlock(struct page *page,
> > +   unsigned long state)
> >  {
> > -   bit_spin_lock(PG_locked, &page->flags);
> > +   smp_wmb();
> 
> Memory barriers deserve a comment. I suppose this is protecting 
> page->flags but against what?

It's making sure that the changes to page flags are made visible after all 
other changes.
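
(Concretely, the intended ordering, sketched by the editor; field names
as used elsewhere in slub:)

	page->freelist = object;	/* stores inside the section ... */
	page->inuse--;
	smp_wmb();			/* ... are made visible before ... */
	page->flags = state;		/* ... the store releasing the lock */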

> 
> > +   page->flags = state;
> > +   preempt_enable();
> 
> We don't need preempt_enable for CONFIG_SMP, right?

preempt_enable is needed if preemption is enabled.

> 
> > +__release(bitlock);
> 
> This needs a less generic name and maybe a comment explaining that it's 
> not annotating a proper lock? Or maybe we can drop it completely?

Probably.

> > +static __always_inline unsigned long slab_trylock(struct page *page)
> > +{
> > +   unsigned long state;
> > +
> > +   preempt_disable();
> > +   state = page->flags & ~LOCKED;
> > +#ifdef CONFIG_SMP
> > +   if (cmpxchg(&page->flags, state, state | LOCKED) != state) {
> > +preempt_enable();
> > +return 0;
> > +   }
> > +#endif
> 
> This is hairy. Perhaps it would be cleaner to have totally separate 
> functions for SMP and UP instead?

I think that is reasonably clear. Having code duplicated is not good 
either. Well, we may have to clean this up a bit.


Re: [patch 09/10] SLUB: Do our own locking via slab_lock and slab_unlock.

2007-10-28 Thread Pekka J Enberg
On Sun, 28 Oct 2007, Pekka J Enberg wrote:
> > +__release(bitlock);
> 
> This needs a less generic name and maybe a comment explaining that it's 
> not annotating a proper lock? Or maybe we can drop it completely?

Ah, I see that <linux/bit_spinlock.h> does the same thing, so strike that.


Re: [patch 09/10] SLUB: Do our own locking via slab_lock and slab_unlock.

2007-10-28 Thread Pekka J Enberg
Hi Christoph,

On Sat, 27 Oct 2007, Christoph Lameter wrote:
> Too many troubles with the bitlocks and we really do not need
> to do any bitops. Bitops do not effectively retrieve the old
> value which we want. So use a cmpxchg instead on the arches
> that allow it.

> -static inline int SlabFrozen(struct page *page)
> -{
> - return page->flags & FROZEN;
> -}
> -
> -static inline void SetSlabFrozen(struct page *page)
> -{
> - page->flags |= FROZEN;
> -}

[snip]

It would be easier to review the actual locking changes if you did the 
SlabXXX removal in a separate patch.

> +#ifdef __HAVE_ARCH_CMPXCHG
>  /*
>   * Per slab locking using the pagelock
>   */
> -static __always_inline void slab_lock(struct page *page)
> +static __always_inline void slab_unlock(struct page *page,
> + unsigned long state)
>  {
> - bit_spin_lock(PG_locked, &page->flags);
> + smp_wmb();

Memory barriers deserve a comment. I suppose this is protecting 
page->flags but against what?

> + page->flags = state;
> + preempt_enable();

We don't need preempt_enable for CONFIG_SMP, right?

> +  __release(bitlock);

This needs a less generic name and maybe a comment explaining that it's 
not annotating a proper lock? Or maybe we can drop it completely?

> +static __always_inline unsigned long slab_trylock(struct page *page)
> +{
> + unsigned long state;
> +
> + preempt_disable();
> + state = page->flags & ~LOCKED;
> +#ifdef CONFIG_SMP
> + if (cmpxchg(&page->flags, state, state | LOCKED) != state) {
> +  preempt_enable();
> +  return 0;
> + }
> +#endif

This is hairy. Perhaps it would be cleaner to have totally separate 
functions for SMP and UP instead?

> -static __always_inline void slab_unlock(struct page *page)
> +static __always_inline unsigned long slab_lock(struct page *page)
>  {
> - bit_spin_unlock(PG_locked, &page->flags);
> + unsigned long state;
> +
> + preempt_disable();
> +#ifdef CONFIG_SMP
> + do {
> + state = page->flags & ~LOCKED;
> + } while (cmpxchg(&page->flags, state, state | LOCKED) != state);
> +#else
> + state = page->flags & ~LOCKED;
> +#endif

Same here.
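
(A sketch of the suggested split, rearranging the quoted code into
separate SMP and UP definitions instead of #ifdef blocks in one body:)

#ifdef CONFIG_SMP
static __always_inline unsigned long slab_lock(struct page *page)
{
	unsigned long state;

	preempt_disable();
	do {
		state = page->flags & ~LOCKED;
	} while (cmpxchg(&page->flags, state, state | LOCKED) != state);
	return state;
}
#else
static __always_inline unsigned long slab_lock(struct page *page)
{
	preempt_disable();
	return page->flags & ~LOCKED;	/* no cmpxchg needed on UP */
}
#endif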

Pekka