Acked-by: Christoph Lameter <[EMAIL PROTECTED]>
Slub can use the non-atomic version to unlock because other flags will not
get modified with the lock held.
Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
---
mm/slub.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index:
On Friday 19 October 2007 12:01, Christoph Lameter wrote:
> On Fri, 19 Oct 2007, Nick Piggin wrote:
> > > Yes that is what I attempted to do with the write barrier. To my
> > > knowledge there are no reads that could bleed out and I wanted to avoid
> > > a full fence instruction there.
> >
> > Oh, OK. Bit risky ;) You might be right, but anyway I think it
> > should be just
On Friday 19 October 2007 11:21, Christoph Lameter wrote:
> On Fri, 19 Oct 2007, Nick Piggin wrote:
> > Ah, thanks, but can we just use my earlier patch that does the
> > proper __bit_spin_unlock which is provided by
> > bit_spin_lock-use-lock-bitops.patch
>
> Ok.
>
> > This primitive should have a better chance at being correct, and
> > also potentially be more optimised now
> > that the base bit lock patches have been sent upstream.
> >
> > mm/slub.c | 15 ++-
> > 1 file changed, 14 insertions(+), 1 deletion(-)
> >
> > diff -puN mm/slub.c~slub-avoid-atomic-operation-for-slab_unlock mm/slub.c
> > --- a/mm/slub.c~slub-avoid-atomic-operation-for-slab_unlock
> > +++ a/mm/slub.c
> > @@ -1181,9 +1181,22 @@ static