On Friday 26 October 2007 13:35, Benjamin Herrenschmidt wrote:

[acks]

Thanks for those...

> > Index: linux-2.6/include/asm-powerpc/mmu_context.h
> > ===================================================================
> > --- linux-2.6.orig/include/asm-powerpc/mmu_context.h
> > +++ linux-2.6/include/asm-powerpc/mmu_context.h
> > @@ -129,7 +129,7 @@ static inline void get_mmu_context(struc
> >                 steal_context();
> >  #endif
> >         ctx = next_mmu_context;
> > -       while (test_and_set_bit(ctx, context_map)) {
> > +       while (test_and_set_bit_lock(ctx, context_map)) {
> >                 ctx = find_next_zero_bit(context_map, LAST_CONTEXT+1, ctx);
> >                 if (ctx > LAST_CONTEXT)
> >                         ctx = 0;
> > @@ -158,7 +158,7 @@ static inline void destroy_context(struc
> >  {
> >         preempt_disable();
> >         if (mm->context.id != NO_CONTEXT) {
> > -               clear_bit(mm->context.id, context_map);
> > +               clear_bit_unlock(mm->context.id, context_map);
> >                 mm->context.id = NO_CONTEXT;
> >  #ifdef FEW_CONTEXTS
> >                 atomic_inc(&nr_free_contexts);
>
> I don't think the previous code was wrong... it's not a locked section
> and we don't care about ordering previous stores. It's an allocation, it
> should be fine. In general, bitmap allocators should be all right.

Well, if it is just allocating an arbitrary _number_ out of a bitmap
and nothing else (e.g. the pid allocator), then you don't need the
barriers.
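
For what it's worth, a minimal sketch of what I mean (made-up names,
not the actual powerpc code): the bit itself is the entire resource
being handed out, so the plain atomic bitops are enough and
acquire/release ordering buys you nothing.

#include <linux/bitmap.h>
#include <linux/bitops.h>

#define MAX_IDS	1024

static DECLARE_BITMAP(id_map, MAX_IDS);

/* Hand out an unused number; the bit is the whole resource. */
static int alloc_id(void)
{
	int id;

	do {
		id = find_first_zero_bit(id_map, MAX_IDS);
		if (id >= MAX_IDS)
			return -1;
	} while (test_and_set_bit(id, id_map));	/* atomic, no barrier needed */

	return id;
}

/* Return the number; nothing else to publish, so plain clear_bit(). */
static void free_id(int id)
{
	clear_bit(id, id_map);
}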


> Ignore the FEW_CONTEXTS stuff for now :-) At this point, it's UP only
> and will be replaced sooner or later.

OK. Then I agree, provided you're doing the correct synchronisation
or flushing etc. when destroying a context (which presumably you are).
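
(Just for contrast, and again with made-up names rather than the
powerpc code: the case where the _lock/_unlock bitops would actually
matter is when the bit guards per-slot data and nothing else provides
the ordering between freeing and reallocating a slot.)

#include <linux/bitops.h>
#include <linux/errno.h>

#define NR_SLOTS 32

struct slot {
	void *data;
};

static struct slot slots[NR_SLOTS];
static unsigned long slot_map[BITS_TO_LONGS(NR_SLOTS)];

/* Claim a slot; acquire semantics make its data safe to touch after this. */
static int grab_slot(int nr)
{
	if (test_and_set_bit_lock(nr, slot_map))
		return -EBUSY;
	/* slots[nr] may now be inspected/modified */
	return 0;
}

/* Release a slot; release semantics order the store before the bit clears. */
static void release_slot(int nr)
{
	slots[nr].data = NULL;		/* must be visible before the slot is reusable */
	clear_bit_unlock(nr, slot_map);
}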

I'll drop those bits then.

Thanks,
Nick