On 23/06/21(Wed) 23:03, Jonathan Matthew wrote:
> On Wed, Jun 23, 2021 at 09:37:10AM +0200, Martin Pieuchot wrote:
> > On 16/06/21(Wed) 11:26, Martin Pieuchot wrote:
> > > Diff below does two things:
> > > 
> > > - Use atomic operations for incrementing/decrementing references of
> > >   anonymous objects.  This allows us to manipulate them without holding
> > >   the KERNEL_LOCK().
> > > 
> > > - Rewrite the loop from uao_swap_off() to only keep a reference to the
> > >   next item in the list.  This is imported from NetBSD and is necessary
> > >   to introduce locking around uao_pagein().
> > > 
> > > ok?
> > 
> > Anyone?
> 
> uao_reference_locked() and uao_detach_locked() are prototyped in
> uvm_extern.h, so they should be removed here too.

Thanks, I'll do that.
 
> It doesn't look like uao_detach() is safe to call without the
> kernel lock; it calls uao_dropswap() for each page, which calls
> uao_set_swslot(), which includes a KERNEL_ASSERT_LOCKED().
> Should we keep the KERNEL_ASSERT_LOCKED() in uao_detach()?

I prefer to keep the KERNEL_ASSERT_LOCKED() where it is needed rather
than spreading it to all the callers.  My current plan is to replace
those asserts with assertions on the vmobjlock, so I don't want to add
new ones.
