Re: [patch 1/6] mmu_notifier: Core code

2008-02-18 Thread Roland Dreier
It seems that we've come up with two reasonable cases where it makes
sense to use these notifiers for InfiniBand/RDMA:

First, the ability to safely DMA to/from userspace memory with the
memory regions mlock()ed but the pages not pinned.  In this case the
notifiers here would seem to suit us well:

 > +void (*invalidate_range_begin)(struct mmu_notifier *mn,
 > +                               struct mm_struct *mm,
 > +                               unsigned long start, unsigned long end,
 > +                               int atomic);
 > +
 > +void (*invalidate_range_end)(struct mmu_notifier *mn,
 > +                             struct mm_struct *mm,
 > +                             unsigned long start, unsigned long end,
 > +                             int atomic);

If I understand correctly, the IB stack would have to get the hardware
driver to shoot down translation entries and suspend access to the
region when an invalidate_range_begin notifier is called, and wait for
the invalidate_range_end notifier to repopulate the adapter
translation tables.  This will probably work OK as long as the
interval between the invalidate_range_begin and invalidate_range_end
calls is not "too long."
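
To make that flow concrete, here is a rough sketch of what the driver
hooks might look like (struct my_region and the my_region_*() helpers
are made-up names for the hardware-specific parts; the signatures
follow the proposed API quoted above):

    struct my_region {
            struct mmu_notifier mn;
            /* ... adapter translation table state ... */
    };

    static void my_invalidate_range_begin(struct mmu_notifier *mn,
                                          struct mm_struct *mm,
                                          unsigned long start,
                                          unsigned long end, int atomic)
    {
            struct my_region *r = container_of(mn, struct my_region, mn);

            /* Shoot down the adapter translations for [start, end) and
             * suspend DMA to that range until ..._end() runs. */
            my_region_quiesce(r, start, end);
    }

    static void my_invalidate_range_end(struct mmu_notifier *mn,
                                        struct mm_struct *mm,
                                        unsigned long start,
                                        unsigned long end, int atomic)
    {
            struct my_region *r = container_of(mn, struct my_region, mn);

            /* Fault the pages back in and repopulate the adapter's
             * translation tables. */
            my_region_repopulate(r, start, end);
    }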

Also, using this effectively requires us to figure out how we want to
mlock() regions that are going to be used for RDMA.  We could require
userspace to do it, but it's not clear to me that we're safe in the
case where userspace decides not to... what happens if some pages get
swapped out after the invalidate_range_begin notifier?

The second case where some form of notifiers are useful is for
userspace to know when a memory registration is still valid, i.e. Pete
Wyckoff's work:

http://www.osc.edu/~pw/papers/wyckoff-memreg-ccgrid05.pdf
http://www.osc.edu/~pw/dreg/

However, these MMU notifiers seem orthogonal to that: the registration
cache is concerned with address spaces, not page mappings, and hence
the existing vma operations seem to be a better fit.

 - R.


Re: [patch 1/6] mmu_notifier: Core code

2008-02-17 Thread Robin Holt
On Sun, Feb 17, 2008 at 04:01:20AM +0100, Andrea Arcangeli wrote:
> On Sat, Feb 16, 2008 at 11:21:07AM -0800, Christoph Lameter wrote:
> > On Fri, 15 Feb 2008, Andrew Morton wrote:
> > 
> > > What is the status of getting infiniband to use this facility?
> > 
> > Well we are talking about this it seems.
> 
> It seems the IB folks think allowing RDMA over virtual memory is not
> interesting; their argument seems to be that RDMA is only interesting
> on RAM (and they seem not interested in allowing RDMA over a ram+swap
> backed _virtual_ memory allocation). They just have to decide if
> ram+swap allocation for RDMA is useful or not.

I don't think that is a completely fair characterization.  It would be
more fair to say that the changes required to their library/user api
would be too significant to allow an adaptation to any scheme which
allowed removal of physical memory below a virtual mapping.

I agree with the IB folks when they say it is impossible with their
current scheme.  The fact that any consumer of their endpoint identifier
can use any identifier without notifying the kernel prior to its use
certainly makes any implementation under any scheme impossible.

I guess we could possibly make things work for IB if we did some heavy
work.  Let's assume, instead of passing around the physical endpoint
identifiers, they passed around a handle.  In order for any IB endpoint
to communicate, it would need to request that the kernel translate a
handle into an endpoint identifier.  In order for the kernel to put a TLB
entry into the process's address space allowing the process access to
the _CARD_, it would need to ensure all the current endpoint identifiers
for this process were "active", meaning we have verified with the other
endpoint that all pages are faulted and TLB/PFN information is in the
owning card's TLB/PFN tables.  Once all of a process's endpoints are
"active" we would drop the PFN for the adapter into the page tables.
Any time pages are being revoked from under an active handle, we would
shoot down the IB adapter card TLB entries for all the remote users of
this handle and quiesce the card's state to ensure transfers are either
complete or terminated.  When there are no active transfers, we would
respond back to the owner and they could complete the source process
page table cleaning.  Any time all of the pages for a handle cannot be
mapped from virtual to physical, the remote process would be SIGBUS'd
instead of having its IB adapter TLB installed.

This is essentially how XPMEM does it except we have the benefit of
working on individual pages.
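
In sketch form, the scheme above amounts to something like this (every
name here is made up; it is just the shape of the state machine):

    enum handle_state { HANDLE_INACTIVE, HANDLE_ACTIVE };

    struct rdma_handle {
            unsigned long start, end;       /* VA range covered */
            enum handle_state state;
            struct list_head remote_users;  /* endpoints using this handle */
    };

    /* Kernel-mediated translation of a handle into an endpoint
     * identifier.  Succeeds only once all pages are faulted and the
     * owning card's TLB/PFN tables are populated, i.e. the handle has
     * gone "active". */
    int rdma_handle_translate(struct rdma_handle *h, u64 *endpoint_id);

    /* Revocation: shoot down the card TLB entries of every remote user,
     * quiesce the card so transfers are complete or terminated, then
     * let the owner finish its page table cleaning.  If the pages can
     * no longer all be mapped, the remote process gets SIGBUS'd instead
     * of having its adapter TLB installed. */
    void rdma_handle_revoke(struct rdma_handle *h);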

Again, I don't really know what I am talking about, but under the
assumption that MPI's IB use is contained to a library, I would hope the
changes could be contained under the MPI-to-IB library interface and
would not need any changes at the MPI-user library interface.

We do keep track of the virtual address ranges within a handle that
are being used.  I assume the IB folks will find that helpful as well.
Otherwise, I think they could make things operate this way.  XPMEM has
the advantage of not needing to have virtual-to-physical at all times,
but otherwise it is essentially the same.

Thanks,
Robin


Re: [patch 1/6] mmu_notifier: Core code

2008-02-16 Thread Doug Maxey

On Fri, 15 Feb 2008 19:37:19 PST, Andrew Morton wrote:
> Which other potential clients have been identified and how important is it
> to those?

The powerpc ehea utilizes its own mmu.  Not sure about the importance 
to the driver. (But will investigate :)

++doug


Re: [patch 1/6] mmu_notifier: Core code

2008-02-16 Thread Andrea Arcangeli
On Sat, Feb 16, 2008 at 11:21:07AM -0800, Christoph Lameter wrote:
> On Fri, 15 Feb 2008, Andrew Morton wrote:
> 
> > What is the status of getting infiniband to use this facility?
> 
> Well we are talking about this it seems.

It seems the IB folks think allowing RDMA over virtual memory is not
interesting; their argument seems to be that RDMA is only interesting
on RAM (and they seem not interested in allowing RDMA over a ram+swap
backed _virtual_ memory allocation). They just have to decide if
ram+swap allocation for RDMA is useful or not.

> > How important is this feature to KVM?
> 
> Andrea can answer this.

I think I already did in a separate email.

> > That sucks big time.  What do we need to do to get the callback
> > functions called in non-atomic context?

I sure agree, given that I also asked to drop the lock param and enforce
that invalidate_range_* always be called in non-atomic context.

> We would have to drop the inode_mmap_lock. Could be done with some minor 
> work.

The invalidate may be deferred until after releasing the lock; the lock
may not have to be dropped to clean up the API (and make xpmem's life
easier).

> That is one implementation (XPmem does that). The other is to simply stop 
> all references when any invalidate_range is in progress (KVM and GRU do 
> that).

KVM doesn't stop new references. It doesn't need to because it holds a
reference on the page (GRU doesn't). KVM can invalidate the spte and
flush the tlb only after the linux pte has been cleared and after the
page has been released by the VM (because the page doesn't go in the
freelist and it remains pinned for a little while, until the spte is
dropped too inside invalidate_range_end). GRU has to invalidate
_before_ the linux pte is cleared so it has to stop new references
from being established in the invalidate_range_start/end critical
section.
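
(To make "stop new references" concrete: for the GRU-style case it's
just the usual counter pattern, modulo the locking needed to close the
obvious check-then-establish race. A sketch with made-up gru_* names,
not real GRU code:)

    static atomic_t range_invalidates_running = ATOMIC_INIT(0);

    static void gru_invalidate_range_start(struct mmu_notifier *mn,
                                           struct mm_struct *mm,
                                           unsigned long start,
                                           unsigned long end, int atomic)
    {
            atomic_inc(&range_invalidates_running);
            gru_flush_external_tlb(mm, start, end); /* made-up helper */
    }

    static void gru_invalidate_range_end(struct mmu_notifier *mn,
                                         struct mm_struct *mm,
                                         unsigned long start,
                                         unsigned long end, int atomic)
    {
            atomic_dec(&range_invalidates_running);
    }

    /* The driver fault path refuses to instantiate a new external TLB
     * entry while any invalidate is running: */
    static int gru_may_establish_reference(void)
    {
            return atomic_read(&range_invalidates_running) == 0;
    }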

> Andrea put this in to check the reference status of a page. It functions 
> like the accessed bit.

In short, each pte can have some sptes associated with it. So whenever we
do a ptep_clear_flush protected by the PT lock, we also have to run
invalidate_page, which will internally invoke a sort-of
sptep_clear_flush protected by kvm->mmu_lock (the equivalent of
page_table_lock/PT-lock). sptes, just like ptes, map virtual addresses
to physical addresses, so you can read/write to RAM either through a
pte or through a spte.
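
In sketch form (not the actual kvm code; kvm_sptep_clear_flush() is a
made-up name for the spte side, and the notifier is assumed to be
embedded in struct kvm):

    static void kvm_invalidate_page(struct mmu_notifier *mn,
                                    struct mm_struct *mm,
                                    unsigned long address)
    {
            struct kvm *kvm = container_of(mn, struct kvm, mmu_notifier);

            spin_lock(&kvm->mmu_lock);              /* PT-lock equivalent */
            kvm_sptep_clear_flush(kvm, address);    /* zap spte, flush tlb */
            spin_unlock(&kvm->mmu_lock);
    }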

Just like it would be insane to have any requirement that
ptep_clear_flush has to run in not-atomic context (forcing a
conversion of the PT lock to a mutex), it's also weird to require the
invalidate_page/age_page callbacks to run in not-atomic context.

All troubles start with the xpmem requirement of having to schedule
in its equivalent of the sptep_clear_flush, because it's not a
gigahertz-in-cpu thing but a gigabit thing where the network stack is
involved with its own software linux driven skb memory allocations,
schedules waiting for network I/O, etc... Imagine ptes allocated on a
remote node; no surprise it brings a new set of problems (assuming it
can work reliably during oom given its memory requirements in the
try_to_unmap path, no page can ever be freed until the skbs have been
allocated and sent and allocated again to receive the ack).

Furthermore, xpmem doesn't associate any pte with a spte; it associates
a page_t with certain remote references, or it would be in trouble with
invalidate_page, which corresponds to a ptep_clear_flush on a virtual
address that exists thanks to the anon_vma/i_mmap lock being held (and
not thanks to the mmap_sem as in all invalidate_range calls).

Christoph's patch is a mix of two entirely separate features. KVM can
live with V7 just fine, but it's a lot more than what is needed by KVM.

I don't think that invalidate_page/age_page must be allowed to sleep
just because invalidate_range also can sleep. You just have to ask
yourself whether the VM locks shall remain spinlocks, for the VM's own
good (not for the mmu notifiers' good). It'd be bad to make the VM
underperform with mutexes protecting tiny critical sections just to
please some mmu notifier user. But if they're spinlocks, then clearly
invalidate_page/age_page based on virtual addresses can't sleep, or the
virtual address wouldn't make sense anymore by the time the spinlock is
released.

> > This function looks like it was tossed in at the last minute.  It's
> > mysterious, undocumented, poorly commented, poorly named.  A better name
> > would be one which has some correlation with the return value.
> > 
> > Because anyone who looks at some code which does
> > 
> > if (mmu_notifier_age_page(mm, address))
> > ...
> > 
> > has to go and reverse-engineer the implementation of
> > mmu_notifier_age_page() to work out under which circumstances the "..."
> > will be executed.  But this should be apparent just from reading the callee
> > implementation.
> > 
> > This function *really* does need some documentation.  What does it *mean*
> > when the ->age_page() from some of the notifiers returned "1" and the
> > ->age_page() from some other notifiers returned zero?  Dunno.

Re: [patch 1/6] mmu_notifier: Core code

2008-02-16 Thread Christoph Lameter
On Sat, 16 Feb 2008, Andrew Morton wrote:

> "looks good" maybe.  But it's in the details where I fear this will come
> unstuck.  The likelihood is that some callbacks really will want to be able to
> block in places where this interface doesn't permit that - either to wait
> for IO to complete or to wait for other threads to clear critical regions.

We can get the invalidate_range to always be called without spinlocks if 
we deal with the case of the inode_mmap_lock being held in truncate case.

If you always want to be able to sleep then we could drop the 
invalidate_page() that is called while pte locks are held and require 
the use of a device driver rmap?

> From that POV it doesn't look like a sufficiently general and useful
> design.  Looks like it was grafted onto the current VM implementation in a
> way which just about suits two particular clients if they try hard enough.

You missed KVM. We did the best we could while being as minimally
invasive as possible.

> Which is all perfectly understandable - it would be hard to rework core MM
> to be able to make this interface more general.  But I do think it's
> half-baked and there is a decent risk that future (or present) code which
> _could_ use something like this won't be able to use this one, and will
> continue to futz with mlock, page-pinning, etc.
> 
> Not that I know what the fix to that is..

You do not see a chance of this being okay if we adopt the two measures 
that I mentioned above?
 

Re: [patch 1/6] mmu_notifier: Core code

2008-02-16 Thread Christoph Lameter
On Fri, 15 Feb 2008, Andrew Morton wrote:

> What is the status of getting infiniband to use this facility?

Well we are talking about this it seems.
> 
> How important is this feature to KVM?

Andrea can answer this.

> To xpmem?

Without this feature we are stuck with page pinning by increasing 
refcounts, which leads to endless lru scanning and other misbehavior. 
Also, applications that use XPmem will not be able to swap or use 
things like remap.
 
> Which other potential clients have been identified and how important is it
> to those?

It is likely important to various DMA engines, framebuffer devices, 
etc. Seems to be a generally useful feature.


> > +The notifier chains provide two callback mechanisms. The
> > +first one is required for any device that establishes external mappings.
> > +The second (rmap) mechanism is required if a device needs to be
> > +able to sleep when invalidating references. Sleeping may be necessary
> > +if we are mapping across a network or to different Linux instances
> > +in the same address space.
> 
> I'd have thought that a major reason for sleeping would be to wait for IO
> to complete.  Worth mentioning here?

Right.

> Why is that "easy"?  I'd have thought that it would only be easy if the
> driver happened to be using those same locks for its own purposes. 
> Otherwise it is "awkward"?

It's relatively easy because it is tied directly to a process and can
use external tlb shootdown / external page table clearing directly. The
other method requires an rmap in the device driver where it can look up
the processes that are mapping the page.
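
Roughly, the driver rmap would have to provide something like this
(hypothetical structures, just to illustrate):

    /* One of these per external mapping of a page. */
    struct drv_mapping {
            struct list_head list;
            struct mm_struct *mm;
            unsigned long vaddr;            /* where the device maps it */
    };

    /* Look up all external mappings of a page so the invalidate_page()
     * callout can walk them and evict every reference. */
    struct list_head *drv_rmap_lookup(struct page *page);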
 
> > +The invalidation mechanism for a range (*invalidate_range_begin/end*) is
> > +called most of the time without any locks held. It is only called with
> > +locks held for file backed mappings that are truncated. A flag indicates
> > +in which mode we are. A driver can use that mechanism to f.e.
> > +delay the freeing of the pages during truncate until no locks are held.
> 
> That sucks big time.  What do we need to do to get the callback
> functions called in non-atomic context?

We would have to drop the inode_mmap_lock. Could be done with some minor 
work.

> > +Pages must be marked dirty if dirty bits are found to be set in
> > +the external ptes during unmap.
> 
> That sentence is too vague.  Define "marked dirty"?

Call set_page_dirty().
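
I.e. something like this in the unmap path, with external_pte_dirty()
standing in for however the device exposes its dirty bit:

    if (external_pte_dirty(xpte))
            set_page_dirty(page);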

> > +The *release* method is called when a Linux process exits. It is run before
> 
> We'd conventionally use a notation such as "->release()" here, rather than
> the asterisks.

Ok.

> 
> > +the pages and mappings of a process are torn down and gives the device
> > +driver a chance to zap all the external mappings in one go.
> 
> I assume what you mean here is that ->release() is called during exit()
> when the final reference to an mm is being dropped.

Right.

> > +An example for a code that can be used to build a notifier mechanism into
> > +a device driver can be found in the file
> > +Documentation/mmu_notifier/skeleton.c
> 
> Should that be in samples/?

Oh. We have that?

> > +The mmu_rmap_notifier adds another invalidate_page() callout that is called
> > +*before* the Linux rmaps are walked. At that point only the page lock is
> > +held. The invalidate_page() function must walk the driver rmaps and evict
> > +all the references to the page.
> 
> What happens if it cannot do so?

The page is not reclaimed if we were called from try_to_unmap(). From 
page_mkclean() we must always evict the page to switch off the write 
protect bit.

> > +There is no process information available before the rmaps are consulted.
> 
> Not sure what that sentence means.  I guess "available to the core VM"?

At that point we only have the page. We do not know which processes map 
the page. In order to find out we need to take a spinlock.


> > +The notifier mechanism can therefore not be attached to an mm_struct.
> > +Instead it is a global callback list. Having to perform a callback for
> > +each and every page that is reclaimed would be inefficient. Therefore we
> > +add an additional page flag: PageRmapExternal().
> 
> How many page flags are left?

30 or so. It's only available on 64-bit.

> Is this feature important enough to justify consumption of another one?
> 
> > Only pages that are marked with this bit can
> > +be exported and the rmap callbacks will only be performed for pages marked
> > +that way.
> 
> "exported": new term, unclear what it means.

Something external to the kernel references the page.

> > +The required additional Page flag is only available in 64 bit mode and
> > +therefore the mmu_rmap_notifier portion is not available on 32 bit
> > +platforms.
> 
> whoa.  Is that good?  You just made your feature unavailable on the great
> majority of Linux systems.

rmaps are usually used by complex drivers that are typically used in large 
systems.

> > + * Notifier functions for hardware and software that establishes external
> > + * references to pages of a Linux system. The notifier calls ensure that
> > + * external mappings are removed when the Linux VM removes memory ranges

Re: [patch 1/6] mmu_notifier: Core code

2008-02-16 Thread Andrew Morton
On Sat, 16 Feb 2008 11:41:35 +0100 Brice Goglin <[EMAIL PROTECTED]> wrote:

> Andrew Morton wrote:
> > What is the status of getting infiniband to use this facility?
> >
> > How important is this feature to KVM?
> >
> > To xpmem?
> >
> > Which other potential clients have been identified and how important is it
> > to those?
> >   
> 
> As I said when Andrea posted the first patch series, I used something
> very similar for non-RDMA-based HPC about 4 years ago. I haven't had
> time yet to look in depth and try the latest proposed API but my feeling
> is that it looks good.
> 

"looks good" maybe.  But it's in the details where I fear this will come
unstuck.  The likelihood is that some callbacks really will want to be able to
block in places where this interface doesn't permit that - either to wait
for IO to complete or to wait for other threads to clear critical regions.

From that POV it doesn't look like a sufficiently general and useful
design.  Looks like it was grafted onto the current VM implementation in a
way which just about suits two particular clients if they try hard enough.

Which is all perfectly understandable - it would be hard to rework core MM
to be able to make this interface more general.  But I do think it's
half-baked and there is a decent risk that future (or present) code which
_could_ use something like this won't be able to use this one, and will
continue to futz with mlock, page-pinning, etc.

Not that I know what the fix to that is..


Re: [patch 1/6] mmu_notifier: Core code

2008-02-16 Thread Brice Goglin
Andrew Morton wrote:
> What is the status of getting infiniband to use this facility?
>
> How important is this feature to KVM?
>
> To xpmem?
>
> Which other potential clients have been identified and how important is it
> to those?
>   

As I said when Andrea posted the first patch series, I used something
very similar for non-RDMA-based HPC about 4 years ago. I haven't had
time yet to look in depth and try the latest proposed API but my feeling
is that it looks good.

Brice


Re: [patch 1/6] mmu_notifier: Core code

2008-02-16 Thread Avi Kivity

Andrew Morton wrote:
> > Very.  kvm pins pages that are referenced by the guest;
>
> hm.  Why does it do that?

It was deemed best not to allow the guest to write to a page that has
been swapped out and assigned to an unrelated host process.

One way to view the kvm shadow page tables is as hardware dma
descriptors. kvm pins pages for the same reason that drivers pin pages
that are being dma'ed. It's also the reason why mmu notifiers are useful
for such a wide range of dma capable hardware.

> > a 64-bit guest will easily pin its entire memory with the kernel
> > map.  So this is critical for guest swapping to actually work.
>
> Curious.  If KVM can release guest pages at the request of this notifier so
> that they can be swapped out, why can't it release them by default, and
> allow swapping to proceed?

If kvm releases a page, it must also zap any shadow ptes pointing at the
page and flush the tlb. If you do that for all of memory you can't
reference any of it.

Releasing a page has costs, both at the time of the release and when the
guest eventually refers to the page again.

> > Other nice features like page migration are also enabled by this patch.
>
> We already have page migration.  Do you mean page-migration-when-using-kvm?

Yes, I'm obviously writing from a kvm-centric point of view. This is an
important feature, as the virtualization future seems to be NUMA hosts
(2- or 4- way, 4 cores per socket) running moderately sized guests. The
ability to load-balance guests among the NUMA nodes is important for
performance.

(btw, I'm also looking forward to memory defragmentation. large pages
are important for virtualization workloads and mmu notifiers are again
critical to getting it to work while running kvm).


--
Any sufficiently difficult bug is indistinguishable from a feature.


Re: [patch 1/6] mmu_notifier: Core code

2008-02-16 Thread Andrew Morton
On Sat, 16 Feb 2008 10:45:50 +0200 Avi Kivity <[EMAIL PROTECTED]> wrote:

> Andrew Morton wrote:
> > How important is this feature to KVM?
> >   
> 
> Very.  kvm pins pages that are referenced by the guest;

hm.  Why does it do that?

> a 64-bit guest
> will easily pin its entire memory with the kernel map.  So this is
> critical for guest swapping to actually work.

Curious.  If KVM can release guest pages at the request of this notifier so
that they can be swapped out, why can't it release them by default, and
allow swapping to proceed?

> 
> Other nice features like page migration are also enabled by this patch.
> 

We already have page migration.  Do you mean page-migration-when-using-kvm?


Re: [patch 1/6] mmu_notifier: Core code

2008-02-16 Thread Avi Kivity

Andrew Morton wrote:
> How important is this feature to KVM?

Very.  kvm pins pages that are referenced by the guest; a 64-bit guest 
will easily pin its entire memory with the kernel map.  So this is 
critical for guest swapping to actually work.


Other nice features like page migration are also enabled by this patch.

--
Any sufficiently difficult bug is indistinguishable from a feature.


Re: [patch 1/6] mmu_notifier: Core code

2008-02-15 Thread Andrew Morton
On Thu, 14 Feb 2008 22:49:00 -0800 Christoph Lameter <[EMAIL PROTECTED]> wrote:

> MMU notifiers are used for hardware and software that establishes
> external references to pages managed by the Linux kernel. These are
> page table entries or tlb entries or something else that allows
> hardware (such as DMA engines, scatter gather devices, networking,
> sharing of address spaces across operating system boundaries) and
> software (Virtualization solutions such as KVM, Xen etc) to
> access memory managed by the Linux kernel.
> 
> The MMU notifier will notify the device driver that subscribes to such
> a notifier that the VM is going to do something with the memory
> mapped by that device. The device must then drop references for the
> indicated memory area. The references may be reestablished later.
> 
> The notification scheme is much better than the current schemes of
> avoiding the danger of the VM removing pages that are externally
> mapped. We currently either mlock pages used for RDMA, XPmem etc
> in memory or increase the refcount to pin the pages. Increasing
> the refcount makes it impossible for the VM to reclaim the page.
> 
> Mlock causes problems with reclaim and may lead to OOM if too many
> pages are pinned in memory. It is also incorrect in terms of what POSIX
> specifies for what role mlock should play. Mlock does *not* pin pages in
> memory. Mlock just means do not allow the page to be moved to swap.
> 
> Linux can move pages in memory (for example through the page migration
> mechanism). These pages can be moved even if they are mlocked().
> The current approach of page pinning in use by RDMA etc is conceptually
> broken but there are currently no other easy solutions.
> 
> The alternate of increasing the page count to pin pages is also not
> that enticing since there will be continual attempts to reclaim
> or migrate these pages.
> 
> The solution here allows us to finally fix this issue by requiring
> such devices to subscribe to a notification chain that will allow
> them to work without pinning. The VM gains control of its memory again
> and the memory that has external references can be managed like regular
> memory.
> 
> This patch: Core portion
> 

What is the status of getting infiniband to use this facility?

How important is this feature to KVM?

To xpmem?

Which other potential clients have been identified and how important is it
to those?


> Index: linux-2.6/Documentation/mmu_notifier/README
> ===
> --- /dev/null 1970-01-01 00:00:00.0 +
> +++ linux-2.6/Documentation/mmu_notifier/README   2008-02-14 22:27:19.0 -0800
> @@ -0,0 +1,105 @@
> +Linux MMU Notifiers
> +---
> +
> +MMU notifiers are used for hardware and software that establishes
> +external references to pages managed by the Linux kernel. These are
> +page table entries or tlb entries or something else that allows
> +hardware (such as DMA engines, scatter gather devices, networking,
> +sharing of address spaces across operating system boundaries) and
> +software (Virtualization solutions such as KVM, Xen etc) to
> +access memory managed by the Linux kernel.
> +
> +The MMU notifier will notify the device driver that subscribes to such
> +a notifier that the VM is going to do something with the memory
> +mapped by that device. The device must then drop references for the
> +indicated memory area. The references may be reestablished later.
> +
> +The notification scheme is much better than the current schemes of
> +dealing with the danger of the VM removing pages.
> +We currently mlock pages used for RDMA, XPmem etc in memory or
> +increase the refcount of the pages.
> +
> +Both cause problems with reclaim and may lead to OOM if too many
> +pages are pinned in memory. Mlock is also incorrect in terms of the POSIX
> +specification of the role of mlock. Mlock does *not* pin pages in
> +memory. It just does not allow the page to be moved to swap.
> +The page refcount is used to track current users of a page struct.
> +Artificially inflating the refcount means that the VM cannot track
> +down all references to a page. It will not be able to reclaim or
> +move a page. However, the core code will try again and again because
> +the assumption is that an elevated refcount is a temporary situation.
> +
> +Linux can move pages in memory (for example through the page migration
> +mechanism). These pages can be moved even if they are mlocked().
> +So the current approach in use by RDMA etc etc is conceptually broken
> +but there are currently no other easy solutions.
> +
> +The solution here allows us to finally fix this issue by requiring
> +such devices to subscribe to a notification chain that will allow
> +them to work without pinning.
> +
> +The notifier chains provide two callback mechanisms. The
> +first one is required for any device that establishes external mappings.
> +The second (rmap) mechanism is required if a device needs to be
> +able to sleep when invalidating references. Sleeping may be necessary
> +if we are mapping across a network or to different Linux instances
> +in the same address space.


Re: [patch 1/6] mmu_notifier: Core code

2008-02-15 Thread Andrew Morton
On Thu, 14 Feb 2008 22:49:00 -0800 Christoph Lameter [EMAIL PROTECTED] wrote:

 MMU notifiers are used for hardware and software that establishes
 external references to pages managed by the Linux kernel. These are
 page table entriews or tlb entries or something else that allows
 hardware (such as DMA engines, scatter gather devices, networking,
 sharing of address spaces across operating system boundaries) and
 software (Virtualization solutions such as KVM, Xen etc) to
 access memory managed by the Linux kernel.
 
 The MMU notifier will notify the device driver that subscribes to such
 a notifier that the VM is going to do something with the memory
 mapped by that device. The device must then drop references for the
 indicated memory area. The references may be reestablished later.
 
 The notification scheme is much better than the current schemes of
 avoiding the danger of the VM removing pages that are externally
 mapped. We currently either mlock pages used for RDMA, XPmem etc
 in memory or increase the refcount to pin the pages. Increasing
 the refcount makes it impossible for the VM to reclaim the page.
 
 Mlock causes problems with reclaim and may lead to OOM if too many
 pages are pinned in memory. It is also incorrect in terms what the POSIX
 specificies for what role mlock should play. Mlock does *not* pin pages in
 memory. Mlock just means do not allow the page to be moved to swap.
 
 Linux can move pages in memory (for example through the page migration
 mechanism). These pages can be moved even if they are mlocked().
 The current approach of page pinning in use by RDMA etc is conceptually
 broken but there are currently no other easy solutions.
 
 The alternate of increasing the page count to pin pages is also not
 that enticing since there will be continual attempts to reclaim
 or migrate these pages.
 
 The solution here allows us to finally fix this issue by requiring
 such devices to subscribe to a notification chain that will allow
 them to work without pinning. The VM gains control of its memory again
 and the memory that has external references can be managed like regular
 memory.
 
 This patch: Core portion
 

What is the status of getting infiniband to use this facility?

How important is this feature to KVM?

To xpmem?

Which other potential clients have been identified and how important it it
to those?


 Index: linux-2.6/Documentation/mmu_notifier/README
 ===
 --- /dev/null 1970-01-01 00:00:00.0 +
 +++ linux-2.6/Documentation/mmu_notifier/README   2008-02-14 
 22:27:19.0 -0800
 @@ -0,0 +1,105 @@
 +Linux MMU Notifiers
 +---
 +
 +MMU notifiers are used for hardware and software that establishes
 +external references to pages managed by the Linux kernel. These are
 +page table entriews or tlb entries or something else that allows
 +hardware (such as DMA engines, scatter gather devices, networking,
 +sharing of address spaces across operating system boundaries) and
 +software (Virtualization solutions such as KVM, Xen etc) to
 +access memory managed by the Linux kernel.
 +
 +The MMU notifier will notify the device driver that subscribes to such
 +a notifier that the VM is going to do something with the memory
 +mapped by that device. The device must then drop references for the
 +indicated memory area. The references may be reestablished later.
 +
 +The notification scheme is much better than the current schemes of
 +dealing with the danger of the VM removing pages.
 +We currently mlock pages used for RDMA, XPmem etc in memory or
 +increase the refcount of the pages.
 +
 +Both cause problems with reclaim and may lead to OOM if too many
 +pages are pinned in memory. Mlock is also incorrect in terms of the POSIX
 +specification of the role of mlock. Mlock does *not* pin pages in
 +memory. It just does not allow the page to be moved to swap.
 +The page refcount is used to track current users of a page struct.
 +Artificially inflating the refcount means that the VM cannot track
 +down all references to a page. It will not be able to reclaim or
 +move a page. However, the core code will try again and again because
 +the assumption is that an elevated refcount is a temporary situation.
 +
 +Linux can move pages in memory (for example through the page migration
 +mechanism). These pages can be moved even if they are mlocked().
 +So the current approach in use by RDMA etc etc is conceptually broken
 +but there are currently no other easy solutions.
 +
 +The solution here allows us to finally fix this issue by requiring
 +such devices to subscribe to a notification chain that will allow
 +them to work without pinning.
 +
 +The notifier chains provide two callback mechanisms. The
 +first one is required for any device that establishes external mappings.
 +The second (rmap) mechanism is required if a device needs to be
 +able to sleep when invalidating references. Sleeping may be necessary
 +if we are mapping across a network or to different Linux instances
 +in the same address space.
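
For concreteness, a minimal sketch of what a subscriber to this chain could
look like, using the begin/end range callbacks of this patch version (the
signatures changed between versions, and all my_* names are illustrative):

	static void my_release(struct mmu_notifier *mn, struct mm_struct *mm)
	{
		/* the address space is going away: drop every external reference */
	}

	static void my_inv_begin(struct mmu_notifier *mn, struct mm_struct *mm,
				 unsigned long start, unsigned long end, int atomic)
	{
		/* suspend external access to [start, end) and drop references */
	}

	static void my_inv_end(struct mmu_notifier *mn, struct mm_struct *mm,
			       unsigned long start, unsigned long end, int atomic)
	{
		/* external mappings may be re-established from here on */
	}

	static const struct mmu_notifier_ops my_ops = {
		.release                = my_release,
		.invalidate_range_begin = my_inv_begin,
		.invalidate_range_end   = my_inv_end,
	};

	static struct mmu_notifier my_mn = { .ops = &my_ops };

	void my_subscribe(struct mm_struct *mm)
	{
		mmu_notifier_register(&my_mn, mm);	/* per-mm registration */
	}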

[patch 1/6] mmu_notifier: Core code

2008-02-14 Thread Christoph Lameter
MMU notifiers are used for hardware and software that establishes
external references to pages managed by the Linux kernel. These are
page table entries or tlb entries or something else that allows
hardware (such as DMA engines, scatter gather devices, networking,
sharing of address spaces across operating system boundaries) and
software (Virtualization solutions such as KVM, Xen etc) to
access memory managed by the Linux kernel.

The MMU notifier will notify the device driver that subscribes to such
a notifier that the VM is going to do something with the memory
mapped by that device. The device must then drop references for the
indicated memory area. The references may be reestablished later.

The notification scheme is much better than the current schemes of
avoiding the danger of the VM removing pages that are externally
mapped. We currently either mlock pages used for RDMA, XPmem etc
in memory or increase the refcount to pin the pages. Increasing
the refcount makes it impossible for the VM to reclaim the page.

Mlock causes problems with reclaim and may lead to OOM if too many
 pages are pinned in memory. It is also incorrect in terms of what POSIX
 specifies for the role mlock should play. Mlock does *not* pin pages in
memory. Mlock just means do not allow the page to be moved to swap.

Linux can move pages in memory (for example through the page migration
mechanism). These pages can be moved even if they are mlocked().
The current approach of page pinning in use by RDMA etc is conceptually
broken but there are currently no other easy solutions.

The alternate of increasing the page count to pin pages is also not
that enticing since there will be continual attempts to reclaim
or migrate these pages.

The solution here allows us to finally fix this issue by requiring
such devices to subscribe to a notification chain that will allow
them to work without pinning. The VM gains control of its memory again
and the memory that has external references can be managed like regular
memory.

This patch: Core portion

Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
Signed-off-by: Andrea Arcangeli <[EMAIL PROTECTED]>

---
 Documentation/mmu_notifier/README |  105 ++
 include/linux/mm_types.h  |7 +
 include/linux/mmu_notifier.h  |  180 ++
 kernel/fork.c |2 
 mm/Kconfig|4 
 mm/Makefile   |1 
 mm/mmap.c |2 
 mm/mmu_notifier.c |   76 
 8 files changed, 377 insertions(+)

Index: linux-2.6/Documentation/mmu_notifier/README
===
--- /dev/null   1970-01-01 00:00:00.0 +
+++ linux-2.6/Documentation/mmu_notifier/README 2008-02-14 22:27:19.0 
-0800
@@ -0,0 +1,105 @@
+Linux MMU Notifiers
+---
+
+MMU notifiers are used for hardware and software that establishes
+external references to pages managed by the Linux kernel. These are
 +page table entries or tlb entries or something else that allows
+hardware (such as DMA engines, scatter gather devices, networking,
+sharing of address spaces across operating system boundaries) and
+software (Virtualization solutions such as KVM, Xen etc) to
+access memory managed by the Linux kernel.
+
+The MMU notifier will notify the device driver that subscribes to such
+a notifier that the VM is going to do something with the memory
+mapped by that device. The device must then drop references for the
+indicated memory area. The references may be reestablished later.
+
+The notification scheme is much better than the current schemes of
+dealing with the danger of the VM removing pages.
+We currently mlock pages used for RDMA, XPmem etc in memory or
+increase the refcount of the pages.
+
+Both cause problems with reclaim and may lead to OOM if too many
+pages are pinned in memory. Mlock is also incorrect in terms of the POSIX
+specification of the role of mlock. Mlock does *not* pin pages in
+memory. It just does not allow the page to be moved to swap.
+The page refcount is used to track current users of a page struct.
+Artificially inflating the refcount means that the VM cannot track
+down all references to a page. It will not be able to reclaim or
+move a page. However, the core code will try again and again because
+the assumption is that an elevated refcount is a temporary situation.
+
+Linux can move pages in memory (for example through the page migration
+mechanism). These pages can be moved even if they are mlocked().
+So the current approach in use by RDMA etc etc is conceptually broken
+but there are currently no other easy solutions.
+
+The solution here allows us to finally fix this issue by requiring
+such devices to subscribe to a notification chain that will allow
+them to work without pinning.
+
+The notifier chains provide two callback mechanisms. The
 +first one is required for any device that establishes external mappings.
 +The second (rmap) mechanism is required if a device needs to be
 +able to sleep when invalidating references. Sleeping may be necessary
 +if we are mapping across a network or to different Linux instances
 +in the same address space.
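
To make the split concrete, a minimal sketch of the second, sleepable
mechanism follows, using the mmu_rmap_notifier_ops quoted later in this
thread; pages the driver exports must additionally be marked
PageExternalRmap for the callback to fire, and all my_* names are
illustrative:

	static void my_rmap_invalidate_page(struct mmu_rmap_notifier *mrn,
					    struct page *page)
	{
		/*
		 * Called outside of spinlocks, so it may sleep: walk the
		 * driver's own rmap for this page and tear down the remote
		 * ptes, e.g. by messaging the node that imported the page.
		 */
	}

	static const struct mmu_rmap_notifier_ops my_rmap_ops = {
		.invalidate_page = my_rmap_invalidate_page,
	};

	static struct mmu_rmap_notifier my_rmap_mn = { .ops = &my_rmap_ops };

	void my_rmap_subscribe(void)
	{
		/* system-wide registration, unlike the per-mm mmu_notifier */
		mmu_rmap_notifier_register(&my_rmap_mn);
	}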

[patch 1/6] mmu_notifier: Core code

2008-02-08 Thread Christoph Lameter
MMU notifiers are used for hardware and software that establishes
external references to pages managed by the Linux kernel. These are
page table entries or tlb entries or something else that allows
hardware (such as DMA engines, scatter gather devices, networking,
sharing of address spaces across operating system boundaries) and
software (Virtualization solutions such as KVM, Xen etc) to
access memory managed by the Linux kernel.

The MMU notifier will notify the device driver that subscribes to such
a notifier that the VM is going to do something with the memory
mapped by that device. The device must then drop references for the
indicated memory area. The references may be reestablished later.

The notification scheme is much better than the current scheme of
avoiding the danger of the VM removing pages that are externally
mapped. We currently mlock pages used for RDMA, XPmem etc in memory.

Mlock causes problems with reclaim and may lead to OOM if too many
pages are pinned in memory. It is also incorrect in terms of what POSIX
specifies for the role mlock should play. Mlock does *not* pin pages in
memory. Mlock just means do not allow the page to be moved to swap.

Linux can move pages in memory (for example through the page migration
mechanism). These pages can be moved even if they are mlocked().
The current approach of page pinning in use by RDMA etc is conceptually
broken but there are currently no other easy solutions.

The solution here allows us to finally fix this issue by requiring
such devices to subscribe to a notification chain that will allow
them to work without pinning.

This patch: Core portion

Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
Signed-off-by: Andrea Arcangeli <[EMAIL PROTECTED]>

---
 Documentation/mmu_notifier/README |   99 +
 include/linux/mm_types.h  |7 +
 include/linux/mmu_notifier.h  |  175 ++
 kernel/fork.c |2 
 mm/Kconfig|4 
 mm/Makefile   |1 
 mm/mmap.c |2 
 mm/mmu_notifier.c |   76 
 8 files changed, 366 insertions(+)

Index: linux-2.6/Documentation/mmu_notifier/README
===
--- /dev/null   1970-01-01 00:00:00.0 +
+++ linux-2.6/Documentation/mmu_notifier/README 2008-02-08 12:30:47.0 
-0800
@@ -0,0 +1,99 @@
+Linux MMU Notifiers
+---
+
+MMU notifiers are used for hardware and software that establishes
+external references to pages managed by the Linux kernel. These are
+page table entries or tlb entries or something else that allows
+hardware (such as DMA engines, scatter gather devices, networking,
+sharing of address spaces across operating system boundaries) and
+software (Virtualization solutions such as KVM, Xen etc) to
+access memory managed by the Linux kernel.
+
+The MMU notifier will notify the device driver that subscribes to such
+a notifier that the VM is going to do something with the memory
+mapped by that device. The device must then drop references for the
+indicated memory area. The references may be reestablished later.
+
+The notification scheme is much better than the current scheme of
+dealing with the danger of the VM removing pages.
+We currently mlock pages used for RDMA, XPmem etc in memory.
+
+Mlock causes problems with reclaim and may lead to OOM if too many
+pages are pinned in memory. It is also incorrect in terms of the POSIX
+specification of the role of mlock. Mlock does *not* pin pages in
+memory. It just does not allow the page to be moved to swap.
+
+Linux can move pages in memory (for example through the page migration
+mechanism). These pages can be moved even if they are mlocked().
+So the current approach in use by RDMA etc etc is conceptually broken
+but there are currently no other easy solutions.
+
+The solution here allows us to finally fix this issue by requiring
+such devices to subscribe to a notification chain that will allow
+them to work without pinning.
+
+The notifier chains provide two callback mechanisms. The
+first one is required for any device that establishes external mappings.
+The second (rmap) mechanism is required if a device needs to be
+able to sleep when invalidating references. Sleeping may be necessary
+if we are mapping across a network or to different Linux instances
+in the same address space.
+
+mmu_notifier mechanism (for KVM/GRU etc)
+
+Callbacks are registered with an mm_struct from a device driver using
+mmu_notifier_register(). When the VM removes pages (or changes
+permissions on pages etc) then callbacks are triggered.
+
+The invalidation function for a single page (*invalidate_page)
+is called with spinlocks (in particular the pte lock) held. This allows
+for an easy implementation of external ptes that are on the local system.
+
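
Concretely, that constraint means an implementation of invalidate_page
must be fully atomic; a sketch, where both my_* helpers are hypothetical
device operations that must not block:

	static void my_invalidate_page(struct mmu_notifier *mn,
				       struct mm_struct *mm,
				       unsigned long address)
	{
		/*
		 * Runs under the pte lock: no sleeping and no blocking
		 * allocations are allowed here.  Both helpers below are
		 * hypothetical and must be non-blocking.
		 */
		my_clear_external_pte(mm, address);
		my_flush_device_tlb(mm, address);
	}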

Re: [patch 1/6] mmu_notifier: Core code

2008-02-05 Thread Christoph Lameter
On Tue, 5 Feb 2008, Andy Whitcroft wrote:

> > +	if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
> > +		rcu_read_lock();
> > +		hlist_for_each_entry_safe_rcu(mn, n, t,
> > +					      &mm->mmu_notifier.head, hlist) {
> > +			if (mn->ops->release)
> > +				mn->ops->release(mn, mm);
> 
> Does this ->release actually release the 'mn' and its associated hlist?
> I see in this thread that this ordering is deemed "use after free" which
> implies so.

Right that was fixed in a later release and discussed extensively later. 
See V5.

> I am not sure it makes sense to add a _safe_rcu variant.  As I understand
> things an _safe variant is used where we are going to remove the current

It was dropped in V5.



Re: [patch 1/6] mmu_notifier: Core code

2008-02-05 Thread Peter Zijlstra

On Tue, 2008-02-05 at 18:05 +0000, Andy Whitcroft wrote:

> > +	if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
> > +		rcu_read_lock();
> > +		hlist_for_each_entry_safe_rcu(mn, n, t,
> > +					      &mm->mmu_notifier.head, hlist) {
> > +			if (mn->ops->release)
> > +				mn->ops->release(mn, mm);
> 
> Does this ->release actually release the 'mn' and its associated hlist?
> I see in this thread that this ordering is deemed "use after free" which
> implies so.
> 
> If it does that seems wrong.  This is an RCU hlist, therefore the list
> integrity must be maintained through the next grace period in case there
> are parallel readers using the element, in particular its forward
> pointer for traversal.

That is not quite so, list elements must be preserved, not the list
order.

> 
> > +	hlist_del(&mn->hlist);
> 
> For this to be updating the list, you must have some form of "write-side"
> exclusion as these primitives are not "parallel write safe".  It would
> be helpful for this routine to state what that write side exclusion is.

Yeah, has been noticed, read on in the thread :-)

> I am not sure it makes sense to add a _safe_rcu variant.  As I understand
> things an _safe variant is used where we are going to remove the current
> list element in the middle of a list walk.  However the key feature of an
> RCU data structure is that it will always be in a "safe" state until any
> parallel readers have completed.  For an hlist this means that the removed
> entry and its forward link must remain valid for as long as there may be
> a parallel reader traversing this list, ie. until the next grace period.
> If this link is valid for the parallel reader, then it must be valid for
> us, and if so it feels that hlist_for_each_entry_rcu should be sufficient
> to cope in the face of entries being unlinked as we traverse the list.

It does make sense, hlist_del_rcu() maintains the fwd reference, but it
does unlink it from the list proper. As long as there is a write side
exclusion around the actual removal as you noted.

rcu_read_lock();
hlist_for_each_entry_safe_rcu(tpos, pos, n, head, member) {

if (foo) {
spin_lock(write_lock);
hlist_del_rcu(tpos);
spin_unlock(write_unlock);
}
}
rcu_read_unlock();

is a safe construct in that the list itself stays a proper list, and
even items that might be caught in the to-be-deleted entries will have a
fwd way out.
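
The piece this construct leaves to the writer is the deferred free: an
element unlinked with hlist_del_rcu() can still be referenced by readers
until a grace period has elapsed. A sketch of that side, assuming the
notifier is embedded in a driver-private structure (my_* names are
illustrative, and write_lock stands in for whatever write-side lock
protects the list):

	static DEFINE_SPINLOCK(write_lock);	/* stand-in write-side lock */

	struct my_notifier {
		struct mmu_notifier mn;
		struct rcu_head rcu;
	};

	static void my_free_notifier(struct rcu_head *rcu)
	{
		kfree(container_of(rcu, struct my_notifier, rcu));
	}

	static void my_unregister(struct my_notifier *p)
	{
		spin_lock(&write_lock);
		hlist_del_rcu(&p->mn.hlist);	/* fwd pointer stays valid */
		spin_unlock(&write_lock);
		/* defer the actual free past the grace period */
		call_rcu(&p->rcu, my_free_notifier);
	}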



Re: [patch 1/6] mmu_notifier: Core code

2008-02-05 Thread Andy Whitcroft
On Mon, Jan 28, 2008 at 12:28:41PM -0800, Christoph Lameter wrote:
> Core code for mmu notifiers.
> 
> Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
> Signed-off-by: Andrea Arcangeli <[EMAIL PROTECTED]>
> 
> ---
>  include/linux/list.h |   14 ++
>  include/linux/mm_types.h |6 +
>  include/linux/mmu_notifier.h |  210 +++
>  include/linux/page-flags.h   |   10 ++
>  kernel/fork.c|2 
>  mm/Kconfig   |4 
>  mm/Makefile  |1 
>  mm/mmap.c|2 
>  mm/mmu_notifier.c|  101 
>  9 files changed, 350 insertions(+)
> 
> Index: linux-2.6/include/linux/mm_types.h
> ===
> --- linux-2.6.orig/include/linux/mm_types.h   2008-01-28 11:35:20.0 
> -0800
> +++ linux-2.6/include/linux/mm_types.h2008-01-28 11:35:22.0 
> -0800
> @@ -153,6 +153,10 @@ struct vm_area_struct {
>  #endif
>  };
>  
> +struct mmu_notifier_head {
> + struct hlist_head head;
> +};
> +
>  struct mm_struct {
>   struct vm_area_struct * mmap;   /* list of VMAs */
>   struct rb_root mm_rb;
> @@ -219,6 +223,8 @@ struct mm_struct {
>   /* aio bits */
>   rwlock_t        ioctx_list_lock;
>   struct kioctx   *ioctx_list;
> +
> + struct mmu_notifier_head mmu_notifier; /* MMU notifier list */
>  };
>  
>  #endif /* _LINUX_MM_TYPES_H */
> Index: linux-2.6/include/linux/mmu_notifier.h
> ===
> --- /dev/null 1970-01-01 00:00:00.0 +
> +++ linux-2.6/include/linux/mmu_notifier.h2008-01-28 11:43:03.0 
> -0800
> @@ -0,0 +1,210 @@
> +#ifndef _LINUX_MMU_NOTIFIER_H
> +#define _LINUX_MMU_NOTIFIER_H
> +
> +/*
> + * MMU notifier
> + *
> + * Notifier functions for hardware and software that establishes external
> + * references to pages of a Linux system. The notifier calls ensure that
> + * the external mappings are removed when the Linux VM removes memory ranges
> + * or individual pages from a process.
> + *
> + * These fall into two classes
> + *
> + * 1. mmu_notifier
> + *
> + *   These are callbacks registered with an mm_struct. If mappings are
> + *   removed from an address space then callbacks are performed.
> + *   Spinlocks must be held in order to walk the reverse maps and the
> + *   notifications are performed while the spinlock is held.
> + *
> + *
> + * 2. mmu_rmap_notifier
> + *
> + *   Callbacks for subsystems that provide their own rmaps. These
> + *   need to walk their own rmaps for a page. The invalidate_page
> + *   callback is outside of locks so that we are not in a strictly
> + *   atomic context (but we may be in a PF_MEMALLOC context if the
> + *   notifier is called from reclaim code) and are able to sleep.
> + *   Rmap notifiers need an extra page bit and are only available
> + *   on 64 bit platforms. It is up to the subsystem to mark pages
> + *   as PageExternalRmap as needed to trigger the callbacks. Pages
> + *   must be marked dirty if dirty bits are set in the external
> + *   pte.
> + */
> +
> +#include <linux/list.h>
> +#include <linux/spinlock.h>
> +#include <linux/rcupdate.h>
> +#include <linux/mm_types.h>
> +
> +struct mmu_notifier_ops;
> +
> +struct mmu_notifier {
> + struct hlist_node hlist;
> + const struct mmu_notifier_ops *ops;
> +};
> +
> +struct mmu_notifier_ops {
> + /*
> +  * Note: The mmu_notifier structure must be released with
> +  * call_rcu() since other processors are only guaranteed to
> +  * see the changes after a quiescent period.
> +  */
> + void (*release)(struct mmu_notifier *mn,
> + struct mm_struct *mm);
> +
> + int (*age_page)(struct mmu_notifier *mn,
> + struct mm_struct *mm,
> + unsigned long address);
> +
> + void (*invalidate_page)(struct mmu_notifier *mn,
> + struct mm_struct *mm,
> + unsigned long address);
> +
> + /*
> +  * lock indicates that the function is called under spinlock.
> +  */
> + void (*invalidate_range)(struct mmu_notifier *mn,
> +  struct mm_struct *mm,
> +  unsigned long start, unsigned long end,
> +  int lock);
> +};
> +
> +struct mmu_rmap_notifier_ops;
> +
> +struct mmu_rmap_notifier {
> + struct hlist_node hlist;
> + const struct mmu_rmap_notifier_ops *ops;
> +};
> +
> +struct mmu_rmap_notifier_ops {
> + /*
> +  * Called with the page lock held after ptes are modified or removed
> +  * so that a subsystem with its own rmap's can remove remote ptes
> +  * mapping a page.
> +  */
> + void (*invalidate_page)(struct mmu_rmap_notifier *mrn,
> + struct page *page);
> +};
> +
> +#ifdef CONFIG_MMU_NOTIFIER
> +
> +/*
> + * Must hold the 


Re: [kvm-devel] [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Christoph Lameter
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:

> > Hmmm.. exit_mmap is only called when the last reference is removed 
> > against the mm right? So no tasks are running anymore. No pages are left. 
> > Do we need to serialize at all for mmu_notifier_release?
> 
> KVM sure doesn't need any locking there.  I thought somebody had to
> possibly take a pin on the "mm_count" and pretend to call
> mmu_notifier_register at will until mmdrop was finally called, in an
> out-of-order fashion, given mmu_notifier_release was implemented as
> if the list could change from under it. Note mmdrop != mmput. mmput
> and in turn mm_users is the serialization point if you prefer to drop
> all locking from _release. Nobody must ever attempt a mmu_notifier_*
> after calling mmput for that mm. That should be enough to be
> safe. I'm fine either ways...

exit_mmap (where we call invalidate_all() and release()) is called when 
mm_users == 0:

void mmput(struct mm_struct *mm)
{
        might_sleep();

        if (atomic_dec_and_test(&mm->mm_users)) {
                exit_aio(mm);
                exit_mmap(mm);
                if (!list_empty(&mm->mmlist)) {
                        spin_lock(&mmlist_lock);
                        list_del(&mm->mmlist);
                        spin_unlock(&mmlist_lock);
                }
                put_swap_token(mm);
                mmdrop(mm);
        }
}
EXPORT_SYMBOL_GPL(mmput);

So there is only a single thread executing at the time when 
invalidate_all() is called from exit_mmap(). Then we drop the 
pages, and the page tables. After the page tables we call the ->release 
method and then remove the vmas.

So even dropping off the mmu_notifier chain in invalidate_all() could be 
done without an issue and without locking.

Trouble is if other callbacks attempt the same. Do we need to support the 
removal from the mmu_notifier list in invalidate_range()?



Re: [kvm-devel] [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Andrea Arcangeli
On Wed, Jan 30, 2008 at 03:55:37PM -0800, Christoph Lameter wrote:
> On Thu, 31 Jan 2008, Andrea Arcangeli wrote:
> 
> > > I think Andrea's original concept of the lock in the mmu_notifier_head
> > > structure was the best.  I agree with him that it should be a spinlock
> > > instead of the rw_lock.
> > 
> > BTW, I don't see the scalability concern with huge number of tasks:
> > the lock is still in the mm, down_write(mm->mmap_sem); oneinstruction;
> > up_write(mm->mmap_sem) is always going to scale worse than
> > spin_lock(mm->somethingelse); oneinstruction;
> > spin_unlock(mm->somethingelse).
> 
> If we put it elsewhere in the mm then we increase the size of the memory 
> used in the mm_struct.

Yes, and it will increase by the same amount of RAM that you would make
everyone pay even if MMU_NOTIFIER=n after your patch is applied (vs
mine, which generated zero RAM utilization increase when
MMU_NOTIFIER=n). And the additional RAM will provide not just
self-contained locking but higher scalability too.

I think it's much more important to generate zero RAM and CPU overhead
for embedded (this is something I was very careful to enforce in
all my patches) than to reduce scalability and give up self-contained
locking on full configurations with MMU_NOTIFIER=y.

> Hmmm.. exit_mmap is only called when the last reference is removed 
> against the mm right? So no tasks are running anymore. No pages are left. 
> Do we need to serialize at all for mmu_notifier_release?

KVM sure doesn't need any locking there.  I thought somebody had to
possibly take a pin on the "mm_count" and pretend to call
mmu_notifier_register at will until mmdrop was finally called, in an
out-of-order fashion, given mmu_notifier_release was implemented as
if the list could change from under it. Note mmdrop != mmput. mmput
and in turn mm_users is the serialization point if you prefer to drop
all locking from _release. Nobody must ever attempt a mmu_notifier_*
after calling mmput for that mm. That should be enough to be
safe. I'm fine either ways...


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Christoph Lameter
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:

> > I think Andrea's original concept of the lock in the mmu_notifier_head
> > structure was the best.  I agree with him that it should be a spinlock
> > instead of the rw_lock.
> 
> BTW, I don't see the scalability concern with huge number of tasks:
> the lock is still in the mm, down_write(mm->mmap_sem); oneinstruction;
> up_write(mm->mmap_sem) is always going to scale worse than
> spin_lock(mm->somethingelse); oneinstruction;
> spin_unlock(mm->somethingelse).

If we put it elsewhere in the mm then we increase the size of the memory 
used in the mm_struct.

> Furthermore if we go this route and we don't rely on implicit
> serialization of all the mmu notifier users against exit_mmap
> (i.e. the mmu notifier user must agree to stop calling
> mmu_notifier_register on a mm after the last mmput) the autodisarming
> feature will likely have to be removed or it can't possibly be safe to
> run mmu_notifier_unregister while mmu_notifier_release runs. With the
> auto-disarming feature, there is no way to safely know if
> mmu_notifier_unregister has to be called or not. I'm ok with removing
> the auto-disarming feature and to have as self-contained-as-possible
> locking. Then mmu_notifier_release can just become the
> invalidate_all_after and invalidate_all, invalidate_all_before.

Hmmm.. exit_mmap is only called when the last reference is removed 
against the mm right? So no tasks are running anymore. No pages are left. 
Do we need to serialize at all for mmu_notifier_release?

 


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Andrea Arcangeli
On Wed, Jan 30, 2008 at 04:20:35PM -0600, Robin Holt wrote:
> On Wed, Jan 30, 2008 at 11:19:28AM -0800, Christoph Lameter wrote:
> > On Wed, 30 Jan 2008, Jack Steiner wrote:
> > 
> > > Moving to a different lock solves the problem.
> > 
> > Well it gets us back to the issue why we removed the lock. As Robin said 
> > before: If it's global then we can have a huge number of tasks contending 
> > for the lock on startup of a process with a large number of ranks. The 
> > reason to go to mmap_sem was that it was placed in the mm_struct and so we 
> > would just have a couple of contentions per mm_struct.
> > 
> > I'll be looking for some other way to do this.
> 
> I think Andrea's original concept of the lock in the mmu_notifier_head
> structure was the best.  I agree with him that it should be a spinlock
> instead of the rw_lock.

BTW, I don't see the scalability concern with huge number of tasks:
the lock is still in the mm, down_write(mm->mmap_sem); oneinstruction;
up_write(mm->mmap_sem) is always going to scale worse than
spin_lock(mm->somethingelse); oneinstruction;
spin_unlock(mm->somethingelse).

Furthermore if we go this route and we don't rely on implicit
serialization of all the mmu notifier users against exit_mmap
(i.e. the mmu notifier user must agree to stop calling
mmu_notifier_register on a mm after the last mmput) the autodisarming
feature will likely have to be removed or it can't possibly be safe to
run mmu_notifier_unregister while mmu_notifier_release runs. With the
auto-disarming feature, there is no way to safely know if
mmu_notifier_unregister has to be called or not. I'm ok with removing
the auto-disarming feature and to have as self-contained-as-possible
locking. Then mmu_notifier_release can just become the
invalidate_all_after and invalidate_all, invalidate_all_before.


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Robin Holt
On Wed, Jan 30, 2008 at 11:19:28AM -0800, Christoph Lameter wrote:
> On Wed, 30 Jan 2008, Jack Steiner wrote:
> 
> > Moving to a different lock solves the problem.
> 
> Well it gets us back to the issue why we removed the lock. As Robin said 
> before: If it's global then we can have a huge number of tasks contending 
> for the lock on startup of a process with a large number of ranks. The 
> reason to go to mmap_sem was that it was placed in the mm_struct and so we 
> would just have a couple of contentions per mm_struct.
> 
> I'll be looking for some other way to do this.

I think Andrea's original concept of the lock in the mmu_notifier_head
structure was the best.  I agree with him that it should be a spinlock
instead of the rw_lock.

Thanks,
Robin


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Christoph Lameter
How about just taking the mmap_sem writelock in release? We have only a 
single caller of mmu_notifier_release() in mm/mmap.c and we know that we 
are not holding mmap_sem at that point. So just acquire it when needed?

Index: linux-2.6/mm/mmu_notifier.c
===
--- linux-2.6.orig/mm/mmu_notifier.c2008-01-30 11:21:57.0 -0800
+++ linux-2.6/mm/mmu_notifier.c 2008-01-30 11:24:59.0 -0800
@@ -18,6 +19,7 @@ void mmu_notifier_release(struct mm_stru
struct hlist_node *n, *t;
 
 	if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
+		down_write(&mm->mmap_sem);
 		rcu_read_lock();
 		hlist_for_each_entry_safe_rcu(mn, n, t,
 					      &mm->mmu_notifier.head, hlist) {
@@ -26,6 +28,7 @@ void mmu_notifier_release(struct mm_stru
 			mn->ops->release(mn, mm);
 		}
 		rcu_read_unlock();
+		up_write(&mm->mmap_sem);
synchronize_rcu();
}
 }


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Christoph Lameter
On Wed, 30 Jan 2008, Jack Steiner wrote:

> Moving to a different lock solves the problem.

Well it gets us back to the issue why we removed the lock. As Robin said 
before: If it's global then we can have a huge number of tasks contending 
for the lock on startup of a process with a large number of ranks. The 
reason to go to mmap_sem was that it was placed in the mm_struct and so we 
would just have a couple of contentions per mm_struct.

I'll be looking for some other way to do this.



Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Christoph Lameter
Ok. So I added the following patch:

---
 include/linux/mmu_notifier.h |1 +
 mm/mmu_notifier.c|   12 
 2 files changed, 13 insertions(+)

Index: linux-2.6/include/linux/mmu_notifier.h
===
--- linux-2.6.orig/include/linux/mmu_notifier.h 2008-01-30 11:09:06.0 
-0800
+++ linux-2.6/include/linux/mmu_notifier.h  2008-01-30 11:10:38.0 
-0800
@@ -146,6 +146,7 @@ static inline void mmu_notifier_head_ini
 
 extern void mmu_rmap_notifier_register(struct mmu_rmap_notifier *mrn);
 extern void mmu_rmap_notifier_unregister(struct mmu_rmap_notifier *mrn);
+extern void mmu_rmap_export_page(struct page *page);
 
 extern struct hlist_head mmu_rmap_notifier_list;
 
Index: linux-2.6/mm/mmu_notifier.c
===
--- linux-2.6.orig/mm/mmu_notifier.c2008-01-30 11:09:01.0 -0800
+++ linux-2.6/mm/mmu_notifier.c 2008-01-30 11:12:10.0 -0800
@@ -99,3 +99,15 @@ void mmu_rmap_notifier_unregister(struct
 }
 EXPORT_SYMBOL(mmu_rmap_notifier_unregister);
 
+/*
+ * Export a page.
+ *
+ * Pagelock must be held.
+ * Must be called before a page is put on an external rmap.
+ */
+void mmu_rmap_export_page(struct page *page)
+{
+   BUG_ON(!PageLocked(page));
+   SetPageExternalRmap(page);
+}
+EXPORT_SYMBOL(mmu_rmap_export_page);
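
Driver-side usage would then presumably look like the following sketch,
where my_rmap_insert stands in for the subsystem's own (hypothetical)
rmap insertion:

	static void my_export(struct page *page)
	{
		lock_page(page);
		mmu_rmap_export_page(page);	/* must precede the external rmap insert */
		my_rmap_insert(page);		/* hypothetical driver rmap insert */
		unlock_page(page);
	}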



Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Christoph Lameter
On Wed, 30 Jan 2008, Robin Holt wrote:

> Index: git-linus/mm/mmu_notifier.c
> ===
> --- git-linus.orig/mm/mmu_notifier.c  2008-01-30 11:43:45.0 -0600
> +++ git-linus/mm/mmu_notifier.c   2008-01-30 11:56:08.0 -0600
> @@ -99,3 +99,8 @@ void mmu_rmap_notifier_unregister(struct
>  }
>  EXPORT_SYMBOL(mmu_rmap_notifier_unregister);
>  
> +void mmu_rmap_export_page(struct page *page)
> +{
> + SetPageExternalRmap(page);
> +}
> +EXPORT_SYMBOL(mmu_rmap_export_page);

Then mmu_rmap_export_page would have to be called before the subsystem 
establishes the rmap entry for the page. Could we do all PageExternalRmap 
modifications under Pagelock?




Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Robin Holt
Back to one of Andrea's points from a couple days ago, I think we still
have a problem with the PageExternalRmap page flag.

If I had two drivers with external rmap implementations, there is no way
I can think of for a simple flag to coordinate a single page being
exported and maintained by the two.

Since the intended use seems to point in the direction that the external
rmap must be kept consistent with all the pages the driver has
exported, and the driver will already need to handle cases where the page
does not appear in its rmap, I would propose that the setting and clearing
be handled in the mmu_notifier code.

This is the first of two patches.  This one is intended as an addition
to patch 1/6.  I will post the other shortly under the patch 3/6 thread.


Index: git-linus/include/linux/mmu_notifier.h
===
--- git-linus.orig/include/linux/mmu_notifier.h 2008-01-30 11:43:45.0 
-0600
+++ git-linus/include/linux/mmu_notifier.h  2008-01-30 11:44:35.0 
-0600
@@ -146,6 +146,7 @@ static inline void mmu_notifier_head_ini
 
 extern void mmu_rmap_notifier_register(struct mmu_rmap_notifier *mrn);
 extern void mmu_rmap_notifier_unregister(struct mmu_rmap_notifier *mrn);
+extern void mmu_rmap_export_page(struct page *page);
 
 extern struct hlist_head mmu_rmap_notifier_list;
 
Index: git-linus/mm/mmu_notifier.c
===
--- git-linus.orig/mm/mmu_notifier.c2008-01-30 11:43:45.0 -0600
+++ git-linus/mm/mmu_notifier.c 2008-01-30 11:56:08.0 -0600
@@ -99,3 +99,8 @@ void mmu_rmap_notifier_unregister(struct
 }
 EXPORT_SYMBOL(mmu_rmap_notifier_unregister);
 
+void mmu_rmap_export_page(struct page *page)
+{
+   SetPageExternalRmap(page);
+}
+EXPORT_SYMBOL(mmu_rmap_export_page);


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Peter Zijlstra

On Wed, 2008-01-30 at 16:37 +0100, Andrea Arcangeli wrote:
> On Tue, Jan 29, 2008 at 06:29:10PM -0800, Christoph Lameter wrote:
> > +void mmu_notifier_release(struct mm_struct *mm)
> > +{
> > +   struct mmu_notifier *mn;
> > +   struct hlist_node *n, *t;
> > +
> > +	if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
> > +		rcu_read_lock();
> > +		hlist_for_each_entry_safe_rcu(mn, n, t,
> > +					      &mm->mmu_notifier.head, hlist) {
> > +			hlist_del_rcu(&mn->hlist);
> 
> This will race and kernel crash against mmu_notifier_register in
> SMP. You should resurrect the per-mmu_notifier_head lock in my last
> patch (except it can be converted from a rwlock_t to a regular
> spinlock_t) and drop the mmap_sem from
> mmu_notifier_register/unregister.

Agreed, sorry for this oversight.



Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Andrea Arcangeli
On Wed, Jan 30, 2008 at 09:53:06AM -0600, Jack Steiner wrote:
> That will also resolve the problem we discussed yesterday. 
> I want to unregister my mmu_notifier when a GRU segment is
> unmapped. This would not necessarily be at task termination.

My proof that there is something wrong in the smp locking of the
current code is very simple: it can't be right to use
hlist_for_each_entry_safe_rcu and rcu_read_lock inside
mmu_notifier_release, and then to call hlist_del_rcu without any
spinlock or semaphore. If we walk the list with
hlist_for_each_entry_safe_rcu (and not with
hlist_for_each_entry_safe), it means the list _can_ change from under
us, and in turn the hlist_del_rcu must be surrounded by a spinlock or
semaphore too!

If by design the list _can't_ change from under us and calling
hlist_del_rcu was safe w/o locks, then hlist_for_each_entry_safe is
_sure_ enough for mmu_notifier_release, and rcu_read_lock most
certainly can be removed too.

To make a usage case where the race could trigger, I was thinking of
somebody bumping the mm_count (not mm_users) and registering a
notifier while mmu_notifier_release runs, relying on ->release to
know if it has to run mmu_notifier_unregister. However I now started
wondering how it can rely on ->release to know that, if ->release is
called after hlist_del_rcu, because with the latest changes ->release
will also allow the mn to release itself ;). It's unsafe to call
list_del_rcu twice (the second will crash on a poisoned entry).

This starts to make me think we should remove the auto-disarming
feature and require the notifier-user to have the ->release call
mmu_notifier_unregister first and to free the "mn" inside ->release
too if needed. Or alternatively the notifier-user can bump mm_count
and to call a mmu_notifier_unregister before calling mmdrop (like kvm
could do).

Another approach is to simply define mmu_notifier_release as
implicitly serialized by other code design, with a real lock (not rcu)
against the whole register/unregister operations. So to guarantee the
notifier list can't change from under us while mmu_notifier_release
runs. If we go this route, yes, the auto-disarming hlist_del can be
kept, the current code would have been safe, but to avoid confusion
the mmu_notifier_release shall become this:

void mmu_notifier_release(struct mm_struct *mm)
{
        struct mmu_notifier *mn;
        struct hlist_node *n, *t;

        if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
                hlist_for_each_entry_safe(mn, n, t,
                                          &mm->mmu_notifier.head, hlist) {
                        hlist_del(&mn->hlist);
                        if (mn->ops->release)
                                mn->ops->release(mn, mm);
                }
        }
}

> However, the mmap_sem is already held for write by the core
> VM at the point I would call the unregister function.
> Currently, there is no __mmu_notifier_unregister() defined.
> 
> Moving to a different lock solves the problem.

Unless mmu_notifier_release becomes like the above and we rely on the
user of the mmu notifiers to implement a high-level external lock (where
we definitely forbid bumping the mm_count of the mm, and calling
register/unregister while mmu_notifier_release could run), 1) moving to a
different lock and 2) removing the auto-disarming hlist_del_rcu from
mmu_notifier_release sounds like the only possible SMP-safe way.

As far as KVM is concerned mmu_notifier_released could be changed to
the version I written above and everything should be ok. For KVM the
mm_count bump is done by the task that also holds a mm_user, so when
exit_mmap runs I don't think the list could possible change anymore.

Anyway those are details that can be perfected after mainline merging,
so this isn't something to worry about too much right now. My idea is
to keep working to perfect it while I hope progress is being made by
Christoph to merge the mmu notifiers V3 patchset in mainline ;).


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Jack Steiner
On Wed, Jan 30, 2008 at 04:37:49PM +0100, Andrea Arcangeli wrote:
> On Tue, Jan 29, 2008 at 06:29:10PM -0800, Christoph Lameter wrote:
> > +void mmu_notifier_release(struct mm_struct *mm)
> > +{
> > +   struct mmu_notifier *mn;
> > +   struct hlist_node *n, *t;
> > +
> > +	if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
> > +		rcu_read_lock();
> > +		hlist_for_each_entry_safe_rcu(mn, n, t,
> > +					      &mm->mmu_notifier.head, hlist) {
> > +			hlist_del_rcu(&mn->hlist);
> 
> This will race and kernel crash against mmu_notifier_register in
> SMP. You should resurrect the per-mmu_notifier_head lock in my last
> patch (except it can be converted from a rwlock_t to a regular
> spinlock_t) and drop the mmap_sem from
> mmu_notifier_register/unregister.

Agree.

That will also resolve the problem we discussed yesterday. 
I want to unregister my mmu_notifier when a GRU segment is
unmapped. This would not necessarily be at task termination.

However, the mmap_sem is already held for write by the core
VM at the point I would call the unregister function.
Currently, there is no __mmu_notifier_unregister() defined.

Moving to a different lock solves the problem.


-- jack


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Andrea Arcangeli
On Tue, Jan 29, 2008 at 06:29:10PM -0800, Christoph Lameter wrote:
> +void mmu_notifier_release(struct mm_struct *mm)
> +{
> + struct mmu_notifier *mn;
> + struct hlist_node *n, *t;
> +
> +	if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
> +		rcu_read_lock();
> +		hlist_for_each_entry_safe_rcu(mn, n, t,
> +					      &mm->mmu_notifier.head, hlist) {
> +			hlist_del_rcu(&mn->hlist);

This will race and kernel crash against mmu_notifier_register in
SMP. You should resurrect the per-mmu_notifier_head lock in my last
patch (except it can be converted from a rwlock_t to a regular
spinlock_t) and drop the mmap_sem from
mmu_notifier_register/unregister.


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Robin Holt
Back to one of Andrea's points from a couple days ago, I think we still
have a problem with the PageExternalRmap page flag.

If I had two drivers with external rmap implementations, there is no way
I can think of for a simple flag to coordinate a single page being
exported and maintained by the two.

Since the intended use seems to point in the direction that the
external rmap must be maintained consistent with all pages the driver
has exported, and the driver will already need to handle cases where
the page does not appear in its rmap, I would propose that the setting
and clearing should be handled in the mmu_notifier code.

This is the first of two patches.  This one is intended as an addition
to patch 1/6.  I will post the other shortly under the patch 3/6 thread.


Index: git-linus/include/linux/mmu_notifier.h
===================================================================
--- git-linus.orig/include/linux/mmu_notifier.h	2008-01-30 11:43:45.0 -0600
+++ git-linus/include/linux/mmu_notifier.h	2008-01-30 11:44:35.0 -0600
@@ -146,6 +146,7 @@ static inline void mmu_notifier_head_ini
 
 extern void mmu_rmap_notifier_register(struct mmu_rmap_notifier *mrn);
 extern void mmu_rmap_notifier_unregister(struct mmu_rmap_notifier *mrn);
+extern void mmu_rmap_export_page(struct page *page);
 
 extern struct hlist_head mmu_rmap_notifier_list;
 
Index: git-linus/mm/mmu_notifier.c
===================================================================
--- git-linus.orig/mm/mmu_notifier.c	2008-01-30 11:43:45.0 -0600
+++ git-linus/mm/mmu_notifier.c	2008-01-30 11:56:08.0 -0600
@@ -99,3 +99,8 @@ void mmu_rmap_notifier_unregister(struct
 }
 EXPORT_SYMBOL(mmu_rmap_notifier_unregister);
 
+void mmu_rmap_export_page(struct page *page)
+{
+   SetPageExternalRmap(page);
+}
+EXPORT_SYMBOL(mmu_rmap_export_page);


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Christoph Lameter
On Wed, 30 Jan 2008, Robin Holt wrote:

> Index: git-linus/mm/mmu_notifier.c
> ===================================================================
> --- git-linus.orig/mm/mmu_notifier.c	2008-01-30 11:43:45.0 -0600
> +++ git-linus/mm/mmu_notifier.c	2008-01-30 11:56:08.0 -0600
> @@ -99,3 +99,8 @@ void mmu_rmap_notifier_unregister(struct
>  }
>  EXPORT_SYMBOL(mmu_rmap_notifier_unregister);
>  
> +void mmu_rmap_export_page(struct page *page)
> +{
> +	SetPageExternalRmap(page);
> +}
> +EXPORT_SYMBOL(mmu_rmap_export_page);

Then mmu_rmap_export_page would have to be called before the subsystem 
establishes the rmap entry for the page. Could we do all PageExternalRmap 
modifications under Pagelock?




Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Christoph Lameter
On Wed, 30 Jan 2008, Jack Steiner wrote:

> Moving to a different lock solves the problem.

Well it gets us back to the issue why we removed the lock. As Robin said 
before: If its global then we can have a huge number of tasks contending 
for the lock on startup of a process with a large number of ranks. The 
reason to go to mmap_sem was that it was placed in the mm_struct and so we 
would just have a couple of contentions per mm_struct.

I'll be looking for some other way to do this.



Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Christoph Lameter
How about just taking the mmap_sem writelock in release? We have only a 
single caller of mmu_notifier_release() in mm/mmap.c and we know that we 
are not holding mmap_sem at that point. So just acquire it when needed?

Index: linux-2.6/mm/mmu_notifier.c
===================================================================
--- linux-2.6.orig/mm/mmu_notifier.c	2008-01-30 11:21:57.0 -0800
+++ linux-2.6/mm/mmu_notifier.c	2008-01-30 11:24:59.0 -0800
@@ -18,6 +19,7 @@ void mmu_notifier_release(struct mm_stru
 	struct hlist_node *n, *t;
 
 	if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
+		down_write(&mm->mmap_sem);
 		rcu_read_lock();
 		hlist_for_each_entry_safe_rcu(mn, n, t,
 					      &mm->mmu_notifier.head, hlist) {
@@ -26,6 +28,7 @@ void mmu_notifier_release(struct mm_stru
 			mn->ops->release(mn, mm);
 		}
 		rcu_read_unlock();
+		up_write(&mm->mmap_sem);
 		synchronize_rcu();
 	}
 }


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Christoph Lameter
Ok. So I added the following patch:

---
 include/linux/mmu_notifier.h |1 +
 mm/mmu_notifier.c|   12 
 2 files changed, 13 insertions(+)

Index: linux-2.6/include/linux/mmu_notifier.h
===================================================================
--- linux-2.6.orig/include/linux/mmu_notifier.h	2008-01-30 11:09:06.0 -0800
+++ linux-2.6/include/linux/mmu_notifier.h	2008-01-30 11:10:38.0 -0800
@@ -146,6 +146,7 @@ static inline void mmu_notifier_head_ini
 
 extern void mmu_rmap_notifier_register(struct mmu_rmap_notifier *mrn);
 extern void mmu_rmap_notifier_unregister(struct mmu_rmap_notifier *mrn);
+extern void mmu_rmap_export_page(struct page *page);
 
 extern struct hlist_head mmu_rmap_notifier_list;
 
Index: linux-2.6/mm/mmu_notifier.c
===================================================================
--- linux-2.6.orig/mm/mmu_notifier.c	2008-01-30 11:09:01.0 -0800
+++ linux-2.6/mm/mmu_notifier.c	2008-01-30 11:12:10.0 -0800
@@ -99,3 +99,15 @@ void mmu_rmap_notifier_unregister(struct
 }
 EXPORT_SYMBOL(mmu_rmap_notifier_unregister);
 
+/*
+ * Export a page.
+ *
+ * Pagelock must be held.
+ * Must be called before a page is put on an external rmap.
+ */
+void mmu_rmap_export_page(struct page *page)
+{
+   BUG_ON(!PageLocked(page));
+   SetPageExternalRmap(page);
+}
+EXPORT_SYMBOL(mmu_rmap_export_page);



Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Robin Holt
On Wed, Jan 30, 2008 at 11:19:28AM -0800, Christoph Lameter wrote:
> On Wed, 30 Jan 2008, Jack Steiner wrote:
> 
> > Moving to a different lock solves the problem.
> 
> Well it gets us back to the issue why we removed the lock. As Robin said 
> before: If its global then we can have a huge number of tasks contending 
> for the lock on startup of a process with a large number of ranks. The 
> reason to go to mmap_sem was that it was placed in the mm_struct and so we 
> would just have a couple of contentions per mm_struct.
> 
> I'll be looking for some other way to do this.

I think Andrea's original concept of the lock in the mmu_notifier_head
structure was the best.  I agree with him that it should be a spinlock
instead of the rw_lock.
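
A sketch of what that would look like against patch 1/6; the field
name "lock" is hypothetical, and only the register side is shown:

	struct mmu_notifier_head {
		struct hlist_head head;
		spinlock_t lock;
	};

	void mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
	{
		spin_lock(&mm->mmu_notifier.lock);
		hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier.head);
		spin_unlock(&mm->mmu_notifier.lock);
	}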

Thanks,
Robin


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Christoph Lameter
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:

> > I think Andrea's original concept of the lock in the mmu_notifier_head
> > structure was the best.  I agree with him that it should be a spinlock
> > instead of the rw_lock.
> 
> BTW, I don't see the scalability concern with huge number of tasks:
> the lock is still in the mm, down_write(&mm->mmap_sem); oneinstruction;
> up_write(&mm->mmap_sem) is always going to scale worse than
> spin_lock(&mm->somethingelse); oneinstruction;
> spin_unlock(&mm->somethingelse).

If we put it elsewhere in the mm then we increase the size of the memory 
used in the mm_struct.

> Furthermore if we go this route and we don't rely on implicit
> serialization of all the mmu notifier users against exit_mmap
> (i.e. the mmu notifier user must agree to stop calling
> mmu_notifier_register on a mm after the last mmput) the autodisarming
> feature will likely have to be removed or it can't possibly be safe to
> run mmu_notifier_unregister while mmu_notifier_release runs. With the
> auto-disarming feature, there is no way to safely know if
> mmu_notifier_unregister has to be called or not. I'm ok with removing
> the auto-disarming feature and to have as self-contained-as-possible
> locking. Then mmu_notifier_release can just become the
> invalidate_all_after and invalidate_all, invalidate_all_before.

Hmmm.. exit_mmap is only called when the last reference is removed 
against the mm right? So no tasks are running anymore. No pages are left. 
Do we need to serialize at all for mmu_notifier_release?


Re: [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Andrea Arcangeli
On Wed, Jan 30, 2008 at 04:20:35PM -0600, Robin Holt wrote:
> On Wed, Jan 30, 2008 at 11:19:28AM -0800, Christoph Lameter wrote:
> > On Wed, 30 Jan 2008, Jack Steiner wrote:
> > 
> > > Moving to a different lock solves the problem.
> > 
> > Well it gets us back to the issue why we removed the lock. As Robin said 
> > before: If its global then we can have a huge number of tasks contending 
> > for the lock on startup of a process with a large number of ranks. The 
> > reason to go to mmap_sem was that it was placed in the mm_struct and so we 
> > would just have a couple of contentions per mm_struct.
> > 
> > I'll be looking for some other way to do this.
> 
> I think Andrea's original concept of the lock in the mmu_notifier_head
> structure was the best.  I agree with him that it should be a spinlock
> instead of the rw_lock.

BTW, I don't see the scalability concern with huge number of tasks:
the lock is still in the mm, down_write(&mm->mmap_sem); oneinstruction;
up_write(&mm->mmap_sem) is always going to scale worse than
spin_lock(&mm->somethingelse); oneinstruction;
spin_unlock(&mm->somethingelse).

Furthermore if we go this route and we don't rely on implicit
serialization of all the mmu notifier users against exit_mmap
(i.e. the mmu notifier user must agree to stop calling
mmu_notifier_register on a mm after the last mmput) the autodisarming
feature will likely have to be removed or it can't possibly be safe to
run mmu_notifier_unregister while mmu_notifier_release runs. With the
auto-disarming feature, there is no way to safely know if
mmu_notifier_unregister has to be called or not. I'm ok with removing
the auto-disarming feature and to have as self-contained-as-possible
locking. Then mmu_notifier_release can just become the
invalidate_all_after and invalidate_all, invalidate_all_before.


Re: [kvm-devel] [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Andrea Arcangeli
On Wed, Jan 30, 2008 at 03:55:37PM -0800, Christoph Lameter wrote:
> On Thu, 31 Jan 2008, Andrea Arcangeli wrote:
> 
> > > I think Andrea's original concept of the lock in the mmu_notifier_head
> > > structure was the best.  I agree with him that it should be a spinlock
> > > instead of the rw_lock.
> > 
> > BTW, I don't see the scalability concern with huge number of tasks:
> > the lock is still in the mm, down_write(&mm->mmap_sem); oneinstruction;
> > up_write(&mm->mmap_sem) is always going to scale worse than
> > spin_lock(&mm->somethingelse); oneinstruction;
> > spin_unlock(&mm->somethingelse).
> 
> If we put it elsewhere in the mm then we increase the size of the memory 
> used in the mm_struct.

Yes, and it will increase by the same amount of RAM that you expect
everyone to pay even if MMU_NOTIFIER=n after your patch is applied (vs
mine, which generated 0 ram utilization increase when
MMU_NOTIFIER=n). And the additional ram will provide not just
self-contained locking but higher scalability too.

I think it's much more important to generate zero ram and CPU overhead
for the embedded (this is something I was very careful to enforce in
all my patches), than to reduce scalability and not have self-contained
locking on full configurations with MMU_NOTIFIER=y.

> Hmmm.. exit_mmap is only called when the last reference is removed 
> against the mm right? So no tasks are running anymore. No pages are left. 
> Do we need to serialize at all for mmu_notifier_release?

KVM sure doesn't need any locking there.  I thought somebody had to
possibly take a pin on the mm_count and pretend to call
mmu_notifier_register at will until mmdrop was finally called, in an
out of order fashion given mmu_notifier_release was implemented like
if the list could change from under it. Note mmdrop != mmput. mmput
and in turn mm_users is the serialization point if you prefer to drop
all locking from _release. Nobody must ever attempt a mmu_notifier_*
after calling mmput for that mm. That should be enough to be
safe. I'm fine either ways...


Re: [kvm-devel] [patch 1/6] mmu_notifier: Core code

2008-01-30 Thread Christoph Lameter
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:

> > Hmmm.. exit_mmap is only called when the last reference is removed 
> > against the mm right? So no tasks are running anymore. No pages are left. 
> > Do we need to serialize at all for mmu_notifier_release?
> 
> KVM sure doesn't need any locking there.  I thought somebody had to
> possibly take a pin on the mm_count and pretend to call
> mmu_notifier_register at will until mmdrop was finally called, in an
> out of order fashion given mmu_notifier_release was implemented like
> if the list could change from under it. Note mmdrop != mmput. mmput
> and in turn mm_users is the serialization point if you prefer to drop
> all locking from _release. Nobody must ever attempt a mmu_notifier_*
> after calling mmput for that mm. That should be enough to be
> safe. I'm fine either ways...

exit_mmap (where we call invalidate_all() and release()) is called when 
mm_users == 0:

void mmput(struct mm_struct *mm)
{
	might_sleep();

	if (atomic_dec_and_test(&mm->mm_users)) {
		exit_aio(mm);
		exit_mmap(mm);
		if (!list_empty(&mm->mmlist)) {
			spin_lock(&mmlist_lock);
			list_del(&mm->mmlist);
			spin_unlock(&mmlist_lock);
		}
		put_swap_token(mm);
		mmdrop(mm);
	}
}
EXPORT_SYMBOL_GPL(mmput);

So there is only a single thread executing at the time when 
invalidate_all() is called from exit_mmap(). Then we drop the 
pages, and the page tables. After the page tables we call the ->release 
method and then remove the vmas.

So even dropping off the mmu_notifier chain in invalidate_all() could be 
done without an issue and without locking.

Trouble is if other callbacks attempt the same. Do we need to support the 
removal from the mmu_notifier list in invalidate_range()?



[patch 1/6] mmu_notifier: Core code

2008-01-29 Thread Christoph Lameter
Core code for mmu notifiers.

Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
Signed-off-by: Andrea Arcangeli <[EMAIL PROTECTED]>

---
 include/linux/list.h |   14 ++
 include/linux/mm_types.h |6 +
 include/linux/mmu_notifier.h |  210 +++
 include/linux/page-flags.h   |   10 ++
 kernel/fork.c|2 
 mm/Kconfig   |4 
 mm/Makefile  |1 
 mm/mmap.c|2 
 mm/mmu_notifier.c|  101 
 9 files changed, 350 insertions(+)

Index: linux-2.6/include/linux/mm_types.h
===================================================================
--- linux-2.6.orig/include/linux/mm_types.h	2008-01-29 16:56:33.0 -0800
+++ linux-2.6/include/linux/mm_types.h	2008-01-29 16:56:36.0 -0800
@@ -153,6 +153,10 @@ struct vm_area_struct {
 #endif
 };
 
+struct mmu_notifier_head {
+   struct hlist_head head;
+};
+
 struct mm_struct {
struct vm_area_struct * mmap;   /* list of VMAs */
struct rb_root mm_rb;
@@ -219,6 +223,8 @@ struct mm_struct {
/* aio bits */
rwlock_tioctx_list_lock;
struct kioctx   *ioctx_list;
+
+   struct mmu_notifier_head mmu_notifier; /* MMU notifier list */
 };
 
 #endif /* _LINUX_MM_TYPES_H */
Index: linux-2.6/include/linux/mmu_notifier.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.0 +0000
+++ linux-2.6/include/linux/mmu_notifier.h	2008-01-29 16:56:36.0 -0800
@@ -0,0 +1,210 @@
+#ifndef _LINUX_MMU_NOTIFIER_H
+#define _LINUX_MMU_NOTIFIER_H
+
+/*
+ * MMU notifier
+ *
+ * Notifier functions for hardware and software that establish external
+ * references to pages of a Linux system. The notifier calls ensure that
+ * the external mappings are removed when the Linux VM removes memory ranges
+ * or individual pages from a process.
+ *
+ * These fall into two classes:
+ *
+ * 1. mmu_notifier
+ *
+ * These are callbacks registered with an mm_struct. If mappings are
+ * removed from an address space then callbacks are performed.
+ * Spinlocks must be held in order to walk reverse maps and the
+ * notifications are performed while the spinlock is held.
+ *
+ *
+ * 2. mmu_rmap_notifier
+ *
+ * Callbacks for subsystems that provide their own rmaps. These
+ * need to walk their own rmaps for a page. The invalidate_page
+ * callback is outside of locks so that we are not in a strictly
+ * atomic context (but we may be in a PF_MEMALLOC context if the
+ * notifier is called from reclaim code) and are able to sleep.
+ * Rmap notifiers need an extra page bit and are only available
+ * on 64 bit platforms. It is up to the subsystem to mark pages
+ * as PageExternalRmap as needed to trigger the callbacks. Pages
+ * must be marked dirty if dirty bits are set in the external
+ * pte.
+ */
+
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/rcupdate.h>
+#include <linux/mm_types.h>
+
+struct mmu_notifier_ops;
+
+struct mmu_notifier {
+   struct hlist_node hlist;
+   const struct mmu_notifier_ops *ops;
+};
+
+struct mmu_notifier_ops {
+   /*
+* Note: The mmu_notifier structure must be released with
+* call_rcu() since other processors are only guaranteed to
+* see the changes after a quiescent period.
+*/
+   void (*release)(struct mmu_notifier *mn,
+   struct mm_struct *mm);
+
+   int (*age_page)(struct mmu_notifier *mn,
+   struct mm_struct *mm,
+   unsigned long address);
+
+   void (*invalidate_page)(struct mmu_notifier *mn,
+   struct mm_struct *mm,
+   unsigned long address);
+
+   /*
+* lock indicates that the function is called under spinlock.
+*/
+   void (*invalidate_range)(struct mmu_notifier *mn,
+struct mm_struct *mm,
+unsigned long start, unsigned long end,
+int lock);
+};
+
+struct mmu_rmap_notifier_ops;
+
+struct mmu_rmap_notifier {
+   struct hlist_node hlist;
+   const struct mmu_rmap_notifier_ops *ops;
+};
+
+struct mmu_rmap_notifier_ops {
+   /*
+* Called with the page lock held after ptes are modified or removed
+* so that a subsystem with its own rmap's can remove remote ptes
+* mapping a page.
+*/
+   void (*invalidate_page)(struct mmu_rmap_notifier *mrn,
+   struct page *page);
+};
+
+#ifdef CONFIG_MMU_NOTIFIER
+
+/*
+ * Must hold the mmap_sem for write.
+ *
+ * RCU is used to traverse the list. A quiescent period needs to pass
+ * before the notifier is guaranteed to be visible to all threads
+ */
+extern void __mmu_notifier_register(struct mmu_notifier *mn,
+   

Re: [patch 1/6] mmu_notifier: Core code

2008-01-29 Thread Avi Kivity

Christoph Lameter wrote:
> On Tue, 29 Jan 2008, Andrea Arcangeli wrote:
> 
> > > +   struct mmu_notifier_head mmu_notifier; /* MMU notifier list */
> > >  };
> > 
> > Not sure why you prefer to waste ram when MMU_NOTIFIER=n, this is a
> > regression (a minor one though).
> 
> Andrew does not like #ifdefs and it makes it possible to verify calling 
> conventions if !CONFIG_MMU_NOTIFIER.

You could define mmu_notifier_head as an empty struct in that case.

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: [patch 1/6] mmu_notifier: Core code

2008-01-29 Thread Christoph Lameter
On Tue, 29 Jan 2008, Andrea Arcangeli wrote:

> > +   struct mmu_notifier_head mmu_notifier; /* MMU notifier list */
> >  };
> 
> Not sure why you prefer to waste ram when MMU_NOTIFIER=n, this is a
> regression (a minor one though).

Andrew does not like #ifdefs and it makes it possible to verify calling 
conventions if !CONFIG_MMU_NOTIFIER.

> It's out of my reach how you can be ok with lock=1. You said you have
> to block, if you can deal with lock=1 once, why can't you deal with
> lock=1 _always_?

Not sure yet. We may have to do more in that area. Need to have feedback 
from Robin.


Re: [patch 1/6] mmu_notifier: Core code

2008-01-29 Thread Robin Holt
I am going to separate my comments into individual replies to help
reduce the chance they are lost.

> +void mmu_notifier_release(struct mm_struct *mm)
...
> +	hlist_for_each_entry_safe_rcu(mn, n, t,
> +				      &mm->mmu_notifier.head, hlist) {
> +		if (mn->ops->release)
> +			mn->ops->release(mn, mm);
> +		hlist_del(&mn->hlist);

This is a use-after-free issue.  The hlist_del_rcu needs to be done before
the callout as the structure containing the mmu_notifier structure will
need to be freed from within the ->release callout.
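
Why the order matters, as a sketch: a typical ->release frees the
object that embeds the mmu_notifier, so the hlist_node is freed memory
as soon as the callout returns. The driver type below is made up.

	struct my_notifier {
		struct mmu_notifier mn;
		/* driver state ... */
	};

	static void my_release(struct mmu_notifier *mn, struct mm_struct *mm)
	{
		kfree(container_of(mn, struct my_notifier, mn));
	}

	/* after mn->ops->release() returns, touching mn->hlist is a
	   use-after-free; hence hlist_del_rcu(&mn->hlist) must come first */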

Thanks,
Robin


Re: [patch 1/6] mmu_notifier: Core code

2008-01-29 Thread Andrea Arcangeli
On Tue, Jan 29, 2008 at 02:59:14PM +0100, Andrea Arcangeli wrote:
> The down_write is garbage. The caller should put it around
> mmu_notifier_register if something. The same way the caller should
> call synchronize_rcu after mmu_notifier_register if it needs
> synchronous behavior from the notifiers. The default version of
> mmu_notifier_register shouldn't be cluttered with unnecessary locking.

Ooops my spinlock was gone from the notifier head so the above
comment is wrong sorry! I thought down_write was needed to serialize
against some _external_ event, not to serialize the list updates in
place of my explicit lock. The critical section is so small that a
semaphore is the wrong locking choice, that's why I assumed it was for
an external event. Anyway RCU won't be optimal for a huge flood of
register/unregister, I agree the down_write shouldn't create much
contention and it saves 4 bytes from each mm_struct, and we can always
change it to a proper spinlock later if needed.


Re: [patch 1/6] mmu_notifier: Core code

2008-01-29 Thread Andrea Arcangeli
On Mon, Jan 28, 2008 at 12:28:41PM -0800, Christoph Lameter wrote:
> +struct mmu_notifier_head {
> + struct hlist_head head;
> +};
> +
>  struct mm_struct {
>   struct vm_area_struct * mmap;   /* list of VMAs */
>   struct rb_root mm_rb;
> @@ -219,6 +223,8 @@ struct mm_struct {
>   /* aio bits */
>   rwlock_tioctx_list_lock;
>   struct kioctx   *ioctx_list;
> +
> + struct mmu_notifier_head mmu_notifier; /* MMU notifier list */
>  };

Not sure why you prefer to waste ram when MMU_NOTIFIER=n, this is a
regression (a minor one though).

> + /*
> +  * lock indicates that the function is called under spinlock.
> +  */
> + void (*invalidate_range)(struct mmu_notifier *mn,
> +  struct mm_struct *mm,
> +  unsigned long start, unsigned long end,
> +  int lock);
> +};

It's out of my reach how you can be ok with lock=1. You said you have
to block, if you can deal with lock=1 once, why can't you deal with
lock=1 _always_?

> +/*
> + * Note that all notifiers use RCU. The updates are only guaranteed to be
> + * visible to other processes after a RCU quiescent period!
> + */
> +void __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
> +{
> +	hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier.head);
> +}
> +EXPORT_SYMBOL_GPL(__mmu_notifier_register);
> +
> +void mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
> +{
> +	down_write(&mm->mmap_sem);
> +	__mmu_notifier_register(mn, mm);
> +	up_write(&mm->mmap_sem);
> +}
> +EXPORT_SYMBOL_GPL(mmu_notifier_register);

The down_write is garbage. The caller should put it around
mmu_notifier_register if something. The same way the caller should
call synchronize_rcu after mmu_notifier_register if it needs
synchronous behavior from the notifiers. The default version of
mmu_notifier_register shouldn't be cluttered with unnecessary locking.
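
A caller-side sketch of the pattern being proposed here: the user
supplies its own serialization around the bare __mmu_notifier_register,
and its own grace period if it needs the registration visible
synchronously.

	down_write(&mm->mmap_sem);	/* or any external lock of the caller */
	__mmu_notifier_register(mn, mm);
	up_write(&mm->mmap_sem);
	synchronize_rcu();		/* only if synchronous behavior is needed */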


Re: [patch 1/6] mmu_notifier: Core code

2008-01-28 Thread Christoph Lameter
On Mon, 28 Jan 2008, Robin Holt wrote:

> USE_AFTER_FREE!!!  I made this same comment as well as other relevant
> comments last week.

Must have slipped somehow. Patch needs to be applied after the rcu fix.

Please repeat the other relevant comments if they are still relevant; I 
thought I had worked through them.



mmu_notifier_release: remove mmu_notifier struct from list before calling 
->release

Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>

---
 mm/mmu_notifier.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux-2.6/mm/mmu_notifier.c
===================================================================
--- linux-2.6.orig/mm/mmu_notifier.c	2008-01-28 17:17:05.0 -0800
+++ linux-2.6/mm/mmu_notifier.c	2008-01-28 17:17:10.0 -0800
@@ -21,9 +21,9 @@ void mmu_notifier_release(struct mm_stru
 	rcu_read_lock();
 	hlist_for_each_entry_safe_rcu(mn, n, t,
 				      &mm->mmu_notifier.head, hlist) {
+		hlist_del_rcu(&mn->hlist);
 		if (mn->ops->release)
 			mn->ops->release(mn, mm);
-		hlist_del_rcu(&mn->hlist);
 	}
 	rcu_read_unlock();
 	synchronize_rcu();


Re: [patch 1/6] mmu_notifier: Core code

2008-01-28 Thread Robin Holt
> +void mmu_notifier_release(struct mm_struct *mm)
...
> +	hlist_for_each_entry_safe_rcu(mn, n, t,
> +				      &mm->mmu_notifier.head, hlist) {
> +		if (mn->ops->release)
> +			mn->ops->release(mn, mm);
> +		hlist_del(&mn->hlist);

USE_AFTER_FREE!!!  I made this same comment as well as other relevant
comments last week.


Robin


Re: [patch 1/6] mmu_notifier: Core code

2008-01-28 Thread Christoph Lameter
mmu core: Need to use hlist_del

Wrong type of list del in mmu_notifier_release()

Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>

---
 mm/mmu_notifier.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux-2.6/mm/mmu_notifier.c
===================================================================
--- linux-2.6.orig/mm/mmu_notifier.c	2008-01-28 14:02:18.0 -0800
+++ linux-2.6/mm/mmu_notifier.c	2008-01-28 14:02:30.0 -0800
@@ -23,7 +23,7 @@ void mmu_notifier_release(struct mm_stru
 			      &mm->mmu_notifier.head, hlist) {
 		if (mn->ops->release)
 			mn->ops->release(mn, mm);
-		hlist_del(&mn->hlist);
+		hlist_del_rcu(&mn->hlist);
 	}
 	rcu_read_unlock();
 	synchronize_rcu();



[patch 1/6] mmu_notifier: Core code

2008-01-28 Thread Christoph Lameter
Core code for mmu notifiers.

Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
Signed-off-by: Andrea Arcangeli <[EMAIL PROTECTED]>

---
 include/linux/list.h |   14 ++
 include/linux/mm_types.h |6 +
 include/linux/mmu_notifier.h |  210 +++
 include/linux/page-flags.h   |   10 ++
 kernel/fork.c|2 
 mm/Kconfig   |4 
 mm/Makefile  |1 
 mm/mmap.c|2 
 mm/mmu_notifier.c|  101 
 9 files changed, 350 insertions(+)

Index: linux-2.6/include/linux/mm_types.h
===================================================================
--- linux-2.6.orig/include/linux/mm_types.h	2008-01-28 11:35:20.0 -0800
+++ linux-2.6/include/linux/mm_types.h	2008-01-28 11:35:22.0 -0800
@@ -153,6 +153,10 @@ struct vm_area_struct {
 #endif
 };
 
+struct mmu_notifier_head {
+   struct hlist_head head;
+};
+
 struct mm_struct {
struct vm_area_struct * mmap;   /* list of VMAs */
struct rb_root mm_rb;
@@ -219,6 +223,8 @@ struct mm_struct {
/* aio bits */
rwlock_tioctx_list_lock;
struct kioctx   *ioctx_list;
+
+   struct mmu_notifier_head mmu_notifier; /* MMU notifier list */
 };
 
 #endif /* _LINUX_MM_TYPES_H */
Index: linux-2.6/include/linux/mmu_notifier.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.0 +0000
+++ linux-2.6/include/linux/mmu_notifier.h	2008-01-28 11:43:03.0 -0800
@@ -0,0 +1,210 @@
+#ifndef _LINUX_MMU_NOTIFIER_H
+#define _LINUX_MMU_NOTIFIER_H
+
+/*
+ * MMU notifier
+ *
+ * Notifier functions for hardware and software that establish external
+ * references to pages of a Linux system. The notifier calls ensure that
+ * the external mappings are removed when the Linux VM removes memory ranges
+ * or individual pages from a process.
+ *
+ * These fall into two classes:
+ *
+ * 1. mmu_notifier
+ *
+ * These are callbacks registered with an mm_struct. If mappings are
+ * removed from an address space then callbacks are performed.
+ * Spinlocks must be held in order to walk reverse maps and the
+ * notifications are performed while the spinlock is held.
+ *
+ *
+ * 2. mmu_rmap_notifier
+ *
+ * Callbacks for subsystems that provide their own rmaps. These
+ * need to walk their own rmaps for a page. The invalidate_page
+ * callback is outside of locks so that we are not in a strictly
+ * atomic context (but we may be in a PF_MEMALLOC context if the
+ * notifier is called from reclaim code) and are able to sleep.
+ * Rmap notifiers need an extra page bit and are only available
+ * on 64 bit platforms. It is up to the subsystem to mark pages
+ * as PageExternalRmap as needed to trigger the callbacks. Pages
+ * must be marked dirty if dirty bits are set in the external
+ * pte.
+ */
+
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/rcupdate.h>
+#include <linux/mm_types.h>
+
+struct mmu_notifier_ops;
+
+struct mmu_notifier {
+   struct hlist_node hlist;
+   const struct mmu_notifier_ops *ops;
+};
+
+struct mmu_notifier_ops {
+   /*
+* Note: The mmu_notifier structure must be released with
+* call_rcu() since other processors are only guaranteed to
+* see the changes after a quiescent period.
+*/
+   void (*release)(struct mmu_notifier *mn,
+   struct mm_struct *mm);
+
+   int (*age_page)(struct mmu_notifier *mn,
+   struct mm_struct *mm,
+   unsigned long address);
+
+   void (*invalidate_page)(struct mmu_notifier *mn,
+   struct mm_struct *mm,
+   unsigned long address);
+
+   /*
+* lock indicates that the function is called under spinlock.
+*/
+   void (*invalidate_range)(struct mmu_notifier *mn,
+struct mm_struct *mm,
+unsigned long start, unsigned long end,
+int lock);
+};
+
+struct mmu_rmap_notifier_ops;
+
+struct mmu_rmap_notifier {
+   struct hlist_node hlist;
+   const struct mmu_rmap_notifier_ops *ops;
+};
+
+struct mmu_rmap_notifier_ops {
+   /*
+* Called with the page lock held after ptes are modified or removed
+* so that a subsystem with its own rmap's can remove remote ptes
+* mapping a page.
+*/
+   void (*invalidate_page)(struct mmu_rmap_notifier *mrn,
+   struct page *page);
+};
+
+#ifdef CONFIG_MMU_NOTIFIER
+
+/*
+ * Must hold the mmap_sem for write.
+ *
+ * RCU is used to traverse the list. A quiescent period needs to pass
+ * before the notifier is guaranteed to be visible to all threads
+ */
+extern void __mmu_notifier_register(struct mmu_notifier *mn,
+   

[patch 1/6] mmu_notifier: Core code

2008-01-28 Thread Christoph Lameter
Core code for mmu notifiers.

Signed-off-by: Christoph Lameter [EMAIL PROTECTED]
Signed-off-by: Andrea Arcangeli [EMAIL PROTECTED]

---
 include/linux/list.h |   14 ++
 include/linux/mm_types.h |6 +
 include/linux/mmu_notifier.h |  210 +++
 include/linux/page-flags.h   |   10 ++
 kernel/fork.c|2 
 mm/Kconfig   |4 
 mm/Makefile  |1 
 mm/mmap.c|2 
 mm/mmu_notifier.c|  101 
 9 files changed, 350 insertions(+)

Index: linux-2.6/include/linux/mm_types.h
===
--- linux-2.6.orig/include/linux/mm_types.h 2008-01-28 11:35:20.0 
-0800
+++ linux-2.6/include/linux/mm_types.h  2008-01-28 11:35:22.0 -0800
@@ -153,6 +153,10 @@ struct vm_area_struct {
 #endif
 };
 
+struct mmu_notifier_head {
+   struct hlist_head head;
+};
+
 struct mm_struct {
struct vm_area_struct * mmap;   /* list of VMAs */
struct rb_root mm_rb;
@@ -219,6 +223,8 @@ struct mm_struct {
/* aio bits */
rwlock_tioctx_list_lock;
struct kioctx   *ioctx_list;
+
+   struct mmu_notifier_head mmu_notifier; /* MMU notifier list */
 };
 
 #endif /* _LINUX_MM_TYPES_H */
Index: linux-2.6/include/linux/mmu_notifier.h
===
--- /dev/null   1970-01-01 00:00:00.0 +
+++ linux-2.6/include/linux/mmu_notifier.h  2008-01-28 11:43:03.0 
-0800
@@ -0,0 +1,210 @@
+#ifndef _LINUX_MMU_NOTIFIER_H
+#define _LINUX_MMU_NOTIFIER_H
+
+/*
+ * MMU motifier
+ *
+ * Notifier functions for hardware and software that establishes external
+ * references to pages of a Linux system. The notifier calls ensure that
+ * the external mappings are removed when the Linux VM removes memory ranges
+ * or individual pages from a process.
+ *
+ * These fall into two classes
+ *
+ * 1. mmu_notifier
+ *
+ * These are callbacks registered with an mm_struct. If mappings are
+ * removed from an address space then callbacks are performed.
+ * Spinlocks must be held in order to the walk reverse maps and the
+ * notifications are performed while the spinlock is held.
+ *
+ *
+ * 2. mmu_rmap_notifier
+ *
+ * Callbacks for subsystems that provide their own rmaps. These
+ * need to walk their own rmaps for a page. The invalidate_page
+ * callback is outside of locks so that we are not in a strictly
+ * atomic context (but we may be in a PF_MEMALLOC context if the
+ * notifier is called from reclaim code) and are able to sleep.
+ * Rmap notifiers need an extra page bit and are only available
+ * on 64 bit platforms. It is up to the subsystem to mark pags
+ * as PageExternalRmap as needed to trigger the callbacks. Pages
+ * must be marked dirty if dirty bits are set in the external
+ * pte.
+ */
+
+#include linux/list.h
+#include linux/spinlock.h
+#include linux/rcupdate.h
+#include linux/mm_types.h
+
+struct mmu_notifier_ops;
+
+struct mmu_notifier {
+   struct hlist_node hlist;
+   const struct mmu_notifier_ops *ops;
+};
+
+struct mmu_notifier_ops {
+   /*
+* Note: The mmu_notifier structure must be released with
+* call_rcu() since other processors are only guaranteed to
+* see the changes after a quiescent period.
+*/
+   void (*release)(struct mmu_notifier *mn,
+   struct mm_struct *mm);
+
+   int (*age_page)(struct mmu_notifier *mn,
+   struct mm_struct *mm,
+   unsigned long address);
+
+   void (*invalidate_page)(struct mmu_notifier *mn,
+   struct mm_struct *mm,
+   unsigned long address);
+
+   /*
+* lock indicates that the function is called under spinlock.
+*/
+   void (*invalidate_range)(struct mmu_notifier *mn,
+struct mm_struct *mm,
+unsigned long start, unsigned long end,
+int lock);
+};
+
+struct mmu_rmap_notifier_ops;
+
+struct mmu_rmap_notifier {
+   struct hlist_node hlist;
+   const struct mmu_rmap_notifier_ops *ops;
+};
+
+struct mmu_rmap_notifier_ops {
+   /*
+* Called with the page lock held after ptes are modified or removed
+* so that a subsystem with its own rmaps can remove remote ptes
+* mapping a page.
+*/
+   void (*invalidate_page)(struct mmu_rmap_notifier *mrn,
+   struct page *page);
+};
+
+#ifdef CONFIG_MMU_NOTIFIER
+
+/*
+ * Must hold the mmap_sem for write.
+ *
+ * RCU is used to traverse the list. A quiescent period needs to pass
+ * before the notifier is guaranteed to be visible to all threads.
+ */
+extern void 
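
A minimal sketch of how a subsystem might hook into the first class of
notifier may help here; it is not part of the patch. struct my_dev, my_ops,
my_invalidate_range() and my_dev_attach() are invented names, and the
(mn, mm) signature of __mmu_notifier_register() -- the registration entry
point named elsewhere in the thread -- is assumed; only the mmu_notifier
types and the mmap_sem-held-for-write rule come from the patch itself.

	#include <linux/mmu_notifier.h>
	#include <linux/sched.h>

	struct my_dev {
		struct mmu_notifier mn;	/* embedded; container_of() recovers my_dev */
	};

	static void my_invalidate_range(struct mmu_notifier *mn,
					struct mm_struct *mm,
					unsigned long start, unsigned long end,
					int lock)
	{
		/* Tear down any external (device) mappings of [start, end). */
	}

	static const struct mmu_notifier_ops my_ops = {
		.invalidate_range = my_invalidate_range,
	};

	static void my_dev_attach(struct my_dev *dev, struct mm_struct *mm)
	{
		dev->mn.ops = &my_ops;
		down_write(&mm->mmap_sem);	/* registration requires mmap_sem for write */
		__mmu_notifier_register(&dev->mn, mm);
		up_write(&mm->mmap_sem);
	}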

Re: [patch 1/6] mmu_notifier: Core code

2008-01-28 Thread Christoph Lameter
mmu core: Need to use hlist_del

Wrong type of list del in mmu_notifier_release()

Signed-off-by: Christoph Lameter [EMAIL PROTECTED]

---
 mm/mmu_notifier.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux-2.6/mm/mmu_notifier.c
===
--- linux-2.6.orig/mm/mmu_notifier.c	2008-01-28 14:02:18.000000000 -0800
+++ linux-2.6/mm/mmu_notifier.c	2008-01-28 14:02:30.000000000 -0800
@@ -23,7 +23,7 @@ void mmu_notifier_release(struct mm_stru
				  &mm->mmu_notifier.head, hlist) {
		if (mn->ops->release)
			mn->ops->release(mn, mm);
-		hlist_del(&mn->hlist);
+		hlist_del_rcu(&mn->hlist);
}
rcu_read_unlock();
synchronize_rcu();

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
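
The distinction matters because readers traverse this list locklessly:
hlist_del() poisons the removed entry's next pointer, so a concurrent
reader that has already reached the entry crashes when it steps forward,
while hlist_del_rcu() leaves the next pointer intact so in-flight readers
can finish, and the entry is freed only after a grace period. A hedged
sketch of such a reader (walk_notifiers() is an invented name; the real
callers are the invalidate paths that walk mm->mmu_notifier.head the same
way):

	#include <linux/rcupdate.h>
	#include <linux/mmu_notifier.h>

	static void walk_notifiers(struct mm_struct *mm, unsigned long address)
	{
		struct mmu_notifier *mn;
		struct hlist_node *n;

		rcu_read_lock();
		hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier.head, hlist)
			if (mn->ops->invalidate_page)
				mn->ops->invalidate_page(mn, mm, address);
		rcu_read_unlock();
	}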


Re: [patch 1/6] mmu_notifier: Core code

2008-01-28 Thread Robin Holt
 > +void mmu_notifier_release(struct mm_struct *mm)
...
 > +	hlist_for_each_entry_safe_rcu(mn, n, t,
 > +				  &mm->mmu_notifier.head, hlist) {
 > +		if (mn->ops->release)
 > +			mn->ops->release(mn, mm);
 > +		hlist_del(&mn->hlist);

USE_AFTER_FREE!!!  I made this same comment as well as other relevant
comments last week.


Robin
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
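
Spelled out: ->release() is exactly the hook where a subsystem is expected
to tear down -- and typically free -- the structure embedding the notifier,
after which the quoted hlist_del(&mn->hlist) runs on freed memory. A
hypothetical release callback (reusing the invented struct my_dev from the
sketch above) makes the ordering bug obvious:

	#include <linux/slab.h>

	static void my_release(struct mmu_notifier *mn, struct mm_struct *mm)
	{
		struct my_dev *dev = container_of(mn, struct my_dev, mn);

		kfree(dev);	/* mn is embedded in dev and dies with it... */
	}
	/* ...yet mmu_notifier_release() goes on to touch mn->hlist. */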


Re: [patch 1/6] mmu_notifier: Core code

2008-01-28 Thread Christoph Lameter
On Mon, 28 Jan 2008, Robin Holt wrote:

 > USE_AFTER_FREE!!!  I made this same comment as well as other relevant
 > comments last week.

Must have slipped somehow. Patch needs to be applied after the rcu fix.

Please repeat the other relevant comments if they are still relevant; I 
thought I had worked through them.



mmu_notifier_release: remove mmu_notifier struct from list before calling 
->release

Signed-off-by: Christoph Lameter [EMAIL PROTECTED]

---
 mm/mmu_notifier.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux-2.6/mm/mmu_notifier.c
===
--- linux-2.6.orig/mm/mmu_notifier.c	2008-01-28 17:17:05.000000000 -0800
+++ linux-2.6/mm/mmu_notifier.c	2008-01-28 17:17:10.000000000 -0800
@@ -21,9 +21,9 @@ void mmu_notifier_release(struct mm_stru
rcu_read_lock();
hlist_for_each_entry_safe_rcu(mn, n, t,
				  &mm->mmu_notifier.head, hlist) {
+		hlist_del_rcu(&mn->hlist);
		if (mn->ops->release)
			mn->ops->release(mn, mm);
-		hlist_del_rcu(&mn->hlist);
}
rcu_read_unlock();
synchronize_rcu();
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
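
With the entry unlinked before ->release() is called, the callback can
safely tear the notifier down without the list walk touching freed memory.
Per the header comment, though, the actual free must still be deferred with
call_rcu(), since other CPUs may be mid-traversal under rcu_read_lock(). A
sketch, again with the invented my_dev:

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct my_dev {
		struct mmu_notifier mn;
		struct rcu_head rcu;	/* defers the free past a grace period */
	};

	static void my_dev_free_rcu(struct rcu_head *rcu)
	{
		kfree(container_of(rcu, struct my_dev, rcu));
	}

	static void my_release(struct mmu_notifier *mn, struct mm_struct *mm)
	{
		struct my_dev *dev = container_of(mn, struct my_dev, mn);

		call_rcu(&dev->rcu, my_dev_free_rcu);
	}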