David Chinner wrote:
> Nick, Jeremy, (others?) any objections to this approach to solve
> the problem?
Seems sound in principle. I think Nick's shootdown-stray-mappings mm
API call is a better long-term answer, but this will do for now. I'll
test it out today.
J
On Tue, Oct 23, 2007 at 10:36:41AM +1000, David Chinner wrote:
On Tue, Oct 23, 2007 at 01:35:14AM +0200, Andi Kleen wrote:
On Tue, Oct 23, 2007 at 08:32:25AM +1000, David Chinner wrote:
Could vmap()/vunmap() take references to the pages that are mapped? That
way delaying the unmap would
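David's suggestion above can be sketched in miniature. The following is a hypothetical user-space model, not the real kernel API (the kernel analogue of pin/unpin would be get_page()/put_page()): vmap() takes a reference on each page it maps, vunmap() drops it, and the allocator only recycles a page whose pin count has reached zero, so a delayed unmap can no longer leave a live mapping to a freed page.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model; names are illustrative, not kernel API. */
struct page {
    int pincount;   /* mappings currently holding this page */
};

/* vmap(): take a reference on every page handed to it. */
static void vmap_pin(struct page **pages, size_t n)
{
    for (size_t i = 0; i < n; i++)
        pages[i]->pincount++;
}

/* vunmap(): drop the references, however late it finally runs. */
static void vunmap_unpin(struct page **pages, size_t n)
{
    for (size_t i = 0; i < n; i++)
        pages[i]->pincount--;
}

/* The allocator may recycle a page only once no mapping pins it. */
static int page_recyclable(const struct page *p)
{
    return p->pincount == 0;
}
```

The cost, as the thread goes on to discuss, is that pages pinned this way stay out of circulation until the batched vunmap finally runs.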
On Tue, Oct 23, 2007 at 10:36:41AM +1000, David Chinner wrote:
That doesn't mean it is correct.
> Right, but it also points to the fact that it's not causing problems
> for 99.999% of people out there.
So you're waiting for someone to take months to debug this again?
You mean like vmap() could
On Tue, Oct 23, 2007 at 11:30:35AM +0200, Andi Kleen wrote:
On Tue, Oct 23, 2007 at 05:04:14PM +1000, David Chinner wrote:
You mean like vmap() could record the pages passed to it in the
area->pages array, and we walk and release them in __vunmap() like it
already does for vfree()?
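As a sketch of the bookkeeping being proposed (illustrative stand-in types and names, assuming the vfree()-style pattern rather than quoting the real mm/vmalloc.c code): vmap() records the mapped pages in the area structure, and the unmap path walks that array and releases each page, just as __vunmap() already does for vfree().

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-ins, not the real <linux/vmalloc.h> types. */
struct sim_page { int refcount; };

struct sim_vm_struct {
    struct sim_page **pages;   /* recorded at vmap() time */
    size_t nr_pages;
};

/* vmap(): remember which pages back the mapping, and pin them. */
static struct sim_vm_struct *sim_vmap(struct sim_page **pages, size_t n)
{
    struct sim_vm_struct *area = malloc(sizeof(*area));
    area->pages = malloc(n * sizeof(*pages));
    memcpy(area->pages, pages, n * sizeof(*pages));
    area->nr_pages = n;
    for (size_t i = 0; i < n; i++)
        pages[i]->refcount++;
    return area;
}

/* __vunmap(): walk area->pages and release them, vfree()-style. */
static void sim_vunmap(struct sim_vm_struct *area)
{
    for (size_t i = 0; i < area->nr_pages; i++)
        area->pages[i]->refcount--;
    free(area->pages);
    free(area);
}
```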
Andi Kleen wrote:
> It's hidden now so it doesn't cause any obvious failures any more. Just
> subtle ones, which is much worse.
>
I think anything detected by Xen is still classed as "obscure" ;)
> But why not just disable it? It's not critical functionality,
> just an optimization that unfortunately
dean gaudet wrote:
> sounds like a bug in xen to me :)
>
I explained at the head of this thread how and why Xen works in this
manner. It's certainly a change from native execution; whether you
consider it to be a bug is a different matter.
But it turns out that leaving stray mappings around
Jeremy Fitzhardinge <[EMAIL PROTECTED]> writes:
>
> Yes, that's precisely the problem. xfs does delay the unmap, leaving
> stray mappings, which upsets Xen.
Again, it's not just that it upsets Xen: keeping mappings to freed pages
is wrong generally and violates the x86 (and likely others like PPC)
Nick Piggin wrote:
You could call it a bug I think. I don't know much about Xen though,
whether or not it expects to be able to run an arbitrary OS kernel.
Xen's paravirtualized mode always requires a guest OS to be modified;
certainly some operating systems would be very hard to make work
On Mon, 15 Oct 2007, Nick Piggin wrote:
> Yes, as Dave said, vmap (more specifically: vunmap) is very expensive
> because it generally has to invalidate TLBs on all CPUs.
why is that? ignoring 32-bit archs we have heaps of address space
available... couldn't the kernel just burn address space
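The economics behind this question can be shown with a toy model (hypothetical numbers and names; roughly the trade-off both the XFS batching and Nick's lazy-unmap work exploit): each vunmap only becomes safe after a cross-CPU TLB flush, so deferring unmapped ranges and purging a whole batch with one flush amortizes the expensive part, at the price of "burning" address space on not-yet-purged ranges.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model: numbers and names are illustrative, not kernel code. */
#define PURGE_BATCH 64   /* lazily-unmapped regions held before purging */

static size_t pending_regions;  /* address space burned awaiting a flush */
static size_t tlb_flushes;      /* expensive all-CPU invalidations issued */

/* One global flush retires the entire pending batch. */
static void purge_lazy_regions(void)
{
    tlb_flushes++;
    pending_regions = 0;
}

/* Lazy vunmap: queue the region instead of flushing immediately. */
static void lazy_vunmap_one(void)
{
    if (++pending_regions >= PURGE_BATCH)
        purge_lazy_regions();
}

/* Eager unmapping would cost one global flush per vunmap;
 * batching cuts that by a factor of PURGE_BATCH. */
static size_t flushes_for(size_t unmaps)
{
    pending_regions = 0;
    tlb_flushes = 0;
    for (size_t i = 0; i < unmaps; i++)
        lazy_vunmap_one();
    return tlb_flushes;
}
```

The catch, which is what bites Xen here, is that the pending regions remain mapped (stray) until the purge actually happens.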
On 10/15/07, Andi Kleen [EMAIL PROTECTED] wrote:
Hmm, OK. It looks like DRM vmallocs memory (which gives highmem).
I meant I'm not sure if it uses that memory uncached. I admit
not quite understanding that code. There used to be at least
one place where it set UC for a user mapping though.
On Tue, Oct 16, 2007 at 12:56:46AM +1000, Nick Piggin wrote:
> Is this true even if you don't write through those old mappings?
I think it happened for reads too. It is a little counterintuitive
because in theory the CPU doesn't need to write back non-dirty lines,
but in the one case which took
David Chinner <[EMAIL PROTECTED]> writes:
>
> And yes, we delay unmapping pages until we have a batch of them
> to unmap. vmap and vunmap do not scale, so this batching helps
> alleviate some of the worst of the problems.
You're keeping vmaps around for already freed pages?
That will be a big
On Mon, Oct 15, 2007 at 02:25:46PM +1000, David Chinner wrote:
Hm, well I saw the problem with a filesystem made with mkfs.xfs with no
options, so there must be at least *some* vmapping going on there.
Sorry - I should have been more precise - vmap should never be used in
performance
David Chinner wrote:
> With defaults - little effect as vmap should never be used. It's
> only when you start using larger block sizes for metadata that this
> becomes an issue. The CONFIG_XEN workaround should be fine until we
> get a proper vmap cache
Hm, well I saw the problem with a
David Chinner wrote:
> You mean xfs_buf.c.
>
Yes, sorry.
> And yes, we delay unmapping pages until we have a batch of them
> > to unmap. vmap and vunmap do not scale, so this batching helps
> alleviate some of the worst of the problems.
>
How much performance does it cost? What kind of
Nick Piggin wrote:
Yes, as Dave said, vmap (more specifically: vunmap) is very expensive
because it generally has to invalidate TLBs on all CPUs.
I see.
I'm looking at some more general solutions to this (already have some
batching / lazy unmapping that replaces the XFS specific one),
Nick Piggin wrote:
Yeah, it would be possible. The easiest way would just be to shoot down
all lazy vmaps (because you're doing the global IPIs anyway, which are
the expensive thing, at which point you may as well purge the rest of
your lazy mappings).
Sure.
If it is sufficiently rare,
On Sun, Oct 14, 2007 at 08:42:34PM -0700, Jeremy Fitzhardinge wrote:
Nick Piggin wrote:
That's not going to
happen for at least a cycle or two though, so in the meantime maybe
an ifdef for that XFS vmap batching code would help?
For now I've proposed a patch to simply eagerly
Hi Dave & other XFS folk,
I'm tracking down a bug which appears to be a bad interaction between
XFS and Xen. It looks like XFS is holding RW mappings on free pages,
which Xen is trying to get an exclusive RO mapping on so it can turn
them into pagetables.
I'm assuming the pages are actually
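Jeremy's constraint can be stated as a tiny invariant check (a simplified model, not Xen's actual hypercall interface): Xen will accept a page as a pagetable only if no writable (RW) mapping of it survives anywhere, which is exactly the invariant that XFS's stray delayed-vunmap mappings violate.

```c
#include <assert.h>

/* Simplified model of the constraint, not Xen's real interface. */
struct guest_page {
    int rw_mappings;   /* live writable mappings of this page */
};

/* What a stray (delayed) vmap leaves behind on a freed page. */
static void map_rw(struct guest_page *p)   { p->rw_mappings++; }
static void unmap_rw(struct guest_page *p) { p->rw_mappings--; }

/* Xen's check before turning the page into a pagetable: it must
 * hold the only view, i.e. no RW mapping may remain anywhere. */
static int can_pin_as_pagetable(const struct guest_page *p)
{
    return p->rw_mappings == 0;
}
```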
Jeremy Fitzhardinge wrote:
I guess we could create a special-case interface to do the same thing
with XFS mappings, but it would be nicer to have something more generic.
Is my analysis correct? Or should XFS not be holding stray mappings?
Or is there already some kind of generic mechanism I