Quoting Andrea Arcangeli:
> Same as before, but on one hand ported to the #v7 API and on the
> other hand ported to latest kvm.git.
>
> Signed-off-by: Andrea Arcangeli <[EMAIL PROTECTED]>
>
> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> index 41962e7..e1287ab 100644
> --- a/arch/x86/kvm/Kconfig
Same as before, but on one hand ported to the #v7 API and on the
other hand ported to latest kvm.git.
Signed-off-by: Andrea Arcangeli <[EMAIL PROTECTED]>
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 41962e7..e1287ab 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -21
On Sat, Feb 16, 2008 at 05:51:38AM -0600, Robin Holt wrote:
> I am doing this in xpmem with a stack-based structure in the function
> calling get_user_pages. That structure describes the start and
> end address of the range we are doing the get_user_pages on. If an
> invalidate_range_begin comes
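
A minimal sketch of the pattern Robin describes, with every name
(gup_range_track, active_gups, mark_overlapping_gups) illustrative
rather than xpmem's actual code: the caller of get_user_pages()
publishes the start/end of its range in a stack-based record, and the
invalidate_range_begin() hook flags any overlapping record so the
caller can detect the race and redo the work.

#include <linux/list.h>
#include <linux/spinlock.h>

struct gup_range_track {
	struct list_head node;
	unsigned long start;	/* first address covered by get_user_pages() */
	unsigned long end;	/* one past the last address covered */
	int invalidated;	/* set by the notifier if the range is hit */
};

static LIST_HEAD(active_gups);
static DEFINE_SPINLOCK(active_gups_lock);

/* Called from the driver's invalidate_range_begin() notifier hook. */
static void mark_overlapping_gups(unsigned long start, unsigned long end)
{
	struct gup_range_track *t;

	spin_lock(&active_gups_lock);
	list_for_each_entry(t, &active_gups, node)
		if (t->start < end && start < t->end)
			t->invalidated = 1;	/* caller must redo get_user_pages() */
	spin_unlock(&active_gups_lock);
}

Because the record lives on the caller's stack, registering and
unregistering it are just list add/remove operations around the
get_user_pages() call, with no allocation in the invalidate path.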
On Sat, Feb 16, 2008 at 03:08:17AM -0800, Andrew Morton wrote:
> On Sat, 16 Feb 2008 11:48:27 +0100 Andrea Arcangeli <[EMAIL PROTECTED]> wrote:
>
> > +void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
> > +					       struct mm_struct *mm,
> > +					       unsigned long start,
> > +					       unsigned long end)
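
For orientation only, a hedged sketch of what a callback of the quoted
shape commonly does on the KVM side: close the invalidation window
opened by invalidate_range_begin() and bump a sequence counter so page
faults that raced with the invalidate retry. The field names
(mmu_notifier_seq, mmu_notifier_count) follow what mainline KVM later
adopted; the V7-era body may well differ.

void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
					   struct mm_struct *mm,
					   unsigned long start,
					   unsigned long end)
{
	struct kvm *kvm = container_of(mn, struct kvm, mmu_notifier);

	spin_lock(&kvm->mmu_lock);
	/*
	 * Bump the sequence so a fault that looked up its pfn while the
	 * invalidate was in flight notices and retries ...
	 */
	kvm->mmu_notifier_seq++;
	/* ... and re-open the fault path closed by range_begin(). */
	kvm->mmu_notifier_count--;
	spin_unlock(&kvm->mmu_lock);
}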
On Sat, Feb 16, 2008 at 11:48:27AM +0100, Andrea Arcangeli wrote:
> The two patches below enable KVM to swap the guest physical memory
> through Christoph's V7.
>
> There's one last _purely_theoretical_ race condition I figured out and
> that I'm wondering how to best fix. The race condition worst case is
> that a few guest physical pages could remain pinned by sptes.
On Sat, 16 Feb 2008 11:48:27 +0100 Andrea Arcangeli <[EMAIL PROTECTED]> wrote:
> +void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
> +					    struct mm_struct *mm,
> +					    unsigned long start,
> +					    unsigned long end)
The two patches below enable KVM to swap the guest physical memory
through Christoph's V7.
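
In outline, the enabling side amounts to hooking a struct mmu_notifier
into the guest's mm so the core VM calls back into KVM before unmapping
a page it wants to swap out. A minimal sketch against the mmu notifier
API as it was later merged; the exact set of ops wired up by the V7
patches is an assumption, and kvm_init_mmu_notifier is a hypothetical
helper name.

static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {
	.invalidate_range_start	= kvm_mmu_notifier_invalidate_range_start,
	.invalidate_range_end	= kvm_mmu_notifier_invalidate_range_end,
	.clear_flush_young	= kvm_mmu_notifier_clear_flush_young,
};

static int kvm_init_mmu_notifier(struct kvm *kvm)
{
	kvm->mmu_notifier.ops = &kvm_mmu_notifier_ops;
	/* From here on, core mm invokes the ops before unmapping pages. */
	return mmu_notifier_register(&kvm->mmu_notifier, current->mm);
}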
There's one last _purely_theoretical_ race condition I figured out and
that I'm wondering how to best fix. The race condition worst case is
that a few guest physical pages could remain pinned by sptes. The
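
For reference, a hedged sketch of the sequence-counter guard mainline
KVM later used against races of this shape: the fault path samples
mmu_notifier_seq before the sleeping gfn-to-pfn lookup, then rechecks
it under mmu_lock before installing the spte, unpinning and retrying
if an invalidate ran in between. gfn_to_pfn() and
kvm_release_pfn_clean() are real KVM helpers; map_gfn_checked and the
field names follow later mainline KVM, not these patches.

static int map_gfn_checked(struct kvm *kvm, gfn_t gfn)
{
	unsigned long mmu_seq;
	pfn_t pfn;

retry:
	mmu_seq = kvm->mmu_notifier_seq;
	smp_rmb();			/* read the seq before the pfn lookup */

	pfn = gfn_to_pfn(kvm, gfn);	/* may sleep; pins the page */

	spin_lock(&kvm->mmu_lock);
	if (mmu_seq != kvm->mmu_notifier_seq) {
		/* An invalidate ran while we slept: unpin and retry. */
		spin_unlock(&kvm->mmu_lock);
		kvm_release_pfn_clean(pfn);
		goto retry;
	}
	/* ... install the spte for pfn here ... */
	spin_unlock(&kvm->mmu_lock);
	return 0;
}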