Andrea Arcangeli wrote:
> This adds locking to the memslots so they can be looked up with only
> the mmu_lock. Entries whose memslot->userspace_addr is not yet set have
> to be ignored because they're not fully inserted yet.
>
>   
What is the motivation for this?  Calls from mmu notifiers that don't 
have mmap_sem held?
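
If it is the notifiers, I can see the need: something like the sketch
below runs without any guarantee that mmap_sem is held, so the memslot
array has to be safe to browse under mmu_lock alone. (This is just my
reading of the intent; kvm_unmap_hva is a placeholder name for the rmap
walker, and the mmu_notifier embedded in struct kvm is assumed from the
rest of the series.)

    static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
                                                 struct mm_struct *mm,
                                                 unsigned long address)
    {
            struct kvm *kvm = container_of(mn, struct kvm, mmu_notifier);

            /*
             * Called from the VM during rmap walks, possibly without
             * mmap_sem held, so only mmu_lock can protect the memslot
             * lookup here.
             */
            spin_lock(&kvm->mmu_lock);
            kvm_unmap_hva(kvm, address);   /* placeholder: walks memslots/rmaps */
            spin_unlock(&kvm->mmu_lock);
    }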


>  
>       /* Allocate page dirty bitmap if needed */
> @@ -311,14 +320,18 @@ int __kvm_set_memory_region(struct kvm *kvm,
>               memset(new.dirty_bitmap, 0, dirty_bytes);
>       }
>  
> +     spin_lock(&kvm->mmu_lock);
>       if (mem->slot >= kvm->nmemslots)
>               kvm->nmemslots = mem->slot + 1;
>  
>       *memslot = new;
> +     spin_unlock(&kvm->mmu_lock);
>  
>       r = kvm_arch_set_memory_region(kvm, mem, old, user_alloc);
>       if (r) {
> +             spin_lock(&kvm->mmu_lock);
>               *memslot = old;
> +             spin_unlock(&kvm->mmu_lock);
>               goto out_free;
>       }
>  
>   

This is arch-independent code; I'm surprised mmu_lock is visible here.

What are the new lookup rules?  We don't hold mmu_lock everywhere we 
look up a gfn, do we?
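
If the rule is "hold mmu_lock (or mmap_sem for write, which excludes
__kvm_set_memory_region) while browsing the slots, and skip slots whose
userspace_addr isn't set yet", I'd expect the lookup to end up looking
roughly like this (a sketch, not from the patch; the function name is
mine, and the skip test assumes userspace_addr is the last field to
become visible):

    static struct kvm_memory_slot *locked_gfn_to_memslot(struct kvm *kvm,
                                                         gfn_t gfn)
    {
            int i;

            /* Caller must hold kvm->mmu_lock. */
            for (i = 0; i < kvm->nmemslots; ++i) {
                    struct kvm_memory_slot *slot = &kvm->memslots[i];

                    /* Not fully inserted yet (see the changelog above). */
                    if (!slot->userspace_addr)
                            continue;
                    if (gfn >= slot->base_gfn &&
                        gfn < slot->base_gfn + slot->npages)
                            return slot;
            }
            return NULL;
    }

But that would mean converting every existing gfn_to_memslot() caller,
which is why I'm asking what the intended rules are.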


-- 
error compiling committee.c: too many arguments to function

