On Mon, Jan 14, 2008 at 04:09:03PM +0200, Avi Kivity wrote:
> Marcelo Tosatti wrote:
>>>  +static void unmap_spte(struct kvm *kvm, u64 *spte)
>>> +{
>>> +   struct page *page = pfn_to_page((*spte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT);
>>> +   get_page(page);
>>> +   rmap_remove(kvm, spte);
>>> +   set_shadow_pte(spte, shadow_trap_nonpresent_pte);
>>> +   kvm_flush_remote_tlbs(kvm);
>>> +   __free_page(page);
>>> +}
>>> +
>>> +void kvm_rmap_unmap_gfn(struct kvm *kvm, gfn_t gfn)
>>> +{
>>> +   unsigned long *rmapp;
>>> +   u64 *spte, *curr_spte;
>>> +
>>> +   spin_lock(&kvm->mmu_lock);
>>> +   gfn = unalias_gfn(kvm, gfn);
>>> +   rmapp = gfn_to_rmap(kvm, gfn);
>>>     
>>
>> The alias and memslot maps are protected only by mmap_sem, so you
>> should make kvm_set_memory_region/set_memory_alias grab the mmu spinlock
>> in addition to mmap_sem in write mode.
>>
>> kvm_mmu_zap_all() grabs the mmu lock... that should probably move up
>> into the caller.
>>
>>   
>
> Aren't mmu notifiers called with mmap_sem held for read?
>
> Maybe not from the swap path?

Good point, the swap path isn't covered by mmap_sem, so Marcelo's
right: I need to fix up the locking a bit.
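
For the archives, Marcelo's suggestion would look roughly like the
sketch below (untested, against the then-current tree; the exact
function names and surrounding code are assumptions). The idea is that
writers take mmu_lock in addition to mmap_sem held for write, so that
readers like kvm_rmap_unmap_gfn, which run from the swap path without
mmap_sem, still see a consistent memslot/alias view. Any sleeping
allocations have to stay outside the spinlock, and the
spin_lock/spin_unlock pair inside kvm_mmu_zap_all() would be hoisted
into this caller:

	down_write(&current->mm->mmap_sem);
	/* sleeping allocations for the new memslot happen here,
	 * before the spinlock is taken */
	spin_lock(&kvm->mmu_lock);
	/* publish the updated memslot/alias maps */
	kvm_mmu_zap_all(kvm);	/* caller now holds mmu_lock */
	spin_unlock(&kvm->mmu_lock);
	up_write(&current->mm->mmap_sem);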

_______________________________________________
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel