On Tue, Nov 25, 2008 at 04:38:13PM +0200, Avi Kivity wrote:
> Marcelo Tosatti wrote:
>> *shadow_pte can point to a different page if the guest updates
>> pagetable, there is a fault before resync, the fault updates the
>> spte with new gfn (and pfn) via mmu_set_spte. In which case the gfn
>> cache is updated since:
>>
>>                     } else if (pfn != spte_to_pfn(*shadow_pte)) {
>>                         printk("hfn old %lx new %lx\n",
>>                                  spte_to_pfn(*shadow_pte), pfn);
>>                         rmap_remove(vcpu->kvm, shadow_pte);
>>   
>
> Okay.  Please resend but without the reversal of can_unsync, it will  
> make a more readable patch.  If you like, send a follow on that only  
> does the reversal.

Here it goes.

KVM: MMU: optimize set_spte for page sync

The write protect verification in set_spte is unnecessary for page sync.

It is guaranteed that if the unsync spte was writable, the target page
does not have a write protected shadow (if it had, the spte would have
been write protected earlier, under mmu_lock, by rmap_write_protect).

The same reasoning applies to mark_page_dirty: the gfn has already been
marked dirty via the pagefault path.

The cost of the hash table and memslot lookups is quite significant if
the workload is pagetable-write intensive, resulting in increased
mmu_lock contention.

Signed-off-by: Marcelo Tosatti <[EMAIL PROTECTED]>

Index: kvm/arch/x86/kvm/mmu.c
===================================================================
--- kvm.orig/arch/x86/kvm/mmu.c
+++ kvm/arch/x86/kvm/mmu.c
@@ -1593,6 +1593,15 @@ static int set_spte(struct kvm_vcpu *vcp
 
                spte |= PT_WRITABLE_MASK;
 
+               /*
+                * Optimization: for pte sync, if spte was writable the hash
+                * lookup is unnecessary (and expensive). Write protection
+                * is responsibility of mmu_get_page / kvm_sync_page.
+                * Same reasoning can be applied to dirty page accounting.
+                */
+               if (!can_unsync && is_writeble_pte(*shadow_pte))
+                       goto set_pte;
+
                if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
                        pgprintk("%s: found shadow page for %lx, marking ro\n",
                                 __func__, gfn);
