Re: [PATCH 2/2] KVM: MMU: Flush TLBs only once in invlpg() before releasing mmu_lock

2012-02-14 Thread Andrea Arcangeli
On Tue, Feb 14, 2012 at 01:56:17PM +0900, Takuya Yoshikawa wrote:
> (2012/02/14 13:36), Takuya Yoshikawa wrote:
>
>> BTW, do you think that kvm_mmu_flush_tlb() should be moved inside of the
>> mmu_lock critical section?
>
> Ah, forget about this.  Trivially no.

Yes, the reason is that it's a local flush, and guest mode isn't running
while we're in that function, so it's ok to run it later.

About the other change you did in this patch 2/2: I can't find the code
you're patching in the 3.2 upstream source. When I added the tlb flush to
invlpg, I immediately used a cumulative need_flush at the end (before
releasing mmu_lock, of course).

	if (need_flush)
		kvm_flush_remote_tlbs(vcpu->kvm);
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 2/2] KVM: MMU: Flush TLBs only once in invlpg() before releasing mmu_lock

2012-02-13 Thread Takuya Yoshikawa

(Sorry for the delay, I was in bad shape this weekend.)

I am sorry, I see what I was misreading.
My eyes misread kvm_mmu_flush_tlb() as kvm_flush_remote_tlbs().
That's why I could not understand what you said, really sorry.

(2012/02/10 16:42), Xiao Guangrong wrote:

>>> It is obviously wrong; I do not think all TLBs always need to be flushed...
>>
>> What do you mean by "obviously wrong"?
>
> In the current code, all TLBs are flushed only when a spte is zapped, but
> after your change, they are always flushed.

The description above probably tells you what I was thinking.

>> Even before this patch, we were always flushing TLBs from the caller.
>
> Oh, could you please tell me where TLBs can be flushed in this path other
> than when a spte is zapped?

Ditto.

>> I have a question:  your patches apparently changed the timing of the TLB
>> flush, but all I could see from the changelogs was:
>>
>>   KVM: MMU: cleanup FNAME(invlpg)
>>
>>   Directly Use mmu_page_zap_pte to zap spte in FNAME(invlpg), also remove
>>   the same code between FNAME(invlpg) and FNAME(sync_page)
>
> This patch does not change the logic; the timing of the TLB flush is also
> unchanged. It just calls kvm_flush_remote_tlbs directly when a spte is
> zapped.
>
>>   KVM: MMU: fast prefetch spte on invlpg path
>>
>>   Fast prefetch spte for the unsync shadow page on invlpg path
>
> This patch did not change where kvm_flush_remote_tlbs is called.
>
> What caused your confusion?

Thank you for your explanation!

Probably I should reread the code after getting enough sleep.

BTW, do you think that kvm_mmu_flush_tlb() should be moved inside of the
mmu_lock critical section?


Takuya


Re: [PATCH 2/2] KVM: MMU: Flush TLBs only once in invlpg() before releasing mmu_lock

2012-02-13 Thread Takuya Yoshikawa

(2012/02/14 13:36), Takuya Yoshikawa wrote:

> BTW, do you think that kvm_mmu_flush_tlb() should be moved inside of the
> mmu_lock critical section?

Ah, forget about this.  Trivially no.
I really need to take a rest.

Sorry,
Takuya


[PATCH 2/2] KVM: MMU: Flush TLBs only once in invlpg() before releasing mmu_lock

2012-02-09 Thread Takuya Yoshikawa
This function's TLB flush has been moved around several times in the past:

1.
  commit 4539b35881ae9664b0e2953438dd83f5ee02c0b4
  KVM: Fix missing smp tlb flush in invlpg

inserted it in the critical section.

2.
  commit 505aef8f30a95f7e4abf2c07e54ded1521587ba0
  KVM: MMU: cleanup FNAME(invlpg)

moved it inside the loop:  right after mmu_page_zap_pte().

3.
  commit f57f2ef58f6703e6df70ed52a198920cb3e8edba
  KVM: MMU: fast prefetch spte on invlpg path

inserted update_pte() after it.

In addition, the caller, kvm_mmu_invlpg(), still does a flush of its own.

This patch simplifies the logic and removes this redundancy.

Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
 arch/x86/kvm/mmu.c         |    1 -
 arch/x86/kvm/paging_tmpl.h |    6 ++++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ae76cc3..ea27ee3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3769,7 +3769,6 @@ EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
 	vcpu->arch.mmu.invlpg(vcpu, gva);
-	kvm_mmu_flush_tlb(vcpu);
 	++vcpu->stat.invlpg;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 1561028..69d06f5 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -682,6 +682,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 	mmu_topup_memory_caches(vcpu);
 
 	spin_lock(&vcpu->kvm->mmu_lock);
+
 	for_each_shadow_entry(vcpu, gva, iterator) {
 		level = iterator.level;
 		sptep = iterator.sptep;
@@ -697,8 +698,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 		pte_gpa = FNAME(get_level1_sp_gpa)(sp);
 		pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
 
-		if (mmu_page_zap_pte(vcpu->kvm, sp, sptep))
-			kvm_flush_remote_tlbs(vcpu->kvm);
+		mmu_page_zap_pte(vcpu->kvm, sp, sptep);
 
 		if (!rmap_can_add(vcpu))
 			break;
@@ -713,6 +713,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 		if (!is_shadow_present_pte(*sptep) || !sp->unsync_children)
 			break;
 	}
+
+	kvm_flush_remote_tlbs(vcpu->kvm);
 	spin_unlock(&vcpu->kvm->mmu_lock);
 }
 
 
-- 
1.7.5.4



Re: [PATCH 2/2] KVM: MMU: Flush TLBs only once in invlpg() before releasing mmu_lock

2012-02-09 Thread Xiao Guangrong
On 02/10/2012 02:29 PM, Takuya Yoshikawa wrote:

> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 1561028..69d06f5 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -682,6 +682,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>  	mmu_topup_memory_caches(vcpu);
>  
>  	spin_lock(&vcpu->kvm->mmu_lock);
> +
>  	for_each_shadow_entry(vcpu, gva, iterator) {
>  		level = iterator.level;
>  		sptep = iterator.sptep;
> @@ -697,8 +698,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>  		pte_gpa = FNAME(get_level1_sp_gpa)(sp);
>  		pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
>  
> -		if (mmu_page_zap_pte(vcpu->kvm, sp, sptep))
> -			kvm_flush_remote_tlbs(vcpu->kvm);
> +		mmu_page_zap_pte(vcpu->kvm, sp, sptep);
>  
>  		if (!rmap_can_add(vcpu))
>  			break;
> @@ -713,6 +713,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>  		if (!is_shadow_present_pte(*sptep) || !sp->unsync_children)
>  			break;
>  	}
> +
> +	kvm_flush_remote_tlbs(vcpu->kvm);
>  	spin_unlock(&vcpu->kvm->mmu_lock);

It is obviously wrong; I do not think all TLBs always need to be flushed...



Re: [PATCH 2/2] KVM: MMU: Flush TLBs only once in invlpg() before releasing mmu_lock

2012-02-09 Thread Takuya Yoshikawa

(2012/02/10 15:55), Xiao Guangrong wrote:

> On 02/10/2012 02:29 PM, Takuya Yoshikawa wrote:
>
>> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
>> index 1561028..69d06f5 100644
>> --- a/arch/x86/kvm/paging_tmpl.h
>> +++ b/arch/x86/kvm/paging_tmpl.h
>> @@ -682,6 +682,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>  	mmu_topup_memory_caches(vcpu);
>>  
>>  	spin_lock(&vcpu->kvm->mmu_lock);
>> +
>>  	for_each_shadow_entry(vcpu, gva, iterator) {
>>  		level = iterator.level;
>>  		sptep = iterator.sptep;
>> @@ -697,8 +698,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>  		pte_gpa = FNAME(get_level1_sp_gpa)(sp);
>>  		pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
>>  
>> -		if (mmu_page_zap_pte(vcpu->kvm, sp, sptep))
>> -			kvm_flush_remote_tlbs(vcpu->kvm);
>> +		mmu_page_zap_pte(vcpu->kvm, sp, sptep);
>>  
>>  		if (!rmap_can_add(vcpu))
>>  			break;
>> @@ -713,6 +713,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>  		if (!is_shadow_present_pte(*sptep) || !sp->unsync_children)
>>  			break;
>>  	}
>> +
>> +	kvm_flush_remote_tlbs(vcpu->kvm);
>>  	spin_unlock(&vcpu->kvm->mmu_lock);
>
> It is obviously wrong; I do not think all TLBs always need to be flushed...

What do you mean by "obviously wrong"?

Even before this patch, we were always flushing TLBs from the caller.

I have a question:  your patches apparently changed the timing of the TLB
flush, but all I could see from the changelogs was:

  KVM: MMU: cleanup FNAME(invlpg)

  Directly Use mmu_page_zap_pte to zap spte in FNAME(invlpg), also remove the
  same code between FNAME(invlpg) and FNAME(sync_page)

  KVM: MMU: fast prefetch spte on invlpg path

  Fast prefetch spte for the unsync shadow page on invlpg path

Please explain what your patches changed semantically before checking the
details of my patch.

Thanks,
Takuya


Re: [PATCH 2/2] KVM: MMU: Flush TLBs only once in invlpg() before releasing mmu_lock

2012-02-09 Thread Xiao Guangrong
On 02/10/2012 03:21 PM, Takuya Yoshikawa wrote:

> (2012/02/10 15:55), Xiao Guangrong wrote:
>> On 02/10/2012 02:29 PM, Takuya Yoshikawa wrote:
>>
>>> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
>>> index 1561028..69d06f5 100644
>>> --- a/arch/x86/kvm/paging_tmpl.h
>>> +++ b/arch/x86/kvm/paging_tmpl.h
>>> @@ -682,6 +682,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>>  	mmu_topup_memory_caches(vcpu);
>>>
>>>  	spin_lock(&vcpu->kvm->mmu_lock);
>>> +
>>>  	for_each_shadow_entry(vcpu, gva, iterator) {
>>>  		level = iterator.level;
>>>  		sptep = iterator.sptep;
>>> @@ -697,8 +698,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>>  		pte_gpa = FNAME(get_level1_sp_gpa)(sp);
>>>  		pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
>>>
>>> -		if (mmu_page_zap_pte(vcpu->kvm, sp, sptep))
>>> -			kvm_flush_remote_tlbs(vcpu->kvm);
>>> +		mmu_page_zap_pte(vcpu->kvm, sp, sptep);
>>>
>>>  		if (!rmap_can_add(vcpu))
>>>  			break;
>>> @@ -713,6 +713,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>>  		if (!is_shadow_present_pte(*sptep) || !sp->unsync_children)
>>>  			break;
>>>  	}
>>> +
>>> +	kvm_flush_remote_tlbs(vcpu->kvm);
>>>  	spin_unlock(&vcpu->kvm->mmu_lock);
>>
>> It is obviously wrong; I do not think all TLBs always need to be flushed...
>
> What do you mean by "obviously wrong"?

In the current code, all TLBs are flushed only when a spte is zapped, but
after your change, they are always flushed.

> Even before this patch, we were always flushing TLBs from the caller.

Oh, could you please tell me where TLBs can be flushed in this path other
than when a spte is zapped?

> I have a question:  your patches apparently changed the timing of the TLB
> flush, but all I could see from the changelogs was:
>
>   KVM: MMU: cleanup FNAME(invlpg)
>
>   Directly Use mmu_page_zap_pte to zap spte in FNAME(invlpg), also remove
>   the same code between FNAME(invlpg) and FNAME(sync_page)

This patch does not change the logic; the timing of the TLB flush is also
unchanged. It just calls kvm_flush_remote_tlbs directly when a spte is
zapped.

>   KVM: MMU: fast prefetch spte on invlpg path
>
>   Fast prefetch spte for the unsync shadow page on invlpg path

This patch did not change where kvm_flush_remote_tlbs is called.

What caused your confusion?
