Re: [PATCH 4/4] Enabling Access bit when doing memory swapping

2012-05-21 Thread Avi Kivity
On 05/21/2012 06:22 AM, Hao, Xudong wrote:
  -Original Message-
  From: Marcelo Tosatti [mailto:mtosa...@redhat.com]
  Sent: Friday, May 18, 2012 10:23 AM
  To: Xudong Hao
  Cc: a...@redhat.com; kvm@vger.kernel.org; linux-ker...@vger.kernel.org;
  Shan, Haitao; Zhang, Xiantao; Hao, Xudong
  Subject: Re: [PATCH 4/4] Enabling Access bit when doing memory swapping
  
  On Wed, May 16, 2012 at 09:12:30AM +0800, Xudong Hao wrote:
   Enabling Access bit when doing memory swapping.
  
   Signed-off-by: Haitao Shan haitao.s...@intel.com
   Signed-off-by: Xudong Hao xudong@intel.com
   ---
arch/x86/kvm/mmu.c |   13 +++++++------
arch/x86/kvm/vmx.c |    6 ++++--
2 files changed, 11 insertions(+), 8 deletions(-)
  
   diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
   index ff053ca..5f55f98 100644
   --- a/arch/x86/kvm/mmu.c
   +++ b/arch/x86/kvm/mmu.c
   @@ -1166,7 +1166,8 @@ static int kvm_age_rmapp(struct kvm *kvm,
  unsigned long *rmapp,
   int young = 0;
  
   /*
   -* Emulate the accessed bit for EPT, by checking if this page has
   +* In case of absence of EPT Access and Dirty Bits supports,
   +* emulate the accessed bit for EPT, by checking if this page has
* an EPT mapping, and clearing it if it does. On the next access,
* a new EPT mapping will be established.
* This has some overhead, but not as much as the cost of swapping
   @@ -1179,11 +1180,11 @@ static int kvm_age_rmapp(struct kvm *kvm,
  unsigned long *rmapp,
   while (spte) {
   int _young;
   u64 _spte = *spte;
   -   BUG_ON(!(_spte & PT_PRESENT_MASK));
   -   _young = _spte & PT_ACCESSED_MASK;
   +   BUG_ON(!is_shadow_present_pte(_spte));
   +   _young = _spte & shadow_accessed_mask;
   if (_young) {
   young = 1;
   -   clear_bit(PT_ACCESSED_SHIFT, (unsigned long *)spte);
   +   *spte &= ~shadow_accessed_mask;
   }
  
  Now a dirty bit can be lost. Is there a reason to remove the clear_bit?

 The clear_bit() is called in both shadow and EPT A/D mode; because
 PT_ACCESSED_SHIFT does not make sense for the EPT A/D bit, the code uses
 shadow_accessed_mask to mask the bit in both cases.

That doesn't answer the question.  An atomic operation is now non-atomic.

You can calculate shadow_accessed_bit and keep on using clear_bit(), or
switch to cmpxchg64(), but don't just drop the dirty bit here.
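
A minimal sketch of those two alternatives, reusing the names from the quoted
hunk (spte, _spte, young, shadow_accessed_mask) and assuming
shadow_accessed_mask has exactly one bit set, which holds for both
PT_ACCESSED_MASK and the EPT access bit; the two options are independent
illustrations, not a finished patch:

    /* Option 1: derive the bit index, keep the atomic clear_bit(). */
    if (_spte & shadow_accessed_mask) {
            young = 1;
            clear_bit(__ffs(shadow_accessed_mask),  /* 0-based bit number */
                      (unsigned long *)spte);
    }

    /* Option 2: a cmpxchg64() retry loop, so a dirty bit set by hardware
     * between the read and the write is never overwritten.
     */
    u64 old, new;

    do {
            old = *spte;
            if (!(old & shadow_accessed_mask))
                    break;
            young = 1;
            new = old & ~shadow_accessed_mask;
    } while (cmpxchg64(spte, old, new) != old);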

-- 
error compiling committee.c: too many arguments to function



RE: [PATCH 4/4] Enabling Access bit when doing memory swapping

2012-05-21 Thread Hao, Xudong
 -Original Message-
 From: Avi Kivity [mailto:a...@redhat.com]
 Sent: Monday, May 21, 2012 4:32 PM
 To: Hao, Xudong
 Cc: Marcelo Tosatti; Xudong Hao; kvm@vger.kernel.org;
 linux-ker...@vger.kernel.org; Shan, Haitao; Zhang, Xiantao
 Subject: Re: [PATCH 4/4] Enabling Access bit when doing memory swapping
 
 On 05/21/2012 06:22 AM, Hao, Xudong wrote:
   -Original Message-
   From: Marcelo Tosatti [mailto:mtosa...@redhat.com]
   Sent: Friday, May 18, 2012 10:23 AM
   To: Xudong Hao
   Cc: a...@redhat.com; kvm@vger.kernel.org; linux-ker...@vger.kernel.org;
   Shan, Haitao; Zhang, Xiantao; Hao, Xudong
   Subject: Re: [PATCH 4/4] Enabling Access bit when doing memory swapping
  
   On Wed, May 16, 2012 at 09:12:30AM +0800, Xudong Hao wrote:
Enabling Access bit when doing memory swapping.
   
Signed-off-by: Haitao Shan haitao.s...@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
---
 arch/x86/kvm/mmu.c |   13 +++++++------
 arch/x86/kvm/vmx.c |    6 ++++--
 2 files changed, 11 insertions(+), 8 deletions(-)
   
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ff053ca..5f55f98 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1166,7 +1166,8 @@ static int kvm_age_rmapp(struct kvm *kvm,
   unsigned long *rmapp,
int young = 0;
   
/*
-* Emulate the accessed bit for EPT, by checking if this page has
+* In case of absence of EPT Access and Dirty Bits supports,
+* emulate the accessed bit for EPT, by checking if this page has
 * an EPT mapping, and clearing it if it does. On the next access,
 * a new EPT mapping will be established.
 * This has some overhead, but not as much as the cost of swapping
@@ -1179,11 +1180,11 @@ static int kvm_age_rmapp(struct kvm *kvm,
   unsigned long *rmapp,
while (spte) {
int _young;
u64 _spte = *spte;
-   BUG_ON(!(_spte & PT_PRESENT_MASK));
-   _young = _spte & PT_ACCESSED_MASK;
+   BUG_ON(!is_shadow_present_pte(_spte));
+   _young = _spte & shadow_accessed_mask;
if (_young) {
young = 1;
-   clear_bit(PT_ACCESSED_SHIFT, (unsigned long *)spte);
+   *spte &= ~shadow_accessed_mask;
}
  
   Now a dirty bit can be lost. Is there a reason to remove the clear_bit?
 
  The clear_bit() is called in both shadow and EPT A/D mode; because
 PT_ACCESSED_SHIFT does not make sense for the EPT A/D bit, the code uses
 shadow_accessed_mask to mask the bit in both cases.
 
 That doesn't answer the question.  An atomic operation is now non-atomic.
 
 You can calculate shadow_accessed_bit and keep on using clear_bit(), or
 switch to cmpxchg64(), but don't just drop the dirty bit here.
 

I see your point. How about this change:

...
young = 1;
+if (enable_ept_ad_bits)
+clear_bit(ffs(shadow_accessed_mask), (unsigned long *)spte);
clear_bit(PT_ACCESSED_SHIFT, (unsigned long *)spte);
...

 --
 error compiling committee.c: too many arguments to function



Re: [PATCH 4/4] Enabling Access bit when doing memory swapping

2012-05-21 Thread Avi Kivity
On 05/21/2012 01:35 PM, Hao, Xudong wrote:
  
  That doesn't answer the question.  An atomic operation is now non-atomic.
  
  You can calculate shadow_accessed_bit and keep on using clear_bit(), or
  switch to cmpxchg64(), but don't just drop the dirty bit here.
  

 I see your point. How about this change:

 ...
 young = 1;
 +if (enable_ept_ad_bits)
 +clear_bit(ffs(shadow_accessed_mask), (unsigned long *)spte);

ffs() returns an off-by-one result, so this needs to be adjusted.  IIRC
bsfl is slow, but this shouldn't be a problem here.
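
Concretely, the corrected call might look like the sketch below; ffs() numbers
bits from 1, so either subtract one or use the 0-based __ffs() helper (this is
only an illustration of the adjustment, not the v3 that was to be posted):

    /* ffs() is 1-based: if bit 0 is set, ffs() returns 1. */
    clear_bit(ffs(shadow_accessed_mask) - 1, (unsigned long *)spte);

    /* Equivalently, with the 0-based helper: */
    clear_bit(__ffs(shadow_accessed_mask), (unsigned long *)spte);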


-- 
error compiling committee.c: too many arguments to function



RE: [PATCH 4/4] Enabling Access bit when doing memory swapping

2012-05-21 Thread Hao, Xudong
 -Original Message-
 From: Avi Kivity [mailto:a...@redhat.com]
 Sent: Monday, May 21, 2012 6:48 PM
 To: Hao, Xudong
 Cc: Marcelo Tosatti; Xudong Hao; kvm@vger.kernel.org;
 linux-ker...@vger.kernel.org; Shan, Haitao; Zhang, Xiantao
 Subject: Re: [PATCH 4/4] Enabling Access bit when doing memory swapping
 
 On 05/21/2012 01:35 PM, Hao, Xudong wrote:
  
   That doesn't answer the question.  An atomic operation is now
 non-atomic.
  
   You can calculate shadow_accessed_bit and keep on using clear_bit(), or
   switch to cmpxchg64(), but don't just drop the dirty bit here.
  
 
  I see your point. How about this change:
 
  ...
  young = 1;
  +if (enable_ept_ad_bits)
  +clear_bit(ffs(shadow_accessed_mask), (unsigned long
 *)spte);
 
 ffs() returns an off-by-one result, so this needs to be adjusted. 

Yes, it needs to be decreased by 1. I'll send a v3 of patch 4; any other comments?

 IIRC
 bsfl is slow, but this shouldn't be a problem here.
 

I do not know the story...

 
 --
 error compiling committee.c: too many arguments to function



Re: [PATCH 4/4] Enabling Access bit when doing memory swapping

2012-05-21 Thread Avi Kivity
On 05/21/2012 02:17 PM, Hao, Xudong wrote:
  -Original Message-
  From: Avi Kivity [mailto:a...@redhat.com]
  Sent: Monday, May 21, 2012 6:48 PM
  To: Hao, Xudong
  Cc: Marcelo Tosatti; Xudong Hao; kvm@vger.kernel.org;
  linux-ker...@vger.kernel.org; Shan, Haitao; Zhang, Xiantao
  Subject: Re: [PATCH 4/4] Enabling Access bit when doing memory swapping
  
  On 05/21/2012 01:35 PM, Hao, Xudong wrote:
   
That doesn't answer the question.  An atomic operation is now
  non-atomic.
   
You can calculate shadow_accessed_bit and keep on using clear_bit(), or
switch to cmpxchg64(), but don't just drop the dirty bit here.
   
  
   I see your point. How about this change:
  
   ...
   young = 1;
   +if (enable_ept_ad_bits)
   +clear_bit(ffs(shadow_accessed_mask), (unsigned long
  *)spte);
  
  ffs() returns an off-by-one result, so this needs to be adjusted. 

 Yes, it needs to be decreased by 1. I'll send a v3 of patch 4; any other
 comments?

I think it's fine.

  IIRC
  bsfl is slow, but this shouldn't be a problem here.
  

 I do not know the story...


No story, bsf is a relatively slow instruction, but it shouldn't affect
us here; this isn't a fast path and in any case it's only a few cycles.

-- 
error compiling committee.c: too many arguments to function



RE: [PATCH 4/4] Enabling Access bit when doing memory swapping

2012-05-20 Thread Hao, Xudong
 -Original Message-
 From: Marcelo Tosatti [mailto:mtosa...@redhat.com]
 Sent: Friday, May 18, 2012 10:23 AM
 To: Xudong Hao
 Cc: a...@redhat.com; kvm@vger.kernel.org; linux-ker...@vger.kernel.org;
 Shan, Haitao; Zhang, Xiantao; Hao, Xudong
 Subject: Re: [PATCH 4/4] Enabling Access bit when doing memory swapping
 
 On Wed, May 16, 2012 at 09:12:30AM +0800, Xudong Hao wrote:
  Enabling Access bit when doing memory swapping.
 
  Signed-off-by: Haitao Shan haitao.s...@intel.com
  Signed-off-by: Xudong Hao xudong@intel.com
  ---
   arch/x86/kvm/mmu.c |   13 +++++++------
   arch/x86/kvm/vmx.c |    6 ++++--
   2 files changed, 11 insertions(+), 8 deletions(-)
 
  diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
  index ff053ca..5f55f98 100644
  --- a/arch/x86/kvm/mmu.c
  +++ b/arch/x86/kvm/mmu.c
  @@ -1166,7 +1166,8 @@ static int kvm_age_rmapp(struct kvm *kvm,
 unsigned long *rmapp,
  int young = 0;
 
  /*
  -* Emulate the accessed bit for EPT, by checking if this page has
  +* In case of absence of EPT Access and Dirty Bits supports,
  +* emulate the accessed bit for EPT, by checking if this page has
   * an EPT mapping, and clearing it if it does. On the next access,
   * a new EPT mapping will be established.
   * This has some overhead, but not as much as the cost of swapping
  @@ -1179,11 +1180,11 @@ static int kvm_age_rmapp(struct kvm *kvm,
 unsigned long *rmapp,
  while (spte) {
  int _young;
  u64 _spte = *spte;
  -   BUG_ON(!(_spte & PT_PRESENT_MASK));
  -   _young = _spte & PT_ACCESSED_MASK;
  +   BUG_ON(!is_shadow_present_pte(_spte));
  +   _young = _spte & shadow_accessed_mask;
  if (_young) {
  young = 1;
  -   clear_bit(PT_ACCESSED_SHIFT, (unsigned long *)spte);
  +   *spte &= ~shadow_accessed_mask;
  }
 
 Now a dirty bit can be lost. Is there a reason to remove the clear_bit?

The clear_bit() is called in both shadow and EPT A/D mode; because
PT_ACCESSED_SHIFT does not make sense for the EPT A/D bit, the code uses
shadow_accessed_mask to mask the bit in both cases.


Re: [PATCH 4/4] Enabling Access bit when doing memory swapping

2012-05-17 Thread Marcelo Tosatti
On Wed, May 16, 2012 at 09:12:30AM +0800, Xudong Hao wrote:
 Enabling Access bit when doing memory swapping.
 
 Signed-off-by: Haitao Shan haitao.s...@intel.com
 Signed-off-by: Xudong Hao xudong@intel.com
 ---
  arch/x86/kvm/mmu.c |   13 +++++++------
  arch/x86/kvm/vmx.c |    6 ++++--
  2 files changed, 11 insertions(+), 8 deletions(-)
 
 diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
 index ff053ca..5f55f98 100644
 --- a/arch/x86/kvm/mmu.c
 +++ b/arch/x86/kvm/mmu.c
 @@ -1166,7 +1166,8 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long 
 *rmapp,
 int young = 0;
 
 /*
 -* Emulate the accessed bit for EPT, by checking if this page has
 +* In case of absence of EPT Access and Dirty Bits supports,
 +* emulate the accessed bit for EPT, by checking if this page has
  * an EPT mapping, and clearing it if it does. On the next access,
  * a new EPT mapping will be established.
  * This has some overhead, but not as much as the cost of swapping
 @@ -1179,11 +1180,11 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned 
 long *rmapp,
 while (spte) {
 int _young;
 u64 _spte = *spte;
 -   BUG_ON(!(_spte & PT_PRESENT_MASK));
 -   _young = _spte & PT_ACCESSED_MASK;
 +   BUG_ON(!is_shadow_present_pte(_spte));
 +   _young = _spte & shadow_accessed_mask;
 if (_young) {
 young = 1;
 -   clear_bit(PT_ACCESSED_SHIFT, (unsigned long *)spte);
 +   *spte &= ~shadow_accessed_mask;
 }

Now a dirty bit can be lost. Is there a reason to remove the clear_bit?
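
The race in question, written out as an interleaving; this assumes something
else (the CPU with EPT A/D enabled, or a concurrent fault path) can set the
dirty bit in the SPTE while kvm_age_rmapp() runs:

    /*
     * CPU A: kvm_age_rmapp()              CPU B: concurrent dirty update
     *
     * old = *spte;           (D = 0)
     *                                     D is set in *spte      (D = 1)
     * *spte = old & ~A_bit;  (D = 0)      the dirty bit is lost
     *
     * A locked RMW such as clear_bit() only touches the accessed bit, and a
     * cmpxchg64() loop retries when the SPTE changed underneath it, so
     * neither has this window.
     */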



[PATCH 4/4] Enabling Access bit when doing memory swapping

2012-05-15 Thread Xudong Hao
Enabling Access bit when doing memory swapping.

Signed-off-by: Haitao Shan haitao.s...@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
---
 arch/x86/kvm/mmu.c |   13 +++++++------
 arch/x86/kvm/vmx.c |    6 ++++--
 2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ff053ca..5f55f98 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1166,7 +1166,8 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long 
*rmapp,
int young = 0;

/*
-* Emulate the accessed bit for EPT, by checking if this page has
+* In case of absence of EPT Access and Dirty Bits supports,
+* emulate the accessed bit for EPT, by checking if this page has
 * an EPT mapping, and clearing it if it does. On the next access,
 * a new EPT mapping will be established.
 * This has some overhead, but not as much as the cost of swapping
@@ -1179,11 +1180,11 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long 
*rmapp,
while (spte) {
int _young;
u64 _spte = *spte;
-   BUG_ON(!(_spte & PT_PRESENT_MASK));
-   _young = _spte & PT_ACCESSED_MASK;
+   BUG_ON(!is_shadow_present_pte(_spte));
+   _young = _spte & shadow_accessed_mask;
if (_young) {
young = 1;
-   clear_bit(PT_ACCESSED_SHIFT, (unsigned long *)spte);
+   *spte &= ~shadow_accessed_mask;
}
spte = rmap_next(rmapp, spte);
}
@@ -1207,8 +1208,8 @@ static int kvm_test_age_rmapp(struct kvm *kvm, unsigned 
long *rmapp,
spte = rmap_next(rmapp, NULL);
while (spte) {
u64 _spte = *spte;
-   BUG_ON(!(_spte & PT_PRESENT_MASK));
-   young = _spte & PT_ACCESSED_MASK;
+   BUG_ON(!is_shadow_present_pte(_spte));
+   young = _spte & shadow_accessed_mask;
if (young) {
young = 1;
break;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 89151a9..a3ef549 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7242,8 +7242,10 @@ static int __init vmx_init(void)
vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_EIP, false);

if (enable_ept) {
-   kvm_mmu_set_mask_ptes(0ull, 0ull, 0ull, 0ull,
-   VMX_EPT_EXECUTABLE_MASK);
+   kvm_mmu_set_mask_ptes(0ull,
+   (enable_ept_ad_bits) ? VMX_EPT_ACCESS_BIT : 0ull,
+   (enable_ept_ad_bits) ? VMX_EPT_DIRTY_BIT : 0ull,
+   0ull, VMX_EPT_EXECUTABLE_MASK);
ept_set_mmio_spte_mask();
kvm_enable_tdp();
} else
--
1.7.1
