Re: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit

2009-04-01 Thread Avi Kivity

Dong, Eddie wrote:

Looks good, but doesn't apply; please check if you are working against
the latest version.



Rebased on top of a317a1e496b22d1520218ecf16a02498b99645e2 + previous rsvd bits 
violation check patch.
  


Applied, thanks.

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit

2009-03-31 Thread Dong, Eddie

> 
> Looks good, but doesn't apply; please check if you are working against
> the latest version.

Rebased on top of a317a1e496b22d1520218ecf16a02498b99645e2 + previous rsvd bits 
violation check patch.

thx, eddie



Use rsvd_bits_mask in load_pdptrs and remove bits 5-6 from rsvd_bits_mask 
per the latest SDM.

Signed-off-by: Eddie Dong 


diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 41a0482..400c056 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -225,11 +225,6 @@ static int is_nx(struct kvm_vcpu *vcpu)
return vcpu->arch.shadow_efer & EFER_NX;
 }
 
-static int is_present_pte(unsigned long pte)
-{
-   return pte & PT_PRESENT_MASK;
-}
-
 static int is_shadow_present_pte(u64 pte)
 {
return pte != shadow_trap_nonpresent_pte
@@ -2195,6 +2190,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
context->rsvd_bits_mask[1][0] = 0;
break;
case PT32E_ROOT_LEVEL:
+   context->rsvd_bits_mask[0][2] =
+   rsvd_bits(maxphyaddr, 63) |
+   rsvd_bits(7, 8) | rsvd_bits(1, 2);  /* PDPTE */
context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
rsvd_bits(maxphyaddr, 62);  /* PDE */
context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index eaab214..3494a2f 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -75,4 +75,9 @@ static inline int is_paging(struct kvm_vcpu *vcpu)
return vcpu->arch.cr0 & X86_CR0_PG;
 }
 
+static inline int is_present_pte(unsigned long pte)
+{
+   return pte & PT_PRESENT_MASK;
+}
+
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9702353..3d07c9a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -234,7 +234,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
goto out;
}
for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
-   if ((pdpte[i] & 1) && (pdpte[i] & 0xfff001e6ull)) {
+   if (is_present_pte(pdpte[i]) &&
+   (pdpte[i] & vcpu->arch.mmu.rsvd_bits_mask[0][2])) {
ret = 0;
goto out;
}


Re: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit

2009-03-31 Thread Avi Kivity

Dong, Eddie wrote:

Neiger, Gil wrote:
  

PDPTEs are used only if CR0.PG=CR4.PAE=1.

In that situation, their format depends on the value of IA32_EFER.LMA.

If IA32_EFER.LMA=0, bit 63 is reserved and must be 0 in any PDPTE
that is marked present.  The execute-disable setting of a page is
determined only by the PDE and PTE.  


If IA32_EFER.LMA=1, bit 63 is used for the execute-disable in PML4
entries, PDPTEs, PDEs, and PTEs (assuming IA32_EFER.NXE=1). 


- Gil



Rebased.
Thanks, eddie


  


Looks good, but doesn't apply; please check if you are working against 
the latest version.


--
error compiling committee.c: too many arguments to function



RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit

2009-03-31 Thread Dong, Eddie
Neiger, Gil wrote:
> PDPTEs are used only if CR0.PG=CR4.PAE=1.
> 
> In that situation, their format depends on the value of IA32_EFER.LMA.
> 
> If IA32_EFER.LMA=0, bit 63 is reserved and must be 0 in any PDPTE
> that is marked present.  The execute-disable setting of a page is
> determined only by the PDE and PTE.  
> 
> If IA32_EFER.LMA=1, bit 63 is used for the execute-disable in PML4
> entries, PDPTEs, PDEs, and PTEs (assuming IA32_EFER.NXE=1). 
> 
>   - Gil

Rebased.
Thanks, eddie


commit 032caed3da123950eeb3e192baf444d4eae80c85
Author: root 
Date:   Tue Mar 31 16:22:49 2009 +0800

Use rsvd_bits_mask in load_pdptrs and remove bits 5-6 from rsvd_bits_mask 
per the latest SDM.

Signed-off-by: Eddie Dong 

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2eab758..1bed3aa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -225,11 +225,6 @@ static int is_nx(struct kvm_vcpu *vcpu)
return vcpu->arch.shadow_efer & EFER_NX;
 }
 
-static int is_present_pte(unsigned long pte)
-{
-   return pte & PT_PRESENT_MASK;
-}
-
 static int is_shadow_present_pte(u64 pte)
 {
return pte != shadow_trap_nonpresent_pte
@@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
context->rsvd_bits_mask[1][0] = 0;
break;
case PT32E_ROOT_LEVEL:
+   context->rsvd_bits_mask[0][2] =
+   rsvd_bits(maxphyaddr, 63) |
+   rsvd_bits(7, 8) | rsvd_bits(1, 2);  /* PDPTE */
context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
rsvd_bits(maxphyaddr, 62);  /* PDE */
context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 258e5d5..2a6eb50 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -75,4 +75,9 @@ static inline int is_paging(struct kvm_vcpu *vcpu)
return vcpu->arch.cr0 & X86_CR0_PG;
 }
 
+static inline int is_present_pte(unsigned long pte)
+{
+   return pte & PT_PRESENT_MASK;
+}
+
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 961bd2b..b449ff0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -233,7 +233,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
goto out;
}
for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
-   if ((pdpte[i] & 1) && (pdpte[i] & 0xfff001e6ull)) {
+   if (is_present_pte(pdpte[i]) &&
+   (pdpte[i] & vcpu->arch.mmu.rsvd_bits_mask[0][2])) {
ret = 0;
goto out;
}

cr3_load_rsvd.patch


RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit

2009-03-30 Thread Neiger, Gil
PDPTEs are used only if CR0.PG=CR4.PAE=1.

In that situation, their format depends on the value of IA32_EFER.LMA.

If IA32_EFER.LMA=0, bit 63 is reserved and must be 0 in any PDPTE that is 
marked present.  The execute-disable setting of a page is determined only by 
the PDE and PTE.

If IA32_EFER.LMA=1, bit 63 is used for the execute-disable in PML4 entries, 
PDPTEs, PDEs, and PTEs (assuming IA32_EFER.NXE=1).

- Gil

-Original Message-
From: Dong, Eddie 
Sent: Monday, March 30, 2009 5:51 PM
To: Neiger, Gil
Cc: Avi Kivity; kvm@vger.kernel.org; Dong, Eddie
Subject: FW: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit

Avi Kivity wrote:
> Dong, Eddie wrote:
>> @@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
>>  	context->rsvd_bits_mask[1][0] = 0;
>>  break;
>>  case PT32E_ROOT_LEVEL:
>> +context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
>> +rsvd_bits(maxphyaddr, 62) |
>> +rsvd_bits(7, 8) | rsvd_bits(1, 2);  /* PDPTE */
>>  context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
>>  rsvd_bits(maxphyaddr, 62);  /* PDE */
>>  context->rsvd_bits_mask[0][0] = exb_bit_rsvd
> 
> Are you sure that PDPTEs support NX?  They don't support R/W and U/S,
> so it seems likely that NX is reserved as well even when EFER.NXE is
> enabled. 


Gil:
Here is the original mail on the KVM mailing list. If you are able to 
help, that would be great.
thx, eddie


RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit

2009-03-30 Thread Dong, Eddie
Avi Kivity wrote:
> Dong, Eddie wrote:
>> @@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
>>  	context->rsvd_bits_mask[1][0] = 0;
>>  break;
>>  case PT32E_ROOT_LEVEL:
>> +context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
>> +rsvd_bits(maxphyaddr, 62) |
>> +rsvd_bits(7, 8) | rsvd_bits(1, 2);  /* PDPTE */
>>  context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
>>  rsvd_bits(maxphyaddr, 62);  /* PDE */
>>  context->rsvd_bits_mask[0][0] = exb_bit_rsvd
> 
> Are you sure that PDPTEs support NX?  They don't support R/W and U/S,
> so it seems likely that NX is reserved as well even when EFER.NXE is
> enabled. 

I am referring to Fig. 3-20/3-21 of SDM 3A, but I think Fig. 3-20/21 is missing 
the EXB bit, given Table 3-5 and Section 3.10.3.
I will double-check with an internal architect.
thx, eddie


Re: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit

2009-03-30 Thread Avi Kivity

Dong, Eddie wrote:

@@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
context->rsvd_bits_mask[1][0] = 0;
break;
case PT32E_ROOT_LEVEL:
+   context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
+   rsvd_bits(maxphyaddr, 62) |
+   rsvd_bits(7, 8) | rsvd_bits(1, 2);  /* PDPTE */
context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
rsvd_bits(maxphyaddr, 62);  /* PDE */
 		context->rsvd_bits_mask[0][0] = exb_bit_rsvd 


Are you sure that PDPTEs support NX?  They don't support R/W and U/S, so 
it seems likely that NX is reserved as well even when EFER.NXE is enabled.


--
error compiling committee.c: too many arguments to function



RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit

2009-03-30 Thread Dong, Eddie
Dong, Eddie wrote:
> This is a follow-up to the rsvd_bits emulation patch.
> 
Based on the new rsvd_bits emulation patch.
thx, eddie


commit 2c1472ef2b9fd87a261e8b58a7db11afd6a111dc
Author: root 
Date:   Mon Mar 30 17:05:47 2009 +0800

Use rsvd_bits_mask in load_pdptrs for cleanup with EXB bit considered.

Signed-off-by: Eddie Dong 

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2eab758..eaf41c0 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -225,11 +225,6 @@ static int is_nx(struct kvm_vcpu *vcpu)
return vcpu->arch.shadow_efer & EFER_NX;
 }
 
-static int is_present_pte(unsigned long pte)
-{
-   return pte & PT_PRESENT_MASK;
-}
-
 static int is_shadow_present_pte(u64 pte)
 {
return pte != shadow_trap_nonpresent_pte
@@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
context->rsvd_bits_mask[1][0] = 0;
break;
case PT32E_ROOT_LEVEL:
+   context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
+   rsvd_bits(maxphyaddr, 62) |
+   rsvd_bits(7, 8) | rsvd_bits(1, 2);  /* PDPTE */
context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
rsvd_bits(maxphyaddr, 62);  /* PDE */
context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 258e5d5..2a6eb50 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -75,4 +75,9 @@ static inline int is_paging(struct kvm_vcpu *vcpu)
return vcpu->arch.cr0 & X86_CR0_PG;
 }
 
+static inline int is_present_pte(unsigned long pte)
+{
+   return pte & PT_PRESENT_MASK;
+}
+
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 961bd2b..b449ff0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -233,7 +233,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
goto out;
}
for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
-   if ((pdpte[i] & 1) && (pdpte[i] & 0xfff001e6ull)) {
+   if (is_present_pte(pdpte[i]) &&
+   (pdpte[i] & vcpu->arch.mmu.rsvd_bits_mask[0][2])) {
ret = 0;
goto out;
}