Re: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit
Dong, Eddie wrote:
>> Looks good, but doesn't apply; please check if you are working against the latest version.
>
> Rebased on top of a317a1e496b22d1520218ecf16a02498b99645e2 + previous rsvd bits violation check patch.

Applied, thanks.

--
I have a truly marvellous patch that fixes the bug which this signature is too narrow to contain.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit
Neiger, Gil wrote:
> PDPTEs are used only if CR0.PG=CR4.PAE=1. In that situation, their format depends on the value of IA32_EFER.LMA.
>
> If IA32_EFER.LMA=0, bit 63 is reserved and must be 0 in any PDPTE that is marked present. The execute-disable setting of a page is determined only by the PDE and PTE.
>
> If IA32_EFER.LMA=1, bit 63 is used for execute-disable in PML4 entries, PDPTEs, PDEs, and PTEs (assuming IA32_EFER.NXE=1).
>
> - Gil

Rebased. Thanks, eddie

commit 032caed3da123950eeb3e192baf444d4eae80c85
Author: root r...@eddie-wb.localdomain
Date:   Tue Mar 31 16:22:49 2009 +0800

    Use rsvd_bits_mask in load_pdptrs and remove bit 5-6 from rsvd_bits_mask per latest SDM.

    Signed-off-by: Eddie Dong eddie.d...@intel.com

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2eab758..1bed3aa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -225,11 +225,6 @@ static int is_nx(struct kvm_vcpu *vcpu)
 	return vcpu->arch.shadow_efer & EFER_NX;
 }
 
-static int is_present_pte(unsigned long pte)
-{
-	return pte & PT_PRESENT_MASK;
-}
-
 static int is_shadow_present_pte(u64 pte)
 {
 	return pte != shadow_trap_nonpresent_pte
@@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
 		context->rsvd_bits_mask[1][0] = 0;
 		break;
 	case PT32E_ROOT_LEVEL:
+		context->rsvd_bits_mask[0][2] =
+			rsvd_bits(maxphyaddr, 63) |
+			rsvd_bits(7, 8) | rsvd_bits(1, 2);	/* PDPTE */
 		context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
 			rsvd_bits(maxphyaddr, 62);	/* PDE */
 		context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 258e5d5..2a6eb50 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -75,4 +75,9 @@ static inline int is_paging(struct kvm_vcpu *vcpu)
 	return vcpu->arch.cr0 & X86_CR0_PG;
 }
 
+static inline int is_present_pte(unsigned long pte)
+{
+	return pte & PT_PRESENT_MASK;
+}
+
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 961bd2b..b449ff0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -233,7 +233,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
 		goto out;
 	}
 	for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
-		if ((pdpte[i] & 1) && (pdpte[i] & 0xfff001e6ull)) {
+		if (is_present_pte(pdpte[i]) &&
+		    (pdpte[i] & vcpu->arch.mmu.rsvd_bits_mask[0][2])) {
 			ret = 0;
 			goto out;
 		}

cr3_load_rsvd.patch
Description: cr3_load_rsvd.patch
Re: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit
Dong, Eddie wrote:
> Neiger, Gil wrote:
>> PDPTEs are used only if CR0.PG=CR4.PAE=1. In that situation, their format depends on the value of IA32_EFER.LMA.
>> If IA32_EFER.LMA=0, bit 63 is reserved and must be 0 in any PDPTE that is marked present. The execute-disable setting of a page is determined only by the PDE and PTE.
>> If IA32_EFER.LMA=1, bit 63 is used for execute-disable in PML4 entries, PDPTEs, PDEs, and PTEs (assuming IA32_EFER.NXE=1).
>> - Gil
>
> Rebased. Thanks, eddie

Looks good, but doesn't apply; please check if you are working against the latest version.

--
error compiling committee.c: too many arguments to function
RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit
> Looks good, but doesn't apply; please check if you are working against the latest version.

Rebased on top of a317a1e496b22d1520218ecf16a02498b99645e2 + previous rsvd bits violation check patch.

thx, eddie

Use rsvd_bits_mask in load_pdptrs and remove bit 5-6 from rsvd_bits_mask per latest SDM.

Signed-off-by: Eddie Dong eddie.d...@intel.com

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 41a0482..400c056 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -225,11 +225,6 @@ static int is_nx(struct kvm_vcpu *vcpu)
 	return vcpu->arch.shadow_efer & EFER_NX;
 }
 
-static int is_present_pte(unsigned long pte)
-{
-	return pte & PT_PRESENT_MASK;
-}
-
 static int is_shadow_present_pte(u64 pte)
 {
 	return pte != shadow_trap_nonpresent_pte
@@ -2195,6 +2190,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
 		context->rsvd_bits_mask[1][0] = 0;
 		break;
 	case PT32E_ROOT_LEVEL:
+		context->rsvd_bits_mask[0][2] =
+			rsvd_bits(maxphyaddr, 63) |
+			rsvd_bits(7, 8) | rsvd_bits(1, 2);	/* PDPTE */
 		context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
 			rsvd_bits(maxphyaddr, 62);	/* PDE */
 		context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index eaab214..3494a2f 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -75,4 +75,9 @@ static inline int is_paging(struct kvm_vcpu *vcpu)
 	return vcpu->arch.cr0 & X86_CR0_PG;
 }
 
+static inline int is_present_pte(unsigned long pte)
+{
+	return pte & PT_PRESENT_MASK;
+}
+
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9702353..3d07c9a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -234,7 +234,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
 		goto out;
 	}
 	for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
-		if ((pdpte[i] & 1) && (pdpte[i] & 0xfff001e6ull)) {
+		if (is_present_pte(pdpte[i]) &&
+		    (pdpte[i] & vcpu->arch.mmu.rsvd_bits_mask[0][2])) {
 			ret = 0;
 			goto out;
 		}
Re: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit
Dong, Eddie wrote:
> @@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
>  		context->rsvd_bits_mask[1][0] = 0;
>  		break;
>  	case PT32E_ROOT_LEVEL:
> +		context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
> +			rsvd_bits(maxphyaddr, 62) |
> +			rsvd_bits(7, 8) | rsvd_bits(1, 2);	/* PDPTE */
>  		context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
>  			rsvd_bits(maxphyaddr, 62);	/* PDE */
>  		context->rsvd_bits_mask[0][0] = exb_bit_rsvd

Are you sure that PDPTEs support NX? They don't support R/W and U/S, so it seems likely that NX is reserved as well even when EFER.NXE is enabled.

--
error compiling committee.c: too many arguments to function
FW: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit
Avi Kivity wrote:
> Dong, Eddie wrote:
>> @@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
>>  		context->rsvd_bits_mask[1][0] = 0;
>>  		break;
>>  	case PT32E_ROOT_LEVEL:
>> +		context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
>> +			rsvd_bits(maxphyaddr, 62) |
>> +			rsvd_bits(7, 8) | rsvd_bits(1, 2);	/* PDPTE */
>>  		context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
>>  			rsvd_bits(maxphyaddr, 62);	/* PDE */
>>  		context->rsvd_bits_mask[0][0] = exb_bit_rsvd
>
> Are you sure that PDPTEs support NX? They don't support R/W and U/S, so it seems likely that NX is reserved as well even when EFER.NXE is enabled.

Gil: Here is the original mail in the KVM mailing list. If you would be able to help, that would be great.

thx, eddie
RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit
PDPTEs are used only if CR0.PG=CR4.PAE=1. In that situation, their format depends on the value of IA32_EFER.LMA.

If IA32_EFER.LMA=0, bit 63 is reserved and must be 0 in any PDPTE that is marked present. The execute-disable setting of a page is determined only by the PDE and PTE.

If IA32_EFER.LMA=1, bit 63 is used for execute-disable in PML4 entries, PDPTEs, PDEs, and PTEs (assuming IA32_EFER.NXE=1).

- Gil

-----Original Message-----
From: Dong, Eddie
Sent: Monday, March 30, 2009 5:51 PM
To: Neiger, Gil
Cc: Avi Kivity; kvm@vger.kernel.org; Dong, Eddie
Subject: FW: Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit

Avi Kivity wrote:
> Dong, Eddie wrote:
>> @@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
>>  		context->rsvd_bits_mask[1][0] = 0;
>>  		break;
>>  	case PT32E_ROOT_LEVEL:
>> +		context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
>> +			rsvd_bits(maxphyaddr, 62) |
>> +			rsvd_bits(7, 8) | rsvd_bits(1, 2);	/* PDPTE */
>>  		context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
>>  			rsvd_bits(maxphyaddr, 62);	/* PDE */
>>  		context->rsvd_bits_mask[0][0] = exb_bit_rsvd
>
> Are you sure that PDPTEs support NX? They don't support R/W and U/S, so it seems likely that NX is reserved as well even when EFER.NXE is enabled.

Gil: Here is the original mail in the KVM mailing list. If you would be able to help, that would be great.

thx, eddie
Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit
This is a followup of the rsvd_bits emulation. thx, eddie

commit 171eb2b2d8282dd913a5d5c6c695fd64e1ddcf4c
Author: root r...@eddie-wb.localdomain
Date:   Mon Mar 30 11:39:50 2009 +0800

    Use rsvd_bits_mask in load_pdptrs for cleanup and considering EXB bit.

    Signed-off-by: Eddie Dong eddie.d...@intel.com

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 0a6f109..b0bf8b2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2255,6 +2255,9 @@ static int paging32E_init_context(struct kvm_vcpu *vcpu)
 	if (!is_nx(vcpu))
 		exb_bit_rsvd = rsvd_bits(63, 63);
 
+	context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
+		rsvd_bits(maxphyaddr, 62) |
+		rsvd_bits(7, 8) | rsvd_bits(1, 2);	/* PDPTE */
 	context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
 		rsvd_bits(maxphyaddr, 62);	/* PDE */
 	context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
@@ -2270,6 +2273,17 @@ static int paging32E_init_context(struct kvm_vcpu *vcpu)
 static int init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu *context = &vcpu->arch.mmu;
+	int maxphyaddr = cpuid_maxphyaddr(vcpu);
+	u64 exb_bit_rsvd = 0;
+
+	if (!is_long_mode(vcpu) && is_pae(vcpu) && is_paging(vcpu)) {
+		if (!is_nx(vcpu))
+			exb_bit_rsvd = rsvd_bits(63, 63);
+
+		context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
+			rsvd_bits(maxphyaddr, 62) |
+			rsvd_bits(7, 8) | rsvd_bits(1, 2);	/* PDPTE */
+	}
 
 	context->new_cr3 = nonpaging_new_cr3;
 	context->page_fault = tdp_page_fault;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 961bd2b..ff178fd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -233,7 +233,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
 		goto out;
 	}
 	for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
-		if ((pdpte[i] & 1) && (pdpte[i] & 0xfff001e6ull)) {
+		if ((pdpte[i] & PT_PRESENT_MASK) &&
+		    (pdpte[i] & vcpu->arch.mmu.rsvd_bits_mask[0][2])) {
 			ret = 0;
 			goto out;
 		}