Re: [PATCH v3 2/4] KVM: Add SMAP support when setting CR4
Il 01/04/2014 03:17, Wu, Feng ha scritto:
>> + * A fault is considered
>> + * as a SMAP violation if all of the following
>> + * conditions are true:
>> + *   - X86_CR4_SMAP is set in CR4
>> + *   - A user page is accessed
>> + *   - Page fault in kernel mode
>> + *   - !(CPL < 3 && X86_EFLAGS_AC is set) - if CPL < 3, EFLAGS.AC is clear
>
> Should it be "if CPL = 3 or EFLAGS.AC is clear"?

What I meant is "if CPL < 3, EFLAGS.AC must also be clear", but your
version is fine too.  You choose!

Paolo
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
RE: [PATCH v3 2/4] KVM: Add SMAP support when setting CR4
> -----Original Message-----
> From: Paolo Bonzini [mailto:pbonz...@redhat.com]
> Sent: Tuesday, April 01, 2014 5:27 PM
> To: Wu, Feng; g...@redhat.com; h...@zytor.com; kvm@vger.kernel.org
> Subject: Re: [PATCH v3 2/4] KVM: Add SMAP support when setting CR4
>
> Il 01/04/2014 03:17, Wu, Feng ha scritto:
> >>> + * A fault is considered
> >>> + * as a SMAP violation if all of the following
> >>> + * conditions are true:
> >>> + *   - X86_CR4_SMAP is set in CR4
> >>> + *   - A user page is accessed
> >>> + *   - Page fault in kernel mode
> >>> + *   - !(CPL < 3 && X86_EFLAGS_AC is set) - if CPL < 3, EFLAGS.AC is clear
> >>
> >> Should it be "if CPL = 3 or EFLAGS.AC is clear"?
> >
> > What I meant is "if CPL < 3, EFLAGS.AC must also be clear", but your
> > version is fine too.  You choose!

Thanks for your comments! :)

BTW: I tested nested today, this patch set works well for nested
virtualization!

> Paolo
Re: [PATCH v3 2/4] KVM: Add SMAP support when setting CR4
Just a few comments...

> -static void update_permission_bitmask(struct kvm_vcpu *vcpu,
> +void update_permission_bitmask(struct kvm_vcpu *vcpu,
>  		struct kvm_mmu *mmu, bool ept)
>  {
>  	unsigned bit, byte, pfec;
>  	u8 map;
> -	bool fault, x, w, u, wf, uf, ff, smep;
> +	bool fault, x, w, u, wf, uf, ff, smapf, cr4_smap, smep, smap = 0;
>
>  	smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP);

Can you make an additional patch to rename this to cr4_smep?

> +	cr4_smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
>  	for (byte = 0; byte < ARRAY_SIZE(mmu->permissions); ++byte) {
>  		pfec = byte << 1;
>  		map = 0;
>  		wf = pfec & PFERR_WRITE_MASK;
>  		uf = pfec & PFERR_USER_MASK;
>  		ff = pfec & PFERR_FETCH_MASK;
> +		/*
> +		 * PFERR_RSVD_MASK bit is used to detect SMAP violation.
> +		 * We will check it in permission_fault(), this bit is
> +		 * set in pfec for normal fault, while it is cleared for
> +		 * SMAP violations.
> +		 */

This bit is set in PFEC if the access is _not_ subject to SMAP
restrictions, and cleared otherwise.  The bit is only meaningful if
the SMAP bit is set in CR4.

> +		smapf = !(pfec & PFERR_RSVD_MASK);
>  		for (bit = 0; bit < 8; ++bit) {
>  			x = bit & ACC_EXEC_MASK;
>  			w = bit & ACC_WRITE_MASK;
> @@ -3627,11 +3635,32 @@ static void update_permission_bitmask(struct kvm_vcpu *vcpu,
>  				w |= !is_write_protection(vcpu) && !uf;
>  				/* Disallow supervisor fetches of user code if cr4.smep */
>  				x &= !(smep && u && !uf);
> +
> +				/*
> +				 * SMAP: kernel-mode data accesses from user-mode
> +				 * mappings should fault.  A fault is considered
> +				 * as a SMAP violation if all of the following
> +				 * conditions are true:
> +				 *   - X86_CR4_SMAP is set in CR4
> +				 *   - A user page is accessed
> +				 *   - Page fault in kernel mode
> +				 *   - !(CPL < 3 && X86_EFLAGS_AC is set) - if CPL < 3,
> +				 *     EFLAGS.AC is clear
> +				 *
> +				 *   Here, we cover the first three conditions.
> +				 *   The CPL and X86_EFLAGS_AC is in smapf, which
> +				 *   permission_fault() computes dynamically.

The fourth is computed dynamically in permission_fault() and is in
SMAPF.

> +				 *
> +				 *   Also, SMAP does not affect instruction
> +				 *   fetches, add the !ff check here to make it
> +				 *   clearer.
> +				 */
> +				smap = cr4_smap && u && !uf && !ff;
>  			} else
>  				/* Not really needed: no U/S accesses on ept  */
>  				u = 1;
>
> -			fault = (ff && !x) || (uf && !u) || (wf && !w);
> +			fault = (ff && !x) || (uf && !u) || (wf && !w) ||
> +				(smapf && smap);
>  			map |= fault << bit;
>  		}
>  		mmu->permissions[byte] = map;
>
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index 2926152..822190f 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -44,11 +44,17 @@
>  #define PT_DIRECTORY_LEVEL 2
>  #define PT_PAGE_TABLE_LEVEL 1
>
> -#define PFERR_PRESENT_MASK (1U << 0)
> -#define PFERR_WRITE_MASK (1U << 1)
> -#define PFERR_USER_MASK (1U << 2)
> -#define PFERR_RSVD_MASK (1U << 3)
> -#define PFERR_FETCH_MASK (1U << 4)
> +#define PFERR_PRESENT_BIT 0
> +#define PFERR_WRITE_BIT 1
> +#define PFERR_USER_BIT 2
> +#define PFERR_RSVD_BIT 3
> +#define PFERR_FETCH_BIT 4
> +
> +#define PFERR_PRESENT_MASK (1U << PFERR_PRESENT_BIT)
> +#define PFERR_WRITE_MASK (1U << PFERR_WRITE_BIT)
> +#define PFERR_USER_MASK (1U << PFERR_USER_BIT)
> +#define PFERR_RSVD_MASK (1U << PFERR_RSVD_BIT)
> +#define PFERR_FETCH_MASK (1U << PFERR_FETCH_BIT)
>
>  int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4]);
>  void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask);
> @@ -73,6 +79,8 @@ int handle_mmio_page_fault_common(struct kvm_vcpu *vcpu, u64 addr,
>  			       bool direct);
>  void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
>  void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
>  		bool execonly);
> +void update_permission_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
> +		bool ept);
>
>  static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
>  {
> @@ -110,10 +118,30 @@ static inline bool is_write_protection(struct kvm_vcpu *vcpu)
>  /*
>   * Will a fault with a given page-fault error code (pfec) cause a permission
>   * fault with the given access (in
RE: [PATCH v3 2/4] KVM: Add SMAP support when setting CR4
> -----Original Message-----
> From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On
> Behalf Of Paolo Bonzini
> Sent: Monday, March 31, 2014 9:31 PM
> To: Wu, Feng; g...@redhat.com; h...@zytor.com; kvm@vger.kernel.org
> Subject: Re: [PATCH v3 2/4] KVM: Add SMAP support when setting CR4
>
> Just a few comments...
>
> > -static void update_permission_bitmask(struct kvm_vcpu *vcpu,
> > +void update_permission_bitmask(struct kvm_vcpu *vcpu,
> >  		struct kvm_mmu *mmu, bool ept)
> >  {
> >  	unsigned bit, byte, pfec;
> >  	u8 map;
> > -	bool fault, x, w, u, wf, uf, ff, smep;
> > +	bool fault, x, w, u, wf, uf, ff, smapf, cr4_smap, smep, smap = 0;
> >
> >  	smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP);
>
> Can you make an additional patch to rename this to cr4_smep?

Sure! I noticed your comments about this issue in the previous email; I
was preparing to make a patch for it and will send it out today!

> > +	cr4_smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
> >  	for (byte = 0; byte < ARRAY_SIZE(mmu->permissions); ++byte) {
> >  		pfec = byte << 1;
> >  		map = 0;
> >  		wf = pfec & PFERR_WRITE_MASK;
> >  		uf = pfec & PFERR_USER_MASK;
> >  		ff = pfec & PFERR_FETCH_MASK;
> > +		/*
> > +		 * PFERR_RSVD_MASK bit is used to detect SMAP violation.
> > +		 * We will check it in permission_fault(), this bit is
> > +		 * set in pfec for normal fault, while it is cleared for
> > +		 * SMAP violations.
> > +		 */
>
> This bit is set in PFEC if the access is _not_ subject to SMAP
> restrictions, and cleared otherwise.  The bit is only meaningful if the
> SMAP bit is set in CR4.
>
> > +		smapf = !(pfec & PFERR_RSVD_MASK);
> >  		for (bit = 0; bit < 8; ++bit) {
> >  			x = bit & ACC_EXEC_MASK;
> >  			w = bit & ACC_WRITE_MASK;
> > @@ -3627,11 +3635,32 @@ static void update_permission_bitmask(struct kvm_vcpu *vcpu,
> >  				w |= !is_write_protection(vcpu) && !uf;
> >  				/* Disallow supervisor fetches of user code if cr4.smep */
> >  				x &= !(smep && u && !uf);
> > +
> > +				/*
> > +				 * SMAP: kernel-mode data accesses from user-mode
> > +				 * mappings should fault.  A fault is considered
> > +				 * as a SMAP violation if all of the following
> > +				 * conditions are true:
> > +				 *   - X86_CR4_SMAP is set in CR4
> > +				 *   - A user page is accessed
> > +				 *   - Page fault in kernel mode
> > +				 *   - !(CPL < 3 && X86_EFLAGS_AC is set) - if CPL < 3,
> > +				 *     EFLAGS.AC is clear

Should it be "if CPL = 3 or EFLAGS.AC is clear"?

> > +				 *
> > +				 *   Here, we cover the first three conditions.
> > +				 *   The CPL and X86_EFLAGS_AC is in smapf, which
> > +				 *   permission_fault() computes dynamically.
>
> The fourth is computed dynamically in permission_fault() and is in
> SMAPF.
>
> > +				 *
> > +				 *   Also, SMAP does not affect instruction
> > +				 *   fetches, add the !ff check here to make it
> > +				 *   clearer.
> > +				 */
> > +				smap = cr4_smap && u && !uf && !ff;
> >  			} else
> >  				/* Not really needed: no U/S accesses on ept  */
> >  				u = 1;
> >
> > -			fault = (ff && !x) || (uf && !u) || (wf && !w);
> > +			fault = (ff && !x) || (uf && !u) || (wf && !w) ||
> > +				(smapf && smap);
> >  			map |= fault << bit;
> >  		}
> >  		mmu->permissions[byte] = map;
> >
> > diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> > index 2926152..822190f 100644
> > --- a/arch/x86/kvm/mmu.h
> > +++ b/arch/x86/kvm/mmu.h
> > @@ -44,11 +44,17 @@
> >  #define PT_DIRECTORY_LEVEL 2
> >  #define PT_PAGE_TABLE_LEVEL 1
> >
> > -#define PFERR_PRESENT_MASK (1U << 0)
> > -#define PFERR_WRITE_MASK (1U << 1)
> > -#define PFERR_USER_MASK (1U << 2)
> > -#define PFERR_RSVD_MASK (1U << 3)
> > -#define PFERR_FETCH_MASK (1U << 4)
> > +#define PFERR_PRESENT_BIT 0
> > +#define PFERR_WRITE_BIT 1
> > +#define PFERR_USER_BIT 2
> > +#define PFERR_RSVD_BIT 3
> > +#define PFERR_FETCH_BIT 4
> > +
> > +#define PFERR_PRESENT_MASK (1U << PFERR_PRESENT_BIT)
> > +#define PFERR_WRITE_MASK (1U << PFERR_WRITE_BIT)
> > +#define PFERR_USER_MASK (1U << PFERR_USER_BIT)
> > +#define PFERR_RSVD_MASK (1U << PFERR_RSVD_BIT)
> > +#define PFERR_FETCH_MASK (1U << PFERR_FETCH_BIT)
> >
> >  int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4]);
> >  void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask);
> > @@ -73,6 +79,8 @@ int handle_mmio_page_fault_common(struct kvm_vcpu *vcpu, u64 addr,
> >  			       bool direct);
> >  void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
> >  void