On 22/12/15 10:30, Huaitong Han wrote:
> Protection keys define a new 4-bit protection key field (PKEY) in bits
> 62:59 of leaf entries of the page tables.
>
> The PKRU register is 32 bits wide: it holds 16 domains with 2
> attribute bits per domain.  For each i (0 ≤ i ≤ 15), PKRU[2i] is the
> access-disable bit for protection key i (ADi); PKRU[2i+1] is the
> write-disable bit for protection key i (WDi).  The PKEY is an index
> into these domains.
>
> A fault is considered a PKU violation if all of the following
> conditions are true:
> 1. CR4_PKE=1.
> 2. EFER_LMA=1.
> 3. The page is present with no reserved bit violations.
> 4. The access is not an instruction fetch.
> 5. The access is to a user page.
> 6. PKRU.AD=1,
>    or the access is a data write and PKRU.WD=1
>       and either CR0.WP=1 or it is a user access.
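For reference, the six conditions above fold into a single predicate.
Below is a minimal standalone sketch (illustrative only: the helper
names and boolean parameters are invented for this example and are not
Xen symbols):

#include <stdbool.h>
#include <stdint.h>

/* ADi lives at PKRU bit 2i, WDi at bit 2i+1. */
static bool pkru_ad(uint32_t pkru, unsigned int pkey)
{
    return (pkru >> (2 * pkey)) & 1;
}

static bool pkru_wd(uint32_t pkru, unsigned int pkey)
{
    return (pkru >> (2 * pkey + 1)) & 1;
}

/* Conditions 1-6, in the order quoted above. */
static bool pku_violation(uint32_t pkru, unsigned int pkey,
                          bool cr4_pke, bool efer_lma,
                          bool present_no_rsvd, bool insn_fetch,
                          bool user_page, bool data_write,
                          bool cr0_wp, bool user_access)
{
    if ( !cr4_pke || !efer_lma || !present_no_rsvd ||
         insn_fetch || !user_page )
        return false;

    return pkru_ad(pkru, pkey) ||
           (data_write && pkru_wd(pkru, pkey) &&
            (cr0_wp || user_access));
}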

One comment you didn't address from v3, however:

"At the moment PFEC_insn_fetch is only set in
hvm_fetch_from_guest_virt() if hvm_nx_enabled() or hvm_smep_enabled()
are true.  Which means that if you *don't* have nx or smep enabled, then
the patch series as is will fault on instruction fetches when it
shouldn't.  (I don't remember anyone mentioning nx or smep being enabled
as a prerequisite for pkeys.)"

I think realistically the only way to address this is to start making
the clean separation between "pfec in" and "pfec out" I mentioned in the
previous discussion.
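The shape of that separation, as a sketch (the helper name below is
hypothetical; the attached patch has the actual changes):

/*
 * "pfec in": callers always pass the true access type, so the walker
 * knows an instruction fetch is an instruction fetch.  "pfec out":
 * before pfec is handed back to the guest, strip any flag the guest's
 * architecture would never report.
 */
static uint32_t filter_pfec_out(const struct vcpu *v, uint32_t pfec)
{
    /*
     * Intel 64 SDM Vol 3, Section 4.7: insn-fetch is reported only
     * when NX or SMEP is enabled.
     */
    if ( !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
        pfec &= ~PFEC_insn_fetch;

    return pfec;
}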

I've coded up the attached patch, but only compile-tested it.  Can you
give it a look to see whether you think it is correct, test it, and
include it in your next patch series?

Thanks,
 -George

From b8ec8e14c670215587a4e74c3aaec8dab6f26a8c Mon Sep 17 00:00:00 2001
From: George Dunlap <george.dun...@eu.citrix.com>
Date: Tue, 22 Dec 2015 16:17:22 +0000
Subject: [PATCH] xen/mm: Clean up pfec handling in gva_to_gfn

At the moment, the pfec argument to gva_to_gfn has two functions:

* To inform guest_walk what kind of access is happening

* As a value to pass back into the guest in the event of a fault.

Unfortunately this is not quite treated consistently: the hvm_fetch_*
functions will "pre-clear" the PFEC_insn_fetch flag before calling
gva_to_gfn; meaning guest_walk doesn't actually know whether a given
access is an instruction fetch or not.  This works now, but will cause
issues when pkeys are introduced, since guest_walk will need to know
whether an access is an instruction fetch even if it doesn't return
PFEC_insn_fetch.

Fix this by making a clean separation between the "in" and "out"
functions of the pfec argument:

1. Always pass in the access type to gva_to_gfn

2. Filter out inappropriate access flags before returning from gva_to_gfn.

(The PFEC_insn_fetch flag should only be passed to the guest if either NX or
SMEP is enabled.  See Intel 64 Developer's Manual, Volume 3, Section 4.7.)

Signed-off-by: George Dunlap <george.dun...@citrix.com>
---
 xen/arch/x86/hvm/hvm.c           |  8 ++------
 xen/arch/x86/mm/hap/guest_walk.c | 10 +++++++++-
 xen/arch/x86/mm/shadow/multi.c   |  6 ++++++
 3 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 08cef1f..aa1f64f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4423,11 +4423,9 @@ enum hvm_copy_result hvm_copy_from_guest_virt(
 enum hvm_copy_result hvm_fetch_from_guest_virt(
     void *buf, unsigned long vaddr, int size, uint32_t pfec)
 {
-    if ( hvm_nx_enabled(current) || hvm_smep_enabled(current) )
-        pfec |= PFEC_insn_fetch;
     return __hvm_copy(buf, vaddr, size,
                       HVMCOPY_from_guest | HVMCOPY_fault | HVMCOPY_virt,
-                      PFEC_page_present | pfec);
+                      PFEC_page_present | PFEC_insn_fetch | pfec);
 }
 
 enum hvm_copy_result hvm_copy_to_guest_virt_nofault(
@@ -4449,11 +4447,9 @@ enum hvm_copy_result hvm_copy_from_guest_virt_nofault(
 enum hvm_copy_result hvm_fetch_from_guest_virt_nofault(
     void *buf, unsigned long vaddr, int size, uint32_t pfec)
 {
-    if ( hvm_nx_enabled(current) || hvm_smep_enabled(current) )
-        pfec |= PFEC_insn_fetch;
     return __hvm_copy(buf, vaddr, size,
                       HVMCOPY_from_guest | HVMCOPY_no_fault | HVMCOPY_virt,
-                      PFEC_page_present | pfec);
+                      PFEC_page_present | PFEC_insn_fetch | pfec);
 }
 
 unsigned long copy_to_user_hvm(void *to, const void *from, unsigned int len)
diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
index 11c1b35..2dce111 100644
--- a/xen/arch/x86/mm/hap/guest_walk.c
+++ b/xen/arch/x86/mm/hap/guest_walk.c
@@ -82,7 +82,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
     if ( !top_page )
     {
         pfec[0] &= ~PFEC_page_present;
-        return INVALID_GFN;
+        goto out_tweak_pfec;
     }
     top_mfn = _mfn(page_to_mfn(top_page));
 
@@ -136,6 +136,14 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
     if ( missing & _PAGE_SHARED )
         pfec[0] = PFEC_page_shared;
 
+out_tweak_pfec:
+    /* 
+     * Intel 64 Volume 3, Section 4.7: The PFEC_insn_fetch flag is set
+     * only when NX or SMEP is enabled.
+     */
+    if ( !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
+        pfec[0] &= ~PFEC_insn_fetch;
+
     return INVALID_GFN;
 }
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 58f7e72..3844c2d 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3668,6 +3668,12 @@ sh_gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
             pfec[0] &= ~PFEC_page_present;
         if ( missing & _PAGE_INVALID_BITS )
             pfec[0] |= PFEC_reserved_bit;
+        /* 
+         * Intel 64 Volume 3, Section 4.7: The PFEC_insn_fetch flag is
+         * set only when NX or SMEP is enabled.
+         */
+        if ( !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
+            pfec[0] &= ~PFEC_insn_fetch;
         return INVALID_GFN;
     }
     gfn = guest_walk_to_gfn(&gw);
-- 
2.1.4
