On Tue, May 21, 2013 at 03:52:12PM +0800, Xiao Guangrong wrote:
> On 05/19/2013 12:52 PM, Jun Nakajima wrote:
> > From: Nadav Har'El <n...@il.ibm.com>
> >
> > This is the first patch in a series which adds nested EPT support to KVM's
> > nested VMX. Nested EPT means emulating EPT for an L1 guest so that L1 can
> > use EPT when running a nested guest L2. When L1 uses EPT, it allows the L2
> > guest to set its own cr3 and take its own page faults without either of L0
> > or L1 getting involved. This often significantly improves L2's performance
> > over the previous two alternatives (shadow page tables over EPT, and shadow
> > page tables over shadow page tables).
> >
> > This patch adds EPT support to paging_tmpl.h.
> >
> > paging_tmpl.h contains the code for reading and writing page tables. The
> > code for 32-bit and 64-bit tables is very similar, but not identical, so
> > paging_tmpl.h is #include'd twice in mmu.c, once with PTTYPE=32 and once
> > with PTTYPE=64, and this generates the two sets of similar functions.
> >
> > There are subtle but important differences between the format of EPT tables
> > and that of ordinary x86 64-bit page tables, so for nested EPT we need a
> > third set of functions to read the guest EPT table and to write the shadow
> > EPT table.
> >
> > So this patch adds a third PTTYPE, PTTYPE_EPT, which creates functions
> > (prefixed with "EPT") which correctly read and write EPT tables.
> >
> > Signed-off-by: Nadav Har'El <n...@il.ibm.com>
> > Signed-off-by: Jun Nakajima <jun.nakaj...@intel.com>
> > Signed-off-by: Xinhao Xu <xinhao...@intel.com>
> > ---
> >  arch/x86/kvm/mmu.c         |  5 +++++
> >  arch/x86/kvm/paging_tmpl.h | 43 +++++++++++++++++++++++++++++++++++++++++--
> >  2 files changed, 46 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index 117233f..6c1670f 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -3397,6 +3397,11 @@ static inline bool is_last_gpte(struct kvm_mmu *mmu, unsigned level, unsigned gp
> >  	return mmu->last_pte_bitmap & (1 << index);
> >  }
> >
> > +#define PTTYPE_EPT 18 /* arbitrary */
> > +#define PTTYPE PTTYPE_EPT
> > +#include "paging_tmpl.h"
> > +#undef PTTYPE
> > +
> >  #define PTTYPE 64
> >  #include "paging_tmpl.h"
> >  #undef PTTYPE
> > diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> > index df34d4a..4c45654 100644
> > --- a/arch/x86/kvm/paging_tmpl.h
> > +++ b/arch/x86/kvm/paging_tmpl.h
> > @@ -50,6 +50,22 @@
> >  	#define PT_LEVEL_BITS PT32_LEVEL_BITS
> >  	#define PT_MAX_FULL_LEVELS 2
> >  	#define CMPXCHG cmpxchg
> > +#elif PTTYPE == PTTYPE_EPT
> > +	#define pt_element_t u64
> > +	#define guest_walker guest_walkerEPT
> > +	#define FNAME(name) EPT_##name
> > +	#define PT_BASE_ADDR_MASK PT64_BASE_ADDR_MASK
> > +	#define PT_LVL_ADDR_MASK(lvl) PT64_LVL_ADDR_MASK(lvl)
> > +	#define PT_LVL_OFFSET_MASK(lvl) PT64_LVL_OFFSET_MASK(lvl)
> > +	#define PT_INDEX(addr, level) PT64_INDEX(addr, level)
> > +	#define PT_LEVEL_BITS PT64_LEVEL_BITS
> > +	#ifdef CONFIG_X86_64
> > +	#define PT_MAX_FULL_LEVELS 4
> > +	#define CMPXCHG cmpxchg
> > +	#else
> > +	#define CMPXCHG cmpxchg64
>
> CMPXCHG is only used in FNAME(cmpxchg_gpte), but you commented it out later.
> Do we really need it?
>
> > +	#define PT_MAX_FULL_LEVELS 2
>
> And the SDM says:
>
> "It uses a page-walk length of 4, meaning that at most 4 EPT paging-structure
> entries are accessed to translate a guest-physical address." Is my SDM
> obsolete?
> Which kind of processor supports page-walk length = 2?
>
> It seems your patch is not able to handle the case where the guest uses
> walk-length = 2 while running on a host with walk-length = 4.
> (Please refer to how sp->role.quadrant is handled in FNAME(get_level1_sp_gpa)
> in the current code.)

But since EPT always has 4 levels on all existing CPUs, this is not an issue; the
only case we need to worry about is guest walk-length == host walk-length == 4.
Or have I misunderstood what you mean here?
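For completeness, the preprocessor trick the patch builds on can be summarized
with the minimal sketch below. This is not the actual KVM code (the header name
pt_template.h and the gpte_index() helper are made up purely for illustration),
but it shows how including one template header several times, each time with a
different PTTYPE, lets FNAME() stamp out a separate, prefixed copy of every
function:

/* pt_template.h -- hypothetical stand-in for paging_tmpl.h */
#if PTTYPE == 64
 #define FNAME(name) paging64_##name
 #define PT_LEVEL_BITS 9
#elif PTTYPE == 32
 #define FNAME(name) paging32_##name
 #define PT_LEVEL_BITS 10
#elif PTTYPE == PTTYPE_EPT
 #define FNAME(name) EPT_##name
 #define PT_LEVEL_BITS 9
#else
 #error Invalid PTTYPE value
#endif

/* Same body for every table format; only the function prefix and the
 * per-level index width differ between inclusions. */
static inline int FNAME(gpte_index)(unsigned long long addr, int level)
{
	return (addr >> (12 + (level - 1) * PT_LEVEL_BITS)) &
	       ((1 << PT_LEVEL_BITS) - 1);
}

#undef PT_LEVEL_BITS
#undef FNAME

/* mmu.c side: one inclusion per guest page-table format */
#define PTTYPE_EPT 18		/* arbitrary, must differ from 32 and 64 */

#define PTTYPE PTTYPE_EPT
#include "pt_template.h"	/* generates EPT_gpte_index() */
#undef PTTYPE

#define PTTYPE 64
#include "pt_template.h"	/* generates paging64_gpte_index() */
#undef PTTYPE

#define PTTYPE 32
#include "pt_template.h"	/* generates paging32_gpte_index() */
#undef PTTYPE

With PT_MAX_FULL_LEVELS = 4 and 9 index bits per level, an EPT walk consumes
4 * 9 + 12 = 48 bits of guest-physical address, which is exactly the fixed
page-walk length of 4 that the SDM describes.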
--
Gleb.