Re: [PATCH v2 0/9] cleanup around kvm_sync_page, and a few micro-optimizations

2016-03-08 Thread Takuya Yoshikawa
On 2016/03/08 17:30, Paolo Bonzini wrote:
> On 08/03/2016 09:00, Takuya Yoshikawa wrote:
>>> KVM: MMU: introduce kvm_mmu_flush_or_zap
>>> KVM: MMU: move TLB flush out of __kvm_sync_page
>>> KVM: MMU: use kvm_sync_page in kvm_sync_pages
>>> KVM: MMU: cleanup __kvm_sync_page and its callers
>>> KVM: MMU: invert return value of mmu.sync_page and *kvm_sync_page*
>>> KVM: MMU: move zap/flush to kvm_mmu_get_page
>>> KVM: MMU: coalesce more page zapping in mmu_sync_children
>>
>> 1-7 look good to me.
> 
> Great, these are the ones where I wanted more review.
> 
> Can I add your Reviewed-by for these?

Yes, please.

Reviewed-by: Takuya Yoshikawa 

  Takuya





Re: [PATCH v2 0/9] cleanup around kvm_sync_page, and a few micro-optimizations

2016-03-08 Thread Takuya Yoshikawa
On 2016/03/07 23:15, Paolo Bonzini wrote:
> Having committed the ubsan fixes, these are the cleanups that are left.
> 
> Compared to v1, I have fixed the patch to coalesce page zapping after
> mmu_sync_children (as requested by Takuya and Guangrong), and I have
> rewritten is_last_gpte again in an even simpler way.
> 
> Paolo
> 
> Paolo Bonzini (9):
>KVM: MMU: introduce kvm_mmu_flush_or_zap
>KVM: MMU: move TLB flush out of __kvm_sync_page
>KVM: MMU: use kvm_sync_page in kvm_sync_pages
>KVM: MMU: cleanup __kvm_sync_page and its callers
>KVM: MMU: invert return value of mmu.sync_page and *kvm_sync_page*
>KVM: MMU: move zap/flush to kvm_mmu_get_page
>KVM: MMU: coalesce more page zapping in mmu_sync_children

1-7 look good to me.

>KVM: MMU: simplify is_last_gpte
>KVM: MMU: micro-optimize gpte_access

8 and 9 look reasonable to me, though I read them only briefly.

  Takuya




Re: [PATCH 09/12] KVM: MMU: coalesce zapping page after mmu_sync_children

2016-02-24 Thread Takuya Yoshikawa
On 2016/02/24 22:17, Paolo Bonzini wrote:
> Move the call to kvm_mmu_flush_or_zap outside the loop.
> 
> Signed-off-by: Paolo Bonzini 
> ---
>   arch/x86/kvm/mmu.c | 9 ++---
>   1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 725316df32ec..6d47b5c43246 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2029,24 +2029,27 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
>   struct mmu_page_path parents;
>   struct kvm_mmu_pages pages;
>   LIST_HEAD(invalid_list);
> + bool flush = false;
>   
>   while (mmu_unsync_walk(parent, &pages)) {
>   bool protected = false;
> - bool flush = false;
>   
>   for_each_sp(pages, sp, parents, i)
>   protected |= rmap_write_protect(vcpu, sp->gfn);
>   
> - if (protected)
> + if (protected) {
>   kvm_flush_remote_tlbs(vcpu->kvm);
> + flush = false;
> + }
>   
>   for_each_sp(pages, sp, parents, i) {
>   flush |= kvm_sync_page(vcpu, sp, &invalid_list);
>   mmu_pages_clear_parents(&parents);
>   }
> - kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
>   cond_resched_lock(&vcpu->kvm->mmu_lock);

This may release the mmu_lock before committing the zapping.
Is it safe?  If so, we may want to see the reason in the changelog.

  Takuya

>   }
> +
> + kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
>   }
>   
>   static void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp)
> 





[PATCH 0/2] KVM: x86: MMU: Clean up handle_mmio_page_fault() handling in kvm_mmu_page_fault()

2016-02-22 Thread Takuya Yoshikawa
The end result is very similar to handle_ept_misconfig()'s corresponding code.

It may also be possible to change handle_ept_misconfig() not to call
handle_mmio_page_fault() separately from kvm_mmu_page_fault():
the only difference seems to be whether it checks for PFERR_RSVD_MASK.
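As a rough illustration of that idea (not part of this series, and simplified: the
existing handler's other checks are omitted), handle_ept_misconfig() could simply
forward the fault to kvm_mmu_page_fault() with PFERR_RSVD_MASK set, relying on the
handling added in patch 2/2:

    /*
     * Illustrative sketch only: let kvm_mmu_page_fault() handle the
     * reserved-bit (MMIO) case instead of calling handle_mmio_page_fault()
     * directly from the EPT misconfig handler.
     */
    static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
    {
            gpa_t gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);

            /* EPT misconfigurations are caused by KVM's MMIO sptes. */
            return kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
    }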

Takuya Yoshikawa (2):
  KVM: MMU: Consolidate quickly_check_mmio_pf() and is_mmio_page_fault()
  KVM: MMU: Move handle_mmio_page_fault() call to kvm_mmu_page_fault()

 arch/x86/kvm/mmu.c | 54 +-
 arch/x86/kvm/paging_tmpl.h | 19 ++--
 2 files changed, 26 insertions(+), 47 deletions(-)

-- 
2.1.0





[PATCH 2/2] KVM: x86: MMU: Move handle_mmio_page_fault() call to kvm_mmu_page_fault()

2016-02-22 Thread Takuya Yoshikawa
Rather than placing a handle_mmio_page_fault() call in each
vcpu->arch.mmu.page_fault() handler, moving it up to
kvm_mmu_page_fault() makes the code better:

 - avoids code duplication
 - for kvm_arch_async_page_ready(), which is the other caller of
   vcpu->arch.mmu.page_fault(), removes an extra error_code check
 - avoids returning both RET_MMIO_PF_* values and raw integer values
   from vcpu->arch.mmu.page_fault()

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 39 ---
 arch/x86/kvm/paging_tmpl.h | 19 ++-
 2 files changed, 22 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a28b734..2ce3892 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3370,13 +3370,6 @@ static int nonpaging_page_fault(struct kvm_vcpu *vcpu, 
gva_t gva,
 
pgprintk("%s: gva %lx error %x\n", __func__, gva, error_code);
 
-   if (unlikely(error_code & PFERR_RSVD_MASK)) {
-   r = handle_mmio_page_fault(vcpu, gva, true);
-
-   if (likely(r != RET_MMIO_PF_INVALID))
-   return r;
-   }
-
r = mmu_topup_memory_caches(vcpu);
if (r)
return r;
@@ -3460,13 +3453,6 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t 
gpa, u32 error_code,
 
MMU_WARN_ON(!VALID_PAGE(vcpu->arch.mmu.root_hpa));
 
-   if (unlikely(error_code & PFERR_RSVD_MASK)) {
-   r = handle_mmio_page_fault(vcpu, gpa, true);
-
-   if (likely(r != RET_MMIO_PF_INVALID))
-   return r;
-   }
-
r = mmu_topup_memory_caches(vcpu);
if (r)
return r;
@@ -4361,18 +4347,27 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t 
cr2, u32 error_code,
enum emulation_result er;
bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
 
+   if (unlikely(error_code & PFERR_RSVD_MASK)) {
+   r = handle_mmio_page_fault(vcpu, cr2, direct);
+   if (r == RET_MMIO_PF_EMULATE) {
+   emulation_type = 0;
+   goto emulate;
+   }
+   if (r == RET_MMIO_PF_RETRY)
+   return 1;
+   if (r < 0)
+   return r;
+   }
+
r = vcpu->arch.mmu.page_fault(vcpu, cr2, error_code, false);
if (r < 0)
-   goto out;
-
-   if (!r) {
-   r = 1;
-   goto out;
-   }
+   return r;
+   if (!r)
+   return 1;
 
if (mmio_info_in_cache(vcpu, cr2, direct))
emulation_type = 0;
-
+emulate:
er = x86_emulate_instruction(vcpu, cr2, emulation_type, insn, insn_len);
 
switch (er) {
@@ -4386,8 +4381,6 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, 
u32 error_code,
default:
BUG();
}
-out:
-   return r;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
 
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6c9fed9..05827ff 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -702,24 +702,17 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t 
addr, u32 error_code,
 
pgprintk("%s: addr %lx err %x\n", __func__, addr, error_code);
 
-   if (unlikely(error_code & PFERR_RSVD_MASK)) {
-   r = handle_mmio_page_fault(vcpu, addr, mmu_is_nested(vcpu));
-   if (likely(r != RET_MMIO_PF_INVALID))
-   return r;
-
-   /*
-* page fault with PFEC.RSVD  = 1 is caused by shadow
-* page fault, should not be used to walk guest page
-* table.
-*/
-   error_code &= ~PFERR_RSVD_MASK;
-   };
-
r = mmu_topup_memory_caches(vcpu);
if (r)
return r;
 
/*
+* If PFEC.RSVD is set, this is a shadow page fault.
+* The bit needs to be cleared before walking guest page tables.
+*/
+   error_code &= ~PFERR_RSVD_MASK;
+
+   /*
 * Look up the guest pte for the faulting address.
 */
r = FNAME(walk_addr)(&walker, vcpu, addr, error_code);
-- 
2.1.0





[PATCH 1/2] KVM: x86: MMU: Consolidate quickly_check_mmio_pf() and is_mmio_page_fault()

2016-02-22 Thread Takuya Yoshikawa
These two have only slight differences:
 - whether 'addr' is of type u64 or of type gva_t
 - whether they have 'direct' parameter or not

Concerning the former, quickly_check_mmio_pf()'s u64 is better because
'addr' needs to be able to hold both a guest physical address and a
guest virtual address.

The latter is just a stylistic issue as we can always calculate the mode
from the 'vcpu' as is_mmio_page_fault() does.  This patch keeps the
parameter to make the following patch cleaner.

In addition, the patch renames the function to mmio_info_in_cache() to
make it clear what it actually checks for.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 15 ---
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 95a955d..a28b734 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3273,7 +3273,7 @@ static bool is_shadow_zero_bits_set(struct kvm_mmu *mmu, 
u64 spte, int level)
return __is_rsvd_bits_set(&mmu->shadow_zero_check, spte, level);
 }
 
-static bool quickly_check_mmio_pf(struct kvm_vcpu *vcpu, u64 addr, bool direct)
+static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 {
if (direct)
return vcpu_match_mmio_gpa(vcpu, addr);
@@ -3332,7 +3332,7 @@ int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 
addr, bool direct)
u64 spte;
bool reserved;
 
-   if (quickly_check_mmio_pf(vcpu, addr, direct))
+   if (mmio_info_in_cache(vcpu, addr, direct))
return RET_MMIO_PF_EMULATE;
 
reserved = walk_shadow_page_get_mmio_spte(vcpu, addr, &spte);
@@ -4354,19 +4354,12 @@ static void make_mmu_pages_available(struct kvm_vcpu 
*vcpu)
kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 }
 
-static bool is_mmio_page_fault(struct kvm_vcpu *vcpu, gva_t addr)
-{
-   if (vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu))
-   return vcpu_match_mmio_gpa(vcpu, addr);
-
-   return vcpu_match_mmio_gva(vcpu, addr);
-}
-
 int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code,
   void *insn, int insn_len)
 {
int r, emulation_type = EMULTYPE_RETRY;
enum emulation_result er;
+   bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
 
r = vcpu->arch.mmu.page_fault(vcpu, cr2, error_code, false);
if (r < 0)
@@ -4377,7 +4370,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, 
u32 error_code,
goto out;
}
 
-   if (is_mmio_page_fault(vcpu, cr2))
+   if (mmio_info_in_cache(vcpu, cr2, direct))
emulation_type = 0;
 
er = x86_emulate_instruction(vcpu, cr2, emulation_type, insn, insn_len);
-- 
2.1.0





[PATCH] KVM: x86: MMU: Use clear_page() instead of init_shadow_page_table()

2015-12-18 Thread Takuya Yoshikawa
Not just to clean up the code, but to make it faster by using enhanced
instructions: the initialization became 20-30% faster on our test machine.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 10 +-
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a1a3d19..7f5a82b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2041,14 +2041,6 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
}
 }
 
-static void init_shadow_page_table(struct kvm_mmu_page *sp)
-{
-   int i;
-
-   for (i = 0; i < PT64_ENT_PER_PAGE; ++i)
-   sp->spt[i] = 0ull;
-}
-
 static void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp)
 {
sp->write_flooding_count = 0;
@@ -2128,7 +2120,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
account_shadowed(vcpu->kvm, sp);
}
sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen;
-   init_shadow_page_table(sp);
+   clear_page(sp->spt);
trace_kvm_mmu_get_page(sp, true);
return sp;
 }
-- 
2.1.0





[PATCH 2/3] KVM: x86: MMU: Use for_each_rmap_spte macro instead of pte_list_walk()

2015-11-26 Thread Takuya Yoshikawa
Now that kvm_mmu_get_page() has been changed so that a parent pointer is never
added to the sp->parent_ptes chain before the entry it points to is properly
set, we can use the for_each_rmap_spte macro instead of pte_list_walk().

Signed-off-by: Takuya Yoshikawa 
Cc: Xiao Guangrong 
---
 arch/x86/kvm/mmu.c | 27 ++-
 1 file changed, 6 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ec61b22..204c7d4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1007,26 +1007,6 @@ static void pte_list_remove(u64 *spte, struct 
kvm_rmap_head *rmap_head)
}
 }
 
-typedef void (*pte_list_walk_fn) (u64 *spte);
-static void pte_list_walk(struct kvm_rmap_head *rmap_head, pte_list_walk_fn fn)
-{
-   struct pte_list_desc *desc;
-   int i;
-
-   if (!rmap_head->val)
-   return;
-
-   if (!(rmap_head->val & 1))
-   return fn((u64 *)rmap_head->val);
-
-   desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-   while (desc) {
-   for (i = 0; i < PTE_LIST_EXT && desc->sptes[i]; ++i)
-   fn(desc->sptes[i]);
-   desc = desc->more;
-   }
-}
-
 static struct kvm_rmap_head *__gfn_to_rmap(gfn_t gfn, int level,
   struct kvm_memory_slot *slot)
 {
@@ -1749,7 +1729,12 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct 
kvm_vcpu *vcpu, int direct
 static void mark_unsync(u64 *spte);
 static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
 {
-   pte_list_walk(&sp->parent_ptes, mark_unsync);
+   u64 *sptep;
+   struct rmap_iterator iter;
+
+   for_each_rmap_spte(&sp->parent_ptes, &iter, sptep) {
+   mark_unsync(sptep);
+   }
 }
 
 static void mark_unsync(u64 *spte)
-- 
2.1.0



[PATCH 1/3] KVM: x86: MMU: Move parent_pte handling from kvm_mmu_get_page() to link_shadow_page()

2015-11-26 Thread Takuya Yoshikawa
Every time kvm_mmu_get_page() is called with a non-NULL parent_pte
argument, link_shadow_page() follows that to set the parent entry so
that the new mapping will point to the returned page table.

Moving parent_pte handling there allows us to clean up the code, because
parent_pte is passed to kvm_mmu_get_page() only for mark_unsync() and
mmu_page_add_parent_pte().

In addition, the patch avoids calling mark_unsync() for parents in the
sp->parent_ptes chain other than the newly added parent_pte, because they
have been there since before the current page fault handling started.

Signed-off-by: Takuya Yoshikawa 
Cc: Xiao Guangrong 
---
 arch/x86/kvm/mmu.c | 23 +--
 arch/x86/kvm/paging_tmpl.h |  6 ++
 2 files changed, 11 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7f46e3e..ec61b22 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2119,12 +2119,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
if (sp->unsync && kvm_sync_page_transient(vcpu, sp))
break;
 
-   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
-   if (sp->unsync_children) {
+   if (sp->unsync_children)
kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
-   kvm_mmu_mark_parents_unsync(sp);
-   } else if (sp->unsync)
-   kvm_mmu_mark_parents_unsync(sp);
 
__clear_sp_write_flooding_count(sp);
trace_kvm_mmu_get_page(sp, false);
@@ -2135,8 +2131,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
 
sp = kvm_mmu_alloc_page(vcpu, direct);
 
-   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
-
sp->gfn = gfn;
sp->role = role;
hlist_add_head(&sp->hash_link,
@@ -2204,7 +2198,8 @@ static void shadow_walk_next(struct 
kvm_shadow_walk_iterator *iterator)
return __shadow_walk_next(iterator, *iterator->sptep);
 }
 
-static void link_shadow_page(u64 *sptep, struct kvm_mmu_page *sp)
+static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
+struct kvm_mmu_page *sp)
 {
u64 spte;
 
@@ -2215,6 +2210,11 @@ static void link_shadow_page(u64 *sptep, struct 
kvm_mmu_page *sp)
   shadow_user_mask | shadow_x_mask | shadow_accessed_mask;
 
mmu_spte_set(sptep, spte);
+
+   mmu_page_add_parent_pte(vcpu, sp, sptep);
+
+   if (sp->unsync_children || sp->unsync)
+   mark_unsync(sptep);
 }
 
 static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
@@ -2273,11 +2273,6 @@ static void kvm_mmu_page_unlink_children(struct kvm *kvm,
mmu_page_zap_pte(kvm, sp, sp->spt + i);
 }
 
-static void kvm_mmu_put_page(struct kvm_mmu_page *sp, u64 *parent_pte)
-{
-   mmu_page_remove_parent_pte(sp, parent_pte);
-}
-
 static void kvm_mmu_unlink_parents(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
u64 *sptep;
@@ -2743,7 +2738,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, 
int map_writable,
  iterator.level - 1,
  1, ACC_ALL, iterator.sptep);
 
-   link_shadow_page(iterator.sptep, sp);
+   link_shadow_page(vcpu, iterator.sptep, sp);
}
}
return emulate;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 11650ea..0dcf9c8 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -598,7 +598,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
goto out_gpte_changed;
 
if (sp)
-   link_shadow_page(it.sptep, sp);
+   link_shadow_page(vcpu, it.sptep, sp);
}
 
for (;
@@ -618,7 +618,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 
sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
  true, direct_access, it.sptep);
-   link_shadow_page(it.sptep, sp);
+   link_shadow_page(vcpu, it.sptep, sp);
}
 
clear_sp_write_flooding_count(it.sptep);
@@ -629,8 +629,6 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
return emulate;
 
 out_gpte_changed:
-   if (sp)
-   kvm_mmu_put_page(sp, it.sptep);
kvm_release_pfn_clean(pfn);
return 0;
 }
-- 
2.1.0



[PATCH 3/3] KVM: x86: MMU: Remove unused parameter parent_pte from kvm_mmu_get_page()

2015-11-26 Thread Takuya Yoshikawa
Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 20 +++-
 arch/x86/kvm/paging_tmpl.h |  4 ++--
 2 files changed, 9 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 204c7d4..a1a3d19 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2071,8 +2071,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
 gva_t gaddr,
 unsigned level,
 int direct,
-unsigned access,
-u64 *parent_pte)
+unsigned access)
 {
union kvm_mmu_page_role role;
unsigned quadrant;
@@ -2720,8 +2719,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, 
int map_writable,
base_addr &= PT64_LVL_ADDR_MASK(iterator.level);
pseudo_gfn = base_addr >> PAGE_SHIFT;
sp = kvm_mmu_get_page(vcpu, pseudo_gfn, iterator.addr,
- iterator.level - 1,
- 1, ACC_ALL, iterator.sptep);
+ iterator.level - 1, 1, ACC_ALL);
 
link_shadow_page(vcpu, iterator.sptep, sp);
}
@@ -3078,8 +3076,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL) {
spin_lock(&vcpu->kvm->mmu_lock);
make_mmu_pages_available(vcpu);
-   sp = kvm_mmu_get_page(vcpu, 0, 0, PT64_ROOT_LEVEL,
- 1, ACC_ALL, NULL);
+   sp = kvm_mmu_get_page(vcpu, 0, 0, PT64_ROOT_LEVEL, 1, ACC_ALL);
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
vcpu->arch.mmu.root_hpa = __pa(sp->spt);
@@ -3091,9 +3088,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
spin_lock(&vcpu->kvm->mmu_lock);
make_mmu_pages_available(vcpu);
sp = kvm_mmu_get_page(vcpu, i << (30 - PAGE_SHIFT),
- i << 30,
- PT32_ROOT_LEVEL, 1, ACC_ALL,
- NULL);
+   i << 30, PT32_ROOT_LEVEL, 1, ACC_ALL);
root = __pa(sp->spt);
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
@@ -3130,7 +3125,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
spin_lock(&vcpu->kvm->mmu_lock);
make_mmu_pages_available(vcpu);
sp = kvm_mmu_get_page(vcpu, root_gfn, 0, PT64_ROOT_LEVEL,
- 0, ACC_ALL, NULL);
+ 0, ACC_ALL);
root = __pa(sp->spt);
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
@@ -3163,9 +3158,8 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
}
spin_lock(&vcpu->kvm->mmu_lock);
make_mmu_pages_available(vcpu);
-   sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30,
- PT32_ROOT_LEVEL, 0,
- ACC_ALL, NULL);
+   sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30, PT32_ROOT_LEVEL,
+ 0, ACC_ALL);
root = __pa(sp->spt);
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 0dcf9c8..91e939b 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -587,7 +587,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
if (!is_shadow_present_pte(*it.sptep)) {
table_gfn = gw->table_gfn[it.level - 2];
sp = kvm_mmu_get_page(vcpu, table_gfn, addr, it.level-1,
- false, access, it.sptep);
+ false, access);
}
 
/*
@@ -617,7 +617,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
direct_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
 
sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
- true, direct_access, it.sptep);
+ true, direct_access);
link_shadow_page(vcpu, it.sptep, sp);

[PATCH V4 0/3] KVM: x86: MMU: Clean up x86's mmu code for future work - part2

2015-11-26 Thread Takuya Yoshikawa
Guests worked normally in shadow paging mode (ept=0) on my test machine.

Please check whether the first two patches correctly reflect what you meant.

Takuya Yoshikawa (3):
  [1] KVM: x86: MMU: Move parent_pte handling from kvm_mmu_get_page() to 
link_shadow_page()
  [2] KVM: x86: MMU: Use for_each_rmap_spte macro instead of pte_list_walk()
  [3] KVM: x86: MMU: Remove unused parameter parent_pte from kvm_mmu_get_page()

 arch/x86/kvm/mmu.c | 70 +++---
 arch/x86/kvm/paging_tmpl.h | 10 +++
 2 files changed, 26 insertions(+), 54 deletions(-)

-- 
2.1.0



Re: [PATCH 09/10] KVM: x86: MMU: Move parent_pte handling from kvm_mmu_get_page() to link_shadow_page()

2015-11-25 Thread Takuya Yoshikawa

On 2015/11/26 1:32, Paolo Bonzini wrote:

On 20/11/2015 09:57, Xiao Guangrong wrote:



You can move this patch to the front of
[PATCH 08/10] KVM: x86: MMU: Use for_each_rmap_spte macro instead of
pte_list_walk()

By moving kvm_mmu_mark_parents_unsync() to after mmu_spte_set() (so that the
parent spte is already present at that point), you can directly clean up
for_each_rmap_spte().


So basically squash together the two patches (8/10 and 9/10) except the
change to kvm_mmu_mark_parents_unsync; then in the second patch switch
from pte_list_walk to for_each_rmap_spte.

That makes sense indeed.


Sorry for being late to respond to Xiao's suggestions.  I could not
use my development machine for a while this week.

In short, this kvm_mmu_mark_parents_unsync() call in kvm_mmu_get_page()
should have been mark_unsync() for the new parent_pte only, because we
are constructing the mappings from/to it, while the other parents in
sp->parent_ptes are not related to this fault?
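To illustrate the alternative I have in mind (a sketch only, reusing the names
from kvm_mmu_get_page()'s existing-page path in this series; not tested):

    /*
     * Sketch: when kvm_mmu_get_page() finds an existing sp, mark only the
     * newly added parent entry unsync; the parents already chained in
     * sp->parent_ptes were handled when they were added.
     */
    if (sp->unsync_children)
            kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);

    if ((sp->unsync_children || sp->unsync) && parent_pte)
            mark_unsync(parent_pte);

    mmu_page_add_parent_pte(vcpu, sp, parent_pte);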

As the code has been this way for some time, it is a bit scary to change,
but I'll do some tests without that extra kvm_mmu_mark_parents_unsync()
with a guest (with ept=0) this afternoon.

  Takuya




Re: [PATCH 08/10] KVM: x86: MMU: Use for_each_rmap_spte macro instead of pte_list_walk()

2015-11-20 Thread Takuya Yoshikawa

On 2015/11/20 17:46, Xiao Guangrong wrote:


You just ignored my comment on the previous version...


I'm sorry but please read the explanation in patch 00.
I've read your comments and I'm not ignoring you.

Since this patch set has become larger than expected, I'm sending
this version so that patches 01-07 can be applied first.

For patch 08-10, I think we need to check more because there seems
to be some confusion between us.  You can also read other discussions
between Marcelo, Paolo and me.

Anyway, since these three patches have been placed at the end of the
series now, I hope we can concentrate on them more easily than before.

Thanks,
  Takuya



[PATCH 10/10] KVM: x86: MMU: Remove unused parameter parent_pte from kvm_mmu_get_page()

2015-11-20 Thread Takuya Yoshikawa
Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 20 +++-
 arch/x86/kvm/paging_tmpl.h |  4 ++--
 2 files changed, 9 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b020323..9baf884 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2071,8 +2071,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
 gva_t gaddr,
 unsigned level,
 int direct,
-unsigned access,
-u64 *parent_pte)
+unsigned access)
 {
union kvm_mmu_page_role role;
unsigned quadrant;
@@ -2724,8 +2723,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, 
int map_writable,
base_addr &= PT64_LVL_ADDR_MASK(iterator.level);
pseudo_gfn = base_addr >> PAGE_SHIFT;
sp = kvm_mmu_get_page(vcpu, pseudo_gfn, iterator.addr,
- iterator.level - 1,
- 1, ACC_ALL, iterator.sptep);
+ iterator.level - 1, 1, ACC_ALL);
 
link_shadow_page(vcpu, iterator.sptep, sp);
}
@@ -3082,8 +3080,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL) {
spin_lock(&vcpu->kvm->mmu_lock);
make_mmu_pages_available(vcpu);
-   sp = kvm_mmu_get_page(vcpu, 0, 0, PT64_ROOT_LEVEL,
- 1, ACC_ALL, NULL);
+   sp = kvm_mmu_get_page(vcpu, 0, 0, PT64_ROOT_LEVEL, 1, ACC_ALL);
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
vcpu->arch.mmu.root_hpa = __pa(sp->spt);
@@ -3095,9 +3092,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
spin_lock(&vcpu->kvm->mmu_lock);
make_mmu_pages_available(vcpu);
sp = kvm_mmu_get_page(vcpu, i << (30 - PAGE_SHIFT),
- i << 30,
- PT32_ROOT_LEVEL, 1, ACC_ALL,
- NULL);
+   i << 30, PT32_ROOT_LEVEL, 1, ACC_ALL);
root = __pa(sp->spt);
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
@@ -3134,7 +3129,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
spin_lock(&vcpu->kvm->mmu_lock);
make_mmu_pages_available(vcpu);
sp = kvm_mmu_get_page(vcpu, root_gfn, 0, PT64_ROOT_LEVEL,
- 0, ACC_ALL, NULL);
+ 0, ACC_ALL);
root = __pa(sp->spt);
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
@@ -3167,9 +3162,8 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
}
spin_lock(&vcpu->kvm->mmu_lock);
make_mmu_pages_available(vcpu);
-   sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30,
- PT32_ROOT_LEVEL, 0,
- ACC_ALL, NULL);
+   sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30, PT32_ROOT_LEVEL,
+ 0, ACC_ALL);
root = __pa(sp->spt);
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 0dcf9c8..91e939b 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -587,7 +587,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
if (!is_shadow_present_pte(*it.sptep)) {
table_gfn = gw->table_gfn[it.level - 2];
sp = kvm_mmu_get_page(vcpu, table_gfn, addr, it.level-1,
- false, access, it.sptep);
+ false, access);
}
 
/*
@@ -617,7 +617,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
direct_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
 
sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
- true, direct_access, it.sptep);
+ true, direct_access);
link_shadow_page(vcpu, it.sptep, sp);

[PATCH 09/10] KVM: x86: MMU: Move parent_pte handling from kvm_mmu_get_page() to link_shadow_page()

2015-11-20 Thread Takuya Yoshikawa
Every time kvm_mmu_get_page() is called with a non-NULL parent_pte
argument, link_shadow_page() follows that to set the parent entry so
that the new mapping will point to the returned page table.

Moving parent_pte handling there allows us to clean up the code, because
parent_pte is passed to kvm_mmu_get_page() only for mark_unsync() and
mmu_page_add_parent_pte().

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 22 --
 arch/x86/kvm/paging_tmpl.h |  6 ++
 2 files changed, 10 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 4e29d9a..b020323 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2107,14 +2107,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
if (sp->unsync_children) {
kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
kvm_mmu_mark_parents_unsync(sp);
-   if (parent_pte)
-   mark_unsync(parent_pte);
} else if (sp->unsync) {
kvm_mmu_mark_parents_unsync(sp);
-   if (parent_pte)
-   mark_unsync(parent_pte);
}
-   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
 
__clear_sp_write_flooding_count(sp);
trace_kvm_mmu_get_page(sp, false);
@@ -2125,8 +2120,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
 
sp = kvm_mmu_alloc_page(vcpu, direct);
 
-   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
-
sp->gfn = gfn;
sp->role = role;
hlist_add_head(&sp->hash_link,
@@ -2194,7 +2187,8 @@ static void shadow_walk_next(struct 
kvm_shadow_walk_iterator *iterator)
return __shadow_walk_next(iterator, *iterator->sptep);
 }
 
-static void link_shadow_page(u64 *sptep, struct kvm_mmu_page *sp)
+static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
+struct kvm_mmu_page *sp)
 {
u64 spte;
 
@@ -2205,6 +2199,11 @@ static void link_shadow_page(u64 *sptep, struct 
kvm_mmu_page *sp)
   shadow_user_mask | shadow_x_mask | shadow_accessed_mask;
 
mmu_spte_set(sptep, spte);
+
+   if (sp->unsync_children || sp->unsync)
+   mark_unsync(sptep);
+
+   mmu_page_add_parent_pte(vcpu, sp, sptep);
 }
 
 static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
@@ -2263,11 +2262,6 @@ static void kvm_mmu_page_unlink_children(struct kvm *kvm,
mmu_page_zap_pte(kvm, sp, sp->spt + i);
 }
 
-static void kvm_mmu_put_page(struct kvm_mmu_page *sp, u64 *parent_pte)
-{
-   mmu_page_remove_parent_pte(sp, parent_pte);
-}
-
 static void kvm_mmu_unlink_parents(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
u64 *sptep;
@@ -2733,7 +2727,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, 
int map_writable,
  iterator.level - 1,
  1, ACC_ALL, iterator.sptep);
 
-   link_shadow_page(iterator.sptep, sp);
+   link_shadow_page(vcpu, iterator.sptep, sp);
}
}
return emulate;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 11650ea..0dcf9c8 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -598,7 +598,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
goto out_gpte_changed;
 
if (sp)
-   link_shadow_page(it.sptep, sp);
+   link_shadow_page(vcpu, it.sptep, sp);
}
 
for (;
@@ -618,7 +618,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 
sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
  true, direct_access, it.sptep);
-   link_shadow_page(it.sptep, sp);
+   link_shadow_page(vcpu, it.sptep, sp);
}
 
clear_sp_write_flooding_count(it.sptep);
@@ -629,8 +629,6 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
return emulate;
 
 out_gpte_changed:
-   if (sp)
-   kvm_mmu_put_page(sp, it.sptep);
kvm_release_pfn_clean(pfn);
return 0;
 }
-- 
2.1.0



[PATCH 08/10] KVM: x86: MMU: Use for_each_rmap_spte macro instead of pte_list_walk()

2015-11-20 Thread Takuya Yoshikawa
kvm_mmu_mark_parents_unsync() alone uses pte_list_walk(), which does
nearly the same thing as the for_each_rmap_spte macro.  The only difference
is that the is_shadow_present_pte() checks cannot be placed there, because
kvm_mmu_mark_parents_unsync() can be called with a new parent pointer
whose entry is not set yet.

By calling mark_unsync() separately for the new parent and adding the parent
pointer to the parent_ptes chain later in kvm_mmu_get_page(), the macro can
be used without any problem.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 36 +---
 1 file changed, 13 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7f46e3e..4e29d9a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1007,26 +1007,6 @@ static void pte_list_remove(u64 *spte, struct 
kvm_rmap_head *rmap_head)
}
 }
 
-typedef void (*pte_list_walk_fn) (u64 *spte);
-static void pte_list_walk(struct kvm_rmap_head *rmap_head, pte_list_walk_fn fn)
-{
-   struct pte_list_desc *desc;
-   int i;
-
-   if (!rmap_head->val)
-   return;
-
-   if (!(rmap_head->val & 1))
-   return fn((u64 *)rmap_head->val);
-
-   desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-   while (desc) {
-   for (i = 0; i < PTE_LIST_EXT && desc->sptes[i]; ++i)
-   fn(desc->sptes[i]);
-   desc = desc->more;
-   }
-}
-
 static struct kvm_rmap_head *__gfn_to_rmap(gfn_t gfn, int level,
   struct kvm_memory_slot *slot)
 {
@@ -1749,7 +1729,12 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct 
kvm_vcpu *vcpu, int direct
 static void mark_unsync(u64 *spte);
 static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
 {
-   pte_list_walk(&sp->parent_ptes, mark_unsync);
+   u64 *sptep;
+   struct rmap_iterator iter;
+
+   for_each_rmap_spte(&sp->parent_ptes, &iter, sptep) {
+   mark_unsync(sptep);
+   }
 }
 
 static void mark_unsync(u64 *spte)
@@ -2119,12 +2104,17 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
if (sp->unsync && kvm_sync_page_transient(vcpu, sp))
break;
 
-   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
if (sp->unsync_children) {
kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
kvm_mmu_mark_parents_unsync(sp);
-   } else if (sp->unsync)
+   if (parent_pte)
+   mark_unsync(parent_pte);
+   } else if (sp->unsync) {
kvm_mmu_mark_parents_unsync(sp);
+   if (parent_pte)
+   mark_unsync(parent_pte);
+   }
+   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
 
__clear_sp_write_flooding_count(sp);
trace_kvm_mmu_get_page(sp, false);
-- 
2.1.0



[PATCH 07/10] KVM: x86: MMU: Move initialization of parent_ptes out from kvm_mmu_alloc_page()

2015-11-20 Thread Takuya Yoshikawa
Make kvm_mmu_alloc_page() do just what its name says, and remove
the extra allocation error check and zero-initialization of parent_ptes:
shadow page headers allocated by kmem_cache_zalloc() are always in the
per-VCPU pools.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5b249d4..7f46e3e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1726,8 +1726,7 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
mmu_spte_clear_no_track(parent_pte);
 }
 
-static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu,
-  u64 *parent_pte, int direct)
+static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int 
direct)
 {
struct kvm_mmu_page *sp;
 
@@ -1743,8 +1742,6 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct 
kvm_vcpu *vcpu,
 * this feature. See the comments in kvm_zap_obsolete_pages().
 */
list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
-   sp->parent_ptes.val = 0;
-   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
kvm_mod_used_mmu_pages(vcpu->kvm, +1);
return sp;
 }
@@ -2133,10 +2130,13 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
trace_kvm_mmu_get_page(sp, false);
return sp;
}
+
++vcpu->kvm->stat.mmu_cache_miss;
-   sp = kvm_mmu_alloc_page(vcpu, parent_pte, direct);
-   if (!sp)
-   return sp;
+
+   sp = kvm_mmu_alloc_page(vcpu, direct);
+
+   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
+
sp->gfn = gfn;
sp->role = role;
hlist_add_head(&sp->hash_link,
-- 
2.1.0



[PATCH 06/10] KVM: x86: MMU: Consolidate BUG_ON checks for reverse-mapped sptes

2015-11-20 Thread Takuya Yoshikawa
At some call sites of rmap_get_first() and rmap_get_next(), BUG_ON is
placed right after the call to detect unrelated sptes which must not be
found in the reverse-mapping list.

Move this check into rmap_get_first/next() so that all call sites, not
just the users of the for_each_rmap_spte() macro, are checked the
same way.

One thing to keep in mind is that kvm_mmu_unlink_parents() also uses
rmap_get_first() to handle parent sptes.  The change will not break it
because parent sptes are present, at least until drop_parent_pte()
actually unlinks them, and not mmio-sptes.

Signed-off-by: Takuya Yoshikawa 
---
 Documentation/virtual/kvm/mmu.txt |  4 ++--
 arch/x86/kvm/mmu.c| 26 +-
 2 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/Documentation/virtual/kvm/mmu.txt 
b/Documentation/virtual/kvm/mmu.txt
index 3a4d681..daf9c0f 100644
--- a/Documentation/virtual/kvm/mmu.txt
+++ b/Documentation/virtual/kvm/mmu.txt
@@ -203,10 +203,10 @@ Shadow pages contain the following information:
 page cannot be destroyed.  See role.invalid.
   parent_ptes:
 The reverse mapping for the pte/ptes pointing at this page's spt. If
-parent_ptes bit 0 is zero, only one spte points at this pages and
+parent_ptes bit 0 is zero, only one spte points at this page and
 parent_ptes points at this single spte, otherwise, there exists multiple
 sptes pointing at this page and (parent_ptes & ~0x1) points at a data
-structure with a list of parent_ptes.
+structure with a list of parent sptes.
   unsync:
 If true, then the translations in this page may not match the guest's
 translation.  This is equivalent to the state of the tlb when a pte is
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3104748..5b249d4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1098,17 +1098,23 @@ struct rmap_iterator {
 static u64 *rmap_get_first(struct kvm_rmap_head *rmap_head,
   struct rmap_iterator *iter)
 {
+   u64 *sptep;
+
if (!rmap_head->val)
return NULL;
 
if (!(rmap_head->val & 1)) {
iter->desc = NULL;
-   return (u64 *)rmap_head->val;
+   sptep = (u64 *)rmap_head->val;
+   goto out;
}
 
iter->desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
iter->pos = 0;
-   return iter->desc->sptes[iter->pos];
+   sptep = iter->desc->sptes[iter->pos];
+out:
+   BUG_ON(!is_shadow_present_pte(*sptep));
+   return sptep;
 }
 
 /*
@@ -1118,14 +1124,14 @@ static u64 *rmap_get_first(struct kvm_rmap_head 
*rmap_head,
  */
 static u64 *rmap_get_next(struct rmap_iterator *iter)
 {
+   u64 *sptep;
+
if (iter->desc) {
if (iter->pos < PTE_LIST_EXT - 1) {
-   u64 *sptep;
-
++iter->pos;
sptep = iter->desc->sptes[iter->pos];
if (sptep)
-   return sptep;
+   goto out;
}
 
iter->desc = iter->desc->more;
@@ -1133,17 +1139,20 @@ static u64 *rmap_get_next(struct rmap_iterator *iter)
if (iter->desc) {
iter->pos = 0;
/* desc->sptes[0] cannot be NULL */
-   return iter->desc->sptes[iter->pos];
+   sptep = iter->desc->sptes[iter->pos];
+   goto out;
}
}
 
return NULL;
+out:
+   BUG_ON(!is_shadow_present_pte(*sptep));
+   return sptep;
 }
 
 #define for_each_rmap_spte(_rmap_head_, _iter_, _spte_)
\
for (_spte_ = rmap_get_first(_rmap_head_, _iter_);  \
-_spte_ && ({BUG_ON(!is_shadow_present_pte(*_spte_)); 1;}); \
-_spte_ = rmap_get_next(_iter_))
+_spte_; _spte_ = rmap_get_next(_iter_))
 
 static void drop_spte(struct kvm *kvm, u64 *sptep)
 {
@@ -1358,7 +1367,6 @@ static bool kvm_zap_rmapp(struct kvm *kvm, struct 
kvm_rmap_head *rmap_head)
bool flush = false;
 
while ((sptep = rmap_get_first(rmap_head, &iter))) {
-   BUG_ON(!(*sptep & PT_PRESENT_MASK));
rmap_printk("%s: spte %p %llx.\n", __func__, sptep, *sptep);
 
drop_spte(kvm, sptep);
-- 
2.1.0



[PATCH 05/10] KVM: x86: MMU: Remove is_rmap_spte() and use is_shadow_present_pte()

2015-11-20 Thread Takuya Yoshikawa
is_rmap_spte(), originally named is_rmap_pte(), was introduced when the
simple reverse mapping was implemented by commit cd4a4e5374110444
("[PATCH] KVM: MMU: Implement simple reverse mapping").  At that point,
its role was clear and only rmap_add() and rmap_remove() were using it
to select sptes that need to be reverse-mapped.

Independently of that, is_shadow_present_pte() was first introduced by
commit c7addb902054195b ("KVM: Allow not-present guest page faults to
bypass kvm") to do bypass_guest_pf optimization, which does not exist
any more.

These two seem to have changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.

Since using both of them without clear distinction just makes the code
confusing, remove is_rmap_spte().

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c   | 13 -
 arch/x86/kvm/mmu_audit.c |  2 +-
 2 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 74c120c..3104748 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -311,11 +311,6 @@ static int is_large_pte(u64 pte)
return pte & PT_PAGE_SIZE_MASK;
 }
 
-static int is_rmap_spte(u64 pte)
-{
-   return is_shadow_present_pte(pte);
-}
-
 static int is_last_spte(u64 pte, int level)
 {
if (level == PT_PAGE_TABLE_LEVEL)
@@ -540,7 +535,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
u64 old_spte = *sptep;
bool ret = false;
 
-   WARN_ON(!is_rmap_spte(new_spte));
+   WARN_ON(!is_shadow_present_pte(new_spte));
 
if (!is_shadow_present_pte(old_spte)) {
mmu_spte_set(sptep, new_spte);
@@ -595,7 +590,7 @@ static int mmu_spte_clear_track_bits(u64 *sptep)
else
old_spte = __update_clear_spte_slow(sptep, 0ull);
 
-   if (!is_rmap_spte(old_spte))
+   if (!is_shadow_present_pte(old_spte))
return 0;
 
pfn = spte_to_pfn(old_spte);
@@ -2575,7 +2570,7 @@ static bool mmu_set_spte(struct kvm_vcpu *vcpu, u64 
*sptep, unsigned pte_access,
pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
 *sptep, write_fault, gfn);
 
-   if (is_rmap_spte(*sptep)) {
+   if (is_shadow_present_pte(*sptep)) {
/*
 * If we overwrite a PTE page pointer with a 2MB PMD, unlink
 * the parent of the now unreachable PTE.
@@ -2919,7 +2914,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t 
gva, int level,
 * If the mapping has been changed, let the vcpu fault on the
 * same address again.
 */
-   if (!is_rmap_spte(spte)) {
+   if (!is_shadow_present_pte(spte)) {
ret = true;
goto exit;
}
diff --git a/arch/x86/kvm/mmu_audit.c b/arch/x86/kvm/mmu_audit.c
index f7b0488..1cee3ec 100644
--- a/arch/x86/kvm/mmu_audit.c
+++ b/arch/x86/kvm/mmu_audit.c
@@ -183,7 +183,7 @@ static void check_mappings_rmap(struct kvm *kvm, struct 
kvm_mmu_page *sp)
return;
 
for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
-   if (!is_rmap_spte(sp->spt[i]))
+   if (!is_shadow_present_pte(sp->spt[i]))
continue;
 
inspect_spte_has_rmap(kvm, sp->spt + i);
-- 
2.1.0



[PATCH 04/10] KVM: x86: MMU: Make mmu_set_spte() return emulate value

2015-11-20 Thread Takuya Yoshikawa
mmu_set_spte()'s code is based on the assumption that the emulate
parameter has a valid pointer value if set_spte() returns true and
write_fault is not zero.  In other cases, emulate may be NULL, so a
NULL-check is needed.

Stop passing emulate pointer and make mmu_set_spte() return the emulate
value instead to clean up this complex interface.  Prefetch functions
can just throw away the return value.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 27 ++-
 arch/x86/kvm/paging_tmpl.h | 10 +-
 2 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9832bc9..74c120c 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2564,13 +2564,13 @@ done:
return ret;
 }
 
-static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-unsigned pte_access, int write_fault, int *emulate,
-int level, gfn_t gfn, pfn_t pfn, bool speculative,
-bool host_writable)
+static bool mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep, unsigned 
pte_access,
+int write_fault, int level, gfn_t gfn, pfn_t pfn,
+bool speculative, bool host_writable)
 {
int was_rmapped = 0;
int rmap_count;
+   bool emulate = false;
 
pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
 *sptep, write_fault, gfn);
@@ -2600,12 +2600,12 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 
*sptep,
if (set_spte(vcpu, sptep, pte_access, level, gfn, pfn, speculative,
  true, host_writable)) {
if (write_fault)
-   *emulate = 1;
+   emulate = true;
kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
}
 
-   if (unlikely(is_mmio_spte(*sptep) && emulate))
-   *emulate = 1;
+   if (unlikely(is_mmio_spte(*sptep)))
+   emulate = true;
 
pgprintk("%s: setting spte %llx\n", __func__, *sptep);
pgprintk("instantiating %s PTE (%s) at %llx (%llx) addr %p\n",
@@ -2624,6 +2624,8 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 
*sptep,
}
 
kvm_release_pfn_clean(pfn);
+
+   return emulate;
 }
 
 static pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
@@ -2658,9 +2660,8 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
return -1;
 
for (i = 0; i < ret; i++, gfn++, start++)
-   mmu_set_spte(vcpu, start, access, 0, NULL,
-sp->role.level, gfn, page_to_pfn(pages[i]),
-true, true);
+   mmu_set_spte(vcpu, start, access, 0, sp->role.level, gfn,
+page_to_pfn(pages[i]), true, true);
 
return 0;
 }
@@ -2721,9 +2722,9 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, 
int map_writable,
 
for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
if (iterator.level == level) {
-   mmu_set_spte(vcpu, iterator.sptep, ACC_ALL,
-write, &emulate, level, gfn, pfn,
-prefault, map_writable);
+   emulate = mmu_set_spte(vcpu, iterator.sptep, ACC_ALL,
+  write, level, gfn, pfn, prefault,
+  map_writable);
direct_pte_prefetch(vcpu, iterator.sptep);
++vcpu->stat.pf_fixed;
break;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index d8fdc5c..11650ea 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -475,8 +475,8 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct 
kvm_mmu_page *sp,
 * we call mmu_set_spte() with host_writable = true because
 * pte_prefetch_gfn_to_pfn always gets a writable pfn.
 */
-   mmu_set_spte(vcpu, spte, pte_access, 0, NULL, PT_PAGE_TABLE_LEVEL,
-gfn, pfn, true, true);
+   mmu_set_spte(vcpu, spte, pte_access, 0, PT_PAGE_TABLE_LEVEL, gfn, pfn,
+true, true);
 
return true;
 }
@@ -556,7 +556,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
struct kvm_mmu_page *sp = NULL;
struct kvm_shadow_walk_iterator it;
unsigned direct_access, access = gw->pt_access;
-   int top_level, emulate = 0;
+   int top_level, emulate;
 
direct_access = gw->pte_access;
 
@@ -622,8 +622,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
}
 
clear_sp_write_flooding_count(it.sptep);
-   mmu_set_spte(vcpu, it.sptep, gw->pte_access, write_fault, &emulate,
-it.level, gw->gfn

[PATCH 03/10] KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap

2015-11-20 Thread Takuya Yoshikawa
Both __mmu_unsync_walk() and mmu_pages_clear_parents() contain a three-line
sequence that clears a bit in the unsync child bitmap; the former places it
inside a loop block and uses a few goto statements to jump to it.

A new helper function, clear_unsync_child_bit(), makes the code cleaner.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 36 ++--
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 8a1593f..9832bc9 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1809,6 +1809,13 @@ static int mmu_pages_add(struct kvm_mmu_pages *pvec, 
struct kvm_mmu_page *sp,
return (pvec->nr == KVM_PAGE_ARRAY_NR);
 }
 
+static inline void clear_unsync_child_bit(struct kvm_mmu_page *sp, int idx)
+{
+   --sp->unsync_children;
+   WARN_ON((int)sp->unsync_children < 0);
+   __clear_bit(idx, sp->unsync_child_bitmap);
+}
+
 static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
   struct kvm_mmu_pages *pvec)
 {
@@ -1818,8 +1825,10 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
struct kvm_mmu_page *child;
u64 ent = sp->spt[i];
 
-   if (!is_shadow_present_pte(ent) || is_large_pte(ent))
-   goto clear_child_bitmap;
+   if (!is_shadow_present_pte(ent) || is_large_pte(ent)) {
+   clear_unsync_child_bit(sp, i);
+   continue;
+   }
 
child = page_header(ent & PT64_BASE_ADDR_MASK);
 
@@ -1828,28 +1837,21 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
return -ENOSPC;
 
ret = __mmu_unsync_walk(child, pvec);
-   if (!ret)
-   goto clear_child_bitmap;
-   else if (ret > 0)
+   if (!ret) {
+   clear_unsync_child_bit(sp, i);
+   continue;
+   } else if (ret > 0) {
nr_unsync_leaf += ret;
-   else
+   } else
return ret;
} else if (child->unsync) {
nr_unsync_leaf++;
if (mmu_pages_add(pvec, child, i))
return -ENOSPC;
} else
-goto clear_child_bitmap;
-
-   continue;
-
-clear_child_bitmap:
-   __clear_bit(i, sp->unsync_child_bitmap);
-   sp->unsync_children--;
-   WARN_ON((int)sp->unsync_children < 0);
+   clear_unsync_child_bit(sp, i);
}
 
-
return nr_unsync_leaf;
 }
 
@@ -2012,9 +2014,7 @@ static void mmu_pages_clear_parents(struct mmu_page_path 
*parents)
if (!sp)
return;
 
-   --sp->unsync_children;
-   WARN_ON((int)sp->unsync_children < 0);
-   __clear_bit(idx, sp->unsync_child_bitmap);
+   clear_unsync_child_bit(sp, idx);
level++;
} while (level < PT64_ROOT_LEVEL-1 && !sp->unsync_children);
 }
-- 
2.1.0



[PATCH 02/10] KVM: x86: MMU: Remove unused parameter of __direct_map()

2015-11-20 Thread Takuya Yoshikawa
Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 12 
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d9a6801..8a1593f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, 
u64 *sptep)
__direct_pte_prefetch(vcpu, sp, sptep);
 }
 
-static int __direct_map(struct kvm_vcpu *vcpu, gpa_t v, int write,
-   int map_writable, int level, gfn_t gfn, pfn_t pfn,
-   bool prefault)
+static int __direct_map(struct kvm_vcpu *vcpu, int write, int map_writable,
+   int level, gfn_t gfn, pfn_t pfn, bool prefault)
 {
struct kvm_shadow_walk_iterator iterator;
struct kvm_mmu_page *sp;
@@ -3018,11 +3017,9 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, 
u32 error_code,
make_mmu_pages_available(vcpu);
if (likely(!force_pt_level))
transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
-   r = __direct_map(vcpu, v, write, map_writable, level, gfn, pfn,
-prefault);
+   r = __direct_map(vcpu, write, map_writable, level, gfn, pfn, prefault);
spin_unlock(&vcpu->kvm->mmu_lock);
 
-
return r;
 
 out_unlock:
@@ -3531,8 +3528,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t 
gpa, u32 error_code,
make_mmu_pages_available(vcpu);
if (likely(!force_pt_level))
transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
-   r = __direct_map(vcpu, gpa, write, map_writable,
-level, gfn, pfn, prefault);
+   r = __direct_map(vcpu, write, map_writable, level, gfn, pfn, prefault);
spin_unlock(&vcpu->kvm->mmu_lock);
 
return r;
-- 
2.1.0



[PATCH 01/10] KVM: x86: MMU: Encapsulate the type of rmap-chain head in a new struct

2015-11-20 Thread Takuya Yoshikawa
The new struct kvm_rmap_head makes the code type-safe to some extent.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/include/asm/kvm_host.h |   8 +-
 arch/x86/kvm/mmu.c  | 196 
 arch/x86/kvm/mmu_audit.c|  13 +--
 3 files changed, 113 insertions(+), 104 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f608e17..8140077 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -214,6 +214,10 @@ union kvm_mmu_page_role {
};
 };
 
+struct kvm_rmap_head {
+   unsigned long val;
+};
+
 struct kvm_mmu_page {
struct list_head link;
struct hlist_node hash_link;
@@ -231,7 +235,7 @@ struct kvm_mmu_page {
bool unsync;
int root_count;  /* Currently serving as active root */
unsigned int unsync_children;
-   unsigned long parent_ptes;  /* Reverse mapping for parent_pte */
+   struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
 
/* The page is obsolete if mmu_valid_gen != kvm->arch.mmu_valid_gen.  */
unsigned long mmu_valid_gen;
@@ -606,7 +610,7 @@ struct kvm_lpage_info {
 };
 
 struct kvm_arch_memory_slot {
-   unsigned long *rmap[KVM_NR_PAGE_SIZES];
+   struct kvm_rmap_head *rmap[KVM_NR_PAGE_SIZES];
struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
 };
 
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 276d2f2..d9a6801 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -909,36 +909,35 @@ static int mapping_level(struct kvm_vcpu *vcpu, gfn_t 
large_gfn,
 }
 
 /*
- * Pte mapping structures:
+ * About rmap_head encoding:
  *
- * If pte_list bit zero is zero, then pte_list point to the spte.
- *
- * If pte_list bit zero is one, (then pte_list & ~1) points to a struct
+ * If the bit zero of rmap_head->val is clear, then it points to the only spte
+ * in this rmap chain. Otherwise, (rmap_head->val & ~1) points to a struct
  * pte_list_desc containing more mappings.
- *
- * Returns the number of pte entries before the spte was added or zero if
- * the spte was not added.
- *
+ */
+
+/*
+ * Returns the number of pointers in the rmap chain, not counting the new one.
  */
 static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
-   unsigned long *pte_list)
+   struct kvm_rmap_head *rmap_head)
 {
struct pte_list_desc *desc;
int i, count = 0;
 
-   if (!*pte_list) {
+   if (!rmap_head->val) {
rmap_printk("pte_list_add: %p %llx 0->1\n", spte, *spte);
-   *pte_list = (unsigned long)spte;
-   } else if (!(*pte_list & 1)) {
+   rmap_head->val = (unsigned long)spte;
+   } else if (!(rmap_head->val & 1)) {
rmap_printk("pte_list_add: %p %llx 1->many\n", spte, *spte);
desc = mmu_alloc_pte_list_desc(vcpu);
-   desc->sptes[0] = (u64 *)*pte_list;
+   desc->sptes[0] = (u64 *)rmap_head->val;
desc->sptes[1] = spte;
-   *pte_list = (unsigned long)desc | 1;
+   rmap_head->val = (unsigned long)desc | 1;
++count;
} else {
rmap_printk("pte_list_add: %p %llx many->many\n", spte, *spte);
-   desc = (struct pte_list_desc *)(*pte_list & ~1ul);
+   desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
while (desc->sptes[PTE_LIST_EXT-1] && desc->more) {
desc = desc->more;
count += PTE_LIST_EXT;
@@ -955,8 +954,9 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
 }
 
 static void
-pte_list_desc_remove_entry(unsigned long *pte_list, struct pte_list_desc *desc,
-  int i, struct pte_list_desc *prev_desc)
+pte_list_desc_remove_entry(struct kvm_rmap_head *rmap_head,
+  struct pte_list_desc *desc, int i,
+  struct pte_list_desc *prev_desc)
 {
int j;
 
@@ -967,43 +967,43 @@ pte_list_desc_remove_entry(unsigned long *pte_list, 
struct pte_list_desc *desc,
if (j != 0)
return;
if (!prev_desc && !desc->more)
-   *pte_list = (unsigned long)desc->sptes[0];
+   rmap_head->val = (unsigned long)desc->sptes[0];
else
if (prev_desc)
prev_desc->more = desc->more;
else
-   *pte_list = (unsigned long)desc->more | 1;
+   rmap_head->val = (unsigned long)desc->more | 1;
mmu_free_pte_list_desc(desc);
 }
 
-static void pte_list_remove(u64 *spte, unsigned long *pte_list)
+static void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
 {
struct pte_lis

[PATCH V3 00/10] KVM: x86: MMU: Clean up x86's mmu code for future work

2015-11-20 Thread Takuya Yoshikawa
It seems like you are all busy now, so I've arranged this patch set so that
the mechanical and trivial changes come first.

V2->V3:
Patch 01: Rebased and moved here. Updated stale comments.
  In the future, we may also want to use a union inside the struct to eliminate
  the cast to (u64 *) when the spte is stored directly in the head; a rough
  sketch of one possibility follows below.
Patch 02-05: No change.
  About patch 03: There was a comment on the usage of braces for a single line
  else-if statement from Xiao. As I answered, checkpatch did not complain about
  this, and when the corresponding if block has multiple lines, some developers
  prefer/recommend this style. Feel free to modify it if you don't like it.
Patch 06: Changed WARN_ON to BUG_ON as Marcelo suggested.
Patch 07: Removed unnecessary zero-initialization of sp->parent_ptes as Xiao
  suggested.

I think these seven patches are ready for inclusion.
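
About the union idea mentioned under patch 01 above: one possible shape
(only an illustration, not part of this series) would be

struct kvm_rmap_head {
	union {
		unsigned long val;	/* raw value; bit 0 set means desc list */
		u64 *spte;		/* only meaningful while bit 0 is clear */
	};
};

which keeps the size unchanged while letting the common single-spte case
be read without the (u64 *) cast.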

Patch 08-10: No change now, though there were a few comments.
  This patch set is not intended to optimize anything, so these patches try to
  keep the way mark_unsync() gets called as close to the original as possible:
  the only changes are where it gets called for the new parent_pte and where
  mmu_page_add_parent_pte() gets called.

For these three, I'm not sure what we should do now, still RFC?
We can also consider other approaches, e.g. moving link_shadow_page() in the
kvm_get_mmu_page() as Paolo suggested before.

  Takuya

Takuya Yoshikawa (10):
  [01] KVM: x86: MMU: Encapsulate the type of rmap-chain head in a new struct
  [02] KVM: x86: MMU: Remove unused parameter of __direct_map()
  [03] KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap
  [04] KVM: x86: MMU: Make mmu_set_spte() return emulate value
  [05] KVM: x86: MMU: Remove is_rmap_spte() and use is_shadow_present_pte()
  [06] KVM: x86: MMU: Consolidate BUG_ON checks for reverse-mapped sptes
  [07] KVM: x86: MMU: Move initialization of parent_ptes out from 
kvm_mmu_alloc_page()
  [08] KVM: x86: MMU: Use for_each_rmap_spte macro instead of pte_list_walk()
  [09] KVM: x86: MMU: Move parent_pte handling from kvm_mmu_get_page() to 
link_shadow_page()
  [10] KVM: x86: MMU: Remove unused parameter parent_pte from kvm_mmu_get_page()

 Documentation/virtual/kvm/mmu.txt |   4 +-
 arch/x86/include/asm/kvm_host.h   |   8 +-
 arch/x86/kvm/mmu.c| 370 ++
 arch/x86/kvm/mmu_audit.c  |  15 +-
 arch/x86/kvm/paging_tmpl.h|  20 +--
 5 files changed, 201 insertions(+), 216 deletions(-)

-- 
2.1.0



Re: [PATCH 02/10] KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap

2015-11-18 Thread Takuya Yoshikawa

On 2015/11/19 11:46, Xiao Guangrong wrote:


Actually, some people prefer to put braces when one of the
if/else-if/else cases has multiple lines.  You can see
some examples in kernel/sched/core.c: see hrtick_start(),
sched_fork(), free_sched_domain().

In our case, I thought putting braces would align the else-if
and else and make the code look a bit nicer, but I know this
may be just a matter of personal feeling.

In short, unless the maintainer, Paolo for this file, has any
preference, both ways will be accepted.


The reason why i pointed this out is that it is the style documented
in Documentation/CodingStyle:
| Do not unnecessarily use braces where a single statement will do.
|
|if (condition)
|action();
|


Ah, this is a different thing.  For this case, there is a consensus
and checkpatch will complain if we don't obey the rule.

What I explained was:

  if (condition) {
 line1;
 line2;  // multiple lines
  } else if {
 single-line-statement;  -- (*1)
  } else
 single-line-statement;  -- (*2)

For (*1) and (*2), especially for (*1), some people put braces.


Actually, Ingo Molnar hated this braces-style too much and blamed
many developers who used this style (include me, that why i was
nervous to see this style :( ).


I think he likes the coding style of kernel/sched/core.c very much,
as you know.  Actually that is one reason why I took it as an example.

Let's just choose the way which Paolo prefers for this time, I don't
know which is better.

Thank you,
  Takuya




Re: [PATCH 07/10] KVM: x86: MMU: Encapsulate the type of rmap-chain head in a new struct

2015-11-18 Thread Takuya Yoshikawa

On 2015/11/18 18:09, Paolo Bonzini wrote:


On 18/11/2015 04:21, Xiao Guangrong wrote:



On 11/12/2015 07:55 PM, Takuya Yoshikawa wrote:

@@ -1720,7 +1724,7 @@ static struct kvm_mmu_page
*kvm_mmu_alloc_page(struct kvm_vcpu *vcpu,
* this feature. See the comments in kvm_zap_obsolete_pages().
*/
   list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
-sp->parent_ptes = 0;
+sp->parent_ptes.val = 0;


The sp is allocated from kmem_cache_zalloc() so explicitly initialize it
to zero is not needed.


Right, but it should be a separate patch.

Takuya, since you are going to send another version of this series, can
you also:


Yes, I'm preparing to do so.


1) move this patch either to the beginning or to the end

2) include "KVM: x86: always set accessed bit in shadow PTEs", also near
the beginning of the series?


Commit 1c9a5e19b1af8a2c ("KVM: x86: MMU: always set accessed bit
in shadow PTEs") will be the first.

Then, the ordering will become something like this:

02: Encapsulate the type of rmap-chain head in a new struct
03: Remove unused parameter of __direct_map()
04: Add helper function to clear a bit in unsync child bitmap
05: Make mmu_set_spte() return emulate value
06: Remove is_rmap_spte() and use is_shadow_present_pte()

These five seem to be easy ones for you to apply: since patch 02
touches many places, it should go first to become the base of the
following work.

07: Consolidate BUG_ON checks for reverse-mapped sptes

I will change the WARN_ON to BUG_ON.  // Marcelo's comment

08: Move initialization of parent_ptes out from kvm_mmu_alloc_page()

In this patch, I will delete the "sp->parent_ptes.val = 0;" line since
this is the problem of kvm_mmu_alloc_page(), though not a new one.
  // Xiao's comment

09: Use for_each_rmap_spte macro instead of pte_list_walk()

There is some confusion between us: Paolo and I agreed that the
patch keeps the original way and calls mark_unsync() the same way
as before, but there are still comments from Marcelo and Xiao and
those comments seem to explain the code differently.

I will check again, but I may not change this one and the following
two patches in the next version.  If we can eliminate some of the
mark_unsync() calls, that will be kind of an optimization which this
series does not intend to achieve.

Anyway, by moving the non-trivial two patches (09 and 10) here,
reviewing will become easier and you can apply the other patches
separately.

10: Move parent_pte handling from kvm_mmu_get_page()
to link_shadow_page()
11: Remove unused parameter parent_pte from kvm_mmu_get_page()

Thanks,
  Takuya



Re: [PATCH 02/10] KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap

2015-11-18 Thread Takuya Yoshikawa

On 2015/11/18 11:44, Xiao Guangrong wrote:


On 11/12/2015 07:50 PM, Takuya Yoshikawa wrote:

+if (!ret) {
+clear_unsync_child_bit(sp, i);
+continue;
+} else if (ret > 0) {
  nr_unsync_leaf += ret;


Just a single line here, braces are unnecessary.


-else
+} else
  return ret;


I know we can eliminate the braces, but that does not mean
we should do so: there seems to be no consensus about this
style issue and checkpatch accepts both ways.

Actually, some people prefer to put braces when one of the
if/else-if/else cases has multiple lines.  You can see
some examples in kernel/sched/core.c: see hrtick_start(),
sched_fork(), free_sched_domain().

In our case, I thought putting braces would align the else-if
and else and make the code look a bit nicer, but I know this
may be just a matter of personal feeling.

In short, unless the maintainer, Paolo for this file, has any
preference, both ways will be accepted.

  Takuya



Re: [PATCH 06/10] KVM: x86: MMU: Consolidate WARN_ON/BUG_ON checks for reverse-mapped sptes

2015-11-15 Thread Takuya Yoshikawa

On 2015/11/14 7:08, Marcelo Tosatti wrote:

On Thu, Nov 12, 2015 at 08:53:43PM +0900, Takuya Yoshikawa wrote:

At some call sites of rmap_get_first() and rmap_get_next(), BUG_ON is
placed right after the call to detect unrelated sptes which must not be
found in the reverse-mapping list.

Move this check in rmap_get_first/next() so that all call sites, not
just the users of the for_each_rmap_spte() macro, will be checked the
same way.  In addition, change the BUG_ON to WARN_ON since killing the
whole host is the last thing that KVM should try.


It should be a BUG_ON, if KVM continues it will corrupt (more) memory.


In the sense that we cannot predict what kind of corruption it will
cause, I agree with you.

But if it can only corrupt that guest's memory, it is a bit sad to
kill unrelated guests, and the host, too.  Anyway, since we cannot say
for sure what a possible bug can cause, I agree with you now.

Thanks,
  Takuya



Re: [PATCH 05/10] KVM: x86: MMU: Use for_each_rmap_spte macro instead of pte_list_walk()

2015-11-15 Thread Takuya Yoshikawa

On 2015/11/14 18:20, Marcelo Tosatti wrote:


The actual issue is this: a higher level page that had, under its children,
no out of sync pages, now, due to your addition, has a child that is unsync:

initial state:
level1

final state:

level1 -x-> level2 -x-> level3

Where -x-> are the links created by this pagefault fixing round.

If _any_ page under you is unsync (not necessarily the ones this
pagefault is accessing), you have to mark parents unsync.


I understand this, but I don't think my patch will break this.

What kvm_mmu_mark_parents_unsync() does is:

  for each p_i in sp->parent_ptes rmap chain
mark_unsync(p_i);

Then, mark_unsync() finds the parent sp including that p_i to
set ->unsync_child_bitmap and increment ->unsync_children if
necessary.  It may also call kvm_mmu_mark_parents_unsync()
recursively.

I understand we need to tell the parents "you have an unsync
child/descendant" until this information reaches the top level
by that recursive calls.

But since these recursive calls cannot come back to the starting sp,
the child->parent graph has no loop, each mark_unsync(p_i) will not
be affected by other parents in that sp->parent_ptes rmap chain,
from which we started the recursive calls.
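
For reference, the propagation described above looks roughly like this
(paraphrased sketch, not a verbatim quote of mmu.c):

static void mark_unsync(u64 *spte)
{
	struct kvm_mmu_page *sp = page_header(__pa(spte));
	unsigned int index = spte - sp->spt;

	/* remember that the child at 'index' is (or covers) an unsync page */
	if (__test_and_set_bit(index, sp->unsync_child_bitmap))
		return;
	/* only the first unsync child triggers another round upwards */
	if (sp->unsync_children++)
		return;
	kvm_mmu_mark_parents_unsync(sp);
}

The two early returns are what keep this from re-walking chains that are
already marked.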


As the following code shows, my patch does mark_unsync(parent_pte)
separately, and then mmu_page_add_parent_pte(vcpu, sp, parent_pte):


-   } else if (sp->unsync)
+   if (parent_pte)
+   mark_unsync(parent_pte);
+   } else if (sp->unsync) {
kvm_mmu_mark_parents_unsync(sp);
+   if (parent_pte)
+   mark_unsync(parent_pte);
+   }
+   mmu_page_add_parent_pte(vcpu, sp, parent_pte);


So, as you worried, during each mark_unsync(p_i) is processed,
this parent_pte does not exist in that sp->parent_ptes rmap chain.

But as I explained above, this does not change anything about what
each mark_unsync(p_i) call does, so keeps the original behaviour.


By the way, I think "kvm_mmu_mark_parents_unsync" and "mark_unsync"
do not convey well what they actually do.  When I first saw the names,
I thought they would just set the parents' sp->unsync.

To reflect the following meaning better, it should be
propagate_unsync(_to_parents) or something:

  Tell the parents "you have an unsync child/descendant"
  until this unsync information reaches the top level


Thanks,
  Takuya




Re: [PATCH 09/10 RFC] KVM: x86: MMU: Move parent_pte handling from kvm_mmu_get_page() to link_shadow_page()

2015-11-12 Thread Takuya Yoshikawa

On 2015/11/12 23:27, Paolo Bonzini wrote:


On 12/11/2015 12:56, Takuya Yoshikawa wrote:

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 9d21b44..f414ca6 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -598,7 +598,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
goto out_gpte_changed;

if (sp)
-   link_shadow_page(it.sptep, sp, PT_GUEST_ACCESSED_MASK);
+   link_shadow_page(vcpu, it.sptep, sp, 
PT_GUEST_ACCESSED_MASK);
}



Here I think you can remove completely the

if (sp)
kvm_mmu_put_page(sp, it.sptep);

later in FNAME(fetch).  Apart from this nit, it's okay.


Yes, that's what this patch does below:


@@ -629,8 +629,6 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
return emulate;

 out_gpte_changed:
-   if (sp)
-   kvm_mmu_put_page(sp, it.sptep);
kvm_release_pfn_clean(pfn);
return 0;
 }


Since this is the only user of kvm_mmu_put_page(), it also removes
the definition:


@@ -2268,11 +2268,6 @@ static void kvm_mmu_page_unlink_children(struct kvm *kvm,
mmu_page_zap_pte(kvm, sp, sp->spt + i);
 }

-static void kvm_mmu_put_page(struct kvm_mmu_page *sp, u64 *parent_pte)
-{
-   mmu_page_remove_parent_pte(sp, parent_pte);
-}
-
 static void kvm_mmu_unlink_parents(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
u64 *sptep;


Actually, I don't understand why this is named kvm_mmu_put_page() for
just removing the parent_pte pointer from the sp->parent_ptes pointer chain.



On to kvm_mmu_get_page...

 if (!direct) {
 if (rmap_write_protect(vcpu, gfn))
 kvm_flush_remote_tlbs(vcpu->kvm);
 if (level > PT_PAGE_TABLE_LEVEL && need_sync)
 kvm_sync_pages(vcpu, gfn);

This seems fishy.

need_sync is set if sp->unsync, but then the parents have not been
unsynced yet.


Reaching here means that kvm_mmu_get_page() could not return sp
from inside the for_each_gfn_sp() loop above, so even without
this patch, mark_unsync() has not been called.

Here, sp holds the new page allocated by kvm_mmu_alloc_page().
One confusing thing is that hlist_add_head() right before this
"if (!direct)" line has already added the new sp to the hash
list, so it will be found by for_each_gfn_indirect_valid_sp()
in kvm_sync_pages().

Because this sp is new and sp->unsync is not set,  kvm_sync_pages()
will just skip it and look for other sp's whose ->unsync were found
to be set in the for_each_gfn_sp() loop.
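
In other words, the sync loop only acts on pages whose ->unsync flag is
set, roughly (illustration only, not a verbatim quote of kvm_sync_pages()):

	for_each_gfn_indirect_valid_sp(vcpu->kvm, s, gfn) {
		if (!s->unsync)
			continue;

		/* sync (and possibly zap) 's' here */
	}

so the brand-new sp is simply passed over.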

I'm not 100% sure if the existence of the parent_pte pointer in the
newly created sp->parent_ptes chain alone makes any difference:

@@ -2127,7 +2122,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
sp = kvm_mmu_alloc_page(vcpu, direct);

sp->parent_ptes.val = 0;
-   mmu_page_add_parent_pte(vcpu, sp, parent_pte);

sp->gfn = gfn;
sp->role = role;




On the other hand, all calls to kvm_mmu_get_page except for the
roots are followed by link_shadow_page...  Perhaps if parent_pte != NULL
you can call link_shadow_page directly from kvm_mmu_get_page.  The call
would go before the "if (!direct)" and it would subsume all the existing
calls.

We could probably also warn if

(parent_pte == NULL)
!= (level == vcpu->arch.mmu.root_level)

in kvm_mmu_get_page.


I think we should set the spte after init_shadow_page_table(), and
to make this subsume all the existing calls, we need to change the
"return sp;" in the for_each_gfn_sp() loop to a goto statement so
that the end of this function will become something like this:

init_shadow_page(sp);
out:
if (parent_pte) {
mmu_page_add_parent_pte(vcpu, sp, parent_pte);
link_shadow_page(parent_pte, sp, accessed);
}
trace_kvm_mmu_get_page(sp, created);
return sp;

So, "bool accessed" needs to be passed to kvm_mmu_get_page().
But anyway, we need to understand if mmu_page_add_parent_pte()
really needs to be placed before the "if (!direct)" block.

  Takuya




[PATCH 10/10] KVM: x86: MMU: Remove unused parameter parent_pte from kvm_mmu_get_page()

2015-11-12 Thread Takuya Yoshikawa
Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 20 +++-
 arch/x86/kvm/paging_tmpl.h |  4 ++--
 2 files changed, 9 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 33fe720..101e77d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2072,8 +2072,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
 gva_t gaddr,
 unsigned level,
 int direct,
-unsigned access,
-u64 *parent_pte)
+unsigned access)
 {
union kvm_mmu_page_role role;
unsigned quadrant;
@@ -2730,8 +2729,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, 
int map_writable,
base_addr &= PT64_LVL_ADDR_MASK(iterator.level);
pseudo_gfn = base_addr >> PAGE_SHIFT;
sp = kvm_mmu_get_page(vcpu, pseudo_gfn, iterator.addr,
- iterator.level - 1,
- 1, ACC_ALL, iterator.sptep);
+ iterator.level - 1, 1, ACC_ALL);
 
link_shadow_page(vcpu, iterator.sptep, sp, true);
}
@@ -3088,8 +3086,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL) {
spin_lock(&vcpu->kvm->mmu_lock);
make_mmu_pages_available(vcpu);
-   sp = kvm_mmu_get_page(vcpu, 0, 0, PT64_ROOT_LEVEL,
- 1, ACC_ALL, NULL);
+   sp = kvm_mmu_get_page(vcpu, 0, 0, PT64_ROOT_LEVEL, 1, ACC_ALL);
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
vcpu->arch.mmu.root_hpa = __pa(sp->spt);
@@ -3101,9 +3098,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
spin_lock(&vcpu->kvm->mmu_lock);
make_mmu_pages_available(vcpu);
sp = kvm_mmu_get_page(vcpu, i << (30 - PAGE_SHIFT),
- i << 30,
- PT32_ROOT_LEVEL, 1, ACC_ALL,
- NULL);
+   i << 30, PT32_ROOT_LEVEL, 1, ACC_ALL);
root = __pa(sp->spt);
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
@@ -3140,7 +3135,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
spin_lock(&vcpu->kvm->mmu_lock);
make_mmu_pages_available(vcpu);
sp = kvm_mmu_get_page(vcpu, root_gfn, 0, PT64_ROOT_LEVEL,
- 0, ACC_ALL, NULL);
+ 0, ACC_ALL);
root = __pa(sp->spt);
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
@@ -3173,9 +3168,8 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
}
spin_lock(&vcpu->kvm->mmu_lock);
make_mmu_pages_available(vcpu);
-   sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30,
- PT32_ROOT_LEVEL, 0,
- ACC_ALL, NULL);
+   sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30, PT32_ROOT_LEVEL,
+ 0, ACC_ALL);
root = __pa(sp->spt);
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index f414ca6..ee9d211 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -587,7 +587,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
if (!is_shadow_present_pte(*it.sptep)) {
table_gfn = gw->table_gfn[it.level - 2];
sp = kvm_mmu_get_page(vcpu, table_gfn, addr, it.level-1,
- false, access, it.sptep);
+ false, access);
}
 
/*
@@ -617,7 +617,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
direct_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
 
sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
- true, direct_access, it.sptep);
+ true, direct_access);
link_s

[PATCH 09/10 RFC] KVM: x86: MMU: Move parent_pte handling from kvm_mmu_get_page() to link_shadow_page()

2015-11-12 Thread Takuya Yoshikawa
Every time kvm_mmu_get_page() is called with a non-NULL parent_pte
argument, link_shadow_page() follows that to set the parent entry so
that the new mapping will point to the returned page table.

Moving parent_pte handling there allows to clean up the code because
parent_pte is passed to kvm_mmu_get_page() just for mark_unsync() and
mmu_page_add_parent_pte().

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 21 -
 arch/x86/kvm/paging_tmpl.h |  6 ++
 2 files changed, 10 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9273cd4..33fe720 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2108,14 +2108,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
if (sp->unsync_children) {
kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
kvm_mmu_mark_parents_unsync(sp);
-   if (parent_pte)
-   mark_unsync(parent_pte);
} else if (sp->unsync) {
kvm_mmu_mark_parents_unsync(sp);
-   if (parent_pte)
-   mark_unsync(parent_pte);
}
-   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
 
__clear_sp_write_flooding_count(sp);
trace_kvm_mmu_get_page(sp, false);
@@ -2127,7 +2122,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
sp = kvm_mmu_alloc_page(vcpu, direct);
 
sp->parent_ptes.val = 0;
-   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
 
sp->gfn = gfn;
sp->role = role;
@@ -2196,7 +2190,8 @@ static void shadow_walk_next(struct 
kvm_shadow_walk_iterator *iterator)
return __shadow_walk_next(iterator, *iterator->sptep);
 }
 
-static void link_shadow_page(u64 *sptep, struct kvm_mmu_page *sp, bool 
accessed)
+static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
+struct kvm_mmu_page *sp, bool accessed)
 {
u64 spte;
 
@@ -2210,6 +2205,11 @@ static void link_shadow_page(u64 *sptep, struct 
kvm_mmu_page *sp, bool accessed)
spte |= shadow_accessed_mask;
 
mmu_spte_set(sptep, spte);
+
+   if (sp->unsync_children || sp->unsync)
+   mark_unsync(sptep);
+
+   mmu_page_add_parent_pte(vcpu, sp, sptep);
 }
 
 static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
@@ -2268,11 +2268,6 @@ static void kvm_mmu_page_unlink_children(struct kvm *kvm,
mmu_page_zap_pte(kvm, sp, sp->spt + i);
 }
 
-static void kvm_mmu_put_page(struct kvm_mmu_page *sp, u64 *parent_pte)
-{
-   mmu_page_remove_parent_pte(sp, parent_pte);
-}
-
 static void kvm_mmu_unlink_parents(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
u64 *sptep;
@@ -2738,7 +2733,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, 
int map_writable,
  iterator.level - 1,
  1, ACC_ALL, iterator.sptep);
 
-   link_shadow_page(iterator.sptep, sp, true);
+   link_shadow_page(vcpu, iterator.sptep, sp, true);
}
}
return emulate;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 9d21b44..f414ca6 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -598,7 +598,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
goto out_gpte_changed;
 
if (sp)
-   link_shadow_page(it.sptep, sp, PT_GUEST_ACCESSED_MASK);
+   link_shadow_page(vcpu, it.sptep, sp, 
PT_GUEST_ACCESSED_MASK);
}
 
for (;
@@ -618,7 +618,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 
sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
  true, direct_access, it.sptep);
-   link_shadow_page(it.sptep, sp, PT_GUEST_ACCESSED_MASK);
+   link_shadow_page(vcpu, it.sptep, sp, PT_GUEST_ACCESSED_MASK);
}
 
clear_sp_write_flooding_count(it.sptep);
@@ -629,8 +629,6 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
return emulate;
 
 out_gpte_changed:
-   if (sp)
-   kvm_mmu_put_page(sp, it.sptep);
kvm_release_pfn_clean(pfn);
return 0;
 }
-- 
2.1.0



[PATCH 08/10] KVM: x86: MMU: Move initialization of parent_ptes out from kvm_mmu_alloc_page()

2015-11-12 Thread Takuya Yoshikawa
Make kvm_mmu_alloc_page() do just what its name tells to do, and remove
the extra error check at its call site since the allocation cannot fail.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 15 ---
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 85f4bbd..9273cd4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1707,8 +1707,7 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
mmu_spte_clear_no_track(parent_pte);
 }
 
-static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu,
-  u64 *parent_pte, int direct)
+static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int 
direct)
 {
struct kvm_mmu_page *sp;
 
@@ -1724,8 +1723,6 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct 
kvm_vcpu *vcpu,
 * this feature. See the comments in kvm_zap_obsolete_pages().
 */
list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
-   sp->parent_ptes.val = 0;
-   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
kvm_mod_used_mmu_pages(vcpu->kvm, +1);
return sp;
 }
@@ -2124,10 +2121,14 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
trace_kvm_mmu_get_page(sp, false);
return sp;
}
+
++vcpu->kvm->stat.mmu_cache_miss;
-   sp = kvm_mmu_alloc_page(vcpu, parent_pte, direct);
-   if (!sp)
-   return sp;
+
+   sp = kvm_mmu_alloc_page(vcpu, direct);
+
+   sp->parent_ptes.val = 0;
+   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
+
sp->gfn = gfn;
sp->role = role;
hlist_add_head(&sp->hash_link,
-- 
2.1.0



[PATCH 07/10] KVM: x86: MMU: Encapsulate the type of rmap-chain head in a new struct

2015-11-12 Thread Takuya Yoshikawa
New struct kvm_rmap_head makes the code type-safe to some extent.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/include/asm/kvm_host.h |   8 +-
 arch/x86/kvm/mmu.c  | 169 +---
 arch/x86/kvm/mmu_audit.c|  13 ++--
 3 files changed, 100 insertions(+), 90 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0535359..c5a0c4a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -214,6 +214,10 @@ union kvm_mmu_page_role {
};
 };
 
+struct kvm_rmap_head {
+   unsigned long val;
+};
+
 struct kvm_mmu_page {
struct list_head link;
struct hlist_node hash_link;
@@ -231,7 +235,7 @@ struct kvm_mmu_page {
bool unsync;
int root_count;  /* Currently serving as active root */
unsigned int unsync_children;
-   unsigned long parent_ptes;  /* Reverse mapping for parent_pte */
+   struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
 
/* The page is obsolete if mmu_valid_gen != kvm->arch.mmu_valid_gen.  */
unsigned long mmu_valid_gen;
@@ -604,7 +608,7 @@ struct kvm_lpage_info {
 };
 
 struct kvm_arch_memory_slot {
-   unsigned long *rmap[KVM_NR_PAGE_SIZES];
+   struct kvm_rmap_head *rmap[KVM_NR_PAGE_SIZES];
struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
 };
 
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ee7b101..85f4bbd 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -916,24 +916,24 @@ static int mapping_level(struct kvm_vcpu *vcpu, gfn_t 
large_gfn,
  *
  */
 static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
-   unsigned long *pte_list)
+   struct kvm_rmap_head *rmap_head)
 {
struct pte_list_desc *desc;
int i, count = 0;
 
-   if (!*pte_list) {
+   if (!rmap_head->val) {
rmap_printk("pte_list_add: %p %llx 0->1\n", spte, *spte);
-   *pte_list = (unsigned long)spte;
-   } else if (!(*pte_list & 1)) {
+   rmap_head->val = (unsigned long)spte;
+   } else if (!(rmap_head->val & 1)) {
rmap_printk("pte_list_add: %p %llx 1->many\n", spte, *spte);
desc = mmu_alloc_pte_list_desc(vcpu);
-   desc->sptes[0] = (u64 *)*pte_list;
+   desc->sptes[0] = (u64 *)rmap_head->val;
desc->sptes[1] = spte;
-   *pte_list = (unsigned long)desc | 1;
+   rmap_head->val = (unsigned long)desc | 1;
++count;
} else {
rmap_printk("pte_list_add: %p %llx many->many\n", spte, *spte);
-   desc = (struct pte_list_desc *)(*pte_list & ~1ul);
+   desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
while (desc->sptes[PTE_LIST_EXT-1] && desc->more) {
desc = desc->more;
count += PTE_LIST_EXT;
@@ -950,8 +950,9 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
 }
 
 static void
-pte_list_desc_remove_entry(unsigned long *pte_list, struct pte_list_desc *desc,
-  int i, struct pte_list_desc *prev_desc)
+pte_list_desc_remove_entry(struct kvm_rmap_head *rmap_head,
+  struct pte_list_desc *desc, int i,
+  struct pte_list_desc *prev_desc)
 {
int j;
 
@@ -962,43 +963,43 @@ pte_list_desc_remove_entry(unsigned long *pte_list, 
struct pte_list_desc *desc,
if (j != 0)
return;
if (!prev_desc && !desc->more)
-   *pte_list = (unsigned long)desc->sptes[0];
+   rmap_head->val = (unsigned long)desc->sptes[0];
else
if (prev_desc)
prev_desc->more = desc->more;
else
-   *pte_list = (unsigned long)desc->more | 1;
+   rmap_head->val = (unsigned long)desc->more | 1;
mmu_free_pte_list_desc(desc);
 }
 
-static void pte_list_remove(u64 *spte, unsigned long *pte_list)
+static void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
 {
struct pte_list_desc *desc;
struct pte_list_desc *prev_desc;
int i;
 
-   if (!*pte_list) {
+   if (!rmap_head->val) {
printk(KERN_ERR "pte_list_remove: %p 0->BUG\n", spte);
BUG();
-   } else if (!(*pte_list & 1)) {
+   } else if (!(rmap_head->val & 1)) {
rmap_printk("pte_list_remove:  %p 1->0\n", spte);
-   if ((u64 *)*pte_list != spte) {
+   if ((u64 *)rmap_head->val != spte) {
printk(KERN_ERR "pte_list_remove:  %p 1->BUG\n", spte);
 

[PATCH 06/10] KVM: x86: MMU: Consolidate WARN_ON/BUG_ON checks for reverse-mapped sptes

2015-11-12 Thread Takuya Yoshikawa
At some call sites of rmap_get_first() and rmap_get_next(), BUG_ON is
placed right after the call to detect unrelated sptes which must not be
found in the reverse-mapping list.

Move this check in rmap_get_first/next() so that all call sites, not
just the users of the for_each_rmap_spte() macro, will be checked the
same way.  In addition, change the BUG_ON to WARN_ON since killing the
whole host is the last thing that KVM should try.

One thing to keep in mind is that kvm_mmu_unlink_parents() also uses
rmap_get_first() to handle parent sptes.  The change will not break it
because parent sptes are present, at least until drop_parent_pte()
actually unlinks them, and not mmio-sptes.

Signed-off-by: Takuya Yoshikawa 
---
 Documentation/virtual/kvm/mmu.txt |  4 ++--
 arch/x86/kvm/mmu.c| 26 +-
 2 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/Documentation/virtual/kvm/mmu.txt 
b/Documentation/virtual/kvm/mmu.txt
index 3a4d681..daf9c0f 100644
--- a/Documentation/virtual/kvm/mmu.txt
+++ b/Documentation/virtual/kvm/mmu.txt
@@ -203,10 +203,10 @@ Shadow pages contain the following information:
 page cannot be destroyed.  See role.invalid.
   parent_ptes:
 The reverse mapping for the pte/ptes pointing at this page's spt. If
-parent_ptes bit 0 is zero, only one spte points at this pages and
+parent_ptes bit 0 is zero, only one spte points at this page and
 parent_ptes points at this single spte, otherwise, there exists multiple
 sptes pointing at this page and (parent_ptes & ~0x1) points at a data
-structure with a list of parent_ptes.
+structure with a list of parent sptes.
   unsync:
 If true, then the translations in this page may not match the guest's
 translation.  This is equivalent to the state of the tlb when a pte is
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 1691171..ee7b101 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1079,17 +1079,23 @@ struct rmap_iterator {
  */
 static u64 *rmap_get_first(unsigned long rmap, struct rmap_iterator *iter)
 {
+   u64 *sptep;
+
if (!rmap)
return NULL;
 
if (!(rmap & 1)) {
iter->desc = NULL;
-   return (u64 *)rmap;
+   sptep = (u64 *)rmap;
+   goto out;
}
 
iter->desc = (struct pte_list_desc *)(rmap & ~1ul);
iter->pos = 0;
-   return iter->desc->sptes[iter->pos];
+   sptep = iter->desc->sptes[iter->pos];
+out:
+   WARN_ON(!is_shadow_present_pte(*sptep));
+   return sptep;
 }
 
 /*
@@ -1099,14 +1105,14 @@ static u64 *rmap_get_first(unsigned long rmap, struct 
rmap_iterator *iter)
  */
 static u64 *rmap_get_next(struct rmap_iterator *iter)
 {
+   u64 *sptep;
+
if (iter->desc) {
if (iter->pos < PTE_LIST_EXT - 1) {
-   u64 *sptep;
-
++iter->pos;
sptep = iter->desc->sptes[iter->pos];
if (sptep)
-   return sptep;
+   goto out;
}
 
iter->desc = iter->desc->more;
@@ -1114,17 +1120,20 @@ static u64 *rmap_get_next(struct rmap_iterator *iter)
if (iter->desc) {
iter->pos = 0;
/* desc->sptes[0] cannot be NULL */
-   return iter->desc->sptes[iter->pos];
+   sptep = iter->desc->sptes[iter->pos];
+   goto out;
}
}
 
return NULL;
+out:
+   WARN_ON(!is_shadow_present_pte(*sptep));
+   return sptep;
 }
 
 #define for_each_rmap_spte(_rmap_, _iter_, _spte_) \
   for (_spte_ = rmap_get_first(*_rmap_, _iter_);   \
-   _spte_ && ({BUG_ON(!is_shadow_present_pte(*_spte_)); 1;});  \
-   _spte_ = rmap_get_next(_iter_))
+   _spte_; _spte_ = rmap_get_next(_iter_))
 
 static void drop_spte(struct kvm *kvm, u64 *sptep)
 {
@@ -1338,7 +1347,6 @@ static bool kvm_zap_rmapp(struct kvm *kvm, unsigned long 
*rmapp)
bool flush = false;
 
while ((sptep = rmap_get_first(*rmapp, &iter))) {
-   BUG_ON(!(*sptep & PT_PRESENT_MASK));
rmap_printk("%s: spte %p %llx.\n", __func__, sptep, *sptep);
 
drop_spte(kvm, sptep);
-- 
2.1.0



[PATCH 05/10] KVM: x86: MMU: Use for_each_rmap_spte macro instead of pte_list_walk()

2015-11-12 Thread Takuya Yoshikawa
kvm_mmu_mark_parents_unsync() alone uses pte_list_walk(), which does
nearly the same as the for_each_rmap_spte macro.  The only difference
is that is_shadow_present_pte() checks cannot be placed there because
kvm_mmu_mark_parents_unsync() can be called with a new parent pointer
whose entry is not set yet.

By calling mark_unsync() separately for the parent and adding the parent
pointer to the parent_ptes chain later in kvm_mmu_get_page(), the macro
works with no problem.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 36 +---
 1 file changed, 13 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e8cfdc4..1691171 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1007,26 +1007,6 @@ static void pte_list_remove(u64 *spte, unsigned long 
*pte_list)
}
 }
 
-typedef void (*pte_list_walk_fn) (u64 *spte);
-static void pte_list_walk(unsigned long *pte_list, pte_list_walk_fn fn)
-{
-   struct pte_list_desc *desc;
-   int i;
-
-   if (!*pte_list)
-   return;
-
-   if (!(*pte_list & 1))
-   return fn((u64 *)*pte_list);
-
-   desc = (struct pte_list_desc *)(*pte_list & ~1ul);
-   while (desc) {
-   for (i = 0; i < PTE_LIST_EXT && desc->sptes[i]; ++i)
-   fn(desc->sptes[i]);
-   desc = desc->more;
-   }
-}
-
 static unsigned long *__gfn_to_rmap(gfn_t gfn, int level,
struct kvm_memory_slot *slot)
 {
@@ -1741,7 +1721,12 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct 
kvm_vcpu *vcpu,
 static void mark_unsync(u64 *spte);
 static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
 {
-   pte_list_walk(&sp->parent_ptes, mark_unsync);
+   u64 *sptep;
+   struct rmap_iterator iter;
+
+   for_each_rmap_spte(&sp->parent_ptes, &iter, sptep) {
+   mark_unsync(sptep);
+   }
 }
 
 static void mark_unsync(u64 *spte)
@@ -2111,12 +2096,17 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct 
kvm_vcpu *vcpu,
if (sp->unsync && kvm_sync_page_transient(vcpu, sp))
break;
 
-   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
if (sp->unsync_children) {
kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
kvm_mmu_mark_parents_unsync(sp);
-   } else if (sp->unsync)
+   if (parent_pte)
+   mark_unsync(parent_pte);
+   } else if (sp->unsync) {
kvm_mmu_mark_parents_unsync(sp);
+   if (parent_pte)
+   mark_unsync(parent_pte);
+   }
+   mmu_page_add_parent_pte(vcpu, sp, parent_pte);
 
__clear_sp_write_flooding_count(sp);
trace_kvm_mmu_get_page(sp, false);
-- 
2.1.0



[PATCH 04/10] KVM: x86: MMU: Remove is_rmap_spte() and use is_shadow_present_pte()

2015-11-12 Thread Takuya Yoshikawa
is_rmap_spte(), originally named is_rmap_pte(), was introduced when the
simple reverse mapping was implemented by commit cd4a4e5374110444
("[PATCH] KVM: MMU: Implement simple reverse mapping").  At that point,
its role was clear and only rmap_add() and rmap_remove() were using it
to select sptes that need to be reverse-mapped.

Independently of that, is_shadow_present_pte() was first introduced by
commit c7addb902054195b ("KVM: Allow not-present guest page faults to
bypass kvm") to do bypass_guest_pf optimization, which does not exist
any more.

These two seem to have changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.

Since using both of them with no clear distinction just makes the code
confusing, remove is_rmap_spte().

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c   | 13 -
 arch/x86/kvm/mmu_audit.c |  2 +-
 2 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c229356..e8cfdc4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -311,11 +311,6 @@ static int is_large_pte(u64 pte)
return pte & PT_PAGE_SIZE_MASK;
 }
 
-static int is_rmap_spte(u64 pte)
-{
-   return is_shadow_present_pte(pte);
-}
-
 static int is_last_spte(u64 pte, int level)
 {
if (level == PT_PAGE_TABLE_LEVEL)
@@ -540,7 +535,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
u64 old_spte = *sptep;
bool ret = false;
 
-   WARN_ON(!is_rmap_spte(new_spte));
+   WARN_ON(!is_shadow_present_pte(new_spte));
 
if (!is_shadow_present_pte(old_spte)) {
mmu_spte_set(sptep, new_spte);
@@ -595,7 +590,7 @@ static int mmu_spte_clear_track_bits(u64 *sptep)
else
old_spte = __update_clear_spte_slow(sptep, 0ull);
 
-   if (!is_rmap_spte(old_spte))
+   if (!is_shadow_present_pte(old_spte))
return 0;
 
pfn = spte_to_pfn(old_spte);
@@ -2575,7 +2570,7 @@ static bool mmu_set_spte(struct kvm_vcpu *vcpu, u64 
*sptep, unsigned pte_access,
pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
 *sptep, write_fault, gfn);
 
-   if (is_rmap_spte(*sptep)) {
+   if (is_shadow_present_pte(*sptep)) {
/*
 * If we overwrite a PTE page pointer with a 2MB PMD, unlink
 * the parent of the now unreachable PTE.
@@ -2919,7 +2914,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t 
gva, int level,
 * If the mapping has been changed, let the vcpu fault on the
 * same address again.
 */
-   if (!is_rmap_spte(spte)) {
+   if (!is_shadow_present_pte(spte)) {
ret = true;
goto exit;
}
diff --git a/arch/x86/kvm/mmu_audit.c b/arch/x86/kvm/mmu_audit.c
index 03d518e..90ee420 100644
--- a/arch/x86/kvm/mmu_audit.c
+++ b/arch/x86/kvm/mmu_audit.c
@@ -183,7 +183,7 @@ static void check_mappings_rmap(struct kvm *kvm, struct 
kvm_mmu_page *sp)
return;
 
for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
-   if (!is_rmap_spte(sp->spt[i]))
+   if (!is_shadow_present_pte(sp->spt[i]))
continue;
 
inspect_spte_has_rmap(kvm, sp->spt + i);
-- 
2.1.0



[PATCH 02/10] KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap

2015-11-12 Thread Takuya Yoshikawa
Both __mmu_unsync_walk() and mmu_pages_clear_parents() have three line
code which clears a bit in the unsync child bitmap; the former places it
inside a loop block and uses a few goto statements to jump to it.

A new helper function, clear_unsync_child_bit(), makes the code cleaner.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 36 ++--
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c3bbc82..f3120aa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1806,6 +1806,13 @@ static int mmu_pages_add(struct kvm_mmu_pages *pvec, 
struct kvm_mmu_page *sp,
return (pvec->nr == KVM_PAGE_ARRAY_NR);
 }
 
+static inline void clear_unsync_child_bit(struct kvm_mmu_page *sp, int idx)
+{
+   --sp->unsync_children;
+   WARN_ON((int)sp->unsync_children < 0);
+   __clear_bit(idx, sp->unsync_child_bitmap);
+}
+
 static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
   struct kvm_mmu_pages *pvec)
 {
@@ -1815,8 +1822,10 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
struct kvm_mmu_page *child;
u64 ent = sp->spt[i];
 
-   if (!is_shadow_present_pte(ent) || is_large_pte(ent))
-   goto clear_child_bitmap;
+   if (!is_shadow_present_pte(ent) || is_large_pte(ent)) {
+   clear_unsync_child_bit(sp, i);
+   continue;
+   }
 
child = page_header(ent & PT64_BASE_ADDR_MASK);
 
@@ -1825,28 +1834,21 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
return -ENOSPC;
 
ret = __mmu_unsync_walk(child, pvec);
-   if (!ret)
-   goto clear_child_bitmap;
-   else if (ret > 0)
+   if (!ret) {
+   clear_unsync_child_bit(sp, i);
+   continue;
+   } else if (ret > 0) {
nr_unsync_leaf += ret;
-   else
+   } else
return ret;
} else if (child->unsync) {
nr_unsync_leaf++;
if (mmu_pages_add(pvec, child, i))
return -ENOSPC;
} else
-goto clear_child_bitmap;
-
-   continue;
-
-clear_child_bitmap:
-   __clear_bit(i, sp->unsync_child_bitmap);
-   sp->unsync_children--;
-   WARN_ON((int)sp->unsync_children < 0);
+   clear_unsync_child_bit(sp, i);
}
 
-
return nr_unsync_leaf;
 }
 
@@ -2009,9 +2011,7 @@ static void mmu_pages_clear_parents(struct mmu_page_path 
*parents)
if (!sp)
return;
 
-   --sp->unsync_children;
-   WARN_ON((int)sp->unsync_children < 0);
-   __clear_bit(idx, sp->unsync_child_bitmap);
+   clear_unsync_child_bit(sp, idx);
level++;
} while (level < PT64_ROOT_LEVEL-1 && !sp->unsync_children);
 }
-- 
2.1.0



[PATCH 03/10] KVM: x86: MMU: Make mmu_set_spte() return emulate value

2015-11-12 Thread Takuya Yoshikawa
mmu_set_spte()'s code is based on the assumption that the emulate
parameter has a valid pointer value if set_spte() returns true and
write_fault is not zero.  In other cases, emulate may be NULL, so a
NULL-check is needed.

Stop passing emulate pointer and make mmu_set_spte() return the emulate
value instead to clean up this complex interface.  Prefetch functions
can just throw away the return value.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 27 ++-
 arch/x86/kvm/paging_tmpl.h | 10 +-
 2 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f3120aa..c229356 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2564,13 +2564,13 @@ done:
return ret;
 }
 
-static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-unsigned pte_access, int write_fault, int *emulate,
-int level, gfn_t gfn, pfn_t pfn, bool speculative,
-bool host_writable)
+static bool mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep, unsigned 
pte_access,
+int write_fault, int level, gfn_t gfn, pfn_t pfn,
+bool speculative, bool host_writable)
 {
int was_rmapped = 0;
int rmap_count;
+   bool emulate = false;
 
pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
 *sptep, write_fault, gfn);
@@ -2600,12 +2600,12 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 
*sptep,
if (set_spte(vcpu, sptep, pte_access, level, gfn, pfn, speculative,
  true, host_writable)) {
if (write_fault)
-   *emulate = 1;
+   emulate = true;
kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
}
 
-   if (unlikely(is_mmio_spte(*sptep) && emulate))
-   *emulate = 1;
+   if (unlikely(is_mmio_spte(*sptep)))
+   emulate = true;
 
pgprintk("%s: setting spte %llx\n", __func__, *sptep);
pgprintk("instantiating %s PTE (%s) at %llx (%llx) addr %p\n",
@@ -2624,6 +2624,8 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 
*sptep,
}
 
kvm_release_pfn_clean(pfn);
+
+   return emulate;
 }
 
 static pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
@@ -2658,9 +2660,8 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
return -1;
 
for (i = 0; i < ret; i++, gfn++, start++)
-   mmu_set_spte(vcpu, start, access, 0, NULL,
-sp->role.level, gfn, page_to_pfn(pages[i]),
-true, true);
+   mmu_set_spte(vcpu, start, access, 0, sp->role.level, gfn,
+page_to_pfn(pages[i]), true, true);
 
return 0;
 }
@@ -2721,9 +2722,9 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, 
int map_writable,
 
for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
if (iterator.level == level) {
-   mmu_set_spte(vcpu, iterator.sptep, ACC_ALL,
-write, &emulate, level, gfn, pfn,
-prefault, map_writable);
+   emulate = mmu_set_spte(vcpu, iterator.sptep, ACC_ALL,
+  write, level, gfn, pfn, prefault,
+  map_writable);
direct_pte_prefetch(vcpu, iterator.sptep);
++vcpu->stat.pf_fixed;
break;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 3058a22..9d21b44 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -475,8 +475,8 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct 
kvm_mmu_page *sp,
 * we call mmu_set_spte() with host_writable = true because
 * pte_prefetch_gfn_to_pfn always gets a writable pfn.
 */
-   mmu_set_spte(vcpu, spte, pte_access, 0, NULL, PT_PAGE_TABLE_LEVEL,
-gfn, pfn, true, true);
+   mmu_set_spte(vcpu, spte, pte_access, 0, PT_PAGE_TABLE_LEVEL, gfn, pfn,
+true, true);
 
return true;
 }
@@ -556,7 +556,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
struct kvm_mmu_page *sp = NULL;
struct kvm_shadow_walk_iterator it;
unsigned direct_access, access = gw->pt_access;
-   int top_level, emulate = 0;
+   int top_level, emulate;
 
direct_access = gw->pte_access;
 
@@ -622,8 +622,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
}
 
clear_sp_write_flooding_count(it.sptep);
-   mmu_set_spte(vcpu, it.sptep, gw->pte_access, write_fault, &emulate,
-it.level, gw->gfn

[PATCH 01/10] KVM: x86: MMU: Remove unused parameter of __direct_map()

2015-11-12 Thread Takuya Yoshikawa
Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 12 
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e7c2c14..c3bbc82 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, 
u64 *sptep)
__direct_pte_prefetch(vcpu, sp, sptep);
 }
 
-static int __direct_map(struct kvm_vcpu *vcpu, gpa_t v, int write,
-   int map_writable, int level, gfn_t gfn, pfn_t pfn,
-   bool prefault)
+static int __direct_map(struct kvm_vcpu *vcpu, int write, int map_writable,
+   int level, gfn_t gfn, pfn_t pfn, bool prefault)
 {
struct kvm_shadow_walk_iterator iterator;
struct kvm_mmu_page *sp;
@@ -3018,11 +3017,9 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, 
u32 error_code,
make_mmu_pages_available(vcpu);
if (likely(!force_pt_level))
transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
-   r = __direct_map(vcpu, v, write, map_writable, level, gfn, pfn,
-prefault);
+   r = __direct_map(vcpu, write, map_writable, level, gfn, pfn, prefault);
spin_unlock(&vcpu->kvm->mmu_lock);
 
-
return r;
 
 out_unlock:
@@ -3531,8 +3528,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t 
gpa, u32 error_code,
make_mmu_pages_available(vcpu);
if (likely(!force_pt_level))
transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
-   r = __direct_map(vcpu, gpa, write, map_writable,
-level, gfn, pfn, prefault);
+   r = __direct_map(vcpu, write, map_writable, level, gfn, pfn, prefault);
spin_unlock(&vcpu->kvm->mmu_lock);
 
return r;
-- 
2.1.0



[PATCH 00/10 V2] KVM: x86: MMU: Clean up x86's mmu code for future work

2015-11-12 Thread Takuya Yoshikawa
v1->v2:
  Patch 5 and 7 are added based on Paolo's suggestions.
  Patch 8-10 are new.

Patch 1/2/3/4: no change.
Patch 5: Needed a bit more work than I had expected.
Patch 6: Removed extra comment of v1 (patch 5 made it inappropriate).
Patch 7: As expected, many places needed to be converted.
Patch 8: This is new, but only a small change.

Patch 9: Kind of an RFC (though I have checked it to some extent).
  Following two places need to be carefully checked:
  - in kvm_mmu_get_page: "if (!direct)" block after kvm_mmu_alloc_page()
  - in FNAME(fetch): "if (FNAME(gpte_changed)(vcpu, gw, it.level - 1))" case
Patch 10: Trivial cleanup, assuming that patch 9 is correct.


In summary: patch 1-7 is the result of updating v1 based on the suggestions.
  Although patch 5 does not look as nice as expected, this is the most
  conservative approach, and patch 8-10 try to alleviate the sadness.

  Takuya


Takuya Yoshikawa (10):
  01:  KVM: x86: MMU: Remove unused parameter of __direct_map()
  02:  KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap
  03:  KVM: x86: MMU: Make mmu_set_spte() return emulate value
  04:  KVM: x86: MMU: Remove is_rmap_spte() and use is_shadow_present_pte()
  05:  KVM: x86: MMU: Use for_each_rmap_spte macro instead of pte_list_walk()
  06:  KVM: x86: MMU: Consolidate WARN_ON/BUG_ON checks for reverse-mapped sptes
  07:  KVM: x86: MMU: Encapsulate the type of rmap-chain head in a new struct
  08:  KVM: x86: MMU: Move initialization of parent_ptes out from 
kvm_mmu_alloc_page()
  09:  KVM: x86: MMU: Move parent_pte handling from kvm_mmu_get_page() to 
link_shadow_page()
  10:  KVM: x86: MMU: Remove unused parameter parent_pte from kvm_mmu_get_page()

 Documentation/virtual/kvm/mmu.txt |   4 +-
 arch/x86/include/asm/kvm_host.h   |   8 +-
 arch/x86/kvm/mmu.c| 357 ++
 arch/x86/kvm/mmu_audit.c  |  15 +-
 arch/x86/kvm/paging_tmpl.h|  20 +--
 5 files changed, 196 insertions(+), 208 deletions(-)

-- 
2.1.0



Re: [PATCH 5/5] KVM: x86: MMU: Consolidate WARN_ON/BUG_ON checks for reverse-mapped sptes

2015-11-10 Thread Takuya Yoshikawa

On 2015/11/09 19:14, Paolo Bonzini wrote:

Can you also change kvm_mmu_mark_parents_unsync to use
for_each_rmap_spte instead of pte_list_walk?  It is the last use of
pte_list_walk, and it's nice if we have two uses of for_each_rmap_spte
with parent_ptes as the argument.


No problem, I will do.

Since parent_ptes is also explained as the "reverse mapping" list of
parent sptes (in mmu.txt and kvm_host.h), using rmap helpers will not
look so strange.


BTW, on my todo list is to change the rmap items to a struct (with a
single u64 inside) for type safety.  Since you are touching this code,
perhaps you can give it a shot?


Yes, almost done here (assuming that you mean 'unsigned long').
But I have some candidates for its name in mind:

1. struct kvm_rmap { unsigned long val; };
2. struct kvm_rmap_head { unsigned long val; };
3. struct kvm_rmap_list_head { unsigned long val; };
4. struct kvm_spte_list_head { unsigned long val; };

Since this is the head of the reverse mapping list of sptes, I thought
name 3 might be the best and first made a patch with it, but it was
a bit longer than I had hoped it to be.

I have changed it to name 2, and it looks a bit nicer now, but even
shorter, e.g. name 1, may be good as well.

Do you have any preference?
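
For what it's worth, with name 2 the head type and a typical helper
signature would read like this (just a sketch of how the call sites end
up looking, nothing more):

struct kvm_rmap_head {
	unsigned long val;
};

static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
			struct kvm_rmap_head *rmap_head);

static void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head);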

  Takuya



[PATCH 5/5] KVM: x86: MMU: Consolidate WARN_ON/BUG_ON checks for reverse-mapped sptes

2015-11-05 Thread Takuya Yoshikawa
At some call sites of rmap_get_first() and rmap_get_next(), BUG_ON is
placed right after the call to detect unrelated sptes which should not
be found in the reverse-mapping list.

Move this check in rmap_get_first/next() so that all call sites, not
just the users of the for_each_rmap_spte() macro, will be checked the
same way.  In addition, change the BUG_ON to WARN_ON since killing the
whole host is the last thing that KVM should try.

One thing to keep in mind is that kvm_mmu_unlink_parents() also uses
rmap_get_first() to handle parent sptes.  The change will not break it
because parent sptes are present, at least until drop_parent_pte()
actually unlinks them, and not mmio-sptes.

Signed-off-by: Takuya Yoshikawa 
---
 Documentation/virtual/kvm/mmu.txt |  4 ++--
 arch/x86/kvm/mmu.c| 31 ++-
 2 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/Documentation/virtual/kvm/mmu.txt 
b/Documentation/virtual/kvm/mmu.txt
index 3a4d681..daf9c0f 100644
--- a/Documentation/virtual/kvm/mmu.txt
+++ b/Documentation/virtual/kvm/mmu.txt
@@ -203,10 +203,10 @@ Shadow pages contain the following information:
 page cannot be destroyed.  See role.invalid.
   parent_ptes:
 The reverse mapping for the pte/ptes pointing at this page's spt. If
-parent_ptes bit 0 is zero, only one spte points at this pages and
+parent_ptes bit 0 is zero, only one spte points at this page and
 parent_ptes points at this single spte, otherwise, there exists multiple
 sptes pointing at this page and (parent_ptes & ~0x1) points at a data
-structure with a list of parent_ptes.
+structure with a list of parent sptes.
   unsync:
 If true, then the translations in this page may not match the guest's
 translation.  This is equivalent to the state of the tlb when a pte is
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c5e2363..353d752 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1099,17 +1099,28 @@ struct rmap_iterator {
  */
 static u64 *rmap_get_first(unsigned long rmap, struct rmap_iterator *iter)
 {
+   u64 *sptep;
+
if (!rmap)
return NULL;
 
if (!(rmap & 1)) {
iter->desc = NULL;
-   return (u64 *)rmap;
+   sptep = (u64 *)rmap;
+   goto out;
}
 
iter->desc = (struct pte_list_desc *)(rmap & ~1ul);
iter->pos = 0;
-   return iter->desc->sptes[iter->pos];
+   sptep = iter->desc->sptes[iter->pos];
+out:
+   /*
+* Parent sptes found in sp->parent_ptes lists are also checked here
+* since kvm_mmu_unlink_parents() uses this function.  If the condition
+* needs to be changed for them, make another wrapper function.
+*/
+   WARN_ON(!is_shadow_present_pte(*sptep));
+   return sptep;
 }
 
 /*
@@ -1119,14 +1130,14 @@ static u64 *rmap_get_first(unsigned long rmap, struct 
rmap_iterator *iter)
  */
 static u64 *rmap_get_next(struct rmap_iterator *iter)
 {
+   u64 *sptep;
+
if (iter->desc) {
if (iter->pos < PTE_LIST_EXT - 1) {
-   u64 *sptep;
-
++iter->pos;
sptep = iter->desc->sptes[iter->pos];
if (sptep)
-   return sptep;
+   goto out;
}
 
iter->desc = iter->desc->more;
@@ -1134,17 +1145,20 @@ static u64 *rmap_get_next(struct rmap_iterator *iter)
if (iter->desc) {
iter->pos = 0;
/* desc->sptes[0] cannot be NULL */
-   return iter->desc->sptes[iter->pos];
+   sptep = iter->desc->sptes[iter->pos];
+   goto out;
}
}
 
return NULL;
+out:
+   WARN_ON(!is_shadow_present_pte(*sptep));
+   return sptep;
 }
 
 #define for_each_rmap_spte(_rmap_, _iter_, _spte_) \
   for (_spte_ = rmap_get_first(*_rmap_, _iter_);   \
-   _spte_ && ({BUG_ON(!is_shadow_present_pte(*_spte_)); 1;});  \
-   _spte_ = rmap_get_next(_iter_))
+   _spte_; _spte_ = rmap_get_next(_iter_))
 
 static void drop_spte(struct kvm *kvm, u64 *sptep)
 {
@@ -1358,7 +1372,6 @@ static bool kvm_zap_rmapp(struct kvm *kvm, unsigned long *rmapp)
bool flush = false;
 
while ((sptep = rmap_get_first(*rmapp, &iter))) {
-   BUG_ON(!(*sptep & PT_PRESENT_MASK));
rmap_printk("%s: spte %p %llx.\n", __func__, sptep, *sptep);
 
drop_spte(kvm, sptep);
-- 
2.1.0



[PATCH 4/5] KVM: x86: MMU: Remove is_rmap_spte() and use is_shadow_present_pte()

2015-11-05 Thread Takuya Yoshikawa
is_rmap_spte(), originally named is_rmap_pte(), was introduced when the
simple reverse mapping was implemented by commit cd4a4e5374110444
("[PATCH] KVM: MMU: Implement simple reverse mapping").  At that point,
its role was clear and only rmap_add() and rmap_remove() were using it
to select sptes that need to be reverse-mapped.

Independently of that, is_shadow_present_pte() was first introduced by
commit c7addb902054195b ("KVM: Allow not-present guest page faults to
bypass kvm") to do bypass_guest_pf optimization, which does not exist
any more.

These two seem to have changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.

Since using both of them with no clear distinction just makes the
code confusing, remove is_rmap_spte().

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c   | 13 -
 arch/x86/kvm/mmu_audit.c |  2 +-
 2 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 69e7d20..c5e2363 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -311,11 +311,6 @@ static int is_large_pte(u64 pte)
return pte & PT_PAGE_SIZE_MASK;
 }
 
-static int is_rmap_spte(u64 pte)
-{
-   return is_shadow_present_pte(pte);
-}
-
 static int is_last_spte(u64 pte, int level)
 {
if (level == PT_PAGE_TABLE_LEVEL)
@@ -540,7 +535,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
u64 old_spte = *sptep;
bool ret = false;
 
-   WARN_ON(!is_rmap_spte(new_spte));
+   WARN_ON(!is_shadow_present_pte(new_spte));
 
if (!is_shadow_present_pte(old_spte)) {
mmu_spte_set(sptep, new_spte);
@@ -595,7 +590,7 @@ static int mmu_spte_clear_track_bits(u64 *sptep)
else
old_spte = __update_clear_spte_slow(sptep, 0ull);
 
-   if (!is_rmap_spte(old_spte))
+   if (!is_shadow_present_pte(old_spte))
return 0;
 
pfn = spte_to_pfn(old_spte);
@@ -2575,7 +2570,7 @@ static bool mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep, unsigned pte_access,
pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
 *sptep, write_fault, gfn);
 
-   if (is_rmap_spte(*sptep)) {
+   if (is_shadow_present_pte(*sptep)) {
/*
 * If we overwrite a PTE page pointer with a 2MB PMD, unlink
 * the parent of the now unreachable PTE.
@@ -2919,7 +2914,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 * If the mapping has been changed, let the vcpu fault on the
 * same address again.
 */
-   if (!is_rmap_spte(spte)) {
+   if (!is_shadow_present_pte(spte)) {
ret = true;
goto exit;
}
diff --git a/arch/x86/kvm/mmu_audit.c b/arch/x86/kvm/mmu_audit.c
index 03d518e..90ee420 100644
--- a/arch/x86/kvm/mmu_audit.c
+++ b/arch/x86/kvm/mmu_audit.c
@@ -183,7 +183,7 @@ static void check_mappings_rmap(struct kvm *kvm, struct kvm_mmu_page *sp)
return;
 
for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
-   if (!is_rmap_spte(sp->spt[i]))
+   if (!is_shadow_present_pte(sp->spt[i]))
continue;
 
inspect_spte_has_rmap(kvm, sp->spt + i);
-- 
2.1.0



[PATCH 3/5] KVM: x86: MMU: Make mmu_set_spte() return emulate value

2015-11-05 Thread Takuya Yoshikawa
mmu_set_spte()'s code is based on the assumption that the emulate
parameter has a valid pointer value if set_spte() returns true and
write_fault is not zero.  In other cases, emulate may be NULL, so a
NULL-check is needed.

Stop passing the emulate pointer and make mmu_set_spte() return the emulate
value instead, to clean up this complex interface.  Prefetch functions
can just throw away the return value.
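
A minimal sketch of the calling-convention change (condensed from the hunks
below; the argument names are those of the existing code):

        /* before: out parameter, only written on some paths */
        int emulate = 0;
        mmu_set_spte(vcpu, sptep, pte_access, write_fault, &emulate,
                     level, gfn, pfn, speculative, host_writable);

        /* after: plain return value, always well defined */
        bool emulate = mmu_set_spte(vcpu, sptep, pte_access, write_fault,
                                    level, gfn, pfn, speculative, host_writable);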

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 27 ++-
 arch/x86/kvm/paging_tmpl.h | 10 +-
 2 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a9622a2..69e7d20 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2564,13 +2564,13 @@ done:
return ret;
 }
 
-static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-unsigned pte_access, int write_fault, int *emulate,
-int level, gfn_t gfn, pfn_t pfn, bool speculative,
-bool host_writable)
+static bool mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep, unsigned pte_access,
+int write_fault, int level, gfn_t gfn, pfn_t pfn,
+bool speculative, bool host_writable)
 {
int was_rmapped = 0;
int rmap_count;
+   bool emulate = false;
 
pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
 *sptep, write_fault, gfn);
@@ -2600,12 +2600,12 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
if (set_spte(vcpu, sptep, pte_access, level, gfn, pfn, speculative,
  true, host_writable)) {
if (write_fault)
-   *emulate = 1;
+   emulate = true;
kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
}
 
-   if (unlikely(is_mmio_spte(*sptep) && emulate))
-   *emulate = 1;
+   if (unlikely(is_mmio_spte(*sptep)))
+   emulate = true;
 
pgprintk("%s: setting spte %llx\n", __func__, *sptep);
pgprintk("instantiating %s PTE (%s) at %llx (%llx) addr %p\n",
@@ -2624,6 +2624,8 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
}
 
kvm_release_pfn_clean(pfn);
+
+   return emulate;
 }
 
 static pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
@@ -2658,9 +2660,8 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
return -1;
 
for (i = 0; i < ret; i++, gfn++, start++)
-   mmu_set_spte(vcpu, start, access, 0, NULL,
-sp->role.level, gfn, page_to_pfn(pages[i]),
-true, true);
+   mmu_set_spte(vcpu, start, access, 0, sp->role.level, gfn,
+page_to_pfn(pages[i]), true, true);
 
return 0;
 }
@@ -2721,9 +2722,9 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, int map_writable,
 
for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
if (iterator.level == level) {
-   mmu_set_spte(vcpu, iterator.sptep, ACC_ALL,
-write, &emulate, level, gfn, pfn,
-prefault, map_writable);
+   emulate = mmu_set_spte(vcpu, iterator.sptep, ACC_ALL,
+  write, level, gfn, pfn, prefault,
+  map_writable);
direct_pte_prefetch(vcpu, iterator.sptep);
++vcpu->stat.pf_fixed;
break;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index b41faa9..de24499 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -475,8 +475,8 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 * we call mmu_set_spte() with host_writable = true because
 * pte_prefetch_gfn_to_pfn always gets a writable pfn.
 */
-   mmu_set_spte(vcpu, spte, pte_access, 0, NULL, PT_PAGE_TABLE_LEVEL,
-gfn, pfn, true, true);
+   mmu_set_spte(vcpu, spte, pte_access, 0, PT_PAGE_TABLE_LEVEL, gfn, pfn,
+true, true);
 
return true;
 }
@@ -556,7 +556,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
struct kvm_mmu_page *sp = NULL;
struct kvm_shadow_walk_iterator it;
unsigned direct_access, access = gw->pt_access;
-   int top_level, emulate = 0;
+   int top_level, emulate;
 
direct_access = gw->pte_access;
 
@@ -622,8 +622,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
}
 
clear_sp_write_flooding_count(it.sptep);
-   mmu_set_spte(vcpu, it.sptep, gw->pte_access, write_fault, &emulate,
-it.level, gw->gfn

[PATCH 2/5] KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap

2015-11-05 Thread Takuya Yoshikawa
Both __mmu_unsync_walk() and mmu_pages_clear_parents() contain a three-line
block of code that clears a bit in the unsync child bitmap; the former places
it inside a loop block and uses a few goto statements to jump to it.

A new helper function, clear_unsync_child_bit(), makes the code cleaner.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 36 ++--
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a76bc04..a9622a2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1806,6 +1806,13 @@ static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp,
return (pvec->nr == KVM_PAGE_ARRAY_NR);
 }
 
+static inline void clear_unsync_child_bit(struct kvm_mmu_page *sp, int idx)
+{
+   --sp->unsync_children;
+   WARN_ON((int)sp->unsync_children < 0);
+   __clear_bit(idx, sp->unsync_child_bitmap);
+}
+
 static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
   struct kvm_mmu_pages *pvec)
 {
@@ -1815,8 +1822,10 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
struct kvm_mmu_page *child;
u64 ent = sp->spt[i];
 
-   if (!is_shadow_present_pte(ent) || is_large_pte(ent))
-   goto clear_child_bitmap;
+   if (!is_shadow_present_pte(ent) || is_large_pte(ent)) {
+   clear_unsync_child_bit(sp, i);
+   continue;
+   }
 
child = page_header(ent & PT64_BASE_ADDR_MASK);
 
@@ -1825,28 +1834,21 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
return -ENOSPC;
 
ret = __mmu_unsync_walk(child, pvec);
-   if (!ret)
-   goto clear_child_bitmap;
-   else if (ret > 0)
+   if (!ret) {
+   clear_unsync_child_bit(sp, i);
+   continue;
+   } else if (ret > 0) {
nr_unsync_leaf += ret;
-   else
+   } else
return ret;
} else if (child->unsync) {
nr_unsync_leaf++;
if (mmu_pages_add(pvec, child, i))
return -ENOSPC;
} else
-goto clear_child_bitmap;
-
-   continue;
-
-clear_child_bitmap:
-   __clear_bit(i, sp->unsync_child_bitmap);
-   sp->unsync_children--;
-   WARN_ON((int)sp->unsync_children < 0);
+   clear_unsync_child_bit(sp, i);
}
 
-
return nr_unsync_leaf;
 }
 
@@ -2009,9 +2011,7 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents)
if (!sp)
return;
 
-   --sp->unsync_children;
-   WARN_ON((int)sp->unsync_children < 0);
-   __clear_bit(idx, sp->unsync_child_bitmap);
+   clear_unsync_child_bit(sp, idx);
level++;
} while (level < PT64_ROOT_LEVEL-1 && !sp->unsync_children);
 }
-- 
2.1.0



[PATCH 1/5] KVM: x86: MMU: Remove unused parameter of __direct_map()

2015-11-05 Thread Takuya Yoshikawa
Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 11 ---
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7d85bca..a76bc04 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
__direct_pte_prefetch(vcpu, sp, sptep);
 }
 
-static int __direct_map(struct kvm_vcpu *vcpu, gpa_t v, int write,
-   int map_writable, int level, gfn_t gfn, pfn_t pfn,
-   bool prefault)
+static int __direct_map(struct kvm_vcpu *vcpu, int write, int map_writable,
+   int level, gfn_t gfn, pfn_t pfn, bool prefault)
 {
struct kvm_shadow_walk_iterator iterator;
struct kvm_mmu_page *sp;
@@ -3018,8 +3017,7 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
make_mmu_pages_available(vcpu);
if (likely(!force_pt_level))
transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
-   r = __direct_map(vcpu, v, write, map_writable, level, gfn, pfn,
-prefault);
+   r = __direct_map(vcpu, write, map_writable, level, gfn, pfn, prefault);
spin_unlock(&vcpu->kvm->mmu_lock);
 
 
@@ -3541,8 +3539,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
make_mmu_pages_available(vcpu);
if (likely(!force_pt_level))
transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
-   r = __direct_map(vcpu, gpa, write, map_writable,
-level, gfn, pfn, prefault);
+   r = __direct_map(vcpu, write, map_writable, level, gfn, pfn, prefault);
spin_unlock(&vcpu->kvm->mmu_lock);
 
return r;
-- 
2.1.0



[PATCH 0/5] KVM: x86: MMU: Clean up x86's mmu code for future work

2015-11-05 Thread Takuya Yoshikawa
Patches 1/2/3 are easy ones.

The following two, patches 4/5, may not be ideal solutions, but they at
least explain, or try to explain, the problems.

Takuya Yoshikawa (5):
  KVM: x86: MMU: Remove unused parameter of __direct_map()
  KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap
  KVM: x86: MMU: Make mmu_set_spte() return emulate value
  KVM: x86: MMU: Remove is_rmap_spte() and use is_shadow_present_pte()
  KVM: x86: MMU: Consolidate WARN_ON/BUG_ON checks for reverse-mapped sptes

 Documentation/virtual/kvm/mmu.txt |   4 +-
 arch/x86/kvm/mmu.c| 118 --
 arch/x86/kvm/mmu_audit.c  |   2 +-
 arch/x86/kvm/paging_tmpl.h|  10 ++--
 4 files changed, 70 insertions(+), 64 deletions(-)

-- 
2.1.0



[PATCH] KVM: x86: MMU: Initialize force_pt_level before calling mapping_level()

2015-10-18 Thread Takuya Yoshikawa
Commit fd1369021878 ("KVM: x86: MMU: Move mapping_level_dirty_bitmap()
call in mapping_level()") forgot to initialize force_pt_level to false
in FNAME(page_fault)() before calling mapping_level() like
nonpaging_map() does.  This can sometimes result in forcing page table
level mapping unnecessarily.

Fix this and move the first *force_pt_level check in mapping_level()
before kvm_vcpu_gfn_to_memslot() call to make it a bit clearer that
the variable must be initialized before mapping_level() gets called.

This change can also avoid calling kvm_vcpu_gfn_to_memslot() when
!check_hugepage_cache_consistency() check in tdp_page_fault() forces
page table level mapping.
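
In other words, the contract after this patch is (condensed sketch of the
fixed call sites, see the diff below):

        bool force_pt_level = false;    /* callers must initialize this */

        level = mapping_level(vcpu, gfn, &force_pt_level);
        /* mapping_level() now returns PT_PAGE_TABLE_LEVEL right away when
         * *force_pt_level is already true, without any memslot lookup */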

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 7 ---
 arch/x86/kvm/paging_tmpl.h | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index dd2a7c6..7d85bca 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -886,10 +886,11 @@ static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn,
int host_level, level, max_level;
struct kvm_memory_slot *slot;
 
-   slot = kvm_vcpu_gfn_to_memslot(vcpu, large_gfn);
+   if (unlikely(*force_pt_level))
+   return PT_PAGE_TABLE_LEVEL;
 
-   if (likely(!*force_pt_level))
-   *force_pt_level = !memslot_valid_for_gpte(slot, true);
+   slot = kvm_vcpu_gfn_to_memslot(vcpu, large_gfn);
+   *force_pt_level = !memslot_valid_for_gpte(slot, true);
if (unlikely(*force_pt_level))
return PT_PAGE_TABLE_LEVEL;
 
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index bf39d0f..b41faa9 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -698,7 +698,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
int r;
pfn_t pfn;
int level = PT_PAGE_TABLE_LEVEL;
-   bool force_pt_level;
+   bool force_pt_level = false;
unsigned long mmu_seq;
bool map_writable, is_self_change_mapping;
 
-- 
2.1.0



[PATCH 5/5] KVM: x86: MMU: Eliminate an extra memory slot search in mapping_level()

2015-10-16 Thread Takuya Yoshikawa
Calling kvm_vcpu_gfn_to_memslot() twice in mapping_level() should be
avoided since getting a slot by binary search may not be negligible,
especially for virtual machines with many memory slots.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 17 +++--
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 09833b0..dd2a7c6 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -818,14 +818,11 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
kvm->arch.indirect_shadow_pages--;
 }
 
-static int has_wrprotected_page(struct kvm_vcpu *vcpu,
-   gfn_t gfn,
-   int level)
+static int __has_wrprotected_page(gfn_t gfn, int level,
+ struct kvm_memory_slot *slot)
 {
-   struct kvm_memory_slot *slot;
struct kvm_lpage_info *linfo;
 
-   slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
if (slot) {
linfo = lpage_info_slot(gfn, slot, level);
return linfo->write_count;
@@ -834,6 +831,14 @@ static int has_wrprotected_page(struct kvm_vcpu *vcpu,
return 1;
 }
 
+static int has_wrprotected_page(struct kvm_vcpu *vcpu, gfn_t gfn, int level)
+{
+   struct kvm_memory_slot *slot;
+
+   slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+   return __has_wrprotected_page(gfn, level, slot);
+}
+
 static int host_mapping_level(struct kvm *kvm, gfn_t gfn)
 {
unsigned long page_size;
@@ -896,7 +901,7 @@ static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn,
max_level = min(kvm_x86_ops->get_lpage_level(), host_level);
 
for (level = PT_DIRECTORY_LEVEL; level <= max_level; ++level)
-   if (has_wrprotected_page(vcpu, large_gfn, level))
+   if (__has_wrprotected_page(large_gfn, level, slot))
break;
 
return level - 1;
-- 
2.1.0



[PATCH 4/5] KVM: x86: MMU: Remove mapping_level_dirty_bitmap()

2015-10-16 Thread Takuya Yoshikawa
Now that it has only one caller, and its name is not so helpful for
readers, remove it.  Instead, the new memslot_valid_for_gpte() function
makes it possible to share the common code.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 24 
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 890cd69..09833b0 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -851,6 +851,17 @@ static int host_mapping_level(struct kvm *kvm, gfn_t gfn)
return ret;
 }
 
+static inline bool memslot_valid_for_gpte(struct kvm_memory_slot *slot,
+ bool no_dirty_log)
+{
+   if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
+   return false;
+   if (no_dirty_log && slot->dirty_bitmap)
+   return false;
+
+   return true;
+}
+
 static struct kvm_memory_slot *
 gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
bool no_dirty_log)
@@ -858,25 +869,22 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
struct kvm_memory_slot *slot;
 
slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-   if (!slot || slot->flags & KVM_MEMSLOT_INVALID ||
- (no_dirty_log && slot->dirty_bitmap))
+   if (!memslot_valid_for_gpte(slot, no_dirty_log))
slot = NULL;
 
return slot;
 }
 
-static bool mapping_level_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t large_gfn)
-{
-   return !gfn_to_memslot_dirty_bitmap(vcpu, large_gfn, true);
-}
-
 static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn,
 bool *force_pt_level)
 {
int host_level, level, max_level;
+   struct kvm_memory_slot *slot;
+
+   slot = kvm_vcpu_gfn_to_memslot(vcpu, large_gfn);
 
if (likely(!*force_pt_level))
-   *force_pt_level = mapping_level_dirty_bitmap(vcpu, large_gfn);
+   *force_pt_level = !memslot_valid_for_gpte(slot, true);
if (unlikely(*force_pt_level))
return PT_PAGE_TABLE_LEVEL;
 
-- 
2.1.0



[PATCH 3/5] KVM: x86: MMU: Merge mapping_level_dirty_bitmap() into mapping_level()

2015-10-16 Thread Takuya Yoshikawa
This is necessary to eliminate an extra memory slot search later.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 29 ++---
 arch/x86/kvm/paging_tmpl.h |  6 +++---
 2 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2262728..890cd69 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -870,10 +870,16 @@ static bool mapping_level_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t large_gfn)
return !gfn_to_memslot_dirty_bitmap(vcpu, large_gfn, true);
 }
 
-static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn)
+static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn,
+bool *force_pt_level)
 {
int host_level, level, max_level;
 
+   if (likely(!*force_pt_level))
+   *force_pt_level = mapping_level_dirty_bitmap(vcpu, large_gfn);
+   if (unlikely(*force_pt_level))
+   return PT_PAGE_TABLE_LEVEL;
+
host_level = host_mapping_level(vcpu->kvm, large_gfn);
 
if (host_level == PT_PAGE_TABLE_LEVEL)
@@ -2962,14 +2968,13 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
 {
int r;
int level;
-   bool force_pt_level;
+   bool force_pt_level = false;
pfn_t pfn;
unsigned long mmu_seq;
bool map_writable, write = error_code & PFERR_WRITE_MASK;
 
-   force_pt_level = mapping_level_dirty_bitmap(vcpu, gfn);
+   level = mapping_level(vcpu, gfn, &force_pt_level);
if (likely(!force_pt_level)) {
-   level = mapping_level(vcpu, gfn);
/*
 * This path builds a PAE pagetable - so we can map
 * 2mb pages at maximum. Therefore check if the level
@@ -2979,8 +2984,7 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
level = PT_DIRECTORY_LEVEL;
 
gfn &= ~(KVM_PAGES_PER_HPAGE(level) - 1);
-   } else
-   level = PT_PAGE_TABLE_LEVEL;
+   }
 
if (fast_page_fault(vcpu, v, level, error_code))
return 0;
@@ -3495,20 +3499,15 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
if (r)
return r;
 
-   if (mapping_level_dirty_bitmap(vcpu, gfn) ||
-   !check_hugepage_cache_consistency(vcpu, gfn, PT_DIRECTORY_LEVEL))
-   force_pt_level = true;
-   else
-   force_pt_level = false;
-
+   force_pt_level = !check_hugepage_cache_consistency(vcpu, gfn,
+  PT_DIRECTORY_LEVEL);
+   level = mapping_level(vcpu, gfn, &force_pt_level);
if (likely(!force_pt_level)) {
-   level = mapping_level(vcpu, gfn);
if (level > PT_DIRECTORY_LEVEL &&
!check_hugepage_cache_consistency(vcpu, gfn, level))
level = PT_DIRECTORY_LEVEL;
gfn &= ~(KVM_PAGES_PER_HPAGE(level) - 1);
-   } else
-   level = PT_PAGE_TABLE_LEVEL;
+   }
 
if (fast_page_fault(vcpu, gpa, level, error_code))
return 0;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 8ebc3a5..bf39d0f 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -744,9 +744,9 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
  &walker, user_fault, &vcpu->arch.write_fault_to_shadow_pgtable);
 
if (walker.level >= PT_DIRECTORY_LEVEL && !is_self_change_mapping) {
-   force_pt_level = mapping_level_dirty_bitmap(vcpu, walker.gfn);
-   if (!force_pt_level) {
-   level = min(walker.level, mapping_level(vcpu, walker.gfn));
+   level = mapping_level(vcpu, walker.gfn, &force_pt_level);
+   if (likely(!force_pt_level)) {
+   level = min(walker.level, level);
               walker.gfn = walker.gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
}
} else
-- 
2.1.0



[PATCH 2/5] KVM: x86: MMU: Simplify force_pt_level calculation code in FNAME(page_fault)()

2015-10-16 Thread Takuya Yoshikawa
As a bonus, an extra memory slot search can be eliminated when
is_self_change_mapping is true.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/paging_tmpl.h | 15 +++
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 07f1a4e..8ebc3a5 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -743,15 +743,14 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
is_self_change_mapping = FNAME(is_self_change_mapping)(vcpu,
  &walker, user_fault, &vcpu->arch.write_fault_to_shadow_pgtable);
 
-   if (walker.level >= PT_DIRECTORY_LEVEL)
-   force_pt_level = mapping_level_dirty_bitmap(vcpu, walker.gfn)
-  || is_self_change_mapping;
-   else
+   if (walker.level >= PT_DIRECTORY_LEVEL && !is_self_change_mapping) {
+   force_pt_level = mapping_level_dirty_bitmap(vcpu, walker.gfn);
+   if (!force_pt_level) {
+   level = min(walker.level, mapping_level(vcpu, walker.gfn));
+   walker.gfn = walker.gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
+   }
+   } else
force_pt_level = true;
-   if (!force_pt_level) {
-   level = min(walker.level, mapping_level(vcpu, walker.gfn));
-   walker.gfn = walker.gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
-   }
 
mmu_seq = vcpu->kvm->mmu_notifier_seq;
smp_rmb();
-- 
2.1.0



[PATCH 1/5] KVM: x86: MMU: Make force_pt_level bool

2015-10-16 Thread Takuya Yoshikawa
This will be passed to a function later.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c | 8 
 arch/x86/kvm/paging_tmpl.h | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b8482c0..2262728 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2962,7 +2962,7 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
 {
int r;
int level;
-   int force_pt_level;
+   bool force_pt_level;
pfn_t pfn;
unsigned long mmu_seq;
bool map_writable, write = error_code & PFERR_WRITE_MASK;
@@ -3476,7 +3476,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
pfn_t pfn;
int r;
int level;
-   int force_pt_level;
+   bool force_pt_level;
gfn_t gfn = gpa >> PAGE_SHIFT;
unsigned long mmu_seq;
int write = error_code & PFERR_WRITE_MASK;
@@ -3497,9 +3497,9 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 
if (mapping_level_dirty_bitmap(vcpu, gfn) ||
!check_hugepage_cache_consistency(vcpu, gfn, PT_DIRECTORY_LEVEL))
-   force_pt_level = 1;
+   force_pt_level = true;
else
-   force_pt_level = 0;
+   force_pt_level = false;
 
if (likely(!force_pt_level)) {
level = mapping_level(vcpu, gfn);
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 736e6ab..07f1a4e 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -698,7 +698,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
int r;
pfn_t pfn;
int level = PT_PAGE_TABLE_LEVEL;
-   int force_pt_level;
+   bool force_pt_level;
unsigned long mmu_seq;
bool map_writable, is_self_change_mapping;
 
@@ -747,7 +747,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
force_pt_level = mapping_level_dirty_bitmap(vcpu, walker.gfn)
   || is_self_change_mapping;
else
-   force_pt_level = 1;
+   force_pt_level = true;
if (!force_pt_level) {
level = min(walker.level, mapping_level(vcpu, walker.gfn));
walker.gfn = walker.gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
-- 
2.1.0



[PATCH 0/5 v2] KVM: x86: MMU: Eliminate extra memory slot searches in page fault handlers

2015-10-16 Thread Takuya Yoshikawa
v2: Based on Paolo's suggestion, I kept as much of the code common as
possible by introducing memslot_valid_for_gpte().

Note: instead of joining all checks with boolean operators, I split the
no_dirty_log case off into a separate if-check because it checks a clearly
different thing than the rest.  See patch 4 for details.

In page fault handlers, both mapping_level_dirty_bitmap() and mapping_level()
do a memory slot search, a binary search, through kvm_vcpu_gfn_to_memslot(),
whose cost may not be negligible, especially for virtual machines with many
memory slots.

With a bit of cleanup effort, the patch set reduces this overhead.
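
The essence of the change, as a condensed sketch rather than the literal
patch code, is to look the slot up once and pass it down:

        /* before: each helper did its own kvm_vcpu_gfn_to_memslot() */
        force_pt_level = mapping_level_dirty_bitmap(vcpu, gfn);  /* lookup #1 */
        level = mapping_level(vcpu, gfn);                         /* lookup #2 */

        /* after: mapping_level() looks the slot up once and reuses it */
        slot = kvm_vcpu_gfn_to_memslot(vcpu, large_gfn);
        *force_pt_level = !memslot_valid_for_gpte(slot, true);
        ...
        __has_wrprotected_page(large_gfn, level, slot);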

Takuya

Takuya Yoshikawa (5):
  KVM: x86: MMU: Make force_pt_level bool
  KVM: x86: MMU: Simplify force_pt_level calculation code in FNAME(page_fault)()
  KVM: x86: MMU: Merge mapping_level_dirty_bitmap() into mapping_level()
  KVM: x86: MMU: Remove mapping_level_dirty_bitmap()
  KVM: x86: MMU: Eliminate an extra memory slot search in mapping_level()

 arch/x86/kvm/mmu.c | 70 +++---
 arch/x86/kvm/paging_tmpl.h | 19 ++---
 2 files changed, 50 insertions(+), 39 deletions(-)

-- 
2.1.0



[PATCH 5/5] KVM: x86: MMU: Eliminate an extra memory slot search in mapping_level()

2015-10-15 Thread Takuya Yoshikawa
Calling kvm_vcpu_gfn_to_memslot() twice in mapping_level() should be
avoided since getting a slot by binary search may not be negligible,
especially for virtual machines with many memory slots.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c |   17 +++--
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 78a3d08..8d285dc 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -818,14 +818,11 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
kvm->arch.indirect_shadow_pages--;
 }
 
-static int has_wrprotected_page(struct kvm_vcpu *vcpu,
-   gfn_t gfn,
-   int level)
+static int __has_wrprotected_page(gfn_t gfn, int level,
+ struct kvm_memory_slot *slot)
 {
-   struct kvm_memory_slot *slot;
struct kvm_lpage_info *linfo;
 
-   slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
if (slot) {
linfo = lpage_info_slot(gfn, slot, level);
return linfo->write_count;
@@ -834,6 +831,14 @@ static int has_wrprotected_page(struct kvm_vcpu *vcpu,
return 1;
 }
 
+static int has_wrprotected_page(struct kvm_vcpu *vcpu, gfn_t gfn, int level)
+{
+   struct kvm_memory_slot *slot;
+
+   slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+   return __has_wrprotected_page(gfn, level, slot);
+}
+
 static int host_mapping_level(struct kvm *kvm, gfn_t gfn)
 {
unsigned long page_size;
@@ -893,7 +898,7 @@ static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn,
max_level = min(kvm_x86_ops->get_lpage_level(), host_level);
 
for (level = PT_DIRECTORY_LEVEL; level <= max_level; ++level)
-   if (has_wrprotected_page(vcpu, large_gfn, level))
+   if (__has_wrprotected_page(large_gfn, level, slot))
break;
 
return level - 1;
-- 
1.7.9.5



[PATCH 4/5] KVM: x86: MMU: Remove mapping_level_dirty_bitmap()

2015-10-15 Thread Takuya Yoshikawa
Now that it has only one caller, and its name is not so helpful for
readers, just remove it.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c |   21 +
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 890cd69..78a3d08 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -851,6 +851,14 @@ static int host_mapping_level(struct kvm *kvm, gfn_t gfn)
return ret;
 }
 
+static inline bool memslot_invalid(struct kvm_memory_slot *slot)
+{
+   if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
+   return true;
+
+   return false;
+}
+
 static struct kvm_memory_slot *
 gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
bool no_dirty_log)
@@ -858,25 +866,22 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
struct kvm_memory_slot *slot;
 
slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-   if (!slot || slot->flags & KVM_MEMSLOT_INVALID ||
- (no_dirty_log && slot->dirty_bitmap))
+   if (memslot_invalid(slot) || (no_dirty_log && slot->dirty_bitmap))
slot = NULL;
 
return slot;
 }
 
-static bool mapping_level_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t large_gfn)
-{
-   return !gfn_to_memslot_dirty_bitmap(vcpu, large_gfn, true);
-}
-
 static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn,
 bool *force_pt_level)
 {
int host_level, level, max_level;
+   struct kvm_memory_slot *slot;
+
+   slot = kvm_vcpu_gfn_to_memslot(vcpu, large_gfn);
 
if (likely(!*force_pt_level))
-   *force_pt_level = mapping_level_dirty_bitmap(vcpu, large_gfn);
+   *force_pt_level = memslot_invalid(slot) || slot->dirty_bitmap;
if (unlikely(*force_pt_level))
return PT_PAGE_TABLE_LEVEL;
 
-- 
1.7.9.5



[PATCH 3/5] KVM: x86: MMU: Merge mapping_level_dirty_bitmap() into mapping_level()

2015-10-15 Thread Takuya Yoshikawa
This is necessary to eliminate an extra memory slot search later.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c |   29 ++---
 arch/x86/kvm/paging_tmpl.h |6 +++---
 2 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2262728..890cd69 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -870,10 +870,16 @@ static bool mapping_level_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t large_gfn)
return !gfn_to_memslot_dirty_bitmap(vcpu, large_gfn, true);
 }
 
-static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn)
+static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn,
+bool *force_pt_level)
 {
int host_level, level, max_level;
 
+   if (likely(!*force_pt_level))
+   *force_pt_level = mapping_level_dirty_bitmap(vcpu, large_gfn);
+   if (unlikely(*force_pt_level))
+   return PT_PAGE_TABLE_LEVEL;
+
host_level = host_mapping_level(vcpu->kvm, large_gfn);
 
if (host_level == PT_PAGE_TABLE_LEVEL)
@@ -2962,14 +2968,13 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
 {
int r;
int level;
-   bool force_pt_level;
+   bool force_pt_level = false;
pfn_t pfn;
unsigned long mmu_seq;
bool map_writable, write = error_code & PFERR_WRITE_MASK;
 
-   force_pt_level = mapping_level_dirty_bitmap(vcpu, gfn);
+   level = mapping_level(vcpu, gfn, &force_pt_level);
if (likely(!force_pt_level)) {
-   level = mapping_level(vcpu, gfn);
/*
 * This path builds a PAE pagetable - so we can map
 * 2mb pages at maximum. Therefore check if the level
@@ -2979,8 +2984,7 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
level = PT_DIRECTORY_LEVEL;
 
gfn &= ~(KVM_PAGES_PER_HPAGE(level) - 1);
-   } else
-   level = PT_PAGE_TABLE_LEVEL;
+   }
 
if (fast_page_fault(vcpu, v, level, error_code))
return 0;
@@ -3495,20 +3499,15 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
if (r)
return r;
 
-   if (mapping_level_dirty_bitmap(vcpu, gfn) ||
-   !check_hugepage_cache_consistency(vcpu, gfn, PT_DIRECTORY_LEVEL))
-   force_pt_level = true;
-   else
-   force_pt_level = false;
-
+   force_pt_level = !check_hugepage_cache_consistency(vcpu, gfn,
+  PT_DIRECTORY_LEVEL);
+   level = mapping_level(vcpu, gfn, &force_pt_level);
if (likely(!force_pt_level)) {
-   level = mapping_level(vcpu, gfn);
if (level > PT_DIRECTORY_LEVEL &&
!check_hugepage_cache_consistency(vcpu, gfn, level))
level = PT_DIRECTORY_LEVEL;
gfn &= ~(KVM_PAGES_PER_HPAGE(level) - 1);
-   } else
-   level = PT_PAGE_TABLE_LEVEL;
+   }
 
if (fast_page_fault(vcpu, gpa, level, error_code))
return 0;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 8ebc3a5..bf39d0f 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -744,9 +744,9 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
  &walker, user_fault, &vcpu->arch.write_fault_to_shadow_pgtable);
 
if (walker.level >= PT_DIRECTORY_LEVEL && !is_self_change_mapping) {
-   force_pt_level = mapping_level_dirty_bitmap(vcpu, walker.gfn);
-   if (!force_pt_level) {
-   level = min(walker.level, mapping_level(vcpu, walker.gfn));
+   level = mapping_level(vcpu, walker.gfn, &force_pt_level);
+   if (likely(!force_pt_level)) {
+   level = min(walker.level, level);
               walker.gfn = walker.gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
}
} else
-- 
1.7.9.5



[PATCH 1/5] KVM: x86: MMU: Make force_pt_level bool

2015-10-15 Thread Takuya Yoshikawa
This will be passed to a function later.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c |8 
 arch/x86/kvm/paging_tmpl.h |4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b8482c0..2262728 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2962,7 +2962,7 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
 {
int r;
int level;
-   int force_pt_level;
+   bool force_pt_level;
pfn_t pfn;
unsigned long mmu_seq;
bool map_writable, write = error_code & PFERR_WRITE_MASK;
@@ -3476,7 +3476,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
pfn_t pfn;
int r;
int level;
-   int force_pt_level;
+   bool force_pt_level;
gfn_t gfn = gpa >> PAGE_SHIFT;
unsigned long mmu_seq;
int write = error_code & PFERR_WRITE_MASK;
@@ -3497,9 +3497,9 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 
if (mapping_level_dirty_bitmap(vcpu, gfn) ||
!check_hugepage_cache_consistency(vcpu, gfn, PT_DIRECTORY_LEVEL))
-   force_pt_level = 1;
+   force_pt_level = true;
else
-   force_pt_level = 0;
+   force_pt_level = false;
 
if (likely(!force_pt_level)) {
level = mapping_level(vcpu, gfn);
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 736e6ab..07f1a4e 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -698,7 +698,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
int r;
pfn_t pfn;
int level = PT_PAGE_TABLE_LEVEL;
-   int force_pt_level;
+   bool force_pt_level;
unsigned long mmu_seq;
bool map_writable, is_self_change_mapping;
 
@@ -747,7 +747,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
force_pt_level = mapping_level_dirty_bitmap(vcpu, walker.gfn)
   || is_self_change_mapping;
else
-   force_pt_level = 1;
+   force_pt_level = true;
if (!force_pt_level) {
level = min(walker.level, mapping_level(vcpu, walker.gfn));
walker.gfn = walker.gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
-- 
1.7.9.5



[PATCH 2/5] KVM: x86: MMU: Simplify force_pt_level calculation code in FNAME(page_fault)()

2015-10-15 Thread Takuya Yoshikawa
As a bonus, an extra memory slot search can be eliminated when
is_self_change_mapping is true.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/paging_tmpl.h |   15 +++
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 07f1a4e..8ebc3a5 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -743,15 +743,14 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
is_self_change_mapping = FNAME(is_self_change_mapping)(vcpu,
  &walker, user_fault, &vcpu->arch.write_fault_to_shadow_pgtable);
 
-   if (walker.level >= PT_DIRECTORY_LEVEL)
-   force_pt_level = mapping_level_dirty_bitmap(vcpu, walker.gfn)
-  || is_self_change_mapping;
-   else
+   if (walker.level >= PT_DIRECTORY_LEVEL && !is_self_change_mapping) {
+   force_pt_level = mapping_level_dirty_bitmap(vcpu, walker.gfn);
+   if (!force_pt_level) {
+   level = min(walker.level, mapping_level(vcpu, walker.gfn));
+   walker.gfn = walker.gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
+   }
+   } else
force_pt_level = true;
-   if (!force_pt_level) {
-   level = min(walker.level, mapping_level(vcpu, walker.gfn));
-   walker.gfn = walker.gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
-   }
 
mmu_seq = vcpu->kvm->mmu_notifier_seq;
smp_rmb();
-- 
1.7.9.5



[PATCH 0/5] KVM: x86: MMU: Eliminate extra memory slot searches in page fault handlers

2015-10-15 Thread Takuya Yoshikawa
In page fault handlers, both mapping_level_dirty_bitmap() and mapping_level()
do a memory slot search, a binary search, through kvm_vcpu_gfn_to_memslot(),
whose cost may not be negligible, especially for virtual machines with many
memory slots.

With a bit of cleanup effort, the patch set reduces this overhead.

  [PATCH 1/5] KVM: x86: MMU: Make force_pt_level bool
  [PATCH 2/5] KVM: x86: MMU: Simplify force_pt_level calculation code in FNAME(page_fault)()
  [PATCH 3/5] KVM: x86: MMU: Merge mapping_level_dirty_bitmap() into mapping_level()
  [PATCH 4/5] KVM: x86: MMU: Remove mapping_level_dirty_bitmap()
  [PATCH 5/5] KVM: x86: MMU: Eliminate an extra memory slot search in mapping_level()

Takuya


Re: [PATCH 9/9] KVM: x86: pass struct kvm_mmu_page to account/unaccount_shadowed

2015-05-20 Thread Takuya Yoshikawa
On 2015/05/20 2:25, Paolo Bonzini wrote:
> Prepare for multiple address spaces this way, since a VCPU is not available
> where unaccount_shadowed is called.  We will get to the right kvm_memslots
> 1truct through the role field in struct kvm_mmu_page.

typo: s/1truct/struct/

Reviewed-by: Takuya Yoshikawa 

> 
> Signed-off-by: Paolo Bonzini 


Takuya



Re: [PATCH 3/9] KVM: const-ify uses of struct kvm_userspace_memory_region

2015-05-20 Thread Takuya Yoshikawa
On 2015/05/20 2:25, Paolo Bonzini wrote:
> Architecture-specific helpers are not supposed to muck with
> struct kvm_userspace_memory_region contents.  Add const to
> enforce this.

Nice, this is what I wanted to do, but could not do, when I
cleaned up these functions a few years ago.

> 
> In order to eliminate the only write in __kvm_set_memory_region,
> the cleaning of deleted slots is pulled up from update_memslots
> to __kvm_set_memory_region.

-   if (!npages)
-   mem->flags &= ~KVM_MEM_LOG_DIRTY_PAGES;

In particular, removing these hard-to-understand lines is really
nice.

> 
> Signed-off-by: Paolo Bonzini 

Reviewed-by: Takuya Yoshikawa 


Takuya




Re: [PATCH 1/9] KVM: introduce kvm_alloc/free_memslots

2015-05-20 Thread Takuya Yoshikawa
On 2015/05/20 2:25, Paolo Bonzini wrote:

> +static void kvm_free_memslots(struct kvm *kvm, struct kvm_memslots *slots)
> +{
> + struct kvm_memory_slot *memslot;
> +
> + if (!slots)
> + return;
> +
> + kvm_for_each_memslot(memslot, slots)
> + kvm_free_memslot(kvm, memslot, NULL);
> +
> + kvfree(slots);
>   }
>   
>   static struct kvm *kvm_create_vm(unsigned long type)
> @@ -472,17 +519,10 @@ static struct kvm *kvm_create_vm(unsigned long type)
>   BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
>   
>   r = -ENOMEM;
> - kvm->memslots = kvm_kvzalloc(sizeof(struct kvm_memslots));
> + kvm->memslots = kvm_alloc_memslots();
>   if (!kvm->memslots)
>   goto out_err_no_srcu;
>   
> - /*
> -  * Init kvm generation close to the maximum to easily test the
> -  * code of handling generation number wrap-around.
> -  */
> - kvm->memslots->generation = -150;
> -
> - kvm_init_memslots_id(kvm);
>   if (init_srcu_struct(&kvm->srcu))
>   goto out_err_no_srcu;
>   if (init_srcu_struct(&kvm->irq_srcu))
> @@ -523,7 +563,7 @@ out_err_no_srcu:
>   out_err_no_disable:
>   for (i = 0; i < KVM_NR_BUSES; i++)
>   kfree(kvm->buses[i]);
> - kvfree(kvm->memslots);
> + kvm_free_memslots(kvm, kvm->memslots);

The newly added code
+   kvm_for_each_memslot(memslot, slots)
+   kvm_free_memslot(kvm, memslot, NULL);
does nothing in effect, but it looks better to have it here since it
corresponds to the kvm_alloc_memslots() part and may be safer for
future changes.

Other changes look like trivial transitions to the new
kvm_alloc/free_memslots.

Reviewed-by: Takuya Yoshikawa 

>   kvm_arch_free_vm(kvm);
>   return ERR_PTR(r);
>   }

Takuya



[PATCH] KVM: Eliminate extra function calls in kvm_get_dirty_log_protect()

2015-03-17 Thread Takuya Yoshikawa
When no bits in mask are set,
kvm_arch_mmu_enable_log_dirty_pt_masked() has nothing to do.  But since
it needs to be called from the generic code, it cannot be inlined, and
a few function calls, two when PML is enabled, are wasted.

Since it is common to see many pages remain clean, e.g. framebuffers can
stay unchanged for a long time, it is worth eliminating this overhead.
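
The change itself is tiny; the essential part is just guarding the call
(condensed from the hunk below):

        if (mask) {
                offset = i * BITS_PER_LONG;
                kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
                                                        offset, mask);
        }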

Signed-off-by: Takuya Yoshikawa 
---
 virt/kvm/kvm_main.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a109370..420d8cf 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1061,9 +1061,11 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
mask = xchg(&dirty_bitmap[i], 0);
dirty_bitmap_buffer[i] = mask;
 
-   offset = i * BITS_PER_LONG;
-   kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset,
-   mask);
+   if (mask) {
+   offset = i * BITS_PER_LONG;
+   kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
+   offset, mask);
+   }
}
 
spin_unlock(&kvm->mmu_lock);
-- 
2.1.0



Re: [PATCH 0/3] KVM: simplification to the memslots code

2014-11-17 Thread Takuya Yoshikawa
On 2014/11/17 18:23, Paolo Bonzini wrote:
> 
> 
> On 17/11/2014 02:56, Takuya Yoshikawa wrote:
>>>> here are a few small patches that simplify __kvm_set_memory_region
>>>> and associated code.  Can you please review them?
>> Ah, already queued.  Sorry for being late to respond.
> 
> While they are not in kvm/next, there's time to add Reviewed-by's and
> all that.  kvm/queue basically means "I want Fengguang to compile-test
> them, some testing done on x86_64".
> 
> Paolo
> 

OK.

I reviewed patches 2/3 and 3/3, and saw no problems there, only some
improvements.

Takuya




Re: [PATCH 0/3] KVM: simplification to the memslots code

2014-11-16 Thread Takuya Yoshikawa
On 2014/11/14 20:11, Paolo Bonzini wrote:
> Hi Igor and Takuya,
> 
> here are a few small patches that simplify __kvm_set_memory_region
> and associated code.  Can you please review them?

Ah, already queued.  Sorry for being late to respond.

Takuya

> 
> Thanks,
> 
> Paolo
> 
> Paolo Bonzini (3):
>kvm: memslots: track id_to_index changes during the insertion sort
>kvm: commonize allocation of the new memory slots
>kvm: simplify update_memslots invocation
> 
>   virt/kvm/kvm_main.c | 87 ++---
>   1 file changed, 36 insertions(+), 51 deletions(-)
> 




Re: [PATCH 2/3] kvm: commonize allocation of the new memory slots

2014-11-16 Thread Takuya Yoshikawa
On 2014/11/14 20:12, Paolo Bonzini wrote:
> The two kmemdup invocations can be unified.  I find that the new
> placement of the comment makes it easier to see what happens.

A lot easier to follow the logic.

Reviewed-by: Takuya Yoshikawa 

> 
> Signed-off-by: Paolo Bonzini 
> ---
>   virt/kvm/kvm_main.c | 28 +++-
>   1 file changed, 11 insertions(+), 17 deletions(-)
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index c8ff99cc0ccb..7bfc842b96d7 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -865,11 +865,12 @@ int __kvm_set_memory_region(struct kvm *kvm,
>   goto out_free;
>   }
>   
> + slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
> + GFP_KERNEL);
> + if (!slots)
> + goto out_free;
> +
>   if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) {
> - slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
> - GFP_KERNEL);
> - if (!slots)
> - goto out_free;
>   slot = id_to_memslot(slots, mem->slot);
>   slot->flags |= KVM_MEMSLOT_INVALID;
>   
> @@ -885,6 +886,12 @@ int __kvm_set_memory_region(struct kvm *kvm,
>*  - kvm_is_visible_gfn (mmu_check_roots)
>*/
>   kvm_arch_flush_shadow_memslot(kvm, slot);
> +
> + /*
> +  * We can re-use the old_memslots from above, the only 
> difference
> +  * from the currently installed memslots is the invalid flag.  
> This
> +  * will get overwritten by update_memslots anyway.
> +  */
>   slots = old_memslots;
>   }
>   
> @@ -892,19 +899,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
>   if (r)
>   goto out_slots;
>   
> - r = -ENOMEM;
> - /*
> -  * We can re-use the old_memslots from above, the only difference
> -  * from the currently installed memslots is the invalid flag.  This
> -  * will get overwritten by update_memslots anyway.
> -  */
> - if (!slots) {
> - slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
> - GFP_KERNEL);
> - if (!slots)
> - goto out_free;
> - }
> -
>   /* actual memory is freed via old in kvm_free_physmem_slot below */
>   if (change == KVM_MR_DELETE) {
>   new.dirty_bitmap = NULL;
> 




Re: [PATCH 10/12] KVM: MMU: allow locklessly access shadow page table out of vcpu thread

2013-08-07 Thread Takuya Yoshikawa
On Tue, 30 Jul 2013 21:02:08 +0800
Xiao Guangrong  wrote:

> @@ -2342,6 +2358,13 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
>*/
>   kvm_flush_remote_tlbs(kvm);
>  
> + if (kvm->arch.rcu_free_shadow_page) {
> + sp = list_first_entry(invalid_list, struct kvm_mmu_page, link);
> + list_del_init(invalid_list);
> + call_rcu(&sp->rcu, free_pages_rcu);
> + return;
> + }
> +
>   list_for_each_entry_safe(sp, nsp, invalid_list, link) {
>   WARN_ON(!sp->role.invalid || sp->root_count);
>   kvm_mmu_free_page(sp);

Shouldn't we avoid calling call_rcu() when we are holding mmu_lock?

Takuya


Re: [RFC PATCH 00/12] KVM: MMU: locklessly wirte-protect

2013-08-02 Thread Takuya Yoshikawa
om nonpresent to
>present. Other writes cost lots of time to trigger the page fault due to
>write-protection which are fixed by fast page fault which need not take
>mmu-lock.
> 
> Xiao Guangrong (12):
>   KVM: MMU: remove unused parameter
>   KVM: MMU: properly check last spte in fast_page_fault()
>   KVM: MMU: lazily drop large spte
>   KVM: MMU: log dirty page after marking spte writable
>   KVM: MMU: add spte into rmap before logging dirty page
>   KVM: MMU: flush tlb if the spte can be locklessly modified
>   KVM: MMU: redesign the algorithm of pte_list
>   KVM: MMU: introduce nulls desc
>   KVM: MMU: introduce pte-list lockless walker
>   KVM: MMU: allow locklessly access shadow page table out of vcpu thread
>   KVM: MMU: locklessly write-protect the page
>   KVM: MMU: clean up spte_write_protect
> 
>  arch/x86/include/asm/kvm_host.h |  10 +-
>  arch/x86/kvm/mmu.c  | 442 
> 
>  arch/x86/kvm/mmu.h  |  28 +++
>  arch/x86/kvm/x86.c  |  19 +-
>  4 files changed, 356 insertions(+), 143 deletions(-)
> 
> -- 
> 1.8.1.4
> 


-- 
Takuya Yoshikawa 
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] kvm: reset arch memslot info on memslot creation

2013-07-11 Thread Takuya Yoshikawa
On Thu, 11 Jul 2013 10:41:53 +0300
Gleb Natapov  wrote:

> On Wed, Jul 10, 2013 at 10:49:56PM +0900, Takuya Yoshikawa wrote:
> > On Wed, 10 Jul 2013 11:24:39 +0300
> > "Michael S. Tsirkin"  wrote:
> > 
> > > On x86, kvm_arch_create_memslot assumes that rmap/lpage_info for the
> > > slot are zeroed out: if they weren't, error handling code after out_free
> > > label will free memory which wasn't allocated here.
> > > This always happens to be the case because on KVM_MR_DELETE we clear the
> > > whole arch structure.  So there's no bug, but it's cleaner not to rely
> > > on this here.
> > 
> > Yes, the assumption is that the function is called only with zero-sized slots.
> > Since changing the size is not allowed, DELETE-CREATE is the only case we
> > care about.
> > 
> > But isn't it possible to make it explicit that zero-sized slots always have
> > zero-cleared contents instead?  Otherwise, there could be many problems.
> > 
> Do you have something in mind?
> 

I remember that I once wrote code that assumed the flags field was cleared
before creating a new slot, and it was pointed out that such an assumption
might be dangerous: actually, it is cleared separately, but that is not so
easy to notice.

So, I want to make it clear what differentiates DELETE'd slots from others.

If we only assume (npages == 0), CREATE should properly set everything,
keeping the out_free troubles in mind like this patch does.  If we also
assume the other fields are zero, then DELETE is responsible for that
assumption, and some comment in the code may be helpful.

Actually, (rmap==NULL) was once used to check if we needed to allocate memory
for a new slot, meaning that we assumed the latter.  I felt uneasy and changed
that to (npages == 0).

Let's make the underlying assumptions clear now.
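
For instance, a comment along these lines near kvm_arch_create_memslot()
might do (purely illustrative wording, not an actual patch):

        /*
         * Callers guarantee that the slot is either brand new or has been
         * DELETEd before: npages == 0 and all arch fields are zeroed.
         * The out_free error path relies on this; stale rmap/lpage_info
         * pointers would make it free memory that was never allocated here.
         */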

Takuya


Re: [PATCH] kvm: reset arch memslot info on memslot creation

2013-07-10 Thread Takuya Yoshikawa
On Wed, 10 Jul 2013 11:24:39 +0300
"Michael S. Tsirkin"  wrote:

> On x86, kvm_arch_create_memslot assumes that rmap/lpage_info for the
> slot are zeroed out: if they weren't, error handling code after out_free
> label will free memory which wasn't allocated here.
> This always happens to be the case because on KVM_MR_DELETE we clear the
> whole arch structure.  So there's no bug, but it's cleaner not to rely
> on this here.

Yes, the assumption is that the function is called only with zero-sized slots.
Since changing the size is not allowed, DELETE-CREATE is the only case we
care about.

But isn't it possible to make it explicit that zero-sized slots have always
zero-cleared contents instead?  Otherwise, there would be many troubles.

Takuya

> 
> Make the code more robust by clearing the rmap/lpage_info explicitly.
> 
> Signed-off-by: Michael S. Tsirkin 
> ---
>  arch/x86/kvm/x86.c | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index e8ba99c..96e6eb4 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6922,6 +6922,10 @@ int kvm_arch_create_memslot(struct kvm_memory_slot 
> *slot, unsigned long npages)
>  {
>   int i;
>  
> + /* Reset in case slot had some rmap/lpage_info. */
> + memset(&slot->arch.rmap, 0, sizeof slot->arch.rmap);
> + memset(&slot->arch.lpage_info, 0, sizeof slot->arch.lpage_info);
> +
>   for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
>   unsigned long ugfn;
>   int lpages;
> -- 
> MST


-- 
Takuya Yoshikawa 


Re: [PATCH v3 0/6] KVM: MMU: fast invalidate all mmio sptes

2013-06-14 Thread Takuya Yoshikawa
On Thu, 13 Jun 2013 21:08:21 -0300
Marcelo Tosatti  wrote:

> On Fri, Jun 07, 2013 at 04:51:22PM +0800, Xiao Guangrong wrote:

> - Where is the generation number increased?

It looks like it is increased when a new slot is installed in
update_memslots(), because it is based on slots->generation.  This is not
restricted to "create" and "move".

> - Should use spinlock breakable code in kvm_mmu_zap_mmio_sptes()
> (picture guest with 512GB of RAM, even walking all those pages is
> expensive) (ah, patch to remove kvm_mmu_zap_mmio_sptes does that).
> - Is -13 enough to test wraparound? Its highly likely the guest has 
> not began executing by the time 13 kvm_set_memory_calls are made
> (so no sptes around). Perhaps -2000 is more sensible (should confirm
> though).

In the future, after we've tested this enough, we should make the testing
code run only under some debugging configs.  Especially if we change
zap_mmio_sptes() to zap_all_shadows(), very common guests, even without
huge memory like 512GB, can needlessly see the effect of the sudden page
faults.

If necessary, developers can test the wraparound code by lowering the
max_gen itself anyway.

> - Why remove "if (change == KVM_MR_CREATE) || (change
> ==  KVM_MR_MOVE)" from kvm_arch_commit_memory_region?
> Its instructive.

There may be a chance that we miss generation wraparounds if we don't
check other cases: seems unlikely, but theoretically possible.

In short, in the new way, every memory slot change makes the mmio sptes
stored in shadow pages obsolete (or, on a wraparound, zapped) -- am I right?

Takuya


Re: [PATCH v3 0/6] KVM: MMU: fast invalidate all mmio sptes

2013-06-10 Thread Takuya Yoshikawa
On Mon, 10 Jun 2013 16:39:37 +0800
Xiao Guangrong  wrote:

> On 06/10/2013 03:56 PM, Gleb Natapov wrote:
> > On Fri, Jun 07, 2013 at 04:51:22PM +0800, Xiao Guangrong wrote:

> > Looks good to me, but doesn't tis obsolete kvm_mmu_zap_mmio_sptes() and
> > sp->mmio_cached, so they should be removed as part of the patch series?
> 
> Yes, i agree, they should be removed. :)

I'm fine with removing it but please make it clear that you all agree
on the same basis.

Last time, Paolo mentioned the possibility of using some bits of the spte
for other things.  The suggestion there was to keep the sp->mmio_cached code
for the time when we would need to reduce the bits used for generation
numbers.

Do you think that zap_all() is now preemptible and can handle the situation
as well as the current kvm_mmu_zap_mmio_sptes()?

One downside is the need to zap unrelated shadow pages, but if this case
is really very rare, yes I agree, it should not be a problem: it depends
on how many bits we can use.

Just please reconfirm.

Takuya

> 
> There is the patch to do these things:
> 
> From bc1bc36e2640059f06c4860af802ecc74e1f3d2d Mon Sep 17 00:00:00 2001
> From: Xiao Guangrong 
> Date: Mon, 10 Jun 2013 16:28:55 +0800
> Subject: [PATCH 7/6] KVM: MMU: drop kvm_mmu_zap_mmio_sptes
> 
> Drop kvm_mmu_zap_mmio_sptes and use kvm_mmu_invalidate_zap_all_pages
> instead to handle mmio generation number overflow
> 
> Signed-off-by: Xiao Guangrong 
> ---
>  arch/x86/include/asm/kvm_host.h |  1 -
>  arch/x86/kvm/mmu.c  | 22 +-
>  2 files changed, 1 insertion(+), 22 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 90d05ed..966f265 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -230,7 +230,6 @@ struct kvm_mmu_page {
>  #endif
> 
>   int write_flooding_count;
> - bool mmio_cached;
>  };
> 
>  struct kvm_pio_request {
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 35cd0b6..c87b19d 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -246,13 +246,11 @@ static unsigned int kvm_current_mmio_generation(struct 
> kvm *kvm)
>  static void mark_mmio_spte(struct kvm *kvm, u64 *sptep, u64 gfn,
>  unsigned access)
>  {
> - struct kvm_mmu_page *sp =  page_header(__pa(sptep));
>   unsigned int gen = kvm_current_mmio_generation(kvm);
>   u64 mask = generation_mmio_spte_mask(gen);
> 
>   access &= ACC_WRITE_MASK | ACC_USER_MASK;
>   mask |= shadow_mmio_mask | access | gfn << PAGE_SHIFT;
> - sp->mmio_cached = true;
> 
>   trace_mark_mmio_spte(sptep, gfn, access, gen);
>   mmu_spte_set(sptep, mask);
> @@ -4362,24 +4360,6 @@ void kvm_mmu_invalidate_zap_all_pages(struct kvm *kvm)
>   spin_unlock(&kvm->mmu_lock);
>  }
> 
> -static void kvm_mmu_zap_mmio_sptes(struct kvm *kvm)
> -{
> - struct kvm_mmu_page *sp, *node;
> - LIST_HEAD(invalid_list);
> -
> - spin_lock(&kvm->mmu_lock);
> -restart:
> - list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
> - if (!sp->mmio_cached)
> - continue;
> - if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
> - goto restart;
> - }
> -
> - kvm_mmu_commit_zap_page(kvm, &invalid_list);
> - spin_unlock(&kvm->mmu_lock);
> -}
> -
>  static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
>  {
>   return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
> @@ -4395,7 +4375,7 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm)
>* when mark memslot invalid.
>*/
>   if (unlikely(kvm_current_mmio_generation(kvm) >= (MMIO_MAX_GEN - 1)))
> - kvm_mmu_zap_mmio_sptes(kvm);
> + kvm_mmu_invalidate_zap_all_pages(kvm);
>  }
> 
>  static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
> -- 
> 1.8.1.4
> 
> 


-- 
Takuya Yoshikawa 


Re: [PATCH v3 3/6] KVM: MMU: make return value of mmio page fault handler more readable

2013-06-10 Thread Takuya Yoshikawa
On Mon, 10 Jun 2013 10:57:50 +0300
Gleb Natapov  wrote:

> On Fri, Jun 07, 2013 at 04:51:25PM +0800, Xiao Guangrong wrote:

> > +
> > +/*
> > + * Return values of handle_mmio_page_fault_common:
> > + * RET_MMIO_PF_EMULATE: it is a real mmio page fault, emulate the 
> > instruction
> > + *  directly.
> > + * RET_MMIO_PF_RETRY: let CPU fault again on the address.
> > + * RET_MMIO_PF_BUG: bug is detected.
> > + */
> > +enum {
> > +   RET_MMIO_PF_EMULATE = 1,
> > +   RET_MMIO_PF_RETRY = 0,
> > +   RET_MMIO_PF_BUG = -1
> > +};
> I would order them from -1 to 1 and rename RET_MMIO_PF_BUG to
> RET_MMIO_PF_ERROR, but no need to resend just for that.

Why not just let compilers select the values? -- It's an enum.
Any reason to make it start from -1?
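
That is, simply (illustrative; assuming callers only ever compare against
the names):

enum {
	RET_MMIO_PF_EMULATE,
	RET_MMIO_PF_RETRY,
	RET_MMIO_PF_ERROR,
};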

Takuya


Re: [PATCH v7 10/11] KVM: MMU: collapse TLB flushes when zap all pages

2013-05-30 Thread Takuya Yoshikawa
On Fri, 31 May 2013 01:24:43 +0900
Takuya Yoshikawa  wrote:

> On Thu, 30 May 2013 03:53:38 +0300
> Gleb Natapov  wrote:
> 
> > On Wed, May 29, 2013 at 09:19:41PM +0800, Xiao Guangrong wrote:
> > > On 05/29/2013 08:39 PM, Marcelo Tosatti wrote:
> > > > On Wed, May 29, 2013 at 11:03:19AM +0800, Xiao Guangrong wrote:
> > > >>>>> the pages since other vcpus may be doing locklessly shadow
> > > >>>>> page walking
> > > >>>
> > > >>> Ah, yes, i agree with you.
> > > >>>
> > > >>> We can introduce a list, say kvm->arch.obsolte_pages, to link all of 
> > > >>> the
> > > >>> zapped-page, the page-shrink will free the page on that list first.
> > > >>>
> > > >>> Marcelo, if you do not have objection on  patch 1 ~ 8 and 11, could 
> > > >>> you please
> > > >>> let them merged first, and do add some comments and tlb optimization 
> > > >>> later?
> > > >>
> > > >> Exclude patch 11 please, since it depends on the "collapse" 
> > > >> optimization.
> > > > 
> > > > I'm fine with patch 1 being merged. I think the remaining patches need 
> > > > better
> > > > understanding or explanation. The problems i see are:
> > > > 
> > > > 1) The magic number "10" to zap before considering reschedule is
> > > > annoying. It would be good to understand why it is needed at all.
> > > 
> > > ..
> > > 
> > > > 
> > > > But then again, the testcase is measuring kvm_mmu_zap_all performance
> > > > alone which we know is not a common operation, so perhaps there is
> > > > no need for that minimum-pages-to-zap-before-reschedule.
> > > 
> > > Well. Although, this is not the common operation, but this operation
> > > can be triggered by VCPU - it one VCPU take long time on zap-all-pages,
> > > other vcpus is missing IPI-synce, or missing IO. This is easily cause
> > > soft lockups if the vcpu is doing memslot-releated things.
> > > 
> > +1. If it is trigarable by a guest it may slow down the guest, but we
> > should not allow for it to slow down a host.
> > 
> 
> Well, I don't object to the minimum-pages-to-zap-before-reschedule idea
> itself, but if you're going to take patch 4, please at least add a warning
> in the changelog that the magic number "10" was selected without good enough
> reasoning.
> 
> "[ It improves kernel building 0.6% ~ 1% ]" alone will make it hard for
> others to change the number later.
> 
> I actually once tried to do a similar thing for other code.  So I have a
> possible reasoning for this, and 10 should probably be changed later.
> 

In this case, the solution seems to be very simple: just drop spin_needbreak()
and leave need_resched() alone.

This way we can guarantee that zap-all will get a fair amount of CPU time
each time it is scheduled, from the host scheduler's point of view.  Of
course this can block other VCPU threads waiting for mmu_lock during that
time slice, but it should be much better than blocking them for some magical
number of zappings.

We also need to remember that spin_needbreak() does not do anything for some
preempt config settings.
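
To make the suggestion concrete, the break condition inside the zapping loop
would shrink to something like this (a rough sketch only; the surrounding
names are just meant to match the series under discussion):

	if (need_resched()) {
		kvm_mmu_commit_zap_page(kvm, &invalid_list);
		spin_unlock(&kvm->mmu_lock);
		cond_resched();
		spin_lock(&kvm->mmu_lock);
		goto restart;
	}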

Takuya


Re: [PATCH v7 10/11] KVM: MMU: collapse TLB flushes when zap all pages

2013-05-30 Thread Takuya Yoshikawa
On Thu, 30 May 2013 03:53:38 +0300
Gleb Natapov  wrote:

> On Wed, May 29, 2013 at 09:19:41PM +0800, Xiao Guangrong wrote:
> > On 05/29/2013 08:39 PM, Marcelo Tosatti wrote:
> > > On Wed, May 29, 2013 at 11:03:19AM +0800, Xiao Guangrong wrote:
> > > the pages since other vcpus may be doing locklessly shadow
> > > page walking
> > >>>
> > >>> Ah, yes, i agree with you.
> > >>>
> > >>> We can introduce a list, say kvm->arch.obsolte_pages, to link all of the
> > >>> zapped-page, the page-shrink will free the page on that list first.
> > >>>
> > >>> Marcelo, if you do not have objection on  patch 1 ~ 8 and 11, could you 
> > >>> please
> > >>> let them merged first, and do add some comments and tlb optimization 
> > >>> later?
> > >>
> > >> Exclude patch 11 please, since it depends on the "collapse" optimization.
> > > 
> > > I'm fine with patch 1 being merged. I think the remaining patches need 
> > > better
> > > understanding or explanation. The problems i see are:
> > > 
> > > 1) The magic number "10" to zap before considering reschedule is
> > > annoying. It would be good to understand why it is needed at all.
> > 
> > ..
> > 
> > > 
> > > But then again, the testcase is measuring kvm_mmu_zap_all performance
> > > alone which we know is not a common operation, so perhaps there is
> > > no need for that minimum-pages-to-zap-before-reschedule.
> > 
> > Well. Although, this is not the common operation, but this operation
> > can be triggered by VCPU - it one VCPU take long time on zap-all-pages,
> > other vcpus is missing IPI-synce, or missing IO. This is easily cause
> > soft lockups if the vcpu is doing memslot-releated things.
> > 
> +1. If it is trigarable by a guest it may slow down the guest, but we
> should not allow for it to slow down a host.
> 

Well, I don't object to the minimum-pages-to-zap-before-reschedule idea
itself, but if you're going to take patch 4, please at least add a warning
in the changelog that the magic number "10" was selected without good enough
reasoning.

"[ It improves kernel building 0.6% ~ 1% ]" alone will make it hard for
others to change the number later.

I actually once tried to do a similar thing for other code.  So I have a
possible reasoning for this, and 10 should probably be changed later.

Takuya


Re: [PATCH v5 0/8] KVM: MMU: fast zap all shadow pages

2013-05-16 Thread Takuya Yoshikawa
On Thu, 16 May 2013 20:17:45 +0800
Xiao Guangrong  wrote:

> Bechmark result:
> I have tested this patchset and the previous version that only zaps the
> pages linked on invalid slot's rmap. The benchmark is written by myself
> which has been attached, it writes large memory when do pci rom read.
> 
> Host: Intel(R) Xeon(R) CPU X5690  @ 3.47GHz + 36G Memory
> Guest: 12 VCPU + 32G Memory
> 
> Current code:   This patchset Previous Version 
> 2405434959 ns   2323016424 ns 2368810003 ns
> 
> The interesting thing is, the previous version is slower than this patch,
> i guess the reason is that the former keeps lots of invalid pages in mmu
> which cause shadow page to be reclaimed due to used-pages > request-pages
> or host memory shrink.

This patch series looks very nice!

Minor issues may still need to be improved, but I really hope to see this
get merged during this cycle.

[for the future]  Do you think that postponing some zapping/freeing of
obsolete (already invalidated) pages to make_mmu_pages_available() time
can improve the situation more?  -- say, for big guests.

If the accounting is kept correct, make_mmu_pages_available() only needs to
free some obsolete pages instead of valid pages.
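
Roughly what I have in mind, as a sketch only (names are borrowed from this
series, and I have not tested anything like this):

	/* Reclaim obsolete (invalid generation) pages first, keep valid ones. */
restart:
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
		if (kvm_mmu_available_pages(kvm) >= KVM_MIN_FREE_MMU_PAGES)
			break;
		if (sp->mmu_valid_gen == kvm->arch.mmu_valid_gen)
			continue;
		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
			goto restart;
	}
	kvm_mmu_commit_zap_page(kvm, &invalid_list);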

Takuya

> 
> Changlog:
> V5:
>   1): rename is_valid_sp to is_obsolete_sp
>   2): use lock-break technique to zap all old pages instead of only pages
>   linked on invalid slot's rmap suggested by Marcelo.
>   3): trace invalid pages and kvm_mmu_invalidate_memslot_pages() 
>   4): rename kvm_mmu_invalid_memslot_pages to kvm_mmu_invalidate_memslot_pages
>   according to Takuya's comments. 
> 
> V4:
>   1): drop unmapping invalid rmap out of mmu-lock and use lock-break technique
>   instead. Thanks to Gleb's comments.
> 
>   2): needn't handle invalid-gen pages specially due to page table always
>   switched by KVM_REQ_MMU_RELOAD. Thanks to Marcelo's comments.
> 
> V3:
>   completely redesign the algorithm, please see below.
> 
> V2:
>   - do not reset n_requested_mmu_pages and n_max_mmu_pages
>   - batch free root shadow pages to reduce vcpu notification and mmu-lock
> contention
>   - remove the first patch that introduce kvm->arch.mmu_cache since we only
> 'memset zero' on hashtable rather than all mmu cache members in this
> version
>   - remove unnecessary kvm_reload_remote_mmus after kvm_mmu_zap_all
> 
> * Issue
> The current kvm_mmu_zap_all is really slow - it is holding mmu-lock to
> walk and zap all shadow pages one by one, also it need to zap all guest
> page's rmap and all shadow page's parent spte list. Particularly, things
> become worse if guest uses more memory or vcpus. It is not good for
> scalability.
> 
> * Idea
> KVM maintains a global mmu invalid generation-number which is stored in
> kvm->arch.mmu_valid_gen and every shadow page stores the current global
> generation-number into sp->mmu_valid_gen when it is created.
> 
> When KVM need zap all shadow pages sptes, it just simply increase the
> global generation-number then reload root shadow pages on all vcpus.
> Vcpu will create a new shadow page table according to current kvm's
> generation-number. It ensures the old pages are not used any more.
> 
> Then the invalid-gen pages (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
> are zapped by using lock-break technique.
> 
> Xiao Guangrong (8):
>   KVM: MMU: drop unnecessary kvm_reload_remote_mmus
>   KVM: MMU: delete shadow page from hash list in
> kvm_mmu_prepare_zap_page
>   KVM: MMU: fast invalidate all pages
>   KVM: x86: use the fast way to invalidate all pages
>   KVM: MMU: make kvm_mmu_zap_all preemptable
>   KVM: MMU: show mmu_valid_gen in shadow page related tracepoints
>   KVM: MMU: add tracepoint for kvm_mmu_invalidate_memslot_pages
>   KVM: MMU: zap pages in batch
> 
>  arch/x86/include/asm/kvm_host.h |2 +
>  arch/x86/kvm/mmu.c  |  124 
> ++-
>  arch/x86/kvm/mmu.h  |2 +
>  arch/x86/kvm/mmutrace.h |   45 +++---
>  arch/x86/kvm/x86.c      |9 +--
>  5 files changed, 163 insertions(+), 19 deletions(-)
> 
> -- 
> 1.7.7.6
> 


-- 
Takuya Yoshikawa 


Re: [PATCH v4 4/6] KVM: MMU: fast invalid all shadow pages

2013-05-02 Thread Takuya Yoshikawa
On Sat, 27 Apr 2013 11:13:20 +0800
Xiao Guangrong  wrote:

> +/*
> + * Fast invalid all shadow pages belong to @slot.
> + *
> + * @slot != NULL means the invalidation is caused the memslot specified
> + * by @slot is being deleted, in this case, we should ensure that rmap
> + * and lpage-info of the @slot can not be used after calling the function.
> + *
> + * @slot == NULL means the invalidation due to other reasons, we need

The comment should explain what the "other reasons" are.
But this API may better be split into two separate functions; it depends
on the "other reasons".

> + * not care rmap and lpage-info since they are still valid after calling
> + * the function.
> + */
> +void kvm_mmu_invalid_memslot_pages(struct kvm *kvm,
> +struct kvm_memory_slot *slot)

You yourself are explaining this as "invalidation" in the comment.
kvm_mmu_invalidate_shadow_pages_memslot() or something...

Can anybody think of a better name?

Takuya


Re: [PATCH v4 3/6] KVM: MMU: introduce kvm_clear_all_lpage_info

2013-05-02 Thread Takuya Yoshikawa
On Sat, 27 Apr 2013 11:13:19 +0800
Xiao Guangrong  wrote:

> This function is used to reset the large page info of all guest pages
> which will be used in later patch
> 
> Signed-off-by: Xiao Guangrong 
> ---
>  arch/x86/kvm/x86.c |   25 +
>  arch/x86/kvm/x86.h |2 ++
>  2 files changed, 27 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 52b4e97..8e4494c 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6951,6 +6951,31 @@ static void memslot_set_lpage_disallowed(struct 
> kvm_memory_slot *slot,
>   }
>  }
>  
> +static void clear_memslot_lpage_info(struct kvm_memory_slot *slot)
> +{
> + int i;
> +
> + for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
> + int lpages;
> + int level = i + 1;
> +
> + lpages = gfn_to_index(slot->base_gfn + slot->npages - 1,
> +   slot->base_gfn, level) + 1;
> +
> + memset(slot->arch.lpage_info[i - 1], 0,
> +sizeof(*slot->arch.lpage_info[i - 1]));
> + memslot_set_lpage_disallowed(slot, slot->npages, i, lpages);

This does something other than clearing.
Any better name?

Takuya


Re: [PATCH v4 2/6] KVM: x86: introduce memslot_set_lpage_disallowed

2013-05-02 Thread Takuya Yoshikawa
On Sat, 27 Apr 2013 11:13:18 +0800
Xiao Guangrong  wrote:

> It is used to set disallowed large page on the specified level, can be
> used in later patch
> 
> Signed-off-by: Xiao Guangrong 
> ---
>  arch/x86/kvm/x86.c |   53 ++-
>  1 files changed, 35 insertions(+), 18 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 91dd9f4..52b4e97 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6917,12 +6917,45 @@ void kvm_arch_free_memslot(struct kvm_memory_slot 
> *free,
>   }
>  }
>  
> +static void memslot_set_lpage_disallowed(struct kvm_memory_slot *slot,
> +  unsigned long npages,
> +  int lpage_size, int lpages)

What this function does is to disable large page support for this slot
as can be seen in the comment below.

Since setting lpage_info to something ("disallowed" ?) is an implementation
detail, we'd better hide such a thing from the function name.

Taking into account that we have "kvm_largepages_enabled()", something like
disable_largepages_memslot() may be a candidate.

But I want to see suggestions from others as well.

Takuya


Re: [PATCH v3 00/15] KVM: MMU: fast zap all shadow pages

2013-04-22 Thread Takuya Yoshikawa
On Mon, 22 Apr 2013 15:39:38 +0300
Gleb Natapov  wrote:

> > > Do not want kvm_set_memory (cases: DELETE/MOVE/CREATES) to be
> > > suspectible to:
> > > 
> > > vcpu 1|   kvm_set_memory
> > > create shadow page
> > >   nuke shadow page
> > > create shadow page
> > >   nuke shadow page
> > > 
> > > Which is guest triggerable behavior with spinlock preemption algorithm.
> > 
> > Not only guest triggerable as in the sense of a malicious guest, 
> > but condition above can be induced by host workload with non-malicious
> > guest system.
> > 
> Is the problem that newly created shadow pages are immediately zapped?
> Shouldn't generation number/kvm_mmu_zap_all_invalid() idea described here
> https://lkml.org/lkml/2013/4/22/111 solve this?

I guess so.  That's what Avi described when he tried to achieve
lockless TLB flushes.  Mixing that idea with Xiao's approach will
achieve reasonably nice performance, I think.

Various improvements should be added later on top of that if needed.

> > Also kvm_set_memory being relatively fast with huge memory guests
> > is nice (which is what Xiaos idea allows).

I agree with this point.  But if so, it should actually be measured on
such guests, even if the algorithm looks promising.

Takuya


Re: [PATCH 5/5] KVM: MMU: fast invalid all mmio sptes

2013-03-15 Thread Takuya Yoshikawa
On Fri, 15 Mar 2013 23:29:53 +0800
Xiao Guangrong  wrote:

> +/*
> + * The caller should protect concurrent access on
> + * kvm->arch.mmio_invalid_gen. Currently, it is used by
> + * kvm_arch_commit_memory_region and protected by kvm->slots_lock.
> + */
> +void kvm_mmu_invalid_mmio_spte(struct kvm *kvm)

kvm_mmu_invalidate_mmio_sptes() may be a better name.

Thanks,
Takuya

> +{
> + /* Ensure update memslot has been completed. */
> + smp_mb();
> +
> +  trace_kvm_mmu_invalid_mmio_spte(kvm);
> +
> + /*
> +  * The very rare case: if the generation-number is round,
> +  * zap all shadow pages.
> +  */
> + if (unlikely(kvm->arch.mmio_invalid_gen++ == MAX_GEN)) {
> + kvm->arch.mmio_invalid_gen = 0;
> + return kvm_mmu_zap_all(kvm);
> + }
> +}
> +


Re: [PATCH 1/5] Revert "KVM: x86: Optimize mmio spte zapping when, creating/moving memslot"

2013-03-15 Thread Takuya Yoshikawa
On Fri, 15 Mar 2013 23:26:59 +0800
Xiao Guangrong  wrote:

> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index d3c4787..61a5bb6 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6991,7 +6991,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>* mmio sptes.
>*/
>   if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
> - kvm_mmu_zap_mmio_sptes(kvm);
> 

???
+ kvm_mmu_zap_all()


Takuya


Re: [PATCH 0/5] KVM: MMU: fast invalid all mmio sptes

2013-03-15 Thread Takuya Yoshikawa
Still reading, but sounds great if this works!

I did not like the idea of mmio-rmap based approach so much, but this
would be really/perfectly scalable.

Thanks,
Takuya

On Fri, 15 Mar 2013 23:26:16 +0800
Xiao Guangrong  wrote:

> The current way is holding hot mmu-lock and walking all shadow pages, this
> is not scale. This patchset tries to introduce a very simple and scale way
> to fast invalid all mmio sptes - it need not walk any shadow pages and hold
> any locks.
> 
> The idea is simple:
> KVM maintains a global mmio invalid generation-number which is stored in
> kvm->arch.mmio_invalid_gen and every mmio spte stores the current global
> generation-number into his available bits when it is created.
> 
> When KVM need zap all mmio sptes, it just simply increase the global
> generation-number. When guests do mmio access, KVM intercepts a MMIO #PF
> then it walks the shadow page table and get the mmio spte. If the
> generation-number on the spte does not equal the global generation-number,
> it will go to the normal #PF handler to update the mmio spte.
> 
> Since 19 bits are used to store generation-number on mmio spte, the
> generation-number can be round after 33554432 times. It is large enough
> for nearly all most cases, but making the code be more strong, we zap all
> shadow pages when the number is round.
> 
> Note: after my patchset that fast zap all shadow pages, kvm_mmu_zap_all is
> not a problem any more. The scalability is the same as zap mmio shadow page
> 
> 


-- 
Takuya Yoshikawa 


Re: [PATCH 0/2] kvm: IOMMU read-only mapping support

2013-01-29 Thread Takuya Yoshikawa
On Wed, 30 Jan 2013 12:06:32 +0800
Xiao Guangrong  wrote:

>  So, i guess we can do the simple fix first.
> 
> >>> By simple fix you mean calling kvm_arch_flush_shadow_all() on READONLY
> >>> flag change?
> >>
> >> Simply disallow READONLY flag changing.
> > Ok, can somebody craft a patch?
> 
> Takuya, will you? ;)

I will.
Thank you for pointing out the problem.

We can revisit the API later to determine what kind of changes
should be allowed.

Takuya


Re: [PATCH 0/2] kvm: IOMMU read-only mapping support

2013-01-28 Thread Takuya Yoshikawa
On Mon, 28 Jan 2013 08:36:56 -0700
Alex Williamson  wrote:

> On Mon, 2013-01-28 at 21:25 +0900, Takuya Yoshikawa wrote:
> > On Mon, 28 Jan 2013 12:59:03 +0200
> > Gleb Natapov  wrote:
> > 
> > > > It sets spte based on the old value that means the readonly flag check
> > > > is missed. We need to call kvm_arch_flush_shadow_all under this case.
> > > Why not just disallow changing memory region KVM_MEM_READONLY flag
> > > without deleting the region?
> > 
> > Sounds good.
> > 
> > If you prefer, I can fold the required change into my patch.
> 
> That would seem to make my patch 1/2 unnecessary.  Thanks,

I've decided not to fold that change into my
  KVM: set_memory_region: Identify the requested change explicitly
since this is a functional change; see the v3 I posted today.

Since QEMU is not the only hypervisor using KVM, we should make the
change with a proper subject informing every user of the new restriction.

If the maintainers prefer to remove Alex's patch 1/2 first and then
rebase my v3 patch, I can do so.

Takuya


Re: [PATCH 0/2] kvm: IOMMU read-only mapping support

2013-01-28 Thread Takuya Yoshikawa
On Mon, 28 Jan 2013 12:59:03 +0200
Gleb Natapov  wrote:

> > It sets spte based on the old value that means the readonly flag check
> > is missed. We need to call kvm_arch_flush_shadow_all under this case.
> Why not just disallow changing memory region KVM_MEM_READONLY flag
> without deleting the region?

Sounds good.

If you prefer, I can fold the required change into my patch.

Takuya


Re: [PATCH 0/2] kvm: IOMMU read-only mapping support

2013-01-24 Thread Takuya Yoshikawa
On Fri, 25 Jan 2013 12:59:12 +0900
Takuya Yoshikawa  wrote:

> > The commit c972f3b1 changed the write-protect behaviour - it does
> > wirte-protection only when dirty flag is set.
> > [ I did not see this commit when we discussed the problem before. ]
> 
> I'll look at the commit later, after the lunch break.

I've looked at it now.

So we need to do:

if (npages && (GET_DIRTY || RDONLY))
write protect
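
Spelled out against kvm_arch_commit_memory_region(), that would be roughly
(a sketch only, with GET_DIRTY/RDONLY above standing for the
KVM_MEM_LOG_DIRTY_PAGES and KVM_MEM_READONLY flag checks):

	if (npages &&
	    (mem->flags & (KVM_MEM_LOG_DIRTY_PAGES | KVM_MEM_READONLY)))
		kvm_mmu_slot_remove_write_access(kvm, mem->slot);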

About sync_page, I'm not sure what is the best fix.

Takuya

> 
> > Further more, i notice that write-protect is not enough, when do sync
> > shadow page:
> > 
> > FNAME(sync_page):
> > 
> > host_writable = sp->spt[i] & SPTE_HOST_WRITEABLE;
> > 
> > set_spte(vcpu, &sp->spt[i], pte_access,
> >  PT_PAGE_TABLE_LEVEL, gfn,
> >  spte_to_pfn(sp->spt[i]), true, false,
> >  host_writable);
> > 
> > It sets spte based on the old value that means the readonly flag check
> > is missed. We need to call kvm_arch_flush_shadow_all under this case.
> 
> So the change needed will be in arch/x86.
>   in arch_commit_* one.
> 
> Right?
> 
> Note: I'm not touching arch_* memory slot APIs now because ARM KVM
> is coming now.  So no problem, the flags will be passed as before.
> 
>   Takuya


-- 
Takuya Yoshikawa 


Re: [PATCH 0/2] kvm: IOMMU read-only mapping support

2013-01-24 Thread Takuya Yoshikawa
On Fri, 25 Jan 2013 11:28:40 +0800
Xiao Guangrong  wrote:

> > I think I can naturally update my patch after this gets merged.
> > 
> 
> Please wait.

The patch I mentioned above won't change anything.  Just cleans up
set_memory_region().  The only possible change which we discussed
before was whether we call iommu_map() on a flags change.

> The commit c972f3b1 changed the write-protect behaviour - it does
> wirte-protection only when dirty flag is set.
> [ I did not see this commit when we discussed the problem before. ]

I'll look at the commit later, after the lunch break.

> Further more, i notice that write-protect is not enough, when do sync
> shadow page:
> 
> FNAME(sync_page):
> 
>   host_writable = sp->spt[i] & SPTE_HOST_WRITEABLE;
> 
>   set_spte(vcpu, &sp->spt[i], pte_access,
>PT_PAGE_TABLE_LEVEL, gfn,
>spte_to_pfn(sp->spt[i]), true, false,
>host_writable);
> 
> It sets spte based on the old value that means the readonly flag check
> is missed. We need to call kvm_arch_flush_shadow_all under this case.

So the change needed will be in arch/x86.
in arch_commit_* one.

Right?

Note: I'm not touching arch_* memory slot APIs now because ARM KVM
is coming now.  So no problem, the flags will be passed as before.

Takuya


Re: [PATCH 0/2] kvm: IOMMU read-only mapping support

2013-01-24 Thread Takuya Yoshikawa
On Thu, 24 Jan 2013 15:03:57 -0700
Alex Williamson  wrote:

> A couple patches to make KVM IOMMU support honor read-only mappings.
> This causes an un-map, re-map when the read-only flag changes and
> makes use of it when setting IOMMU attributes.  Thanks,

Looks good to me.

I think I can naturally update my patch after this gets merged.

Thanks,
Takuya


[PATCH 7/7] KVM: MMU: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long time

2013-01-08 Thread Takuya Yoshikawa
If the userspace starts dirty logging for a large slot, say 64GB of
memory, kvm_mmu_slot_remove_write_access() needs to hold mmu_lock for
a long time such as tens of milliseconds.  This patch controls the lock
hold time by asking the scheduler if we need to reschedule for others.

One penalty for this is that we need to flush TLBs before releasing
mmu_lock.  But since holding mmu_lock for a long time affects not only
the guest (its vCPU threads, in other words) but also the host as a
whole, we should pay that price.

In practice, the cost will not be so high because we can protect a fair
amount of memory before being rescheduled: on my test environment,
cond_resched_lock() was called only once for protecting 12GB of memory
even without THP.  We can also revisit Avi's "unlocked TLB flush" work
later for completely suppressing extra TLB flushes if needed.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c |5 +
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b7a1235..a32e8cf 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4212,6 +4212,11 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, 
int slot)
for (index = 0; index <= last_index; ++index, ++rmapp) {
if (*rmapp)
__rmap_write_protect(kvm, rmapp, false);
+
+   if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+   kvm_flush_remote_tlbs(kvm);
+   cond_resched_lock(&kvm->mmu_lock);
+   }
}
}
 
-- 
1.7.5.4



[PATCH 6/7] KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself

2013-01-08 Thread Takuya Yoshikawa
Better to place mmu_lock handling and TLB flushing code together since
this is a self-contained function.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c |3 +++
 arch/x86/kvm/x86.c |5 +
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index fc7d84a..b7a1235 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4199,6 +4199,8 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, 
int slot)
memslot = id_to_memslot(kvm->memslots, slot);
last_gfn = memslot->base_gfn + memslot->npages - 1;
 
+   spin_lock(&kvm->mmu_lock);
+
for (i = PT_PAGE_TABLE_LEVEL;
 i < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++i) {
unsigned long *rmapp;
@@ -4214,6 +4216,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, 
int slot)
}
 
kvm_flush_remote_tlbs(kvm);
+   spin_unlock(&kvm->mmu_lock);
 }
 
 void kvm_mmu_zap_all(struct kvm *kvm)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 080bbdc..5483228 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6899,11 +6899,8 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 * Existing largepage mappings are destroyed here and new ones will
 * not be created until the end of the logging.
 */
-   if (npages && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES)) {
-   spin_lock(&kvm->mmu_lock);
+   if (npages && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
kvm_mmu_slot_remove_write_access(kvm, mem->slot);
-   spin_unlock(&kvm->mmu_lock);
-   }
/*
 * If memory slot is created, or moved, we need to clear all
 * mmio sptes.
-- 
1.7.5.4



[PATCH 5/7] KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself

2013-01-08 Thread Takuya Yoshikawa
No reason to make callers take mmu_lock since we do not need to protect
kvm_mmu_change_mmu_pages() and kvm_mmu_slot_remove_write_access()
together by mmu_lock in kvm_arch_commit_memory_region(): the former
calls kvm_mmu_commit_zap_page() and flushes TLBs by itself.

Note: we do not need to protect kvm->arch.n_requested_mmu_pages by
mmu_lock as can be seen from the fact that it is read locklessly.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c |4 
 arch/x86/kvm/x86.c |9 -
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index bb964b3..fc7d84a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2143,6 +2143,8 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned 
int goal_nr_mmu_pages)
 * change the value
 */
 
+   spin_lock(&kvm->mmu_lock);
+
if (kvm->arch.n_used_mmu_pages > goal_nr_mmu_pages) {
while (kvm->arch.n_used_mmu_pages > goal_nr_mmu_pages &&
!list_empty(&kvm->arch.active_mmu_pages)) {
@@ -2157,6 +2159,8 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned 
int goal_nr_mmu_pages)
}
 
kvm->arch.n_max_mmu_pages = goal_nr_mmu_pages;
+
+   spin_unlock(&kvm->mmu_lock);
 }
 
 int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index add5e48..080bbdc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3270,12 +3270,10 @@ static int kvm_vm_ioctl_set_nr_mmu_pages(struct kvm 
*kvm,
return -EINVAL;
 
mutex_lock(&kvm->slots_lock);
-   spin_lock(&kvm->mmu_lock);
 
kvm_mmu_change_mmu_pages(kvm, kvm_nr_mmu_pages);
kvm->arch.n_requested_mmu_pages = kvm_nr_mmu_pages;
 
-   spin_unlock(&kvm->mmu_lock);
mutex_unlock(&kvm->slots_lock);
return 0;
 }
@@ -6894,7 +6892,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
if (!kvm->arch.n_requested_mmu_pages)
nr_mmu_pages = kvm_mmu_calculate_mmu_pages(kvm);
 
-   spin_lock(&kvm->mmu_lock);
if (nr_mmu_pages)
kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
/*
@@ -6902,9 +6899,11 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 * Existing largepage mappings are destroyed here and new ones will
 * not be created until the end of the logging.
 */
-   if (npages && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
+   if (npages && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES)) {
+   spin_lock(&kvm->mmu_lock);
kvm_mmu_slot_remove_write_access(kvm, mem->slot);
-   spin_unlock(&kvm->mmu_lock);
+   spin_unlock(&kvm->mmu_lock);
+   }
/*
 * If memory slot is created, or moved, we need to clear all
 * mmio sptes.
-- 
1.7.5.4



[PATCH 4/7] KVM: Remove unused slot_bitmap from kvm_mmu_page

2013-01-08 Thread Takuya Yoshikawa
Not needed any more.

Signed-off-by: Takuya Yoshikawa 
---
 Documentation/virtual/kvm/mmu.txt |7 ---
 arch/x86/include/asm/kvm_host.h   |5 -
 arch/x86/kvm/mmu.c|   10 --
 3 files changed, 0 insertions(+), 22 deletions(-)

diff --git a/Documentation/virtual/kvm/mmu.txt 
b/Documentation/virtual/kvm/mmu.txt
index fa5f1db..43fcb76 100644
--- a/Documentation/virtual/kvm/mmu.txt
+++ b/Documentation/virtual/kvm/mmu.txt
@@ -187,13 +187,6 @@ Shadow pages contain the following information:
 perform a reverse map from a pte to a gfn. When role.direct is set, any
 element of this array can be calculated from the gfn field when used, in
 this case, the array of gfns is not allocated. See role.direct and gfn.
-  slot_bitmap:
-A bitmap containing one bit per memory slot.  If the page contains a pte
-mapping a page from memory slot n, then bit n of slot_bitmap will be set
-(if a page is aliased among several slots, then it is not guaranteed that
-all slots will be marked).
-Used during dirty logging to avoid scanning a shadow page if none if its
-pages need tracking.
   root_count:
 A counter keeping track of how many hardware registers (guest cr3 or
 pdptrs) are now pointing at the page.  While this counter is nonzero, the
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c431b33..f75e1fe 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -219,11 +219,6 @@ struct kvm_mmu_page {
u64 *spt;
/* hold the gfn of each spte inside spt */
gfn_t *gfns;
-   /*
-* One bit set per slot which has memory
-* in this shadow page.
-*/
-   DECLARE_BITMAP(slot_bitmap, KVM_MEM_SLOTS_NUM);
bool unsync;
int root_count;  /* Currently serving as active root */
unsigned int unsync_children;
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b4d4fd1..bb964b3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1522,7 +1522,6 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct 
kvm_vcpu *vcpu,
sp->gfns = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_cache);
set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
-   bitmap_zero(sp->slot_bitmap, KVM_MEM_SLOTS_NUM);
sp->parent_ptes = 0;
mmu_page_add_parent_pte(vcpu, sp, parent_pte);
kvm_mod_used_mmu_pages(vcpu->kvm, +1);
@@ -2183,14 +2182,6 @@ int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_unprotect_page);
 
-static void page_header_update_slot(struct kvm *kvm, void *pte, gfn_t gfn)
-{
-   int slot = memslot_id(kvm, gfn);
-   struct kvm_mmu_page *sp = page_header(__pa(pte));
-
-   __set_bit(slot, sp->slot_bitmap);
-}
-
 /*
  * The function is based on mtrr_type_lookup() in
  * arch/x86/kernel/cpu/mtrr/generic.c
@@ -2497,7 +2488,6 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 
*sptep,
++vcpu->kvm->stat.lpages;
 
if (is_shadow_present_pte(*sptep)) {
-   page_header_update_slot(vcpu->kvm, sptep, gfn);
if (!was_rmapped) {
rmap_count = rmap_add(vcpu, sptep, gfn);
if (rmap_count > RMAP_RECYCLE_THRESHOLD)
-- 
1.7.5.4



[PATCH 3/7] KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based

2013-01-08 Thread Takuya Yoshikawa
This makes it possible to release mmu_lock and reschedule conditionally
in a later patch.  Although this may increase the time needed to protect
the whole slot when we start dirty logging, the kernel should not allow
the userspace to trigger something that will hold a spinlock for such a
long time as tens of milliseconds: actually there is no limit since it
is roughly proportional to the number of guest pages.

Another point to note is that this patch removes the only user of
slot_bitmap which will cause some problems when we increase the number
of slots further.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c |   28 +++-
 1 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index bee3509..b4d4fd1 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4198,25 +4198,27 @@ int kvm_mmu_setup(struct kvm_vcpu *vcpu)
 
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
 {
-   struct kvm_mmu_page *sp;
-   bool flush = false;
+   struct kvm_memory_slot *memslot;
+   gfn_t last_gfn;
+   int i;
 
-   list_for_each_entry(sp, &kvm->arch.active_mmu_pages, link) {
-   int i;
-   u64 *pt;
+   memslot = id_to_memslot(kvm->memslots, slot);
+   last_gfn = memslot->base_gfn + memslot->npages - 1;
 
-   if (!test_bit(slot, sp->slot_bitmap))
-   continue;
+   for (i = PT_PAGE_TABLE_LEVEL;
+i < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++i) {
+   unsigned long *rmapp;
+   unsigned long last_index, index;
 
-   pt = sp->spt;
-   for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
-   if (!is_shadow_present_pte(pt[i]) ||
- !is_last_spte(pt[i], sp->role.level))
-   continue;
+   rmapp = memslot->arch.rmap[i - PT_PAGE_TABLE_LEVEL];
+   last_index = gfn_to_index(last_gfn, memslot->base_gfn, i);
 
-   spte_write_protect(kvm, &pt[i], &flush, false);
+   for (index = 0; index <= last_index; ++index, ++rmapp) {
+   if (*rmapp)
+   __rmap_write_protect(kvm, rmapp, false);
}
}
+
kvm_flush_remote_tlbs(kvm);
 }
 
-- 
1.7.5.4



[PATCH 2/7] KVM: MMU: Remove unused parameter level from __rmap_write_protect()

2013-01-08 Thread Takuya Yoshikawa
No longer need to care about the mapping level in this function.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/mmu.c |6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 01d7c2a..bee3509 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1142,7 +1142,7 @@ spte_write_protect(struct kvm *kvm, u64 *sptep, bool 
*flush, bool pt_protect)
 }
 
 static bool __rmap_write_protect(struct kvm *kvm, unsigned long *rmapp,
-int level, bool pt_protect)
+bool pt_protect)
 {
u64 *sptep;
struct rmap_iterator iter;
@@ -1180,7 +1180,7 @@ void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
while (mask) {
rmapp = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
  PT_PAGE_TABLE_LEVEL, slot);
-   __rmap_write_protect(kvm, rmapp, PT_PAGE_TABLE_LEVEL, false);
+   __rmap_write_protect(kvm, rmapp, false);
 
/* clear the first set bit */
mask &= mask - 1;
@@ -1199,7 +1199,7 @@ static bool rmap_write_protect(struct kvm *kvm, u64 gfn)
for (i = PT_PAGE_TABLE_LEVEL;
 i < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++i) {
rmapp = __gfn_to_rmap(gfn, i, slot);
-   write_protected |= __rmap_write_protect(kvm, rmapp, i, true);
+   write_protected |= __rmap_write_protect(kvm, rmapp, true);
}
 
return write_protected;
-- 
1.7.5.4



[PATCH 1/7] KVM: Write protect the updated slot only when dirty logging is enabled

2013-01-08 Thread Takuya Yoshikawa
Calling kvm_mmu_slot_remove_write_access() for a deleted slot does
nothing but search for non-existent mmu pages which have mappings to
that deleted memory; this is safe but a waste of time.

Since we want to make the function rmap based in a later patch, in a
manner which makes it unsafe to be called for a deleted slot, we make
the caller check that the slot is non-zero and is being dirty logged.

Signed-off-by: Takuya Yoshikawa 
---
 arch/x86/kvm/x86.c  |8 +++-
 virt/kvm/kvm_main.c |1 -
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1c9c834..add5e48 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6897,7 +6897,13 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
spin_lock(&kvm->mmu_lock);
if (nr_mmu_pages)
kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
-   kvm_mmu_slot_remove_write_access(kvm, mem->slot);
+   /*
+* Write protect all pages for dirty logging.
+* Existing largepage mappings are destroyed here and new ones will
+* not be created until the end of the logging.
+*/
+   if (npages && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
+   kvm_mmu_slot_remove_write_access(kvm, mem->slot);
spin_unlock(&kvm->mmu_lock);
/*
 * If memory slot is created, or moved, we need to clear all
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e45c20c..f689a6d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -817,7 +817,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
if (kvm_create_dirty_bitmap(&new) < 0)
goto out_free;
-   /* destroy any largepage mappings for dirty tracking */
}
 
if (!npages || base_gfn != old.base_gfn) {
-- 
1.7.5.4



[PATCH 0/7 -v2] KVM: Alleviate mmu_lock hold time when we start dirty logging

2013-01-08 Thread Takuya Yoshikawa
Changelog v1->v2:
  The condition in patch 1 was changed like this:
npages && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES)

This patch set makes kvm_mmu_slot_remove_write_access() rmap based and
adds conditional rescheduling to it.

The motivation for this change is of course to reduce the mmu_lock hold
time when we start dirty logging for a large memory slot.  You may not
see the problem if you just give 8GB or less of memory to the guest
with THP enabled on the host -- this is for the worst case.

Takuya Yoshikawa (7):
  KVM: Write protect the updated slot only when dirty logging is enabled
  KVM: MMU: Remove unused parameter level from __rmap_write_protect()
  KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based
  KVM: Remove unused slot_bitmap from kvm_mmu_page
  KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself
  KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself
  KVM: MMU: Conditionally reschedule when kvm_mmu_slot_remove_write_access() 
takes a long time

 Documentation/virtual/kvm/mmu.txt |7 
 arch/x86/include/asm/kvm_host.h   |5 ---
 arch/x86/kvm/mmu.c|   56 +++-
 arch/x86/kvm/x86.c|   12 ---
 virt/kvm/kvm_main.c   |1 -
 5 files changed, 37 insertions(+), 44 deletions(-)

-- 
1.7.5.4



Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging

2013-01-08 Thread Takuya Yoshikawa
On Mon, 7 Jan 2013 18:36:42 -0200
Marcelo Tosatti  wrote:

> Looks good, except patch 1 - 
> 
> a) don't understand why it is necessary and 

What's really necessary is to make sure that we don't call the function
for a deleted slot.  My explanation was wrong.

> b) not confident its safe - isnt clearing necessary for KVM_SET_MEMORY
> instances other than
> 
> !(old.flags & LOG_DIRTY) && (new.flags & LOG_DIRTY)

I think flushing shadows should be enough for other cases, e.g. moving a slot.

But I've changed the condition (see v2) to make it easier to understand:
  npages && LOG_DIRTY

Since remove_write_access() is for dirty logging, this condition should be safe.

Thanks,
Takuya

