This is necessary to eliminate an extra memory slot search later.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 29 ++---
arch/x86/kvm/paging_tmpl.h | 6 +++---
2 files changed, 17 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
As a bonus, an extra memory slot search can be eliminated when
is_self_change_mapping is true.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/paging_tmpl.h | 15 +++
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm
This will be passed to a function later.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 8
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b8482c0..2262728 100644
With a bit of cleanup effort, the patch set reduces this overhead.
Takuya
Takuya Yoshikawa (5):
KVM: x86: MMU: Make force_pt_level bool
KVM: x86: MMU: Simplify force_pt_level calculation code in FNAME(page_fault)()
KVM: x86: MMU: Merge mapping_level_dirty_bitmap() into mapping_level()
KVM: x86: MMU
Now that it has only one caller, and its name is not so helpful for
readers, remove it. Instead, the new memslot_valid_for_gpte() function
makes it possible to share the common code.
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya...@lab.ntt.co.jp>
---
arch/x86/kvm/mmu.
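The shared check this patch introduces can be sketched as a small predicate. The struct layout and flag value below are simplified stand-ins for illustration, not the real kvm_memory_slot:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define KVM_MEMSLOT_INVALID (1u << 0)   /* placeholder bit, not the real value */

struct memslot {
    unsigned int flags;
    unsigned long *dirty_bitmap;        /* non-NULL when dirty logging is on */
};

/* Sketch of the common check: a slot can back a guest PTE only if it
 * exists, is not being deleted, and (when the caller asks) is not
 * currently dirty-logged. */
static bool memslot_valid_for_gpte(struct memslot *slot, bool no_dirty_log)
{
    if (!slot || (slot->flags & KVM_MEMSLOT_INVALID))
        return false;
    if (no_dirty_log && slot->dirty_bitmap)
        return false;
    return true;
}
```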
Calling kvm_vcpu_gfn_to_memslot() twice in mapping_level() should be
avoided since getting a slot by binary search may not be negligible,
especially for virtual machines with many memory slots.
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya...@lab.ntt.co.jp>
---
arch/x86/kvm/mmu.
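The fix described here amounts to resolving the slot once in the fault path and handing it down. A minimal userspace model of the idea (hypothetical names; a counted linear scan stands in for the real binary search):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative model only, not the real KVM types. */
struct memslot { unsigned long long base_gfn, npages; bool dirty_log; };

static int searches;   /* counts how many times the "slot search" runs */

static struct memslot *gfn_to_memslot(struct memslot *slots, int n,
                                      unsigned long long gfn)
{
    searches++;                      /* stands in for the binary search */
    for (int i = 0; i < n; i++)
        if (gfn >= slots[i].base_gfn &&
            gfn < slots[i].base_gfn + slots[i].npages)
            return &slots[i];
    return NULL;
}

/* After the change: mapping_level() takes the already-found slot and
 * never searches on its own. */
static int mapping_level(struct memslot *slot)
{
    if (!slot || slot->dirty_log)
        return 1;                    /* force 4K pages when dirty logging */
    return 2;                        /* large page otherwise */
}

static int page_fault(struct memslot *slots, int n, unsigned long long gfn)
{
    struct memslot *slot = gfn_to_memslot(slots, n, gfn);  /* one search */
    return mapping_level(slot);
}
```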
Now that it has only one caller, and its name is not so helpful for
readers, just remove it.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 21 +
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 890cd69
In page fault handlers, both mapping_level_dirty_bitmap() and mapping_level()
do a memory slot search, binary search, through kvm_vcpu_gfn_to_memslot(), which
may not be negligible especially for virtual machines with many memory slots.
With a bit of cleanup effort, the patch set reduces this overhead.
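The slot search the cover letter refers to can be sketched in plain C. This is an illustrative model with simplified types, not the kernel's search_memslots(); it only mirrors the idea of a binary search over slots kept sorted by descending base_gfn:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long long gfn_t;

/* Simplified stand-in for struct kvm_memory_slot. */
struct memslot {
    gfn_t base_gfn;
    unsigned long npages;
};

/* Slots sorted by base_gfn in descending order. Returns the slot
 * containing gfn, or NULL. */
static struct memslot *gfn_to_memslot(struct memslot *slots, int nslots,
                                      gfn_t gfn)
{
    int lo = 0, hi = nslots;           /* half-open interval [lo, hi) */

    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;

        if (gfn >= slots[mid].base_gfn)
            hi = mid;                  /* candidate is at mid or left of it */
        else
            lo = mid + 1;
    }
    if (lo < nslots && gfn >= slots[lo].base_gfn &&
        gfn < slots[lo].base_gfn + slots[lo].npages)
        return &slots[lo];
    return NULL;
}
```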
On 2015/05/20 2:25, Paolo Bonzini wrote:
> Prepare for multiple address spaces this way, since a VCPU is not available
> where unaccount_shadowed is called. We will get to the right kvm_memslots
> 1truct through the role field in struct kvm_mmu_page.
typo: s/1truct/struct/
Reviewed-by: Takuya Yoshikawa
…understand lines is really nice.
>
> Signed-off-by: Paolo Bonzini
Reviewed-by: Takuya Yoshikawa
Takuya
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
kvm_for_each_memslot(memslot, slots)
+ kvm_free_memslot(kvm, memslot, NULL);
does nothing in effect, but looks better to be here since this
corresponds to the kvm_alloc_memslots() part and may be safer for
future changes.
Other changes look like trivial transitions to the new
kvm_alloc/free_memslots.
Reviewed-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
kvm_arch_free_vm(kvm);
return ERR_PTR(r);
}
Takuya
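The loop quoted in the review above walks every used slot and frees its metadata. A toy model of the pattern (the macro and helpers only mimic the kernel's kvm_for_each_memslot and kvm_free_memslot; types are simplified stand-ins):

```c
#include <assert.h>
#include <stddef.h>

#define KVM_MEM_SLOTS_NUM 4

struct memslot { int used; int freed; };
struct memslots { struct memslot slots[KVM_MEM_SLOTS_NUM]; };

/* Iterate over slots until the first unused one, like the kernel macro
 * walking the populated part of the slot array. */
#define kvm_for_each_memslot(m, s) \
    for ((m) = (s)->slots; \
         (m) < (s)->slots + KVM_MEM_SLOTS_NUM && (m)->used; (m)++)

static void kvm_free_memslot(struct memslot *slot)
{
    slot->freed = 1;    /* the real helper frees rmap/lpage_info etc. */
}

static void kvm_free_memslots(struct memslots *slots)
{
    struct memslot *memslot;

    kvm_for_each_memslot(memslot, slots)
        kvm_free_memslot(memslot);
}
```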
.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
Reviewed-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
Takuya
. framebuffers can
stay calm for a long time, it is worth eliminating this overhead.
Signed-off-by: Takuya Yoshikawa
---
virt/kvm/kvm_main.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a109370..420d8cf 100644
--- a/virt/kvm
On 2014/11/17 18:23, Paolo Bonzini wrote:
On 17/11/2014 02:56, Takuya Yoshikawa wrote:
here are a few small patches that simplify __kvm_set_memory_region
and associated code. Can you please review them?
Ah, already queued. Sorry for being late to respond.
While they are not in kvm
On 2014/11/14 20:11, Paolo Bonzini wrote:
> Hi Igor and Takuya,
>
> here are a few small patches that simplify __kvm_set_memory_region
> and associated code. Can you please review them?
Ah, already queued. Sorry for being late to respond.
Takuya
>
> Thanks,
>
> Paolo
>
> Paolo
On 2014/11/14 20:12, Paolo Bonzini wrote:
> The two kmemdup invocations can be unified. I find that the new
> placement of the comment makes it easier to see what happens.
A lot easier to follow the logic.
Reviewed-by: Takuya Yoshikawa
>
> Signed-off-by: Paolo Bonzini
> -
On Tue, 30 Jul 2013 21:02:08 +0800
Xiao Guangrong wrote:
> @@ -2342,6 +2358,13 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
>*/
> kvm_flush_remote_tlbs(kvm);
>
> + if (kvm->arch.rcu_free_shadow_page) {
> + sp = list_first_entry(invalid_list, struct
> KVM: MMU: flush tlb if the spte can be locklessly modified
> KVM: MMU: redesign the algorithm of pte_list
> KVM: MMU: introduce nulls desc
> KVM: MMU: introduce pte-list lockless walker
> KVM: MMU: allow locklessly access shadow page table out of vcpu thread
> KVM: MMU: locklessly write-protect
On Thu, 11 Jul 2013 10:41:53 +0300
Gleb Natapov wrote:
> On Wed, Jul 10, 2013 at 10:49:56PM +0900, Takuya Yoshikawa wrote:
> > On Wed, 10 Jul 2013 11:24:39 +0300
> > "Michael S. Tsirkin" wrote:
> >
> > > On x86, kvm_arch_create_memslot assumes that rmap/lpage_info
page_info);
> +
> for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
> unsigned long ugfn;
> int lpages;
> --
> MST
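The quoted allocation loop sizes per-level metadata for each supported page size. A sketch of the arithmetic it depends on, counting how many large-page frames a slot spans at each level (x86-like 512-entry levels assumed; simplified from kvm_arch_create_memslot()):

```c
#include <assert.h>

#define KVM_NR_PAGE_SIZES 3   /* 4K, 2M, 1G */

/* Number of 4K pages per frame at a given level: 1, 512, 512*512. */
static unsigned long level_size(int level)
{
    unsigned long s = 1;
    for (int i = 0; i < level; i++)
        s *= 512;
    return s;
}

/* How many large-page-aligned frames of this level the slot touches;
 * this is the length of the rmap/lpage_info array to allocate. */
static unsigned long lpages_for(unsigned long base_gfn, unsigned long npages,
                                int level)
{
    unsigned long sz = level_size(level);
    unsigned long first = base_gfn / sz;
    unsigned long last  = (base_gfn + npages - 1) / sz;

    return last - first + 1;
}
```

Note how an unaligned base_gfn can add one extra frame at the large-page levels.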
On Thu, 13 Jun 2013 21:08:21 -0300
Marcelo Tosatti wrote:
> On Fri, Jun 07, 2013 at 04:51:22PM +0800, Xiao Guangrong wrote:
> - Where is the generation number increased?
Looks like when a new slot is installed in update_memslots() because
it's based on slots->generation. This is not
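The mechanism under discussion can be modelled in a few lines: installing slots bumps a generation counter, and anything cached under an older generation (e.g. an mmio spte tagged with it) is treated as stale. This is a toy model, not the kernel code:

```c
#include <assert.h>

/* Toy model of the memslots generation counter. */
struct memslots { unsigned long long generation; };

static void update_memslots(struct memslots *slots)
{
    /* the real update_memslots() also installs/sorts the slot itself */
    slots->generation++;
}

/* A cached translation remembers the generation it was created under. */
struct cached_entry { unsigned long long generation; };

static int cache_valid(struct memslots *slots, struct cached_entry *e)
{
    return e->generation == slots->generation;
}
```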
…memslot invalid.
>*/
> if (unlikely(kvm_current_mmio_generation(kvm) >= (MMIO_MAX_GEN - 1)))
> - kvm_mmu_zap_mmio_sptes(kvm);
> + kvm_mmu_invalidate_zap_all_pages(kvm);
> }
>
> static int mmu_shrink(struct shrinker *shrink, struct shrink_control
On Mon, 10 Jun 2013 10:57:50 +0300
Gleb Natapov wrote:
> On Fri, Jun 07, 2013 at 04:51:25PM +0800, Xiao Guangrong wrote:
> > +
> > +/*
> > + * Return values of handle_mmio_page_fault_common:
> > + * RET_MMIO_PF_EMULATE: it is a real mmio page fault, emulate the
> > instruction
> > + *
On Fri, 31 May 2013 01:24:43 +0900
Takuya Yoshikawa wrote:
> On Thu, 30 May 2013 03:53:38 +0300
> Gleb Natapov wrote:
>
> > On Wed, May 29, 2013 at 09:19:41PM +0800, Xiao Guangrong wrote:
> > > On 05/29/2013 08:39 PM, Marcelo Tosatti wrote:
> > > > On W
On Thu, 30 May 2013 03:53:38 +0300
Gleb Natapov wrote:
> On Wed, May 29, 2013 at 09:19:41PM +0800, Xiao Guangrong wrote:
> > On 05/29/2013 08:39 PM, Marcelo Tosatti wrote:
> > > On Wed, May 29, 2013 at 11:03:19AM +0800, Xiao Guangrong wrote:
> > > the pages since other vcpus may be doing
-
> arch/x86/kvm/mmu.h |2 +
> arch/x86/kvm/mmutrace.h | 45 +++---
> arch/x86/kvm/x86.c |9 +--
> 5 files changed, 163 insertions(+), 19 deletions(-)
On Sat, 27 Apr 2013 11:13:20 +0800
Xiao Guangrong wrote:
> +/*
> + * Fast invalid all shadow pages belong to @slot.
> + *
> + * @slot != NULL means the invalidation is caused the memslot specified
> + * by @slot is being deleted, in this case, we should ensure that rmap
> + * and lpage-info of
On Sat, 27 Apr 2013 11:13:19 +0800
Xiao Guangrong wrote:
> This function is used to reset the large page info of all guest pages
> which will be used in later patch
>
> Signed-off-by: Xiao Guangrong
> ---
> arch/x86/kvm/x86.c | 25 +
> arch/x86/kvm/x86.h |2 ++
>
On Sat, 27 Apr 2013 11:13:18 +0800
Xiao Guangrong wrote:
> It is used to set disallowed large page on the specified level, can be
> used in later patch
>
> Signed-off-by: Xiao Guangrong
> ---
> arch/x86/kvm/x86.c | 53 ++-
> 1 files changed,
On Mon, 22 Apr 2013 15:39:38 +0300
Gleb Natapov wrote:
> > > Do not want kvm_set_memory (cases: DELETE/MOVE/CREATES) to be
> > > suspectible to:
> > >
> > > vcpu 1| kvm_set_memory
> > > create shadow page
> > > nuke shadow page
On Fri, 15 Mar 2013 23:29:53 +0800
Xiao Guangrong wrote:
> +/*
> + * The caller should protect concurrent access on
> + * kvm->arch.mmio_invalid_gen. Currently, it is used by
> + * kvm_arch_commit_memory_region and protected by kvm->slots_lock.
> + */
> +void kvm_mmu_invalid_mmio_spte(struct kvm
On Fri, 15 Mar 2013 23:26:59 +0800
Xiao Guangrong wrote:
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index d3c4787..61a5bb6 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6991,7 +6991,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>* mmio sptes.
; not a problem any more. The scalability is the same as zap mmio shadow page
>
On Wed, 30 Jan 2013 12:06:32 +0800
Xiao Guangrong wrote:
> So, i guess we can do the simple fix first.
>
> >>> By simple fix you mean calling kvm_arch_flush_shadow_all() on READONLY
> >>> flag change?
> >>
> >> Simply disallow READONLY flag changing.
> > Ok, can somebody craft a patch?
On Mon, 28 Jan 2013 08:36:56 -0700
Alex Williamson wrote:
> On Mon, 2013-01-28 at 21:25 +0900, Takuya Yoshikawa wrote:
> > On Mon, 28 Jan 2013 12:59:03 +0200
> > Gleb Natapov wrote:
> >
> > > > It sets spte based on the old value that means the readonly flag
On Mon, 28 Jan 2013 12:59:03 +0200
Gleb Natapov wrote:
> > It sets spte based on the old value that means the readonly flag check
> > is missed. We need to call kvm_arch_flush_shadow_all under this case.
> Why not just disallow changing memory region KVM_MEM_READONLY flag
> without deleting the
On Fri, 25 Jan 2013 12:59:12 +0900
Takuya Yoshikawa wrote:
> > The commit c972f3b1 changed the write-protect behaviour - it does
> > write-protection only when dirty flag is set.
> > [ I did not see this commit when we discussed the problem before. ]
>
> I'll look at
On Fri, 25 Jan 2013 11:28:40 +0800
Xiao Guangrong wrote:
> > I think I can naturally update my patch after this gets merged.
> >
>
> Please wait.
The patch I mentioned above won't change anything. Just cleans up
set_memory_region(). The only possible change which we discussed
before was
On Thu, 24 Jan 2013 15:03:57 -0700
Alex Williamson wrote:
> A couple patches to make KVM IOMMU support honor read-only mappings.
> This causes an un-map, re-map when the read-only flag changes and
> makes use of it when setting IOMMU attributes. Thanks,
Looks good to me.
I think I can
of memory before being rescheduled: on my test environment,
cond_resched_lock() was called only once for protecting 12GB of memory
even without THP. We can also revisit Avi's "unlocked TLB flush" work
later for completely suppressing extra TLB flushes if needed.
Signed-off-by: Takuya
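The measurement above (one reschedule while protecting 12GB) follows from the chunking pattern: write-protect pages under mmu_lock but offer to drop the lock periodically via cond_resched_lock(). A userspace sketch with a stubbed scheduler check; the chunk threshold is illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

static int resched_count;

/* Stub: pretend the scheduler asks for the CPU once per ~3M pages. */
static bool need_resched_stub(unsigned long done)
{
    return done && (done % (3ul << 20)) == 0;
}

static void write_protect_slot(unsigned long npages)
{
    /* spin_lock(&mmu_lock); */
    for (unsigned long i = 0; i < npages; i++) {
        /* ... write-protect page i's sptes here ... */
        if (need_resched_stub(i + 1)) {
            /* cond_resched_lock(&mmu_lock): unlock, yield, relock,
             * keeping each lock hold bounded regardless of slot size */
            resched_count++;
        }
    }
    /* spin_unlock(&mmu_lock); */
}
```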
Better to place mmu_lock handling and TLB flushing code together since
this is a self-contained function.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |3 +++
arch/x86/kvm/x86.c |5 +
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
kvm->arch.n_requested_mmu_pages by
mmu_lock as can be seen from the fact that it is read locklessly.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |4
arch/x86/kvm/x86.c |9 -
2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/a
Not needed any more.
Signed-off-by: Takuya Yoshikawa
---
Documentation/virtual/kvm/mmu.txt |7 ---
arch/x86/include/asm/kvm_host.h |5 -
arch/x86/kvm/mmu.c| 10 --
3 files changed, 0 insertions(+), 22 deletions(-)
diff --git a/Documentation/virtual
as tens of milliseconds: actually there is no limit since it
is roughly proportional to the number of guest pages.
Another point to note is that this patch removes the only user of
slot_bitmap which will cause some problems when we increase the number
of slots further.
Signed-off-by: Takuya
No longer need to care about the mapping level in this function.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 01d7c2a..bee3509 100644
--- a/arch/x86/kvm/mmu.c
to be called for a deleted slot, we make
the caller check whether the slot is non-zero and being dirty logged.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/x86.c |8 +++-
virt/kvm/kvm_main.c |1 -
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm
…reduce the mmu_lock hold
time when we start dirty logging for a large memory slot. You may not
see the problem if you just give 8GB or less of the memory to the guest
with THP enabled on the host -- this is for the worst case.
Takuya Yoshikawa (7):
KVM: Write protect the updated slot only when dirty logging is enabled
On Mon, 7 Jan 2013 18:36:42 -0200
Marcelo Tosatti wrote:
> Looks good, except patch 1 -
>
> a) don't understand why it is necessary and
What's really necessary is to make sure that we don't call the function
for a deleted slot. My explanation was wrong.
> b) not confident its safe - isnt