On 2014/11/17 18:23, Paolo Bonzini wrote:
> On 17/11/2014 02:56, Takuya Yoshikawa wrote:
>>> here are a few small patches that simplify __kvm_set_memory_region
>>> and associated code. Can you please review them?
>> Ah, already queued. Sorry for being late to respond.
While they are not in kvm
On 2014/11/14 20:11, Paolo Bonzini wrote:
> Hi Igor and Takuya,
>
> here are a few small patches that simplify __kvm_set_memory_region
> and associated code. Can you please review them?
Ah, already queued. Sorry for being late to respond.
Takuya
>
> Thanks,
>
> Paolo
On 2014/11/14 20:12, Paolo Bonzini wrote:
> The two kmemdup invocations can be unified. I find that the new
> placement of the comment makes it easier to see what happens.

A lot easier to follow the logic.

Reviewed-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp

> Signed-off-by: Paolo Bonzini
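As a userspace illustration of the kind of refactoring described above (two duplications collapsed into one, with the branch keeping only its specific work), here is a minimal sketch; memdup(), struct slots, and dup_slots() are stand-ins invented for this example, not the actual KVM code:

```c
#include <stdlib.h>
#include <string.h>

/* Userspace stand-in for the kernel's kmemdup(). */
static void *memdup(const void *src, size_t len)
{
	void *p = malloc(len);

	if (p)
		memcpy(p, src, len);
	return p;
}

struct slots { int generation; };

/* Unified shape: one duplication up front, then only the
 * branch-specific work inside the if. */
static struct slots *dup_slots(const struct slots *old, int is_delete)
{
	struct slots *slots = memdup(old, sizeof(*old));

	if (!slots)
		return NULL;
	if (is_delete)
		slots->generation++;	/* branch-specific work only */
	return slots;
}
```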
Paolo Bonzini (3):
needlessly.
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
--
Takuya Yoshikawa takuya.yoshik...@gmail.com
No need to scan the entire VCPU array.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
BTW, this looks like hyperv support forces us to stick to the current
implementation which stores VCPUs in an array, or at least something
we can index them; not a good thing.
arch
Please take patch A or B.
Takuya
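The point about not scanning the entire VCPU array can be illustrated in plain C: when the index is already known, a direct array access replaces a linear scan. This is possible precisely because, as noted above, VCPUs are stored in something indexable. All names and the layout here are invented for this sketch, not KVM's actual code:

```c
#include <stddef.h>

struct vcpu { int id; };

#define NR_VCPUS 4
static struct vcpu vcpus[NR_VCPUS] = { {0}, {1}, {2}, {3} };

/* Linear scan: the whole-array walk the patch avoids. */
static struct vcpu *find_vcpu_scan(int id)
{
	for (size_t i = 0; i < NR_VCPUS; i++)
		if (vcpus[i].id == id)
			return &vcpus[i];
	return NULL;
}

/* Direct lookup: O(1), possible because the VCPUs live in an
 * indexable array. */
static struct vcpu *get_vcpu(size_t idx)
{
	return idx < NR_VCPUS ? &vcpus[idx] : NULL;
}
```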
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
arch/x86/kvm/paging_tmpl.h |7 ---
include/linux/kvm_host.h |2 +-
virt/kvm/kvm_main.c| 11 +++
3 files changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock. It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
been changed to do so.
This patch adds a comment explaining this.
Signed-off-by: Takuya
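The invariant the comment documents (the remote TLB flush must happen before mmu_lock is released) can be sketched with a userspace mutex. Everything below is an illustrative stand-in, not kernel code:

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER;
static bool tlb_flushed;
static int dirty_sptes;

/* Stand-in for kvm_flush_remote_tlbs(). */
static void flush_remote_tlbs(void) { tlb_flushed = true; }

/* The required ordering: zap under the lock, flush, and only then
 * unlock.  Flushing after the unlock would give another thread a
 * window in which it observes the zapped sptes while stale TLB
 * entries are still live. */
static void zap_and_flush(void)
{
	pthread_mutex_lock(&mmu_lock);
	dirty_sptes = 0;		/* zap under mmu_lock */
	flush_remote_tlbs();		/* must precede the unlock */
	pthread_mutex_unlock(&mmu_lock);
}
```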
(2014/02/18 18:43), Xiao Guangrong wrote:
On 02/18/2014 04:22 PM, Takuya Yoshikawa wrote:
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock. It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
(2014/02/18 18:07), Paolo Bonzini wrote:
On 18/02/2014 09:22, Takuya Yoshikawa wrote:
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock. It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
arch/ia64/kvm/kvm-ia64.c |2 +-
arch/powerpc/kvm/book3s_hv.c |2 +-
arch/x86/kvm/x86.c |2 +-
include/linux/kvm_host.h |1 -
virt/kvm/kvm_main.c |8
5 files changed, 3 insertions(+), 12
Giving proper names to the 0 and 1 was once suggested. But since 0 is
returned to the userspace, giving it another name can introduce extra
confusion. This patch just explains the meanings instead.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
arch/x86/kvm/x86.c |5
I think this patch set answers Gleb's comment.
Takuya
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
Cc: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/x86.c | 18 --
virt/kvm/kvm_main.c |6 +-
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index
Xiao's "KVM: MMU: flush tlb if the spte can be locklessly modified"
allows us to release mmu_lock before flushing TLBs.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
Cc: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
Xiao can change the remaining mmu_lock to RCU's read
On Tue, 30 Jul 2013 21:02:08 +0800
Xiao Guangrong wrote:
> @@ -2342,6 +2358,13 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
> */
> kvm_flush_remote_tlbs(kvm);
>
> + if (kvm->arch.rcu_free_shadow_page) {
> + sp = list_first_entry(invalid_list, struct
> KVM: MMU: flush tlb if the spte can be locklessly modified
> KVM: MMU: redesign the algorithm of pte_list
> KVM: MMU: introduce nulls desc
> KVM: MMU: introduce pte-list lockless walker
> KVM: MMU: allow locklessly access shadow page table out of vcpu thread
> KVM: MMU: locklessly write-protect
--
Takuya Yoshikawa takuya.yoshik...@gmail.com
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
On Thu, 11 Jul 2013 10:41:53 +0300
Gleb Natapov wrote:
> On Wed, Jul 10, 2013 at 10:49:56PM +0900, Takuya Yoshikawa wrote:
> > On Wed, 10 Jul 2013 11:24:39 +0300
> > "Michael S. Tsirkin" wrote:
> >
> > > On x86, kvm_arch_create_memslot assumes that rmap/lpage_info
page_info);
> +
> for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
> unsigned long ugfn;
> int lpages;
> --
> MST
Since kvm_arch_prepare_memory_region() is called right after installing
the slot marked invalid, wraparound checking should be there to avoid
zapping mmio sptes when mmio generation is still MMIO_MAX_GEN - 1.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
This seems
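The wraparound logic being discussed can be modeled in a few lines. This sketch assumes an illustrative 19-bit generation field; KVM's actual field width and helper names differ:

```c
/* Illustrative 19-bit generation field (not KVM's real width). */
#define MMIO_GEN_BITS 19
#define MMIO_MAX_GEN  ((1u << MMIO_GEN_BITS) - 1)

/* The generation wraps within its bit field. */
static unsigned int next_gen(unsigned int gen)
{
	return (gen + 1) & MMIO_MAX_GEN;
}

/* The zap is triggered one step early (at MMIO_MAX_GEN - 1) because,
 * as described above, the memslot-update path bumps the generation
 * once more while the invalid slot is installed. */
static int need_zap_mmio(unsigned int gen)
{
	return gen >= MMIO_MAX_GEN - 1;
}
```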
On Wed, 03 Jul 2013 16:39:25 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
Please wait a while. I can not understand it very clearly.
This conditional check will cause caching an overflow value into mmio spte.
The simple case is that kvm adds new slots for many times, the
On Wed, 03 Jul 2013 10:53:51 +0200
Paolo Bonzini pbonz...@redhat.com wrote:
On 03/07/2013 10:50, Xiao Guangrong wrote:
Please wait a while. I can not understand it very clearly.
This conditional check will cause caching an overflow value into mmio
spte.
The simple case is
On Wed, 3 Jul 2013 12:10:57 +0300
Gleb Natapov g...@redhat.com wrote:
Yes, makes sense. However, this patch is still an improvement because
the current code is too easily mistaken for an off-by-one bug.
Any improvements to the API can go on top.
If Takuya will send the proper fix
Patch 1: KVM-arch maintainers, please review this one.
{x86, power, s390, arm}-kvm maintainers CCed.
Could not find mips-kvm maintainer in MAINTAINERS.
Patch 2: I did not move the body of kvm_mmu_invalidate_mmio_sptes() into
x86.c because it looked like mmu details.
Takuya Yoshikawa (2
.
In the following patch, x86 will use this new API to check if the mmio
generation has reached its maximum value, in which case mmio sptes need
to be flushed out.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
Removed the trailing space after "return old_memslots;" while at it.
arch
Now that kvm_arch_memslots_updated() catches every increment of the
memslots->generation, checking if the mmio generation has reached its
maximum value is enough.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
arch/x86/kvm/mmu.c |5 +
arch/x86/kvm/x86.c | 10
shadow pages to be zapped.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
arch/x86/kvm/mmu.c |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c60c5da..bc8302f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86
On Thu, 20 Jun 2013 14:45:04 +0300
Gleb Natapov g...@redhat.com wrote:
On Thu, Jun 20, 2013 at 12:59:54PM +0200, Paolo Bonzini wrote:
On 20/06/2013 10:59, Takuya Yoshikawa wrote:
Without this information, users will just see unexpected performance
problems and there is little chance
On Thu, 20 Jun 2013 15:54:38 +0300
Gleb Natapov g...@redhat.com wrote:
On Thu, Jun 20, 2013 at 09:28:37PM +0900, Takuya Yoshikawa wrote:
On Thu, 20 Jun 2013 14:45:04 +0300
Gleb Natapov g...@redhat.com wrote:
On Thu, Jun 20, 2013 at 12:59:54PM +0200, Paolo Bonzini wrote:
Il 20/06
On Thu, 20 Jun 2013 15:14:42 +0200
Paolo Bonzini pbonz...@redhat.com wrote:
On 20/06/2013 14:54, Gleb Natapov wrote:
If they see mysterious performance problems induced by this wraparound, the only
way to know the cause later is by this kind of information in the syslog.
So even the
From: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
Without this information, users will just see unexpected performance
problems and there is little chance we will get good reports from them:
note that mmio generation is increased even when we just start, or stop,
dirty logging for some
On Thu, 20 Jun 2013 23:29:22 +0200
Paolo Bonzini pbonz...@redhat.com wrote:
@@ -4385,8 +4385,10 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm)
* The max value is MMIO_MAX_GEN - 1 since it is not called
* when mark memslot invalid.
*/
- if
On Thu, 13 Jun 2013 21:08:21 -0300
Marcelo Tosatti wrote:
> On Fri, Jun 07, 2013 at 04:51:22PM +0800, Xiao Guangrong wrote:
> - Where is the generation number increased?
Looks like when a new slot is installed in update_memslots() because
it's based on slots->generation. This is not
memslot invalid.
> */
> if (unlikely(kvm_current_mmio_generation(kvm) >= (MMIO_MAX_GEN - 1)))
> - kvm_mmu_zap_mmio_sptes(kvm);
> + kvm_mmu_invalidate_zap_all_pages(kvm);
> }
>
> static int mmu_shrink(struct shrinker *shrink, struct shrink_control
On Mon, 10 Jun 2013 10:57:50 +0300
Gleb Natapov wrote:
> On Fri, Jun 07, 2013 at 04:51:25PM +0800, Xiao Guangrong wrote:
> > +
> > +/*
> > + * Return values of handle_mmio_page_fault_common:
> > + * RET_MMIO_PF_EMULATE: it is a real mmio page fault, emulate the
> > instruction
> > + *
On Fri, 31 May 2013 01:24:43 +0900
Takuya Yoshikawa wrote:
> On Thu, 30 May 2013 03:53:38 +0300
> Gleb Natapov wrote:
>
> > On Wed, May 29, 2013 at 09:19:41PM +0800, Xiao Guangrong wrote:
> > > On 05/29/2013 08:39 PM, Marcelo Tosatti wrote:
> > > > On W
On Thu, 30 May 2013 03:53:38 +0300
Gleb Natapov wrote:
> On Wed, May 29, 2013 at 09:19:41PM +0800, Xiao Guangrong wrote:
> > On 05/29/2013 08:39 PM, Marcelo Tosatti wrote:
> > > On Wed, May 29, 2013 at 11:03:19AM +0800, Xiao Guangrong wrote:
> > > the pages since other vcpus may be doing
-
> arch/x86/kvm/mmu.h |2 +
> arch/x86/kvm/mmutrace.h | 45 +++---
> arch/x86/kvm/x86.c |9 +--
> 5 files changed, 163 insertions(+), 19 deletions(-)
>
> --
> 1.7.7.6
>
On Mon, 13 May 2013 21:02:10 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
On 05/13/2013 07:24 PM, Gleb Natapov wrote:
I agree that this is mostly a code style issue and with Takuya's patch the
indentation is deeper. Also the structure of mmu_free_roots() resembles
Found during documenting mmu_lock usage for myself.
Takuya Yoshikawa (3):
KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling
KVM: MMU: Use kvm_mmu_sync_roots() in kvm_mmu_load()
KVM: MMU: Consolidate common code in mmu_free_roots()
arch/x86/kvm/mmu.c | 48
Rather than clearing the ACC_WRITE_MASK bit of pte_access in the
if (mmu_need_write_protect()) block just to avoid calling mark_page_dirty()
in the following if statement, it is better to simply move the call into
the appropriate else block.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
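A hypothetical sketch of the shape after the change: instead of stripping ACC_WRITE_MASK so a later test skips the dirty marking, the call is made directly in the branch where the write actually goes through. These are simplified stand-ins, not the real set_spte():

```c
#define ACC_WRITE_MASK 0x2u

static int dirty_calls;

/* Stand-in for the kernel's mark_page_dirty(). */
static void mark_page_dirty(void) { dirty_calls++; }

/* mark_page_dirty() is called in the branch where the write is
 * allowed, rather than clearing ACC_WRITE_MASK earlier just to
 * suppress a later "if (pte_access & ACC_WRITE_MASK)" test. */
static void set_spte(unsigned int pte_access, int need_write_protect)
{
	if (pte_access & ACC_WRITE_MASK) {
		if (need_write_protect) {
			/* write-protected: spte stays read-only,
			 * no dirty marking */
		} else {
			mark_page_dirty(); /* moved into this else block */
		}
	}
}
```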
No need to open-code this function.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
arch/x86/kvm/mmu.c |4 +---
1 files changed, 1 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 08119a8..d01f340 100644
--- a/arch/x86/kvm/mmu.c
By making the last three statements common to both if/else cases, the
symmetry between the locking and unlocking becomes clearer. One note
here is that VCPU's root_hpa does not need to be protected by mmu_lock.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
arch/x86/kvm
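The consolidation pattern can be sketched like this: both branches end the same way, so the unlock and the root_hpa reset are written once after the if/else, and the per-VCPU root_hpa is written outside the lock as the note above says. Illustrative stand-ins only, not the actual mmu_free_roots():

```c
#include <pthread.h>

#define INVALID_PAGE 0UL

static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long root_hpa = 0xdead000UL;

/* Consolidated shape: the branches differ, but the unlock and the
 * root_hpa reset form a common tail written once.  root_hpa is
 * per-VCPU state, so it needs no mmu_lock protection. */
static void free_roots(int shadow_root)
{
	pthread_mutex_lock(&mmu_lock);
	if (shadow_root) {
		/* drop the single shadow root page (elided) */
	} else {
		/* drop each of the PAE roots (elided) */
	}
	pthread_mutex_unlock(&mmu_lock);
	root_hpa = INVALID_PAGE;	/* common tail, outside the lock */
}
```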
On Thu, 09 May 2013 18:11:31 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
By making the last three statements common to both if/else cases, the
symmetry between the locking and unlocking becomes clearer. One note
here
On Thu, 09 May 2013 20:16:18 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
That function is really magic, and this change does not really help it. I had
several patches posted some months ago to make this kind of code easier to
understand, but I am too tired to update
On Sat, 27 Apr 2013 11:13:20 +0800
Xiao Guangrong wrote:
> +/*
> + * Fast invalid all shadow pages belong to @slot.
> + *
> + * @slot != NULL means the invalidation is caused the memslot specified
> + * by @slot is being deleted, in this case, we should ensure that rmap
> + * and lpage-info of
On Sat, 27 Apr 2013 11:13:19 +0800
Xiao Guangrong wrote:
> This function is used to reset the large page info of all guest pages
> which will be used in later patch
>
> Signed-off-by: Xiao Guangrong
> ---
> arch/x86/kvm/x86.c | 25 +
> arch/x86/kvm/x86.h |2 ++
>
On Sat, 27 Apr 2013 11:13:18 +0800
Xiao Guangrong wrote:
> It is used to set disallowed large page on the specified level, can be
> used in later patch
>
> Signed-off-by: Xiao Guangrong
> ---
> arch/x86/kvm/x86.c | 53 ++-
> 1 files changed,
On Mon, 22 Apr 2013 15:39:38 +0300
Gleb Natapov wrote:
> > > Do not want kvm_set_memory (cases: DELETE/MOVE/CREATES) to be
> > > suspectible to:
> > >
> > > vcpu 1| kvm_set_memory
> > > create shadow page
> > > nuke shadow page
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
arch/x86/kvm/mmu.c | 11 +++
arch/x86/kvm/paging_tmpl.h |1 +
2 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 633e30c..004cc87 100644
--- a/arch/x86/kvm/mmu.c
+++ b
Takuya Yoshikawa (2):
KVM: MMU: Move kvm_mmu_free_some_pages() into kvm_mmu_alloc_page()
KVM: MMU: Rename kvm_mmu_free_some_pages() to make_mmu_pages_available()
arch/x86/kvm/mmu.c | 16 +---
arch/x86/kvm/mmu.h |6 --
arch/x86/kvm/paging_tmpl.h |1
except when we actually need to
allocate some shadow pages, so we do not need to care about calling it
multiple times in one path by doing kvm_mmu_get_page() a few times.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
arch/x86/kvm/mmu.c |9 +++--
arch/x86/kvm
the name to reflect this meaning better; while doing
this renaming, the code in the wrapper function is inlined into the main
body since the whole function will be inlined into the only caller now.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
arch/x86/kvm/mmu.c |9
On Fri, 15 Mar 2013 23:29:53 +0800
Xiao Guangrong wrote:
> +/*
> + * The caller should protect concurrent access on
> + * kvm->arch.mmio_invalid_gen. Currently, it is used by
> + * kvm_arch_commit_memory_region and protected by kvm->slots_lock.
> + */
> +void kvm_mmu_invalid_mmio_spte(struct kvm
On Fri, 15 Mar 2013 23:26:59 +0800
Xiao Guangrong wrote:
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index d3c4787..61a5bb6 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6991,7 +6991,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
> * mmio sptes.
; not a problem any more. The scalability is the same as zap mmio shadow page
>
>
[ I'm still reading your patches, so please forgive me if I'm wrong. ]
On Thu, 14 Mar 2013 13:13:30 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
Actually, the time complexity of current kvm_mmu_zap_all is the same as zap
mmio shadow page in the mmu-lock (O(n), n is the number
On Wed, 13 Mar 2013 13:06:23 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
On 03/12/2013 04:44 PM, Takuya Yoshikawa wrote:
This will be used not to zap unrelated mmu pages when creating/moving
a memory slot later.
How about save all mmio spte into a mmio-rmap?
The problem
On Wed, 13 Mar 2013 20:42:41 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
How about save all mmio spte into a mmio-rmap?
The problem is that other mmu code would need to care about the pointers
stored in the new rmap list: when mmu_shrink zaps shadow pages for
example.
On Wed, 13 Mar 2013 22:58:21 -0300
Marcelo Tosatti mtosa...@redhat.com wrote:
In zap_spte, don't we need to search the pointer to be removed from the
global mmio-rmap list? How long can that list be?
It is not bad. On softmmu, the rmap list can already grow longer than
300 entries.
On