Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging

2012-12-21 Thread Takuya Yoshikawa
On Thu, 20 Dec 2012 07:55:43 -0700 Alex Williamson wrote: > > Yes, the fix should work, but I do not want to update the > > generation from outside of update_memslots(). > > Ok, then: > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h > index 87089dd..c7b5061 100644 > ---
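
The only disagreement in this sub-thread is where the memslot generation should be bumped when a new slot array is installed. Below is a standalone C sketch of the direction Takuya argues for, namely bumping it inside update_memslots() so that every caller installing a new array inherits the fix. The structure is reduced to the one field that matters and kernel-style types are assumed; this is an illustration, not the actual kvm_main.c code.

struct kvm_memslots {
	u64 generation;
	/* slot array and id-to-index map omitted */
};

/*
 * Insertion/sorting of the changed slot would go here; the point of
 * the sketch is that the generation update lives in this helper
 * rather than at the kmemdup() call site in __kvm_set_memory_region().
 */
static void update_memslots(struct kvm_memslots *slots, u64 last_generation)
{
	slots->generation = last_generation + 1;
}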

Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging

2012-12-20 Thread Takuya Yoshikawa
On Thu, 20 Dec 2012 06:41:27 -0700 Alex Williamson wrote: > Hmm, isn't the fix as simple as: > > --- a/virt/kvm/kvm_main.c > +++ b/virt/kvm/kvm_main.c > @@ -847,7 +847,8 @@ int __kvm_set_memory_region(struct kvm *kvm, > GFP_KERNEL); > if (!slots)

Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging

2012-12-19 Thread Takuya Yoshikawa
On Wed, 19 Dec 2012 08:42:57 -0700 Alex Williamson wrote: > Please let me know if you can identify one of these as the culprit. > They're all very simple, but there's always a chance I've missed a hard > coding of slot numbers somewhere. Thanks, I identified the one: commit

Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging

2012-12-19 Thread Takuya Yoshikawa
loc -> bool [08/10] KVM: struct kvm_memory_slot.flags -> u32 [09/10] KVM: struct kvm_memory_slot.id -> short [10/10] KVM: Increase user memory slots on x86 to 125 If I can get time, I will check which one caused the problem tomorrow. Thanks, Takuya On Tue, 18 Dec 2012 16:25:58

[PATCH 3/7] KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based

2012-12-17 Thread Takuya Yoshikawa
as tens of milliseconds: actually there is no limit since it is roughly proportional to the number of guest pages. Another point to note is that this patch removes the only user of slot_bitmap which will cause some problems when we increase the number of slots further. Signed-off-by: Takuya

[PATCH 5/7] KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself

2012-12-17 Thread Takuya Yoshikawa
kvm->arch.n_requested_mmu_pages by mmu_lock as can be seen from the fact that it is read locklessly. Signed-off-by: Takuya Yoshikawa --- arch/x86/kvm/mmu.c |4 arch/x86/kvm/x86.c |9 - 2 files changed, 8 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu.c b/a

[PATCH 4/7] KVM: x86: Remove unused slot_bitmap from kvm_mmu_page

2012-12-17 Thread Takuya Yoshikawa
Not needed any more. Signed-off-by: Takuya Yoshikawa --- Documentation/virtual/kvm/mmu.txt |7 --- arch/x86/include/asm/kvm_host.h |5 - arch/x86/kvm/mmu.c| 10 -- 3 files changed, 0 insertions(+), 22 deletions(-) diff --git a/Documentation/virtual

[PATCH 7/7] KVM: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long time

2012-12-17 Thread Takuya Yoshikawa
of memory before being rescheduled: on my test environment, cond_resched_lock() was called only once for protecting 12GB of memory even without THP. We can also revisit Avi's "unlocked TLB flush" work later for completely suppressing extra TLB flushes if needed. Signed-off-by: Takuya
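
The rescheduling pattern patch 7/7 describes looks like the following when sketched in isolation. This is an illustration of the technique rather than the patch body; write_protect_one() stands in for the per-rmap work, while need_resched(), spin_needbreak(), cond_resched_lock() and kvm_flush_remote_tlbs() are the real kernel primitives involved.

	spin_lock(&kvm->mmu_lock);
	for (index = 0; index <= last_index; ++index) {
		flush |= write_protect_one(kvm, index);	/* hypothetical helper */

		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
			if (flush) {
				/* flush accumulated changes before the lock can be dropped */
				kvm_flush_remote_tlbs(kvm);
				flush = false;
			}
			cond_resched_lock(&kvm->mmu_lock);
		}
	}
	if (flush)
		kvm_flush_remote_tlbs(kvm);
	spin_unlock(&kvm->mmu_lock);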

[PATCH 6/7] KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself

2012-12-17 Thread Takuya Yoshikawa
Better to place mmu_lock handling and TLB flushing code together since this is a self-contained function. Signed-off-by: Takuya Yoshikawa --- arch/x86/kvm/mmu.c |3 +++ arch/x86/kvm/x86.c |5 + 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu.c b/arch

[PATCH 2/7] KVM: MMU: Remove unused parameter level from __rmap_write_protect()

2012-12-17 Thread Takuya Yoshikawa
No longer need to care about the mapping level in this function. Signed-off-by: Takuya Yoshikawa --- arch/x86/kvm/mmu.c |6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index 01d7c2a..bee3509 100644 --- a/arch/x86/kvm/mmu.c

[PATCH 1/7] KVM: Write protect the updated slot only when we start dirty logging

2012-12-17 Thread Takuya Yoshikawa
This is needed to make kvm_mmu_slot_remove_write_access() rmap based: otherwise we may end up using invalid rmap's. Signed-off-by: Takuya Yoshikawa --- arch/x86/kvm/x86.c |9 - virt/kvm/kvm_main.c |1 - 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm
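
On the x86 commit path this boils down to a guard of roughly the following shape, sketched from the changelog; the exact condition and its placement in kvm_arch_commit_memory_region() may differ from the posted diff.

	/* write protect only when dirty logging is being turned on for this slot */
	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) &&
	    !(old.flags & KVM_MEM_LOG_DIRTY_PAGES))
		kvm_mmu_slot_remove_write_access(kvm, mem->slot);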

[PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start dirty logging

2012-12-17 Thread Takuya Yoshikawa
[ 575.242463] [811a91b1] ? fget_light+0xa1/0x140 [ 575.246393] [811a914c] ? fget_light+0x3c/0x140 [ 575.250363] [8119e511] sys_ioctl+0x91/0xb0 [ 575.254327] [81684c19] system_call_fastpath+0x16/0x1b Takuya Yoshikawa (7): KVM: Write protect the updated slot

[PATCH] KVM: Don't use vcpu->requests for steal time accounting

2012-12-14 Thread Takuya Yoshikawa
We can check if accum_steal has any positive value instead of using KVM_REQ_STEAL_UPDATE bit in vcpu->requests; and this is the way we usually do for accounting for something in the kernel. Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp --- arch/x86/kvm/x86.c | 11
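
A sketch of the idea in that changelog: gate the update on the accumulated value itself instead of on a request bit. Field names follow the x86 steal-time code of the time; the body of the actual diff is not reproduced here.

static void record_steal_time(struct kvm_vcpu *vcpu)
{
	if (!vcpu->arch.st.accum_steal)
		return;		/* nothing accumulated since the last update */

	/* ... publish accum_steal to the guest's steal time area ... */
	vcpu->arch.st.accum_steal = 0;
}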

Re: [PATCH] KVM: Don't use vcpu->requests for steal time accounting

2012-12-14 Thread Takuya Yoshikawa
On Fri, 14 Dec 2012 13:28:15 +0200 Gleb Natapov g...@redhat.com wrote: On Fri, Dec 14, 2012 at 07:37:18PM +0900, Takuya Yoshikawa wrote: We can check if accum_steal has any positive value instead of using KVM_REQ_STEAL_UPDATE bit in vcpu->requests; and this is the way we usually do

Re: [PATCH 10/10] kvm: Increase user memory slots on x86 to 125

2012-12-10 Thread Takuya Yoshikawa
On Fri, 07 Dec 2012 09:09:39 -0700 Alex Williamson wrote: > On Fri, 2012-12-07 at 23:02 +0900, Takuya Yoshikawa wrote: > > On Thu, 06 Dec 2012 15:21:26 -0700 > > Alex Williamson wrote: > > > > > With the 3 private slots, this gives us a nice round 128 sl

Re: [PATCH 10/10] kvm: Increase user memory slots on x86 to 125

2012-12-07 Thread Takuya Yoshikawa
On Thu, 06 Dec 2012 15:21:26 -0700 Alex Williamson wrote: > With the 3 private slots, this gives us a nice round 128 slots total. So I think this patch needs to be applied after resolving the slot_bitmap issue. We may not need to protect slots with large slot id values, but still it's possible

Re: [RFC PATCH 0/6] kvm: Growable memory slot array

2012-12-04 Thread Takuya Yoshikawa
On Mon, 03 Dec 2012 16:39:05 -0700 Alex Williamson wrote: > A couple notes/questions; in the previous version we had a > kvm_arch_flush_shadow() call when we increased the number of slots. > I'm not sure if this is still necessary. I had also made the x86 > specific slot_bitmap dynamically grow

Re: [PATCH] KVM: MMU: lazily drop large spte

2012-11-13 Thread Takuya Yoshikawa
Ccing live migration developers who should be interested in this work, On Mon, 12 Nov 2012 21:10:32 -0200 Marcelo Tosatti wrote: > On Mon, Nov 05, 2012 at 05:59:26PM +0800, Xiao Guangrong wrote: > > Do not drop large spte until it can be insteaded by small pages so that > > the guest can

Re: [Qemu-devel] [PATCH] KVM: MMU: lazily drop large spte

2012-11-13 Thread Takuya Yoshikawa
Ccing live migration developers who should be interested in this work, On Mon, 12 Nov 2012 21:10:32 -0200 Marcelo Tosatti mtosa...@redhat.com wrote: On Mon, Nov 05, 2012 at 05:59:26PM +0800, Xiao Guangrong wrote: Do not drop large spte until it can be insteaded by small pages so that the

Re: [RFC PATCH] KVM: x86: Skip request checking branches in vcpu_enter_guest() more effectively

2012-10-04 Thread Takuya Yoshikawa
On Mon, 24 Sep 2012 09:16:12 +0200 Gleb Natapov g...@redhat.com wrote: Yes, for guests that do not enable steal time KVM_REQ_STEAL_UPDATE should be never set, but currently it is. The patch (not tested) should fix this. Thinking a bit more about KVM_REQ_STEAL_UPDATE... diff --git

Re: [PATCH RFC 1/2] kvm: Handle undercommitted guest case in PLE handler

2012-09-25 Thread Takuya Yoshikawa
On Tue, 25 Sep 2012 10:12:49 +0200 Avi Kivity wrote: > It will. The tradeoff is between false-positive costs (undercommit) and > true positive costs (overcommit). I think undercommit should perform > well no matter what. > > If we utilize preempt notifiers to track overcommit dynamically,

Re: [RFC PATCH] KVM: x86: Skip request checking branches in vcpu_enter_guest() more effectively

2012-09-25 Thread Takuya Yoshikawa
On Mon, 24 Sep 2012 16:50:13 +0200 Avi Kivity a...@redhat.com wrote: Afterwards, most exits are APIC and interrupt related, HLT, and MMIO. Of these, some are special (HLT, interrupt injection) and some are not (read/write most APIC registers). I don't think one group dominates the other. So

Re: [PATCH RFC 2/2] kvm: Be courteous to other VMs in overcommitted scenario in PLE handler

2012-09-24 Thread Takuya Yoshikawa
On Fri, 21 Sep 2012 23:15:40 +0530 Raghavendra K T wrote: > >> How about doing cond_resched() instead? > > > > Actually, an actual call to yield() may be better. > > > > That will set scheduler hints to make the scheduler pick > > another task for one round, while preserving this task's > > top

[RFC PATCH] KVM: x86: Skip request checking branches in vcpu_enter_guest() more effectively

2012-09-24 Thread Takuya Yoshikawa
update occurs frequently enough except when we give each vcpu a dedicated core justifies its tiny cost. Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp --- [My email address change is not a mistake.] arch/x86/kvm/x86.c | 11 --- 1 files changed, 8 insertions(+), 3

Re: [RFC PATCH] KVM: x86: Skip request checking branches in vcpu_enter_guest() more effectively

2012-09-24 Thread Takuya Yoshikawa
On Mon, 24 Sep 2012 14:59:44 +0800 Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote: On 09/24/2012 02:24 PM, Takuya Yoshikawa wrote: This is an RFC since I have not done any comparison with the approach using for_each_set_bit() which can be seen in Avi's work. Why not compare

Re: [RFC PATCH] KVM: x86: Skip request checking branches in vcpu_enter_guest() more effectively

2012-09-24 Thread Takuya Yoshikawa
On Mon, 24 Sep 2012 12:18:15 +0200 Avi Kivity a...@redhat.com wrote: On 09/24/2012 08:24 AM, Takuya Yoshikawa wrote: This is an RFC since I have not done any comparison with the approach using for_each_set_bit() which can be seen in Avi's work. Takuya --- We did a simple test

Re: [RFC PATCH] KVM: x86: Skip request checking branches in vcpu_enter_guest() more effectively

2012-09-24 Thread Takuya Yoshikawa
On Mon, 24 Sep 2012 12:09:00 +0200 Avi Kivity a...@redhat.com wrote: while (vcpu->request) { xchg(vcpu->request, request); for_each_set_bit(request) { clear_bit(X); .. } } In fact I had something like that in one of the earlier
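
The loop being quoted, written out a little more fully. This is illustrative only: handle_request() is a hypothetical dispatcher, whereas the real vcpu_enter_guest() open-codes the handling of each request bit.

	while (vcpu->requests) {
		unsigned long pending = xchg(&vcpu->requests, 0);
		unsigned int req;

		for_each_set_bit(req, &pending, BITS_PER_LONG)
			handle_request(vcpu, req);	/* hypothetical dispatcher */
	}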

Re: [PATCH RFC 2/2] kvm: Be courteous to other VMs in overcommitted scenario in PLE handler

2012-09-21 Thread Takuya Yoshikawa
On Fri, 21 Sep 2012 17:30:20 +0530 Raghavendra K T wrote: > From: Raghavendra K T > > When PLE handler fails to find a better candidate to yield_to, it > goes back and does spin again. This is acceptable when we do not > have overcommit. > But in overcommitted scenarios (especially when we

Re: [PATCH -v3] KVM: x86: lapic: Clean up find_highest_vector() and count_vectors()

2012-09-05 Thread Takuya Yoshikawa
On Thu, 30 Aug 2012 19:49:23 +0300 Michael S. Tsirkin m...@redhat.com wrote: On Fri, Aug 31, 2012 at 01:09:56AM +0900, Takuya Yoshikawa wrote: On Thu, 30 Aug 2012 16:21:31 +0300 Michael S. Tsirkin m...@redhat.com wrote: +static u32 apic_read_reg(int reg_off, void *bitmap

Re: [PATCH -v3] KVM: x86: lapic: Clean up find_highest_vector() and count_vectors()

2012-09-05 Thread Takuya Yoshikawa
On Wed, 5 Sep 2012 12:26:49 +0300 Michael S. Tsirkin m...@redhat.com wrote: It's not guaranteed if another thread can modify the bitmap. Is this the case here? If yes we need at least ACCESS_ONCE. In this patch, using the wrapper function to read out a register value forces compilers not to do

[PATCH -v4] KVM: x86: lapic: Clean up find_highest_vector() and count_vectors()

2012-09-05 Thread Takuya Yoshikawa
() did wrong predictions by inserting debug code. Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp Cc: Michael S. Tsirkin m...@redhat.com --- arch/x86/kvm/lapic.c | 30 ++ 1 files changed, 18 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/lapic.c b

Re: [PATCH] KVM: x86: lapic: Fix the misuse of likely() in find_highest_vector()

2012-08-30 Thread Takuya Yoshikawa
On Thu, 30 Aug 2012 09:37:02 +0300 Michael S. Tsirkin m...@redhat.com wrote: After staring at your code for a while it does appear to do the right thing, and looks cleaner than what we have now. commit log could be clearer. It should state something like: Clean up code in

Re: [PATCH] KVM: x86: lapic: Fix the misuse of likely() in find_highest_vector()

2012-08-30 Thread Takuya Yoshikawa
On Thu, 30 Aug 2012 13:10:33 +0300 Michael S. Tsirkin m...@redhat.com wrote: OK, I'll do these on top of this patch. Tweaking these 5 lines for readability across multiple patches is just not worth it. As long as we do random cleanups of this function it's probably easier to just do them

[PATCH -v3] KVM: x86: lapic: Clean up find_highest_vector() and count_vectors()

2012-08-30 Thread Takuya Yoshikawa
, to iterate over the register array to make the code clearer. Note that we actually confirmed that the likely() did wrong predictions by inserting debug code. Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp Cc: Michael S. Tsirkin m...@redhat.com --- arch/x86/kvm/lapic.c | 35

Re: [PATCH -v3] KVM: x86: lapic: Clean up find_highest_vector() and count_vectors()

2012-08-30 Thread Takuya Yoshikawa
On Thu, 30 Aug 2012 16:21:31 +0300 Michael S. Tsirkin m...@redhat.com wrote: +static u32 apic_read_reg(int reg_off, void *bitmap) +{ + return *((u32 *)(bitmap + reg_off)); +} + Contrast with apic_set_reg which gets apic, add fact that all callers invoke REG_POS and you will see

Re: [PATCH] KVM: x86: lapic: Fix the misuse of likely() in find_highest_vector()

2012-08-29 Thread Takuya Yoshikawa
On Thu, 30 Aug 2012 01:51:20 +0300 Michael S. Tsirkin m...@redhat.com wrote: This text: + if (likely(!word_offset && !word[0])) + return -1; is a left-over from the original implementation. There we did a ton of gratuitous calls to interrupt injection so it was important

Re: [PATCH] KVM: x86: lapic: Fix the misuse of likely() in find_highest_vector()

2012-08-28 Thread Takuya Yoshikawa
On Mon, 27 Aug 2012 17:25:42 -0300 Marcelo Tosatti mtosa...@redhat.com wrote: On Fri, Aug 24, 2012 at 06:15:49PM +0900, Takuya Yoshikawa wrote: Although returning -1 should be likely according to the likely(), the ASSERT in apic_find_highest_irr() will be triggered in such a case. It seems

Re: [patch 3/3] KVM: move postcommit flush to x86, as mmio sptes are x86 specific

2012-08-27 Thread Takuya Yoshikawa
On Fri, 24 Aug 2012 15:54:59 -0300 Marcelo Tosatti mtosa...@redhat.com wrote: Other arches do not need this. Signed-off-by: Marcelo Tosatti mtosa...@redhat.com Index: kvm/arch/x86/kvm/x86.c === ---

Re: [patch 3/3] KVM: move postcommit flush to x86, as mmio sptes are x86 specific

2012-08-27 Thread Takuya Yoshikawa
On Mon, 27 Aug 2012 16:06:01 -0300 Marcelo Tosatti mtosa...@redhat.com wrote: Any explanation why (old.base_gfn != new.base_gfn) case can be omitted? (old.base_gfn != new.base_gfn) check covers the cases 1. old.base_gfn = 0, new.base_gfn = !0 (slot creation) and x != 0, y != 0, x

[PATCH] KVM: x86: lapic: Fix the misuse of likely() in find_highest_vector()

2012-08-24 Thread Takuya Yoshikawa
in a for loop and then use __fls() if found. When nothing found, we are out of the loop, so we can just return -1. Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp --- arch/x86/kvm/lapic.c | 18 ++ 1 files changed, 10 insertions(+), 8 deletions(-) diff --git a/arch/x86
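
A sketch of the structure the changelog describes: walk the 256-bit register set from the top and apply __fls() to the first non-zero 32-bit word. The stride and constants follow the lapic code of the time, but treat this as an illustration rather than the applied patch.

#define MAX_APIC_VECTOR		256
#define APIC_VECTORS_PER_REG	32

static int find_highest_vector(void *bitmap)
{
	int word_offset;
	u32 *reg;

	for (word_offset = MAX_APIC_VECTOR / APIC_VECTORS_PER_REG - 1;
	     word_offset >= 0; --word_offset) {
		reg = bitmap + (word_offset << 4);	/* IRR/ISR words sit 16 bytes apart */
		if (*reg)
			return __fls(*reg) + word_offset * APIC_VECTORS_PER_REG;
	}

	return -1;
}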

Re: [PATCH] kvm/book3s: fix build error caused by gfn_to_hva_memslot()

2012-08-23 Thread Takuya Yoshikawa
On Thu, 23 Aug 2012 15:42:49 +0800 Gavin Shan sha...@linux.vnet.ibm.com wrote: The build error was caused by that builtin functions are calling the functions implemented in modules. That was introduced by the following commit. commit 4d8b81abc47b83a1939e59df2fdb0e98dfe0eedd The patches

Re: [PATCH] kvm/book3s: fix build error caused by gfn_to_hva_memslot()

2012-08-23 Thread Takuya Yoshikawa
Alex, what do you think about this? On Thu, 23 Aug 2012 16:35:15 +0800 Gavin Shan sha...@linux.vnet.ibm.com wrote: On Thu, Aug 23, 2012 at 05:24:00PM +0900, Takuya Yoshikawa wrote: On Thu, 23 Aug 2012 15:42:49 +0800 Gavin Shan sha...@linux.vnet.ibm.com wrote: The build error was caused

[PATCH v2] KVM: MMU: Fix mmu_shrink() so that it can free mmu pages as intended

2012-08-20 Thread Takuya Yoshikawa
in the future. Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp Cc: Gleb Natapov g...@redhat.com --- arch/x86/kvm/mmu.c | 13 + 1 files changed, 9 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index 9651c2c..5e4b255 100644 --- a/arch

Re: [PATCH RESEND] KVM: MMU: Fix mmu_shrink() so that it can free mmu pages as intended

2012-08-14 Thread Takuya Yoshikawa
On Tue, 14 Aug 2012 12:17:12 -0300 Marcelo Tosatti mtosa...@redhat.com wrote: - if (kvm->arch.n_used_mmu_pages > 0) { - if (!nr_to_scan--) - break; -- (*1) + if (!kvm->arch.n_used_mmu_pages)

Re: [PATCH RESEND] KVM: MMU: Fix mmu_shrink() so that it can free mmu pages as intended

2012-08-13 Thread Takuya Yoshikawa
On Mon, 13 Aug 2012 19:15:23 -0300 Marcelo Tosatti mtosa...@redhat.com wrote: On Fri, Aug 10, 2012 at 05:16:12PM +0900, Takuya Yoshikawa wrote: The following commit changed mmu_shrink() so that it would skip VMs whose n_used_mmu_pages was not zero and try to free pages from others

[PATCH RESEND] KVM: MMU: Fix mmu_shrink() so that it can free mmu pages as intended

2012-08-10 Thread Takuya Yoshikawa
mmu pages as before. Note that if (!nr_to_scan--) check is removed since we do not try to free mmu pages from more than one VM. Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp Cc: Gleb Natapov g...@redhat.com --- This patch just recovers the original behaviour and is not related
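
The intended shrinker behaviour, sketched below: skip VMs with no used shadow pages, zap from the first VM that has some, and rotate that VM to the tail of vm_list so the same VM is not always the victim. zap_oldest_shadow_page() is a hypothetical helper and the locking around vm_list is simplified.

	list_for_each_entry(kvm, &vm_list, vm_list) {
		if (!kvm->arch.n_used_mmu_pages)
			continue;

		spin_lock(&kvm->mmu_lock);
		zap_oldest_shadow_page(kvm);		/* hypothetical helper */
		spin_unlock(&kvm->mmu_lock);

		list_move_tail(&kvm->vm_list, &vm_list);
		break;
	}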

Re: [PATCH 5/8] KVM: Add hva_to_memslot

2012-08-09 Thread Takuya Yoshikawa
On Tue, 7 Aug 2012 12:57:13 +0200 Alexander Graf ag...@suse.de wrote: +struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva) +{ + struct kvm_memslots *slots = kvm_memslots(kvm); + struct kvm_memory_slot *memslot; + + kvm_for_each_memslot(memslot, slots) +
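
The quoted hunk is cut off above; presumably the helper finishes along these lines. This is a guess based on the fields a memslot already carries, not a copy of Alexander's patch.

struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
{
	struct kvm_memslots *slots = kvm_memslots(kvm);
	struct kvm_memory_slot *memslot;

	kvm_for_each_memslot(memslot, slots)
		if (hva >= memslot->userspace_addr &&
		    hva < memslot->userspace_addr +
			  (memslot->npages << PAGE_SHIFT))
			return memslot;

	return NULL;
}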

Re: [PATCH 3/5] KVM: PPC: Book3S HV: Handle memory slot deletion and modification correctly

2012-08-09 Thread Takuya Yoshikawa
On Thu, 9 Aug 2012 22:25:32 -0300 Marcelo Tosatti mtosa...@redhat.com wrote: I'll send a patch to flush per memslot in the next days, you can work out the PPC details in the meantime. Are you going to implement that using slot_bitmap? Since I'm now converting

[PATCH 5/5] KVM: Replace test_and_set_bit_le() in mark_page_dirty_in_slot() with set_bit_le()

2012-08-07 Thread Takuya Yoshikawa
From: Takuya Yoshikawa Now that we have defined generic set_bit_le() we do not need to use test_and_set_bit_le() for atomically setting a bit. Signed-off-by: Takuya Yoshikawa Cc: Avi Kivity Cc: Marcelo Tosatti --- virt/kvm/kvm_main.c |3 +-- 1 files changed, 1 insertions(+), 2 deletions
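
The change itself is a small substitution in mark_page_dirty_in_slot(), roughly the hunk below (two deletions, one insertion, matching the diffstat in the preview). The surrounding context is reconstructed from the kvm_main.c of that era and may differ slightly from the posted patch.

 	if (memslot && memslot->dirty_bitmap) {
 		unsigned long rel_gfn = gfn - memslot->base_gfn;
 
-		/* TODO: introduce set_bit_le() and use it */
-		test_and_set_bit_le(rel_gfn, memslot->dirty_bitmap);
+		set_bit_le(rel_gfn, memslot->dirty_bitmap);
 	}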

[PATCH 4/5] powerpc: bitops: Introduce {clear,set}_bit_le()

2012-08-07 Thread Takuya Yoshikawa
From: Takuya Yoshikawa Needed to replace test_and_set_bit_le() in virt/kvm/kvm_main.c which is being used for this missing function. Signed-off-by: Takuya Yoshikawa Acked-by: Benjamin Herrenschmidt --- arch/powerpc/include/asm/bitops.h | 10 ++ 1 files changed, 10 insertions(+), 0

[PATCH 3/5] bitops: Introduce generic {clear,set}_bit_le()

2012-08-07 Thread Takuya Yoshikawa
From: Takuya Yoshikawa Needed to replace test_and_set_bit_le() in virt/kvm/kvm_main.c which is being used for this missing function. Signed-off-by: Takuya Yoshikawa Acked-by: Arnd Bergmann --- include/asm-generic/bitops/le.h | 10 ++ 1 files changed, 10 insertions(+), 0 deletions
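
The added helpers most likely mirror the existing *_bit_le() wrappers in asm-generic/bitops/le.h, that is, atomic bitops with the little-endian bit swizzle applied. Shown as a sketch, not a copy of the patch.

static inline void set_bit_le(int nr, void *addr)
{
	set_bit(nr ^ BITOP_LE_SWIZZLE, addr);
}

static inline void clear_bit_le(int nr, void *addr)
{
	clear_bit(nr ^ BITOP_LE_SWIZZLE, addr);
}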

[PATCH 2/5] drivers/net/ethernet/dec/tulip: Use standard __set_bit_le() function

2012-08-07 Thread Takuya Yoshikawa
From: Takuya Yoshikawa To introduce generic set_bit_le() later, we remove our own definition and use a proper non-atomic bitops function: __set_bit_le(). Signed-off-by: Takuya Yoshikawa Acked-by: Grant Grundler --- drivers/net/ethernet/dec/tulip/de2104x.c|7 ++- drivers/net

[PATCH 1/5] sfc: Use standard __{clear,set}_bit_le() functions

2012-08-07 Thread Takuya Yoshikawa
From: Ben Hutchings There are now standard functions for dealing with little-endian bit arrays, so use them instead of our own implementations. Signed-off-by: Ben Hutchings Signed-off-by: Takuya Yoshikawa --- drivers/net/ethernet/sfc/efx.c|4 ++-- drivers/net/ethernet/sfc

[PATCH 0/5 - RESEND] Introduce generic set_bit_le()

2012-08-07 Thread Takuya Yoshikawa
for big-endian case, than the generic __set_bit_le(), it should not be a problem to use the latter since both maintainers prefer it. Ben Hutchings (1): sfc: Use standard __{clear,set}_bit_le() functions Takuya Yoshikawa (4): drivers/net/ethernet/dec/tulip: Use standard __set_bit_le() function
