On Thu, 20 Dec 2012 07:55:43 -0700
Alex Williamson wrote:
> > Yes, the fix should work, but I do not want to update the
> > generation from outside of update_memslots().
>
> Ok, then:
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 87089dd..c7b5061 100644
> ---
On Thu, 20 Dec 2012 06:41:27 -0700
Alex Williamson wrote:
> Hmm, isn't the fix as simple as:
>
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -847,7 +847,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
> GFP_KERNEL);
> if (!slots)
On Wed, 19 Dec 2012 08:42:57 -0700
Alex Williamson wrote:
> Please let me know if you can identify one of these as the culprit.
> They're all very simple, but there's always a chance I've missed a hard
> coding of slot numbers somewhere. Thanks,
I identified the one:
commit
loc -> bool
[08/10] KVM: struct kvm_memory_slot.flags -> u32
[09/10] KVM: struct kvm_memory_slot.id -> short
[10/10] KVM: Increase user memory slots on x86 to 125
If I can get time, I will check which one caused the problem tomorrow.
Thanks,
Takuya
On Tue, 18 Dec 2012 16:25:58 +0900
Takuya
as tens of milliseconds: actually there is no limit since it
is roughly proportional to the number of guest pages.
Another point to note is that this patch removes the only user of
slot_bitmap which will cause some problems when we increase the number
of slots further.
Signed-off-by: Takuya
kvm->arch.n_requested_mmu_pages by
mmu_lock as can be seen from the fact that it is read locklessly.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |4
arch/x86/kvm/x86.c |9 -
2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/a
Not needed any more.
Signed-off-by: Takuya Yoshikawa
---
Documentation/virtual/kvm/mmu.txt |7 ---
arch/x86/include/asm/kvm_host.h |5 -
arch/x86/kvm/mmu.c| 10 --
3 files changed, 0 insertions(+), 22 deletions(-)
diff --git a/Documentation/virtual
of memory before being rescheduled: on my test environment,
cond_resched_lock() was called only once for protecting 12GB of memory
even without THP. We can also revisit Avi's "unlocked TLB flush" work
later for completely suppressing extra TLB flushes if needed.
Signed-off-by: Takuya
Better to place mmu_lock handling and TLB flushing code together since
this is a self-contained function.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |3 +++
arch/x86/kvm/x86.c |5 +
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
No longer need to care about the mapping level in this function.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 01d7c2a..bee3509 100644
--- a/arch/x86/kvm/mmu.c
This is needed to make kvm_mmu_slot_remove_write_access() rmap based:
otherwise we may end up using invalid rmap's.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/x86.c |9 -
virt/kvm/kvm_main.c |1 -
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm
xbd/0x110
[ 575.242298] [] ? fget_light+0x3c/0x140
[ 575.242381] [] do_vfs_ioctl+0x98/0x570
[ 575.242463] [811a91b1] ? fget_light+0xa1/0x140
[ 575.246393] [811a914c] ? fget_light+0x3c/0x140
[ 575.250363] [8119e511] sys_ioctl+0x91/0xb0
[ 575.254327] [81684c19] system_call_fastpath+0x16/0x1b
Takuya Yoshikawa (7):
KVM: Write protect the updated slot
We can check whether accum_steal has a positive value instead of using
the KVM_REQ_STEAL_UPDATE bit in vcpu->requests; this is how we usually
account for such things in the kernel.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
arch/x86/kvm/x86.c | 11
On Fri, 14 Dec 2012 13:28:15 +0200
Gleb Natapov g...@redhat.com wrote:
On Fri, Dec 14, 2012 at 07:37:18PM +0900, Takuya Yoshikawa wrote:
We can check if accum_steal has any positive value instead of using
KVM_REQ_STEAL_UPDATE bit in vcpu->requests; and this is the way we
usually do
On Fri, 07 Dec 2012 09:09:39 -0700
Alex Williamson wrote:
> On Fri, 2012-12-07 at 23:02 +0900, Takuya Yoshikawa wrote:
> > On Thu, 06 Dec 2012 15:21:26 -0700
> > Alex Williamson wrote:
> >
> > > With the 3 private slots, this gives us a nice round 128 sl
On Thu, 06 Dec 2012 15:21:26 -0700
Alex Williamson wrote:
> With the 3 private slots, this gives us a nice round 128 slots total.
So I think this patch needs to be applied after resolving the
slot_bitmap issue. We may not need to protect slots with large
slot id values, but still it's possible
On Mon, 03 Dec 2012 16:39:05 -0700
Alex Williamson wrote:
> A couple notes/questions; in the previous version we had a
> kvm_arch_flush_shadow() call when we increased the number of slots.
> I'm not sure if this is still necessary. I had also made the x86
> specific slot_bitmap dynamically grow
Ccing live migration developers who should be interested in this work,
On Mon, 12 Nov 2012 21:10:32 -0200
Marcelo Tosatti wrote:
> On Mon, Nov 05, 2012 at 05:59:26PM +0800, Xiao Guangrong wrote:
> > Do not drop large spte until it can be replaced by small pages so that
> > the guest can
On Mon, 24 Sep 2012 09:16:12 +0200
Gleb Natapov g...@redhat.com wrote:
Yes, for guests that do not enable steal time, KVM_REQ_STEAL_UPDATE
should never be set, but currently it is. The patch (not tested) should
fix this.
Thinking a bit more about KVM_REQ_STEAL_UPDATE...
diff --git
On Tue, 25 Sep 2012 10:12:49 +0200
Avi Kivity wrote:
> It will. The tradeoff is between false-positive costs (undercommit) and
> true positive costs (overcommit). I think undercommit should perform
> well no matter what.
>
> If we utilize preempt notifiers to track overcommit dynamically,
On Mon, 24 Sep 2012 16:50:13 +0200
Avi Kivity a...@redhat.com wrote:
Afterwards, most exits are APIC and interrupt related, HLT, and MMIO.
Of these, some are special (HLT, interrupt injection) and some are not
(read/write most APIC registers). I don't think one group dominates the
other. So
On Fri, 21 Sep 2012 23:15:40 +0530
Raghavendra K T wrote:
> >> How about doing cond_resched() instead?
> >
> > Actually, an actual call to yield() may be better.
> >
> > That will set scheduler hints to make the scheduler pick
> > another task for one round, while preserving this task's
> > top
update occurs frequently enough except when we give each vcpu a
dedicated core justifies its tiny cost.
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
[My email address change is not a mistake.]
arch/x86/kvm/x86.c | 11 ---
1 files changed, 8 insertions(+), 3
On Mon, 24 Sep 2012 14:59:44 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
On 09/24/2012 02:24 PM, Takuya Yoshikawa wrote:
This is an RFC since I have not done any comparison with the approach
using for_each_set_bit() which can be seen in Avi's work.
Why not compare
On Mon, 24 Sep 2012 12:18:15 +0200
Avi Kivity a...@redhat.com wrote:
On 09/24/2012 08:24 AM, Takuya Yoshikawa wrote:
This is an RFC since I have not done any comparison with the approach
using for_each_set_bit() which can be seen in Avi's work.
Takuya
---
We did a simple test
On Mon, 24 Sep 2012 12:09:00 +0200
Avi Kivity a...@redhat.com wrote:
while (vcpu->requests) {
xchg(&vcpu->requests, request);
for_each_set_bit(request) {
clear_bit(X);
..
}
}
In fact I had something like that in one of the earlier
On Fri, 21 Sep 2012 17:30:20 +0530
Raghavendra K T wrote:
> From: Raghavendra K T
>
> When PLE handler fails to find a better candidate to yield_to, it
> goes back and does spin again. This is acceptable when we do not
> have overcommit.
> But in overcommitted scenarios (especially when we
On Thu, 30 Aug 2012 19:49:23 +0300
Michael S. Tsirkin m...@redhat.com wrote:
On Fri, Aug 31, 2012 at 01:09:56AM +0900, Takuya Yoshikawa wrote:
On Thu, 30 Aug 2012 16:21:31 +0300
Michael S. Tsirkin m...@redhat.com wrote:
+static u32 apic_read_reg(int reg_off, void *bitmap
On Wed, 5 Sep 2012 12:26:49 +0300
Michael S. Tsirkin m...@redhat.com wrote:
It's not guaranteed if another thread can modify the bitmap.
Is this the case here? If yes we need at least ACCESS_ONCE.
In this patch, using the wrapper function to read out a register
value forces compilers not to do
() did wrong predictions
by inserting debug code.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Michael S. Tsirkin m...@redhat.com
---
arch/x86/kvm/lapic.c | 30 ++
1 files changed, 18 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/lapic.c b
On Thu, 30 Aug 2012 09:37:02 +0300
Michael S. Tsirkin m...@redhat.com wrote:
After staring at your code for a while it does appear to
do the right thing, and looks cleaner than what
we have now. commit log could be clearer.
It should state something like:
Clean up code in
On Thu, 30 Aug 2012 13:10:33 +0300
Michael S. Tsirkin m...@redhat.com wrote:
OK, I'll do these on top of this patch.
Tweaking these 5 lines for readability across multiple
patches is just not worth it.
As long as we do random cleanups of this function it's probably easier
to just do them
, to iterate over the register array to make
the code clearer.
Note that we actually confirmed that the likely() made wrong predictions
by inserting debug code.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Michael S. Tsirkin m...@redhat.com
---
arch/x86/kvm/lapic.c | 35
On Thu, 30 Aug 2012 16:21:31 +0300
Michael S. Tsirkin m...@redhat.com wrote:
+static u32 apic_read_reg(int reg_off, void *bitmap)
+{
+ return *((u32 *)(bitmap + reg_off));
+}
+
Contrast with apic_set_reg which gets apic,
add fact that all callers invoke REG_POS and you will
see
On Thu, 30 Aug 2012 01:51:20 +0300
Michael S. Tsirkin m...@redhat.com wrote:
This text:
+ if (likely(!word_offset && !word[0]))
+ return -1;
is a left-over from the original implementation.
There we did a ton of gratitious calls to interrupt
injection so it was important
On Mon, 27 Aug 2012 17:25:42 -0300
Marcelo Tosatti mtosa...@redhat.com wrote:
On Fri, Aug 24, 2012 at 06:15:49PM +0900, Takuya Yoshikawa wrote:
Although returning -1 should be likely according to the likely(),
the ASSERT in apic_find_highest_irr() will be triggered in such a case.
It seems
On Fri, 24 Aug 2012 15:54:59 -0300
Marcelo Tosatti mtosa...@redhat.com wrote:
Other arches do not need this.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: kvm/arch/x86/kvm/x86.c
===
---
On Mon, 27 Aug 2012 16:06:01 -0300
Marcelo Tosatti mtosa...@redhat.com wrote:
Any explanation why (old.base_gfn != new.base_gfn) case can be
omitted?
(old.base_gfn != new.base_gfn) check covers the cases
1. old.base_gfn = 0, new.base_gfn = !0 (slot creation)
and
x != 0, y != 0, x
in a for loop and then use __fls() if found. When
nothing found, we are out of the loop, so we can just return -1.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/lapic.c | 18 ++
1 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/arch/x86
On Thu, 23 Aug 2012 15:42:49 +0800
Gavin Shan sha...@linux.vnet.ibm.com wrote:
The build error was caused by builtin functions calling functions
implemented in modules. That was introduced by the
following commit.
commit 4d8b81abc47b83a1939e59df2fdb0e98dfe0eedd
The patches
Alex, what do you think about this?
On Thu, 23 Aug 2012 16:35:15 +0800
Gavin Shan sha...@linux.vnet.ibm.com wrote:
On Thu, Aug 23, 2012 at 05:24:00PM +0900, Takuya Yoshikawa wrote:
On Thu, 23 Aug 2012 15:42:49 +0800
Gavin Shan sha...@linux.vnet.ibm.com wrote:
The build error was caused
in the future.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Gleb Natapov g...@redhat.com
---
arch/x86/kvm/mmu.c | 13 +
1 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9651c2c..5e4b255 100644
--- a/arch
On Tue, 14 Aug 2012 12:17:12 -0300
Marcelo Tosatti mtosa...@redhat.com wrote:
- if (kvm->arch.n_used_mmu_pages > 0) {
- if (!nr_to_scan--)
- break;
-- (*1)
+ if (!kvm->arch.n_used_mmu_pages)
On Mon, 13 Aug 2012 19:15:23 -0300
Marcelo Tosatti mtosa...@redhat.com wrote:
On Fri, Aug 10, 2012 at 05:16:12PM +0900, Takuya Yoshikawa wrote:
The following commit changed mmu_shrink() so that it would skip VMs
whose n_used_mmu_pages was not zero and try to free pages from others
mmu pages as before.
Note that the if (!nr_to_scan--) check is removed, since we no longer
try to free mmu pages from more than one VM.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Gleb Natapov g...@redhat.com
---
This patch just recovers the original behaviour and is not related
On Tue, 7 Aug 2012 12:57:13 +0200
Alexander Graf ag...@suse.de wrote:
+struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
+{
+ struct kvm_memslots *slots = kvm_memslots(kvm);
+ struct kvm_memory_slot *memslot;
+
+ kvm_for_each_memslot(memslot, slots)
+
On Thu, 9 Aug 2012 22:25:32 -0300
Marcelo Tosatti mtosa...@redhat.com wrote:
I'll send a patch to flush per memslot in the next days, you can work
out the PPC details in the meantime.
Are you going to implement that using slot_bitmap?
Since I'm now converting
From: Takuya Yoshikawa
Now that we have defined generic set_bit_le() we do not need to use
test_and_set_bit_le() for atomically setting a bit.
Signed-off-by: Takuya Yoshikawa
Cc: Avi Kivity
Cc: Marcelo Tosatti
---
virt/kvm/kvm_main.c |3 +--
1 files changed, 1 insertions(+), 2 deletions
From: Takuya Yoshikawa
Needed to replace test_and_set_bit_le() in virt/kvm/kvm_main.c which is
being used for this missing function.
Signed-off-by: Takuya Yoshikawa
Acked-by: Benjamin Herrenschmidt
---
arch/powerpc/include/asm/bitops.h | 10 ++
1 files changed, 10 insertions(+), 0
From: Takuya Yoshikawa
Needed to replace test_and_set_bit_le() in virt/kvm/kvm_main.c which is
being used for this missing function.
Signed-off-by: Takuya Yoshikawa
Acked-by: Arnd Bergmann
---
include/asm-generic/bitops/le.h | 10 ++
1 files changed, 10 insertions(+), 0 deletions
From: Takuya Yoshikawa
To introduce generic set_bit_le() later, we remove our own definition
and use a proper non-atomic bitops function: __set_bit_le().
Signed-off-by: Takuya Yoshikawa
Acked-by: Grant Grundler
---
drivers/net/ethernet/dec/tulip/de2104x.c|7 ++-
drivers/net
From: Ben Hutchings
There are now standard functions for dealing with little-endian bit
arrays, so use them instead of our own implementations.
Signed-off-by: Ben Hutchings
Signed-off-by: Takuya Yoshikawa
---
drivers/net/ethernet/sfc/efx.c|4 ++--
drivers/net/ethernet/sfc
for big-endian
case than the generic __set_bit_le(), it should not be a problem to
use the latter since both maintainers prefer it.
Ben Hutchings (1):
sfc: Use standard __{clear,set}_bit_le() functions
Takuya Yoshikawa (4):
drivers/net/ethernet/dec/tulip: Use standard __set_bit_le() function