On 01/08/2017 05:26, Longpeng (Mike) wrote:
> On 2017/7/31 21:20, Paolo Bonzini wrote:
>> On 31/07/2017 14:27, David Hildenbrand wrote:
>>> I'm not sure whether getting the vcpu's privilege level is expensive
>>> on all architectures, so I record it in kvm_sched_out() to minimize
>>> the extra cycles spent in kvm_vcpu_on_spin().
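[For reference, a minimal sketch of the caching idea under discussion:
sample the guest's privilege level once at sched-out time instead of per
candidate inside kvm_vcpu_on_spin(). The preempted_in_kernel field name is
illustrative only; the excerpts here do not show the actual field or its
placement.]

static void kvm_sched_out(struct preempt_notifier *pn,
			  struct task_struct *next)
{
	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

	if (current->state == TASK_RUNNING)
		vcpu->preempted = true;
	/* Illustrative cached flag: was the guest in kernel mode when it
	 * was scheduled out? Read later by the spinning vcpu's loop, so
	 * the (possibly expensive) query happens once per preemption. */
	vcpu->preempted_in_kernel = kvm_arch_vcpu_in_kernel(vcpu);
	kvm_arch_vcpu_put(vcpu);
}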
On Mon, 31 Jul 2017 19:32:26 +0200
David Hildenbrand wrote:
> This one should work for s390x, no caching (or special access patterns
> like on x86) needed:
>
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2447,6 +2447,11 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> 	return kvm_s390_vcpu_has_irq(vcpu, 0);
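[The quoted hunk is cut off above. As a reference point, the s390 check
along these lines that eventually landed upstream tests the problem-state
bit of the guest PSW, roughly:]

+bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
+{
+	/* PSW_MASK_PSTATE set means the guest runs in problem state
+	 * (user mode); kernel mode is its absence. The bit is cheap to
+	 * read, hence no caching needed on s390x. */
+	return !(vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE);
+}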
On 31/07/2017 15:42, Marc Zyngier wrote:
>> If the vcpu (me) exits because it is requesting a user-mode spinlock,
>> the spinlock holder may have been preempted in user mode or kernel mode.
>> But if the vcpu (me) is in kernel mode, then the holder must have been
>> preempted in kernel mode, so we should choose a vcpu in kernel mode
>> as the yield target.
On Sat, Jul 29, 2017 at 02:22:57PM +0800, Longpeng(Mike) wrote:
> We discussed the idea here:
> https://www.spinics.net/lists/kvm/msg140593.html
This is not a very nice way to start a commit description.
Please provide the necessary background to understand your change
directly in the commit message.
On 31/07/2017 14:27, David Hildenbrand wrote:
>> I'm not sure whether getting the vcpu's privilege level is expensive
>> on all architectures, so I record it in kvm_sched_out() to minimize
>> the extra cycles spent in kvm_vcpu_on_spin().
>>
> as you only care for x86 right now either way, you can directly optimize
> here for the good (…)
On 2017/7/31 20:31, Cornelia Huck wrote:
> On Mon, 31 Jul 2017 20:08:14 +0800
> "Longpeng (Mike)" wrote:
>
>> Hi David,
>>
>> On 2017/7/31 19:31, David Hildenbrand wrote:
>>>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>>>> index 648b34c..f8f0d74 100644
>>>> --- a/include/linux/kvm_host.h
>>>> +++ b/include/linux/kvm_host.h
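[The kvm_host.h hunk is truncated in the excerpt. Judging from the rest
of the thread, it threads a kernel-mode flag into kvm_vcpu_on_spin() and
declares the new per-arch hook; in the shape that eventually landed it
reads roughly:]

-void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu);
+void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool yield_to_kernel_mode);
+bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);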
Hi David,

On 2017/7/31 19:31, David Hildenbrand wrote:
> [no idea if this change makes sense (and especially if it has any bad
> side effects), do you have performance numbers? I'll just have a look at
> the general structure of the patch in the meantime]
>
I don't have any test results yet; could (…)
[no idea if this change makes sense (and especially if it has any bad
side effects), do you have performance numbers? I'll just have a look at
the general structure of the patch in the meantime]

> +bool kvm_arch_vcpu_spin_kernmode(struct kvm_vcpu *vcpu)

kvm_arch_vcpu_in_kernel() ?

> +{
> + …
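[Under the name David suggests, the x86 implementation that eventually
landed simply asks whether the guest was at CPL 0, roughly (using the
kvm_x86_ops interface of that era):]

bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
{
	/* CPL 0 means the guest vcpu was executing in kernel mode. */
	return kvm_x86_ops->get_cpl(vcpu) == 0;
}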
We discussed the idea here:
https://www.spinics.net/lists/kvm/msg140593.html

I think it is also suitable for other architectures.

If the vcpu (me) exits because it is requesting a user-mode spinlock,
the spinlock holder may have been preempted in user mode or kernel mode.
But if the vcpu (me) is in kernel mode, then the holder must have been
preempted in kernel mode, so we should choose a vcpu in kernel mode as
the yield target.
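[A simplified sketch of how this heuristic plugs into the directed-yield
loop; the real kvm_vcpu_on_spin() also rotates from the last boosted vcpu
and applies PLE yield-eligibility heuristics, omitted here:]

void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
{
	struct kvm_vcpu *vcpu;
	int i;

	kvm_for_each_vcpu(i, vcpu, me->kvm) {
		if (vcpu == me || !READ_ONCE(vcpu->preempted))
			continue;
		/* The new check: when the spinner was in kernel mode, skip
		 * candidates that were preempted in user mode -- they cannot
		 * be holding the kernel-mode lock we are spinning on. */
		if (yield_to_kernel_mode && !kvm_arch_vcpu_in_kernel(vcpu))
			continue;
		if (kvm_vcpu_yield_to(vcpu) > 0)
			break;
	}
}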