Re: [Qemu-devel] [PATCH v1 7/7] kvm/x86: Hyper-V SynIC timers

2015-11-30 Thread Roman Kagan
On Fri, Nov 27, 2015 at 11:49:40AM +0100, Paolo Bonzini wrote:

[ sorry missed your message on Friday, replying now ]

> On 27/11/2015 09:12, Roman Kagan wrote:
> >> > +n = div64_u64(time_now - stimer->exp_time, stimer->count) + 1;
> >> > +stimer->exp_time += n * stimer->count;
> > This is actually just a remainder calculation so I'd rather do it
> > directly with div64_u64_rem().
> 
> It took me a while to understand why it was a remainder. :)

It gets easier if you think of it this way: the slack we've accumulated
consists of a few whole periods plus a remainder that falls inside the
current period, so the time left until the next tick is ("count" is the
timer period here)

  delta = count - slack % count
where
  slack = time_now - exp_time

This gives you immediately your

>   exp_time = time_now + (count - (time_now - exp_time) % count)
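
In code, a minimal sketch of that (using div64_u64_rem() from
linux/math64.h; variable names as in the patch, untested):

	u64 remainder;

	/* drop whole missed periods (lazy mode); only the position
	 * inside the current period matters for the next expiration */
	div64_u64_rem(time_now - stimer->exp_time, stimer->count, &remainder);
	stimer->exp_time = time_now + (stimer->count - remainder);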

Roman.



Re: [Qemu-devel] [PATCH v1 7/7] kvm/x86: Hyper-V SynIC timers

2015-11-27 Thread Andrey Smetanin



On 11/27/2015 01:49 PM, Paolo Bonzini wrote:
> On 27/11/2015 09:12, Roman Kagan wrote:
> >> > +   n = div64_u64(time_now - stimer->exp_time, stimer->count) + 1;
> >> > +   stimer->exp_time += n * stimer->count;
> > This is actually just a remainder calculation so I'd rather do it
> > directly with div64_u64_rem().
> 
> It took me a while to understand why it was a remainder. :)  Expanding
> Andrey's formula you get
> 
> exp_time = exp_time + count + (time_now - exp_time) / count * count
> 
> the remainder, (time_now - exp_time) % count would be
> 
> time_now - exp_time - (time_now - exp_time) / count * count
> 
> so -((time_now - exp_time) % count) is
> 
> exp_time - time_now + (time_now - exp_time) / count * count
> 
> so Andrey's expression is
> 
> exp_time = time_now + (count - (time_now - exp_time) % count)
> 
> Yeah, that looks nice.

I'll redo it this way.

> Paolo





Re: [Qemu-devel] [PATCH v1 7/7] kvm/x86: Hyper-V SynIC timers

2015-11-27 Thread Roman Kagan
On Wed, Nov 25, 2015 at 06:20:21PM +0300, Andrey Smetanin wrote:
> Per the Hyper-V specification (and as required by Hyper-V-aware guests),
> SynIC provides 4 per-vCPU timers.  Each timer is programmed via a pair
> of MSRs, and signals expiration by delivering a special-format message
> to the configured SynIC message slot and triggering the corresponding
> synthetic interrupt.
> 
> Note: as implemented by this patch, all periodic timers are "lazy"
> (i.e. if the vCPU wasn't scheduled for more than the timer period, the
> timer events are lost), regardless of the corresponding configuration
> MSR.  If deemed necessary, the "catch up" mode (the timer period is
> shortened until the timer catches up) will be implemented later.
> 
> Signed-off-by: Andrey Smetanin 
> CC: Gleb Natapov 
> CC: Paolo Bonzini 
> CC: "K. Y. Srinivasan" 
> CC: Haiyang Zhang 
> CC: Vitaly Kuznetsov 
> CC: Roman Kagan 
> CC: Denis V. Lunev 
> CC: qemu-devel@nongnu.org
> ---
>  arch/x86/include/asm/kvm_host.h|  13 ++
>  arch/x86/include/uapi/asm/hyperv.h |   6 +
>  arch/x86/kvm/hyperv.c  | 325 -
>  arch/x86/kvm/hyperv.h  |  24 +++
>  arch/x86/kvm/x86.c |   9 +
>  include/linux/kvm_host.h   |   1 +
>  6 files changed, 375 insertions(+), 3 deletions(-)

A couple of nitpicks:

> +static void stimer_restart(struct kvm_vcpu_hv_stimer *stimer)
> +{
> + u64 time_now;
> + ktime_t ktime_now;
> + u64 n;
> +
> + time_now = get_time_ref_counter(stimer_to_vcpu(stimer)->kvm);
> + ktime_now = ktime_get();
> +
> + /*
> +  * Calculate a positive integer n for which the condition
> +  * (stimer->exp_time + n * stimer->count) > time_now
> +  * is true. We will use (stimer->exp_time + n * stimer->count)
> +  * as new stimer->exp_time.
> +  */
> +
> + n = div64_u64(time_now - stimer->exp_time, stimer->count) + 1;
> + stimer->exp_time += n * stimer->count;

This is actually just a remainder calculation so I'd rather do it
directly with div64_u64_rem().

> +void kvm_hv_process_stimers(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu);
> + struct kvm_vcpu_hv_stimer *stimer;
> + u64 time_now;
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(hv_vcpu->stimer); i++)
> + if (test_and_clear_bit(i, hv_vcpu->stimer_pending_bitmap)) {
> + stimer = &hv_vcpu->stimer[i];
> + stimer_stop(stimer);

I think there's no need for this explicit stop: I see no way to arrive
here with a running timer, and even if there were, it would be safe as
your timer callback only manipulates the bitmaps atomically.

Neither comment is critical so

Reviewed-by: Roman Kagan 

Roman.



Re: [Qemu-devel] [PATCH v1 7/7] kvm/x86: Hyper-V SynIC timers

2015-11-27 Thread Paolo Bonzini


On 27/11/2015 09:12, Roman Kagan wrote:
>> > +  n = div64_u64(time_now - stimer->exp_time, stimer->count) + 1;
>> > +  stimer->exp_time += n * stimer->count;
> This is actually just a remainder calculation so I'd rather do it
> directly with div64_u64_rem().

It took me a while to understand why it was a remainder. :)  Expanding
Andrey's formula you get

exp_time = exp_time + count + (time_now - exp_time) / count * count

the remainder, (time_now - exp_time) % count would be

time_now - exp_time - (time_now - exp_time) / count * count

so -((time_now - exp_time) % count) is

exp_time - time_now + (time_now - exp_time) / count * count

so Andrey's expression is

exp_time = time_now + (count - (time_now - exp_time) % count)
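
As a quick sanity check with made-up numbers (not from the patch): take
exp_time = 100, count = 30, time_now = 175.  Andrey's code gives
n = 75 / 30 + 1 = 3 and a new exp_time of 100 + 3 * 30 = 190; the
remainder form gives 175 + (30 - 75 % 30) = 175 + 15 = 190.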

Yeah, that looks nice.

Paolo



[Qemu-devel] [PATCH v1 7/7] kvm/x86: Hyper-V SynIC timers

2015-11-25 Thread Andrey Smetanin
Per the Hyper-V specification (and as required by Hyper-V-aware guests),
SynIC provides 4 per-vCPU timers.  Each timer is programmed via a pair
of MSRs, and signals expiration by delivering a special-format message
to the configured SynIC message slot and triggering the corresponding
synthetic interrupt.

Note: as implemented by this patch, all periodic timers are "lazy"
(i.e. if the vCPU wasn't scheduled for more than the timer period, the
timer events are lost), regardless of the corresponding configuration
MSR.  If deemed necessary, the "catch up" mode (the timer period is
shortened until the timer catches up) will be implemented later.

Signed-off-by: Andrey Smetanin 
CC: Gleb Natapov 
CC: Paolo Bonzini 
CC: "K. Y. Srinivasan" 
CC: Haiyang Zhang 
CC: Vitaly Kuznetsov 
CC: Roman Kagan 
CC: Denis V. Lunev 
CC: qemu-devel@nongnu.org
---
 arch/x86/include/asm/kvm_host.h|  13 ++
 arch/x86/include/uapi/asm/hyperv.h |   6 +
 arch/x86/kvm/hyperv.c  | 325 -
 arch/x86/kvm/hyperv.h  |  24 +++
 arch/x86/kvm/x86.c |   9 +
 include/linux/kvm_host.h   |   1 +
 6 files changed, 375 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f608e17..e35c5ca 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -375,6 +375,17 @@ struct kvm_mtrr {
struct list_head head;
 };
 
+/* Hyper-V SynIC timer */
+struct kvm_vcpu_hv_stimer {
+   struct hrtimer timer;
+   int index;
+   u64 config;
+   u64 count;
+   u64 exp_time;
+   struct hv_message msg;
+   bool msg_pending;
+};
+
 /* Hyper-V synthetic interrupt controller (SynIC)*/
 struct kvm_vcpu_hv_synic {
u64 version;
@@ -394,6 +405,8 @@ struct kvm_vcpu_hv {
s64 runtime_offset;
struct kvm_vcpu_hv_synic synic;
struct kvm_hyperv_exit exit;
+   struct kvm_vcpu_hv_stimer stimer[HV_SYNIC_STIMER_COUNT];
+   DECLARE_BITMAP(stimer_pending_bitmap, HV_SYNIC_STIMER_COUNT);
 };
 
 struct kvm_vcpu_arch {
diff --git a/arch/x86/include/uapi/asm/hyperv.h b/arch/x86/include/uapi/asm/hyperv.h
index e86d77e..f9d3349 100644
--- a/arch/x86/include/uapi/asm/hyperv.h
+++ b/arch/x86/include/uapi/asm/hyperv.h
@@ -362,4 +362,10 @@ struct hv_message_page {
struct hv_message sint_message[HV_SYNIC_SINT_COUNT];
 };
 
+#define HV_STIMER_ENABLE   (1ULL << 0)
+#define HV_STIMER_PERIODIC (1ULL << 1)
+#define HV_STIMER_LAZY (1ULL << 2)
+#define HV_STIMER_AUTOENABLE   (1ULL << 3)
+#define HV_STIMER_SINT(config) (__u8)(((config) >> 16) & 0x0F)
+
 #endif
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 6412b6b..9f8eb82 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -147,15 +147,32 @@ static void kvm_hv_notify_acked_sint(struct kvm_vcpu *vcpu, u32 sint)
 {
struct kvm *kvm = vcpu->kvm;
struct kvm_vcpu_hv_synic *synic = vcpu_to_synic(vcpu);
-   int gsi, idx;
+   struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu);
+   struct kvm_vcpu_hv_stimer *stimer;
+   int gsi, idx, stimers_pending;
 
vcpu_debug(vcpu, "Hyper-V SynIC acked sint %d\n", sint);
 
if (synic->msg_page & HV_SYNIC_SIMP_ENABLE)
synic_clear_sint_msg_pending(synic, sint);
 
+   /* Try to deliver pending Hyper-V SynIC timers messages */
+   stimers_pending = 0;
+   for (idx = 0; idx < ARRAY_SIZE(hv_vcpu->stimer); idx++) {
+   stimer = &hv_vcpu->stimer[idx];
+   if (stimer->msg_pending &&
+   (stimer->config & HV_STIMER_ENABLE) &&
+   HV_STIMER_SINT(stimer->config) == sint) {
+   set_bit(stimer->index,
+   hv_vcpu->stimer_pending_bitmap);
+   stimers_pending++;
+   }
+   }
+   if (stimers_pending)
+   kvm_make_request(KVM_REQ_HV_STIMER, vcpu);
+
idx = srcu_read_lock(&kvm->irq_srcu);
-   gsi = atomic_read(&vcpu_to_synic(vcpu)->sint_to_gsi[sint]);
+   gsi = atomic_read(&synic->sint_to_gsi[sint]);
if (gsi != -1)
kvm_notify_acked_gsi(kvm, gsi);
srcu_read_unlock(&kvm->irq_srcu, idx);
@@ -371,9 +388,275 @@ static u64 get_time_ref_counter(struct kvm *kvm)
return div_u64(get_kernel_ns() + kvm->arch.kvmclock_offset, 100);
 }
 
+static void stimer_mark_expired(struct kvm_vcpu_hv_stimer *stimer,
+   bool vcpu_kick)
+{
+   struct kvm_vcpu *vcpu = stimer_to_vcpu(stimer);
+
+   set_bit(stimer->index,
+   vcpu_to_hv_vcpu(vcpu)->stimer_pending_bitmap);
+   kvm_make_request(KVM_REQ_HV_STIMER, vcpu);
+   if (vcpu_kick)
+