From: Raghavendra K T <raghavendra...@linux.vnet.ibm.com>

This patch introduces a helper function that calculates the system load
(the idea is borrowed from the loadavg calculation). The load is normalized
to 2048, i.e., a return value (threshold) of 2048 implies an approximately
1:1 committed guest.

In undercommit cases (load < threshold/2) we simply return from the PLE
handler. In overcommit cases (load > 1.75 * threshold) we do a yield(). The
rationale is to allow other VMs on the host to run instead of burning CPU
cycles (a userspace sketch of this classification is included in the notes
below).

Reviewed-by: Srikar Dronamraju <sri...@linux.vnet.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra...@linux.vnet.ibm.com>
---
The idea of yielding in overcommit cases (especially with a large number of
small guests) was
Acked-by: Rik van Riel <r...@redhat.com>
Andrew Theurer has also stressed the importance of reducing yield_to
overhead and using yield().
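
For reference, here is a minimal userspace sketch of the same normalization
and classification. It is only an illustration of the heuristic, not the
kernel code; it assumes glibc's getloadavg() and sysconf(_SC_NPROCESSORS_ONLN):

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Same fixed-point scale the kernel uses for avenrun: FIXED_1 == 1 << 11 == 2048 */
#define FIXED_1			(1UL << 11)
#define COMMIT_THRESHOLD	(FIXED_1)
#define UNDERCOMMIT_THRESHOLD	(COMMIT_THRESHOLD >> 1)
#define OVERCOMMIT_THRESHOLD	((COMMIT_THRESHOLD << 1) - (COMMIT_THRESHOLD >> 2))

int main(void)
{
	double avg[1];
	long cpus = sysconf(_SC_NPROCESSORS_ONLN);
	unsigned long load;

	if (getloadavg(avg, 1) != 1 || cpus < 1)
		return 1;

	/* normalize the 1-minute load average to FIXED_1 per online cpu */
	load = (unsigned long)(avg[0] * FIXED_1) / cpus;

	if (load < UNDERCOMMIT_THRESHOLD)
		printf("undercommitted (load %lu): PLE handler would return early\n", load);
	else if (load > OVERCOMMIT_THRESHOLD)
		printf("overcommitted (load %lu): would yield() when yield_to fails\n", load);
	else
		printf("roughly 1:1 committed (load %lu)\n", load);

	return 0;
}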

(let threshold = 2048)
Rationale for using threshold/2 as the undercommit limit:
Requiring the load to be below (0.5 * threshold) avoids the concern raised
by Rik: scenarios where a preempted lock-holder vcpu is still waiting to be
scheduled (this arises when the rq length is > 1 even though we are
undercommitted).

Rationale for using (1.75 * threshold) for the overcommit scenario:
This is a heuristic: above that load we should probably see rq length > 1,
with a vcpu of a different VM waiting to be scheduled.
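
Putting numbers on the two limits (threshold = FIXED_1 = 2048), the macros
in the patch work out to:

  UNDERCOMMIT_THRESHOLD = 2048 >> 1                 = 1024        = 0.50 * threshold
  OVERCOMMIT_THRESHOLD  = (2048 << 1) - (2048 >> 2) = 4096 - 512
                        = 3584                                    = 1.75 * threshold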

 virt/kvm/kvm_main.c |   35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e376434..28bbdfb 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1697,15 +1697,43 @@ bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
 }
 #endif
 
+/*
+ * A load of 2048 corresponds to a 1:1 committed system.
+ * The undercommit threshold is half of that, and the
+ * overcommit threshold is 1.75 times of that.
+ */
+#define COMMIT_THRESHOLD (FIXED_1)
+#define UNDERCOMMIT_THRESHOLD (COMMIT_THRESHOLD >> 1)
+#define OVERCOMMIT_THRESHOLD ((COMMIT_THRESHOLD << 1) - (COMMIT_THRESHOLD >> 2))
+
+unsigned long kvm_system_load(void)
+{
+       unsigned long load;
+
+       load = avenrun[0] + FIXED_1/200;
+       load = load / num_online_cpus();
+
+       return load;
+}
+
 void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 {
        struct kvm *kvm = me->kvm;
        struct kvm_vcpu *vcpu;
        int last_boosted_vcpu = me->kvm->last_boosted_vcpu;
        int yielded = 0;
+       unsigned long load;
        int pass;
        int i;
 
+       load = kvm_system_load();
+       /*
+        * When we are undercommitted, let us not waste time
+        * iterating over all the VCPUs.
+        */
+       if (load < UNDERCOMMIT_THRESHOLD)
+               return;
+
        kvm_vcpu_set_in_spin_loop(me, true);
        /*
         * We boost the priority of a VCPU that is runnable but not
@@ -1735,6 +1763,13 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
                                break;
                }
        }
+       /*
+        * If we are not able to yield, especially in overcommit cases,
+        * let us be courteous to other VMs' VCPUs waiting to be scheduled.
+        */
+       if (!yielded && load > OVERCOMMIT_THRESHOLD)
+               yield();
+
        kvm_vcpu_set_in_spin_loop(me, false);
 
        /* Ensure vcpu is not eligible during next spinloop */
