Re: [Xen-devel] [PATCH v1 4/5] xen: sched: null: reassign vcpus to pcpus when they come back online

2019-07-19 Thread George Dunlap
On 8/25/18 1:22 AM, Dario Faggioli wrote:
> When a vcpu that was offline comes back online, we do want it either
> to be assigned to a pCPU or to go into the wait list.
> 
> So let's do exactly that. Detecting that a vcpu is coming back online
> is a bit tricky. Basically, if the vcpu is waking up and is neither
> assigned to a pCPU nor in the wait list, it must be coming back from
> offline.
> 
> When this happens, we put it in the waitqueue and "tickle" an idle
> pCPU (if any) to go pick it up.
> 
> Looking at the patch, it may seem that the vcpu wakeup code is getting
> complex, and hence that it could introduce latencies. However, all the
> new logic is triggered only when a vcpu is coming online, so the
> overhead during normal operation is just an additional 'if()'.
> 
> Signed-off-by: Dario Faggioli 

Reviewed-by: George Dunlap 


[Xen-devel] [PATCH v1 4/5] xen: sched: null: reassign vcpus to pcpus when they come back online

2018-08-24 Thread Dario Faggioli
When a vcpu that was offline comes back online, we do want it either
to be assigned to a pCPU or to go into the wait list.

So let's do exactly that. Detecting that a vcpu is coming back online
is a bit tricky. Basically, if the vcpu is waking up and is neither
assigned to a pCPU nor in the wait list, it must be coming back from
offline.

When this happens, we put it in the waitqueue and "tickle" an idle
pCPU (if any) to go pick it up.

Looking at the patch, it may seem that the vcpu wakeup code is getting
complex, and hence that it could introduce latencies. However, all the
new logic is triggered only when a vcpu is coming online, so the
overhead during normal operation is just an additional 'if()'.

Signed-off-by: Dario Faggioli 
---
Cc: George Dunlap 
Cc: Stefano Stabellini 
Cc: Roger Pau Monne 
---
 xen/common/sched_null.c |   50 ++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 48 insertions(+), 2 deletions(-)
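
In pseudo-C, the "coming back online" test described above boils down to
something like the following sketch (not a hunk of this patch, just an
illustration reusing the patch's own helpers):

    /*
     * Sketch only: a waking vcpu that is neither the one assigned to its
     * processor nor queued in the waitqueue can only be coming back from
     * offline.
     */
    static bool vcpu_coming_online(const struct vcpu *v)
    {
        unsigned int cpu = v->processor;

        return per_cpu(npc, cpu).vcpu != v &&
               list_empty(&null_vcpu(v)->waitq_elem);
    }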

diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 6259f4643e..3dc2f0738b 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -552,15 +552,19 @@ static void null_vcpu_remove(const struct scheduler *ops, struct vcpu *v)
 
 static void null_vcpu_wake(const struct scheduler *ops, struct vcpu *v)
 {
+    struct null_private *prv = null_priv(ops);
+    struct null_vcpu *nvc = null_vcpu(v);
+    unsigned int cpu = v->processor;
+
     ASSERT(!is_idle_vcpu(v));
 
-    if ( unlikely(curr_on_cpu(v->processor) == v) )
+    if ( unlikely(curr_on_cpu(cpu) == v) )
     {
         SCHED_STAT_CRANK(vcpu_wake_running);
         return;
     }
 
-    if ( unlikely(!list_empty(&null_vcpu(v)->waitq_elem)) )
+    if ( unlikely(!list_empty(&nvc->waitq_elem)) )
     {
         /* Not exactly "on runq", but close enough for reusing the counter */
         SCHED_STAT_CRANK(vcpu_wake_onrunq);
@@ -572,6 +576,45 @@ static void null_vcpu_wake(const struct scheduler *ops, struct vcpu *v)
     else
         SCHED_STAT_CRANK(vcpu_wake_not_runnable);
 
+    /* We need to special case the handling of the vcpu being onlined */
+    if ( unlikely(per_cpu(npc, cpu).vcpu != v && list_empty(&nvc->waitq_elem)) )
+    {
+        spin_lock(&prv->waitq_lock);
+        list_add_tail(&nvc->waitq_elem, &prv->waitq);
+        spin_unlock(&prv->waitq_lock);
+
+        cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
+                    cpupool_domain_cpumask(v->domain));
+
+        if ( !cpumask_intersects(&prv->cpus_free, cpumask_scratch_cpu(cpu)) )
+        {
+            dprintk(XENLOG_G_WARNING, "WARNING: d%dv%d not assigned to any CPU!\n",
+                    v->domain->domain_id, v->vcpu_id);
+            return;
+        }
+
+        /*
+         * Now we would want to assign the vcpu to cpu, but we can't, because
+         * we don't have the lock. So, let's do the following:
+         * - try to remove cpu from the list of free cpus, to avoid races with
+         *   other onlining, inserting or migrating operations;
+         * - tickle the cpu, which will pickup work from the waitqueue, and
+         *   assign it to itself;
+         * - if we're racing already, and if there still are free cpus, try
+         *   again.
+         */
+        while ( cpumask_intersects(&prv->cpus_free, cpumask_scratch_cpu(cpu)) )
+        {
+            unsigned int new_cpu = pick_cpu(prv, v);
+
+            if ( test_and_clear_bit(new_cpu, &prv->cpus_free) )
+            {
+                cpu_raise_softirq(new_cpu, SCHEDULE_SOFTIRQ);
+                return;
+            }
+        }
+    }
+
     /* Note that we get here only for vCPUs assigned to a pCPU */
     cpu_raise_softirq(v->processor, SCHEDULE_SOFTIRQ);
 }
@@ -822,6 +865,9 @@ static struct task_slice null_schedule(const struct scheduler *ops,
         }
  unlock:
         spin_unlock(&prv->waitq_lock);
+
+        if ( ret.task == NULL && !cpumask_test_cpu(cpu, &prv->cpus_free) )
+            cpumask_set_cpu(cpu, &prv->cpus_free);
     }
 
     if ( unlikely(ret.task == NULL || !vcpu_runnable(ret.task)) )
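
For reference, the retry loop above works because test_and_clear_bit() is
atomic: whichever waker clears a pCPU's bit in cpus_free is the one
entitled to tickle that pCPU. A minimal, self-contained C sketch of that
claim-and-tickle pattern follows; tickle() is a hypothetical stand-in for
cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ), and the sketch assumes at most
one machine word's worth of pCPUs:

    #include <stdatomic.h>
    #include <stdbool.h>

    /* One bit per pCPU: set means "idle and free to be claimed". */
    static _Atomic unsigned long cpus_free;

    void tickle(unsigned int cpu); /* hypothetical stand-in */

    /*
     * Try to claim a free pCPU and tickle it, retrying while free pCPUs
     * remain. Returns false if every candidate was claimed by a racing
     * waker first (the vcpu then just sits in the waitqueue until some
     * pCPU picks it up).
     */
    bool claim_and_tickle(void)
    {
        unsigned long free;

        while ( (free = atomic_load(&cpus_free)) != 0 )
        {
            unsigned int cpu = __builtin_ctzl(free);
            unsigned long bit = 1UL << cpu;

            /* Atomic claim: only one waker can clear a given bit. */
            if ( atomic_fetch_and(&cpus_free, ~bit) & bit )
            {
                tickle(cpu);
                return true;
            }
            /* Lost the race for this cpu; loop and try another. */
        }
        return false;
    }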

