as doing that includes changing the scheduler lock mapping for the pCPU
itself, and the correct way of doing that is:
 - take the lock that the pCPU is using right now (which may be the lock
   of another scheduler);
 - change the mapping of the lock to the RTDS one;
 - release the lock (the one that has actually been taken!).
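For illustration only (not part of the patch), here is a minimal,
self-contained sketch of that take/remap/release pattern, using plain
pthread mutexes in place of Xen's spinlock_t and the pcpu_schedule_lock
helpers; struct toy_pcpu and remap_schedule_lock() are made-up names for
the example, and races with concurrent remappers (which the real helpers
have to cope with) are ignored to keep it short:

#include <pthread.h>

/* Toy stand-in for per_cpu(schedule_data, cpu): the field records which
 * lock currently serializes scheduling decisions for this (toy) pCPU. */
struct toy_pcpu {
    pthread_mutex_t *schedule_lock;
};

/* Repoint @pcpu at @new_lock, following the three steps listed above. */
static void remap_schedule_lock(struct toy_pcpu *pcpu,
                                pthread_mutex_t *new_lock)
{
    /* 1) take the lock the pCPU is using right now (possibly another
     *    scheduler's lock). */
    pthread_mutex_t *old_lock = pcpu->schedule_lock;
    pthread_mutex_lock(old_lock);

    /* 2) change the mapping to the new (here: RTDS-like global) lock. */
    pcpu->schedule_lock = new_lock;

    /* 3) release the lock that was actually taken; unlocking through
     *    pcpu->schedule_lock would now touch a lock we never acquired. */
    pthread_mutex_unlock(old_lock);
}

With this, moving a toy pCPU onto a shared lock is just
remap_schedule_lock(&pcpu, &global_lock), which mirrors what
rt_alloc_pdata() does below with prv->lock.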
Signed-off-by: Dario Faggioli <dario.faggi...@citrix.com>
---
Cc: Meng Xu <men...@cis.upenn.edu>
Cc: George Dunlap <george.dun...@eu.citrix.com>
Cc: Tianyang Chen <ti...@seas.upenn.edu>
---
 xen/common/sched_rt.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index c896a6f..d98bfb6 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -653,11 +653,16 @@ static void *
 rt_alloc_pdata(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
+    spinlock_t *old_lock;
     unsigned long flags;
 
-    spin_lock_irqsave(&prv->lock, flags);
+    /* Move the scheduler lock to our global runqueue lock. */
+    old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
+
     per_cpu(schedule_data, cpu).schedule_lock = &prv->lock;
-    spin_unlock_irqrestore(&prv->lock, flags);
+
+    /* _Not_ pcpu_schedule_unlock(): per_cpu().schedule_lock changed! */
+    spin_unlock_irqrestore(old_lock, flags);
 
     if ( !alloc_cpumask_var(&_cpumask_scratch[cpu]) )
         return NULL;