On Wed, 2016-03-23 at 19:05 +0000, George Dunlap wrote:
> On 18/03/16 19:05, Dario Faggioli wrote:
> > 
> > by using the sched_switch hook that we have introduced in
> > the various schedulers.
> > 
> > The key is to let the actual switch of scheduler and the
> > remapping of the scheduler lock for the CPU (if necessary)
> > happen together (in the same critical section) protected
> > (at least) by the old scheduler lock for the CPU.
> > 
> > This also means that, in Credit2 and RTDS, we can get rid
> > of the code that was doing the scheduler lock remapping
> > in csched2_free_pdata() and rt_free_pdata(), and of their
> > triggering ASSERT-s.
> > 
> > Signed-off-by: Dario Faggioli <dario.faggi...@citrix.com>
> Similar to my comment before -- in my own tree I squashed patches 6-9
> into a single commit and found it much easier to review. :-)
> 
I understand your point.

I'll consider doing something like this for v2 (which I'm just
finishing putting together), but I'm not sure I like it.

For instance, although the issue has the same roots and similar
consequences for all schedulers, the actual race is different between
Credit1 and Credit2 (RTDS is the same as Credit2), and having distinct
patches for each scheduler allows me to describe both situations in
detail, in their respective changelogs, without the changelogs
themselves becoming too long (they're quite long already!!).

> One important question...
> > 
> > --- a/xen/common/schedule.c
> > +++ b/xen/common/schedule.c
> > 
> > @@ -1652,17 +1661,20 @@ int schedule_cpu_switch(unsigned int cpu,
> > struct cpupool *c)
> >          return -ENOMEM;
> >      }
> >  
> > -    lock = pcpu_schedule_lock_irq(cpu);
> > -
> >      SCHED_OP(old_ops, tick_suspend, cpu);
> > +
> > +    /*
> > +     * The actual switch, including (if necessary) the rerouting of the
> > +     * scheduler lock to whatever new_ops prefers, needs to happen in one
> > +     * critical section, protected by old_ops' lock, or races are possible.
> > +     * Since each scheduler has its own constraints and locking scheme, do
> > +     * that inside specific scheduler code, rather than here.
> > +     */
> >      vpriv_old = idle->sched_priv;
> > -    idle->sched_priv = vpriv;
> > -    per_cpu(scheduler, cpu) = new_ops;
> >      ppriv_old = per_cpu(schedule_data, cpu).sched_priv;
> > -    per_cpu(schedule_data, cpu).sched_priv = ppriv;
> > -    SCHED_OP(new_ops, tick_resume, cpu);
> > +    SCHED_OP(new_ops, switch_sched, cpu, ppriv, vpriv);
> Is it really safe to read sched_priv without the lock held?
> 
So, you put down a lot more reasoning on this issue in another email,
and I'll reply in more length to that one.

But just about this specific point. We're in schedule_cpu_switch(), and
schedule_cpu_switch() is indeed the only function that changes the
content of sd->sched_priv while the system is _live_. It reads the
old pointer, stashes it, allocates the new one, assigns it, and frees
the old one. It's therefore only because of this function that a race
can happen.

In fact, the only other situation where sched_priv changes is during
cpu bringup (CPU_UP_PREPARE phase) or teardown. But those cases are
not of much concern (and, in fact, there's no locking there,
independently of this series).

Now, schedule_cpu_switch is called by:

1 cpupool.c  cpupool_assign_cpu_locked    268 ret = schedule_cpu_switch(cpu, c);
2 cpupool.c  cpupool_unassign_cpu_helper  325 ret = schedule_cpu_switch(cpu, NULL);

to move the cpu into or out of a cpupool, and in both cases we have
already taken the cpupool_lock spinlock when calling it.

So, yes, it looks to me like sched_priv is safe to manipulate the way
the patch does... Am I overlooking something?

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
