Peter Zijlstra <pet...@infradead.org> writes:

> On Tue, Jan 21, 2014 at 11:24:39AM -0800, bseg...@google.com wrote:
>> Peter Zijlstra <pet...@infradead.org> writes:
>
>> > +#ifdef CONFIG_FAIR_GROUP_SCHED
>> > +	/*
>> > +	 * If we haven't yet done put_prev_entity and the selected task is
>> > +	 * a different task than we started out with, try and touch the least
>> > +	 * amount of cfs_rq trees.
>> > +	 */
>> > +	if (prev) {
>> > +		if (prev != p) {
>> > +			pse = &prev->se;
>> > +
>> > +			while (!(cfs_rq = is_same_group(se, pse))) {
>> > +				int se_depth = se->depth;
>> > +				int pse_depth = pse->depth;
>> > +
>> > +				if (se_depth <= pse_depth) {
>> > +					put_prev_entity(cfs_rq_of(pse), pse);
>> > +					pse = parent_entity(pse);
>> > +				}
>> > +				if (se_depth >= pse_depth) {
>> > +					set_next_entity(cfs_rq_of(se), se);
>> > +					se = parent_entity(se);
>> > +				}
>> > +			}
>> >
>> > +			put_prev_entity(cfs_rq, pse);
>> > +			set_next_entity(cfs_rq, se);
>> > +		}
>
> (A)
>
>> > +		/*
>> > +		 * In case the common cfs_rq got throttled, just give up and
>> > +		 * put the stack and retry.
>> > +		 */
>> > +		if (unlikely(check_cfs_rq_runtime(cfs_rq))) {
>> > +			put_prev_task_fair(rq, p);
>> > +			prev = NULL;
>> > +			goto again;
>> > +		}
>>
>> This double-calls put_prev_entity on any non-common cfs_rqs and ses,
>> which means double __enqueue_entity, among other things. Just doing the
>> put_prev loop from se->parent should fix that.
>
> I'm not seeing that: at point (A) we've completely switched over from
> @prev to @p, we've put all pse until the common parent and set all se
> back to @p.
>
> So if we then do put_prev_task_fair(rq, p), we simply undo all the
> set_next_entity(se) we just did, and continue from the common parent
> upwards.
>
>> However, any sort of abort means that we may have already done
>> set_next_entity on some children, which even with the changes to
>> pick_next_entity will cause problems, up to and including double
>> __dequeue_entity I think.
>
> But the abort is only done after we've completely set up @p as the
> current task.
>
> Yes, completely tearing it down again is probably a waste, but given
> that bandwidth enforcement should be rare, I didn't want to
> complicate things even further for rare cases.

Ah, I missed that this was p, not prev. That makes a lot more sense, and
I agree that this should be fine.
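
Right, and since put_prev_task_fair() is (more or less) just the
bottom-up put walk over the task's own se hierarchy, calling it on @p at
that point only undoes the set_next_entity() calls the fast path just
did; @prev's entities were already put on the way up to the common
parent and never get touched again. Roughly this, from memory, so modulo
details:

	static void put_prev_task_fair(struct rq *rq, struct task_struct *prev)
	{
		struct sched_entity *se = &prev->se;
		struct cfs_rq *cfs_rq;

		/* put each entity from the task's own se up through its parents */
		for_each_sched_entity(se) {
			cfs_rq = cfs_rq_of(se);
			put_prev_entity(cfs_rq, se);
		}
	}

So the only cost of the abort is redoing that walk, which seems fine if
bandwidth enforcement really is rare.
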
>
>> Also, this way we never do check_cfs_rq_runtime on any parents of the
>> common cfs_rq, which could even have been the reason for the resched to
>> begin with. I'm not sure if there would be any problem doing it on the
>> way down or not; I don't see any problems at a glance.
>
> Oh, so we allow a parent to have less runtime than the sum of all its
> children?

Yeah, the check is currently max(children) <= parent, and unthrottled
children are also allowed.

>
> Indeed, in that case we can miss something... we could try to call
> check_cfs_rq_runtime() from the initial top-down selection loop? When
> true, just put the entire stack and don't pretend to be smart?

Yeah, I think that should work; something like the sketch at the end of
this mail. I wasn't sure if there could be a problem with doing
throttle_cfs_rq(parent); throttle_cfs_rq(child);, but thinking about it,
that has to be ok, because schedule can already do that: deactivate can
throttle the parent, schedule calls update_rq_clock, and then put_prev
can throttle the child.
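
For the top-down check, I'm picturing something like the below. This is
only a sketch against the shape of your fast-path loop: "simple" stands
for whatever label ends up doing the full put-the-stack-and-repick path,
and the curr/update_curr handling is just illustrative.

	do {
		struct sched_entity *curr = cfs_rq->curr;

		/*
		 * prev hasn't been put yet, so cfs_rq->curr may still be a
		 * runnable entity on this level; keep its clock and runtime
		 * accounting up to date before asking about bandwidth.
		 */
		if (curr) {
			if (curr->on_rq)
				update_curr(cfs_rq);
			else
				curr = NULL;

			/*
			 * Throttling dequeues the group se from the parent
			 * cfs_rq(s), so if any cfs_rq on the way down hits
			 * its quota, stop being clever: put everything and
			 * fall back to the unoptimized pick.
			 */
			if (unlikely(check_cfs_rq_runtime(cfs_rq)))
				goto simple;
		}

		se = pick_next_entity(cfs_rq, curr);
		cfs_rq = group_cfs_rq(se);
	} while (cfs_rq);

That way the parents of the common cfs_rq get checked as well, and the
slow path never has to deal with a half-switched se stack.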