On Wed, Oct 09, 2013 at 10:50:06PM -0700, Andrew Morton wrote:
> On Tue, 08 Oct 2013 12:25:05 +0200 Peter Zijlstra <pet...@infradead.org> wrote:
>
> > The current cpu hotplug lock is a single global lock; therefore excluding
> > hotplug is a very expensive proposition even though it is a rare occurrence
> > under normal operation.
> >
> > There is a desire for a more lightweight implementation of
> > {get,put}_online_cpus() from both the NUMA scheduling as well as the -RT
> > side.
> >
> > The current hotplug lock is a full reader preference lock -- and thus
> > supports reader recursion. However, since we're making the read side lock
> > much cheaper it is the expectation that it will also be used far more.
> > Which in turn would lead to writer starvation.
> >
> > Therefore the new lock proposed is completely fair; albeit somewhat
> > expensive on the write side. This in turn means that we need a per-task
> > nesting count to support reader recursion.
>
> This is a lot of code and a lot of new complexity. It needs some pretty
> convincing performance numbers to justify its inclusion, no?
And here I thought it was generally understood to be unwise to bash
global state on anything like a regular basis from every cpu.

The NUMA bits really ought to use get_online_cpus()/put_online_cpus()
on every balance pass; which is about once a second on every cpu.

RT -- which has some quite horrible hotplug hacks due to this --
basically takes get_online_cpus() for every spin_lock/spin_unlock in
the kernel.

But the thing is, our sense of NR_CPUS has shifted. Where it used to be
OK to do something like:

  for_each_cpu()

with preemption disabled, it gets to be less and less sane to do so,
simply because 'common' hardware has 256+ CPUs these days.

If we cannot rely on preempt disable to exclude hotplug, we must use
get_online_cpus(); but get_online_cpus() is global state and thus
cannot be used at any sort of frequency.
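To make that concrete, the two patterns look something like the sketch
below (illustrative only -- do_something() is a made-up stand-in for
whatever per-cpu work the caller does):

	/* Old style: disabling preemption has traditionally excluded
	 * hotplug (offline runs via stop_machine), but it holds off
	 * preemption for the entire walk -- painful with 256+ cpus. */
	static void walk_cpus_preempt(void)
	{
		int cpu;

		preempt_disable();
		for_each_online_cpu(cpu)
			do_something(cpu);	/* hypothetical per-cpu work */
		preempt_enable();
	}

	/* Preemptible alternative: take the hotplug read lock instead.
	 * Correct and preemptible, but get_online_cpus() bashes global
	 * state, so doing this once a second from every cpu won't scale. */
	static void walk_cpus_hotplug(void)
	{
		int cpu;

		get_online_cpus();
		for_each_online_cpu(cpu)
			do_something(cpu);	/* hypothetical per-cpu work */
		put_online_cpus();
	}

Both walks are safe against cpu hotplug; the difference is that the
first disables preemption for the whole loop while the second
serializes on global state -- which is exactly what cannot be done at
any sort of frequency.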