On Thu, 10 Oct 2013 08:27:41 +0200 Ingo Molnar <mi...@kernel.org> wrote:
> * Andrew Morton <a...@linux-foundation.org> wrote:
> 
> > On Tue, 08 Oct 2013 12:25:05 +0200 Peter Zijlstra <pet...@infradead.org> wrote:
> > 
> > > The current cpu hotplug lock is a single global lock; therefore
> > > excluding hotplug is a very expensive proposition even though it is
> > > rare occurrence under normal operation.
> > > 
> > > There is a desire for a more light weight implementation of
> > > {get,put}_online_cpus() from both the NUMA scheduling as well as the
> > > -RT side.
> > > 
> > > The current hotplug lock is a full reader preference lock -- and thus
> > > supports reader recursion. However since we're making the read side
> > > lock much cheaper it is the expectation that it will also be used far
> > > more. Which in turn would lead to writer starvation.
> > > 
> > > Therefore the new lock proposed is completely fair; albeit somewhat
> > > expensive on the write side. This in turn means that we need a
> > > per-task nesting count to support reader recursion.
> > 
> > This is a lot of code and a lot of new complexity. It needs some pretty
> > convincing performance numbers to justify its inclusion, no?
> 
> Should be fairly straightforward to test: the sys_sched_getaffinity() and
> sys_sched_setaffinity() syscalls both make use of
> get_online_cpus()/put_online_cpus(), so a testcase frobbing affinities on
> N CPUs in parallel ought to demonstrate scalability improvements pretty
> nicely.

Well, an in-kernel microbenchmark which camps in a loop doing get/put
would measure this as well.

But neither approach answers the question "how useful is this patchset".
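
For the affinity-frobbing testcase Ingo describes, something along these
lines would do it -- a minimal userspace sketch, untested, with the thread
and iteration counts picked arbitrarily.  It just puts N threads into a
tight sched_setaffinity()/sched_getaffinity() loop so the hotplug read
lock gets hammered from many CPUs at once:

/*
 * Untested sketch: N threads frobbing affinities in parallel, so that
 * sys_sched_{get,set}affinity() repeatedly take and drop the hotplug
 * read lock (get_online_cpus()/put_online_cpus()).
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS	64		/* arbitrary */
#define ITERATIONS	100000		/* arbitrary */

static void *frob_affinity(void *arg)
{
	long cpu = (long)arg % sysconf(_SC_NPROCESSORS_ONLN);
	cpu_set_t mask;
	int i;

	for (i = 0; i < ITERATIONS; i++) {
		CPU_ZERO(&mask);
		CPU_SET(cpu, &mask);
		/* both syscalls go through get/put_online_cpus() */
		if (sched_setaffinity(0, sizeof(mask), &mask))
			perror("sched_setaffinity");
		if (sched_getaffinity(0, sizeof(mask), &mask))
			perror("sched_getaffinity");
	}
	return NULL;
}

int main(void)
{
	pthread_t threads[NTHREADS];
	long i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&threads[i], NULL, frob_affinity, (void *)i);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(threads[i], NULL);
	return 0;
}

Build with "gcc -O2 -pthread" and time it before/after the patchset while
varying NTHREADS; the interesting number is how the runtime scales with
the thread count.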
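
And for the in-kernel loop, a minimal sketch (again untested, module name
and loop count made up) would be a throwaway module that times a tight
get/put loop from its init path:

/*
 * Untested sketch: time a loop of get_online_cpus()/put_online_cpus()
 * pairs to measure the raw read-side cost.
 */
#include <linux/module.h>
#include <linux/cpu.h>
#include <linux/ktime.h>

static int __init hotplug_lock_bench_init(void)
{
	const unsigned long loops = 1000000;	/* arbitrary */
	unsigned long i;
	ktime_t start, end;

	start = ktime_get();
	for (i = 0; i < loops; i++) {
		get_online_cpus();
		put_online_cpus();
	}
	end = ktime_get();

	pr_info("hotplug_lock_bench: %lu get/put pairs in %lld ns\n",
		loops, ktime_to_ns(ktime_sub(end, start)));

	/* benchmark-only module: fail init so it never stays loaded */
	return -EAGAIN;
}
module_init(hotplug_lock_bench_init);
MODULE_LICENSE("GPL");

That only shows the single-threaded cost, though; running the same loop
from a kthread per CPU would be needed to say anything about scalability,
which is really what the affinity testcase above gets at.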