On Fri, Jun 20, 2014 at 07:59:58PM -0700, Paul E. McKenney wrote:
> Commit ac1bea85781e (Make cond_resched() report RCU quiescent states)
> fixed a problem where a CPU looping in the kernel with but one runnable
> task would give RCU CPU stall warnings, even if the in-kernel loop
> contained cond_resched() calls. Unfortunately, in so doing, it introduced
> performance regressions in Anton Blanchard's will-it-scale "open1" test.
> The problem appears to be not so much the increased cond_resched() path
> length as an increase in the rate at which grace periods complete, which
> increased per-update grace-period overhead.
>
> This commit takes a different approach to fixing this bug, mainly by
> moving the RCU-visible quiescent state from cond_resched() to
> rcu_note_context_switch(), and by further reducing the check to a
> simple non-zero test of a single per-CPU variable. However, this
> approach requires that the force-quiescent-state processing send
> resched IPIs to the offending CPUs. These will be sent only once
> the grace period has reached an age specified by the boot/sysfs
> parameter rcutree.jiffies_till_sched_qs, or once the grace period
> reaches an age halfway to the point at which RCU CPU stall warnings
> will be emitted, whichever comes first.
Right, and I suppose the force quiescent stuff is triggered from the tick, which in turn wakes some of these rcu kthreads, which on UP would cause scheduling themselves.

On the topic of these threads; I recently noticed RCU grew a metric ton of them, I found some 75 rcu kthreads on my box, wth up with that?
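To make the mechanism in the quoted changelog concrete, here is a rough kernel-style sketch of the shape Paul describes: force-quiescent-state processing sets a per-CPU flag (and backs it up with a resched IPI) once the grace period is old enough, and the context-switch path does a single non-zero test of that flag. The names (rcu_urgent_qs_sketch and the helpers around it) are illustrative only, not the actual symbols from the patch:

/* Sketch only; the real code lives in kernel/rcu/tree.c. */
#include <linux/percpu.h>
#include <linux/smp.h>

/* Set by FQS processing when this CPU is holding up the grace period. */
static DEFINE_PER_CPU(int, rcu_urgent_qs_sketch);

/*
 * Fast path, called from rcu_note_context_switch(): a single non-zero
 * test of one per-CPU variable, so the common case is one load and a
 * not-taken branch.
 */
static inline void rcu_qs_check_sketch(void)
{
	if (unlikely(__this_cpu_read(rcu_urgent_qs_sketch))) {
		__this_cpu_write(rcu_urgent_qs_sketch, 0);
		/* ...report the quiescent state to the RCU core... */
	}
}

/*
 * Slow path, called from force-quiescent-state processing once the
 * grace period is older than jiffies_till_sched_qs (or halfway to the
 * stall-warning timeout, whichever comes first): flag the CPU and kick
 * it with a resched IPI so it passes through the fast path above.
 */
static void rcu_poke_cpu_sketch(int cpu)
{
	per_cpu(rcu_urgent_qs_sketch, cpu) = 1;
	smp_send_reschedule(cpu);
}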