On Wed, Jun 24, 2015 at 03:43:37PM +0200, Ingo Molnar wrote:
> 
> * Paul E. McKenney <paul...@linux.vnet.ibm.com> wrote:
> 
> > On Wed, Jun 24, 2015 at 10:42:48AM +0200, Ingo Molnar wrote:
> > > 
> > > * Peter Zijlstra <pet...@infradead.org> wrote:
> > > 
> > > > On Tue, Jun 23, 2015 at 11:26:26AM -0700, Paul E. McKenney wrote:
> > > > > > 
> > > > > > I really think you're making that expedited nonsense far too
> > > > > > accessible.
> > > > > 
> > > > > This has nothing to do with accessibility and everything to do
> > > > > with robustness.  And with me not becoming the triage center for
> > > > > too many non-RCU bugs.
> > > > 
> > > > But by making it so you're rewarding abuse instead of flagging it :-(
> > > 
> > > Btw., being a 'triage center' is the bane of APIs that are overly
> > > successful, so we should take that burden with pride! :-)
> > 
> > I will gladly accept that compliment.
> > 
> > And the burden.  But, lazy as I am, I intend to automate it.  ;-)
> 
> lol :)
> 
> > > Lockdep (and the scheduler APIs as well) frequently got into such
> > > situations as well, and we mostly solved it by being more informative
> > > with debug splats.
> > > 
> > > I don't think a kernel API should (ever!) stay artificially silent,
> > > just for fear of flagging too many problems in other code.
> > 
> > I agree, as attested by RCU CPU stall warnings, lockdep-RCU, sparse-based
> > RCU checks, and the object-debug-based checks for double call_rcu().
> > That said, in all of these cases, including your example of lockdep,
> > the diagnostic is a debug splat rather than a mutex-contention meltdown.
> > And it is the mutex-contention meltdown that I will continue making
> > synchronize_sched_expedited() avoid.
> > 
> > But given the change from bulk try_stop_cpus() to either stop_one_cpu()
> > or IPIs, it would not be hard to splat if a given CPU didn't come back
> > fast enough.
> > The latency tracer would of course provide better information, but
> > synchronize_sched_expedited() could do a coarse-grained job with less
> > setup required.
> > 
> > My first guess for the timeout would be something like 500 milliseconds.
> > Thoughts?
> 
> So I'd start with 5,000 milliseconds and observe the results first ...
Sounds good, especially when I recall that the default RCU CPU stall
warning timeout is 21,000 milliseconds...  ;-)

							Thanx, Paul