On Wed, Jul 01, 2015 at 08:43:54AM -0700, Josh Triplett wrote:
> On Tue, Jun 30, 2015 at 08:37:01PM -0700, Paul E. McKenney wrote:
> > On Tue, Jun 30, 2015 at 05:42:14PM -0700, Josh Triplett wrote:
> > > On Tue, Jun 30, 2015 at 05:15:58PM -0700, Paul E. McKenney wrote:
> > > > On Tue, Jun 30, 2015 at 04:46:33PM -0700, j...@joshtriplett.org wrote:
> > > > > On Tue, Jun 30, 2015 at 03:12:24PM -0700, Paul E. McKenney wrote:
> > > > > > On Tue, Jun 30, 2015 at 03:00:15PM -0700, j...@joshtriplett.org wrote:
> > > > > > > On Tue, Jun 30, 2015 at 02:48:05PM -0700, Paul E. McKenney wrote:
> > > > > > > > Hello!
> > > > > > > >
> > > > > > > > This series contains some highly experimental patches that allow normal
> > > > > > > > grace periods to take advantage of the work done by concurrent expedited
> > > > > > > > grace periods. This can reduce the overhead incurred by normal grace
> > > > > > > > periods by eliminating the need for force-quiescent-state scans that
> > > > > > > > would otherwise have happened after the expedited grace period completed.
> > > > > > > > It is not clear whether this is a useful tradeoff. Nevertheless, this
> > > > > > > > series contains the following patches:
> > > > > > >
> > > > > > > While it makes sense to avoid unnecessarily delaying a normal grace
> > > > > > > period if the expedited machinery has provided the necessary delay, I'm
> > > > > > > also *deeply* concerned that this will create a new class of
> > > > > > > nondeterministic performance issues. Something that uses RCU may
> > > > > > > perform badly due to grace period latency, but then suddenly start
> > > > > > > performing well because an unrelated task starts hammering expedited
> > > > > > > grace periods. This seems particularly likely during boot, for
> > > > > > > instance, where RCU grace periods can be a significant component of boot
> > > > > > > time (when you're trying to boot to userspace in small fractions of a
> > > > > > > second).
> > > > > >
> > > > > > I will take that as another vote against. And for a reason that I had
> > > > > > not yet come up with, so good show! ;-)
> > > > >
> > > > > Consider it a fairly weak concern against. Increasing performance seems
> > > > > like a good thing in general; I just don't relish the future "feels less
> > > > > responsive" bug reports that take a long time to track down and turn out
> > > > > to be "this completely unrelated driver was loaded and started using
> > > > > expedited grace periods".
> > > >
> > > > From what I can see, this one needs a good reason to go in, as opposed
> > > > to a good reason to stay out.
> > > >
> > > > > Then again, perhaps the more relevant concern would be why drivers use
> > > > > expedited grace periods in the first place.
> > > >
> > > > Networking uses expedited grace periods when RTNL is held to reduce
> > > > contention on that lock.
> > >
> > > Wait, what? Why is anything using traditional (non-S) RCU while *any*
> > > lock is held?
> >
> > In their defense, it is a sleeplock that is never taken except when
> > rearranging networking configuration. Sometimes they need a grace period
> > under the lock. So synchronize_net() checks to see if RTNL is held, and
> > does a synchronize_rcu_expedited() if so and a synchronize_rcu() if not.
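[For reference, a minimal sketch of the check described above. The real
synchronize_net() lives in net/core/dev.c; this is a paraphrase of the
behavior Paul describes, not a verbatim copy of any particular kernel tree.]

#include <linux/rtnetlink.h>
#include <linux/rcupdate.h>

/*
 * Wait for an RCU grace period.  If the caller holds RTNL, use the
 * expedited primitive so that the grace period does not inflate the
 * time the lock is held; otherwise use a normal grace period.
 */
void synchronize_net(void)
{
	if (rtnl_is_locked())
		synchronize_rcu_expedited();
	else
		synchronize_rcu();
}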
> >
> > But maybe I am misunderstanding your question?
>
> No, you understood my question. It seems wrong that they would need a
> grace period *under* the lock, rather than the usual case of making a
> change under the lock, dropping the lock, and *then* synchronizing.
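[A sketch of the usual pattern Josh refers to. The structure foo, the
pointer global_foo, and foo_replace() are hypothetical names used only
for illustration; the point is that the grace period falls outside the
RTNL critical section, so it does not extend the lock hold time.]

#include <linux/rtnetlink.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical RCU-protected state, purely for illustration. */
struct foo {
	int data;
};
static struct foo __rcu *global_foo;

/*
 * Replace global_foo: mutate and publish under RTNL, drop the lock,
 * and only then wait for pre-existing readers before freeing.
 */
static void foo_replace(struct foo *new_item)
{
	struct foo *old;

	rtnl_lock();
	old = rtnl_dereference(global_foo);
	rcu_assign_pointer(global_foo, new_item);
	rtnl_unlock();

	synchronize_rcu();	/* readers drained after RTNL is released */
	kfree(old);
}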
On that I must defer to the networking folks.

							Thanx, Paul