On Thu, Jun 12, 2014 at 06:35:15PM -0700, Paul E. McKenney wrote:
> On Thu, Jun 12, 2014 at 06:24:32PM -0700, Paul E. McKenney wrote:
> > On Fri, Jun 13, 2014 at 02:16:59AM +0200, Frederic Weisbecker wrote:
> > > CONFIG_NO_HZ_FULL may be enabled widely on distros nowadays but actual
> > > users should be a tiny minority, if any at all.
> > > 
> > > Also there is a risk that affining the GP kthread to a single CPU could
> > > end up noticeably reducing RCU performance and increasing energy
> > > consumption.
> > > 
> > > So let's affine the GP kthread only when nohz full is actually used
> > > (ie: when the nohz_full= parameter is set or CONFIG_NO_HZ_FULL_ALL=y)
> 
> Which reminds me...  Kernel-heavy workloads running NO_HZ_FULL_ALL=y
> can see long RCU grace periods, as in about two seconds each.  It is
> not hard for me to detect this situation.

Ah yeah, that sounds quite long.

> Is there some way I can
> call for a given CPU's scheduling-clock interrupt to be turned on?

Yeah, once the nohz kick patchset (https://lwn.net/Articles/601214/) is merged,
a simple call to tick_nohz_full_kick_cpu() should do the trick. Although the
right condition must be checked on the IPI side, maybe with rcu_needs_cpu() or
the like.
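To make that concrete, a kernel-context sketch of what I have in mind is below. This is not a real patch: tick_nohz_full_kick_cpu() only exists once the nohz kick series lands, and the call site and the use of rcu_needs_cpu() as the condition are assumptions on my part.

```c
/*
 * Hedged sketch, not a tested patch: let RCU ask a nohz_full CPU to
 * restore its scheduling-clock tick when that CPU still owes RCU work.
 * The helper name rcu_kick_nohz_holdout() is made up; the condition
 * checked on the IPI side (rcu_needs_cpu() or similar) is an assumption.
 */
static void rcu_kick_nohz_holdout(int cpu)
{
	if (tick_nohz_full_cpu(cpu))		/* only nohz_full CPUs need a kick */
		tick_nohz_full_kick_cpu(cpu);	/* IPI; the target re-evaluates
						 * its tick state, where something
						 * like rcu_needs_cpu() would keep
						 * the tick running */
}
```

The grace-period kthread could call this for each holdout CPU it detects, but where exactly to hook it in is part of what needs discussing.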

But it would be interesting to identify the sources of these extended grace
periods. If we only restart the tick, we may be ignoring some deeper
outstanding issue.

Thanks.

> 
> I believe that the nsproxy guys were seeing something like this as well.
> 
>                                                       Thanx, Paul