hat, make sure that base->next_expiry doesn't get below
base->clk.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Anna-Maria Gleixner
Cc: Juri Lelli
---
kernel/time/timer.c | 17 ++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/kernel/time/timer
So far next expiry was only tracked while the CPU was in nohz_idle mode
in order to cope with missing ticks that can't increment the base->clk
periodically anymore.
We are going to expand that logic beyond nohz in order to spare timer
softirqs, so do it unconditionally.
Signed-off-by: Frede
On Mon, Jun 29, 2020 at 02:36:51PM +0200, Juri Lelli wrote:
> Hi,
>
> On 16/06/20 22:46, Frederic Weisbecker wrote:
> > On Tue, Jun 16, 2020 at 08:57:57AM +0200, Juri Lelli wrote:
> > > Sure. Let me know if you find anything.
> >
> > I managed to reproduce. Wi
On Tue, Jun 23, 2020 at 03:23:29PM -0400, Nitesh Narayan Lal wrote:
> From: Alex Belits
>
> The current implementation of cpumask_local_spread() does not respect the
> isolated CPUs, i.e., even if a CPU has been isolated for a Real-Time task,
> it will return it to the caller for pinning of its
; cpu = nr_cpu_ids;
> else
> - cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);
> + cpu = cpumask_any_and(cpumask_of_node(node),
> + housekeeping_cpumask(hk_flags));
Looks like cpumask_o
On Tue, Jun 16, 2020 at 08:57:57AM +0200, Juri Lelli wrote:
> Sure. Let me know if you find anything.
I managed to reproduce. With "threadirqs" and without
"tsc=reliable". I see tons of spurious TIMER softirqs.
Investigation begins! I'll let you know.
Thanks!
On Thu, May 21, 2020 at 07:00:20PM +0200, Juri Lelli wrote:
> ksoftirqd/3-26 [003] 99.942485: timer_expire_entry:
> timer=0xa55a9d20 function=clocksource_watchdog now=4294759328
> baseclk=4294759328
> ksoftirqd/3-26 [003] 99.942489: timer_start:
>
ll not do anything stupid even
> if rcu_irq_enter() has not been invoked.
>
> Fixes: 3eeec3858488 ("x86/entry: Provide idtentry_entry/exit_cond_rcu()")
> Reported-by: "Paul E. McKenney"
> Signed-off-by: Thomas Gleixner
Acked-by: Frederic Weisbecker
So, in the end the call to rcu_irq_enter() in irq_enter() is going to
be useless on x86, right?
On Wed, Jun 10, 2020 at 07:02:10AM -0700, Paul E. McKenney wrote:
> And just to argue against myself...
>
> Another approach is to maintain explicit multiple states for each
> ->cblist, perhaps something like this:
>
> 1. In softirq. Transition code advances to next.
> 2. To no-CB 1.
Looks good. I don't have a better idea.
Thanks!
Reviewed-by: Frederic Weisbecker
On Tue, Jun 09, 2020 at 11:02:27AM -0700, Paul E. McKenney wrote:
> > > > And anyway we still want to unconditionally lock on many places,
> > > > regardless of the offloaded state. I don't know how we could have
> > > > a magic helper do the unconditional lock on some places and the
> > > >
On Thu, Jun 04, 2020 at 09:36:55AM -0700, Paul E. McKenney wrote:
> On Thu, Jun 04, 2020 at 01:41:22PM +0200, Frederic Weisbecker wrote:
> > On Fri, May 22, 2020 at 10:57:39AM -0700, Paul E. McKenney wrote:
> > > On Wed, May 20, 2020 at 08:29:49AM -0400, Joel Fernandes wrote
On Fri, Jun 05, 2020 at 11:37:04AM +0200, Peter Zijlstra wrote:
> On Fri, May 29, 2020 at 03:36:41PM +0200, Peter Zijlstra wrote:
> > Maybe I can anonymous-union my way around it, dunno. I'll think about
> > it. I'm certainly not proud of this. But at least the BUILD_BUG_ON()s
> > should catch the
On Tue, May 26, 2020 at 05:20:17PM -0400, Joel Fernandes wrote:
> On Wed, May 13, 2020 at 06:47:12PM +0200, Frederic Weisbecker wrote:
> > Allow a CPU's rdp to quit the callback offlined mode.
>
> nit: s/offlined/offloaded/ ?
Oh, looks like I did that everywhere :)
>
>
On Tue, May 26, 2020 at 06:49:08PM -0400, Joel Fernandes wrote:
> On Tue, May 26, 2020 at 05:20:17PM -0400, Joel Fernandes wrote:
>
> > > The switch happens on the target with IRQs disabled and rdp->nocb_lock
> > > held to avoid races between local callbacks handling and kthread
> > > offloaded
On Fri, May 22, 2020 at 10:57:39AM -0700, Paul E. McKenney wrote:
> On Wed, May 20, 2020 at 08:29:49AM -0400, Joel Fernandes wrote:
> > Reviewed-by: Joel Fernandes (Google)
>
> Thank you for looking this over, Joel!
>
> Is it feasible to make rcu_nocb_lock*() and rcu_nocb_unlock*() "do the
>
On Mon, Jun 01, 2020 at 09:52:18AM -, tip-bot2 for Peter Zijlstra wrote:
> The following commit has been merged into the sched/core branch of tip:
>
> Commit-ID: a148866489fbe243c936fe43e4525d8dbfa0318f
> Gitweb:
>
IPI callback and thus guarantees the required serialization for the
> CSD.
>
> Fixes: 90b5363acd47 ("sched: Clean up scheduler_ipi()")
> Reported-by: Qian Cai
> Signed-off-by: Peter Zijlstra (Intel)
> Signed-off-by: Ingo Molnar
> Reviewed-by: Frederic Weisbecker
>
On Tue, May 26, 2020 at 06:11:00PM +0200, Peter Zijlstra wrote:
> This ensures flush_smp_call_function_queue() is strictly about
> call_single_queue.
>
> Signed-off-by: Peter Zijlstra (Intel)
> ---
> kernel/smp.c | 17 +
> 1 file changed, 9 insertions(+), 8 deletions(-)
>
>
ding() didn't have that
llist_empty() optimization. The ordering should allow it.
Anyway,
Reviewed-by: Frederic Weisbecker
On Tue, May 26, 2020 at 06:11:02PM +0200, Peter Zijlstra wrote:
> Currently irq_work_queue_on() will issue an unconditional
> arch_send_call_function_single_ipi() and has the handler do
> irq_work_run().
>
> This is unfortunate in that it makes the IPI handler look at a second
> cacheline and it
tion
in order.
Reviewed-by: Frederic Weisbecker
(Sweet llist dance, I think I need fresh air and coffee now).
: Christoph Lameter
Signed-off-by: Frederic Weisbecker
---
include/linux/sched/isolation.h | 1 +
kernel/kthread.c | 6 --
kernel/sched/isolation.c | 3 ++-
3 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/include/linux/sched/isolation.h b/include/linux
Kthreads are harder to affine and isolate than user tasks. They can't
be placed inside cgroups/cpusets and the affinity for any newly
created kthread is always overridden from the inherited kthreadd's
affinity to system wide. Take that into account for nohz_full.
to be initialized
at setup_arch() time, way before kthreadd is created.
Suggested-by: Frederic Weisbecker
Signed-off-by: Marcelo Tosatti
Cc: Chris Friesen
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Andrew Morton
Cc: Jim Somerville
Cc: Christoph Lameter
Signed-off-by: Frederic Weisbecker
On Wed, May 27, 2020 at 12:23:23PM +0200, Vincent Guittot wrote:
> > -static void nohz_csd_func(void *info)
> > -{
> > - struct rq *rq = info;
> > + flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
>
> Why can't this be done in nohz_idle_balance() instead ?
>
> you are
uarantee this serialization.
>
> Rework the nohz_idle_balance() trigger so that the release is in the
> IPI callback and thus guarantees the required serialization for the
> CSD.
>
> Fixes: 90b5363acd47 ("sched: Clean up scheduler_ipi()")
> Reported-by: Qian Cai
> Signed-off-by
On Mon, May 25, 2020 at 03:21:05PM +0200, Peter Zijlstra wrote:
> @@ -2320,7 +2304,7 @@ static void ttwu_queue_remote(struct task_struct *p, int cpu, int wake_flags)
>
> if (llist_add(&p->wake_entry, &rq->wake_list)) {
> if (!set_nr_if_polling(rq->idle))
> -
On Thu, May 21, 2020 at 01:00:27PM +0200, Peter Zijlstra wrote:
> On Thu, May 21, 2020 at 12:49:37PM +0200, Peter Zijlstra wrote:
> > On Thu, May 21, 2020 at 11:39:39AM +0200, Peter Zijlstra wrote:
> > > On Thu, May 21, 2020 at 02:40:36AM +0200, Frederi
On Wed, May 20, 2020 at 08:47:10PM +0200, Juri Lelli wrote:
> On 20/05/20 19:02, Frederic Weisbecker wrote:
> > On Wed, May 20, 2020 at 06:49:25PM +0200, Juri Lelli wrote:
> > > On 20/05/20 18:24, Frederic Weisbecker wrote:
> > >
> > > Hummm, so I enabled
On Wed, May 20, 2020 at 02:50:56PM +0200, Peter Zijlstra wrote:
> On Tue, May 19, 2020 at 11:58:17PM -0400, Qian Cai wrote:
> > Just a head up. Repeatedly compiling kernels for a while would trigger
> > endless soft-lockups since next-20200519 on both x86_64 and powerpc.
> > .config are in,
>
>
On Wed, May 20, 2020 at 06:49:25PM +0200, Juri Lelli wrote:
> On 20/05/20 18:24, Frederic Weisbecker wrote:
>
> Hummm, so I enabled 'timer:*', anything else you think I should be
> looking at?
Are you sure you also enabled timer_expire_entry?
Because:
>
> ...
> ks
Hi Juri,
On Wed, May 20, 2020 at 04:04:02PM +0200, Juri Lelli wrote:
> After tasks enter or leave a runqueue (wakeup/block) SCHED full_nohz
> dependency is checked (via sched_update_tick_dependency()). In case tick
> can be stopped on a CPU (see sched_can_stop_tick() for details), SCHED
>
The following commit has been merged into the core/rcu branch of tip:
Commit-ID: 28f6bf9e247fe23d177cfdbf7e709270e8cc7fa6
Gitweb: https://git.kernel.org/tip/28f6bf9e247fe23d177cfdbf7e709270e8cc7fa6
Author: Frederic Weisbecker
AuthorDate: Thu, 27 Feb 2020 09:51:40 +01:00
On Mon, May 18, 2020 at 10:57:58AM +0200, Peter Zijlstra wrote:
> On Fri, May 15, 2020 at 02:34:29AM +0200, Frederic Weisbecker wrote:
> > So far setting a tick dependency on any task, including current, used to
> > trigger an IPI to all CPUs. That's of course suboptimal but it wasn
On Sun, May 17, 2020 at 08:53:22AM -0700, Paul E. McKenney wrote:
> On Sun, May 17, 2020 at 03:31:16PM +0200, Frederic Weisbecker wrote:
> > On Fri, May 15, 2020 at 08:07:18PM -0700, Paul E. McKenney wrote:
> > > On Fri, May 15, 2020 at 02:34:29AM +0200, Frederic Weisbecker wro
On Fri, May 15, 2020 at 08:07:18PM -0700, Paul E. McKenney wrote:
> On Fri, May 15, 2020 at 02:34:29AM +0200, Frederic Weisbecker wrote:
> > So far setting a tick dependency on any task, including current, used to
> > trigger an IPI to all CPUs. That's of course suboptima
On Fri, May 15, 2020 at 11:29:13PM +0200, Thomas Gleixner wrote:
> Thomas Gleixner writes:
>
> > From: Frederic Weisbecker
>
> This changelog was very empty. Here is what Peter provided:
>
> When using nmi_enter() recursively, arch_nmi_enter() must also be recursio
On Wed, May 13, 2020 at 07:28:34PM -0400, Mathieu Desnoyers wrote:
> - On May 5, 2020, at 9:16 AM, Thomas Gleixner t...@linutronix.de wrote:
>
> > +#define arch_nmi_enter() \
> [...] \
> > + ___hcr =
of callbacks)
Reported-by: Matt Fleming
Signed-off-by: Frederic Weisbecker
Cc: sta...@kernel.org
Cc: Paul E. McKenney
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
---
kernel/time/tick-sched.c | 22 +++---
1 file changed, 15 insertions(+), 7 deletions(-)
diff --git
On Thu, May 14, 2020 at 03:47:35PM -0700, Paul E. McKenney wrote:
> On Fri, May 15, 2020 at 12:30:23AM +0200, Frederic Weisbecker wrote:
> > On Thu, May 14, 2020 at 08:47:07AM -0700, Paul E. McKenney wrote:
> > > On Thu, May 14, 2020 at 12:45:26AM +0200, Frederic Weisbecker wrot
On Thu, May 14, 2020 at 08:50:32AM -0700, Paul E. McKenney wrote:
> On Thu, May 14, 2020 at 01:08:28AM +0200, Frederic Weisbecker wrote:
> > On Wed, May 13, 2020 at 11:25:27AM -0700, Paul E. McKenney wrote:
> > > On Wed, May 13, 2020 at 06:47:11PM +0200, Frederic Weisbecker wro
On Thu, May 14, 2020 at 08:47:07AM -0700, Paul E. McKenney wrote:
> On Thu, May 14, 2020 at 12:45:26AM +0200, Frederic Weisbecker wrote:
> This last seems best to me. The transition from CBLIST_NOT_OFFLOADED
> to CBLIST_OFFLOADING of course needs to be on the CPU in question with
>
On Wed, May 13, 2020 at 11:42:29AM -0700, Paul E. McKenney wrote:
> On Wed, May 13, 2020 at 06:47:14PM +0200, Frederic Weisbecker wrote:
> > Not for merge.
> >
> > Make nocb toggleable for a given CPU using:
> > /sys/devices/system/cpu/cpu*/hotplug/nocb
> >
&g
On Wed, May 13, 2020 at 11:25:27AM -0700, Paul E. McKenney wrote:
> On Wed, May 13, 2020 at 06:47:11PM +0200, Frederic Weisbecker wrote:
> > So far nohz_full CPUs had to be nocb. This requirement may change
> > temporarily as we are working on preparing RCU to be able to toggle the
On Wed, May 13, 2020 at 11:20:29AM -0700, Paul E. McKenney wrote:
> On Wed, May 13, 2020 at 06:47:08PM +0200, Frederic Weisbecker wrote:
> > This simplifies the usage of this API and avoids checking the kernel
> > config from the callers.
> >
> > Signed-off-by: Frederic
On Wed, May 13, 2020 at 11:38:31AM -0700, Paul E. McKenney wrote:
> On Wed, May 13, 2020 at 06:47:12PM +0200, Frederic Weisbecker wrote:
> > +static void __rcu_nocb_rdp_deoffload(struct rcu_data *rdp)
> > +{
> > + unsigned long flags;
> > + struct
Unconditionally lock rdp->nocb_lock in the nocb code that is called after
we have verified that the rdp is offloaded.
This clarifies the locking rules and expectations.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangs
This simplifies the usage of this API and avoids checking the kernel
config from the callers.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
---
include/linux/rcu_segcblist.h | 2
This will be necessary to correctly implement rdp de-offloading. We
don't want rcu_do_batch() in the nocb_cb kthread to race with the local
rcu_do_batch().
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel
state, make rcu_nohz_full_cpu() aware of
nohz_full CPUs that are not nocb so that they can handle the callbacks
locally.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
---
kernel/rcu
ent rcu_do_batch() executions. Then the cblist is set to offloaded
so that the nocb_gp kthread ignores this rdp.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernan
Not for merge.
Make nocb toggleable for a given CPU using:
/sys/devices/system/cpu/cpu*/hotplug/nocb
This is only intended for those who want to test this patchset. The real
interfaces will be cpuset/isolation and rcutorture.
Not-Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
This is essentially the reverse operation of de-offloading. For now it's
only supported on CPUs that used to be offloaded and therefore still have
the relevant nocb_cb/nocb_gp kthreads around.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh
Not only is it in the wrong order (rdp->nocb_lock should be unlocked after
rnp) but it's also dead code, as we are in the !offloaded path.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernan
It can either be called inline (locally or CPU hotplug locked) when
rdp->nocb_defer_wakeup is pending or from the nocb timer. In both cases
the rdp is offloaded and we want to take the nocb lock.
Clarify the locking rules and expectations.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKen
hen rdp->nocb_lock
isn't taken. We'll still want the entrypoints to lock the rdp in any
case.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
---
kernel/rcu/tree_plugin.
/git/frederic/linux-dynticks.git
rcu/nohz
HEAD: 31cb4ee9da4e9cc6314498ff22d83f0d872b1a88
Thanks,
Frederic
---
Frederic Weisbecker (10):
rcu: Directly lock rdp->nocb_lock on nocb code entrypoints
rcu: Use direct rdp->nocb_lock operations on local calls
rcu
This allows us to check if a remote CPU runs context tracking
(ie: is nohz_full). We'll need that to reliably support "nice"
accounting on kcpustat.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: I
This function is a leftover from an old removal or rename. We can drop it.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
include/linux/context_tracking_state.h | 1 -
1 file changed, 1
as a start because it's the trivial
case. User and guest time will require more preparation work to
correctly handle niceness.
Reported-by: Yauheni Kaliuta
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
include/linux/context_tracking.h | 2 +-
include/linux/context_tracking_state.h | 4 ++--
include/linux/vtime.h | 2 +-
3 files changed, 4
Now that we have a vtime safe kcpustat accessor for CPUTIME_SYSTEM, use
it to start fixing frozen kcpustat values on nohz_full CPUs.
Reported-by: Yauheni Kaliuta
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc
This allows us to check if a remote CPU runs vtime accounting
(ie: is nohz_full). We'll need that to reliably support reading kcpustat
on nohz_full CPUs.
Also simplify the condition in the local flavoured function a bit while
at it.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc
Standardize the naming on top of the vtime_accounting_enabled_*() base.
Also make it clear we are checking the vtime state of the
*current* CPU with this function. We'll need to add an API to check that
state on remote CPUs as well, so we must disambiguate the naming.
Signed-off-by: Frederic
Now that we have a vtime safe kcpustat accessor for CPUTIME_SYSTEM, use
it to start fixing frozen kcpustat values on nohz_full CPUs.
Reported-by: Yauheni Kaliuta
Signed-off-by: Frederic Weisbecker
Cc: Jacek Anaszewski
Cc: Pavel Machek
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Now that we have a vtime safe kcpustat accessor for CPUTIME_SYSTEM, use
it to start fixing frozen kcpustat values on nohz_full CPUs.
Reported-by: Yauheni Kaliuta
Signed-off-by: Frederic Weisbecker
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van
the optimization.
Reported-by: Peter Zijlstra
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
include/linux/context_tracking_state.h | 4 ++--
include/linux/vtime.h | 4 ++--
2
moving forward on full nohz CPUs.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
include/linux/sched.h | 1 +
kernel/sched/cputime.c | 3 +++
2 files changed, 4 insertions(+)
diff --git
Record guest as a VTIME state instead of guessing it from VTIME_SYS and
PF_VCPU. This is going to simplify the cputime read side especially as
its state machine is going to further expand in order to fully support
kcpustat on nohz_full.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc
Remove the superfluous "is" in the middle of the name. We want to
standardize the naming so that it can be expanded through suffixes:
context_tracking_enabled()
context_tracking_enabled_cpu()
context_tracking_enabled_this_cpu()
Signed-off-by: Frederic Weis
Record idle as a VTIME state instead of guessing it from VTIME_SYS and
is_idle_task(). This is going to simplify the cputime read side
especially as its state machine is going to further expand in order to
fully support kcpustat on nohz_full.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni
its own.
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
nohz/kcpustat-v2
HEAD: e179e89320c53a96c5d585af38126cfb124da789
Thanks,
Frederic
---
Frederic Weisbecker (14):
sched/vtime: Record CPU under seqcount for kcpustat needs
sched/cputime: Add vtime id
On Wed, Oct 02, 2019 at 06:55:35PM -0400, Scott Wood wrote:
> The way loadavg is tracked during nohz only pays attention to the load
> upon entering nohz. This can be particularly noticeable if nohz is
> entered while non-idle, and then the cpu goes idle and stays that way for
> a long time.
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: 68e7a4d66b0ce04bf18ff2ffded5596ab3618585
Gitweb: https://git.kernel.org/tip/68e7a4d66b0ce04bf18ff2ffded5596ab3618585
Author: Frederic Weisbecker
AuthorDate: Wed, 25 Sep 2019 23:42:42 +02:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 68e7a4d66b0ce04bf18ff2ffded5596ab3618585
Gitweb: https://git.kernel.org/tip/68e7a4d66b0ce04bf18ff2ffded5596ab3618585
Author: Frederic Weisbecker
AuthorDate: Wed, 25 Sep 2019 23:42:42 +02:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 8d495477d62e4397207f22a432fcaa86d9f2bc2d
Gitweb: https://git.kernel.org/tip/8d495477d62e4397207f22a432fcaa86d9f2bc2d
Author: Frederic Weisbecker
AuthorDate: Thu, 03 Oct 2019 18:17:45 +02:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: f83eeb1a01689b2691f6f56629ac9f66de8d41c2
Gitweb: https://git.kernel.org/tip/f83eeb1a01689b2691f6f56629ac9f66de8d41c2
Author: Frederic Weisbecker
AuthorDate: Thu, 03 Oct 2019 18:17:44 +02:00
On Mon, Oct 07, 2019 at 06:20:31PM +0200, Ingo Molnar wrote:
>
> * Frederic Weisbecker wrote:
>
> > Extracted from a larger queue that fixes kcpustat on nohz_full, these
> > two patches have value on their own as they remove two write barriers
> > on nohz_full contex
ing kernel cputime. Whether it belongs to guest or system time
is a lower level detail.
Rename this function to vtime_account_kernel(). This will clarify things
and avoid too many underscored vtime_account_system() versions.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
C
under vtime in the future and fetch
CPUTIME_IDLE without race.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
include/linux/vtime.h | 32
kernel/sched/cputime.c
Extracted from a larger queue that fixes kcpustat on nohz_full, these
two patches have value on their own as they remove two write barriers
on nohz_full context switch.
Frederic Weisbecker (2):
vtime: Rename vtime_account_system() to vtime_account_kernel()
vtime: Spare a seqcount lock/unlock
On Wed, Oct 02, 2019 at 06:38:59PM -0700, paul...@kernel.org wrote:
> From: "Paul E. McKenney"
>
> CPUs running for long time periods in the kernel in nohz_full mode
> might leave the scheduling-clock interrupt disabled for the full
> duration of their in-kernel execution. This can (among
On Wed, Oct 02, 2019 at 06:38:57PM -0700, paul...@kernel.org wrote:
> From: "Paul E. McKenney"
>
> CPU-hotplug removal operations run the multi_cpu_stop() function, which
> relies on the scheduler to gain control from whatever is running on the
> various online CPUs, including any nohz_full CPUs
On Wed, Oct 02, 2019 at 06:38:55PM -0700, paul...@kernel.org wrote:
> From: "Paul E. McKenney"
>
> Readers and callback flooders in the rcutorture stress-test suite run for
> extended time periods by design. They do take pains to relinquish the
> CPU from time to time, but in some cases this
On Wed, Oct 02, 2019 at 06:38:54PM -0700, paul...@kernel.org wrote:
> From: "Paul E. McKenney"
>
> Callback invocation can run for a significant time period, and within
> CONFIG_NO_HZ_FULL=y kernels, this period will be devoid of scheduler-clock
> interrupts. In-kernel execution without such
On Wed, Sep 25, 2019 at 08:49:17PM -0500, Eric W. Biederman wrote:
> Frederic Weisbecker writes:
>
> > On Sat, Sep 14, 2019 at 07:35:02AM -0500, Eric W. Biederman wrote:
> >> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> >> index 69015b7c28da..6682628
ulate vtime on top of nsec
clocksource")
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
kernel/sched/cputime.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/cputime.
, Nov 20, 2018 at 02:25:12PM +0100, Peter Zijlstra wrote:
> On Wed, Nov 14, 2018 at 03:45:48AM +0100, Frederic Weisbecker wrote:
>
> So I definitely like avoiding that superfluous atomic op, however:
>
> > @@ -730,19 +728,25 @@ static void vtime_account_guest(struct tas
eck fails.
>
> Suggested-by: Peter Zijlstra
> Signed-off-by: Thomas Gleixner
Reviewed-by: Frederic Weisbecker
xner
Reviewed-by: Frederic Weisbecker
if (!has_group_leader_pid(p))
> return NULL;
So, right after that check you should have:
if (same_thread_group(p, current))
return p;
Which I suggested to convert as:
if (p == current)
* Avoid the ptrace overhead when this is current's process
> - */
> - if (same_thread_group(p, current))
> - return p;
> + /*
> + * Avoid the ptrace overhead when this is current's process
> + */
> +
Restrict it by checking ptrace MODE_READ permissions of the reader on the
> target process.
>
> Signed-off-by: Thomas Gleixner
Reviewed-by: Frederic Weisbecker
s permissions to attach ptrace on the
> target process.
>
> Signed-off-by: Thomas Gleixner
Makes sense. I hope no serious user currently relies on that lack of
restriction. Let's just apply it and wait for complaints, if any.
Reviewed-by: Frederic Weisbecker
On Sat, Sep 14, 2019 at 07:35:02AM -0500, Eric W. Biederman wrote:
>
> The current task on the runqueue is currently read with rcu_dereference().
>
> To obtain ordinary rcu semantics for an rcu_dereference of rq->curr it needs
> to be paird with rcu_assign_pointer of rq->curr. Which provides
wouldn't
be pretty.
Reviewed-by: Frederic Weisbecker
On Thu, Sep 05, 2019 at 02:03:41PM +0200, Thomas Gleixner wrote:
> The recent consolidation of the three permission checks introduced a subtle
> regression. For timer_create() with a process wide timer it returns the
> current task if the lookup through the PID which is encoded into the
> clockid
equeue and remove the unused requeue
> function while at it.
>
> Fixes: 60bda037f1dd ("posix-cpu-timers: Utilize timerqueue for storage")
> Reported-by: syzbot+55acd54b57bb4b384...@syzkaller.appspotmail.com
> Signed-off-by: Thomas Gleixner
Reviewed-by: Frederic Weisbecker
On Thu, Sep 05, 2019 at 04:57:10PM +0200, Thomas Gleixner wrote:
> On Thu, 5 Sep 2019, Frederic Weisbecker wrote:
> > On Thu, Sep 05, 2019 at 02:03:39PM +0200, Thomas Gleixner wrote:
> > > Sysbot triggered an issue in the posix timer rework which was trivial to
> > > fix