[RFC PATCH 01/10] timer: Prevent base->clk from moving backward

2020-06-30 Thread Frederic Weisbecker
that, make sure that base->next_expiry doesn't get below base->clk. Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Anna-Maria Gleixner Cc: Juri Lelli --- kernel/time/timer.c | 17 ++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/kernel/time/timer

[RFC PATCH 05/10] timers: Always keep track of next expiry

2020-06-30 Thread Frederic Weisbecker
So far the next expiry was only tracked while the CPU was in nohz_idle mode, in order to cope with missing ticks that can't increment the base->clk periodically anymore. We are going to expand that logic beyond nohz in order to spare timer softirqs, so do it unconditionally. Signed-off-by: Frede
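
A minimal sketch of what "always keep track of next expiry" means in practice (illustration only, not the posted kernel/time/timer.c hunk; field names are borrowed from struct timer_base):

    /*
     * Illustrative sketch: update base->next_expiry on every enqueue,
     * not only while the CPU is in nohz idle mode.
     */
    static void enqueue_timer_sketch(struct timer_base *base,
                                     struct timer_list *timer,
                                     unsigned int idx)
    {
            hlist_add_head(&timer->entry, base->vectors + idx);
            __set_bit(idx, base->pending_map);

            /* Previously guarded by the nohz/idle state; doing it on every
             * enqueue lets later patches skip the timer softirq entirely
             * when nothing is due. */
            if (time_before(timer->expires, base->next_expiry))
                    base->next_expiry = timer->expires;
    }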

Re: [RFC PATCH] tick/sched: update full_nohz status after SCHED dep is cleared

2020-06-29 Thread Frederic Weisbecker
On Mon, Jun 29, 2020 at 02:36:51PM +0200, Juri Lelli wrote: > Hi, > > On 16/06/20 22:46, Frederic Weisbecker wrote: > > On Tue, Jun 16, 2020 at 08:57:57AM +0200, Juri Lelli wrote: > > > Sure. Let me know if you find anything. > > > > I managed to reproduce. Wi

Re: [Patch v3 1/3] lib: Restrict cpumask_local_spread to housekeeping CPUs

2020-06-24 Thread Frederic Weisbecker
On Tue, Jun 23, 2020 at 03:23:29PM -0400, Nitesh Narayan Lal wrote: > From: Alex Belits > > The current implementation of cpumask_local_spread() does not respect the > isolated CPUs, i.e., even if a CPU has been isolated for Real-Time task, > it will return it to the caller for pinning of its

Re: [Patch v1 2/3] PCI: prevent work_on_cpu's probe to execute on isolated CPUs

2020-06-16 Thread Frederic Weisbecker
; cpu = nr_cpu_ids; > else > - cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask); > + cpu = cpumask_any_and(cpumask_of_node(node), > + housekeeping_cpumask(hk_flags)); Looks like cpumask_o
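
The quoted hunk replaces cpu_online_mask with the housekeeping mask; a self-contained sketch of the same selection logic (the content of hk_flags is an assumption here, only cpumask_any_and() and housekeeping_cpumask() come from the diff):

    /* Sketch: pick a CPU close to `node`, but only among housekeeping CPUs. */
    static int pick_housekeeping_cpu_near(int node, enum hk_flags hk_flags)
    {
            const struct cpumask *hk_mask = housekeeping_cpumask(hk_flags);
            int cpu;

            if (node == NUMA_NO_NODE)
                    return cpumask_any(hk_mask);

            /* Prefer a housekeeping CPU on the requested node... */
            cpu = cpumask_any_and(cpumask_of_node(node), hk_mask);
            if (cpu < nr_cpu_ids)
                    return cpu;

            /* ...and fall back to any housekeeping CPU otherwise. */
            return cpumask_any(hk_mask);
    }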

Re: [RFC PATCH] tick/sched: update full_nohz status after SCHED dep is cleared

2020-06-16 Thread Frederic Weisbecker
On Tue, Jun 16, 2020 at 08:57:57AM +0200, Juri Lelli wrote: > Sure. Let me know if you find anything. I managed to reproduce. With "threadirqs" and without "tsc=reliable". I see tons of spurious TIMER softirqs. Investigation begins! I'll let you know. Thanks!

Re: [RFC PATCH] tick/sched: update full_nohz status after SCHED dep is cleared

2020-06-15 Thread Frederic Weisbecker
On Thu, May 21, 2020 at 07:00:20PM +0200, Juri Lelli wrote: > ksoftirqd/3-26[003]99.942485: timer_expire_entry: > timer=0xa55a9d20 function=clocksource_watchdog now=4294759328 > baseclk=4294759328 > ksoftirqd/3-26[003]99.942489: timer_start: >

Re: [PATCH] x86/entry: Force rcu_irq_enter() when in idle task

2020-06-12 Thread Frederic Weisbecker
will not do anything stupid even > if rcu_irq_enter() has not been invoked. > > Fixes: 3eeec3858488 ("x86/entry: Provide idtentry_entry/exit_cond_rcu()") > Reported-by: "Paul E. McKenney" > Signed-off-by: Thomas Gleixner Acked-by: Frederic Weisbecker So, in the end the call to rcu_irq_enter() in irq_enter() is going to be useless in x86, right?

Re: [PATCH 01/10] rcu: Directly lock rdp->nocb_lock on nocb code entrypoints

2020-06-10 Thread Frederic Weisbecker
On Wed, Jun 10, 2020 at 07:02:10AM -0700, Paul E. McKenney wrote: > And just to argue against myself... > > Another approach is to maintain explicit multiple states for each > ->cblist, perhaps something like this: > > 1.In softirq. Transition code advances to next. > 2.To no-CB 1.

Re: [RFC][PATCH 5/7] irq_work, smp: Allow irq_work on call_single_queue

2020-06-10 Thread Frederic Weisbecker
Looks good. I don't have a better idea. Thanks! Reviewed-by: Frederic Weisbecker

Re: [PATCH 01/10] rcu: Directly lock rdp->nocb_lock on nocb code entrypoints

2020-06-10 Thread Frederic Weisbecker
On Tue, Jun 09, 2020 at 11:02:27AM -0700, Paul E. McKenney wrote: > > > > And anyway we still want to unconditionally lock on many places, > > > > regardless of the offloaded state. I don't know how we could have > > > > a magic helper do the unconditional lock on some places and the > > > >

Re: [PATCH 01/10] rcu: Directly lock rdp->nocb_lock on nocb code entrypoints

2020-06-08 Thread Frederic Weisbecker
On Thu, Jun 04, 2020 at 09:36:55AM -0700, Paul E. McKenney wrote: > On Thu, Jun 04, 2020 at 01:41:22PM +0200, Frederic Weisbecker wrote: > > On Fri, May 22, 2020 at 10:57:39AM -0700, Paul E. McKenney wrote: > > > On Wed, May 20, 2020 at 08:29:49AM -0400, Joel Fernandes wrote

Re: [RFC][PATCH 5/7] irq_work, smp: Allow irq_work on call_single_queue

2020-06-05 Thread Frederic Weisbecker
On Fri, Jun 05, 2020 at 11:37:04AM +0200, Peter Zijlstra wrote: > On Fri, May 29, 2020 at 03:36:41PM +0200, Peter Zijlstra wrote: > > Maybe I can anonymous-union my way around it, dunno. I'll think about > > it. I'm certainly not proud of this. But at least the BUILD_BUG_ON()s > > should catch the

Re: [PATCH 08/10] rcu: Allow to deactivate nocb on a CPU

2020-06-04 Thread Frederic Weisbecker
On Tue, May 26, 2020 at 05:20:17PM -0400, Joel Fernandes wrote: > On Wed, May 13, 2020 at 06:47:12PM +0200, Frederic Weisbecker wrote: > > Allow a CPU's rdp to quit the callback offlined mode. > > nit: s/offlined/offloaded/ ? Oh, looks like I did that everywhere :) > >

Re: [PATCH 08/10] rcu: Allow to deactivate nocb on a CPU

2020-06-04 Thread Frederic Weisbecker
On Tue, May 26, 2020 at 06:49:08PM -0400, Joel Fernandes wrote: > On Tue, May 26, 2020 at 05:20:17PM -0400, Joel Fernandes wrote: > > > > The switch happens on the target with IRQs disabled and rdp->nocb_lock > > > held to avoid races between local callbacks handling and kthread > > > offloaded

Re: [PATCH 01/10] rcu: Directly lock rdp->nocb_lock on nocb code entrypoints

2020-06-04 Thread Frederic Weisbecker
On Fri, May 22, 2020 at 10:57:39AM -0700, Paul E. McKenney wrote: > On Wed, May 20, 2020 at 08:29:49AM -0400, Joel Fernandes wrote: > > Reviewed-by: Joel Fernandes (Google) > > Thank you for looking this over, Joel! > > Is it feasible to make rcu_nocb_lock*() and rcu_nocb_unlock*() "do the >

Re: [tip: sched/core] sched: Replace rq::wake_list

2020-06-02 Thread Frederic Weisbecker
On Mon, Jun 01, 2020 at 09:52:18AM -, tip-bot2 for Peter Zijlstra wrote: > The following commit has been merged into the sched/core branch of tip: > > Commit-ID: a148866489fbe243c936fe43e4525d8dbfa0318f > Gitweb: >

Re: [tip: sched/core] sched: Fix smp_call_function_single_async() usage for ILB

2020-06-01 Thread Frederic Weisbecker
IPI callback and thus guarantees the required serialization for the > CSD. > > Fixes: 90b5363acd47 ("sched: Clean up scheduler_ipi()") > Reported-by: Qian Cai > Signed-off-by: Peter Zijlstra (Intel) > Signed-off-by: Ingo Molnar > Reviewed-by: Frederic Weisbecker >

Re: [RFC][PATCH 3/7] smp: Move irq_work_run() out of flush_smp_call_function_queue()

2020-05-29 Thread Frederic Weisbecker
On Tue, May 26, 2020 at 06:11:00PM +0200, Peter Zijlstra wrote: > This ensures flush_smp_call_function_queue() is strictly about > call_single_queue. > > Signed-off-by: Peter Zijlstra (Intel) > --- > kernel/smp.c | 17 + > 1 file changed, 9 insertions(+), 8 deletions(-) > >

Re: [RFC][PATCH 4/7] smp: Optimize send_call_function_single_ipi()

2020-05-29 Thread Frederic Weisbecker
ding() didn't have that llist_empty() optimization. The ordering should allow it. Anyway, Reviewed-by: Frederic Weisbecker

Re: [RFC][PATCH 5/7] irq_work, smp: Allow irq_work on call_single_queue

2020-05-28 Thread Frederic Weisbecker
On Tue, May 26, 2020 at 06:11:02PM +0200, Peter Zijlstra wrote: > Currently irq_work_queue_on() will issue an unconditional > arch_send_call_function_single_ipi() and has the handler do > irq_work_run(). > > This is unfortunate in that it makes the IPI handler look at a second > cacheline and it

Re: [RFC][PATCH 2/7] smp: Optimize flush_smp_call_function_queue()

2020-05-28 Thread Frederic Weisbecker
tion in order. Reviewed-by: Frederic Weisbecker (Sweet llist dance, I think I need fresh air and coffee now).

[PATCH 2/2] isolcpus: Affine unbound kernel threads to housekeeping cpus

2020-05-27 Thread Frederic Weisbecker
: Christoph Lameter Signed-off-by: Frederic Weisbecker --- include/linux/sched/isolation.h | 1 + kernel/kthread.c| 6 -- kernel/sched/isolation.c| 3 ++- 3 files changed, 7 insertions(+), 3 deletions(-) diff --git a/include/linux/sched/isolation.h b/include/linux

[PATCH 0/2] sched/isolation: Isolate unbound kthreads

2020-05-27 Thread Frederic Weisbecker
Kthreads are harder to affine and isolate than user tasks. They can't be placed inside cgroups/cpusets, and the affinity of any newly created kthread is always overridden from the inherited kthreadd affinity to a system-wide one. Take that into account for nohz_full.
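
A rough sketch of the direction the two patches take (illustration only; the exact hook used by the series may differ, only housekeeping_cpumask() and the kthreadd inheritance problem come from the cover letter and diffstat):

    /*
     * Sketch: let unbound kthreads default to the housekeeping mask
     * instead of all CPUs.
     */
    static void kthread_default_affinity_sketch(struct task_struct *kthread)
    {
            const struct cpumask *mask = housekeeping_cpumask(HK_FLAG_KTHREAD);

            set_cpus_allowed_ptr(kthread, mask);
    }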

[PATCH 1/2] kthread: Switch to cpu_possible_mask

2020-05-27 Thread Frederic Weisbecker
to be initialized at setup_arch() time, way before kthreadd is created. Suggested-by: Frederic Weisbecker Signed-off-by: Marcelo Tosatti Cc: Chris Friesen Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Andrew Morton Cc: Jim Somerville Cc: Christoph Lameter Signed-off-by: Frederic Weisbecker

Re: [RFC][PATCH 1/7] sched: Fix smp_call_function_single_async() usage for ILB

2020-05-27 Thread Frederic Weisbecker
On Wed, May 27, 2020 at 12:23:23PM +0200, Vincent Guittot wrote: > > -static void nohz_csd_func(void *info) > > -{ > > - struct rq *rq = info; > > + flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu)); > > Why can't this be done in nohz_idle_balance() instead ? > > you are

Re: [RFC][PATCH 1/7] sched: Fix smp_call_function_single_async() usage for ILB

2020-05-26 Thread Frederic Weisbecker
guarantee this serialization. > > Rework the nohz_idle_balance() trigger so that the release is in the > IPI callback and thus guarantees the required serialization for the > CSD. > > Fixes: 90b5363acd47 ("sched: Clean up scheduler_ipi()") > Reported-by: Qian Cai > Signed-off-by

Re: Endless soft-lockups for compiling workload since next-20200519

2020-05-25 Thread Frederic Weisbecker
On Mon, May 25, 2020 at 03:21:05PM +0200, Peter Zijlstra wrote: > @@ -2320,7 +2304,7 @@ static void ttwu_queue_remote(struct task_struct *p, > int cpu, int wake_flags) > > if (llist_add(&p->wake_entry, &rq->wake_list)) { > if (!set_nr_if_polling(rq->idle)) > -

Re: Endless soft-lockups for compiling workload since next-20200519

2020-05-21 Thread Frederic Weisbecker
On Thu, May 21, 2020 at 01:00:27PM +0200, Peter Zijlstra wrote: > On Thu, May 21, 2020 at 12:49:37PM +0200, Peter Zijlstra wrote: > > On Thu, May 21, 2020 at 11:39:39AM +0200, Peter Zijlstra wrote: > > > On Thu, May 21, 2020 at 02:40:36AM +0200, Frederi

Re: [RFC PATCH] tick/sched: update full_nohz status after SCHED dep is cleared

2020-05-20 Thread Frederic Weisbecker
On Wed, May 20, 2020 at 08:47:10PM +0200, Juri Lelli wrote: > On 20/05/20 19:02, Frederic Weisbecker wrote: > > On Wed, May 20, 2020 at 06:49:25PM +0200, Juri Lelli wrote: > > > On 20/05/20 18:24, Frederic Weisbecker wrote: > > > > > > Hummm, so I enabled

Re: Endless soft-lockups for compiling workload since next-20200519

2020-05-20 Thread Frederic Weisbecker
On Wed, May 20, 2020 at 02:50:56PM +0200, Peter Zijlstra wrote: > On Tue, May 19, 2020 at 11:58:17PM -0400, Qian Cai wrote: > > Just a head up. Repeatedly compiling kernels for a while would trigger > > endless soft-lockups since next-20200519 on both x86_64 and powerpc. > > .config are in, > >

Re: [RFC PATCH] tick/sched: update full_nohz status after SCHED dep is cleared

2020-05-20 Thread Frederic Weisbecker
On Wed, May 20, 2020 at 06:49:25PM +0200, Juri Lelli wrote: > On 20/05/20 18:24, Frederic Weisbecker wrote: > > Hummm, so I enabled 'timer:*', anything else you think I should be > looking at? Are you sure you also enabled timer_expire_entry? Because: > > ... > ks

Re: [RFC PATCH] tick/sched: update full_nohz status after SCHED dep is cleared

2020-05-20 Thread Frederic Weisbecker
Hi Juri, On Wed, May 20, 2020 at 04:04:02PM +0200, Juri Lelli wrote: > After tasks enter or leave a runqueue (wakeup/block) SCHED full_nohz > dependency is checked (via sched_update_tick_dependency()). In case tick > can be stopped on a CPU (see sched_can_stop_tick() for details), SCHED >
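
For reference, the dependency update the quoted changelog refers to boils down to the following (simplified from kernel/sched/core.c, shown here as an illustration):

    static void sched_update_tick_dependency_sketch(struct rq *rq)
    {
            int cpu = cpu_of(rq);

            if (!tick_nohz_full_cpu(cpu))
                    return;

            /* One runnable task (or none): the tick can be stopped. */
            if (sched_can_stop_tick(rq))
                    tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
            else
                    tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
    }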

[tip: core/rcu] arm64: Prepare arch_nmi_enter() for recursion

2020-05-19 Thread tip-bot2 for Frederic Weisbecker
The following commit has been merged into the core/rcu branch of tip: Commit-ID: 28f6bf9e247fe23d177cfdbf7e709270e8cc7fa6 Gitweb: https://git.kernel.org/tip/28f6bf9e247fe23d177cfdbf7e709270e8cc7fa6 Author: Frederic Weisbecker AuthorDate: Thu, 27 Feb 2020 09:51:40 +01:00

Re: [PATCH] tick/nohz: Narrow down noise while setting current task's tick dependency

2020-05-18 Thread Frederic Weisbecker
On Mon, May 18, 2020 at 10:57:58AM +0200, Peter Zijlstra wrote: > On Fri, May 15, 2020 at 02:34:29AM +0200, Frederic Weisbecker wrote: > > So far setting a tick dependency on any task, including current, used to > > trigger an IPI to all CPUs. That's of course suboptimal but it wasn
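
The gist of the fix under discussion: when the task setting the dependency is current, no broadcast IPI is needed at all. A rough sketch of that idea (function names and the fallback path are illustrative, not the posted patch):

    void tick_dep_set_task_sketch(struct task_struct *tsk, enum tick_dep_bits bit)
    {
            /* Nothing to do if the dependency bit was already set. */
            if (atomic_fetch_or(BIT(bit), &tsk->tick_dep_mask) & BIT(bit))
                    return;

            if (tsk == current) {
                    /* Re-evaluate the local tick only. */
                    preempt_disable();
                    tick_nohz_full_kick();
                    preempt_enable();
            } else {
                    /* Old, noisy behaviour kept as a conservative fallback. */
                    tick_nohz_full_kick_all();
            }
    }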

Re: [PATCH] tick/nohz: Narrow down noise while setting current task's tick dependency

2020-05-17 Thread Frederic Weisbecker
On Sun, May 17, 2020 at 08:53:22AM -0700, Paul E. McKenney wrote: > On Sun, May 17, 2020 at 03:31:16PM +0200, Frederic Weisbecker wrote: > > On Fri, May 15, 2020 at 08:07:18PM -0700, Paul E. McKenney wrote: > > > On Fri, May 15, 2020 at 02:34:29AM +0200, Frederic Weisbecker wro

Re: [PATCH] tick/nohz: Narrow down noise while setting current task's tick dependency

2020-05-17 Thread Frederic Weisbecker
On Fri, May 15, 2020 at 08:07:18PM -0700, Paul E. McKenney wrote: > On Fri, May 15, 2020 at 02:34:29AM +0200, Frederic Weisbecker wrote: > > So far setting a tick dependency on any task, including current, used to > > trigger an IPI to all CPUs. That's of course suboptima

Re: [patch V4 part 1 27/36] arm64: Prepare arch_nmi_enter() for recursion

2020-05-15 Thread Frederic Weisbecker
On Fri, May 15, 2020 at 11:29:13PM +0200, Thomas Gleixner wrote: > Thomas Gleixner writes: > > > From: Frederic Weisbecker > > This changelog was very empty. Here is what Peter provided: > > When using nmi_enter() recursively, arch_nmi_enter() must also be recursio

Re: [patch V4 part 1 27/36] arm64: Prepare arch_nmi_enter() for recursion

2020-05-15 Thread Frederic Weisbecker
On Wed, May 13, 2020 at 07:28:34PM -0400, Mathieu Desnoyers wrote: > - On May 5, 2020, at 9:16 AM, Thomas Gleixner t...@linutronix.de wrote: > > > +#define arch_nmi_enter() \ > [...] \ > > + ___hcr =

[PATCH] tick/nohz: Narrow down noise while setting current task's tick dependency

2020-05-14 Thread Frederic Weisbecker
of callbacks) Reported-by: Matt Fleming Signed-off-by: Frederic Weisbecker Cc: sta...@kernel.org Cc: Paul E. McKenney Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Ingo Molnar --- kernel/time/tick-sched.c | 22 +++--- 1 file changed, 15 insertions(+), 7 deletions(-) diff --git

Re: [PATCH 08/10] rcu: Allow to deactivate nocb on a CPU

2020-05-14 Thread Frederic Weisbecker
On Thu, May 14, 2020 at 03:47:35PM -0700, Paul E. McKenney wrote: > On Fri, May 15, 2020 at 12:30:23AM +0200, Frederic Weisbecker wrote: > > On Thu, May 14, 2020 at 08:47:07AM -0700, Paul E. McKenney wrote: > > > On Thu, May 14, 2020 at 12:45:26AM +0200, Frederic Weisbecker wrot

Re: [PATCH 07/10] rcu: Temporarily assume that nohz full CPUs might not be NOCB

2020-05-14 Thread Frederic Weisbecker
On Thu, May 14, 2020 at 08:50:32AM -0700, Paul E. McKenney wrote: > On Thu, May 14, 2020 at 01:08:28AM +0200, Frederic Weisbecker wrote: > > On Wed, May 13, 2020 at 11:25:27AM -0700, Paul E. McKenney wrote: > > > On Wed, May 13, 2020 at 06:47:11PM +0200, Frederic Weisbecker wro

Re: [PATCH 08/10] rcu: Allow to deactivate nocb on a CPU

2020-05-14 Thread Frederic Weisbecker
On Thu, May 14, 2020 at 08:47:07AM -0700, Paul E. McKenney wrote: > On Thu, May 14, 2020 at 12:45:26AM +0200, Frederic Weisbecker wrote: > This last seems best to me. The transition from CBLIST_NOT_OFFLOADED > to CBLIST_OFFLOADING of course needs to be on the CPU in question with >

Re: [PATCH 10/10] rcu: Nocb (de)activate through sysfs

2020-05-13 Thread Frederic Weisbecker
On Wed, May 13, 2020 at 11:42:29AM -0700, Paul E. McKenney wrote: > On Wed, May 13, 2020 at 06:47:14PM +0200, Frederic Weisbecker wrote: > > Not for merge. > > > > Make nocb toggleable for a given CPU using: > > /sys/devices/system/cpu/cpu*/hotplug/nocb

Re: [PATCH 07/10] rcu: Temporarily assume that nohz full CPUs might not be NOCB

2020-05-13 Thread Frederic Weisbecker
On Wed, May 13, 2020 at 11:25:27AM -0700, Paul E. McKenney wrote: > On Wed, May 13, 2020 at 06:47:11PM +0200, Frederic Weisbecker wrote: > > So far nohz_full CPUs had to be nocb. This requirement may change > > temporarily as we are working on preparing RCU to be able to toggle the

Re: [PATCH 04/10] rcu: Implement rcu_segcblist_is_offloaded() config dependent

2020-05-13 Thread Frederic Weisbecker
On Wed, May 13, 2020 at 11:20:29AM -0700, Paul E. McKenney wrote: > On Wed, May 13, 2020 at 06:47:08PM +0200, Frederic Weisbecker wrote: > > This simplify the usage of this API and avoid checking the kernel > > config from the callers. > > > > Signed-off-by: Frederic

Re: [PATCH 08/10] rcu: Allow to deactivate nocb on a CPU

2020-05-13 Thread Frederic Weisbecker
On Wed, May 13, 2020 at 11:38:31AM -0700, Paul E. McKenney wrote: > On Wed, May 13, 2020 at 06:47:12PM +0200, Frederic Weisbecker wrote: > > +static void __rcu_nocb_rdp_deoffload(struct rcu_data *rdp) > > +{ > > + unsigned long flags; > > + struct

[PATCH 02/10] rcu: Use direct rdp->nocb_lock operations on local calls

2020-05-13 Thread Frederic Weisbecker
Unconditionally lock rdp->nocb_lock in nocb code that is called after we have verified that the rdp is offloaded: this clarifies the locking rules and expectations. Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangs
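
A minimal sketch of the difference this makes at the call sites (simplified; the conditional helper shown first reflects the pre-series behaviour as I understand it):

    /* Conditional form: only locks when the rdp is offloaded. */
    static void rcu_nocb_lock_sketch(struct rcu_data *rdp)
    {
            if (rcu_segcblist_is_offloaded(&rdp->cblist))
                    raw_spin_lock(&rdp->nocb_lock);
    }

    /* After the patch, code that has already verified the offloaded state
     * (nocb kthreads, nocb timers) takes the lock unconditionally: */
    static void nocb_entrypoint_sketch(struct rcu_data *rdp)
    {
            raw_spin_lock(&rdp->nocb_lock);
            /* ... operate on the offloaded callback list ... */
            raw_spin_unlock(&rdp->nocb_lock);
    }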

[PATCH 04/10] rcu: Implement rcu_segcblist_is_offloaded() config dependent

2020-05-13 Thread Frederic Weisbecker
This simplifies the usage of this API and avoids checking the kernel config from the callers. Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel Fernandes --- include/linux/rcu_segcblist.h | 2

[PATCH 06/10] rcu: Make nocb_cb kthread parkable

2020-05-13 Thread Frederic Weisbecker
This will be necessary to correctly implement rdp de-offloading. We don't want rcu_do_batch() in the nocb_cb kthread to race with a local rcu_do_batch(). Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel

[PATCH 07/10] rcu: Temporarily assume that nohz full CPUs might not be NOCB

2020-05-13 Thread Frederic Weisbecker
state, make rcu_nohz_full_cpu() aware of nohz_full CPUs that are not nocb so that they can handle the callbacks locally. Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel Fernandes --- kernel/rcu

[PATCH 08/10] rcu: Allow to deactivate nocb on a CPU

2020-05-13 Thread Frederic Weisbecker
concurrent rcu_do_batch() executions. Then the cblist is set to offloaded so that the nocb_gp kthread ignores this rdp. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel Fernan

[PATCH 10/10] rcu: Nocb (de)activate through sysfs

2020-05-13 Thread Frederic Weisbecker
Not for merge. Make nocb toggleable for a given CPU using: /sys/devices/system/cpu/cpu*/hotplug/nocb This is only intended for those who want to test this patchset. The real interfaces will be cpuset/isolation and rcutorture. Not-Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney

[PATCH 09/10] rcu: Allow to re-offload a CPU that used to be nocb

2020-05-13 Thread Frederic Weisbecker
This is essentially the reverse operation of de-offloading. For now it's only supported on CPUs that used to be offloaded and therefore still have the relevant nocb_cb/nocb_gp kthreads around. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh

[PATCH 05/10] rcu: Remove useless conditional nocb unlock

2020-05-13 Thread Frederic Weisbecker
Not only is it in the wrong order (rdp->nocb_lock should be unlocked after rnp) but it's also dead code as we are in the !offloaded path. Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel Fernan

[PATCH 03/10] rcu: Make locking explicit in do_nocb_deferred_wakeup_common()

2020-05-13 Thread Frederic Weisbecker
It can either be called inline (locally or CPU hotplug locked) when rdp->nocb_defer_wakeup is pending, or from the nocb timer. In both cases the rdp is offloaded and we want to take the nocb lock. Clarify the locking rules and expectations. Signed-off-by: Frederic Weisbecker Cc: Paul E. McKen

[PATCH 01/10] rcu: Directly lock rdp->nocb_lock on nocb code entrypoints

2020-05-13 Thread Frederic Weisbecker
when rdp->nocb_lock isn't taken. We'll still want the entrypoints to lock the rdp in any case. Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel Fernandes --- kernel/rcu/tree_plugin.

[PATCH 00/10] rcu: Allow a CPU to leave and reenter NOCB state

2020-05-13 Thread Frederic Weisbecker
/git/frederic/linux-dynticks.git rcu/nohz HEAD: 31cb4ee9da4e9cc6314498ff22d83f0d872b1a88 Thanks, Frederic --- Frederic Weisbecker (10): rcu: Directly lock rdp->nocb_lock on nocb code entrypoints rcu: Use direct rdp->nocb_lock operations on local calls rcu

[PATCH 07/14] context_tracking: Introduce context_tracking_enabled_cpu()

2019-10-15 Thread Frederic Weisbecker
This allows us to check if a remote CPU runs context tracking (ie: is nohz_full). We'll need that to reliably support "nice" accounting on kcpustat. Signed-off-by: Frederic Weisbecker Cc: Yauheni Kaliuta Cc: Thomas Gleixner Cc: Rik van Riel Cc: Peter Zijlstra Cc: Wanpeng Li Cc: I
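
The accessor being introduced is essentially a static-key test plus a per-CPU lookup; a sketch of its shape (the per-CPU variable and field names follow the existing context tracking code and should be treated as assumptions):

    static inline bool context_tracking_enabled_cpu_sketch(int cpu)
    {
            /* Global fast-path check first (cf. patch 10/14), then the
             * remote CPU's own context tracking state. */
            return context_tracking_enabled() &&
                   per_cpu(context_tracking.active, cpu);
    }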

[PATCH 04/14] context_tracking: Remove context_tracking_active()

2019-10-15 Thread Frederic Weisbecker
This function is a leftover from an old removal or rename. We can drop it. Signed-off-by: Frederic Weisbecker Cc: Yauheni Kaliuta Cc: Thomas Gleixner Cc: Rik van Riel Cc: Peter Zijlstra Cc: Wanpeng Li Cc: Ingo Molnar --- include/linux/context_tracking_state.h | 1 - 1 file changed, 1

[PATCH 11/14] sched/kcpustat: Introduce vtime-aware kcpustat accessor for CPUTIME_SYSTEM

2019-10-15 Thread Frederic Weisbecker
as a start because it's the trivial case. User and guest time will require more preparation work to correctly handle niceness. Reported-by: Yauheni Kaliuta Signed-off-by: Frederic Weisbecker Cc: Yauheni Kaliuta Cc: Thomas Gleixner Cc: Rik van Riel Cc: Peter Zijlstra Cc: Wanpeng Li Cc: Ingo Molnar

[PATCH 06/14] context_tracking: Rename context_tracking_is_cpu_enabled() to context_tracking_enabled_this_cpu()

2019-10-15 Thread Frederic Weisbecker
-by: Frederic Weisbecker Cc: Yauheni Kaliuta Cc: Thomas Gleixner Cc: Rik van Riel Cc: Peter Zijlstra Cc: Wanpeng Li Cc: Ingo Molnar --- include/linux/context_tracking.h | 2 +- include/linux/context_tracking_state.h | 4 ++-- include/linux/vtime.h | 2 +- 3 files changed, 4

[PATCH 12/14] procfs: Use vtime aware kcpustat accessor to fetch CPUTIME_SYSTEM

2019-10-15 Thread Frederic Weisbecker
Now that we have a vtime safe kcpustat accessor for CPUTIME_SYSTEM, use it to start fixing frozen kcpustat values on nohz_full CPUs. Reported-by: Yauheni Kaliuta Signed-off-by: Frederic Weisbecker Cc: Yauheni Kaliuta Cc: Thomas Gleixner Cc: Rik van Riel Cc: Peter Zijlstra Cc: Wanpeng Li Cc

[PATCH 09/14] sched/vtime: Introduce vtime_accounting_enabled_cpu()

2019-10-15 Thread Frederic Weisbecker
This allows us to check if a remote CPU runs vtime accounting (ie: is nohz_full). We'll need that to reliably support reading kcpustat on nohz_full CPUs. Also simplify a bit the condition in the local flavoured function while at it. Signed-off-by: Frederic Weisbecker Cc: Yauheni Kaliuta Cc

[PATCH 08/14] sched/vtime: Rename vtime_accounting_cpu_enabled() to vtime_accounting_enabled_this_cpu()

2019-10-15 Thread Frederic Weisbecker
Standardize the naming on top of the vtime_accounting_enabled_*() base. Also make it clear we are checking the vtime state of the *current* CPU with this function. We'll need to add an API to check that state on remote CPUs as well, so we must disambiguate the naming. Signed-off-by: Frederic

[PATCH 14/14] leds: Use vtime aware kcpustat accessor to fetch CPUTIME_SYSTEM

2019-10-15 Thread Frederic Weisbecker
Now that we have a vtime safe kcpustat accessor for CPUTIME_SYSTEM, use it to start fixing frozen kcpustat values on nohz_full CPUs. Reported-by: Yauheni Kaliuta Signed-off-by: Frederic Weisbecker Cc: Jacek Anaszewski Cc: Pavel Machek Cc: Yauheni Kaliuta Cc: Thomas Gleixner Cc: Rik van Riel

[PATCH 13/14] cpufreq: Use vtime aware kcpustat accessor to fetch CPUTIME_SYSTEM

2019-10-15 Thread Frederic Weisbecker
Now that we have a vtime safe kcpustat accessor for CPUTIME_SYSTEM, use it to start fixing frozen kcpustat values on nohz_full CPUs. Reported-by: Yauheni Kaliuta Signed-off-by: Frederic Weisbecker Cc: Rafael J. Wysocki Cc: Viresh Kumar Cc: Yauheni Kaliuta Cc: Thomas Gleixner Cc: Rik van

[PATCH 10/14] context_tracking: Check static key on context_tracking_enabled_*cpu()

2019-10-15 Thread Frederic Weisbecker
the optimization. Reported-by: Peter Zijlstra Signed-off-by: Frederic Weisbecker Cc: Yauheni Kaliuta Cc: Thomas Gleixner Cc: Rik van Riel Cc: Peter Zijlstra Cc: Wanpeng Li Cc: Ingo Molnar --- include/linux/context_tracking_state.h | 4 ++-- include/linux/vtime.h | 4 ++-- 2

[PATCH 01/14] sched/vtime: Record CPU under seqcount for kcpustat needs

2019-10-15 Thread Frederic Weisbecker
moving forward on full nohz CPUs. Signed-off-by: Frederic Weisbecker Cc: Yauheni Kaliuta Cc: Thomas Gleixner Cc: Rik van Riel Cc: Peter Zijlstra Cc: Wanpeng Li Cc: Ingo Molnar --- include/linux/sched.h | 1 + kernel/sched/cputime.c | 3 +++ 2 files changed, 4 insertions(+) diff --git

[PATCH 03/14] sched/cputime: Add vtime guest task state

2019-10-15 Thread Frederic Weisbecker
Record guest as a VTIME state instead of guessing it from VTIME_SYS and PF_VCPU. This is going to simplify the cputime read side especially as its state machine is going to further expand in order to fully support kcpustat on nohz_full. Signed-off-by: Frederic Weisbecker Cc: Yauheni Kaliuta Cc
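
Patches 02 and 03 essentially grow the vtime state machine so the read side no longer has to infer idle/guest from other flags; an illustrative sketch of the resulting states (not the exact enum):

    enum vtime_state_sketch {
            VTIME_INACTIVE = 0,     /* vtime not running on this task */
            VTIME_IDLE,             /* new: idle recorded explicitly (patch 02) */
            VTIME_SYS,              /* running in kernel space */
            VTIME_USER,             /* running in user space */
            VTIME_GUEST,            /* new: guest recorded explicitly (patch 03) */
    };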

[PATCH 05/14] context_tracking: s/context_tracking_is_enabled/context_tracking_enabled()

2019-10-15 Thread Frederic Weisbecker
Remove the superfluous "is" in the middle of the name. We want to standardize the naming so that it can be expanded through suffixes: context_tracking_enabled() context_tracking_enabled_cpu() context_tracking_enabled_this_cpu() Signed-off-by: Frederic Weis

[PATCH 02/14] sched/cputime: Add vtime idle task state

2019-10-15 Thread Frederic Weisbecker
Record idle as a VTIME state instead of guessing it from VTIME_SYS and is_idle_task(). This is going to simplify the cputime read side especially as its state machine is going to further expand in order to fully support kcpustat on nohz_full. Signed-off-by: Frederic Weisbecker Cc: Yauheni

[PATCH 00/14] sched/nohz: Make kcpustat's CPUTIME_SYSTEM vtime aware v2 (Partially fix kcpustat on nohz_full)

2019-10-15 Thread Frederic Weisbecker
its own. git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git nohz/kcpustat-v2 HEAD: e179e89320c53a96c5d585af38126cfb124da789 Thanks, Frederic --- Frederic Weisbecker (14): sched/vtime: Record CPU under seqcount for kcpustat needs sched/cputime: Add vtime id

Re: [PATCH] tick-sched: Update nohz load even if tick already stopped

2019-10-09 Thread Frederic Weisbecker
On Wed, Oct 02, 2019 at 06:55:35PM -0400, Scott Wood wrote: > The way loadavg is tracked during nohz only pays attention to the load > upon entering nohz. This can be particularly noticeable if nohz is > entered while non-idle, and then the cpu goes idle and stays that way for > a long time.

[tip: sched/urgent] sched/vtime: Fix guest/system mis-accounting on task switch

2019-10-09 Thread tip-bot2 for Frederic Weisbecker
The following commit has been merged into the sched/urgent branch of tip: Commit-ID: 68e7a4d66b0ce04bf18ff2ffded5596ab3618585 Gitweb: https://git.kernel.org/tip/68e7a4d66b0ce04bf18ff2ffded5596ab3618585 Author: Frederic Weisbecker AuthorDate: Wed, 25 Sep 2019 23:42:42 +02:00

[tip: sched/core] sched/vtime: Fix guest/system mis-accounting on task switch

2019-10-09 Thread tip-bot2 for Frederic Weisbecker
The following commit has been merged into the sched/core branch of tip: Commit-ID: 68e7a4d66b0ce04bf18ff2ffded5596ab3618585 Gitweb: https://git.kernel.org/tip/68e7a4d66b0ce04bf18ff2ffded5596ab3618585 Author: Frederic Weisbecker AuthorDate: Wed, 25 Sep 2019 23:42:42 +02:00

[tip: sched/core] sched/cputime: Spare a seqcount lock/unlock cycle on context switch

2019-10-09 Thread tip-bot2 for Frederic Weisbecker
The following commit has been merged into the sched/core branch of tip: Commit-ID: 8d495477d62e4397207f22a432fcaa86d9f2bc2d Gitweb: https://git.kernel.org/tip/8d495477d62e4397207f22a432fcaa86d9f2bc2d Author: Frederic Weisbecker AuthorDate: Thu, 03 Oct 2019 18:17:45 +02:00

[tip: sched/core] sched/cputime: Rename vtime_account_system() to vtime_account_kernel()

2019-10-09 Thread tip-bot2 for Frederic Weisbecker
The following commit has been merged into the sched/core branch of tip: Commit-ID: f83eeb1a01689b2691f6f56629ac9f66de8d41c2 Gitweb: https://git.kernel.org/tip/f83eeb1a01689b2691f6f56629ac9f66de8d41c2 Author: Frederic Weisbecker AuthorDate: Thu, 03 Oct 2019 18:17:44 +02:00

Re: [PATCH 0/2] vtime: Remove pair of seqcount on context switch

2019-10-07 Thread Frederic Weisbecker
On Mon, Oct 07, 2019 at 06:20:31PM +0200, Ingo Molnar wrote: > > * Frederic Weisbecker wrote: > > > Extracted from a larger queue that fixes kcpustat on nohz_full, these > > two patches have value on their own as they remove two write barriers > > on nohz_full contex

[PATCH 1/2] vtime: Rename vtime_account_system() to vtime_account_kernel()

2019-10-03 Thread Frederic Weisbecker
ing kernel cputime. Whether it belongs to guest or system time is a lower level detail. Rename this function to vtime_account_kernel(). This will clarify things and avoid too many underscored vtime_account_system() versions. Signed-off-by: Frederic Weisbecker Cc: Yauheni Kaliuta Cc: Thomas Gleixner C

[PATCH 2/2] vtime: Spare a seqcount lock/unlock cycle on context switch

2019-10-03 Thread Frederic Weisbecker
under vtime in the future and fetch CPUTIME_IDLE without race. Signed-off-by: Frederic Weisbecker Cc: Yauheni Kaliuta Cc: Thomas Gleixner Cc: Rik van Riel Cc: Peter Zijlstra Cc: Wanpeng Li Cc: Ingo Molnar --- include/linux/vtime.h | 32 kernel/sched/cputime.c
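
The saving comes from folding the flush of the scheduling-out task and its state reset into a single seqcount write section; a rough sketch of that shape (the __vtime_account_kernel() "already locked" helper is an assumption for illustration):

    void vtime_task_switch_sketch(struct task_struct *prev)
    {
            struct vtime *vtime = &prev->vtime;

            /* One write section instead of two back-to-back ones. */
            write_seqcount_begin(&vtime->seqcount);
            if (vtime->state == VTIME_SYS)
                    __vtime_account_kernel(prev, vtime);   /* flush pending time */
            vtime->state = VTIME_INACTIVE;
            write_seqcount_end(&vtime->seqcount);

            /* current's own seqcount is then taken once to restart accounting. */
    }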

[PATCH 0/2] vtime: Remove pair of seqcount on context switch

2019-10-03 Thread Frederic Weisbecker
Extracted from a larger queue that fixes kcpustat on nohz_full, these two patches have value on their own as they remove two write barriers on nohz_full context switch. Frederic Weisbecker (2): vtime: Rename vtime_account_system() to vtime_account_kernel() vtime: Spare a seqcount lock/unlock

Re: [PATCH tip/core/rcu 08/12] rcu: Force tick on for nohz_full CPUs not reaching quiescent states

2019-10-03 Thread Frederic Weisbecker
On Wed, Oct 02, 2019 at 06:38:59PM -0700, paul...@kernel.org wrote: > From: "Paul E. McKenney" > > CPUs running for long time periods in the kernel in nohz_full mode > might leave the scheduling-clock interrupt disabled for the full > duration of their in-kernel execution. This can (among

Re: [PATCH tip/core/rcu 06/12] rcu: Make CPU-hotplug removal operations enable tick

2019-10-03 Thread Frederic Weisbecker
On Wed, Oct 02, 2019 at 06:38:57PM -0700, paul...@kernel.org wrote: > From: "Paul E. McKenney" > > CPU-hotplug removal operations run the multi_cpu_stop() function, which > relies on the scheduler to gain control from whatever is running on the > various online CPUs, including any nohz_full CPUs

Re: [PATCH tip/core/rcu 04/12] rcutorture: Force on tick for readers and callback flooders

2019-10-03 Thread Frederic Weisbecker
On Wed, Oct 02, 2019 at 06:38:55PM -0700, paul...@kernel.org wrote: > From: "Paul E. McKenney" > > Readers and callback flooders in the rcutorture stress-test suite run for > extended time periods by design. They do take pains to relinquish the > CPU from time to time, but in some cases this

Re: [PATCH tip/core/rcu 03/12] rcu: Force on tick when invoking lots of callbacks

2019-10-03 Thread Frederic Weisbecker
On Wed, Oct 02, 2019 at 06:38:54PM -0700, paul...@kernel.org wrote: > From: "Paul E. McKenney" > > Callback invocation can run for a significant time period, and within > CONFIG_NO_HZ_FULL=y kernels, this period will be devoid of scheduler-clock > interrupts. In-kernel execution without such

Re: [PATCH v2 4/4] task: RCUify the assignment of rq->curr

2019-09-26 Thread Frederic Weisbecker
On Wed, Sep 25, 2019 at 08:49:17PM -0500, Eric W. Biederman wrote: > Frederic Weisbecker writes: > > > On Sat, Sep 14, 2019 at 07:35:02AM -0500, Eric W. Biederman wrote: > >> diff --git a/kernel/sched/core.c b/kernel/sched/core.c > >> index 69015b7c28da..6682628

[PATCH] sched/vtime: Fix guest/system mis-accounting on task switch

2019-09-25 Thread Frederic Weisbecker
ulate vtime on top of nsec clocksource") Signed-off-by: Frederic Weisbecker Cc: Thomas Gleixner Cc: Rik van Riel Cc: Peter Zijlstra Cc: Wanpeng Li Cc: Ingo Molnar --- kernel/sched/cputime.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/kernel/sched/cputime.

Re: [PATCH 04/25] vtime: Spare a seqcount lock/unlock cycle on context switch

2019-09-25 Thread Frederic Weisbecker
, Nov 20, 2018 at 02:25:12PM +0100, Peter Zijlstra wrote: > On Wed, Nov 14, 2018 at 03:45:48AM +0100, Frederic Weisbecker wrote: > > So I definitely like avoiding that superfluous atomic op, however: > > > @@ -730,19 +728,25 @@ static void vtime_account_guest(struct tas

Re: [patch V2 6/6] posix-cpu-timers: Return -EPERM if ptrace permission check fails

2019-09-23 Thread Frederic Weisbecker
check fails. > > Suggested-by: Peter Zijlstra > Signed-off-by: Thomas Gleixner Reviewed-by: Frederic Weisbecker

Re: [patch V2 5/6] posix-cpu-timers: Return PTR_ERR() from lookup_task()

2019-09-23 Thread Frederic Weisbecker
xner Reviewed-by: Frederic Weisbecker

Re: [patch 6/6] posix-cpu-timers: Make PID=0 and PID=self handling consistent

2019-09-23 Thread Frederic Weisbecker
if (!has_group_leader_pid(p)) > return NULL; So, right after you should have that: if (same_thread_group(p, current)) return p; Which I suggested to convert as: if (p == current)

Re: [patch 5/6] posix-cpu-timers: Sanitize thread clock permissions

2019-09-23 Thread Frederic Weisbecker
* Avoid the ptrace overhead when this is current's process > - */ > - if (same_thread_group(p, current)) > - return p; > + /* > + * Avoid the ptrace overhead when this is current's process > + */ > +

Re: [patch 4/6] posix-cpu-timers: Restrict clock_gettime() permissions

2019-09-23 Thread Frederic Weisbecker
Restrict it by checking ptrace MODE_READ permissions of the reader on the > target process. > > Signed-off-by: Thomas Gleixner Reviewed-by: Frederic Weisbecker

Re: [patch 3/6] posix-cpu-timers: Restrict timer_create() permissions

2019-09-20 Thread Frederic Weisbecker
s permissions to attach ptrace on the > target process. > > Signed-off-by: Thomas Gleixner Makes sense. I hope no serious user currently relies on that lack of restriction. Let's just apply it and wait for complaints, if any. Reviewed-by: Frederic Weisbecker

Re: [PATCH v2 4/4] task: RCUify the assignment of rq->curr

2019-09-20 Thread Frederic Weisbecker
On Sat, Sep 14, 2019 at 07:35:02AM -0500, Eric W. Biederman wrote: > > The current task on the runqueue is currently read with rcu_dereference(). > > To obtain ordinary rcu semantics for an rcu_dereference of rq->curr it needs > to be paired with rcu_assign_pointer of rq->curr. Which provides

Re: [patch V2 2/6] posix-cpu-timers: Fix permission check regression

2019-09-09 Thread Frederic Weisbecker
wouldn't be pretty. Reviewed-by: Frederic Weisbecker

Re: [patch 2/6] posix-cpu-timers: Fix permission check regression

2019-09-05 Thread Frederic Weisbecker
On Thu, Sep 05, 2019 at 02:03:41PM +0200, Thomas Gleixner wrote: > The recent consolidation of the three permission checks introduced a subtle > regression. For timer_create() with a process wide timer it returns the > current task if the lookup through the PID which is encoded into the > clockid

Re: [patch 1/6] posix-cpu-timers: Always clear head pointer on dequeue

2019-09-05 Thread Frederic Weisbecker
dequeue and remove the unused requeue > function while at it. > > Fixes: 60bda037f1dd ("posix-cpu-timers: Utilize timerqueue for storage") > Reported-by: syzbot+55acd54b57bb4b384...@syzkaller.appspotmail.com > Signed-off-by: Thomas Gleixner Reviewed-by: Frederic Weisbecker

Re: [patch 0/6] posix-cpu-timers: Fallout fixes and permission tightening

2019-09-05 Thread Frederic Weisbecker
On Thu, Sep 05, 2019 at 04:57:10PM +0200, Thomas Gleixner wrote: > On Thu, 5 Sep 2019, Frederic Weisbecker wrote: > > On Thu, Sep 05, 2019 at 02:03:39PM +0200, Thomas Gleixner wrote: > > > Sysbot triggered an issue in the posix timer rework which was trivial to > > > fix
