On Mon, May 05, 2014 at 08:26:38AM -0500, Josh Poimboeuf wrote:
On Mon, May 05, 2014 at 10:55:37AM +0200, Ingo Molnar wrote:
* Josh Poimboeuf jpoim...@redhat.com wrote:
[...]
kpatch checks the backtraces of all tasks in stop_machine() to
ensure that no instances of the old
On Mon, May 05, 2014 at 02:37:06PM +0200, Peter Zijlstra wrote:
On Wed, Apr 16, 2014 at 12:40:01AM -0700, tip-bot for Frederic Weisbecker
wrote:
Commit-ID: 72aacf0259bb7d53b7a3b5b2f7bf982acaa52b61
Gitweb:
http://git.kernel.org/tip/72aacf0259bb7d53b7a3b5b2f7bf982acaa52b61
Author
On Mon, May 05, 2014 at 03:31:13PM +0200, Peter Zijlstra wrote:
On Mon, May 05, 2014 at 02:37:06PM +0200, Peter Zijlstra wrote:
On Wed, Apr 16, 2014 at 12:40:01AM -0700, tip-bot for Frederic Weisbecker
wrote:
Commit-ID: 72aacf0259bb7d53b7a3b5b2f7bf982acaa52b61
Gitweb:
http
On Mon, May 05, 2014 at 04:58:15PM +0200, Peter Zijlstra wrote:
On Mon, May 05, 2014 at 04:52:59PM +0200, Frederic Weisbecker wrote:
Should we instead do irq_work_queue_on()?
I would very much prefer that, yeah. But if we do that, expect some added
overhead on the local
On Mon, May 05, 2014 at 05:12:28PM +0200, Peter Zijlstra wrote:
Note the current ordering:
cmpxchg(qsd->pending, 0, 1)    get ipi
csd_lock(qsd->csd)             xchg(qsd->pending, 1)
send ipi                       csd_unlock(qsd->csd)
So there shouldn't be
On Mon, May 05, 2014 at 08:43:04PM +0200, Ingo Molnar wrote:
* Frederic Weisbecker fweis...@gmail.com wrote:
On Mon, May 05, 2014 at 08:26:38AM -0500, Josh Poimboeuf wrote:
On Mon, May 05, 2014 at 10:55:37AM +0200, Ingo Molnar wrote:
* Josh Poimboeuf jpoim...@redhat.com wrote
On Tue, May 06, 2014 at 07:12:11AM -0500, Josh Poimboeuf wrote:
On Mon, May 05, 2014 at 11:49:23PM +0200, Frederic Weisbecker wrote:
On Mon, May 05, 2014 at 08:43:04PM +0200, Ingo Molnar wrote:
If a kernel refuses to patch with certain threads running, that will
drive those kernel
On Sun, Mar 30, 2014 at 09:01:39AM -0400, Tejun Heo wrote:
On Thu, Mar 27, 2014 at 06:21:01PM +0100, Frederic Weisbecker wrote:
We call anon workqueues the set of unbound workqueues that don't
carry the WQ_SYSFS flag.
They are a problem nowadays because people who work on CPU isolation
On Sun, Mar 30, 2014 at 08:57:51AM -0400, Tejun Heo wrote:
On Thu, Mar 27, 2014 at 06:21:00PM +0100, Frederic Weisbecker wrote:
The workqueues are all listed in a global list protected by a big mutex.
And this big mutex is used in apply_workqueue_attrs() as well.
Now as we plan
On Thu, Apr 03, 2014 at 10:58:05AM -0400, Tejun Heo wrote:
Hello, Frederic.
On Thu, Apr 03, 2014 at 04:42:55PM +0200, Frederic Weisbecker wrote:
I'm not really sure this is the right approach. I think I wrote it this
way back then, but wouldn't it make more sense to allow userland to restrict
On Thu, Apr 03, 2014 at 11:01:28AM -0400, Tejun Heo wrote:
Hello,
On Thu, Apr 03, 2014 at 04:48:28PM +0200, Frederic Weisbecker wrote:
Wouldn't the right thing to do be to factor out
apply_workqueue_attrs_locked()? It's cleaner to block out addition of
new workqueues while
On Thu, Apr 03, 2014 at 08:38:00AM -0700, Paul E. McKenney wrote:
On Thu, Apr 03, 2014 at 02:09:25AM +0200, Frederic Weisbecker wrote:
Hi Paul,
Here's an updated version of the patches with your review addressed.
I ripped the function parameter and let it be setup on queued IPI object
On Mon, Mar 31, 2014 at 09:15:26PM +0800, Lai Jiangshan wrote:
On 03/31/2014 08:50 PM, Lai Jiangshan wrote:
Sorry, I'm wrong.
Tejun had told there is only one default worker pool for ordered workqueues.
It is true. But this pool may be shared with other non-ordered workqueues which
may have
Thanks,
Frederic
---
Frederic Weisbecker (2):
smp: Non busy-waiting IPI queue
nohz: Move full nohz kick to its own IPI
include/linux/smp.h | 11 +++
include/linux/tick.h | 2 ++
kernel/sched/core.c | 5 +
kernel/sched/sched.h | 2 +-
kernel
...@linux.vnet.ibm.com
Cc: Peter Zijlstra pet...@infradead.org
Cc: Thomas Gleixner t...@linutronix.de
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
include/linux/tick.h | 2 ++
kernel/sched/core.c | 5 +
kernel/sched/sched.h | 2 +-
kernel/time/tick-sched.c | 21
Molnar mi...@kernel.org
Cc: Jens Axboe ax...@fb.com
Cc: Kevin Hilman khil...@linaro.org
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Peter Zijlstra pet...@infradead.org
Cc: Thomas Gleixner t...@linutronix.de
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
include/linux/smp.h | 11
On Sun, Mar 30, 2014 at 04:08:56PM -0700, Paul E. McKenney wrote:
For whatever it is worth, the following model claims safety and progress
for the sysidle state machine.
Thoughts?
I'm going to make fun of myself by risking a review of this. Warning,
I don't speak promelian, so I may well
Hi Guys,
You and Hidetoshi have sent a few patches with very detailed changelogs
and it's going to be hard to synthesize. So my reviews are going to be a
bit chaotic, sorry for that in advance.
On Wed, Apr 02, 2014 at 09:35:47PM +0200, Denys Vlasenko wrote:
On Mon, Mar 31, 2014 at 4:08 AM,
On Fri, Apr 04, 2014 at 07:02:43PM +0200, Denys Vlasenko wrote:
On Fri, Apr 4, 2014 at 6:03 PM, Frederic Weisbecker fweis...@gmail.com
wrote:
However, if we would put ourselves into admin's seat, iowait
immediately starts to make sense: for admin, the system state
where a lot of CPU time
On Sat, Apr 05, 2014 at 04:56:54PM +0200, Denys Vlasenko wrote:
On Sat, Apr 5, 2014 at 12:08 PM, Frederic Weisbecker fweis...@gmail.com
wrote:
Iowait makes sense but not per cpu. Ultimately it's a global
stat. Or per task.
There are a lot of situations where admins want to know
how much
On Mon, Apr 07, 2014 at 11:11:55AM -0700, Paul E. McKenney wrote:
In the upstream code, the first read of full_sysidle_state after exiting
idle is not performed by an atomic operation. So I wonder if it is right to put this
in the atomic section.
I don't know the language enough to
On Mon, Apr 07, 2014 at 08:16:24PM +0200, Toralf Förster wrote:
On 04/07/2014 05:07 PM, Peter Zijlstra wrote:
On Mon, Apr 07, 2014 at 05:03:37PM +0200, Peter Zijlstra wrote:
So what I suspect at this point is that because i386 and x86_64
On Mon, Apr 07, 2014 at 09:57:00PM +0200, Toralf Förster wrote:
On 04/07/2014 08:59 PM, Frederic Weisbecker wrote:
On Mon, Apr 07, 2014 at 08:16:24PM +0200, Toralf Förster wrote:
On 04/07
On Mon, Apr 07, 2014 at 10:34:51PM -0700, Tony Luck wrote:
On Mon, Apr 7, 2014 at 3:25 PM, Tony Luck tony.l...@intel.com wrote:
c) If not this ... then what? Separate routine to convert large numbers
of jiffies to usec/nsecs? Should we make the existing one barf when
handed a
On Tue, Apr 08, 2014 at 02:15:43PM -0400, Steven Rostedt wrote:
On Tue, 8 Apr 2014 19:49:51 +0200
Frederic Weisbecker fweis...@gmail.com wrote:
On Mon, Apr 07, 2014 at 10:34:51PM -0700, Tony Luck wrote:
On Mon, Apr 7, 2014 at 3:25 PM, Tony Luck tony.l...@intel.com wrote:
c
On Tue, Apr 08, 2014 at 01:57:12PM -0700, Andrew Morton wrote:
On Fri, 8 Nov 2013 21:06:22 +0100 Frederic Weisbecker fweis...@gmail.com
wrote:
On Fri, Nov 08, 2013 at 07:52:37PM +, Christoph Lameter wrote:
On Fri, 8 Nov 2013, Frederic Weisbecker wrote:
I understand, but why
On Wed, Apr 09, 2014 at 07:21:53PM +0530, Viresh Kumar wrote:
On Thu, Nov 14, 2013 at 1:31 AM, Thomas Gleixner t...@linutronix.de wrote:
Subject: NOHZ: Check for nohz active instead of nohz enabled
RCU and the fine grained idle time accounting functions check
tick_nohz_enabled. But that
On Thu, Apr 03, 2014 at 06:17:10PM +0200, Frederic Weisbecker wrote:
Ingo, Thomas,
Please pull the timers/nohz-ipi-for-tip-v3 branch that can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
timers/nohz-ipi-for-tip-v3
Ping?
--
To unsubscribe
iterators that are
RCU unsafe.
It also makes thread_group_cputime() eventually RCU-safe.
Cc: Andrew Morton a...@linux-foundation.org
Cc: Ingo Molnar mi...@kernel.org
Cc: Oleg Nesterov o...@redhat.com
Cc: Peter Zijlstra pet...@infradead.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
that are
RCU unsafe.
Cc: Andrew Morton a...@linux-foundation.org
Cc: Ingo Molnar mi...@kernel.org
Cc: Oleg Nesterov o...@redhat.com
Cc: Peter Zijlstra pet...@infradead.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
kernel/sched/core.c | 13 ++---
1 file changed, 6 insertions
unsafe.
Cc: Andrew Morton a...@linux-foundation.org
Cc: Ingo Molnar mi...@kernel.org
Cc: Oleg Nesterov o...@redhat.com
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
fs/proc/array.c | 7 ---
fs/proc/base.c | 4 ++--
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/fs
iterators that are
RCU unsafe.
It also makes the hung task threads iteration eventually RCU safe.
Cc: Andrew Morton a...@linux-foundation.org
Cc: Ingo Molnar mi...@kernel.org
Cc: Oleg Nesterov o...@redhat.com
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
kernel/hung_task.c | 8
these patches don't
depend on any pending preparatory work.
So ideally it would be nice if maintainers could cherry-pick the patches
corresponding to their own subsystems.
Thanks,
Frederic
---
Frederic Weisbecker (5):
sched: Convert thread_group_cputime() to use for_each_thread
that are
RCU unsafe.
Cc: Andrew Morton a...@linux-foundation.org
Cc: Ingo Molnar mi...@kernel.org
Cc: Mathieu Desnoyers mathieu.desnoy...@efficios.com
Cc: Oleg Nesterov o...@redhat.com
Cc: Steven Rostedt rost...@goodmis.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
kernel
On Wed, Apr 09, 2014 at 04:28:35PM +, Mathieu Desnoyers wrote:
- Original Message -
From: Frederic Weisbecker fweis...@gmail.com
To: LKML linux-kernel@vger.kernel.org
Cc: Frederic Weisbecker fweis...@gmail.com, Andrew Morton
a...@linux-foundation.org, Ingo Molnar
mi
will be added to iowait_sleeptime.
This, along with proper SMP synchronization, fixes the bug where iowait
counts could go backwards.
Signed-off-by: Denys Vlasenko dvlas...@redhat.com
Cc: Frederic Weisbecker fweis...@gmail.com
Cc: Hidetoshi Seto seto.hideto...@jp.fujitsu.com
Cc: Fernando Luis
- idle_entrytime)
gets accounted as iowait, and the remaining (now - iowait_exittime)
as true idle.
Run-tested: /proc/stats no longer go backwards.
Signed-off-by: Denys Vlasenko dvlas...@redhat.com
Cc: Frederic Weisbecker fweis...@gmail.com
Cc: Hidetoshi Seto seto.hideto...@jp.fujitsu.com
Cc
Hi Viresh,
On Thu, Apr 03, 2014 at 12:39:37PM +0530, Viresh Kumar wrote:
Nothing much, just some nitpicks :)
Thanks for your reviews, but in the end I'm dropping these two patches :)
: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Tejun Heo t...@kernel.org
Cc: Viresh Kumar viresh.ku...@linaro.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
kernel/workqueue.c | 76 +++---
1 file changed, 67 insertions(+), 9 deletions
paul...@linux.vnet.ibm.com
Cc: Tejun Heo t...@kernel.org
Cc: Viresh Kumar viresh.ku...@linaro.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
kernel/workqueue.c | 73 ++
1 file changed, 41 insertions(+), 32 deletions(-)
diff --git
to post the current state now in case I'm wandering off.
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
core/workqueue-v3
Thanks,
Frederic
---
Frederic Weisbecker (4):
workqueue: Create low-level unbound workqueues cpumask
workqueue: Split apply
...@linux.com
Cc: Kevin Hilman khil...@linaro.org
Cc: Lai Jiangshan la...@cn.fujitsu.com
Cc: Mike Galbraith bitbuc...@online.de
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Tejun Heo t...@kernel.org
Cc: Viresh Kumar viresh.ku...@linaro.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
: Christoph Lameter c...@linux.com
Cc: Kevin Hilman khil...@linaro.org
Cc: Lai Jiangshan la...@cn.fujitsu.com
Cc: Mike Galbraith bitbuc...@online.de
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Tejun Heo t...@kernel.org
Cc: Viresh Kumar viresh.ku...@linaro.org
Signed-off-by: Frederic Weisbecker
/355
Suggested-by: Frederic Weisbecker fweis...@gmail.com
Signed-off-by: Viresh Kumar viresh.ku...@linaro.org
---
kernel/time/tick-sched.c | 16
1 file changed, 16 insertions(+)
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 71f64ee..c3aed50 100644
On Mon, Apr 21, 2014 at 03:24:57PM +0530, Viresh Kumar wrote:
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 6558b7a..9e9ddba 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -108,7 +108,6 @@ static ktime_t tick_init_jiffy_update(void)
Hi Lai,
So actually I'll need to use apply_workqueue_attr() in the next patchset, so
I'm considering this patch.
Some comments below:
On Tue, Apr 15, 2014 at 05:58:08PM +0800, Lai Jiangshan wrote:
From 534f1df8a5a03427b0fc382150fbd34e05648a28 Mon Sep 17 00:00:00 2001
From: Lai Jiangshan
On Mon, Apr 14, 2014 at 08:38:38PM +0200, Peter Zijlstra wrote:
On Mon, Apr 14, 2014 at 09:47:41PM +0530, Viresh Kumar wrote:
sched_can_stop_tick() was using 7 spaces instead of 8 spaces or a 'tab' at the
beginning of each line, which doesn't align with the Coding Guidelines. Also it
On Mon, Apr 14, 2014 at 01:22:17PM +0200, Ingo Molnar wrote:
* Frederic Weisbecker fweis...@gmail.com wrote:
On Thu, Apr 03, 2014 at 06:17:10PM +0200, Frederic Weisbecker wrote:
Ingo, Thomas,
Please pull the timers/nohz-ipi-for-tip-v3 branch that can be found at:
git
On Mon, Apr 14, 2014 at 09:53:46PM +0530, Viresh Kumar wrote:
__tick_nohz_task_switch() calls tick_nohz_full_kick(), which is already
checking
tick_nohz_full_cpu() and so we don't need to repeat the same check here.
Remove it.
Signed-off-by: Viresh Kumar viresh.ku...@linaro.org
Ack.
On Mon, Apr 14, 2014 at 09:53:50PM +0530, Viresh Kumar wrote:
tick_nohz_task_switch() and __tick_nohz_task_switch() routines get task_struct
passed to them (always for the 'current' task), but they never use it. Remove
it.
Signed-off-by: Viresh Kumar viresh.ku...@linaro.org
Ack.
On Mon, Apr 14, 2014 at 09:53:51PM +0530, Viresh Kumar wrote:
__tick_nohz_task_switch() was called only from tick_nohz_task_switch() and
there
is nothing much in tick_nohz_task_switch() as well. IOW, we don't need an
unnecessary wrapper over __tick_nohz_task_switch(). Merge all code
On Mon, Apr 14, 2014 at 09:53:52PM +0530, Viresh Kumar wrote:
nohz_full_buf[] is used at only one place, i.e. inside tick_nohz_init(). Make it
a local variable. We can move it out in case it is used in some other routines
in the future.
OTOH nohz_full_buf can have a big size and moving it to a
On Sun, Apr 13, 2014 at 08:58:28PM +0200, Oleg Nesterov wrote:
On 04/11, Oleg Nesterov wrote:
On 04/11, Steven Rostedt wrote:
Are you going to send a new series?
Yes, will do. I will split 1/2, and I need to update the changelog
in 2/2.
Please see the patches.
Frederic! I am
copy_process() to update the child's TIF_SYSCALL_TRACEPOINT
under tasklist.
Signed-off-by: Oleg Nesterov o...@redhat.com
Acked-by: Frederic Weisbecker fweis...@gmail.com
---
include/trace/syscall.h | 15 +++
kernel/fork.c |2 ++
2 files changed, 17 insertions
On Tue, Apr 15, 2014 at 10:15:24AM +0530, Viresh Kumar wrote:
On 15 April 2014 04:52, Frederic Weisbecker fweis...@gmail.com wrote:
On Mon, Apr 14, 2014 at 09:53:51PM +0530, Viresh Kumar wrote:
__tick_nohz_task_switch() was called only from tick_nohz_task_switch() and
there
is nothing
On Mon, Apr 14, 2014 at 02:06:00PM +0200, Peter Zijlstra wrote:
On Mon, Apr 14, 2014 at 05:22:30PM +0530, Viresh Kumar wrote:
On 14 April 2014 17:17, Peter Zijlstra pet...@infradead.org wrote:
What causes this tick? I was under the impression that once there's a
single task (not doing any
On Tue, Apr 15, 2014 at 05:58:08PM +0800, Lai Jiangshan wrote:
From 534f1df8a5a03427b0fc382150fbd34e05648a28 Mon Sep 17 00:00:00 2001
From: Lai Jiangshan la...@cn.fujitsu.com
Date: Tue, 15 Apr 2014 17:52:19 +0800
Subject: [PATCH] workqueue: allow changing attributions of ordered workqueue
On Tue, Apr 15, 2014 at 03:23:37PM +0530, Viresh Kumar wrote:
On 15 April 2014 14:43, Frederic Weisbecker fweis...@gmail.com wrote:
Yeah. But not just that.
Using an inline saves a function call and reduces the offline case to a simple
condition check. But there is also the jump label
On Wed, Apr 09, 2014 at 04:19:44PM +0530, Viresh Kumar wrote:
On 9 April 2014 16:03, Viresh Kumar viresh.ku...@linaro.org wrote:
Hi Frederic,
File: kernel/time/tick-sched.c
Function: tick_nohz_full_stop_tick()
We are doing this:
if (!tick_nohz_full_cpu(cpu) ||
On Wed, Apr 09, 2014 at 05:28:57PM +0530, Viresh Kumar wrote:
Hi Guys,
File: kernel/time/tick-sched.c
function: tick_nohz_idle_exit()
We are checking here whether idle_active is true and then
doing some stuff. But is it possible for idle_active to be false
here?
The sequence as far as I
On Fri, Apr 11, 2014 at 03:34:23PM +0530, Viresh Kumar wrote:
On 10 April 2014 20:09, Frederic Weisbecker fweis...@gmail.com wrote:
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 9f8af69..1e2d6b7 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
On Fri, Apr 11, 2014 at 03:24:11PM +0530, Viresh Kumar wrote:
On 10 April 2014 20:26, Frederic Weisbecker fweis...@gmail.com wrote:
When a dynticks idle CPU is woken up (typically with an IPI),
tick_nohz_stop_idle()
is called on interrupt entry but, because this is a waking up IPI
On Tue, Apr 01, 2014 at 07:43:08PM -0700, Linus Torvalds wrote:
On Tue, Apr 1, 2014 at 12:05 PM, Jens Axboe ax...@fb.com wrote:
- Cleanup of the IPI usage from the block layer, and associated helper
code. From Frederic Weisbecker and Jan Kara.
So I absolutely *hate* how this was done
On Tue, Apr 01, 2014 at 08:48:48PM -0600, Jens Axboe wrote:
On 2014-04-01 20:43, Linus Torvalds wrote:
On Tue, Apr 1, 2014 at 12:05 PM, Jens Axboe ax...@fb.com wrote:
- Cleanup of the IPI usage from the block layer, and associated helper
code. From Frederic Weisbecker and Jan Kara.
So
...@infradead.org
Cc: Thomas Gleixner t...@linutronix.de
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
include/linux/tick.h | 2 ++
kernel/sched/core.c | 5 +
kernel/sched/sched.h | 2 +-
kernel/time/tick-sched.c | 20
4 files changed, 24 insertions
Cc: Kevin Hilman khil...@linaro.org
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Peter Zijlstra pet...@infradead.org
Cc: Thomas Gleixner t...@linutronix.de
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
include/linux/smp.h | 12
kernel/smp.c| 44
by a suggestion from Peter Zijlstra.
* Patch 1/2 brings the IPI infrastructure to support this
* Patch 2/2 does the nohz IPI conversion
---
Frederic Weisbecker (2):
smp: Non busy-waiting IPI queue
nohz: Move full nohz kick to its own IPI
include/linux/smp.h | 12
include
On Wed, Apr 02, 2014 at 08:02:13AM -0700, Linus Torvalds wrote:
On Wed, Apr 2, 2014 at 7:00 AM, Frederic Weisbecker fweis...@gmail.com
wrote:
So yeah that's because I was worried about strong conflicts. What kind of
approach
do you prefer then to solve that kind of issue? Do you prefer
2014-04-02 20:05 GMT+02:00 Paul E. McKenney paul...@linux.vnet.ibm.com:
On Wed, Apr 02, 2014 at 06:26:05PM +0200, Frederic Weisbecker wrote:
diff --git a/kernel/smp.c b/kernel/smp.c
index 06d574e..bfe7b36 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -265,6 +265,50 @@ int
Cc: Kevin Hilman khil...@linaro.org
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Peter Zijlstra pet...@infradead.org
Cc: Thomas Gleixner t...@linutronix.de
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
include/linux/smp.h | 11 +++
kernel/smp.c| 42
...@infradead.org
Cc: Thomas Gleixner t...@linutronix.de
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
include/linux/tick.h | 2 ++
kernel/sched/core.c | 5 +
kernel/sched/sched.h | 2 +-
kernel/time/tick-sched.c | 21 +
4 files changed, 25 insertions
other issues.
Thanks.
Frederic Weisbecker (2):
smp: Non busy-waiting IPI queue
nohz: Move full nohz kick to its own IPI
include/linux/smp.h | 11 +++
include/linux/tick.h | 2 ++
kernel/sched/core.c | 5 +
kernel/sched/sched.h | 2 +-
kernel/smp.c
Sorry I got Jens' address wrong once again :-(
On Thu, Apr 03, 2014 at 02:09:25AM +0200, Frederic Weisbecker wrote:
Hi Paul,
Here's an updated version of the patches with your review addressed.
I ripped the function parameter and let it be setup on queued IPI object
initialization time so
On Wed, Jun 26, 2013 at 03:05:11PM +0200, Peter Zijlstra wrote:
On Thu, Jun 20, 2013 at 10:45:41PM +0200, Frederic Weisbecker wrote:
preempt_schedule() and preempt_schedule_context() open
code their preemptability checks.
Use the standard API instead for consolidation.
Signed-off
On Sun, Jun 23, 2013 at 12:58:39PM +0200, Ingo Molnar wrote:
* Dave Jones da...@redhat.com wrote:
On Wed, Jun 12, 2013 at 11:34:07AM -0400, Dave Jones wrote:
On Thu, Jun 06, 2013 at 05:43:13PM +0200, Frederic Weisbecker wrote:
Every process 200% or 0%.
I see
On Fri, Jun 28, 2013 at 01:10:21PM -0700, Paul E. McKenney wrote:
/*
+ * Unconditionally force exit from full system-idle state. This is
+ * invoked when a normal CPU exits idle, but must be called separately
+ * for the timekeeping CPU (tick_do_timer_cpu). The reason for this
+ * is that
On Fri, Jun 28, 2013 at 01:10:21PM -0700, Paul E. McKenney wrote:
+
+/*
+ * Check to see if the system is fully idle, other than the timekeeping CPU.
+ * The caller must have disabled interrupts.
+ */
+bool rcu_sys_is_idle(void)
+{
+ static struct rcu_sysidle_head rsh;
+ int rss =
On Mon, Jul 01, 2013 at 11:10:40AM -0700, Paul E. McKenney wrote:
On Mon, Jul 01, 2013 at 06:35:31PM +0200, Frederic Weisbecker wrote:
What makes sure that we are not reading a stale value of rdtp->dynticks_idle
in the following scenario:
CPU 0 CPU 1
On Fri, Jun 28, 2013 at 01:10:21PM -0700, Paul E. McKenney wrote:
+/*
+ * Check to see if the system is fully idle, other than the timekeeping CPU.
+ * The caller must have disabled interrupts.
+ */
+bool rcu_sys_is_idle(void)
Where is this function called? I can't find any caller in the
On Tue, Jul 08, 2014 at 03:05:56PM -0700, Paul E. McKenney wrote:
Fair point. This would be a kthread_bind_housekeeping(), then.
Hmm, after all this should only be needed for kthreads so yeah.
But I need to create a kthread_bind_mask() or some such that acts like
kthread_bind(), but which
On Fri, May 02, 2014 at 11:26:24PM +0200, Thomas Gleixner wrote:
Russell reported, that irqtime_account_idle_ticks() takes ages due to:
for (i = 0; i < ticks; i++)
irqtime_account_process_tick(current, 0, rq);
It's sad that this code was written way _AFTER_ the NOHZ
On Tue, Jul 08, 2014 at 09:09:25PM -0700, Joe Perches wrote:
On Tue, 2014-07-08 at 15:25 -0700, Paul E. McKenney wrote:
On Tue, Jul 08, 2014 at 03:05:16PM -0700, Joe Perches wrote:
[]
I still think the concept is pretty useless and it's
just a duplication of M:, which isn't anything
));
towards the end of __hrtimer_start_range_ns().
Suggested-by: Frederic Weisbecker fweis...@gmail.com
Signed-off-by: Viresh Kumar viresh.ku...@linaro.org
---
kernel/hrtimer.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
index 3ab2899
On Wed, Jul 09, 2014 at 11:30:41PM +0200, Thomas Gleixner wrote:
On Wed, 9 Jul 2014, Viresh Kumar wrote:
So your patch series drops active hrtimer checks after adding it,
according to your subject line.
Quite useful to drop something after adding it, right?
hrtimer_start*() family
.
Thanks.
Thanks,
Lai
On 05/17/2014 12:16 AM, Frederic Weisbecker wrote:
So in this version I actually save the cpumask belonging to wq (before
it's intersected against the low level cpumask) in its unbound attrs.
But the attrs passed to pwq and worker pools have the low level
. This means
that the general mechanism to control worker thread
cpu use by Frederic Weisbecker is necessary to
restrict the shepherd thread to the cpus not used
for low latency tasks. Hopefully that is ready to be
merged soon. No need anymore to have a specific
cpu be the housekeeper cpu
On Fri, Jul 11, 2014 at 06:35:03AM -0700, Paul E. McKenney wrote:
From: Paul E. McKenney paul...@linux.vnet.ibm.com
Enabling NO_HZ_FULL currently has the side effect of enabling callback
offloading on all CPUs. This results in lots of additional rcuo kthreads,
and can also increase context
On Fri, Jul 11, 2014 at 08:56:04AM -0500, Christoph Lameter wrote:
On Fri, 11 Jul 2014, Frederic Weisbecker wrote:
@@ -1228,20 +1244,105 @@ static const struct file_operations proc
#ifdef CONFIG_SMP
static DEFINE_PER_CPU(struct delayed_work, vmstat_work);
int sysctl_stat_interval
On Fri, Jul 11, 2014 at 10:17:41AM -0500, Christoph Lameter wrote:
On Fri, 11 Jul 2014, Frederic Weisbecker wrote:
Converted what? We still need to keep a cpumask around that tells us which
processors have vmstat running and which do not.
Converted to cpumask_var_t.
I mean we
On Fri, Jul 11, 2014 at 01:10:41PM -0500, Christoph Lameter wrote:
On Tue, 8 Jul 2014, Frederic Weisbecker wrote:
I was figuring that a fair number of the kthreads might eventually
be using this, not just for the grace-period kthreads.
Ok makes sense. But can we just rename
On Fri, Jul 11, 2014 at 11:45:28AM -0700, Paul E. McKenney wrote:
On Fri, Jul 11, 2014 at 08:25:43PM +0200, Frederic Weisbecker wrote:
On Fri, Jul 11, 2014 at 01:10:41PM -0500, Christoph Lameter wrote:
On Tue, 8 Jul 2014, Frederic Weisbecker wrote:
I was figuring that a fair number
On Fri, Jul 11, 2014 at 02:05:08PM -0500, Christoph Lameter wrote:
On Fri, 11 Jul 2014, Frederic Weisbecker wrote:
That would imply that all no-nohz processors are housekeeping? So all
processors with a tick are housekeeping?
Well, now that I think about it again, I would really like
On Fri, Jul 11, 2014 at 12:08:16PM -0700, Paul E. McKenney wrote:
On Fri, Jul 11, 2014 at 08:57:33PM +0200, Frederic Weisbecker wrote:
On Fri, Jul 11, 2014 at 11:45:28AM -0700, Paul E. McKenney wrote:
On Fri, Jul 11, 2014 at 08:25:43PM +0200, Frederic Weisbecker wrote:
On Fri, Jul 11
On Fri, Jul 11, 2014 at 12:43:14PM -0700, Paul E. McKenney wrote:
On Fri, Jul 11, 2014 at 09:26:14PM +0200, Frederic Weisbecker wrote:
On Fri, Jul 11, 2014 at 12:08:16PM -0700, Paul E. McKenney wrote:
On Fri, Jul 11, 2014 at 08:57:33PM +0200, Frederic Weisbecker wrote:
On Fri, Jul 11
On Fri, Jul 11, 2014 at 01:35:13PM -0700, Paul E. McKenney wrote:
On Fri, Jul 11, 2014 at 09:11:15PM +0200, Frederic Weisbecker wrote:
On Fri, Jul 11, 2014 at 02:05:08PM -0500, Christoph Lameter wrote:
On Fri, 11 Jul 2014, Frederic Weisbecker wrote:
That would imply that all no-nohz
Hi,
It's a 2nd set that fixes some missing dyntick kicks in the timer's code.
This new version also handles missing kicks in the hrtimers subsystem.
The patches are also available at:
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
timers/missing-kick-v2
to a hrtimer's object
'cpu_base' so that the kick can be centralized there.
So let's store it in the 'struct hrtimer_cpu_base' to resolve the CPU
without overhead. It is set once at CPU's online notification.
Signed-off-by: Viresh Kumar viresh.ku...@linaro.org
Signed-off-by: Frederic Weisbecker fweis
hrtimer_reprogram() and can be dropped.
Signed-off-by: Viresh Kumar viresh.ku...@linaro.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
kernel/hrtimer.c | 23 ---
1 file changed, 8 insertions(+), 15 deletions(-)
diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
index 5f30917
hrtimer requires tick rescheduling, like timer list timers do.
Signed-off-by: Viresh Kumar viresh.ku...@linaro.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
kernel/hrtimer.c | 27 +++
1 file changed, 19 insertions(+), 8 deletions(-)
diff --git a/kernel/hrtimer.c
that it is well handled for all sorts of timer
enqueues. Even timer migration is covered, so that a full dynticks target
is correctly kicked as needed when timers migrate to it.
Signed-off-by: Viresh Kumar viresh.ku...@linaro.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
kernel/timer.c