2013/4/25 Frederic Weisbecker :
> 2013/4/25 Ingo Molnar :
>>
>> * Frederic Weisbecker wrote:
>>
>>> 2013/4/25 Ingo Molnar :
>>> >
>>> > * Frederic Weisbecker wrote:
>>> >
>>> >> > depends on VIRT_CPU_ACCO
CONFIG_64BITS
Thanks,
Frederic
---
Frederic Weisbecker (1):
nohz: Select VIRT_CPU_ACCOUNTING_GEN from full dynticks config
Kevin Hilman (1):
cputime_nsecs: use math64.h for nsec resolution conversion helpers
include/asm-generic/cputime_nsecs.h | 28
for each cputime accounting choice.
Reported-by: Ingo Molnar
Signed-off-by: Frederic Weisbecker
Cc: Christoph Lameter
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Kevin Hilman
Cc: Li Zhong
Cc: Paul E. McKenney
Cc: Paul Gortmaker
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Thomas Gleixner
---
init/K
Cc: Kevin Hilman
Cc: Li Zhong
Cc: Paul E. McKenney
Cc: Paul Gortmaker
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Thomas Gleixner
Signed-off-by: Frederic Weisbecker
---
include/asm-generic/cputime_nsecs.h | 28 +++-
1 files changed, 19 insertions(+), 9 deletions(-)
diff
On Fri, Apr 26, 2013 at 08:39:56AM -0700, Paul E. McKenney wrote:
> > config VIRT_CPU_ACCOUNTING_GEN
> > bool "Full dynticks CPU time accounting"
> > - depends on HAVE_CONTEXT_TRACKING && 64BIT
> > + depends on HAVE_CONTEXT_TRACKING && 64BIT && NO_HZ_FULL
>
> Do you really want this chang
2013/4/26 Sedat Dilek :
> On Fri, Apr 26, 2013 at 8:22 PM, Tejun Heo wrote:
>> On Fri, Apr 26, 2013 at 07:40:20PM +0200, Sedat Dilek wrote:
>>> Oops, NULL-pointer-deref [ __queue_work() ]
>>>
>>> [ 25.974932] BUG: unable to handle kernel NULL pointer dereference
>>> at 0100
>>> [ 2
2013/4/26 Frederic Weisbecker :
> On Fri, Apr 26, 2013 at 08:39:56AM -0700, Paul E. McKenney wrote:
>> > config VIRT_CPU_ACCOUNTING_GEN
>> > bool "Full dynticks CPU time accounting"
>> > - depends on HAVE_CONTEXT_TRACKING && 64BIT
>
2013/4/11 Paul E. McKenney :
> From: "Paul E. McKenney"
>
> Signed-off-by: Paul E. McKenney
> Cc: Frederic Weisbecker
> Cc: Steven Rostedt
> Cc: Borislav Petkov
> Cc: Arjan van de Ven
> Cc: Kevin Hilman
> Cc: Christoph Lameter
> ---
There have been
2013/4/27 Li Zhong :
> I saw following error when testing the latest nohz code on Power:
>
> [ 85.295384] BUG: using smp_processor_id() in preemptible [] code:
> rsyslogd/3493
> [ 85.295396] caller is .tick_nohz_task_switch+0x1c/0xb8
> [ 85.295402] Call Trace:
> [ 85.295408] [c
On Tue, Apr 23, 2013 at 09:45:23PM +0900, Tetsuo Handa wrote:
> CONFIG_NO_HZ=y can cause idle/iowait values to decrease.
>
> If /proc/stat is monitored with a short interval (e.g. 1 or 2 secs) using
> sysstat package, sar reports bogus %idle/iowait values because sar expects
> that idle/iowait val
On Sat, Apr 27, 2013 at 04:45:37PM +0200, Oleg Nesterov wrote:
> On 04/26, Oleg Nesterov wrote:
>
> > On 04/26, H. Peter Anvin wrote:
> > >
> > > On 04/26/2013 09:38 AM, Oleg Nesterov wrote:
> > > >
> > > > - do_debug:
> > > >
> > > > dr6 &= ~DR6_RESERVED;
> > > >
> > > >
2013/4/27 Olivier Langlois :
>
>
> Forbids the cputimer to drift ahead of its process clock by
> blocking its update when a tick occurs while an autoreaping task
> is currently in do_exit() between the call to release_task() and
> its final call to schedule().
>
> Any task stats update after having
> The patch also adds another simple helper, ptrace_fill_bp_fields(),
> to factor out the arch_bp_generic_fields() logic in register/modify.
>
> Signed-off-by: Oleg Nesterov
Acked-by: Frederic Weisbecker
--
To unsubscribe from this list: send the line "unsubscribe lin
-tests,
> see https://bugzilla.redhat.com/show_bug.cgi?id=660204.
>
> Reported-by: Jan Kratochvil
> Signed-off-by: Oleg Nesterov
Acked-by: Frederic Weisbecker
>
> Signed-off-by: Oleg Nesterov
Acked-by: Frederic Weisbecker
On Thu, Apr 18, 2013 at 08:44:25PM +0200, Oleg Nesterov wrote:
> Change ptrace_detach() to call flush_ptrace_hw_breakpoint(child).
> This frees the slots for non-ptrace PERF_TYPE_BREAKPOINT users, and
> this ensures that the tracee won't be killed by SIGTRAP triggered by
> the active breakpoints.
>
2013/3/21 Gleb Natapov :
> Isn't it simpler for kernel/context_tracking.c to define empty
> __guest_enter()/__guest_exit() if !CONFIG_KVM.
That doesn't look right. Off-cases are usually handled from the
headers, right? So that we avoid ifdeffery ugliness in core code.
2013/3/24 Ingo Molnar :
>
> * Frederic Weisbecker wrote:
>
>> Hi Ingo,
>>
>> This settles the initial ground to start a special full dynticks tree in -tip
>> that we can iterate incrementally to accelerate the development.
>> It is based on tip:sched/cor
2013/3/25 Peter Zijlstra :
> On Fri, 2013-03-22 at 14:54 +0100, Frederic Weisbecker wrote:
>> And I have to say this patch is going to be very useful for the full
>> dynticks tree. We are happy to get rid of that tick hook.
>
> I'm sorry to have to disappoint, but th
2013/3/25 Christoph Lameter :
> On Fri, 22 Mar 2013, Paul E. McKenney wrote:
>
>> On Fri, Mar 22, 2013 at 02:38:58PM +, Christoph Lameter wrote:
>> > On Thu, 21 Mar 2013, Paul E. McKenney wrote:
>> >
>> > > So, how long of busy periods are you contemplating for your SCHED_FIFO
>> > > threads?
2013/3/25 Christoph Lameter :
> On Mon, 25 Mar 2013, Frederic Weisbecker wrote:
>
>> > The vm kernel threads do no useful work if no system calls are being done.
>> > If there is no kernel action then they can be deferred indefinitely.
>> >
>>
>&
2013/3/25 Paul E. McKenney :
> On Sun, Mar 24, 2013 at 03:46:40PM +0100, Frederic Weisbecker wrote:
>> 2013/3/24 Ingo Molnar :
>> >
>> > * Frederic Weisbecker wrote:
>> >
>> >> Hi Ingo,
>> >>
>> >> This settles the initial g
2013/3/26 Chen Gang :
> Hello Maintainers:
>
> compiling with EXTRA_CFLAGS=-W:
> make V=1 EXTRA_CFLAGS=-W ARCH=arm s3c2410_defconfig
> make V=1 EXTRA_CFLAGS=-W ARCH=arm menuconfig
> set 'arm-linux-gnu-' for cross chain prefix
> make V=1 EXTRA_CFLAGS=-W ARCH=arm
>
> it will rep
2013/3/26 Ingo Molnar :
>
> * Frederic Weisbecker wrote:
>
>> > That way I will be able to test it automatically via randconfig and
>> > such.
>>
>> Sure, I'm adding such an option.
>>
>> > My next question/request after that would be:
2013/3/26 Stanislaw Gruszka :
> On Mon, Mar 18, 2013 at 03:49:02AM -0700, tip-bot for Frederic Weisbecker
> wrote:
>> Commit-ID: d9a3c9823a2e6a543eb7807fb3d15d8233817ec5
>> Gitweb:
>> http://git.kernel.org/tip/d9a3c9823a2e6a543eb7807fb3d15d8233817ec5
>> A
2013/3/25 Paul E. McKenney :
> On Mon, Mar 25, 2013 at 06:12:12PM +0100, Frederic Weisbecker wrote:
>> 2013/3/25 Paul E. McKenney :
>> > On Sun, Mar 24, 2013 at 03:46:40PM +0100, Frederic Weisbecker wrote:
>> >> 2013/3/24 Ingo Molnar :
>> >&
linux-dynticks.git
timers/nohz
Thanks.
Frederic Weisbecker (4):
nohz: Force boot CPU outside full dynticks range
nohz: Print final full dynticks CPUs range on boot
nohz: Ensure full dynticks CPUs are RCU nocbs
nohz: New option to force all CPUs in full dynticks range
Documentation/k
ckier solution later, especially for aSMP
architectures that need to assign housekeeping tasks to arbitrary
low power CPUs.
But it's still first pass KISS time for now.
Signed-off-by: Frederic Weisbecker
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben
Given that we are applying a few restrictions on the
full dynticks CPUs range (boot CPU excluded, then
soon the RCU nocb subset requirement), let's print
the final resulting range of full dynticks CPUs to
the user so that he knows what's really going to run.
Signed-off-by: Frederic Weis
is checked early at boot time, before any CPU has the opportunity
to stop its tick.
Suggested-by: Steven Rostedt
Signed-off-by: Frederic Weisbecker
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Kevin H
Provide a new kernel config that forces all CPUs to be part
of the full dynticks range, except the boot one for timekeeping.
This is helpful for those who don't need a fine-grained range
of full dynticks CPUs and also for automated testing.
Suggested-by: Ingo Molnar
Signed-off-by: Fre
Provide an extended version of div64_u64() that
also returns the remainder of the division.
We are going to need this to refine the cputime
scaling code.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Steven Rostedt
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Andrew Morton
ore adding the cputime
of an exiting thread to the signal struct. And then we'll need to
scale one-by-one the live threads cputime in thread_group_cputime(). The
drawback may be a slightly slower code on exit time.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Steven Rostedt
nting freezes
after a week or so of intense cpu-bound workload. This set tries to fix the
issue
by reducing the risk of multiplication overflow in the cputime scaling code.
Thanks.
---
Frederic Weisbecker (2):
math64: New div64_u64_rem helper
sched: Lower chances of cputime scaling overflow
in
2013/3/12 Frederic Weisbecker :
> 2013/3/7 Stanislaw Gruszka :
>>> + } else if (!total) {
>>> stime = rtime;
>>
>> I would prefer stime = rtime/2 (hence utime will be rtime/2 too), but this
>> is not so important.
>
> I can do that.
2013/3/14 Stanislaw Gruszka :
> On Thu, Mar 14, 2013 at 08:14:27AM +0100, Ingo Molnar wrote:
>> Hm, is this a new bug? When was it introduced and is upstream affected as
>> well?
>
> Commit 0cf55e1ec08bb5a22e068309e2d8ba1180ab4239 started to use scaling
> for the whole thread group, which increased the chances
Ingo,
Please pull the following printk regression fixes from:
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
printk/urgent
HEAD: c45a372bdfc0115147afb7eea11313dc057c817e
Thanks.
---
Frederic Weisbecker (1):
printk: Provide a wake_up_klogd() off-case
James
sed by the following
commit:
Commit 00b42959106a9ca1c2899e591ae4e9a83ad6af05 ("irq_work: Don't stop
the tick with pending works") merged in v3.9-rc1.
Signed-off-by: James Hogan
Cc: Frederic Weisbecker
Cc: Steven Rostedt
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc
ed code in printk.c that should be moved under
CONFIG_PRINTK. But for now, focus on a minimal fix as we passed
the merge window already.
Reported-by: James Hogan
Cc: James Hogan
Cc: Steven Rostedt
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Andrew Morton
Signed-off-by: Frederic Weisbecker
---
inc
2013/3/14 Andrew Morton :
> On Thu, 14 Mar 2013 15:26:29 +0100 Frederic Weisbecker
> wrote:
>
>> wake_up_klogd() is useless when CONFIG_PRINTK=n because
>> neither printk() nor printk_sched() are in use and there
>> are actually no waiter on log_wait waitqueue. It
2013/3/14 Paul Gortmaker :
> On Thu, Mar 14, 2013 at 4:39 PM, Andrew Morton
> wrote:
>> On Thu, 14 Mar 2013 15:26:29 +0100 Frederic Weisbecker
>> wrote:
>>
>>> wake_up_klogd() is useless when CONFIG_PRINTK=n because
>>> neither printk() nor printk_sch
Hi,
In this version, just a few warnings fixed due to missing type updates. And also
a selftest for quick basic breakage checks.
The branch is pullable from:
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
timers/posix-cpu-timers
Thanks.
Frederic Weisbecker
Consolidate the common code amongst per thread and per process
timers list on tick time.
List traversal, expiry check and subsequent updates can be
shared in a common helper.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Andrew Morton
Cc
breakages while hacking
on this subsystem.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Andrew Morton
Cc: Steven Rostedt
---
tools/testing/selftests/Makefile |1 +
tools/testing/selftests/timers/Makefile |8 +
tools
. cputime_t can
always fit into it.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Andrew Morton
Cc: Ingo Molnar
Cc: Oleg Nesterov
---
include/linux/posix-timers.h | 16 ++-
kernel/posix-cpu-timers.c| 266
Cleaning up the posix cpu timers on task exit shares
some common code among timer list types, most notably the
list traversal and expiry time update.
Unify this in a common helper.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Andrew
2013/3/15 Oleg Nesterov :
> On 03/15, Ming Lei wrote:
>>
>> On Fri, Mar 15, 2013 at 9:46 PM, Oleg Nesterov wrote:
>> > On 03/15, Ming Lei wrote:
>> >>
>> >> On Fri, Mar 15, 2013 at 12:24 AM, Oleg Nesterov wrote:
>> >> > static inline int atomic_inc_unless_negative(atomic_t *p)
>> >> > {
>> >> >
2013/3/15 Oleg Nesterov :
> On 03/15, Frederic Weisbecker wrote:
>>
>> > The lack of the barrier?
>> >
>> > I thought about this, this should be fine? atomic_add_unless() has the same
>> > "problem", but this is documented in atomic_ops.txt
2013/3/18 Viresh Kumar :
> In order to save power, it would be useful to schedule lightweight work on
> cpus
> that aren't IDLE instead of waking up an IDLE one.
>
> By idle cpu (from scheduler's perspective) we mean:
> - Current task is idle task
> - nr_running == 0
> - wake_list is empty
>
> Th
2013/3/18 Viresh Kumar :
> On Mon, Mar 18, 2013 at 9:09 PM, Frederic Weisbecker
> wrote:
>> 2013/3/18 Viresh Kumar :
>
>>> +static inline int sched_select_cpu(unsigned int sd_flags)
>>> +{
>>> + return raw_smp_processor_id();
>>
>>
On Mon, Jul 16, 2012 at 03:15:56PM -0700, Paul E. McKenney wrote:
> On Wed, Jul 11, 2012 at 08:26:29PM +0200, Frederic Weisbecker wrote:
> > Hi,
> >
> > There are significant changes this time. I reverted back to using
> > a TIF flag to hook on syscalls slow path an
On Tue, Jul 17, 2012 at 06:12:28PM +0800, Jovi Zhang wrote:
> From 16ed13ee9098ae01705e8456005d1ad6d9909128 Mon Sep 17 00:00:00 2001
> From: Jovi Zhang
> Date: Wed, 18 Jul 2012 01:16:23 +0800
> Subject: [PATCH] uprobe: checking probe event include directory
>
> Currently below command run success
On Wed, Jul 18, 2012 at 04:00:46PM +0530, Naveen N. Rao wrote:
> Please find v2 of the patch from Prasad, based on Peter Zijlstra's
> feedback. This applies on top of v3.5-rc7. This has been tested and
> found to work fine by Edjunior.
>
> Regards,
> Naveen
> __
>
> From: K.Prasad
>
> While
swapper 0 [001] 554.286976: sched_stat_wait: comm=perf pid=1465
delay=0 [ns]
swapper 0 [001] 554.286983: sched_switch: prev_comm=swapper/1
prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=perf
[...]
Signed-off-by: Frederic Weisbecker
Cc: Arnaldo Carvalho de Melo
Cc
ioctl
cmd_record
run_builtin
main
__libc_start_main
Signed-off-by: Frederic Weisbecker
Cc: Arnaldo Carvalho de Melo
Cc: David Ahern
Cc: Ingo Molnar
Cc: Jiri Olsa
Cc: Namhyung Kim
Cc: Peter Zijlstra
Include the omitted number of characters printed for the first entry.
Not that it really matters because nobody seem to care about the number
of printed characters for now. But just in case.
Signed-off-by: Frederic Weisbecker
Cc: Arnaldo Carvalho de Melo
Cc: David Ahern
Cc: Ingo Molnar
Cc
In case you wonder. This doesn't fix a regression so this is
3.6 material.
On Mon, Apr 21, 2014 at 03:24:57PM +0530, Viresh Kumar wrote:
> diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
> index 6558b7a..9e9ddba 100644
> --- a/kernel/time/tick-sched.c
> +++ b/kernel/time/tick-sched.c
> @@ -108,7 +108,6 @@ static ktime_t tick_init_jiffy_update(void)
>
Hi Lai,
So actually I'll need to use apply_workqueue_attrs() on the next patchset. So
I'm considering this patch.
Some comments below:
On Tue, Apr 15, 2014 at 05:58:08PM +0800, Lai Jiangshan wrote:
> From 534f1df8a5a03427b0fc382150fbd34e05648a28 Mon Sep 17 00:00:00 2001
> From: Lai Jiangshan
> D
s CPU's run-queue had tasks waiting on I/O, then this idle
> period's duration will be added to iowait_sleeptime.
> This, along with proper SMP synchronization, fixes the bug where iowait
> counts could go backwards.
>
> Signed-off-by: Denys Vlasenko
> Cc: Frederic Weisb
>
> If iowait_exittime is set, then (iowait_exittime - idle_entrytime)
> gets accounted as iowait, and the remaining (now - iowait_exittime)
> as "true" idle.
>
> Run-tested: /proc/stats no longer go backwards.
>
> Signed-off-by: Denys Vlasenko
> Cc: Frederic Weisbecker
Hi Viresh,
On Thu, Apr 03, 2014 at 12:39:37PM +0530, Viresh Kumar wrote:
> Nothing much, just some nitpicks :)
Thanks for your reviews, but I'm eventually dropping these two patches :)
Kumar
Signed-off-by: Frederic Weisbecker
---
kernel/workqueue.c | 76 +++---
1 file changed, 67 insertions(+), 9 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 387ce38..564e034 100644
--- a/kernel/workqueue.c
+++ b/kernel/wor
ide a version of apply_workqueue_attrs() that can be
called when the pool is already locked.
Suggested-by: Tejun Heo
Cc: Christoph Lameter
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
---
k
prefer to post the current state now in case I'm wandering off.
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
core/workqueue-v3
Thanks,
Frederic
---
Frederic Weisbecker (4):
workqueue: Create low-level unbound workqueues cpumask
workque
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
---
kernel/workqueue.c | 29 +++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel
Cc: Christoph Lameter
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
---
kernel/workqueue.c | 63 --
1 file changed, 61 insertions(+), 2 deletions
On Mon, May 19, 2014 at 04:15:31PM -0400, Tejun Heo wrote:
> Hello,
>
> On Sat, May 17, 2014 at 03:41:55PM +0200, Frederic Weisbecker wrote:
> > > > - last_pool = get_work_pool(work);
> > > > + last_pool = wq->flags & __WQ_ORDE
On Sun, May 18, 2014 at 10:34:01PM -0700, Paul E. McKenney wrote:
> On Mon, May 19, 2014 at 04:44:41AM +0200, Mike Galbraith wrote:
> > On Sun, 2014-05-18 at 08:58 -0700, Paul E. McKenney wrote:
> > > On Sun, May 18, 2014 at 10:36:41AM +0200, Mike Galbraith wrote:
> > > > On Sat, 2014-05-17 at 22:
On Tue, May 20, 2014 at 10:35:34AM -0400, Tejun Heo wrote:
> Hello,
>
> On Tue, May 20, 2014 at 04:32:31PM +0200, Frederic Weisbecker wrote:
> > > But that's the same for other pwqs too. Back-to-back requeueing will
> > > hold back pwq switching on any work
On Tue, May 20, 2014 at 08:53:24AM -0700, Paul E. McKenney wrote:
> On Tue, May 20, 2014 at 04:53:52PM +0200, Frederic Weisbecker wrote:
> > I'm not sure that I really understand what you want here.
> >
> > The current state of the art is that when you enable CONF
On Fri, May 16, 2014 at 04:50:50PM -0400, Tejun Heo wrote:
> Hello, Frederic.
>
> On Fri, May 16, 2014 at 06:16:55PM +0200, Frederic Weisbecker wrote:
> > @@ -3643,6 +3643,7 @@ static int apply_workqueue_attrs_locked(struct
> > workqueue_struct *wq,
> > {
>
On Tue, May 20, 2014 at 03:56:56PM -0400, Tejun Heo wrote:
> > > Hmmm... but there's nothing which makes rolling back more likely to
> > > succeed compared to the original applications. It's gonna allocate
> > > more pwqs. Triggering WARN_ON_ONCE() seems weird.
> >
> > Yeah but that's the least
On Tue, May 13, 2014 at 07:09:42PM +0200, Peter Zijlstra wrote:
> On Tue, May 13, 2014 at 04:38:37PM +0200, Frederic Weisbecker wrote:
> > We prepare for executing the full nohz kick through an irq work. But
> > if we do this as is, we'll run into conflicting tick locking: th
On Tue, May 13, 2014 at 10:48:02PM +0200, Peter Zijlstra wrote:
> On Tue, May 13, 2014 at 09:33:29PM +0200, Frederic Weisbecker wrote:
> > On Tue, May 13, 2014 at 07:09:42PM +0200, Peter Zijlstra wrote:
> > > On Tue, May 13, 2014 at 04:38:37PM +0200, Frederic Weisbecker wrote:
imers/nohz-irq-work-v3
Thanks,
Frederic
---
Frederic Weisbecker (3):
irq_work: Implement remote queueing
nohz: Move full nohz kick to its own IPI
nohz: Use IPI implicit full barrier against rq->nr_running r/w
include/linux/irq_work.h | 2 ++
include/linux/tick.h
n
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
---
include/linux/irq_work.h | 2 ++
kernel/irq_work.c| 19 ++-
kernel/smp.c | 4
3 files changed, 24 insertions(+), 1 deletion(-)
diff --
scheduler IPI that the nohz code was abusing
for its cool "callable anywhere/anytime" properties.
Cc: Andrew Morton
Cc: Ingo Molnar
Cc: Kevin Hilman
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
---
include/l
lstra
Cc: Thomas Gleixner
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
---
kernel/sched/core.c | 9 +
kernel/sched/sched.h | 10 --
2 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fb6dfad..a06cac1 100644
---
On Fri, May 09, 2014 at 02:14:10PM +0530, Viresh Kumar wrote:
> On 23 April 2014 16:42, Viresh Kumar wrote:
> > On 15 April 2014 15:00, Frederic Weisbecker wrote:
> >> Ok, I'm a bit buzy with a conference right now but I'm going to summarize
> >> that
> &
On Wed, May 14, 2014 at 11:06:29AM +0200, Peter Zijlstra wrote:
> On Wed, May 14, 2014 at 12:25:54AM +0200, Frederic Weisbecker wrote:
> > irq work currently only supports local callbacks. However its code
> > is mostly ready to run remote callbacks and we have some potential use
On Wed, May 14, 2014 at 11:09:03AM +0200, Peter Zijlstra wrote:
> On Wed, May 14, 2014 at 12:25:56AM +0200, Frederic Weisbecker wrote:
> > @@ -670,10 +670,11 @@ bool sched_can_stop_tick(void)
> >
> > rq = this_rq();
> >
> > - /* Make sure rq-&g
On Wed, May 14, 2014 at 01:54:06PM +0200, Peter Zijlstra wrote:
> On Wed, May 14, 2014 at 01:38:14PM +0200, Frederic Weisbecker wrote:
> > > > +bool irq_work_queue_on(struct irq_work *work, int cpu)
> > > > +{
> > > > + /* Only queue if no
On Wed, May 14, 2014 at 02:41:50PM +0200, Peter Zijlstra wrote:
> On Wed, May 14, 2014 at 02:11:25PM +0200, Frederic Weisbecker wrote:
> > > I don't think it is, most apic calls do apic_wait_icr_idle() then the
> > > apic op, if an NMI happens in between and writ
On Thu, May 15, 2014 at 12:12:17PM +0530, Srivatsa S. Bhat wrote:
> On 05/13/2014 09:08 PM, Frederic Weisbecker wrote:
> > On Mon, May 12, 2014 at 02:06:49AM +0530, Srivatsa S. Bhat wrote:
> >> Today the smp-call-function code just prints a warning if we get an IPI on
> &g
base. Thanks to Lai!
Thanks,
Frederic
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
core/workqueue-v5
---
Frederic Weisbecker (4):
workqueue: Reorder sysfs code
workqueue: Create low-level unbound workqueues cpumask
workqueue: Split a
Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
---
kernel/workqueue.c | 81 --
1 file changed, 78 insertions(+), 3 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index
her in the file, right above alloc_workqueue_key()
which reference it.
Suggested-by: Tejun Heo
Cc: Christoph Lameter
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
---
kernel/workque
ide a version of apply_workqueue_attrs() that can be
called when the pool is already locked.
Suggested-by: Tejun Heo
Cc: Christoph Lameter
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
---
k
Cc: Kevin Hilman
Cc: Lai Jiangshan
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
---
kernel/workqueue.c | 29 +++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel
McKenney
Cc: Tejun Heo
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
---
kernel/workqueue.c | 69 +-
1 file changed, 42 insertions(+), 27 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c3f076f..c68e84f
On Fri, May 16, 2014 at 04:12:25PM -0400, Tejun Heo wrote:
> Hello,
>
> On Fri, May 16, 2014 at 06:16:51PM +0200, Frederic Weisbecker wrote:
> > From: Lai Jiangshan
> >
> > Changing the attributions of a workqueue imply the addition of new pwqs
> > to repl
>
> If iowait_exittime is set, then (iowait_exittime - idle_entrytime)
> gets accounted as iowait, and the remaining (now - iowait_exittime)
> as "true" idle.
>
> Run-tested: /proc/stat counters no longer go backwards.
>
> Signed-off-by: Denys Vlasenko
> Cc: Freder
On Fri, Apr 25, 2014 at 08:57:29PM +0200, Denys Vlasenko wrote:
> Signed-off-by: Denys Vlasenko
> Cc: Frederic Weisbecker
> Cc: Hidetoshi Seto
> Cc: Fernando Luis Vazquez Cao
> Cc: Tetsuo Handa
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc:
On Sun, Apr 27, 2014 at 02:08:20PM +0200, Ingo Molnar wrote:
>
> * Richard Yao wrote:
>
> > Stack traces are generated by scanning the stack and interpreting
> > anything that looks like it could be a pointer to something. We do
> > not need to do this when we have frame pointers, but we do it
On Thu, Apr 24, 2014 at 10:48:32AM -0400, Tejun Heo wrote:
> On Thu, Apr 24, 2014 at 04:37:34PM +0200, Frederic Weisbecker wrote:
> > +static int apply_workqueue_attrs_locked(struct workqueue_struct *wq,
> > + const struct workque
On Thu, Apr 24, 2014 at 11:30:48AM -0400, Tejun Heo wrote:
> On Thu, Apr 24, 2014 at 04:37:35PM +0200, Frederic Weisbecker wrote:
> > +static int apply_workqueue_attrs_locked(struct workqueue_struct *wq,
> > + const struct workqueue_attrs *attrs)
On Thu, Apr 24, 2014 at 11:33:20AM -0400, Tejun Heo wrote:
> On Thu, Apr 24, 2014 at 04:37:36PM +0200, Frederic Weisbecker wrote:
> > Ordered unbound workqueues need some special care if we want to
> > modify their CPU affinity. These can't be simply handled through
>
On Thu, Apr 24, 2014 at 11:37:16AM -0400, Tejun Heo wrote:
> On Thu, Apr 24, 2014 at 04:37:33PM +0200, Frederic Weisbecker wrote:
> > Create a cpumask that limits the affinity of all unbound workqueues.
> > This cpumask is controlled through a file at the root of the workqueue
>