There is no need for the syscall slowpath if no CPU is full dynticks;
turn it into a nop in that case.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Thomas Gleixner t
-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Thomas Gleixner t...@linutronix.de
Cc: Peter Zijlstra pet...@infradead.org
Cc: Borislav Petkov b...@alien8.de
Cc: Li Zhong zh
short.
Fix vtime_account_user(), which wasn't complying with that rule.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Thomas Gleixner t...@linutronix.de
Cc: Peter Zijlstra
This can be useful to track all kernel/user round trips.
And it's also helpful to debug the context tracking subsystem.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Ingo Molnar mi...@kernel.org
Cc
Prepare for using a static key in the context tracking subsystem.
This will help optimize the off case for its many users:
* user_enter, user_exit, exception_enter, exception_exit, guest_enter,
guest_exit, vtime_*()
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Steven Rostedt rost
and dynticks
cputime accounting can be tested on the given arch.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Thomas Gleixner t...@linutronix.de
Cc: Peter Zijlstra pet
preempt_schedule() and preempt_schedule_context() open
code their preemptibility checks.
Use the standard API instead for consolidation.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Li Zhong zh...@linux.vnet.ibm.com
Cc: Paul E. McKenney paul
. Just keep in mind the raw context tracking
itself is still necessary everywhere.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Thomas Gleixner t...@linutronix.de
Cc
tracking even on CPUs
that are not in the full dynticks range. OTOH we can spare the
rcu_user_*() and vtime_user_*() calls there because the tick runs
on these CPUs and we can handle RCU state machine and cputime
accounting through it.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Steven
combinations
finally work.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Thomas Gleixner t...@linutronix.de
Cc: Peter Zijlstra pet...@infradead.org
Cc: Borislav Petkov
request to Ingo in a few days.
Thanks.
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
timers/nohz-3.12-preview-v3
---
Frederic Weisbecker (23):
sched: Consolidate open coded preemptible() checks
context_tracking: Fix guest accounting with native vtime
Update a stale comment from the old vtime era and document some
locking that might be non-obvious.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Thomas Gleixner t
On Thu, Aug 01, 2013 at 03:01:46PM +0200, Jiri Olsa wrote:
On Tue, Jul 23, 2013 at 02:31:00AM +0200, Frederic Weisbecker wrote:
In case of allocation failure, get_callchain_buffer() keeps the
refcount incremented for the current event.
As a result, when get_callchain_buffers() returns
On Thu, Aug 01, 2013 at 03:13:30PM +0200, Jiri Olsa wrote:
On Tue, Jul 23, 2013 at 02:31:01AM +0200, Frederic Weisbecker wrote:
Gather all the event accounting code to a single place,
once all the prerequisites are completed. This simplifies
the refcounting.
Original-patch-by: Peter
On Thu, Aug 01, 2013 at 03:29:34PM +0200, Jiri Olsa wrote:
On Tue, Jul 23, 2013 at 02:31:00AM +0200, Frederic Weisbecker wrote:
SNIP
if (event->attach_state & PERF_ATTACH_TASK)
static_key_slow_inc(&perf_sched_events.key);
if (event->attr.mmap
On Thu, Aug 01, 2013 at 03:32:17PM +0200, Jiri Olsa wrote:
On Thu, Aug 01, 2013 at 03:28:34PM +0200, Frederic Weisbecker wrote:
SNIP
also for following case:
count = atomic_inc_return(&nr_callchain_events);
if (WARN_ON_ONCE(count < 1)) {
err
On Thu, Aug 01, 2013 at 03:31:55PM +0200, Peter Zijlstra wrote:
On Thu, Aug 01, 2013 at 02:46:58PM +0200, Jiri Olsa wrote:
On Tue, Jul 23, 2013 at 02:31:04AM +0200, Frederic Weisbecker wrote:
This is going to be used by the full dynticks subsystem
as a finer-grained information to know
On Thu, Aug 01, 2013 at 03:54:01PM +0200, Jiri Olsa wrote:
On Thu, Aug 01, 2013 at 03:49:36PM +0200, Frederic Weisbecker wrote:
On Thu, Aug 01, 2013 at 03:32:17PM +0200, Jiri Olsa wrote:
On Thu, Aug 01, 2013 at 03:28:34PM +0200, Frederic Weisbecker wrote:
SNIP
also
On Thu, Aug 01, 2013 at 04:03:52PM +0200, Peter Zijlstra wrote:
On Thu, Aug 01, 2013 at 03:55:27PM +0200, Frederic Weisbecker wrote:
On Thu, Aug 01, 2013 at 03:31:55PM +0200, Peter Zijlstra wrote:
Where the freq thing is new and shiney, but we already had the other
two. Of those
On Thu, Aug 01, 2013 at 04:06:15PM +0200, Peter Zijlstra wrote:
On Thu, Aug 01, 2013 at 04:03:52PM +0200, Peter Zijlstra wrote:
On Thu, Aug 01, 2013 at 03:55:27PM +0200, Frederic Weisbecker wrote:
On Thu, Aug 01, 2013 at 03:31:55PM +0200, Peter Zijlstra wrote:
Where the freq thing
On Thu, Aug 01, 2013 at 03:51:02PM +0200, Jiri Olsa wrote:
On Thu, Aug 01, 2013 at 03:42:28PM +0200, Frederic Weisbecker wrote:
On Thu, Aug 01, 2013 at 03:29:34PM +0200, Jiri Olsa wrote:
On Tue, Jul 23, 2013 at 02:31:00AM +0200, Frederic Weisbecker wrote:
SNIP
On Sun, Oct 20, 2013 at 09:34:56AM +0200, Andreas Mohr wrote:
Hi,
just wanted to report that this capricious open-coded (ok, lone-coded :)
converter:
+static inline ktime_t us_to_ktime(u64 us)
+{
+ static const ktime_t ktime_zero = { .tv64 = 0 };
+
+ return
On Sun, Oct 20, 2013 at 01:10:06PM +0200, Andreas Mohr wrote:
Hi,
+u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time)
+{
+ ktime_t iowait, delta = { .tv64 = 0 };
+ struct rq *rq = cpu_rq(cpu);
+ ktime_t now = ktime_get();
+ unsigned int seq;
+
+ do {
+
On Mon, Nov 04, 2013 at 06:52:45PM +0100, Ingo Molnar wrote:
* Frederic Weisbecker fweis...@gmail.com wrote:
On Mon, Nov 04, 2013 at 08:05:00AM +0100, Ingo Molnar wrote:
* Davidlohr Bueso davidl...@hp.com wrote:
Btw, do you suggest using a high level tool such as perf
On Wed, Nov 06, 2013 at 09:30:46AM +0100, Ingo Molnar wrote:
* Namhyung Kim namhy...@kernel.org wrote:
Hi Ingo,
On Tue, 5 Nov 2013 12:58:02 +0100, Ingo Molnar wrote:
* Namhyung Kim namhy...@kernel.org wrote:
But the 'cumulative' (btw, I find this word a bit hard to type..) is
On Wed, Nov 06, 2013 at 12:47:01PM +0100, Ingo Molnar wrote:
* Namhyung Kim namhy...@kernel.org wrote:
On Wed, 6 Nov 2013 09:30:46 +0100, Ingo Molnar wrote:
* Namhyung Kim namhy...@kernel.org wrote:
Hi Ingo,
On Tue, 5 Nov 2013 12:58:02 +0100, Ingo Molnar wrote:
*
On Thu, Nov 07, 2013 at 12:21:11PM +0100, Thomas Gleixner wrote:
Mike,
On Thu, 7 Nov 2013, Mike Galbraith wrote:
On Thu, 2013-11-07 at 04:26 +0100, Mike Galbraith wrote:
On Wed, 2013-11-06 at 18:49 +0100, Thomas Gleixner wrote:
I bet you are trying to work around some of the
A few functions use remote per CPU access APIs when they
deal with local values.
Just do the right conversion to improve performance, code
readability and debug checks.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Thomas Gleixner t...@linutronix.de
Cc: Ingo Molnar mi...@kernel.org
Use a function with a meaningful name to check the global context
tracking state. static_key_false() is a bit confusing for reviewers.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Thomas Gleixner t...@linutronix.de
Cc: Ingo Molnar mi...@kernel.org
Cc: Peter Zijlstra pet
to the rearming common code
in posix_cpu_timer_schedule().
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Thomas Gleixner t...@linutronix.de
Cc: Ingo Molnar mi...@kernel.org
Cc: Peter Zijlstra pet...@infradead.org
Cc: Oleg Nesterov o...@redhat.com
Cc: Steven Rostedt rost...@goodmis.org
-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Thomas Gleixner t...@linutronix.de
Cc: Ingo Molnar mi...@kernel.org
Cc: Peter Zijlstra pet...@infradead.org
Cc: Oleg Nesterov o...@redhat.com
Cc: Steven Rostedt rost...@goodmis.org
---
kernel/posix-cpu-timers.c | 3 ++-
1 file changed, 2
---
Frederic Weisbecker (5):
nohz: Convert a few places to use local per cpu accesses
context_tracking: Wrap static key check into more intuitive function name
context_tracking: Rename context_tracking_active() to
context_tracking_cpu_is_enabled()
posix-timers: Spare workqueue
.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Thomas Gleixner t...@linutronix.de
Cc: Ingo Molnar mi...@kernel.org
Cc: Peter Zijlstra pet...@infradead.org
Cc: Oleg Nesterov o...@redhat.com
Cc: Steven Rostedt rost...@goodmis.org
---
include/linux/context_tracking_state.h | 9
From: Paul Gortmaker paul.gortma...@windriver.com
Signed-off-by: Paul Gortmaker paul.gortma...@windriver.com
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Thomas Gleixner t...@linutronix.de
Cc: Ingo Molnar mi...@kernel.org
Cc: Peter Zijlstra pet...@infradead.org
Cc: Oleg Nesterov o
2013/11/7 Jan Kara j...@suse.cz:
Provide a new irq work flag - IRQ_WORK_UNBOUND - meaning that it can be
processed on any CPU. This flag implies IRQ_WORK_LAZY so that things are
simple and we don't have to pick any particular cpu to do the work. We
just do the work from a timer tick on whichever
On Thu, Nov 07, 2013 at 11:19:04PM +0100, Jan Kara wrote:
On Thu 07-11-13 23:13:39, Frederic Weisbecker wrote:
But then, who's going to process that work if every CPU is idle?
Have a look into irq_work_queue(). There is:
/*
* If the work is not lazy or the tick is stopped
2013/11/7 Jan Kara j...@suse.cz:
A CPU can be caught in console_unlock() for a long time (tens of seconds
are reported by our customers) when other CPUs are using printk heavily
and serial console makes printing slow. Even though serial console drivers
call touch_nmi_watchdog(), this
On Thu, Nov 07, 2013 at 04:43:11PM +, Christoph Lameter wrote:
usermodehelper() threads can currently run on all processors.
This is an issue for low latency cores. Spawning a new thread causes
CPU holdoffs in the range of hundreds of microseconds to a few
milliseconds. Not good for cores
On Thu, Nov 07, 2013 at 11:50:34PM +0100, Jan Kara wrote:
On Thu 07-11-13 23:23:14, Frederic Weisbecker wrote:
On Thu, Nov 07, 2013 at 11:19:04PM +0100, Jan Kara wrote:
On Thu 07-11-13 23:13:39, Frederic Weisbecker wrote:
But then, who's going to process that work if every CPU is idle
On Thu, Nov 07, 2013 at 11:57:33PM +0100, Jan Kara wrote:
On Thu 07-11-13 23:43:52, Frederic Weisbecker wrote:
2013/11/7 Jan Kara j...@suse.cz:
A CPU can be caught in console_unlock() for a long time (tens of seconds
are reported by our customers) when other CPUs are using printk heavily
On Thu, Nov 07, 2013 at 06:37:17PM -0500, Steven Rostedt wrote:
On Fri, 8 Nov 2013 00:21:51 +0100
Frederic Weisbecker fweis...@gmail.com wrote:
Offloading to a workqueue would be perhaps better, and writing to the serial
console could then be done with interrupts enabled, preemptible
On Thu, Nov 07, 2013 at 06:37:17PM -0500, Steven Rostedt wrote:
On Fri, 8 Nov 2013 00:21:51 +0100
Frederic Weisbecker fweis...@gmail.com wrote:
Ok I see now.
But then this irq_work based solution won't work if, say, you run in full
dynticks
mode. Also the hook on the timer
On Fri, Nov 08, 2013 at 06:03:31AM -0800, Paul E. McKenney wrote:
On Fri, Nov 08, 2013 at 02:26:28PM +0100, Mike Galbraith wrote:
On Fri, 2013-11-08 at 04:37 -0800, Paul E. McKenney wrote:
On Fri, Nov 08, 2013 at 08:31:20AM +0100, Mike Galbraith wrote:
On Thu, 2013-11-07 at 19:23 -0800,
On Fri, Nov 08, 2013 at 06:45:34AM -0800, Paul E. McKenney wrote:
On Fri, Nov 08, 2013 at 03:29:38PM +0100, Frederic Weisbecker wrote:
On Fri, Nov 08, 2013 at 06:03:31AM -0800, Paul E. McKenney wrote:
On Fri, Nov 08, 2013 at 02:26:28PM +0100, Mike Galbraith wrote:
On Fri, 2013-11-08
On Fri, Nov 08, 2013 at 03:53:47PM +0100, Mike Galbraith wrote:
On Fri, 2013-11-08 at 15:29 +0100, Frederic Weisbecker wrote:
On Fri, Nov 08, 2013 at 06:03:31AM -0800, Paul E. McKenney wrote:
On Fri, Nov 08, 2013 at 02:26:28PM +0100, Mike Galbraith wrote:
On Fri, 2013-11-08 at 04:37
On Fri, Nov 08, 2013 at 03:06:59PM +, Christoph Lameter wrote:
On Thu, 7 Nov 2013, Frederic Weisbecker wrote:
usermodehelper works are created via workqueues, right? And workqueues are
an issue as
well for those who want CPU isolation.
AFAICT usermodehelper can be called from
On Fri, Nov 08, 2013 at 05:05:35PM +, Christoph Lameter wrote:
On Fri, 8 Nov 2013, Frederic Weisbecker wrote:
But it looks like it always ends up calling a workqueue. Maybe I missed
something though.
Now we can argue that this workqueue seems to create kernel threads, which
On Wed, Oct 02, 2013 at 11:11:06AM -0500, suravee.suthikulpa...@amd.com wrote:
From: Jacob Shin jacob.w.s...@gmail.com
Implement hardware breakpoint address mask for AMD Family 16h and
above processors. CPUID feature bit indicates hardware support for
DRn_ADDR_MASK MSRs. These masks further
On Fri, Nov 08, 2013 at 03:06:11PM -0500, Vince Weaver wrote:
and again, this time after 600k successful syscalls or so.
This is on a core2 machine.
[ 1020.396002] [ cut here ]
[ 1020.396002] WARNING: CPU: 1 PID: 3036 at kernel/watchdog.c:245
watchdog_over)
[
On Fri, Nov 08, 2013 at 07:52:37PM +, Christoph Lameter wrote:
On Fri, 8 Nov 2013, Frederic Weisbecker wrote:
I understand, but why not solving that from the workqueue affinity? We want
to
solve the issue of unbound workqueues in CPU isolation anyway.
Sure if you can solve
On Fri, Nov 08, 2013 at 03:23:07PM -0500, Vince Weaver wrote:
On Fri, 8 Nov 2013, Frederic Weisbecker wrote:
There seems to be a loop that takes too long in intel_pmu_handle_irq(). Your
two
previous reports seemed to suggest that lbr is involved, but not this one.
I may be wrong but I
On Fri, Nov 08, 2013 at 04:15:21PM -0500, Vince Weaver wrote:
On Fri, 8 Nov 2013, Frederic Weisbecker wrote:
On Fri, Nov 08, 2013 at 03:23:07PM -0500, Vince Weaver wrote:
On Fri, 8 Nov 2013, Frederic Weisbecker wrote:
There seems to be a loop that takes too long
On Fri, Nov 08, 2013 at 04:15:21PM -0500, Vince Weaver wrote:
int main(int argc, char **argv) {
/* 1 */
/* fd = 82 */
memset(pe[82],0,sizeof(struct perf_event_attr));
pe[82].type=PERF_TYPE_TRACEPOINT;
pe[82].size=80;
pe[82].config=0x18;
I did some more testing
On Tue, Oct 01, 2013 at 08:55:16AM +0200, Ingo Molnar wrote:
* Frederic Weisbecker fweis...@gmail.com wrote:
On Mon, Sep 30, 2013 at 09:07:19AM -0700, Linus Torvalds wrote:
On Mon, Sep 30, 2013 at 7:55 AM, Frederic Weisbecker fweis...@gmail.com
wrote:
...
the chances
On Mon, Sep 30, 2013 at 05:02:47PM +0200, Frederic Weisbecker wrote:
Ingo, Thomas,
Please pull the irq/core-v5 branch that can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
irq/core-v5
HEAD: f6f626fa877c96974fadc595ddd72543d8c6106b
I have
On Mon, Sep 30, 2013 at 09:09:47AM -0500, Suravee Suthikulpanit wrote:
On 4/29/2013 7:30 AM, Oleg Nesterov wrote:
On 04/29, Ingo Molnar wrote:
* Oleg Nesterov o...@redhat.com wrote:
Obviously I can't ack the changes in this area, but to me the whole
series looks fine.
Thanks Oleg - can I add
On Wed, Aug 21, 2013 at 06:41:46PM +0200, Oleg Nesterov wrote:
On 08/21, Peter Zijlstra wrote:
The other consideration is that this adds two branches to the normal
schedule path. I really don't know what the regular ratio between
schedule() and io_schedule() is -- and I suspect it can
2013/10/1 Frederic Weisbecker fweis...@gmail.com:
On Wed, Aug 21, 2013 at 06:41:46PM +0200, Oleg Nesterov wrote:
On 08/21, Peter Zijlstra wrote:
The other consideration is that this adds two branches to the normal
schedule path. I really don't know what the regular ratio between
schedule
2013/10/1 Frederic Weisbecker fweis...@gmail.com:
I forgot...
cpu_idletime->idle_start;
cpu_idletime->idle_start = NOW();
grrr.
2013/10/1 Frederic Weisbecker fweis...@gmail.com:
2013/10/1 Frederic Weisbecker fweis...@gmail.com:
On Wed, Aug 21, 2013 at 06:41:46PM +0200, Oleg Nesterov wrote:
On 08/21, Peter Zijlstra wrote:
The other consideration is that this adds two branches to the normal
schedule path. I really
On Tue, Oct 01, 2013 at 05:00:37PM +0200, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 04:05:27PM +0200, Frederic Weisbecker wrote:
struct cpu_idletime {
nr_iowait,
seqlock,
idle_start,
idle_time,
iowait_time,
} __cacheline_aligned_in_smp
On Tue, Oct 01, 2013 at 05:56:33PM +0200, Peter Zijlstra wrote:
So what's wrong with something like:
struct cpu_idletime {
seqlock_t seqlock;
unsigned long nr_iowait;
u64 start;
u64 idle_time;
u64 iowait_time;
} __cacheline_aligned_in_smp;
On Wed, Sep 25, 2013 at 02:43:28PM -0700, Andrew Morton wrote:
On Wed, 25 Sep 2013 14:32:14 -0700 (PDT) Hugh Dickins hu...@google.com
wrote:
On Wed, 25 Sep 2013, Andrew Morton wrote:
On Wed, 25 Sep 2013 11:06:43 +1000 Stephen Rothwell
s...@canb.auug.org.au wrote:
Hi Andrew,
On Thu, Sep 26, 2013 at 05:58:10PM +0900, Namhyung Kim wrote:
From: Namhyung Kim namhyung@lge.com
At insert time, a hist entry should reference comm at the time
otherwise it'll get the last comm anyway.
Signed-off-by: Namhyung Kim namhy...@kernel.org
Cc: Frederic Weisbecker fweis
Cc: Frederic Weisbecker fweis...@gmail.com
Link: http://lkml.kernel.org/n/tip-d9tcfow6stbrp4btvgs51...@git.kernel.org
Signed-off-by: Namhyung Kim namhy...@kernel.org
Have you tested this patchset when collapsing is not used?
There are fair chances that this patchset does not only improve
On Wed, Oct 02, 2013 at 12:03:50PM +0200, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 10:11:56PM +0300, Adrian Hunter wrote:
Hi
It does not seem possible to use set-output between
task contexts of different types (e.g. a software event
to a hardware event)
If you look at
On Wed, Oct 02, 2013 at 01:27:30PM +0200, Peter Zijlstra wrote:
On Wed, Oct 02, 2013 at 12:29:56PM +0200, Frederic Weisbecker wrote:
On Wed, Oct 02, 2013 at 12:03:50PM +0200, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 10:11:56PM +0300, Adrian Hunter wrote:
Hi
It does not seem
Ingo,
Please pull the timers/core branch that can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
timers/core
Thanks,
Frederic
---
Kevin Hilman (3):
vtime: Add HAVE_VIRT_CPU_ACCOUNTING_GEN Kconfig
nohz: Drop generic vtime
...@linaro.org
Cc: Ingo Molnar mi...@kernel.org
Cc: Russell King r...@arm.linux.org.uk
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Arm Linux linux-arm-ker...@lists.infradead.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
arch/arm/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff
.
Feature requested by Frederic Weisbecker.
Signed-off-by: Kevin Hilman khil...@linaro.org
Cc: Ingo Molnar mi...@kernel.org
Cc: Russell King r...@arm.linux.org.uk
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Arm Linux linux-arm-ker...@lists.infradead.org
Signed-off-by: Frederic Weisbecker
Molnar mi...@kernel.org
Cc: Russell King r...@arm.linux.org.uk
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Arm Linux linux-arm-ker...@lists.infradead.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
---
init/Kconfig| 2 +-
kernel/time/Kconfig | 1 -
2 files changed, 1 insertion
On Tue, Oct 01, 2013 at 06:59:57PM +0200, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 06:47:10PM +0200, Frederic Weisbecker wrote:
Yeah thinking more about it, the preempt disable was probably not
necessary. Now that's trading 2 atomics + 1 Lock/Unlock with 2 Lock/Unlock.
It trades
2013/10/2 Daniel Lezcano daniel.lezc...@linaro.org:
The sleep_length is computed in the tick_nohz_stop_sched_tick function but it
is used later in the code with local irqs enabled in between.
cpu_idle_loop
tick_nohz_idle_enter [ exits with local irq enabled ]
On Wed, Oct 02, 2013 at 07:35:49AM -0700, Arjan van de Ven wrote:
On 10/2/2013 5:45 AM, Frederic Weisbecker wrote:
On Tue, Oct 01, 2013 at 06:59:57PM +0200, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 06:47:10PM +0200, Frederic Weisbecker wrote:
Yeah thinking more about it, the preempt
On Wed, Oct 02, 2013 at 06:22:29PM +0200, Daniel Lezcano wrote:
On 10/02/2013 05:57 PM, Frederic Weisbecker wrote:
2013/10/2 Daniel Lezcano daniel.lezc...@linaro.org:
The sleep_length is computed in the tick_nohz_stop_sched_tick function but
it
is used later in the code with in between
On Fri, Oct 04, 2013 at 02:39:48PM -0700, Andi Kleen wrote:
From: Andi Kleen a...@linux.intel.com
As suggested by Ingo.
Make HW_BREAKPOINTS a config option. HW_BREAKPOINTS depends
on perf. This allows disabling PERF_EVENTS for systems that
don't need it (e.g. anything not used for
On Wed, Oct 02, 2013 at 08:03:39PM +0200, Daniel Lezcano wrote:
On 10/02/2013 06:42 PM, Frederic Weisbecker wrote:
On Wed, Oct 02, 2013 at 06:22:29PM +0200, Daniel Lezcano wrote:
On 10/02/2013 05:57 PM, Frederic Weisbecker wrote:
2013/10/2 Daniel Lezcano daniel.lezc...@linaro.org
On Fri, Sep 13, 2013 at 12:01:56PM +0200, Knut Petersen wrote:
Hi everybody!
Since about July I observe occasional kernel panics happening only during
system shutdown on two systems.
Hardware: mobos: both AOpen i915GMm-hfs mobos, cpus: Pentium-M Dothan /
Banias, mem: 2GB
Although the
On Thu, Sep 12, 2013 at 10:36:58PM +0200, Ingo Molnar wrote:
* Frederic Weisbecker fweis...@gmail.com wrote:
The way we handle hists sorted by comm is to first gather them by tid
then in the end merge/collapse hists that end up with the same comm.
But merging hists has shown some
On Fri, Sep 13, 2013 at 03:32:34PM +0900, Namhyung Kim wrote:
Hi Frederic,
On Thu, 12 Sep 2013 22:29:39 +0200, Frederic Weisbecker wrote:
The way we handle hists sorted by comm is to first gather them by tid then
in the end merge/collapse hists that end up with the same comm
On Fri, Sep 13, 2013 at 05:07:06PM +0900, Namhyung Kim wrote:
Hi,
On Thu, 12 Sep 2013 22:29:43 +0200, Frederic Weisbecker wrote:
Now that comm strings are allocated only once and refcounted to be shared
among threads, these can now be safely compared by addresses. This
should remove most
On Fri, Sep 13, 2013 at 01:45:55PM +, Christoph Lameter wrote:
On Thu, 12 Sep 2013, Frederic Weisbecker wrote:
So yeah it's a problem in theory. Now in practice, I have yet to be
convinced because
this should be solved after a few iterations in /proc in most cases.
I have seen
: Frederic Weisbecker fweis...@gmail.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Jiri Olsa jo...@redhat.com
Cc: Mike Galbraith efa...@gmx.de
Cc: Namhyung Kim namhy...@kernel.org
Cc: Peter Zijlstra pet...@infradead.org
Cc: Stephane Eranian eran...@google.com
---
tools/perf/util/session.c | 17
On Sat, Sep 14, 2013 at 11:25:40AM -0600, David Ahern wrote:
On 9/14/13 10:16 AM, Frederic Weisbecker wrote:
@@ -676,7 +682,12 @@ int perf_session_queue_event(struct perf_session *s,
union perf_event *event,
new->timestamp = timestamp;
new->file_offset = file_offset;
- new->event
2013/9/18 Paul Mackerras pau...@samba.org:
Frederic,
On Thu, Sep 05, 2013 at 05:33:21PM +0200, Frederic Weisbecker wrote:
This series is a proposition to fix the crash reported here:
http://lkml.kernel.org/r/1378330796.4321.50.camel%40pasglop
And it has the upside to also consolidate a bit
Ksoftirqd shouldn't need the softirq stack since it's executing
in a kernel thread whose callstack is only beginning at
this stage.
Let's comment on that for clarity.
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Paul Mackerras
this.
Reported-by: Benjamin Herrenschmidt b...@kernel.crashing.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Tested-by: Paul Mackerras pau...@samba.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Paul Mackerras pau...@au1.ibm.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Thomas
Herrenschmidt b...@kernel.crashing.org
Signed-off-by: Frederic Weisbecker fweis...@gmail.com
Tested-by: Paul Mackerras pau...@samba.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Paul Mackerras pau...@au1.ibm.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Thomas Gleixner t...@linutronix.de
Cc: Peter
.
---
Frederic Weisbecker (3):
irq: Consolidate do_softirq() arch overriden implementations
irq: Execute softirq on its own stack on irq exit
irq: Comment on the use of inline stack for ksoftirqd
arch/metag/kernel/irq.c| 56 ++-
arch/parisc/kernel
On Tue, Oct 15, 2013 at 04:39:06PM -0400, Steven Rostedt wrote:
Since the NMI iretq nesting has been fixed, there's no reason that
an NMI handler can not take a page fault for vmalloc'd code. No locks
are taken in that code path, and the software now handles nested NMIs
when the fault
On Tue, Oct 15, 2013 at 02:24:40PM -0700, Joe Perches wrote:
On Tue, 2013-10-15 at 23:12 +0200, Frederic Weisbecker wrote:
On Tue, Oct 15, 2013 at 02:00:05PM -0700, Joe Perches wrote:
On Tue, 2013-10-15 at 22:50 +0200, Frederic Weisbecker wrote:
[]
diff --git a/include/linux/printk.h
Committer: Frederic Weisbecker fweis...@gmail.com
CommitDate: Mon, 30 Sep 2013 15:37:05 +0200
ARM: Kconfig: allow full nohz CPU accounting
With the 64-bit requirement removed from VIRT_CPU_ACCOUNTING_GEN,
allow ARM platforms to enable it. Since VIRT_CPU_ACCOUNTING_GEN is a
dependency
On Wed, Oct 16, 2013 at 08:45:18AM -0400, Steven Rostedt wrote:
On Wed, 16 Oct 2013 13:40:37 +0200
Frederic Weisbecker fweis...@gmail.com wrote:
On Tue, Oct 15, 2013 at 04:39:06PM -0400, Steven Rostedt wrote:
Since the NMI iretq nesting has been fixed, there's no reason that
an NMI
On Wed, Oct 16, 2013 at 08:59:28AM -0400, Steven Rostedt wrote:
On Wed, 16 Oct 2013 13:53:56 +0200
Frederic Weisbecker fweis...@gmail.com wrote:
static int done;
if (!done) {
trace_printk(something);
trace_printk(something else);
trace_dump_stack();
done = 1
On Wed, Oct 16, 2013 at 09:14:37AM -0400, Steven Rostedt wrote:
On Wed, 16 Oct 2013 15:08:57 +0200
Frederic Weisbecker fweis...@gmail.com wrote:
Faults can call rcu_user_exit() / rcu_user_enter(). This is not supposed to
happen
between rcu_nmi_enter() and rcu_nmi_exit(). rdtp->dynticks
On Wed, Oct 16, 2013 at 02:44:28PM +, Christoph Lameter wrote:
This is a follow on patch related to the earlier
discussion about restricting the
spawning of kernel threads. See https://lkml.org/lkml/2013/9/5/426
usermodehelper() threads can currently run on all processors.
This is
On Wed, Oct 16, 2013 at 12:36:32PM -0700, Paul E. McKenney wrote:
On Wed, Oct 16, 2013 at 03:08:57PM +0200, Frederic Weisbecker wrote:
On Wed, Oct 16, 2013 at 08:45:18AM -0400, Steven Rostedt wrote:
On Wed, 16 Oct 2013 13:40:37 +0200
Frederic Weisbecker fweis...@gmail.com wrote
On Tue, Oct 15, 2013 at 08:40:25AM +0200, Ingo Molnar wrote:
* Frederic Weisbecker fweis...@gmail.com wrote:
I've been thinking that CONFIG_DEBUG_LIST could help. Unfortunately it's
good to spot list APIs misuse but, if Linus is right, the problem may be
that the list belongs