$ ./posix_cpu_timers
6 2278074
After the patch:
$ ./posix_cpu_timers
8 1158766
Before the patch, two extra seconds were spuriously accounted to the elapsed time.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
breakages while hacking
on this subsystem.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: KOSAKI Motohiro
Cc: Olivier Langlois
Signed-off-by: Andrew Morton
---
tools/testing/selftests/Makefile |1 +
tools
Consolidate the common code between the per-thread and per-process timer
lists on tick time.
List traversal, expiry check and subsequent updates can be shared in a
common helper.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc
. cputime_t can always fit
into it.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Oleg Nesterov
Cc: KOSAKI Motohiro
Cc: Olivier Langlois
Signed-off-by: Andrew Morton
---
include/linux/posix-timers.h | 16 ++-
kernel
Reported-by: Chen Gang
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Oleg Nesterov
Cc: Chen Gang
Cc: KOSAKI Motohiro
Cc: Olivier Langlois
Signed-off-by: Andrew Morton
---
kernel/posix-cpu-timers.c |1 +
1 files ch
On Fri, Jul 05, 2013 at 08:51:13AM +0200, Ingo Molnar wrote:
>
> * H. Peter Anvin wrote:
>
> > On 07/03/2013 07:49 PM, Linus Torvalds wrote:
> > >> [] __schedule+0x94f/0x9c0
> > >> [] schedule_user+0x2e/0x70
> > >> [] retint_careful+0x12/0x2e
> >
> > This call trace does indeed indicate that we
2013/7/4 Peter Zijlstra :
> On Thu, Jul 04, 2013 at 01:34:13PM +0800, Alex Shi wrote:
>
>> If the tsc is marked as constant and nonstop, could we set it as system
>> clocksource when do tsc register? w/o checking it on clocksource_watchdog?
>
> I'd not do that; the BIOS can still screw you over, we
On Sun, Jul 22, 2012 at 02:14:26PM +0200, Jiri Olsa wrote:
> Adding copy_from_user_nmi_nochk that provides the best effort
> copy regardless the requesting size crossing the task boundary.
>
> This is going to be useful for stack dump we need in post
> DWARF CFI based unwind, where we have predefi
can
> provide register dump for compat task if needed in the future.
>
> Signed-off-by: Jiri Olsa
> Original-patch-by: Frederic Weisbecker
Acked-by: Frederic Weisbecker
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a messa
archs that support register dump.
>
> This is going to be useful to bring Dwarf CFI based stack
> unwinding on top of samples.
>
> Signed-off-by: Jiri Olsa
> Original-patch-by: Frederic Weisbecker
Acked-by: Frederic Weisbecker
, starting from the
> stack pointer, will be useful to make a post mortem dwarf CFI
> based stack unwinding.
>
> Signed-off-by: Jiri Olsa
> Signed-off-by: Frederic Weisbecker
If you keep the SOB of the author then you need to preserve their
authorship (git am --author= / git commit
nterface provides the following function:
> unwind__get_entries
>
> And callback (specified in above function) to retrieve
> the backtrace entries:
> typedef int (*unwind_entry_cb_t)(struct unwind_entry *entry,
>                                  void *arg);
>
> Signed-off-by: Jiri Ols
On Sun, Jul 22, 2012 at 02:14:39PM +0200, Jiri Olsa wrote:
> Adding dso data caching so we don't need to open/read/close,
> each time we want dso data.
>
> The DSO data caching affects following functions:
> dso__data_read_offset
> dso__data_read_addr
>
> Each DSO read tries to find the data
On Sun, Jul 22, 2012 at 02:14:23PM +0200, Jiri Olsa wrote:
> hi,
>
> patches available also as tarball in here:
> http://people.redhat.com/~jolsa/perf_post_unwind_v7.tar.bz2
>
> v7 changes:
> - omitted v6 patches 9 and 15
>   They need more work and will be sent separately. I don't want to h
On Wed, Jul 25, 2012 at 02:16:55PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Wed, Jul 25, 2012 at 07:05:33PM +0200, Frederic Weisbecker escreveu:
> > > +#ifdef ARCH_X86_64
> > > +int unwind__arch_reg_id(int regnum)
> >
> > Please try to avoid __ in functi
On Wed, Jul 25, 2012 at 07:16:43PM +0200, Jiri Olsa wrote:
> On Wed, Jul 25, 2012 at 06:11:53PM +0200, Frederic Weisbecker wrote:
> > On Sun, Jul 22, 2012 at 02:14:26PM +0200, Jiri Olsa wrote:
> > > Adding copy_from_user_nmi_nochk that provides the best effort
> > > cop
On Wed, Jul 25, 2012 at 07:30:31PM +0200, Jiri Olsa wrote:
> On Wed, Jul 25, 2012 at 07:16:43PM +0200, Jiri Olsa wrote:
> > On Wed, Jul 25, 2012 at 06:11:53PM +0200, Frederic Weisbecker wrote:
> > > On Sun, Jul 22, 2012 at 02:14:26PM +0200, Jiri Olsa wrote:
> > > > A
On Mon, Nov 18, 2013 at 03:10:21PM +0100, Peter Zijlstra wrote:
> --- a/kernel/softirq.c
> +++ b/kernel/softirq.c
> @@ -746,13 +746,23 @@ void irq_exit(void)
> #endif
>
> account_irq_exit_time(current);
> - trace_hardirq_exit();
> sub_preempt_count(HARDIRQ_OFFSET);
> - if (!i
fact that
softirqs can be called from hardirqs while hardirqs can nest on softirqs
but those two cases have very different semantics and only the latter
case assumes both states.
Signed-off-by: Frederic Weisbecker
Cc: Sebastian Andrzej Siewior
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Linus Torvalds
On Wed, Nov 20, 2013 at 01:07:34AM +0100, Frederic Weisbecker wrote:
> Instead of saving the hardirq state on a per CPU variable, which requires
> an explicit call before the softirq handling and some complication,
> just save and restore the hardirq tracing state through functions
> r
On Tue, Nov 19, 2013 at 08:21:45PM +0100, Oleg Nesterov wrote:
> DR6_RESERVED and DR_CONTROL_RESERVED are used to clear the unwanted
> bits in the "unsigned long" data, but "ulong &= ~int" also clears the
> upper bits that are not specified in mask.
>
> This is actually fine, dr6[32:63] are reserv
ly add options for sym, dso and ip if callchains are present
>
> Signed-off-by: David Ahern
> Cc: Frederic Weisbecker
Thanks, looks good, just a few things:
> ---
> tools/perf/builtin-script.c | 24
> 1 file changed, 24 insertions(+)
>
> diff -
mplicating the code a lot
* Consolidate local and remote clock read
* Remove dead leftovers
* Optimize the locking by removing unnecessary uses of tasklist lock
* Various other cleanups...
Thanks,
Frederic
---
Frederic Weisbecker (10):
posix-timers: Remove dead thread posix cpu timers caching
to release the timer and its associated
resources by calling timer_delete() after it buries the target
tasks.
Remove this to simplify the code.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Kosaki Motohiro
Cc: Andrew Morton
-
Now that we've removed all the optimizations that could
result in NULL timer's targets, we can remove all the
associated special case handling.
Also add some warnings on NULL targets to spot any possible
leftover.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo
cts us against concurrent
timer firing.
The rest only need the targets sighand to be locked.
So hold it and drop the use of tasklist_lock there.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Kosaki Motohiro
Cc: Andrew Morton
---
The posix cpu timers code makes heavy use of BUG_ON(), but none of these
concern fatal issues that require stopping the machine. So let's just warn
the user when some internal state slips out of our hands.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc:
ese places instead.
Also update the comments about locking.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Kosaki Motohiro
Cc: Andrew Morton
---
kernel/posix-cpu-timers.c | 74 -
also
result in a group leader that can change.
To protect against these, locking the target's sighand is enough.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Kosaki Motohiro
Cc: Andrew Morton
---
kernel/posix-cpu-tim
o big deal as this actually harmonizes
the behaviour when the remote sample is actually a local one.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Kosaki Motohiro
Cc: Andrew Morton
---
kernel/posix-cpu-timers.c
a0b2062b0904ef07944c4a6e4d0f88ee44f1e9f2
("posix_timers: fix racy timer delta caching on task exit") forgot
to remove the arguments used for timer caching.
Fix this leftover.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Ne
bly not worth it. So let's get
rid of it.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Kosaki Motohiro
Cc: Andrew Morton
---
kernel/posix-cpu-timers.c | 34 +-
1 file changed, 1 insertion(
ontended anyway.
All in all, this caching doesn't seem to be justified.
Given that it complicates the code significantly for
few wins, let's remove it on single thread timers.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
It was initially a single patch that Oleg sent me a few weeks ago. Thinking
about it I think it may need a stable backport even though it doesn't look
very dangerous, but just in case.
So I've split the patch into 4 different parts because it may need
backporting on different tree versions for ea
Signed-off-by: Oleg Nesterov
Fixes: 0067f1297241ea567f2b22a455519752d70fcca9
Cc:
Signed-off-by: Frederic Weisbecker
---
arch/x86/kernel/hw_breakpoint.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c
index f66ff16..11
From: Oleg Nesterov
arch_check_bp_in_kernelspace() tries to avoid the overflow and does 2
TASK_SIZE checks, but it needs OR, not AND. Consider the va = TASK_SIZE - 1,
len = 2 case.
Signed-off-by: Oleg Nesterov
Fixes: 478fcb2cdb2351dcfc3fb23f42d76f4436ee4149
Cc:
Signed-off-by: Frederic Weisbecker
From: Oleg Nesterov
arch_check_bp_in_kernelspace() tries to avoid the overflow and does 2
TASK_SIZE checks, but it needs OR, not AND. Consider the va = TASK_SIZE - 1,
len = 2 case.
Signed-off-by: Oleg Nesterov
Fixes: 09a072947791088b88ae15111cf68fc5aaaf758d
Cc:
Signed-off-by: Frederic Weisbecker
From: Oleg Nesterov
arch_check_bp_in_kernelspace() tries to avoid the overflow and does 2
TASK_SIZE checks, but it needs OR, not AND. Consider the va = TASK_SIZE - 1,
len = 2 case.
Signed-off-by: Oleg Nesterov
Fixes: f81ef4a920c8e1af75adf9f15042c2daa49d3cb3
Cc:
Signed-off-by: Frederic Weisbecker
On Mon, Dec 16, 2013 at 07:18:52AM -0800, Andi Kleen wrote:
> > So we could make the old ABI a CONFIG_PERF_EVENTS_COMPAT_X86_BTS kind
> > of legacy option, turned off by default. That allows us its eventual
> > future phasing out.
> >
> > It all depends on how useful the new tooling becomes: if
On Mon, Dec 16, 2013 at 07:45:27AM -0800, Andi Kleen wrote:
> > You're right it's extremely slow. But it can still be relevant for
> > debugging,
> > at least for apps that don't do too much CPU bound stuffs.
>
> There are patches from Markus already for gdb to use it (using the old
> BTS perf in
On Tue, Dec 17, 2013 at 08:35:39AM -0800, Kevin Hilman wrote:
> Viresh Kumar writes:
>
> > Sorry for the delay, was on holidays..
> >
> > On 11 December 2013 18:52, Frederic Weisbecker wrote:
> >> On Tue, Dec 03, 2013 at 01:57:37PM +0530, Viresh Kumar wr
From: Alex Shi
Code usually starts with a tab instead of 7 spaces in the kernel
Signed-off-by: Alex Shi
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Paul E. McKenney
Cc: John Stultz
Cc: Alex S
Now that we have an API to determine if a CPU is allowed to handle
timekeeping duty, use it at timekeeper selection time for clarity.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Paul E. McKenney
Cc: John Stultz
Cc
inition so we need
to wake up a timekeeper such that it can handle the timekeeping
duty on behalf of the freshly awoken full dyntick CPU.
To achieve this and ensure that this CPU won't deal with stale
jiffies values, let's wake up the default timekeeper using the right
API.
Signed-off-by: Fred
Now that we have all the infrastructure in place and ready to support
timekeeping duty balanced across every non full dynticks CPUs, we can
hereby extend the timekeeping duty affinity.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Steven Rostedt
, let's take the scheduler IPI every time as long
as there is at least one full dynticks CPU around. Full dynticks CPUs
are interested too in taking scheduler IPIs to reevaluate their tick.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Steven
until all full dynticks CPUs go to sleep.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Paul E. McKenney
Cc: John Stultz
Cc: Alex Shi
Cc: Kevin Hilman
---
kernel/time/tick-sched.c | 67 -
fault timekeeping when the current
timekeeper goes offline, so that the duty is relayed.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Paul E. McKenney
Cc: John Stultz
Cc: Alex Shi
Cc: Kevin Hilman
---
include/linux/tick.h
We don't need to fetch the timekeeping max deferment under the
jiffies_lock seqlock.
If the clocksource is updated concurrently while we stop the tick,
stop machine is called and the tick will be reevaluated again along with
up-to-date jiffies and its related values.
Signed-off-by: Fre
o the tick is stopped on irq exit and timekeeping
catches up with the tickless time elapsed until we reach irq entry.
This rename was suggested by Peter Zijlstra a long while ago but it
got forgotten in the mass.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zij
) API which checks if a CPU is allowed to handle
timekeeping duty. If so we can conclude that it's not full dynticks and
it can maintain timekeeping by itself and as such it can be excluded
from the sysidle detection.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
from:
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
timers/full_sysidle-rfc
Thanks,
Frederic
---
Frederic Weisbecker (12):
tick: Rename tick_check_idle() to tick_irq_enter()
time: New helper to check CPU eligibility to handle timekeeping
otential timekeeping CPU that is already running a non-idle task.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Paul E. McKenney
Cc: John Stultz
Cc: Alex Shi
Cc: Kevin Hilman
---
include/linux/tick.h | 16
order to enforce and consolidate this behaviour, provide an API that
core subsystems can use to check if a CPU is allowed to handle the
timekeeping duty.
This is going to be used by the timer subsystem before assigning a
timekeeper and by RCU for the full sysidle detection.
Signed-off-by: Fre
dynticks, like our
timekeeper.
To fix this, use the smp_send_reschedule() function directly.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Paul E. McKenney
Cc: John Stultz
Cc: Alex Shi
Cc: Kevin Hilman
---
kernel/rcu
On Tue, Dec 17, 2013 at 03:27:14PM -0800, Paul E. McKenney wrote:
> On Tue, Dec 17, 2013 at 11:51:22PM +0100, Frederic Weisbecker wrote:
> > The purpose of the full system idle detection is to notify the CPU
> > handling the timekeeping when the rest of the system is idle so that i
On Tue, Dec 17, 2013 at 03:34:54PM -0800, Paul E. McKenney wrote:
> On Tue, Dec 17, 2013 at 11:51:30PM +0100, Frederic Weisbecker wrote:
> > When a full dynticks CPU wakes up from sysidle state, which means that
> > all full dynticks CPUs were previously sleeping, it's possi
On Wed, Dec 11, 2013 at 06:28:23AM -0600, suravee.suthikulpa...@amd.com wrote:
> @@ -295,11 +301,17 @@ static int arch_build_bp_info(struct perf_event *bp)
> break;
> #endif
> default:
> - return -EINVAL;
> + if (!is_power_of_2(bp->attr.bp_len))
> +
Hi Suravee,
On Wed, Jan 08, 2014 at 01:00:36PM -0600, Suravee Suthikulanit wrote:
> Ping. Are there any other concerns regarding this patch?
>
> Thank you,
>
> Suravee
The patches look good. I'll apply the series and propose it to the perf
maintainers.
Thanks!
On Wed, Jan 08, 2014 at 02:48:37PM -0700, David Ahern wrote:
> The existing code does not work. Your unstable tsc patch did not
> work. I have not tried Joseph's patch. Are you proposing that one or
> do you have something else in mind?
I think we should integrate Joseph's patch (or mine, or some
On Mon, Jan 06, 2014 at 10:37:27AM -0800, Kevin Hilman wrote:
> Frederic Weisbecker writes:
>
> > On Tue, Dec 17, 2013 at 01:23:07PM -0800, Kevin Hilman wrote:
> >> Allow debugfs override of sched_tick_max_deferment in order to ease
> >> finding/fixing the r
On Sat, Jan 04, 2014 at 07:22:32PM +0100, Alexander Gordeev wrote:
> Hello,
>
> This is version 2 of RFC "perf: IRQ-bound performance events". That is an
> introduction of IRQ-bound performance events - ones that only count in a
> context of a hardware interrupt handler. Ingo suggested to extend t
On Sat, Nov 30, 2013 at 03:11:06PM +0100, Ingo Molnar wrote:
>
> * Ingo Molnar wrote:
>
> >
> > * Frederic Weisbecker wrote:
> >
> > > Ingo,
> > >
> > > Please pull the timers/core branch that can be found at:
> > >
>
XIT(9486:9486):(9486:9486)
>
> Signed-off-by: Namhyung Kim
> Suggested-by: Frederic Weisbecker
Thanks a lot for doing this Namhyung!
On Sat, Nov 30, 2013 at 02:59:07PM +0100, Ingo Molnar wrote:
>
> * Frederic Weisbecker wrote:
>
> > Ingo,
> >
> > Please pull the timers/core branch that can be found at:
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
On Sat, Nov 23, 2013 at 04:37:10PM +0100, Frederic Weisbecker wrote:
> Ingo, Thomas,
>
> Please pull the timers/posix-timers-for-tip branch that can be found at:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
> timers/posix-timers
2013/11/11 Oleg Nesterov :
> On 11/11, Frederic Weisbecker wrote:
>>
>> On Sat, Nov 09, 2013 at 04:54:28PM +0100, Oleg Nesterov wrote:
>> >
>> > Up to you and Suravee, but can't we cleanup this later?
>> >
>> > This series was updated many
full dynticks CPUs (not a regression though, as it only impacts full
dynticks and the bug has been there since we merged full dynticks).
Let me know if you find any issue.
Thanks,
Frederic
---
Frederic Weisbecker (5):
nohz: Convert a few places to use local per cpu accesses
co
tick on the timer as expected.
This patch fixes this bug by handling both cases in one. All we need
is to move the kick to the rearming common code in posix_cpu_timer_schedule().
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Steven
Use a function with a meaningful name to check the global context
tracking state. static_key_false() is a bit confusing for reviewers.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Steven Rostedt
---
include/linux
.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Steven Rostedt
---
include/linux/tick.h | 6 +++---
kernel/softirq.c | 4 +---
kernel/time/tick-broadcast.c | 6 +++---
kernel/time/tick-internal.h | 4
igned-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Steven Rostedt
---
kernel/posix-cpu-timers.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/posix-cpu-timers.c b/kernel/posix-cpu-timers.c
index c7
.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Steven Rostedt
---
include/linux/context_tracking_state.h | 9 +
include/linux/vtime.h | 2 +-
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a
From: Paul Gortmaker
Signed-off-by: Paul Gortmaker
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Steven Rostedt
---
init/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/init/Kconfig b/init
On Wed, Dec 04, 2013 at 02:04:09PM +0100, Oleg Nesterov wrote:
> while_each_thread() and next_thread() should die, almost every
> lockless usage is wrong.
>
> 1. Unless g == current, the lockless while_each_thread() is not safe.
>
>while_each_thread(g, t) can loop forever if g exits, next_thr
On Wed, Dec 04, 2013 at 02:49:17PM +0100, Oleg Nesterov wrote:
> On 12/04, Frederic Weisbecker wrote:
> >
> > On Wed, Dec 04, 2013 at 02:04:09PM +0100, Oleg Nesterov wrote:
> >
> > > For example, do/while_each_thread() always
> > > sees at least one task, wh
On Tue, Nov 12, 2013 at 09:58:29PM -0700, David Ahern wrote:
> Hi Namhyung and Frederic:
>
> If you recall I mentioned noting a problem with the callchain series
> showing comm's. Well, it fails on acme's perf/core. git bisect
> points to:
>
> $ git bisect bad
> 4dfced359fbc719a35527416f1b4b39996
On Wed, Nov 13, 2013 at 03:07:46PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Wed, Nov 13, 2013 at 07:03:47PM +0100, Frederic Weisbecker escreveu:
> > On Tue, Nov 12, 2013 at 09:58:29PM -0700, David Ahern wrote:
> > > Hi Namhyung and Frederic:
> > >
> > >
On Wed, Nov 13, 2013 at 11:06:11AM -0700, David Ahern wrote:
> On 11/13/13, 11:03 AM, Frederic Weisbecker wrote:
> >I see. I can reproduce, I'll check and see what happens. It would be nice if
> >we could have an option to dump internal perf events like comm events as well
&
On Thu, Nov 14, 2013 at 04:33:01PM +0100, Peter Zijlstra wrote:
> On Thu, Nov 14, 2013 at 04:23:04PM +0100, Peter Zijlstra wrote:
> > /*
> > + * We must dis-allow sampling irq_work_exit() because perf event sampling
> > + * itself can cause irq_work, which would lead to an infinite loop;
> > + *
>
On Thu, Nov 14, 2013 at 04:23:04PM +0100, Peter Zijlstra wrote:
> On Sat, Nov 09, 2013 at 04:10:14PM +0100, Peter Zijlstra wrote:
> > Cute.. so what appears to happen is that:
> >
> > 1) we trace irq_work_exit
> > 2) we generate event
> > 3) event needs to deliver signal
> > 4) we queue irq_work t
2013/11/15 David Ahern :
> The intent of perf-script is to dump the events and information
> in the file. H/W, S/W and raw events all dump callchains if they
> are present; might as well make that the default for tracepoints
> too.
>
> Signed-off-by: David Ahern
> Cc
On Fri, Nov 15, 2013 at 09:15:21AM -0500, Steven Rostedt wrote:
> On Fri, 15 Nov 2013 13:28:33 +0100
> Peter Zijlstra wrote:
>
> > On Fri, Nov 15, 2013 at 10:16:18AM +0900, Masami Hiramatsu wrote:
> > > Kprobes itself can detect nested call by using per-cpu current-running
> > > kprobe pointer. A
On Fri, Nov 15, 2013 at 09:29:51AM -0700, David Ahern wrote:
> HI Frederic:
>
> On 11/13/13, 11:03 AM, Frederic Weisbecker wrote:
> >
> >I see. I can reproduce, I'll check and see what happens. It would be nice if
> >we could have an option to dump internal perf
On Thu, Oct 03, 2013 at 05:40:40PM +, Christoph Lameter wrote:
> V2->V3:
> - Introduce a new tick_get_housekeeping_cpu() function. Not sure
> if that is exactly what we want but it is a start. Thomas?
Not really. Thomas suggested an infrastructure to move CPU-local periodic
jobs handling to
On Wed, Jan 01, 2014 at 11:37:55AM -0700, David Ahern wrote:
> On 12/26/13, 8:30 AM, Frederic Weisbecker wrote:
> >On Thu, Dec 26, 2013 at 10:24:03AM -0500, David Ahern wrote:
> >>On 12/26/13, 10:14 AM, Frederic Weisbecker wrote:
> >>>>I was carrying that patch wh
On Fri, Jan 03, 2014 at 03:45:36PM -0700, David Ahern wrote:
> On 1/3/14, 3:07 PM, Frederic Weisbecker wrote:
> >I'm not sure I understand why we need that. Why doesn't it work by simply
> >flushing
> >events prior to the earliest timestamp among every CPUs last
d on 32-bit ARM platform when extending the max
> deferment value.
>
> Cc: Frederic Weisbecker
> Signed-off-by: Kevin Hilman
> ---
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> is disabled (scheduler_tick_max_deferment() returns KTIME_MAX.)
>
> Cc: Frederic Weisbecker
> Signed-off-by: Kevin Hilman
> ---
> kernel/sched/core.c | 16 +++-
> 1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>
2013/12/23 Jan Kara :
> From: Christoph Hellwig
>
> Make smp_call_function_single and friends more efficient by using
> a lockless list.
>
> Signed-off-by: Christoph Hellwig
> Signed-off-by: Jan Kara
FWIW, I really like that patch.
Reviewed-by: Frederic Weisbecker
bly not worth it. So let's get
rid of it.
Also remove the sample snapshot on dying process timer
that is now useless, as suggested by Kosaki.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Kosaki Motohiro
Cc: Andrew Mort
target cleanup
* Consolidate some timer sampling code
* Remove use of tasklist lock
* Robustify sighand locking against exec and exit by using the safer
lock_task_sighand() API instead of sighand raw locking.
* Convert some unnecessary BUG_ON() to WARN_ON()
Thanks,
Frederic
---
Frederic
On Sun, Nov 17, 2013 at 01:55:10AM -0800, Joe Perches wrote:
> Possible speed improvement of the __do_softirq function by using ffs
> instead of using a while loop with an & 1 test then single bit shift.
>
> Signed-off-by: Joe Perches
> ---
> kernel/softirq.c | 43 ++-
On Mon, Dec 09, 2013 at 10:56:46AM -0800, Joe Perches wrote:
> On Mon, 2013-12-09 at 19:44 +0100, Frederic Weisbecker wrote:
> > On Sun, Nov 17, 2013 at 01:55:10AM -0800, Joe Perches wrote:
> > > Possible speed improvement of the __do_softirq function by using ffs
> > &g
On Tue, Dec 03, 2013 at 08:35:12PM +0800, Alex Shi wrote:
> The cpu load is not always 0 when updating the nohz cpu load after
> nohz_full is enabled, but the current code still treats the cpu as idle,
> which is incorrect. Fix it to use the correct cpu_load.
>
> Signed-off-by: Alex Shi
> ---
> kernel/sched/proc.c | 8 +++
On Wed, Dec 04, 2013 at 06:50:37PM -0800, Paul E. McKenney wrote:
> On Thu, Dec 05, 2013 at 02:20:55AM +0100, Frederic Weisbecker wrote:
> > On Wed, Dec 04, 2013 at 11:39:57AM -0800, Paul E. McKenney wrote:
> > > Hello, Frederic,
> > >
> > > Just realized that
On Wed, Dec 04, 2013 at 02:57:43PM +0100, Oleg Nesterov wrote:
> On 12/03, Frederic Weisbecker wrote:
> >
> > 2013/11/11 Oleg Nesterov :
> > > On 11/11, Frederic Weisbecker wrote:
> > >>
> > >> On Sat, Nov 09, 2013 at 04:54:28PM +0100, Oleg Nestero
On Wed, Dec 04, 2013 at 02:57:43PM +0100, Oleg Nesterov wrote:
> > Ideally it would be nice if we drop bp_mask and use extended ranges
> > only when len > 8. How does that sound?
>
> Again, iirc, this is what the code does. except (in essence) it checks
> mask != 0 instead of len > 8.
>
> And yes
On Wed, Oct 02, 2013 at 11:11:06AM -0500, suravee.suthikulpa...@amd.com wrote:
> From: Jacob Shin
>
> Implement hardware breakpoint address mask for AMD Family 16h and
> above processors. CPUID feature bit indicates hardware support for
> DRn_ADDR_MASK MSRs. These masks further qualify DRn/DR7 ha
On Wed, Oct 02, 2013 at 11:11:07AM -0500, suravee.suthikulpa...@amd.com wrote:
> From: Jacob Shin
>
> Currently bp_len is given a default value of 4. Allow user to override it:
>
> $ perf stat -e mem:0x1000/8
> ^
> bp_len
>
> If no value