some numbers requested by Paolo.
- Added a commit message to PeterZ's patch. Hope he likes it.
Daniel Wagner (4):
rcu: Do not call swake_up_all with rnp->lock holding
gadgetfs: Fix fallout of wait to swait completion change
usb: gadget: f_fs: Fix fallout of wait to swait completion cha
The completion code has been changed to use swait (simple wait) instead
of the more complex wait implementation. Since ep_io() is not using the
wait_for_completion_*() helper functions, we need to update this
function accordingly.
Signed-off-by: Daniel Wagner <daniel.wag...@bmw-carit.de&
The completion code has been changed to use swait (simple wait) instead
of the more complex wait implementation. Since ezusb_req_ctx_wait() is
not using the wait_for_completion_*() helper functions, we need to
update this function accordingly.
Signed-off-by: Daniel Wagner <daniel.wag...@
umbers for tscdeadline_latency test.]
Signed-off-by: Marcelo Tosatti <mtosa...@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bige...@linutronix.de>
Signed-off-by: Daniel Wagner <daniel.wag...@bmw-carit.de>
Cc: Paolo Bonzini <pbonz...@redhat.com>
Cc: linux-kernel@vger.kern
it accordingly.
Signed-off-by: Daniel Wagner <daniel.wag...@bmw-carit.de>
Cc: Felipe Balbi <ba...@ti.com>
Cc: Greg Kroah-Hartman <gre...@linuxfoundation.org>
Cc: Michal Nazarewicz <min...@mina86.com>
Cc: Al Viro <v...@zeniv.linux.org.uk>
Cc: Robert Baldyga <r.ba
chunks
dropped, and updated to align with names that were chosen to match the
simple waitqueue support.
Originally-by: Thomas Gleixner <t...@linutronix.de>
Signed-off-by: Paul Gortmaker <paul.gortma...@windriver.com>
Signed-off-by: Daniel Wagner <daniel.wag...@bmw-carit.de
k_irq+0x30/0x60
[] ? kthread_create_on_node+0x260/0x260
[] ret_from_fork+0x3f/0x70
[] ? kthread_create_on_node+0x260/0x260
Signed-off-by: Daniel Wagner <daniel.wag...@bmw-carit.de>
Cc: "Paul E. McKenney" <paul...@linux.vnet.ibm.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Th
On 09/04/2015 03:34 PM, Daniel Wagner wrote:
> In order to get rid of all CPU_*_FROZEN states we need to convert all
> users first.
>
> cpu_check_up_prepare() wants to report different errors depending on
> an ongoing suspend or not. freeze_active() reports back if that is the
>
On 09/07/2015 11:44 PM, Rafael J. Wysocki wrote:
> On Monday, September 07, 2015 10:55:43 AM Daniel Wagner wrote:
>> On 09/05/2015 04:11 AM, Rafael J. Wysocki wrote:
>>> On Friday, September 04, 2015 03:34:55 PM Daniel Wagner wrote:
>>>> Instead encode the FREEZE st
On 08/29/2015 05:35 AM, Paul E. McKenney wrote:
> +extern bool __rcu_sync_is_idle(struct rcu_sync *);
> +
> /**
> * rcu_sync_is_idle() - Are readers permitted to use their fastpaths?
> * @rsp: Pointer to rcu_sync structure to use for synchronization
> @@ -50,7 +52,11 @@ struct rcu_sync {
>
On 09/05/2015 04:11 AM, Rafael J. Wysocki wrote:
> On Friday, September 04, 2015 03:34:55 PM Daniel Wagner wrote:
>> Instead encode the FREEZE state via the CPU state we allow the
>> interesting subsystems (MCE, microcode) to query the power
>> subsystem directly.
&g
-by: Daniel Wagner
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Don Zickus
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Lai Jiangshan
Cc: Peter Zijlstra
Cc: "Paul E. McKenney"
Cc: linux-kernel@vger.kernel.org
---
kernel/smpboot.c | 30 +++---
1 file changed, 15
of the users of the CPU
notifiers.
Signed-off-by: Daniel Wagner
Cc: "Rafael J. Wysocki"
Cc: Len Brown
Cc: Pavel Machek
Cc: linux...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
include/linux/suspend.h | 6 ++
1 file changed, 6 insertions(+)
diff --git a/include/linux/suspend.h
or resume is ongoing.
Signed-off-by: Daniel Wagner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org
---
kernel/sched/core.c | 48 +---
1 file changed, 25 insertions(+), 23 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched
in
8038dad7e888581266c76df15d70ca457a3c5910 smpboot: Add common code for
notification from dying CPU
2a442c9c6453d3d043dfd89f2e03a1deff8a6f06 x86: Use common
outgoing-CPU-notification code
Signed-off-by: Daniel Wagner
Cc: "H. Peter Anvin"
Cc: "Paul E. McKenney"
Cc: Andr
The CPU state encodes whether the CPU hotplug operation happens during
suspend or hibernate operations. Instead of looking at the encoded fields
in the CPU state variable, ask the PM subsystem directly.
Signed-off-by: Daniel Wagner
Cc: Tony Luck
Cc: Borislav Petkov
Cc: Thomas Gleixner
Cc: Ingo
There is no user left of the CPU_*_FROZEN states. Any subsystem
which needs to know if tasks are frozen due to a suspend
operation can ask directly via freeze_active().
Signed-off-by: Daniel Wagner
[This patch contains only things like
- if ((action & ~CPU_TASKS_FR
checks on everyone.
Signed-off-by: Thomas Gleixner
Signed-off-by: Daniel Wagner
Cc: "Paul E. McKenney"
Cc: Ingo Molnar
Cc: Greg Kroah-Hartman
Cc: Paul Gortmaker
Cc: Vitaly Kuznetsov
Cc: Mathias Krause
Cc: David Hildenbrand
Cc: linux-kernel@vger.kernel.org
---
kernel/
in cpu_up()' argument false.
Signed-off-by: Daniel Wagner
Cc: "Rafael J. Wysocki"
Cc: Akinobu Mita
Cc: Jonathan Corbet
Cc: Len Brown
Cc: Pavel Machek
Cc: linux-...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
Documentation/cpu-hotplug.txt
/linux.git/commit/?id=8bb7844286fb8c9fce6f65d8288aeb09d03a5e0d
"H. Peter Anvin"
"Paul E. McKenney"
"Rafael J. Wysocki"
Akinobu Mita
Andrew Morton
Boris Ostrovsky
Borislav Petkov
Chris Metcalf
Daniel Wagner
David Hildenbrand
David Vrabel
Don Zickus
Greg
There is no user left of CPU_TASKS_FROZEN, so we can stop propagating
this information.
Signed-off-by: Daniel Wagner
Cc: "Paul E. McKenney"
Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Greg Kroah-Hartman
Cc: Ingo Molnar
Cc: Mathias Krause
Cc: Nicolas Iooss
Cc: Paul Gortmaker
Hi Clark,
On 08/05/2015 03:30 PM, Daniel Wagner wrote:
> It's a while since the last attempt by Paul to get simple wait ready
> for mainline [1]. At the last realtime workshop it was discussed how
> the swait implementation could be made preempt aware. Peter posted an
> unte
voided by introducing another list which contains
non-field members of struct trace_entry.
Signed-off-by: Daniel Wagner
Cc: Steven Rostedt
Cc: Ingo Molnar
Cc: linux-kernel@vger.kernel.org
---
kernel/trace/trace_events.c| 25 ++
kernel/trace/trace_events_filter.
On 08/07/2015 08:42 AM, Daniel Wagner wrote:
> On 08/05/2015 03:30 PM, Daniel Wagner wrote:
>> My test system didn't crash or show any obvious defects, so I
>> decided to apply some benchmarks utilizing mmtests. I have picked some
>
> As it turns out, this is not really tru
On 08/05/2015 03:30 PM, Daniel Wagner wrote:
> My test system didn't crash or show any obvious defects, so I
> decided to apply some benchmarks utilizing mmtests. I have picked some
As it turns out, this is not really true. I forgot to enable lockdep:
[0.
From: Paul Gortmaker
As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce
proper blocking to no-CBs kthreads GP waits") the RCU subsystem started
making use of wait queues.
Here we convert all additions of RCU wait queues to use simple wait queues,
since they don't need the
From: Paul Gortmaker
Completions have no long-lasting callbacks and therefore do not need
the complex waitqueue variant. Use simple waitqueues, which reduces
the contention on the waitqueue lock.
This was a carry forward from v3.10-rt, with some RT-specific chunks
dropped, and updated to align
Hi,
It's a while since the last attempt by Paul to get simple wait ready
for mainline [1]. At the last realtime workshop it was discussed how
the swait implementation could be made preempt aware. Peter posted an
untested version of it here [2].
In order to test it, I used Paul's two patches
From: Peter Zijlstra
On Tue, Feb 17, 2015 at 06:44:19PM +0100, Sebastian Andrzej Siewior wrote:
> * Peter Zijlstra | 2015-01-21 16:07:16 [+0100]:
>
> >On Tue, Jan 20, 2015 at 01:16:13PM -0500, Steven Rostedt wrote:
> >> I'm actually wondering if we should just nuke the _interruptible()
> >>
On 07/02/2015 11:41 AM, Peter Zijlstra wrote:
> On Wed, Jul 01, 2015 at 02:54:59PM -0700, Linus Torvalds wrote:
>> On Tue, Jun 30, 2015 at 10:57 PM, Daniel Wagner wrote:
>>>
>>> And an attempt at visualization:
>>>
>>> http://monom.org/posix01/sweep-
Hi,
I did a sweep over the parameters for posix01. The parameters are number
of processes and number of locks taken per process. In contrast to the
other test, it looks like there is no parameter set which yields a nice
stable result (read: low variance). I have tried several things including
pinning down all
On 06/24/2015 10:46 AM, Ingo Molnar wrote:
> So I'd suggest to first compare preemption behavior: does the workload
> context-switch heavily, and is it the exact same context switching rate and
> are
> the points of preemption the same as well between the two kernels?
If I read this correctly,
On 06/23/2015 04:34 PM, Peter Zijlstra wrote:
> On Tue, Jun 23, 2015 at 11:35:24AM +0200, Daniel Wagner wrote:
>> flock01
>>           mean      variance   sigma     max       min
>> 4.1.0     11.7075   816.3341   28.5716   125.6552  0.0021
On 06/22/2015 09:05 PM, Peter Zijlstra wrote:
> On Mon, Jun 22, 2015 at 08:11:14PM +0200, Daniel Wagner wrote:
>> On 06/22/2015 02:16 PM, Peter Zijlstra wrote:
>>> Also, since Linus thinks lglocks is a failed locking primitive (which I
>>> whole
>>> heart
On 06/22/2015 02:16 PM, Peter Zijlstra wrote:
> Also, since Linus thinks lglocks is a failed locking primitive (which I whole
> heartedly agree with, its preempt-disable latencies are an abomination), it
> also converts the global part of fs/locks's usage of lglock over to a
> percpu-rwsem and
On 06/20/2015 10:14 AM, Daniel Borkmann wrote:
> I think it would be useful to perhaps have two options:
>
> 1) User specifies a specific CPU and gets one such an output above.
Good point. Will do.
> 2) Summary view, i.e. to have the samples of each CPU for comparison
>next to each other in
0 ||
32768 -> 65535: 72 ||
65536 -> 131071 : 32 ||
131072 -> 262143 : 26 ||
262144 -> 524287
On 06/18/2015 07:06 PM, Alexei Starovoitov wrote:
> On 6/18/15 4:40 AM, Daniel Wagner wrote:
>> BPF offers another way to generate latency histograms. We attach
>> kprobes at trace_preempt_off and trace_preempt_on and calculate the
>> time it takes from seeing
||
524288 - 1048575 : 298 ||
All this is based on the trace3 examples written by
Alexei Starovoitov a...@plumgrid.com.
Signed-off-by: Daniel Wagner daniel.wag...@bmw-carit.de
Cc: Alexei Starovoitov a...@plumgrid.com
On 06/17/2015 10:11 AM, Daniel Wagner wrote:
> On 06/16/2015 07:20 PM, Alexei Starovoitov wrote:
>> On 6/16/15 5:38 AM, Daniel Wagner wrote:
>>> static int free_thread(void *arg)
>>> +{
>>> +unsigned long flags;
>>> +struct htab_elem *
On 06/16/2015 07:20 PM, Alexei Starovoitov wrote:
> On 6/16/15 5:38 AM, Daniel Wagner wrote:
>> static int free_thread(void *arg)
>> +{
>> +unsigned long flags;
>> +struct htab_elem *l;
>> +
>> +while (!kthread_should_stop()) {
>> +
On 06/16/2015 06:07 PM, Paul E. McKenney wrote:
On Tue, Jun 16, 2015 at 11:43:42AM -0400, Steven Rostedt wrote:
On Tue, 16 Jun 2015 07:16:26 -0700
"Paul E. McKenney" wrote:
Just for the record: using a thread for freeing the memory cures the
problem without the need to modify
On 06/16/2015 05:41 PM, Steven Rostedt wrote:
On Tue, 16 Jun 2015 14:38:53 +0200
Daniel Wagner wrote:
*map, void *key)
if (l) {
hlist_del_rcu(&l->hash_node);
htab->count--;
- kfree_rcu(l, rcu);
+ /* kfree_rcu(l, rcu); *
On 06/16/2015 02:27 PM, Paul E. McKenney wrote:
> On Mon, Jun 15, 2015 at 10:45:05PM -0700, Alexei Starovoitov wrote:
>> On 6/15/15 7:14 PM, Paul E. McKenney wrote:
>>>
>>> Why do you believe that it is better to fix it within call_rcu()?
>>
>> found it:
>> diff --git a/kernel/rcu/tree.c
On 06/16/2015 08:46 AM, Alexei Starovoitov wrote:
> On 6/15/15 11:34 PM, Daniel Wagner wrote:
>> On 06/16/2015 08:25 AM, Alexei Starovoitov wrote:
>>> On 6/15/15 11:06 PM, Daniel Wagner wrote:
>>>>> with the above 'fix' the trace.patch is now passing.
>>>&
On 06/16/2015 08:25 AM, Alexei Starovoitov wrote:
> On 6/15/15 11:06 PM, Daniel Wagner wrote:
>>> with the above 'fix' the trace.patch is now passing.
>> It still crashes for me with the original test program
>>
>> [ 145.908013] [] ? __rcu_reclai
On 06/16/2015 07:45 AM, Alexei Starovoitov wrote:
> On 6/15/15 7:14 PM, Paul E. McKenney wrote:
>>
>> Why do you believe that it is better to fix it within call_rcu()?
>
> found it:
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 8cf7304b2867..a3be09d482ae 100644
> ---
On 06/12/2015 07:17 PM, Alexei Starovoitov wrote:
> On 6/12/15 7:33 AM, Daniel Wagner wrote:
>> On 06/12/2015 08:12 AM, Daniel Wagner wrote:
>> Attaching kprobes to trace_preempt_[on|off] works fine. Empty BPF
>> programs connected to the probes are no problem as well. So
On 06/13/2015 10:45 AM, Daniel Wagner wrote:
> On 06/12/2015 06:01 PM, Steven Rostedt wrote:
>>> Signed-off-by: Tom Zanussi
>>> Signed-off-by: Daniel Wagner
>>
>> Why is Daniel signed off by here?
>
> I have reported the issue and send a fix for this pat
in mmap(), but if so, I cannot locate them now.
>
> Reported-and-tested-by: Prarit Bhargava
> Reported-by: Daniel Wagner
Reported-and-tested-by: Daniel Wagner
Sorry for the long delay. It took me a while to figure out my original
setup. I could verify that this patch made the lockdep m
a trace record
(e.g. one with post_trigger set) could be invoked without one.
Likewise a trigger's cond flag should be reset after it's disabled,
not before.
Signed-off-by: Tom Zanussi
Signed-off-by: Daniel Wagner
Why is Daniel signed off by here?
I have reported the issue and send a fix
On 06/12/2015 08:12 AM, Daniel Wagner wrote:
> On 06/12/2015 12:08 AM, Alexei Starovoitov wrote:
>> On 6/11/15 12:25 AM, Daniel Wagner wrote:
>> If you have any suggestions on where to look, I'm all ears.
>> My stack traces look like:
>> Running with 10*40 (== 400) task
On 06/12/2015 12:08 AM, Alexei Starovoitov wrote:
> On 6/11/15 12:25 AM, Daniel Wagner wrote:
>> In both cases BPF or based on Tom's 'hist' triggers' patches, there is
>> some trickery necessary to get it working. While the first approach
>> has more flexibility what you want