On 11/21, Ivo Sieben wrote:
> Hi
>
> 2012/11/19 Oleg Nesterov o...@redhat.com:
> >
> > Because on a second thought I suspect this change is wrong.
> >
> > Just for example, please look at kauditd_thread(). It does
> >
> > set_current_state(TASK_INTERRUPTIBLE);
> >
> > add_wait_queue(&kauditd_wait, &wait);
> >
> > if (!CONDITION) // <-- LOAD
> >
> But we are trying to understand your fault scenario: how can the LOAD
> leak into the critical section? As far as we understand, spin_unlock()
> also contains a memory barrier that prevents such reordering from
> happening.
It does - it would be very interesting for someone to look at the
On 11/19, Ivo Sieben wrote:
>
> Hi
>
> > 2012/11/19 Oleg Nesterov o...@redhat.com:
> >
> > I am wondering if it makes sense unconditionally. A lot of callers do
> >
> > 	if (waitqueue_active(q))
> > 		wake_up(...);
> >
> > this patch makes the optimization above pointless and adds mb().
> >
>
Hi
2012/11/19 Oleg Nesterov o...@redhat.com:
>
> I am wondering if it makes sense unconditionally. A lot of callers do
>
> 	if (waitqueue_active(q))
> 		wake_up(...);
>
> this patch makes the optimization above pointless and adds mb().
>
>
> But I won't argue.
>
> Oleg.
>
This patch solved
On 11/19, Ivo Sieben wrote:
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3090,9 +3090,22 @@ void __wake_up(wait_queue_head_t *q, unsigned int mode,
> {
> 	unsigned long flags;
>
> -	spin_lock_irqsave(&q->lock, flags);
> -	__wake_up_common(q, mode, nr_exclusive, 0, key);
Hi Ivo,
On 11/19/2012 01:00 PM, Ivo Sieben wrote:
> Check that the waitqueue task list is non-empty before entering the
> critical section. This prevents locking the spin lock needlessly when the
> queue is empty, and therefore also prevents scheduling overhead on a
> PREEMPT_RT system.
>
>
Check that the waitqueue task list is non-empty before entering the critical
section. This prevents locking the spin lock needlessly when the queue is
empty, and therefore also prevents scheduling overhead on a PREEMPT_RT
system.
Signed-off-by: Ivo Sieben meltedpiano...@gmail.com
---
a second repost of this patch v2:
Check that the waitqueue task list is non-empty before entering the critical
section. This prevents locking the spin lock needlessly when the queue is
empty, and therefore also prevents scheduling overhead on a PREEMPT_RT
system.
Signed-off-by: Ivo Sieben meltedpiano...@gmail.com
---
repost:
Did I apply the memory