[ cc: lkml ]
There is a property of shadow memory that I would like to exploit
- any region of shadow memory can be reset to zero at any point
w/o any bad consequences (it can lead to missed data
races, but it's better than OOM kill).
I've tried to execute madvise(MADV_DONTNEED) every
On 23/02/2008, Linus Torvalds <[EMAIL PROTECTED]> wrote:
>
> On Sat, 23 Feb 2008, Dmitry Adamushko wrote:
> >
> > it's not a LOAD that escapes *out* of the region. It's a MODIFY that gets
> *in*:
>
>
> Not with the smp_wmb(). That's the whole point.
*in*:
(1)
MODIFY(a);
LOCK
LOAD(b);
UNLOCK
can become:
(2)
LOCK
MODIFY(a)
LOAD(b);
UNLOCK
and (reordered)
(3)
LOCK
LOAD(a)
MODIFY(b)
UNLOCK
and this last one is a problem. No?
>
> Linus
>
--
Best regards,
Dmitry Adamushko
--
requirement (and only for situation like above) is that there is a
full mb between possible write ops. that have taken place before
try_to_wake_up() _and_ a load of p->state inside try_to_wake_up().
does it make sense #2 ? :-)
(yeah, maybe I'm just too paranoid :-)
>
>
:-)
Linus
--
Best regards,
Dmitry Adamushko
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org
when Stop Machine is triggered. Stop Machine is currently only
+ used by the module insertion and removal.
this "only" part. What about e.g. a 'cpu hotplug' case (_cpu_down())?
(or we should abstract it a bit to the point that e.g. a cpu can be
considered as 'a module'? :-)
--
Best regards,
Dmitry Adamushko
--
From: Dmitry Adamushko <[EMAIL PROTECTED]>
Subject: kthread: call wake_up_process() without the lock being held
- from the POV of synchronization, there should be no need to call
wake_up_process()
with the 'kthread_create_lock' being held;
- moreover, in order to support a lockless check
From: Dmitry Adamushko <[EMAIL PROTECTED]>
Subject: kthread: add a missing memory barrier to kthread_stop()
We must ensure that kthread_stop_info.k has been updated before
kthread's wakeup. This is required to properly support
the use of kthread_should_stop() in the main loop of kthread.
() without the lock being held
---
(this one is from Ingo's sched-devel tree)
softlockup: fix task state setting
kthread_stop() can be called when a 'watchdog' thread is executing after
kthread_should_stop() but before set_task_state(TASK_INTERRUPTIBLE).
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
is from local_bh_enable() and
> from ksoftirqd/n threads (by calling do_softirq()). AFAIK, both
> invocations occur in a _non-interrupt_ context (exception context).
>
> So, where does the interrupt-context tasklets invocation really
> occur ?
Look at irq_exit() in softirq.c.
The common sequence is ... -> do_IRQ() --> irq_exit() --> invoke_softirq()
--
Best regards,
Dmitry Adamushko
--
exited on its own, w/o kthread_stop. Check. */
if (kthread_should_stop()) {
kthread_stop_info.err = ret;
complete(&kthread_stop_info.done);
}
return 0;
}
--
Best regards,
Dmitry Adamushko
as part of this patch. Finally, I think the comment as is is
> hard to understand I got the sense of it backwards on first reading;
> perhaps something like this:
>
> /*
> * Ensure kthread_stop_info.k is visible before wakeup, paired
> * with barrier in set_current_state().
On 19/02/2008, Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> [ ... ]
> > >
> > > From: Dmitry Adamushko <[EMAIL PROTECTED]>
> > > Subject: kthread: add a memory barrier to kthread_stop()
> > >
> > > 'kthread' threads do a check in the following order:
> > > - set_current_state(TASK_INTERRUPTIBLE);
is visible before wakeup, paired
* with barrier in set_current_state().
*/
Yes, I'll try to come up with a better description.
-apw
--
Best regards,
Dmitry Adamushko
--
, &kthread_create_list);
-	wake_up_process(kthreadd_task);
	spin_unlock(&kthread_create_lock);
+	wake_up_process(kthreadd_task);
	wait_for_completion(&create.done);
--
Best regards,
Dmitry Adamushko
--
e, kthread_stop_info.k is not yet visible
- schedule()
...
we missed a 'kthread_stop' event.
hum?
TIA,
---
From: Dmitry Adamushko <[EMAIL PROTECTED]>
Subject: kthread: add a memory barrier to kthread_stop()
'kthread' threads do a check in the following order:
- set_current_state(TASK_INTERRUPTIBLE);
- kthread_should_stop()
t_for_common+0x34/0x170
> [] ? try_to_wake_up+0x77/0x200
> [] wait_for_completion+0x18/0x20
> [ ... ]
does a stack trace always look like this?
--
Best regards,
Dmitry Adamushko
On 03/02/2008, Arjan van de Ven <[EMAIL PROTECTED]> wrote:
> Dmitry Adamushko wrote:
> > Subject: latencytop: optimize LT_BACKTRACEDEPTH loops a bit.
> >
> > It looks like there is no need to loop any longer when 'same == 0'.
>
> thanks for the contribution!
> while I like your patch, I wonder if we should
Subject: latencytop: optimize LT_BACKTRACEDEPTH loops a bit.
It looks like there is no need to loop any longer when 'same == 0'.
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
diff --git a/kernel/latencytop.c b/kernel/latencytop.c
index b4e3c85..61f7da0 100644
--- a/kernel/latencytop.c
On 02/02/2008, Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
>
> > yeah, I was already on a half-way to check it out.
> >
> > It does fix a problem for me.
> >
> > Don't forget to take along these 2 fixes from Peter's patch:
> >  - fix break usage in do_each_thread
On 01/02/2008, Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
>
> > > I've observed delays from ~3 s. up to ~8 s. (out of ~20 tests) so
> > > the 10s. delay of msleep_interruptible() might be related but I'm
> >
On 01/02/2008, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> On 01/02/2008, Ingo Molnar <[EMAIL PROTECTED]> wrote:
> >
> > thanks - i cannot reproduce it on my usual suspend/resume testbox
> > because e1000 broke on it, and this is a pretty annoying regression.
> > We'll have to undo the hung-tasks detection
I've observed delays from ~3 s. up to ~8 s. (out of ~20 tests) so the
10s. delay of msleep_interruptible() might be related but
I'm still looking for the reason why this fix helps (and what goes
wrong with the current code).
>
> Ingo
>
--
Best regards,
Dmitry Adamushko
--
00) timeouts? On
average, it would take +-5 sec. and might explain the first
observation of Rafael -- "...adds a 5 - 10 sec delay..." (although,
lately he reported up to +30 sec. delays).
(/me going to also try reproducing it later today)
> [ ... ]
--
Best regards,
Dmitry Adamushko
g this commit (it reverts with some minor modifications) fixes the
> problem for me.
What if you use the same kernel that triggers a problem and just disable
this new 'softlockup' functionality:
echo 0 > /proc/sys/kernel/hung_task_timeout_secs
does the problem disappear?
TIA,
>
> Thanks,
> Rafael
>
--
Best regards,
Dmitry Adamushko
--
On 20/01/2008, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> Hello Arjan,
>
> a few comments on the current locking scheme.
heh... now having read the first message in this series ("[Announce]
Development release 0.1 of the LatencyTOP tool"), I finally see that
"
seq_puts(m, "Latency Top version : v0.1\n");
> +
> + for (i = 0; i < 32; i++) {
> + if (task->latency_record[i].reason)
for (i = 0; i < LT_SAVECOUNT; i++) {
--
Best regards,
Dmitry Adamushko
sched_class_fair :: load_balance_fair()
upon getting a PRE_SCHEDULE load-balancing point.
IMHO, it would look nicer this way _BUT_ yeah, this 'full' abstraction
adds additional overhead to the hot-path (which might make it not that
worthy).
--
Best regards,
Dmitry Adamushko
From: Dmitry Adamushko <[EMAIL PROTECTED]>
Clean-up try_to_wake_up().
Get rid of the 'new_cpu' variable in try_to_wake_up() [ that's, one #ifdef
section less ].
Also remove a few redundant blank lines.
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
---
diff --git a/kernel/sched.c b/kernel/sched.c
From: Dmitry Adamushko <[EMAIL PROTECTED]>
No need to do a check for 'affine wakeup and passive balancing possibilities' in
select_task_rq_fair() when task_cpu(p) == this_cpu.
I guess, this part got missed upon introduction of per-sched_class
select_task_rq()
in try_to_wake_up().
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
rface would become less straightforward,
logically-wise.
Something like:
rq = activate_task(rq, ...) ; /* may unlock rq and lock/return another one */
would complicate the existing use cases.
--
Best regards,
Dmitry Adamushko
experiment) fix them to different
CPUs as well?
sure, the scenario is highly dependent on a nature of those
'events'... and I can just speculate here :-) (but I'd imagine
situations when such a scenario would scale better).
>
> Thank you again,
> --Micah
>
--
Best regards,
Dmitry Adamushko
unning == 0)
return idle_sched_class.pick_next_task(rq);
at the beginning of pick_next_task().
(or maybe put it at the beginning of the
if (likely(rq->nr_running == rq->cfs.nr_running)) {} block as we
already have 'likely()' there).
--
Best regards,
Dmitry Adamushko
On 22/11/2007, Micah Dowty <[EMAIL PROTECTED]> wrote:
> On Tue, Nov 20, 2007 at 10:47:52PM +0100, Dmitry Adamushko wrote:
> > btw., what's your system? If I recall right, SD_BALANCE_NEWIDLE is on
> > by default for all configs, except for NUMA nodes.
>
> It's a dual AMD64 Opteron.
set_table_entry(&table[8], "imbalance_pct", &sd->imbalance_pct,
	sizeof(int), 0644, proc_dointvec_minmax);
-	set_table_entry(&table[10], "cache_nice_tries",
+	set_table_entry(&table[9], "cache_nice_tries",
	&sd->cache_nice_tries,
	sizeof(int), 0644, proc_dointvec_minmax);
-	set_table_entry(&table[12], "flags", &sd->flags,
+	set_table_entry(&table[10], "flags", &sd->flags,
	sizeof(int), 0644, proc_dointvec_minmax);
	return table;
---
--Micah
--
Best regards,
Dmitry Adamushko
--- kernel/sched.c-old 2007-11-20 22:33:22.0 +0100
ake any difference?
moreover, /proc/sys/kernel/sched_domain/cpu1/domain0/newidle_idx seems
to be responsible for a source of the load for calculating the busiest
group. e.g. with newidle_idx == 0, the current load on the queue is
used instead of cpu_load[].
>
> Thanks,
> --Micah
>
--
Best reg
/schedstat
... wait either a few seconds or until the problem disappears
(whatever comes first)
# cat /proc/schedstat
TIA,
>
> --Micah
>
--
Best regards,
Dmitry Adamushko
le() is about... so maybe there are
some factors resulting in its inconsistency/behavioral differences on
different kernels.
Let's say we change a pattern for the niced task: e.g. run for 100 ms.
and then sleep for 300 ms. (that's ~25% of cpu load) in the loop. Any
behavioral changes?
>
ry tick (sched.c :: update_cpu_load()) and consider
this_rq->ls.load.weight at this particular moment (that is the sum of
'weights' for all runnable tasks on this rq)... and it may well be
that the aforementioned high-priority task is just never (or likely,
rarely) runnable at this particular moment (it runs for short intervals
of time in between ticks).
HZ value (I can't see the kernel config
immediately on the bugzilla page) and a task niced to the lowest
priority (is this 'kjournald' mentioned in the report of lower prio? )
running for a full tick, 'tmp' can be such a big value... hmm?
--
Best regards,
Dmitry Adamushko
before hitting this "once in 10
minutes" point?
say, with 256 Mb. the blips could just become lower (e.g. 2 ms.) and
are not reported as "big ones" (>5 ms. in your terms)...
Quite often the source of high periodic latency is SMI (System
Management Interrupts)... I don't know though, whether any of SMI
activities are somehow dependent on the size of RAM.
ice spec.)...
--> ISR runs and due to some error e.g. loops endlessly/deadlocks/etc.
Tried placing printk() at the beginning of ISR?
--
Best regards,
Dmitry Adamushko
result, cfs_rq->curr can be NULL
> for the child.
Would it be better, logically-wise, to use is_same_group() instead?
Although, we can't have 2 groups with cfs_rq->curr != NULL on the same
CPU... so if the child belongs to another group, it's cfs_rq->curr is
automatically NULL indeed.
--
Best re
s to be separate.
Humm... the 'current' is not kept within the tree but
current->se.on_rq is supposed to be '1' ,
so the old code looks ok to me (at least for the 'leaf' elements).
Maybe you were able to get more useful oops on your site?
> --
> Regards,
> vatsa
>
--
Best regards,
antages here as well. e.g. we would
likely need to remove 'max 3 tasks at once' limit and get,
theoretically, unbounded time spent in push_rt_tasks() on a single
CPU).
>
> -- Steve
>
--
Best regards,
Dmitry Adamushko
On 21/10/2007, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> On 20/10/2007, Jeff Garzik <[EMAIL PROTECTED]> wrote:
> > Chuck Ebbert wrote:
> > > On 10/19/2007 05:39 PM, Jeff Garzik wrote:
> > >> On my main devel box, vanilla 2.6.23 on x86-64/Fedora-7, I'm seeing a
> > >> certain behavior at least once a day. I'll
the
task is currently active.
Let's start a busy-loop task on this cpu and see whether it's able to
make any progress (TIME counter in 'ps')
# taskset -c THIS_CPU some_busy_looping_prog
TIA,
--
Best regards,
Dmitry Adamushko
that the pull/push algorithm should be able to
naturally accomplish the proper job pushing/pulling 1 task at once (as
described above)... any additional actions are just overhead or there
is some problem with the algorithm (ah well, or with my understanding
:-/ )
--
Best regards,
Dmitry Adamushko
l_rt_task()
for the 'next' in a similar way as it's done in push_rt_task() .
>
> [ ... ]
>
--
Best regards,
Dmitry Adamushko
next_task_rt(struct rq *rq)
{
struct rt_prio_array *array = &rq->rt.active;
struct task_struct *next;
struct list_head *queue;
int idx;
-	idx = sched_find_first_bit(array->bitmap);
+ rq->highest_prio = idx = sched_find_first_bit(array->bitmap);
[ ...
prio);
> __set_bit(p->prio, array->bitmap);
> +
> + inc_rt_tasks(p, rq);
why do you need the rt_task(p) check in {inc,dec}_rt_tasks() ?
{enqueue,dequeue}_task_rt() seem to be the only callers and they will
crash (or corrupt memory) anyway in the case of !rt_task(p) (sure,
this case would mean something is broken somewhere wrt sched_class
handling).
-bitmap);
[ ... ]
additionally, if we can tolerate the 'latency' (of updating
highest_prio) == the worst case scheduling latency, then
rq_prio_add_task() is not necessary at all.
--
Best regards,
Dmitry Adamushko
k_fair() -->
__enqueue_task() --> rb_insert_color()) that you are already aware of
... (/me will continue tomorrow).
--
Best regards,
Dmitry Adamushko
ER_OF_CPUS) and SD_SCHED_FORK
(actually, sched_balance_self() from sched_fork()) is just an overhead
in this case...
although, sched_balance_self() is likely to be responsible for a minor
% of the time taken to create a new context so optimizing it away
(esp. for some corner cases) won't improve