On Mon, Jan 15, 2007 at 07:55:16PM +0300, Oleg Nesterov wrote:
> > What if 'singlethread_cpu' dies?
>
> Still can't understand you. Probably you missed what singlethread_cpu is.
Oops, yes ..I had mistakenly thought that create_workqueue_thread() would
bind the worker thread to singlethread_cpu for sing
On Mon, Jan 15, 2007 at 03:54:01PM +0300, Oleg Nesterov wrote:
> > - singlethread_cpu needs to be hotplug safe (broken currently)
>
> Why? Could you explain?
What if 'singlethread_cpu' dies?
> > - Any reason why cpu_populated_map is not modified on CPU_DEAD?
>
> Because CPU_DEAD/CPU_UP_CANCELED
On Mon, Jan 15, 2007 at 02:54:10AM +0300, Oleg Nesterov wrote:
> How about the pseudo-code below?
Some quick comments:
- singlethread_cpu needs to be hotplug safe (broken currently)
- Any reason why cpu_populated_map is not modified on CPU_DEAD?
- I feel more comfortable if workqueue_cpu_callba
On Wed, Jan 10, 2007 at 10:20:28AM -0800, Christoph Lameter wrote:
> I have got a bad feeling about upcoming deadlock problems when looking at
> the mutex_lock / unlock code in cpuup_callback in slab.c. Branches
> that just obtain a lock or release a lock? I hope there is some
> control of what ha
On Tue, Jan 09, 2007 at 07:38:15PM +0300, Oleg Nesterov wrote:
> We can't do this. We should thaw cwq->thread (which was bound to the
> dead CPU) to complete CPU_DEAD event. So we still need some changes.
I noticed that, but I presumed kthread_stop() would post a wakeup which
would bring it out of f
On Tue, Jan 09, 2007 at 06:07:55PM +0300, Oleg Nesterov wrote:
> but at some point we should thaw processes, including cwq->thread which
> should die.
I am presuming we will thaw processes after all CPU_DEAD handlers have
run.
> So we are doing things like take_over_work() and this is the
> sourc
On Tue, Jan 09, 2007 at 01:17:38PM +0100, Heiko Carstens wrote:
> missing in kernel cpu.c in _cpu_down() in case CPU_DOWN_PREPARE
> returned with NOTIFY_BAD. However... this reveals that there is just a
> more fundamental problem.
>
> The workqueue code grabs a lock on CPU_[UP|DOWN]_PREPARE and re
On Tue, Jan 09, 2007 at 01:51:52AM -0800, Andrew Morton wrote:
> > This thread makes absolutely -no- calls to try_to_freeze() in its lifetime.
>
> Looks like a bug to me. powerpc does appear to try to support the freezer.
>
> > 1. Does this mean that the thread can't be frozen? (lets say that th
On Mon, Jan 08, 2007 at 09:26:56PM -0800, Andrew Morton wrote:
> That's not correct. freeze_processes() will freeze *all* processes.
I am not arguing about whether all processes will be frozen. However, my
question was about the freeze point. Let me ask the question with an example:
rtasd thread (arch/po
On Mon, Jan 08, 2007 at 03:54:28PM -0800, Andrew Morton wrote:
> Furthermore I don't know which of these need to be tossed overboard if/when
> we get around to using the task freezer for CPU hotplug synchronisation.
> Hopefully, a lot of them. I don't really understand why we're continuing
> to st
On Fri, Dec 29, 2006 at 08:18:27PM +0300, Oleg Nesterov wrote:
> Remove ->remove_sequence, ->insert_sequence, and ->work_done from struct
> cpu_workqueue_struct. To implement flush_workqueue() we can queue a barrier
> work on each CPU and wait for its completion.
Oleg,
Because of this ch
On Mon, Jan 08, 2007 at 08:06:35PM +0300, Oleg Nesterov wrote:
> Ah, missed you point, thanks. Yet another old problem which was not introduced
> by recent changes. And yet another indication we should avoid kthread_stop()
> on CPU_DEAD event :) I believe this is easy to fix, but need to think more
On Mon, Jan 08, 2007 at 10:37:25AM -0800, Pallipadi, Venkatesh wrote:
> One other approach I was thinking about, was to do all the hardwork in
> workqueue CPU_DOWN_PREPARE callback rather than in CPU_DEAD.
Between DOWN_PREPARE and DEAD, more work can get added to the CPU's
workqueue. So DOWN_PREPA
On Mon, Jan 08, 2007 at 06:56:38PM +0300, Oleg Nesterov wrote:
> > Spotted atleast these problems:
> >
> > 1. run_workqueue()->work.func()->flush_work()->mutex_lock(workqueue_mutex)
> >deadlocks if we are blocked in cleanup_workqueue_thread()->kthread_stop()
> >for the same worker thread to
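The deadlock described in point 1 can be drawn as a timeline (kernel-style sketch, not runnable code):

```c
/*
 * CPU-hotplug path                       worker thread (cwq->thread)
 * ----------------                       ---------------------------
 * mutex_lock(&workqueue_mutex);          run_workqueue()
 * cleanup_workqueue_thread()               -> work->func()
 *   -> kthread_stop(cwq->thread);            -> flush_workqueue()
 *      // waits for the worker to exit          -> mutex_lock(&workqueue_mutex);
 *                                                  // waits for the hotplug path
 *
 * Neither side can proceed: an AB-BA cycle in which one of the "locks"
 * is the worker thread's own termination.
 */
```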
On Sun, Jan 07, 2007 at 11:59:57AM -0800, Andrew Morton wrote:
> > How would this provide a stable access to cpu_online_map in functions
> > that need to block while accessing it (as flush_workqueue requires)?
>
> If a thread simply blocks, that will not permit a cpu plug/unplug to proceed.
>
> T
On Mon, Jan 08, 2007 at 12:51:03AM +0300, Oleg Nesterov wrote:
> Change flush_workqueue() to use for_each_possible_cpu(). This means that
> flush_cpu_workqueue() may hit CPU which is already dead. However in that
> case
>
> if (!list_empty(&cwq->worklist) || cwq->current_work != NULL)
>
> m
On Sun, Jan 07, 2007 at 10:13:44PM +0530, Srivatsa Vaddagiri wrote:
> If CPU_DEAD does nothing, then the dead cpu's workqueue list may be
> non-empty. How will it be flushed, given that no thread can run on the
> dead cpu?
>
> We could consider CPU_DEAD moving over work atle
On Sun, Jan 07, 2007 at 05:22:46PM +0300, Oleg Nesterov wrote:
> On 01/07, Oleg Nesterov wrote:
> >
> > Thoughts?
>
> How about:
>
> CPU_DEAD does nothing. After __cpu_disable() cwq->thread runs on
> all CPUs and becomes idle when it flushes cwq->worklist: nobody
^^^
all exce
On Sun, Jan 07, 2007 at 03:56:03PM +0300, Oleg Nesterov wrote:
> Srivatsa, I'm completely new to cpu-hotplug, so please correct me if I'm
> wrong (in fact I _hope_ I am wrong) but as I see it, the hotplug/workqueue
> interaction is broken by design, it can't be fixed by changing just locking.
>
>
On Sat, Jan 06, 2007 at 11:11:17AM -0800, Andrew Morton wrote:
> Has anyone thought seriously about using the process freezer in the
> cpu-down/cpu-up paths? That way we don't need to lock anything anywhere?
How would this provide a stable access to cpu_online_map in functions
that need to block
On Sat, Jan 06, 2007 at 08:34:16PM +0300, Oleg Nesterov wrote:
> I suspect this can't help either.
>
> The problem is that flush_workqueue() may be called while cpu hotplug event
> in progress and CPU_DEAD waits for kthread_stop(), so we have the same dead
> lock if work->func() does flush_workque
On Sat, Jan 06, 2007 at 07:30:35PM +0300, Oleg Nesterov wrote:
> Stupid me. Thanks.
>
> I'll try to do something else tomorrow. Do you see a simple solution?
Sigh ..I don't see a simple solution, unless we have something like
lock_cpu_hotplug() ..
Andrew,
This workqueue problem has expos
On Sat, Jan 06, 2007 at 06:10:36PM +0300, Oleg Nesterov wrote:
> Increment hotplug_sequence earlier, under CPU_DOWN_PREPARE. We can't
> miss the event, the task running flush_workqueue() will be re-scheduled
> at least once before CPU actually disappears from cpu_online_map.
Eww ..what happens if
On Fri, Jan 05, 2007 at 05:07:17PM +0300, Oleg Nesterov wrote:
> How about block_cpu_down() ?
Maybe ..not sure
If we do introduce such a function, we may need to convert several
existing preempt_disable() calls (made with the intent of blocking
cpu_down) to block_cpu_down() ..
> These cpu-hotplu
On Fri, Jan 05, 2007 at 03:42:46PM +0300, Oleg Nesterov wrote:
> preempt_disable() can't prevent cpu_up, but flush_workqueue() doesn't care
> _unless_ cpu_down also happened meantime (and hence a fresh CPU may have
> pending work_structs which were moved from a dead CPU).
Yes, that was what I had
On Thu, Jan 04, 2007 at 10:31:07AM -0800, Andrew Morton wrote:
> But before we do much more of this we should have a wrapper. Umm
>
> static inline void block_cpu_hotplug(void)
> {
> preempt_disable();
> }
Nack.
This will only block cpu_down, not cpu_up, and hence is a misnomer. I would be
On Thu, Jan 04, 2007 at 09:18:50AM -0800, Andrew Morton wrote:
> This?
This can still lead to the problem spotted by Oleg here:
http://lkml.org/lkml/2006/12/30/37
and you would need a patch similar to the one he posted there.
> void fastcall flush_workqueue(struct workqueue_struct *wq)
> {
> -
On Thu, Jan 04, 2007 at 07:31:39PM +0300, Oleg Nesterov wrote:
> > AFAIK this deadlock originated from Andrew's patch here:
> >
> > http://lkml.org/lkml/2006/12/7/231
>
> I don't think so. The core problem is not that we are doing unlock/sleep/lock
> with this patch. The thing is: work->func()
On Thu, Jan 04, 2007 at 05:29:36PM +0300, Oleg Nesterov wrote:
> Thanks, I need to think about this.
>
> However I am not sure I fully understand the problem.
>
> First, this deadlock was not introduced by recent changes (including "single
> threaded flush_workqueue() takes workqueue_mutex too"),
On Mon, Dec 18, 2006 at 01:34:16AM +0300, Oleg Nesterov wrote:
> void fastcall flush_workqueue(struct workqueue_struct *wq)
> {
> - might_sleep();
> -
> + mutex_lock(&workqueue_mutex);
> if (is_single_threaded(wq)) {
> /* Always use first cpu's area. */
> -
On Tue, Dec 19, 2006 at 03:43:19AM +0300, Oleg Nesterov wrote:
> > Taking workqueue_mutex() unconditionally in flush_workqueue() means
> > that we'll deadlock if a single-threaded workqueue callback handler calls
> > flush_workqueue().
>
> Well. But flush_workqueue() drops workqueue_mutex before g
On Sun, Dec 10, 2006 at 04:16:00AM -0800, Andrew Morton wrote:
> One quite different way of addressing all of this is to stop using
> stop_machine_run() for hotplug synchronisation and switch to the swsusp
> freezer infrastructure: all kernel threads and user processes need to stop
> and park thems
On Sun, Dec 10, 2006 at 09:26:16AM +0100, Ingo Molnar wrote:
> something like the pseudocode further below - when applied to a data
> structure it has semantics and scalability close to that of
> preempt_disable(), but it is still preemptible and the lock is specific.
Ingo,
The pseudo-code
Somebody was asking this: "Does any 32-bit Linux kernel support running a 64-bit
app on top of it (on a 64-bit platform, that is)?"
AFAIK it's not supported, but I wanted to make sure ..
--
Regards,
vatsa
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a mess
On Thu, Dec 07, 2006 at 08:54:07PM -0800, Andrew Morton wrote:
> Could do, not sure.
AFAICS it will deadlock for sure.
> I'm planning on converting all the locking around here
> to preempt_disable() though.
Will look forward to that patch. It's hard to dance around w/o a
lock_cpu_hotplug() ..:)
On Thu, Dec 07, 2006 at 11:37:00AM -0800, Andrew Morton wrote:
> -static void flush_cpu_workqueue(struct cpu_workqueue_struct *cwq)
> +/*
> + * If cpu == -1 it's a single-threaded workqueue and the caller does not hold
> + * workqueue_mutex
> + */
> +static void flush_cpu_workqueue(struct cpu_workq
On Thu, Dec 07, 2006 at 11:47:01AM +0530, Srivatsa Vaddagiri wrote:
> - Make it rw-sem
I think rw-sems were also shown to hit deadlocks (a recursive read-lock
attempt deadlocks when a writer comes between the two read attempts by the
same thread). So the below suggestion only seems to make se
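The recursive read-lock failure is easiest to see as a timeline (sketch only; cpu_hotplug_sem is a made-up name here, and the behavior assumes a fair rwsem that queues new readers behind a waiting writer):

```c
// Thread A                         Thread B
// down_read(&cpu_hotplug_sem);
//                                  down_write(&cpu_hotplug_sem);  // blocks: A holds it for read
// down_read(&cpu_hotplug_sem);     // blocks: queued behind the waiting writer
//
// B waits for A to drop its first read lock; A's second down_read() waits
// behind B.  Deadlock, even though A already holds the sem for read.
```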
On Wed, Dec 06, 2006 at 05:26:14PM -0700, Bjorn Helgaas wrote:
> loadkeys is holding the cpu_hotplug lock (acquired in flush_workqueue())
> and waiting in flush_cpu_workqueue() until the cpu_workqueue drains.
>
> But events/4 is responsible for draining it, and it is blocked waiting
> to acquire t
u_meter_limit
# # Assign 20% bandwidth to less_imp_grp
# echo 20 > less_imp_grp/cpu_meter_limit
# echo $very_imp_task1_pid > very_imp_grp/tasks
# echo $very_imp_task2_pid > very_imp_grp/tasks
# echo $less_imp_task1_pid > less_imp_grp/tasks
deadlock on
container_lock(). Avoid this by introducing __update_flag, which
doesn't take container_lock().
(I have also hit some lockdep warnings. Will post them after some
review, to make sure that they are not introduced by my patches).
Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]
mpnice later if required (http://lkml.org/lkml/2006/9/28/244)
Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]>
---
 linux-2.6.19-rc6-vatsa/include/linux/sched.h |   3
 linux-2.6.19-rc6-vatsa/kernel/sched.c        | 195 ++-
 2 files changed, 195 insertions(+),
imer->reprogram();
> check_cpu_mask(nohz_cpu_mask);
> if (we_are_last_idle)
> enter_all_cpus_idle();
Looks fine!
--
Thanks and Regards,
Srivatsa Vaddagiri,
Linux Technology Center,
IBM Software Labs,
Bangalore, INDIA - 560017
ondering
> how to gracefully handle the SMP case. Or is that not a problem?
I don't see current_ticksource/current_dyn_tick_timer being write-heavy.
In fact they seem to be initialized during bootup and mostly read-only
after that. That may not warrant a per-CPU structure.
e callee of reprogram_timer itself.
sts -
include/asm-i386/dyn-tick.h and arch/i386/kernel/dyn-tick.c ..
IMO the current abstraction of 'dyn_tick_timer' is good enough to unify all the
ports of no-idle-hz. We probably just need to iron out the differences between
how ARM and x86 define this.
As far as the problem of
maybe just
at the max limit allowed for ACPI PM timer.
I will test this code with the lost-tick recovery fixes
for ACPI PM timer that I sent out and let you know
how it performs!
> for (i=0; i<10; i++)
> asm volatile("");
ink, until John's TOD comes along.
rn
> immediately from the callback?
Don't know. It just felt nice to avoid any unnecessary invocations.
On Mon, Sep 05, 2005 at 12:30:53PM +0530, Srivatsa Vaddagiri wrote:
> > Thus, for x86, we would have a dyn_tick_timer structure for the PIT,
> > APIC, ACPI PM-timer and the HPET. These structures could be put in
>
> Does the ACPI PM-timer support generating interrupts also? Same
if we are coming out of 'all-cpus-were-asleep'
state. In case of ARM, dyn_tick_timer->handler could be called
for this purpose.
> This seems to only recover one tick. What if multiple ticks were lost?
cur_timer->mark_offset() recovers the rest.
tion for not raising the softirq at all
if the CPU was woken up w/o having skipped any ticks (because
of some external interrupt).
p;-ing?
Everything can be represented in bits! I was just comparing composition
of structures in ARM and x86. The state bitfield is part of
'struct dyn_tick_timer' itself in ARM while it is part of a separate structure
(dyn_tick_state) in x86. Similar minor points need to be sorted out w
e whereas
ARM uses the dyn_tick_timer structure itself to store the state etc).
o simulate these lost ticks!
RROR status. The negative latencies don't seem
good. Do you see them too? I ran your test on my RH9-based T30 and
found several negative latencies there too.
). Hence I think this
particular patch will need more review/work.
Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]>
---
linux-2.6.13-mm1-root/arch/i386/kernel/timers/timer_pm.c | 48 ++-
1 files changed, 22 insertions(+), 26 deletions(-)
diff -puN arch/i386/kernel/t
be really nice to sync
up with what is there in ARM/s390. I haven't looked closely at both
implementations. Will have a look and post an update which should keep the
interfaces alike on all platforms.
posting a consolidated version on his site against 2.6.13-mm1
pretty soon I hope.
ake 4 seconds.
>
> I don't know yet if this is the problem George Anzinger mentioned with
> next_timer_interrupt(), or if this is OMAP specific. But it only seems
Will let you know if I see it on x86 too.
next patch that I am trying out. Will post it if I happen
to have success :)
Z of a second (see definition
of LATCH and pm_ticks_per_jiffy in my patch).
amic tick.
I still see zero lost ticks being reported with your patch (during
bootup at least), which means all is still not well?
eing managed by dyn-tick patch.
Are you referring to some old version which I haven't seen, perhaps?
If so, what were those event queues used for?
On Wed, Aug 31, 2005 at 10:28:43PM +0530, Srivatsa Vaddagiri wrote:
> Following patches related to dynamic tick are posted in separate mails,
> for convenience of review. The first patch probably applies w/o dynamic
> tick consideration also.
>
> Patch 2/3 -> Dyn-tick cleanups
On Wed, Aug 31, 2005 at 10:28:43PM +0530, Srivatsa Vaddagiri wrote:
> Following patches related to dynamic tick are posted in separate mails,
> for convenience of review. The first patch probably applies w/o dynamic
> tick consideration also.
>
> Patch 3/3 -> Use lost tick
On Wed, Aug 31, 2005 at 10:28:43PM +0530, Srivatsa Vaddagiri wrote:
> Following patches related to dynamic tick are posted in separate mails,
> for convenience of review. The first patch probably applies w/o dynamic
> tick consideration also.
>
> Patch 1/3 -> Fixup lost t
On Wed, Aug 31, 2005 at 04:47:05PM +0530, Srivatsa Vaddagiri wrote:
> On Wed, Aug 31, 2005 at 01:03:05PM +0200, Arjan van de Ven wrote:
> > that sounds like a fundamental issue that really needs to be fixed
> > first!
>
> It should be fixed by the patch here:
> http://
ade
some changes to the lost tick calculation in timer_pm.c after which
it seems to be stable on some machines, but I can't repeat that
on other (maybe newer) machines. Will post out all the changes I have
pretty soon.
ed
because of sleeping idle CPUs". I had posted the patch here:
http://marc.theaimsgroup.com/?l=linux-kernel&m=111556608901657&w=2
Will send out this patch against latest tree for Andrew to pick it.
runtime caused what I described to you as PIT mode (long stalls etc).
I think I have recreated this on a machine here. Disabling
CONFIG_DYN_TICK_APIC at compile-time didn't seem to make any difference. Will
look at this problem next.
il/linux/kernel/0508.1/0982.html
Oops ..Thanks for pointing it out! Will try this patch and let you
know how stable time is with dyn-tick.
post an update as soon as I get more information.
want to know if your
hardware has local APIC that is enabled by the kernel).
-2.patch
Thanks for consolidating all the patches and putting it up on your website!
Makes it easier for me to send any further patches on top of the above one.
have tested this patch on my Laptop (P4) that HZ goes down to ~25 with
dyn-ticks enabled (but Power consumption goes _up_ as Ted had noted earlier
- I need to try some of the ACPI patches that were pointed out in the thread).
rrupts instead of
reprogramming it and conditionally running local timers) comes from VST
(Variable Sleep Time).
Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]>
---
linux-2.6.13-rc6-work-root/arch/i386/kernel/apic.c | 16 -
linux-2.6.13-rc6-work-root/arch/i386/kernel/dy
dering that we disable PIT only for short duration
in practice (few seconds maybe) _and_ that we don't have HRT support yet?
but that is probably
not a concern now!
likely a candidate for merging. I will send out my SMP-support
changes to dynamic tick soon.
> You may also want to check out the ARM implementation as it does not have
> the issues listed above, which are mostly x86 specific issues.
Thanks for the pointer. Will look at it.
ted of my progress with dynamic tick patch.
biggest bottleneck I see in VST going mainline is
its dependency on the HRT patch, but IMO it should be possible to write a small patch
to support VST w/o HRT.
George, what do you think?
On Thu, Jun 30, 2005 at 06:17:11PM +0530, Srivatsa Vaddagiri wrote:
> Digging further revealed that this max time was restricted by
> various timers kernel uses. Mostly it was found to be because of
> the slab allocator reap timer (it requests a timer every ~2sec on
> every CPU) and m
ded to use 64-bit number, we may have to ensure that this limit (1193 hrs)
is not exceeded.
management in embedded
platforms. Even (virtualized) servers will benefit from this patch, by
making use of the (virtual) CPU resources more efficiently.
[Sorry about sending my response from a different account. Can't seem
to access my ibm account right now]
* Ingo wrote:
> Another, more effective, less intrusive but also more complex approach
> would be to make a distinction between 'totally idle' and 'partially
> idle or busy' system states. Wh
balance_interval of an about-to-sleep idle CPU, don't we still run the
risk of the idle CPU being woken up and going immediately back to sleep
(because there was no imbalance)?
Moreover we may be greatly reducing the amount of time a CPU is allowed to
sleep this way ...
if (!busiest || this_load >= max_load)
goto out_balanced;
_