Hello,
        I need some input from the community (specifically from the virtual
machine and embedded/power-management folks) on something that I am working on.

        This is regarding cutting off the regular timer ticks when a CPU
becomes idle and does not have any timer set to expire in the "near"
term. Both CONFIG_VST and CONFIG_NO_IDLE_HZ deal with this. Both embedded and
virtualized platforms (ex: UML/S390) benefit from this. For example, if 100s
of guests are running on a single box, then cutting off the useless HZ ticks
on the idle CPUs of all the guests leads to more efficient use of the host
CPU's cycles.

Cutting off local timer ticks has an effect on the scheduler's load-balance
activity, and I am trying to see how best to reduce that impact.

Two solutions have been proposed so far:

        A. As per Nick's suggestion, impose a max limit (say 100 ms or
           maybe a second, Nick?) on how long an idle CPU can avoid taking
           local-timer ticks. As a result, a load imbalance can exist only
           for this max duration, after which the sleeping CPU will wake up
           and balance itself. If there is no imbalance, it can go back to
           sleep for the max duration.

           For example, let's say an idle CPU finds that it has no timer
           due for the next 1 minute. Instead of letting it sleep for
           1 minute in a single stretch, we let it sleep in bursts of
           100 ms (or whatever max duration is chosen). This is still
           better than having the idle CPU take HZ ticks per second.
           (A rough sketch of this follows the list below.)

           As a special case, when all the CPUs of an image go idle, we
           could consider completely shutting off local timer ticks
           across all CPUs (till the next non-timer interrupt).


        B. Don't impose any max limit on how long an idle CPU can sleep.
           Here we let the idle CPU sleep as long as it wants. It is
           woken up by a "busy" CPU when that CPU detects an imbalance;
           the busy CPU acts as a watchdog here. If there are no such
           busy CPUs, then nobody acts as a watchdog and the idle CPUs
           sleep as long as they want. (A rough sketch of this also
           follows the list below.) A possible watchdog implementation
           has been discussed at:

        http://marc.theaimsgroup.com/?l=linux-kernel&m=111287808905764&w=2
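
To make (A) a bit more concrete, here is a rough, purely illustrative sketch
in plain C (not actual kernel code). The helpers next_timer_event_ms() and
stop_tick_and_sleep_ms(), as well as the MAX_IDLE_SLEEP_MS value, are made up
for this sketch; the real hooks would live in the arch idle loop and the
VST/NO_IDLE_HZ code:

#include <stdint.h>

#define MAX_IDLE_SLEEP_MS	100	/* cap on tickless sleep (100 ms .. 1 s?) */

/* hypothetical: milliseconds until the next pending timer on this CPU */
extern uint64_t next_timer_event_ms(void);

/* hypothetical: program the timer hardware and halt the CPU */
extern void stop_tick_and_sleep_ms(uint64_t ms);

static void idle_sleep(void)
{
	uint64_t sleep_ms = next_timer_event_ms();

	/*
	 * Never sleep longer than the cap, so any load imbalance lasts
	 * at most MAX_IDLE_SLEEP_MS before this CPU wakes up and
	 * rebalances itself.
	 */
	if (sleep_ms > MAX_IDLE_SLEEP_MS)
		sleep_ms = MAX_IDLE_SLEEP_MS;

	stop_tick_and_sleep_ms(sleep_ms);
}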
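
Similarly, a very rough sketch of the watchdog side of (B). Everything here
(cpu_is_tickless(), cpu_load(), wake_idle_cpu(), IMBALANCE_THRESHOLD) is a
made-up name standing in for whatever the real implementation would use; the
point is only that the check runs from a busy CPU's periodic rebalance path
and sends a wakeup (e.g. a resched IPI) to a sleeping CPU when an imbalance
is seen:

#define NR_CPUS			8	/* illustration only */
#define IMBALANCE_THRESHOLD	2	/* runqueue-length difference, say */

/* hypothetical helpers */
extern int cpu_is_tickless(int cpu);		/* sleeping with ticks stopped? */
extern unsigned long cpu_load(int cpu);		/* runqueue load of that CPU */
extern void wake_idle_cpu(int cpu);		/* e.g. send a resched IPI */

/* called from a busy CPU's periodic rebalance tick */
static void watchdog_check(int this_cpu)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (cpu == this_cpu || !cpu_is_tickless(cpu))
			continue;

		/* wake the sleeping CPU only if it could pull work from us */
		if (cpu_load(this_cpu) >= cpu_load(cpu) + IMBALANCE_THRESHOLD)
			wake_idle_cpu(cpu);
	}
}

With (B), the cost of the check moves onto the busy CPUs; a completely idle
system takes no ticks at all, which is exactly the case the watchdog cannot
cover and does not need to.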

A is obviously simpler to implement than B!
Whether the two are more or less equally efficient is something I don't know.

To help us decide which way to go, could I have some comments from the virtual
machine and embedded folks on which solution they prefer and why?



--


Thanks and Regards,
Srivatsa Vaddagiri,
Linux Technology Center,
IBM Software Labs,
Bangalore, INDIA - 560017

