Andi Kleen wrote:
> Hmm, stolen time could even be useful without virtualization; to a
> large degree, if cpufreq reduces the speed of your CPU you have
> "stolen cycles" that way... I wonder if this concept can be used
> for that as well...
I don't see the point, frankly.
In a virtualized environment, steal time shows the amount of contention between guests. If you have two guests each trying to use 100% of the CPU, but they have to share it and each gets 50%, then the steal time will be 50% on each guest. This allows the sysadmin to see that the guests would have been able to run faster, if only they were not fighting over the same CPU; performance could have been improved by live-migrating one of them elsewhere.

Contrast this with a client/server (or backend/middle tier) application split across two virtual machines on one system, where the virtual machines work together. They can still end up getting 50% of the CPU each, but a lot of the time they do not want the CPU simultaneously. In that case there will be idle time, and the amount of steal time will be far lower.

Steal time allows you to distinguish between "the CPU is not fast enough", "I have too many virtual machines on this CPU", and "things are running OK".

As for steal time in a non-virtualized environment, I am not quite sure either. I can't think of any action the sysadmin (or some load balancing program) could take based on that information...
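Purely for illustration (this sketch is not from the thread): on kernels
that export it (2.6.11 and later), steal time is the 8th value on the
"cpu" line of /proc/stat, counted in USER_HZ ticks, and a monitoring
tool could read it like this:

#include <stdio.h>
#include <string.h>

/*
 * Minimal sketch: read the aggregate "cpu" line from /proc/stat and
 * report the steal field. Field layout on that line is:
 *   user nice system idle iowait irq softirq steal ...
 * All values are cumulative ticks in units of USER_HZ.
 * On kernels without a steal field, sscanf matches fewer than
 * 8 values and nothing is printed.
 */
int main(void)
{
	FILE *f = fopen("/proc/stat", "r");
	unsigned long long user, nice, sys, idle;
	unsigned long long iowait, irq, softirq, steal;
	char line[512];

	if (!f) {
		perror("fopen /proc/stat");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* match the summary "cpu " line, not "cpu0", "cpu1", ... */
		if (strncmp(line, "cpu ", 4) == 0 &&
		    sscanf(line, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
			   &user, &nice, &sys, &idle,
			   &iowait, &irq, &softirq, &steal) == 8) {
			printf("steal: %llu ticks (idle: %llu ticks)\n",
			       steal, idle);
			break;
		}
	}
	fclose(f);
	return 0;
}

A real tool would sample this twice and look at the delta over the
interval: steal climbing while the guest is busy is the "too many
virtual machines on this CPU" case above, while plenty of idle time
and little steal is the cooperative client/server case.

--
All Rights Reversed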

