On Sun, 22 Aug 1999 18:26:49 -0400, Kevin Lawton wrote:

>You point out that on a slower CPU the interrupts would come
>faster.  Well, that's true, but perhaps nowhere near as
>fast as you could feed them into the guest trying to catch up.
>If you're only getting 10% of the processor for a while, the
>VM would have to see interrupts occurring at over 10 times
>the normal rate.  In addition, certain guest OSes like Windows
>are going to require more intervention via the monitor than
>Linux.  That will make the factor even higher.

>When the guest is running natively, it is used
>to taking a certain amount of time to handle an interrupt and
>scheduling even if the application load is high.  For the
>locality of the system code it is used to seeing 1/1 of the
>processor time in native mode.

So, basically, because the guest OS's kernel can be preempted (which, to an
extent, couldn't happen in the real OS on real hardware), it's not only a
matter of the increase in the number of interrupts generated, but the fact
that processing those interrupts will *also* take much longer, due to
virtualization overhead and host preemption.  Both of these factors mean that
increasing the number of interrupts per given block of code may have a
tremendous effect on performance, depending on what percentage of its time the
guest spends busy processing data.  In an environment where the guest spends
90% of its time waiting, it won't be a problem, but in an environment where
the guest spends 90% of its time processing real information, the extra
interrupts necessary to synchronize time (and the extra latency that comes
from processing those interrupts) will kill performance.

Sounds right to me.  And as I think everybody is saying, the key here is to
strike a (user-adjustable) balance, depending on the circumstances...

Thanks for clarifying that.  Now *both* sides make sense!  :)

Tim Massey
