"Timothy J. Massey" wrote:
> 
> On Sun, 22 Aug 1999 18:26:49 -0400, Kevin Lawton wrote:
> 
> >You point out that on a slower CPU the interrupts would come
> >faster.  Well, that's true, but perhaps nowhere near as
> >fast as you could feed them into the guest trying to catch up.
> >If you're only getting 10% of the processor for a while, the
> >VM would have to see interrupts occurring at over 10 times
> >the normal rate.  In addition, certain guest OSes like Windows
> >are going to require more intervention via the monitor than
> >Linux.  That will make the factor even higher.
> 
> >When the guest is running natively, it is used
> >to taking a certain amount of time to handle an interrupt and
> >scheduling even if the application load is high.  For the
> >locality of the system code it is used to seeing 1/1 of the
> >processor time in native mode.
> 
> So, basically, because the guest OS' kernel can be preempted (which, to an
> extent, couldn't happen in the real OS on real hardware), it's not only a
> matter of the increase in amount of interrupts generated, but the fact that
> processing those interrupts will *also* take much longer, due to
> virtualization overhead and host preemption.  Both of these items mean that
> increasing the number of interrupts per given block of code may have a
> tremendous effect on performance, depending on what percentage of time the
> guest is busy processing data. In an environment where the guest is spending
> 90% of its time waiting, it will not be a problem, but in an environment
> where the guest is spending 90% of its time processing real information, the
> extra interrupts necessary to synchronize time (and the extra latency that
> comes from processing these interrupts) will kill performance.
> 
> Sounds right to me.  And as I think everybody is saying, the key here is to
> strike a (user-adjustable) balance, depending on the circumstances...

Ah, the voice of reason. :^)

Since the issues are understood, we can use this to see if we can
make it work.  Multimedia won't be the first thing we run, but you
folks are right that it's what people will want to run.

A quick look at the issues.  Here's what the host OS gives us,
say when there are 3 things going on (frames being time quanta):

|frame0  |frame1  |frame2  |frame3  |frame4  |frame5  |frame6  |
 guest0    xxxxx   xxxxx    guest1    xxxx     xxxx    guest2
 1 irq0   1 irq0   1 irq0   1 irq0   1 irq0   1 irq0   1 irq0

Here's what the guest expects:

|frame0  |frame1  |frame2  |frame3  |frame4  |frame5  |frame6  |
 guest0   guest1   guest2
 1 irq0   1 irq0   1 irq0

So, locally to this example, we need to accelerate the
time facilities (which drive the timer interrupts, etc.) in the guest
by a factor of 3.  Even though the same workload was not accomplished,
at least the guest's time reference would be equivalent to the
host's.  So we effectively have the following, based on the host's
timeframe reference:


|frame0  |frame3  |frame6  |
 guest0   guest1   guest2
 3 irq0   3 irq0   3 irq0

What we need to do is establish some thresholds where we decide
we cannot achieve sync with the host, because our goal is falling
away from us and we're only making it worse by accelerating any
further.  Then, just drop trying to sync to host frame X, and
start the synchronization on frame X+1 or some later frame.

In multimedia, this drop would manifest itself as a period where
your video/sound was not delivered on time; then hopefully
it would go back to normal once the system load went down.
Kinda like a slow spot on a cassette or video tape.  That's
something I think people will live with.  What we don't
want to do is chase our tails, trying eternally to keep
in sync with each host timeframe.

What we don't know is what these thresholds are.  We don't
need to worry too much until we get something useful
going.  Things will be more obvious then as to performance,
and which things will suffer.  Trial and error and some
sampling of the duration of time it takes to process some
virtualization tasks will let us know where we stand.

gotta run...

-Kevin
