Kevin Lawton wrote:
> Essentially, the guest will not realize the skew at all
> using the method I talked about. Perceptually, the
> user will notice a slowdown of things which are real-time
> based such as video and sound, as you mention, where frames
> of data are expected to be delivered on time boundaries based
> on the host OS's time reference since that's where the user
> lives. For normal graphics updates, this is not a concern.
> Things will run a little slower - no big deal.
Of course. But multimedia is very important nowadays, and
I'd like freemware to support it. I think this is a very
important aspect of the VM.
> For cases where it matters, I have no problem with adding
> an ability in the timing facility to attempt tracking the
> host OS's time reference, and as I mentioned there should
> be an aggression factor, so the user can weight how important
> it is to track time, trading emulation accuracy for better
> delivery time of video & sound for instance. I want
> the ability to tweak the factor to 0, which *is* the
> accurate way to do this.
Okay, let me explain what I had in mind.
As you may have noticed, I don't really like the idea of
having hardware emulation drivers inside the kernel module,
but timing does need to be in it. What I had in mind was
that the kernel module doesn't support things like the PIT,
RTC, etc., but it simply has a generic timing interface
that user-level drivers can use to set timed interrupts
using an ioctl() call. This way none of the drivers need
to worry about the actual timing details of the VM, because
they use the VM's timer interface. The VM kernel module
is then responsible for when it produces interrupts,
and it can be tweaked with an aggression factor if you want
that.
What do you think?
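Concretely, I'm picturing something like the sketch below. Everything in it is invented for illustration (the struct layout, the ioctl numbers, the `'v'` magic, the helper name) - it's just to show the shape of a generic timer interface, not an actual freemware API:

```c
/* Hypothetical generic-timer interface exposed by the VM kernel module.
 * All names, numbers and layouts here are illustrative only. */
#include <stdint.h>
#include <sys/ioctl.h>

/* A user-level device model (PIT, RTC, ...) requests a timed
 * interrupt from the kernel module by filling in one of these. */
struct vm_timer_req {
    uint64_t delta_us;  /* fire this many guest-microseconds from now */
    uint32_t irq;       /* guest IRQ line to raise when it fires */
    uint32_t periodic;  /* 0 = one-shot, nonzero = re-arm automatically */
};

#define VM_TIMER_SET    _IOW('v', 0, struct vm_timer_req)
#define VM_TIMER_CANCEL _IOW('v', 1, uint32_t)

/* Helper a PIT model might use: build a periodic tick on IRQ 0.
 * The real call would then be: ioctl(vm_fd, VM_TIMER_SET, &req); */
static inline struct vm_timer_req pit_tick(uint64_t period_us)
{
    struct vm_timer_req req = { period_us, 0, 1 };
    return req;
}
```

This way the PIT and RTC models live entirely in user space; the kernel module only needs to know "raise IRQ n after t guest-microseconds", and all the stretching policy stays in one place.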
> Certain guest OSes will likely handle being stretched more
> than others. Linux will probably respond one heck of a lot
> better than Windows to this. But if you are running 5 ray
> tracing programs on your machine, and it is also the server
> for Slashdot, you better have some intelligence in the
> stretching that can give up on synchronizing one or more
> past host OS timeframes, and then begins synchronizing on
> future ones. A user would see the manifestation of this kind
> of technique as a window of slowdown of video frames, for example,
> followed by normal displaying of the next ones, if there was
> say a sharp burst of host activity that subsided thereafter.
> What we should not do, is insist we keep up, if we continuously
> fall behind.
Okay, I agree with you. If you run a system that's so heavily loaded,
you can't expect realtime performance :)
I do like the idea that, instead of specifying an aggression
factor, the monitor code automatically adjusts the timing
aggression to match processor speed and system load. If we can
get that to work reliably, then we have a system that performs
optimally in every situation.
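Something along these lines, maybe - the formula and the thresholds below are completely made up, it's just to show the feedback idea:

```c
/* Sketch: derive the timing-aggression factor automatically from
 * host load instead of a fixed user knob.  Invented for illustration.
 *
 * load: host load normalized so 1.0 means "one full CPU busy".
 * Returns an aggression in [0,1]:
 *   1.0 = chase the host wall-clock hard,
 *   0.0 = don't stretch at all (most accurate emulation). */
static double auto_aggression(double load)
{
    double a = 1.0 - (load - 1.0);   /* back off as load exceeds one CPU */
    if (a > 1.0) a = 1.0;
    if (a < 0.0) a = 0.0;
    return a;
}
```

On a lightly loaded host this stays at full aggression; once the host is swamped it smoothly gives up on wall-clock tracking rather than falling further and further behind, which is exactly the "don't insist on keeping up" behaviour you describe.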
> > There's no work about timing in there.
Typo: should be "word"
> > I'll have to dig into the library and look up all those articles
> > that are referenced in the DISCO article... (some nice review
> > articles in there, too.) I'll do that next time I'm at school.
>
> OK.
>
> If you don't find anything, it's really not important anyway;
> I pretty much know how to do it already. If you take
> time reference samples in the host kernel module each time before
> you run the monitor/guest, and the monitor is taking them for each
> block of time bounded by exceptions generated in the guest and before
> it returns to the host, then you have time deltas for each of the
> host and guest, say M and N.
>
> The ratio M/N gives you an idea how much you need to accelerate
> the timing facilities, which distribute time by way of callbacks
> to the device emulation. We also factor in the aggression factor
> given by the user's preference. If we want to attempt wall-clock
> synchronization, then we could get feedback from the CMOS RTC clock
> emulation, finding out how far behind we are, and factoring this
> into the acceleration (stretching). Note that the RTC clock
> can arbitrarily be stopped/started/reprogrammed. When it stops,
> take its weighting out of the picture. When it starts again, put
> it back in. When it's reprogrammed, maintain an offset between its
> new value and the host OS's clock, but everything else is the same.
Yeah, that's pretty much what I was thinking of, too.
However, if I can find research on this subject it can only be
useful... learn from somebody else's experiences :)
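Just to check we mean the same thing, I think the stretching you describe boils down to something like this (names invented; the RTC feedback would just fold extra terms into the ratio):

```c
/* Sketch of the time-stretching computation described above.
 * host_delta_m (M) = host time elapsed, guest_delta_n (N) = guest
 * time delivered, in the same units; 'aggression' in [0,1] weights
 * how hard we chase the host wall-clock.  All names illustrative. */
static double stretch_factor(double host_delta_m, double guest_delta_n,
                             double aggression)
{
    double ratio = host_delta_m / guest_delta_n;  /* how far we lag */
    /* aggression 0 => factor 1.0 (pure, accurate guest timing);
     * aggression 1 => factor M/N (full wall-clock tracking). */
    return 1.0 + aggression * (ratio - 1.0);
}
```

So with aggression at 0 the guest gets undistorted time, which matches your point that 0 *is* the accurate setting.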
Ramon