Ramon van Handel wrote:
> 
> >To have semi accurate delivery of video/sound/etc, you really need to
> >know that user process will be notified (roughly) at a particular time,
> >so using a timer is the only way I can think of to do this properly.
> 
> I guess it depends on which way you look at it... IMO, as the real VGA works
> synchronously, you don't want to deliver video "at a specific time" but rather
> "as soon as it's written to the video memory".  As soon as possible, anyway.
> Sound is a different matter entirely.

How the real VGA works is immaterial.  What matters is how to display
data correctly to the user, with reasonable performance.

We certainly don't want to jam more data at X Windows every time a byte
is written to the framebuffer.  And if we instead update, say, every
time the monitor returns from an interrupt to the host kernel driver,
we may run into the undersample/oversample problem again, depending
on other host OS activity.

However, since we will receive periodic control in the monitor
due to exceptions and host interrupts, and can sample the TSC at
those points, we could use monitor-based timing while the
monitor/guest is running, and host-timer-based timing while it's
not, to derive timing for the components of device emulation
which require periodic action.
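The TSC-sampling half of that scheme might look something like the
sketch below (the calibration value and function names are mine, not
FreeMWare code; a real monitor would calibrate cycles-per-microsecond
at startup rather than hard-coding it):

```c
#include <stdint.h>

/* Hypothetical calibration value: TSC cycles per microsecond on the
 * host CPU.  A real monitor would measure this against a host clock
 * at startup; 500 here stands in for a 500 MHz host. */
static uint64_t cycles_per_usec = 500;

/* Read the x86 time-stamp counter (RDTSC). */
static inline uint64_t read_tsc(void)
{
#if defined(__i386__) || defined(__x86_64__)
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
#else
    return 0;  /* non-x86 hosts would need a different counter */
#endif
}

/* Convert a TSC delta to elapsed microseconds. */
static uint64_t tsc_delta_to_usec(uint64_t start, uint64_t end)
{
    return (end - start) / cycles_per_usec;
}
```

Sampling the TSC at each exception or interrupt, then converting the
delta, gives the guest-side component of the hybrid clock.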

For certain components, such as the video emulation, if they
are offloaded into a thread of their own, then perhaps we
won't need a signal()-based timer, since we could make a blocking
call to X to receive the next keyboard/mouse event.
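The thread-per-device idea can be sketched with POSIX threads.  In
this sketch a pipe stands in for the X connection, and the blocking
read() plays the role XNextEvent() would play; the names and event
encoding are illustrative, not FreeMWare code:

```c
#include <pthread.h>
#include <unistd.h>

static int event_pipe[2];            /* stand-in for the X connection */
static volatile int last_event = -1; /* most recent event delivered   */

/* The video/input thread: it blocks until the next "event" arrives,
 * so no signal()-based timer is needed to poll for input. */
static void *video_thread(void *arg)
{
    unsigned char ev;
    (void)arg;
    while (read(event_pipe[0], &ev, 1) == 1) {
        if (ev == 0)        /* 0 means "shut down" in this sketch */
            break;
        last_event = ev;    /* a real thread would hand this to the
                               keyboard/mouse emulation */
    }
    return 0;
}
```

The thread consumes no CPU while idle, which is exactly the property
that makes a timer unnecessary for this component.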

But for sound, you need to deliver packets on certain time boundaries.
Other devices may need this as well, so we need the infrastructure
in there.


> >100hz is too frequent for now.  Though, if we make it a configurable
> >option, then you could use it.  I saw mention of people interested
> >in changing the Linux timer from 100Hz to 1000Hz,
> 
> (offtopic) I think this is a bad idea --- there's a HUGE amount of overhead
> associated with that.  The scheduler I wrote uses variably-spaced timer
> ticks (using mode 0 of the timer), but that's probably not suitable for
> linux use.

True, but the idea was that as machines become more capable, certain
users may find it worth trading more overhead for finer-grained
timing.  I'm not saying you'll want to do this; just that it is a
possibility, and thus someone will certainly find a use for it
in FreeMWare.  Better to make things configurable and let other
people decide how to abuse themselves.

> >Just keep in mind, the way things are currently done is a hack, since we
> >don't support the IO mapped frame buffer yet.
> 
> Uhhhhh... what are you referring to ???

The memory range 0xA0000 .. 0xBFFFF is memory-mapped IO.  With such
memory, you cannot expect that what you write into it is what you will
read out of it, because it could represent anything: IO ports, for
instance, or, in the case of the VGA in planar mode, weird latching
which operates on multiple bytes at a time.

Thus we would need to virtualize IO memory mapped ranges, and redirect
IO ins and outs to the VGA hardware emulation, to do this correctly.
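The virtualized-range idea amounts to routing every guest physical
access through a dispatcher.  A minimal sketch, with hypothetical
vga_mem_read/vga_mem_write hooks standing in for the real VGA core
(which would implement the planar latching rather than a plain store):

```c
#include <stdint.h>

#define VGA_MMIO_BASE 0xA0000u
#define VGA_MMIO_TOP  0xBFFFFu

static uint8_t guest_ram[0x100000]; /* 1 MB of ordinary guest memory */
static uint8_t vga_stub;            /* stand-in for VGA emulation state */

/* Hypothetical hooks into the VGA core: in planar modes this is where
 * the latch/plane logic would run instead of a simple byte store. */
static void vga_mem_write(uint32_t addr, uint8_t val)
{
    (void)addr;
    vga_stub = val;      /* placeholder for real latching behavior */
}

static uint8_t vga_mem_read(uint32_t addr)
{
    (void)addr;
    return vga_stub;     /* reads need not return what was written */
}

/* Every guest physical access goes through here, so the
 * 0xA0000..0xBFFFF window is never treated as ordinary RAM. */
static void phys_write(uint32_t addr, uint8_t val)
{
    if (addr >= VGA_MMIO_BASE && addr <= VGA_MMIO_TOP)
        vga_mem_write(addr, val);
    else
        guest_ram[addr] = val;
}

static uint8_t phys_read(uint32_t addr)
{
    if (addr >= VGA_MMIO_BASE && addr <= VGA_MMIO_TOP)
        return vga_mem_read(addr);
    return guest_ram[addr];
}
```

The same dispatch point would also be where in/out instructions get
redirected to the VGA port-IO emulation.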

Currently, we assume the video card is always in text mode, in which
case the frame buffer is what-you-write-is-what-you-read, so we can
treat it like regular memory.  This is a hack.  So is programming the
VGA card with IO ins/outs from a previous boot into DOS (from bochs).
But it does the job for now.

-Kevin
