Kevin Lawton wrote:
>
> Ramon van Handel wrote:
>
> > About the timers I agree that putting them in the monitor
> > is probably best (it's not such a big shame design-wise,
> > either: newer x86's and clones have built-in timing
> > facilities.) However, why would you want to stick VGA
> > in the monitor ? Or the floppy ? I'd rather keep the
> > design clean --- these devices are not so time-critical
> > that I expect putting them in the monitor will make that
> > much of a difference...
[snip]
> If we can cut out a lot of these steps by moving emulation
> of frequently accessed I/O port devices into the monitor
> domain, then I think we should at some point. We may
> be able to make things so they will compile for either.
> If we have a well defined interface, it should plug into
> either the monitor or host app code, and thus we could
> move stuff around. This would be great to suit the
> needs of both development and non-development situations.
Well, okay. But I still think it's ugly.
Perhaps we could handle it in a different way ? For
instance, I can imagine that we could try to "buffer" I/O.
On an I/O output (outportb) operation, you'd just throw
the I/O data into the buffer, and you'd keep doing that
until (1) there's a read attempt from a device that still
has writes pending in the buffer, or (2) a certain timeout
period elapses (we could make this easier by simply making
the timeout period the end of the quantum.) Then, when
control switches back to the host, the buffer is flushed.
What do you think ?
> Speaking of DMA, that's fundamental enough it may have
> to be implemented at some point in the monitor space.
Why ? A DMA driver in user space can simply mmap() the
relevant part of the guest memory.
[snip RTC stuff, because it all gets down to this:]
> > There's a catch here: if we only count the time that the guest
> > code actually ran, then we're completely out of sync with real
> > time, which isn't good either: I mean, say we virtualise
> > linux on linux. Now if the virtual copy of linux runs a
> > program that executes sleep(1), then we DO want that the
> > actual sleep time somewhat resembles one second, which, if I
> > understood it correctly, your method cannot do.
>
> You're right that my method does not do this, and that is
> the way it should be. It does not matter how long the
> sleep(1) actually takes. Just that it takes 1 second of
> guest execution time, so that everything happens in the
> guest, such as interrupts etc, at exactly the point the
> guest code expects it. The guest has no concept of
> "external" time, other than going through the network
> adapter emulation to the host. For this, you run "rdate".
I very, very, seriously disagree with you. You can't just
do that, because everything happening in the guest code will
appear "slowed down" at the host (that is, user) level. Just
imagine what would happen to things like games running in
the guest context... they'd go much slower than they should,
because they're synchronising to guest time, not real time.
Moreover, the apparent speed will depend on the system
load, because the system load determines how much CPU time
a process gets ! You can't do that.
DOSEMU solves the problem by actually speeding up the guest
time by a certain factor. Because guest time goes faster
than its real runtime, it'll seem to the user that it works
at the right speed, and that's exactly the way it should be.
Please read this explanation from the DOSEMU guys:
http://www.dosemu.org/docs/README-tech/0.99/README-tech-12.html
> > The problem is that we still don't know whether this timer
> > is present on AMD. I fear that it may not be.
>
> Here are the reactions from someone into this stuff:
>
> First off, typical desktop platforms (Intel and AMD) don't support,
> or don't initialize the APIC.
Uh, all Pentiums, except for the very first models, have a LAPIC
on-chip. You just need to turn it on.
> On K6, Cyrix, IDT, etc, based systems: the Local APIC is not
> implemented.
Then hadn't we better drop the idea ? Or at least have a good
alternative ready.
> For details, see Vol3 chapter 7 of the Pentium manual, and MPS
> 1.4 Spec available from the Intel Developer web site.
Yes, I read those specs long ago. Should reread them sometime :)
> As an alternative to the LAPIC and/or PIT timer, at least one OS I
> know of uses IRQ8 for its timer tick.
Really ? Which one ?
Linux has an RTC driver, but it can be turned off if necessary.
Ramon