Here is an idea:
Upon initializing plex86, if "dual-SVGA mode" is enabled, use PCI to
find out which SVGA cards exist that are not in use. We assume there is
at least one SVGA card that is not in use, and that it is presently
deactivated (the BIOS left it that way).
We assume, further, that such a card is found and identified as a card
with a chipset we have written dual-SVGA support drivers for. Then
here's what we do:
Deactivate the current SVGA card and activate the other SVGA card. This
can be done simply by writing to a PCI register. I've done it before; I
learned how to do this from Ralf Brown's home page, which provides PCI
information and source code for a DOS program that lets you use multiple
SVGA cards on a single PC, with only one SVGA card active at a time.
Upon doing this 'switch', the screen will be blank. It is now as if the
newly active card is the only card that exists. Call C000:0003 (the
video BIOS entry point) now, and then use INT 10h to interact with the
newly active card. If you plugged your monitor into the newly active
card, it would be clear that it's working; but all we'll use INT 10h
for is to set an SVGA mode that supports linear framebuffering. This
can be done via the VESA VBE INT 10h functions.
Set the video mode that supports a linear frame buffer and uses the
most video RAM. Do not enable any other acceleration via the BIOS,
however.
Now copy the video card's BIOS (which is in ROM) to RAM. This will
allow us to present it to the guest OS as if it were located where it
is now located, even though it won't be (the original card's BIOS will
be at this physical location).
Map the copied BIOS to an unused physical area of RAM (E0000, for
example, but it doesn't matter where). This will allow the guest to
write to its C0000 physical range without causing a page fault; the
write will be ignored by the hardware and will go, for example, to
E0000 physical. I have selected E0000 arbitrarily, not because all
VGA-compatible video cards must work from ROM at C0000 and E0000; we
could just as well have selected D0000.
Now, via PCI, disable the card's response to port I/O and enable its
response to memory-mapped I/O. Map its I/O ports, via PCI, to an unused
physical memory area (a place where there's no RAM, no other cards, and
where we didn't just map the ROM!). This works like so: at a
16MB-aligned area of physical memory (on the S3; only newer cards can
be supported, I'm afraid), accesses to the first byte act _precisely_
like accesses to port 3D8 (or 3B8 in mono emulation mode) would act.
Offset 1 in memory corresponds to port 3D9, and so on.
Since we have deactivated the card's use of ports 3D8 etc., it seems
at first like we can activate the original SVGA card _without_
deactivating this card.
In that case, the guest would think it's running on a machine where the
second card is active, and it'd be right. But it'd think that the BIOS
of that card is at C0000 when in fact it's at, for example, E0000; and
it'd think it's interacting with port 3D8 when in fact, when it
reads/writes that port, plex86 performs the read/write on its "behalf"
against the area of linear memory where, via PCI, we have mapped the
second card's physical I/O.
Note that this is done in slightly different ways by different
manufacturers, but it's documented well enough to get it to work. Older
S3s won't work, since they demand that their I/O be mapped at 16MB
physical, where we have RAM, or else at another conflicting location.
But we've forgotten two things:
First: PCI is emulated, but to fully allow a guest OS to detect and use
the SVGA card, we'd have to allow some PCI accesses to fall through to
the real second card. We'd also have to fake a few things: namely, we'd
tell the guest that the card's port I/O _is_ enabled (when it really
isn't), and so on.
Second: what about memory I/O?
To avoid conflicts we'd need to disable the card's response to
A0000..BFFFF before activating the original card.
Can accesses to A0000..BFFFF 'physical' by the guest fall through to
the real hardware?
No.
We can, before switching the initial video card back on, call
C000:0003 in vm86 mode, allowing A0000..BFFFF to fall through to the
actual hardware (before deactivating the card's response to the legacy
memory area and to legacy port I/O).
After doing that, we'd set a video mode that uses a linear frame
buffer, without any other acceleration, using as much video RAM as
possible.
Next, we'd use PCI to move the card's linear frame buffer address to a
place not being used, for example, by X windows.
Then, we'd switch off this card and deactivate/reactivate the other
things I mentioned above.
Now, if the guest accesses video memory in the A0000..BFFFF range, we'd
_emulate_ it.
So what's the point?
The point is: when the guest switches to a video mode that uses a
linear frame buffer, it's easy to see that NO video emulation would be
necessary other than redirecting I/O ports.
Now, with the original card active, we'd have two SVGA cards. The
second card would work in VGA mode (requiring only emulation of its
A0000..BFFFF hardware range) and in SVGA mode; in all cases memory I/O
and the BIOS would be redirected, but that's (more or less) trivial.
What if you plug in a monitor to the second video card? What'd you see?
In SVGA mode, you'd see the "guest" screen. In VGA mode, you'd see
nothing at all.
But, once again, what's the point?
The point is that we could 1) support full-screen mode, and 2) on an X
windows box, copy from the second video card's video RAM to the plex86
window.