Jens Nerche wrote:
> Last lines in this article:
> "The use of the segment-descriptor cache is highly implementation specific,
> meaning the behavior and layout of the segment-descriptor cache is dependent
> upon the implementation of the specific microprocessor. Intel doesn't
> guarantee that the behavior of the descriptor cache will remain the same from
> microprocessor to microprocessor. Therefore, it would be foolhardy to write
> any production-quality source code which depends upon this behavior
> (except unreal mode)."
Ah, life on the bleeding edge. If we could #ifdef and if ()
our way around this and it worked on Pentium+ and AMDs, it might
make a good option. Or like some of my other ideas, a really bad
one. :^)
> I think we should not use VME and PVI. Ulrich pointed it
> out:
> >Yet another thing to consider is that the guest OS might want
> >to *use* the PVI and/or VME modes itself ;-) Especially in the
> >case of Windows as guest this is rather likely, as those modes
> >were basically *designed* for Windows
Your point is understood. The question is whether ring3
code even *knows* it is running using these virtualized
interrupt facilities. I plan to look into this more, but
does the CPU include the virtual interrupt flag (VIF) in
the EFLAGS image it pushes when _remaining_ in ring3? If
not, then we may be able to use this, even if the guest
OS is using it.
-Kevin