On Sat, 2007-09-15 at 12:38 +0300, Avi Kivity wrote:
> I don't see why there is a difference.  With mmio, the host tells the
> guest where the ring is.  With dma, the guest tells the host where the
> ring is.  In both cases, you need some form of communication (read-only
> for mmio, write-only for dma).
> 
> For mmio, the mechanism is standardized within pci; for dma it is not,
> but it is still just as simple, write to some word in pci config space
> and you're done.

No, you already need a read-only area, whatever you use.  That's because
you need to describe the features of the device (eg. disk size).
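
Something like this, say (field names and layout invented for
illustration, not the actual lguest config):

	#include <stdint.h>

	/* A page the host fills in before boot and the guest only ever
	 * reads.  Fields are illustrative only. */
	struct guest_dev_config {
		uint32_t device_type;	/* block, net, console, ... */
		uint32_t feature_bits;	/* features the host offers */
		uint64_t disk_size;	/* block device: capacity in sectors */
		uint32_t queue_size;	/* number of ring entries */
	};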

> If early printk can't handle pci, we can provide a pio port that does
> byte-at-a-time output.

It's not that it can't handle PCI, it's that it now needs to find a page
to use.  That's less trivial than using an already-existing page.
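
(The pio console itself is trivially simple on the guest side; roughly,
with the port number 0x500 made up rather than a defined interface:

	/* early printk over a pio port: one outb per character */
	static inline void outb(unsigned char val, unsigned short port)
	{
		asm volatile("outb %0, %1" : : "a"(val), "Nd"(port));
	}

	static void early_puts(const char *s)
	{
		while (*s)
			outb(*s++, 0x500);
	}

The page-finding problem only appears once you route it through PCI.)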

As for making suspend/resume more complex, I can't see it.  Make the
guest memory a few pages bigger, and don't tell the guest about those
extra pages (that's what lguest does today: those mmio pages sit just
above the top of "normal" RAM).
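
The host-side arithmetic is about this much (the names are made up, a
sketch only):

	/* Report less RAM to the guest than the host actually allocates;
	 * the shared pages live in the gap above the reported top of RAM. */
	#define GUEST_PAGE_SIZE	4096UL
	#define EXTRA_PAGES	4

	static unsigned long reported_ram;	/* top of RAM as the guest sees it */
	static unsigned long alloc_size;	/* what the host really maps */

	static void size_guest_memory(unsigned long guest_ram_size)
	{
		reported_ram = guest_ram_size;
		alloc_size   = guest_ram_size + EXTRA_PAGES * GUEST_PAGE_SIZE;
		/* Guest memory map covers [0, reported_ram); shared pages
		 * sit in [reported_ram, alloc_size), so suspend/resume of
		 * normal RAM never touches them. */
	}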

Now, we might want some mmio space for our "kick", rather than a
hypercall, but that's separate from the ring buffers.
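
Either way the guest side of the kick is a one-liner; a sketch, with the
address, trap vector and call number all invented for illustration:

	#include <stdint.h>

	#define KICK_MMIO_ADDR	((volatile uint32_t *)0xfe000000UL)
	#define HCALL_NOTIFY	8	/* placeholder call number */

	/* mmio flavour: the write itself traps out to the host */
	static void kick_mmio(uint32_t queue)
	{
		*KICK_MMIO_ADDR = queue;
	}

	/* hypercall flavour: trap vector and register convention here are
	 * placeholders, not the real lguest ABI */
	static void kick_hcall(uint32_t queue)
	{
		asm volatile("int $0x1f"
			     : : "a"(HCALL_NOTIFY), "d"(queue) : "memory");
	}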

Rusty.

