Sheng Yang wrote:
Yes... It's easy to do for an assigned device's MMIO, but what if the guest assigns a specific memory type to some non-MMIO memory? For example, we hit an issue in Xen where an assigned device's Windows XP driver designated an ordinary memory region as a buffer, changed its memory type, and then did DMA to it.

Mapping only the MMIO space can be a first step, but I guess we could make the assigned memory region's memory type follow the guest's?

With EPT/NPT, we can't, since the memory type is in the guest's pagetable entries, and those are not accessible.
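To make that concrete, here is an illustrative sketch (not KVM code) of where the guest's choice lives: for a 4 KiB x86 page, three bits of the guest PTE form an index into the guest's IA32_PAT MSR, which selects the memory type. With EPT/NPT the hypervisor never walks these guest entries, so it cannot observe this index.

```c
#include <stdint.h>
#include <assert.h>

/* PTE flag bits that select the PAT index for a 4 KiB x86 page.
 * Bit positions are per the x86 architecture; the function itself
 * is only a sketch for illustration. */
#define PTE_PWT (1ull << 3)  /* page write-through */
#define PTE_PCD (1ull << 4)  /* page cache-disable */
#define PTE_PAT (1ull << 7)  /* PAT bit (4 KiB pages) */

/* Build the 3-bit index into the guest's IA32_PAT MSR from a guest
 * PTE.  This is the information the hypervisor cannot see once the
 * guest pagetables are no longer shadowed. */
static unsigned pat_index(uint64_t pte)
{
    unsigned idx = 0;
    if (pte & PTE_PWT) idx |= 1;
    if (pte & PTE_PCD) idx |= 2;
    if (pte & PTE_PAT) idx |= 4;
    return idx;  /* selects one of the 8 IA32_PAT entries */
}
```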

Looks like a conflict between the requirements of a hypervisor supporting device assignment and the constraint of mapping all guest memory with a single memory type. As far as I can see, the only solution is not to map guest memory in the hypervisor at all, and to do all accesses via DMA. That's easy for virtual disk, somewhat harder for virtual networking (it needs a DMA engine or a multiqueue device).

Since qemu will only access memory on demand, we don't actually have to unmap guest memory, only to ensure that qemu doesn't touch it. Things like live migration and page sharing won't work, but they aren't expected to with device assignment anyway.

--
error compiling committee.c: too many arguments to function

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
