On 2020/2/12 4:53 PM, Jason Wang wrote:
On 2020/2/12 4:18 PM, Michael S. Tsirkin wrote:
On Wed, Feb 12, 2020 at 11:39:54AM +0800, Jason Wang wrote:
On 2020/2/11 7:33 PM, Michael S. Tsirkin wrote:
On Mon, Feb 10, 2020 at 05:05:17PM +0800, Zha Bin wrote:
From: Liu Jiang <ge...@linux.alibaba.com>

The standard virtio-mmio device uses a single notification register to
signal the backend. This causes vmexits and hurts performance when we
pass virtio-mmio devices through to guest virtual machines. We proposed
updating the virtio-over-MMIO spec to add the per-queue notify feature
VIRTIO_F_MMIO_NOTIFICATION [1], which lets the VMM configure the
notification location for each queue.

[1] https://lkml.org/lkml/2020/1/21/31
Signed-off-by: Liu Jiang <ge...@linux.alibaba.com>
Co-developed-by: Zha Bin <zha...@linux.alibaba.com>
Signed-off-by: Zha Bin <zha...@linux.alibaba.com>
Co-developed-by: Jing Liu <jing2....@linux.intel.com>
Signed-off-by: Jing Liu <jing2....@linux.intel.com>
Co-developed-by: Chao Peng <chao.p.p...@linux.intel.com>
Signed-off-by: Chao Peng <chao.p.p...@linux.intel.com>
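As a minimal sketch of the driver side of this feature, assuming the
base/multiplier semantics described in [1] (all names below are
illustrative, not the actual patch code):

#include <linux/io.h>
#include <linux/types.h>

struct vm_device {
	void __iomem *base;	/* virtio-mmio register window */
	u32 notify_base;	/* doorbell base offset, read from the device */
	u32 notify_multiplier;	/* stride between per-queue doorbells */
};

/* Kick queue 'index' by writing to its private doorbell. Because every
 * queue gets its own address, a VMM can register a separate ioeventfd
 * per queue instead of trapping one shared QueueNotify register. */
static bool vm_notify(struct vm_device *vm_dev, u32 index)
{
	writel(index, vm_dev->base + vm_dev->notify_base +
		      index * vm_dev->notify_multiplier);
	return true;
}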
Hmm. Any way to make this static so we don't need
base and multiplier?
E.g. a page per vq?
Thanks
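For contrast, the static alternative suggested above could look
something like this hypothetical layout: no base/multiplier registers
at all, but the stride becomes implicitly tied to PAGE_SIZE, which is
exactly the question raised next.

/* Hypothetical "page per vq" layout: queue N's doorbell simply lives
 * one page past queue N-1's, with nothing negotiated. The open
 * question is whose PAGE_SIZE this is, the guest's or the host's. */
#define VM_QUEUE_NOTIFY_OFFSET(n)	((n) * PAGE_SIZE)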
Problem is: is the page size well defined enough? Are there cases
where guest and host page sizes differ? I suspect there might be
(arm64, for example, supports 4K, 16K, and 64K pages, so a 64K-page
host can run a 4K-page guest).
Right, so it looks better to keep the base and multiplier, e.g. for
vDPA, where the device dictates its own doorbell layout.
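For comparison (a sketch, not from this thread): the virtio 1.x PCI
transport already avoids page-size assumptions by advertising an
explicit multiplier in its notify capability, with a queue's doorbell
at cap.offset + queue_notify_off * notify_off_multiplier:

#include <linux/types.h>

/* Layout per the virtio spec (see also include/uapi/linux/virtio_pci.h). */
struct virtio_pci_cap {
	__u8  cap_vndr;		/* PCI_CAP_ID_VNDR */
	__u8  cap_next;		/* next capability pointer */
	__u8  cap_len;		/* capability length */
	__u8  cfg_type;		/* VIRTIO_PCI_CAP_NOTIFY_CFG for this one */
	__u8  bar;		/* which BAR holds the notify area */
	__u8  padding[3];
	__le32 offset;		/* offset of the notify area within the BAR */
	__le32 length;		/* length of the notify area */
};

struct virtio_pci_notify_cap {
	struct virtio_pci_cap cap;
	__le32 notify_off_multiplier;	/* stride between queue doorbells */
};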
But I also think this whole patch is unproven. Is someone actually
working on QEMU code to support pass-through of virtio-pci
as virtio-mmio for nested guests? What's the performance
gain like?
I don't know.
Thanks
Btw, I think there's no need for a nested environment to test this: the
current eventfd hook to MSI-X should still work for MMIO.
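A sketch of the existing plumbing being referred to, from the VMM
userspace side (the function name and address handling are made up):
binding an eventfd to the per-queue doorbell address via KVM_IOEVENTFD
means a guest kick signals the eventfd without a userspace exit, and
that eventfd can then be handed to vhost or wired to an irqfd exactly
as is done for virtio-pci today.

#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>

/* Bind an eventfd to one queue's MMIO doorbell; returns the eventfd,
 * or -1 on error. Any 4-byte guest write to doorbell_gpa then just
 * signals the eventfd instead of exiting to userspace. */
static int hook_queue_kick(int vm_fd, uint64_t doorbell_gpa)
{
	struct kvm_ioeventfd kick;
	int efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);

	if (efd < 0)
		return -1;

	memset(&kick, 0, sizeof(kick));
	kick.addr = doorbell_gpa;	/* per-queue notify address */
	kick.len  = 4;			/* matches the 32-bit doorbell write */
	kick.fd   = efd;		/* no DATAMATCH: any write is a kick */

	if (ioctl(vm_fd, KVM_IOEVENTFD, &kick) < 0)
		return -1;

	return efd;	/* poll it, or hand it to vhost / an irqfd */
}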
Thanks