On Fri, 2011-12-16 at 15:40 +0800, Zang Hongyong wrote:
> On Fri, 2011/12/16 at 15:05, Sasha Levin wrote:
> > On Fri, 2011-12-16 at 13:32 +0800, zanghongy...@huawei.com wrote:
> >> From: Hongyong Zang<zanghongy...@huawei.com>
> >>
> >> Vhost-net uses its own vhost_memory, built from user-space (qemu)
> >> information, to translate GPAs to HVAs. Since the kernel's kvm structure
> >> already maintains this address mapping in its *kvm_memslots* member,
> >> these patches use the kernel's kvm_memslots directly, without the need
> >> to initialize and maintain vhost_memory.
> > Conceptually, vhost isn't aware of KVM - it's just a driver which moves
> > data from a vq to a tap device and back. You can't simply add KVM-specific
> > code into vhost.
> >
> > What's the performance benefit?
> >
> But vhost-net is only used in virtualization scenarios, and vhost_memory
> is maintained by user-space qemu.
> This way, the memory mapping can be acquired from the kernel without
> qemu having to maintain vhost_memory.

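For context, the table in question is the one userspace programs with
VHOST_SET_MEM_TABLE, and the GPA-to-HVA lookup vhost does with it is just
a linear walk over user-supplied regions. A minimal sketch of that lookup
(it reuses the uapi structs from <linux/vhost.h>, but it's an illustration
of the mechanism, not the actual vhost code):

#include <stdint.h>
#include <linux/vhost.h>

/* Walk the user-supplied regions and translate one guest-physical
 * address; returns 0 if no region covers the GPA (vhost would fail
 * the access in that case). */
static uint64_t gpa_to_hva(const struct vhost_memory *mem, uint64_t gpa)
{
	uint32_t i;

	for (i = 0; i < mem->nregions; i++) {
		const struct vhost_memory_region *r = &mem->regions[i];

		if (gpa >= r->guest_phys_addr &&
		    gpa - r->guest_phys_addr < r->memory_size)
			return r->userspace_addr + (gpa - r->guest_phys_addr);
	}
	return 0;
}

Nothing in that walk knows or cares that the regions happen to describe
a KVM guest's memory; they're just ranges of some user process's address
space.
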
You can't assume that vhost-* is used only along with qemu/kvm. Just as
virtio has more uses than just virtualization (here's one:
https://lkml.org/lkml/2011/10/25/139 ), there are more uses for vhost as
well.

There has been a great deal of effort to keep vhost and kvm untangled.
One example is the memory translation it has to do; another is the
eventfd/irqfd mechanism it uses just so it can signal an IRQ in the
guest instead of accessing the guest directly.

If you do see a significant performance increase from tying vhost and
KVM together, it may be worth creating some sort of in-kernel vhost-kvm
bridge, but if the gain isn't noticeable we're better off leaving it as
is and keeping the vhost code general.

-- 
Sasha.
