On Thu, May 21, 2020 at 8:40 PM Stefan Hajnoczi <stefa...@gmail.com> wrote:
>
> On Sat, May 09, 2020 at 12:32:14AM +0800, Cindy Lu wrote:
> > From: Tiwei Bie <tiwei....@intel.com>
> >
> > Currently we have 2 types of vhost backends in QEMU: vhost kernel and
> > vhost-user. The above patch provides a generic device for vDPA purposes;
> > this vDPA device exposes to user space a non-vendor-specific configuration
> > interface for setting up a vhost HW accelerator. This patch set introduces
> > a third vhost backend called vhost-vdpa based on the vDPA interface.
> >
> > Vhost-vdpa usage:
> >
> >   qemu-system-x86_64 -cpu host -enable-kvm \
> >     ...... \
> >     -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-id,id=vhost-vdpa0 \
> >     -device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on \
>
> I haven't looked at vDPA in depth. What is different here compared to
> the existing vhost-backend.c kernel backend?
>
> It seems to be making the same ioctl calls, so I wonder if it makes
> sense to share the vhost-backend.c kernel code?
>
> Stefan

Hi Stefan,

Sorry for the late reply, and thanks for these suggestions.

I think the main difference between the vhost kernel backend and vhost-vdpa
is that vhost-vdpa depends on real hardware. The point is that a vDPA device
works as a virtio device toward the guest, but vhost-vdpa must present it as
a vhost-like device in QEMU's vhost layer. The ioctl calls are similar to
vhost-backend.c today, but as more and more NICs gain vDPA support, the
differences between vhost-backend.c and vhost-vdpa will keep growing.
Sharing the kernel backend code would therefore make the code more
complicated over time.
Thanks,
Cindy