Hi,

Could you please tell me if there has been any more work on
virtio-vhost-user or vhost-pci? The last messages I could find are from
January 2018, in this thread [1], and from what I can see the latest
QEMU code does not include either of them.

I am currently running multiple VMs, connected to one another by the DPDK
vhost-switch sample application. VMs can start, reboot, and shut down at
any time, so much of this is dynamic, and the vhost-switch handles all of
it. These VMs are therefore some sort of "endpoints" (I could not find a
better name).

The code that runs on the VM endpoints is closely tied to the vhost-switch
code, and if I change something on the VM side that breaks compatibility,
I need to recompile and restart the vhost-switch. The problem is that most
of the time I forget to update the vhost-switch, and then I run into
problems caused by the version mismatch.

If I could use a VM as the vhost-switch instead of the DPDK app, then I
could extend my endpoint code, which already runs in the VMs, so that it
also acts as a switch and forwards packets between the other VMs, just
like the current DPDK switch does. That would let me catch this
out-of-sync between the endpoint code and the switch code at compile
time, since both would be part of the same application.

This would be a two-phase process: first run the DPDK vhost-switch inside
a guest VM, then move the TX/RX part into my own app.

Both QEMU and the DPDK app speak "vhost-user". I was happy to see that I
can start QEMU in vhost-user server mode:

    <interface type='vhostuser'>
      <mac address='52:54:00:9c:3a:e3'/>
      <source type='unix' path='/home/cosmin/vsocket.server' mode='server'/>
      <model type='virtio'/>
      <driver queues='2'>
        <host mrg_rxbuf='on'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>

This translates to the following QEMU arguments:

-chardev socket,id=charnet1,path=/home/cosmin/vsocket.server,server \
-netdev type=vhost-user,id=hostnet1,chardev=charnet1,queues=2 \
-device virtio-net-pci,mrg_rxbuf=on,mq=on,vectors=6,netdev=hostnet1,id=net1,mac=52:54:00:9c:3a:e3,bus=pci.0,addr=0x4

But at this point QEMU will not boot the VM until a vhost-user client
connects to it. I even tried adding the "nowait" argument to the chardev,
but QEMU still waits. This will not work in my case, as the endpoint VMs
can start and stop at any time, and I don't even know in advance how many
network interfaces the endpoint VMs will have.
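
For completeness: the wait can be satisfied by connecting any vhost-user
backend as a client, e.g. DPDK testpmd with the vhost PMD in client
mode. A sketch, with an illustrative core list and assuming a reasonably
recent DPDK:

    ./testpmd -l 0-1 -n 4 --no-pci \
        --vdev 'net_vhost0,iface=/home/cosmin/vsocket.server,client=1' -- -i

But this is exactly the kind of per-interface, pre-arranged setup I am
trying to avoid.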

I then found the virtio-vhost-user transport [2], and was thinking that I
could at least start the packet-switching VM and let the DPDK app inside
it do the packet forwarding. But from what I understand, this creates a
single network interface inside the VM to which the DPDK app can bind.
The limitation here is that if another VM wants to connect to the
packet-switching VM, I need to manually add another virtio-vhost-user-pci
device (and a new vhost-user.sock) before the packet-switching VM starts,
so this is not dynamic.
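
Concretely, for each peer VM the switch VM would need another
chardev/device pair along the lines of the wiki example [2], added
before boot:

    -chardev socket,id=chardev0,path=vhost-user.sock,server,nowait \
    -device virtio-vhost-user-pci,chardev=chardev0

with a distinct chardev id and socket path per peer.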

The second approach would be to use vhost-pci [3]. I could not fully
understand how it works, but I think it presents a network interface to
the guest kernel after another VM connects to the first one.

I realize this has become a long story and probably doesn't make too much
sense, but one more thing. The ideal solution for me would be a
combination of the vhost-user socket and the vhost-pci approach: QEMU
starts the VM, and the socket waits in the background for vhost-user
connections. When a new connection is established, QEMU should create a
hot-pluggable PCI network card, and either the guest kernel or the DPDK
app inside the guest would handle the vhost-user messages.
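
Expressed with today's monitor commands, the flow I imagine for each
incoming connection would look roughly like this (purely hypothetical:
the ids, socket path and MAC are made up, and these steps would have to
be triggered by the connection itself instead of typed by hand):

    (qemu) chardev-add socket,id=charnet2,path=/home/cosmin/vsocket2,server,nowait
    (qemu) netdev_add vhost-user,id=hostnet2,chardev=charnet2
    (qemu) device_add virtio-net-pci,netdev=hostnet2,id=net2,mac=52:54:00:9c:3a:e4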

Any feedback is welcome, and I really appreciate all your work :)

Cosmin.

[1] https://lists.nongnu.org/archive/html/qemu-devel/2018-01/msg04806.html
[2] https://wiki.qemu.org/Features/VirtioVhostUser
[3] https://github.com/wei-w-wang/vhost-pci
