Hi Maxime,

I don't know which version Ladi uses; we use OVS v2.6.1.

BR,
Zoltan

On 11/29/2017 02:06 PM, Maxime Coquelin wrote:


On 11/29/2017 11:42 AM, Ladi Prosek wrote:
On Wed, Nov 29, 2017 at 9:57 AM, Maxime Coquelin
<maxime.coque...@redhat.com> wrote:
Hi Ladi,

Sorry for the late reply.

On 11/27/2017 05:01 PM, Ladi Prosek wrote:

I think I understand what's going on. DPDK simply won't consider the
interface 'ready' until after all queues have been initialized.

http://dpdk.org/browse/dpdk/tree/lib/librte_vhost/vhost_user.c#n713
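
Roughly, the check loops over every virtqueue DPDK is tracking.
Here's a paraphrased sketch with illustrative types (not the exact
DPDK code):

struct vq { void *desc, *avail, *used; };

/* Paraphrase of virtio_is_ready(): the device is only reported ready
 * once every tracked queue has all three rings set up by the guest.
 * If the tracked count covers all configured queues, queues iPXE
 * never initializes would keep this returning 0 under mq=on. */
static int virtio_is_ready(struct vq **vq, unsigned nr_tracked)
{
    for (unsigned i = 0; i < nr_tracked; i++) {
        if (!vq[i] || !vq[i]->desc || !vq[i]->avail || !vq[i]->used)
            return 0; /* at least one queue still uninitialized */
    }
    return 1;
}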

It looks like Maxime is the right person to bug about this. One of his
recent commits appears to be somewhat related:
http://dpdk.org/browse/dpdk/commit/?id=eefac9536a

Maxime, iPXE has a simple virtio-net driver that never negotiates the
VIRTIO_NET_F_MQ feature and never initializes more than the first
rx/tx queue pair.
This makes it incompatible with vhost-user configured with mq=on, as
Rafael and Zoltan have discovered.

Is there any chance DPDK could be made aware of whether the guest
driver acked the VIRTIO_NET_F_MQ feature bit, and fall back to
operating with a single queue pair when it was not acked? There's
some context below in this email. I can provide instructions on how
to build iPXE and launch QEMU to test this if you're interested.
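
Something along these lines is what I have in mind. This is a
hypothetical helper, not an existing DPDK function, and the names and
parameters are illustrative only:

#include <stdint.h>

#define VIRTIO_NET_F_MQ 22 /* multiqueue feature bit from the virtio spec */

/* Hypothetical: derive how many vrings the host should wait for from
 * the feature set the guest actually acked. Without VIRTIO_NET_F_MQ
 * the guest will only ever initialize a single rx/tx pair. */
static inline uint32_t expected_nr_vrings(uint64_t guest_features,
                                          uint32_t configured_queue_pairs)
{
    if (guest_features & (1ULL << VIRTIO_NET_F_MQ))
        return configured_queue_pairs * 2; /* rx + tx per pair */
    return 2; /* single queue pair when MQ was not acked */
}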


I think I get your problem. I'm interested in instructions to reproduce
the issue.

Here it is; let me know if you run into any issues:

$ git clone git://git.ipxe.org/ipxe.git
$ cd ipxe/src
$ make bin/1af41000.rom DEBUG=virtio-net:2
$ ln -s bin/1af41000.rom efi-virtio.rom

Then run QEMU without changing the current directory (i.e. it should
still be .../ipxe/src, so QEMU picks up the just-built efi-virtio.rom):

qemu-system-x86_64 \
-machine pc,accel=kvm -m 128M -boot strict=on -device cirrus-vga \
-monitor stdio \
-object memory-backend-file,id=mem,size=128M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem \
-chardev socket,id=char1,path=/var/run/openvswitch/vhost-user0 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,bootindex=0

You'll see a bunch of "enqueuing iobuf" debug messages on the screen,
followed by at least "tx complete", and possibly also "rx complete"
depending on what /var/run/openvswitch/vhost-user0 is connected to.

Now if you enable multiqueue by replacing the last two lines with:

-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce,queues=16 \
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mq=on,vectors=34,bootindex=0

you'll see only "enqueuing iobuf" without any completion, indicating
that the host is not processing packets placed in the tx virtqueue by
iPXE.


Thanks, I just tested with DPDK v16.11 and v17.11 using testpmd
instead of OVS.

In my case, the packets sent by iPXE are received correctly both with
and without mq=on.

I don't think there is an issue with the virtio_is_ready() code you
mentioned. Indeed, nr_vrings gets incremented only when receiving
vhost-user protocol requests for a new ring. This code has changed
between v16.11 and v17.11, but the idea remains the same.

In the case of iPXE, it only sends requests for queues 0 and 1, so
nr_vrings is two.
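
Simplified, with illustrative types rather than the actual DPDK
structures, the accounting looks like this:

struct dev { unsigned nr_vring; /* queue allocation omitted */ };

/* Sketch: each VHOST_USER_SET_VRING_* message that names a queue
 * index not seen before allocates it and bumps nr_vring. iPXE only
 * ever references indexes 0 and 1, so nr_vring stays at two no
 * matter how many queues QEMU was configured with. */
static void note_vring(struct dev *d, unsigned idx)
{
    if (idx >= d->nr_vring)
        d->nr_vring = idx + 1;
}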

I will try to reproduce your issue with OVS. What OVS version are you
using?

Thanks,
Maxime
Thanks!
Ladi

