Hi Mechthild,

I've tried to reproduce this issue on my setup (Fedora 22, kernel 4.1.8) but 
have not been able to.

A few questions to help the investigation:

1. Are you running 1 or 2 VMs in the setup (i.e. 1 VM with 2 vhost-user ports, 
or 2 VMs with 1 vhost-user port each)?
2. What parameters are being used to launch the VM(s) attached to the 
vhost-user ports?
3. Inside the VM, are the interfaces bound to igb_uio (i.e. used by a DPDK app 
inside the guest), or are they used as kernel devices inside the VM?
4. What parameters are you launching OVS with?
5. Can you provide an OVS log? (Example commands for gathering 4 and 5 are 
sketched below.)
6. Can you confirm the DPDK version you are using on the host and, if 
applicable, in the VM?
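
For 4 and 5, something along these lines usually captures what's needed (the 
log path below is the common default and may differ on your install):

  ps -ef | grep ovs-vswitchd       # command line / DPDK args OVS was started with
  ovs-vswitchd --version           # exact OVS build
  tail -n 200 /var/log/openvswitch/ovs-vswitchd.log   # default ovs-vswitchd log location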

Thanks
Ian

> From: discuss [mailto:discuss-boun...@openvswitch.org] On Behalf Of Mechthild 
> Buescher
> Sent: Wednesday, July 06, 2016 1:54 PM
> To: b...@openvswitch.org
> Subject: [ovs-discuss] bug in ovs-vswitchd?!
> 
> Hi all,
> 
> we are using OVS with dpdk interfaces and vhostuser interfaces and want to 
> try the VMs with different multi-queue settings. When we specify 2 cores and 
> 2 queues for a dpdk interface but only one queue for the vhost interfaces, 
> ovs-vswitchd crashes when the VM starts (or at the latest when traffic is 
> sent).
>
> The OVS version is 2.5.90 (master branch, latest commit 
> 7a15be69b00fe8f66a3f3929434b39676f325a7a).
> It has been built and is running on: Linux version 3.13.0-87-generic 
> (buildd@lgw01-25) (gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) ) 
> #133-Ubuntu SMP Tue May 24 18:32:09 UTC 2016
>
> The configuration is:
> ovs-vsctl show
> 0e191ed4-040b-458c-bad8-feb6f7c90e3a
>     Bridge br-prv
>         Port br-prv
>             Interface br-prv
>                 type: internal
>         Port "dpdk0"
>             Interface "dpdk0"
>                 type: dpdk
>                 options: {n_rxq="2"}
>     Bridge br-int
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "vhost112"
>             Interface "vhost112"
>                 type: dpdkvhostuser
>                 options: {n_rxq="1"}
>         Port "vhost111"
>             Interface "vhost111"
>                 type: dpdkvhostuser
>                 options: {n_rxq="1"}
>         Port "vxlan0"
>             Interface "vxlan0"
>                 type: vxlan
>                 options: {key=flow, remote_ip="10.1.2.2"}
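>
> (For reference, a configuration like the one above is typically produced with 
> commands roughly like the following; the PMD core mask is an illustrative 
> value matching cores 1 and 2 from the pmd-rxq-show output below:)
>
> # two PMD cores: cores 1 and 2 -> hex mask 0x6
> ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=6
> # two rx queues on the physical DPDK port, one rx queue per vhost-user port
> ovs-vsctl set Interface dpdk0 options:n_rxq=2
> ovs-vsctl set Interface vhost111 options:n_rxq=1
> ovs-vsctl set Interface vhost112 options:n_rxq=1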
>
> ovs-appctl dpif-netdev/pmd-rxq-show
> pmd thread numa_id 0 core_id 1:
>                 port: vhost112   queue-id: 0
>                 port: dpdk0        queue-id: 1
> pmd thread numa_id 0 core_id 2:
>                 port: dpdk0        queue-id: 0
>                 port: vhost111   queue-id: 0
>
> ovs-appctl dpif/show
> br-int:
>     br-int 65534/6: (tap)
>     vhost111 11/3: (dpdkvhostuser: configured_rx_queues=1,
>         configured_tx_queues=1, requested_rx_queues=1, requested_tx_queues=21)
>     vhost112 12/5: (dpdkvhostuser: configured_rx_queues=1,
>         configured_tx_queues=1, requested_rx_queues=1, requested_tx_queues=21)
>     vxlan0 100/4: (vxlan: key=flow, remote_ip=10.1.2.2)
> br-prv:
>     br-prv 65534/1: (tap)
>     dpdk0 1/2: (dpdk: configured_rx_queues=2, configured_tx_queues=21,
>         requested_rx_queues=2, requested_tx_queues=21)
> 
>  (gdb) bt
> #0  0x00000000005356e4 in ixgbe_xmit_pkts_vec ()
> #1  0x00000000006df384 in rte_eth_tx_burst (nb_pkts=<optimized out>, 
> tx_pkts=<optimized out>, queue_id=1, port_id=<optimized out>)
>     at /opt/dpdk-16.04/x86_64-native-linuxapp-gcc//include/rte_ethdev.h:2791
> #2  dpdk_queue_flush__ (qid=<optimized out>, dev=<optimized out>) at 
> lib/netdev-dpdk.c:1099
> #3  dpdk_queue_flush (qid=<optimized out>, dev=<optimized out>) at 
> lib/netdev-dpdk.c:1133
> #4  netdev_dpdk_rxq_recv (rxq=0x7fbe127ad4c0, packets=0x7fc26761e408, 
> c=0x7fc26761e400) at lib/netdev-dpdk.c:1312
> #5  0x000000000061be98 in netdev_rxq_recv (rx=<optimized out>, 
> batch=batch@entry=0x7fc26761e400) at lib/netdev.c:628
> #6  0x00000000005f17bb in dp_netdev_process_rxq_port 
> (pmd=pmd@entry=0x29ea810, rxq=<optimized out>, port=<optimized out>, 
> port=<optimized out>)
>     at lib/dpif-netdev.c:2619
> #7  0x00000000005f1b27 in pmd_thread_main (f_=0x29ea810) at 
> lib/dpif-netdev.c:2864
> #8  0x000000000067dde4 in ovsthread_wrapper (aux_=<optimized out>) at 
> lib/ovs-thread.c:342
> #9  0x00007fc26b90e184 in start_thread (arg=0x7fc26761f700) at 
> pthread_create.c:312
> #10 0x00007fc26af2237d in clone () at 
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
> 
> This is the minimal configuration that leads to the fault; our complete 
> configuration contains more vhostuser interfaces. We observed that only the 
> combination of 2 cores/queues for the dpdk interface and 1 queue for the 
> vhostuser interfaces results in an ovs-vswitchd crash. In detail:
> dpdk0: 1 core/queue   & all vhost ports: 1 queue  => successful
> dpdk0: 2 cores/queues & all vhost ports: 1 queue  => crash
> dpdk0: 2 cores/queues & all vhost ports: 2 queues => successful
> dpdk0: 4 cores/queues & all vhost ports: 1 queue  => successful
> dpdk0: 4 cores/queues & all vhost ports: 2 queues => successful
> dpdk0: 4 cores/queues & all vhost ports: 4 queues => successful
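>
> (For illustration, each row above corresponds only to changing the PMD core 
> mask and the per-interface n_rxq values; e.g. the 4-cores / 2-queues row is 
> set up roughly like this, where the core mask is just an example value for 
> cores 1-4:)
>
> ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=1e
> ovs-vsctl set Interface dpdk0 options:n_rxq=4
> ovs-vsctl set Interface vhost111 options:n_rxq=2
> ovs-vsctl set Interface vhost112 options:n_rxq=2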
> 
> Do you have any suggestions? 
>
> Best regards,
>
> Mechthild Buescher



_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
