I did some more testing today on this issue. I will include some more information in case anyone is able to suggest a fix.
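For context, here is roughly how the bridge and ports in this setup get created. These commands are reconstructed from the ovs-vsctl output further down rather than copied from my shell history, so treat them as a sketch:

```shell
# Enable DPDK support in OVS (one-time; assumes hugepages are already configured)
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# br-tun is a userspace (netdev) bridge so it can carry DPDK ports
ovs-vsctl add-br br-tun -- set bridge br-tun datapath_type=netdev

# Physical DPDK NIC, bound to the PCI address shown in the ovs-vsctl output below
ovs-vsctl add-port br-tun dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:01:00.0

# The vhost-user port (with type=dpdkvhostuser, OVS is the server
# and creates the socket under /var/run/openvswitch)
ovs-vsctl add-port br-tun dpdkvhost1 -- set Interface dpdkvhost1 type=dpdkvhostuser
```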
Next, we will add our dpdkvhost1 vhostuser port. Let's dump ovs-vswitchd.log, which shows everything to be in order:

2020-09-14T20:04:28.237Z|00217|bridge|INFO|bridge br-tun: deleted interface dpdkvhost1 on port 2
2020-09-14T20:04:28.237Z|00218|dpif_netdev|INFO|Core 2 on numa node 0 assigned port 'dpdk0' rx queue 0 (measured processing cycles 0).
2020-09-14T20:05:19.296Z|00219|dpdk|INFO|VHOST_CONFIG: vhost-user server: socket created, fd: 49
2020-09-14T20:05:19.296Z|00220|netdev_dpdk|INFO|Socket /var/run/openvswitch/dpdkvhost1 created for vhost-user port dpdkvhost1
2020-09-14T20:05:19.296Z|00221|dpdk|INFO|VHOST_CONFIG: bind to /var/run/openvswitch/dpdkvhost1
2020-09-14T20:05:19.296Z|00222|dpif_netdev|INFO|Core 2 on numa node 0 assigned port 'dpdkvhost1' rx queue 0 (measured processing cycles 0).
2020-09-14T20:05:19.296Z|00223|dpif_netdev|INFO|Core 3 on numa node 0 assigned port 'dpdk0' rx queue 0 (measured processing cycles 0).
2020-09-14T20:05:19.296Z|00224|bridge|INFO|bridge br-tun: added interface dpdkvhost1 on port 2

Next, we will launch our virtual machine with the virt-manager GUI. Here is the relevant snippet of my XML file:

<interface type='vhostuser'>
  <mac address='52:54:00:d1:ba:7a'/>
  <source type='unix' path='/var/run/openvswitch/dpdkvhost1' mode='client'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
</interface>

The VM comes up. Let's check the Open vSwitch log to see whether the port connected properly.
2020-09-14T20:11:54.407Z|00165|dpdk|INFO|VHOST_CONFIG: new vhost user connection is 52
2020-09-14T20:11:54.407Z|00166|dpdk|INFO|VHOST_CONFIG: new device, handle is 0
2020-09-14T20:11:54.417Z|00167|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
2020-09-14T20:11:54.417Z|00168|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
2020-09-14T20:11:54.417Z|00169|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
2020-09-14T20:11:54.417Z|00170|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
2020-09-14T20:11:54.419Z|00171|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
2020-09-14T20:11:54.419Z|00172|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_OWNER
2020-09-14T20:11:54.419Z|00173|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
2020-09-14T20:11:54.419Z|00174|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
2020-09-14T20:11:54.419Z|00175|dpdk|INFO|VHOST_CONFIG: vring call idx:0 file:54
2020-09-14T20:11:54.419Z|00176|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
2020-09-14T20:11:54.419Z|00177|dpdk|INFO|VHOST_CONFIG: vring call idx:1 file:77
2020-09-14T20:11:59.816Z|00178|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2020-09-14T20:11:59.816Z|00179|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2020-09-14T20:11:59.818Z|00180|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/var/run/openvswitch/dpdkvhost1' changed to 'enabled'
2020-09-14T20:11:59.818Z|00181|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2020-09-14T20:11:59.818Z|00182|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2020-09-14T20:11:59.818Z|00183|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2020-09-14T20:11:59.818Z|00184|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2020-09-14T20:11:59.818Z|00185|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/var/run/openvswitch/dpdkvhost1' changed to 'enabled'
2020-09-14T20:11:59.818Z|00186|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2020-09-14T20:11:59.818Z|00187|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2020-09-14T20:11:59.818Z|00188|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_FEATURES

Things look good so far. Let's check our bridges:

# ovs-vsctl show
2d46de50-e5b8-47be-84b4-a7e85ce29526
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-prv
            Interface int-br-prv
                type: patch
                options: {peer=phy-br-prv}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "dpdkvhost1"
            Interface "dpdkvhost1"
                type: dpdkvhostuser
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:01:00.0"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-prv
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-prv
            Interface br-prv
                type: internal
        Port phy-br-prv
            Interface phy-br-prv
                type: patch
                options: {peer=int-br-prv}

Now, ping from the VM:

# ./dump-bridge-ports.sh br-tun
OFPST_PORT reply (xid=0x2): 4 ports
  port LOCAL: rx pkts=405896, bytes=17725121, drop=1, errs=0, frame=0, over=0, crc=0   --> ping packets from the VM arrive on the LOCAL port, not the dpdkvhost1 port!
          tx pkts=12676, bytes=1223882, drop=0, errs=0, coll=0
  port dpdk0: rx pkts=12762, bytes=1285498, drop=0, errs=0, frame=?, over=?, crc=?
          tx pkts=12458, bytes=1248562, drop=0, errs=0, coll=?
  port dpdkvhost1: rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?   --> why are the ping packets not arriving on the dpdkvhost1 port?
          tx pkts=0, bytes=0, drop=0, errs=?, coll=?
  port "patch-int": rx pkts=0, bytes=0, drop=?, errs=?, frame=?, over=?, crc=?
          tx pkts=205, bytes=24012, drop=?, errs=?, coll=?

# ./dump-port-numbers.sh br-tun
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000001b21c57204
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(dpdk0): addr:00:1b:21:c5:72:04
     config:     0
     state:      0
     current:    1GB-FD AUTO_NEG
     speed: 1000 Mbps now, 0 Mbps max
 2(dpdkvhost1): addr:00:00:00:00:00:00   --> no MAC address; this does not look right.
     config:     0
     state:      LINK_DOWN   --> why is this port in LINK_DOWN state?
     speed: 0 Mbps now, 0 Mbps max
 3(patch-int): addr:5a:11:f1:72:d7:5f
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-tun): addr:00:1b:21:c5:72:04
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

So clearly, this port is not functioning correctly.

From: "Wittling, Mark (CCI-Atlanta)" <mark.wittl...@cox.com>
Date: Wednesday, September 9, 2020 at 11:49 AM
To: "ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
Subject: Vhost Ports on OVS DPDK Bridge do not work when using virsh xml file

I have an issue where, when I add a vhost port to an OVS bridge (and it does not matter whether it is vhostuser or vhostuserclient), the port adds okay, but the link state remains down and the port serves no traffic. I have a DPDK NIC attached to this same bridge, and it appears to work. More interestingly, the vhost ports do seem to work if I bring the VM up with a bash script that runs qemu-kvm directly with all of its options.

The OVS version is 2.11.1, compiled with DPDK version 18.11.8. The kernel version is 3.10.0-1127.13.1.el7.x86_64.
* I looked into updating to a 4.x kernel, but I am running CentOS 7, and the CentOS folks told me that they backport fixes, and that updating the kernel would cause more problems than it would solve.

Originally I suspected vhostuserclient to be the issue, so I backed down from that and went to vhostuser (where OVS is the server and QEMU is the client). Same problem.

The XML I am using is:

<interface type='vhostuser'>
  <mac address='52:54:00:d1:ba:7a'/>
  <source type='unix' path='/var/run/openvswitch/dpdkvhost1' mode='client'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
</interface>

Pretty standard config. One thing I will mention is that I have to set mode 777 on that socket in the /var/run/openvswitch directory for the VM to start up from the virt-manager GUI. The reason is that qemu cannot write to a socket created by Open vSwitch under the openvswitch user and group. But I don't think that matters here.

Does anyone have a clue why this doesn't work? Or how to get this port's link state back up and serving traffic?

Mark
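A note on the socket-permission point above: the vhostuserclient arrangement avoids the chmod 777 entirely, because QEMU (not OVS) creates the socket and therefore owns it. A sketch of that setup, with an illustrative port name and socket path (not taken from my actual config):

```shell
# dpdkvhostuserclient: OVS is the vhost-user *client*; QEMU creates the
# socket at vhost-server-path, so it is owned by the qemu user and no
# permission change on an OVS-owned socket is needed.
ovs-vsctl add-port br-tun dpdkvhostclient1 -- \
    set Interface dpdkvhostclient1 type=dpdkvhostuserclient \
    options:vhost-server-path=/var/run/openvswitch/dpdkvhostclient1
```

In the libvirt XML, the <source> element then uses mode='server' instead of mode='client'. As noted in the message above, I saw the same link-down behavior with both port types, so this sidesteps only the permission issue, not the underlying problem.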
_______________________________________________
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss