Hi,

I successfully compiled the x86_64-native-linuxapp-gcc target of the latest DPDK 
(2.2.0) and the latest OVS (2.5.0) on Ubuntu 15.10 with the default kernel 
4.2.0-34-generic, as per http://openvswitch.org/support/dist-docs/INSTALL.DPDK.md.txt, 
and set up the bridges like so:

ovs-vsctl set bridge br0 datapath_type=netdev
ovs-vsctl set bridge br1 datapath_type=netdev
ovs-vsctl add-port br0 dpdk0
ovs-vsctl set Interface dpdk0 type=dpdk
ovs-vsctl add-port br1 dpdk1
ovs-vsctl set Interface dpdk1 type=dpdk
ovs-vsctl add-port br0 vhost-user-0
ovs-vsctl set Interface vhost-user-0 type=dpdkvhostuser
ovs-vsctl add-port br1 vhost-user-1
ovs-vsctl set Interface vhost-user-1 type=dpdkvhostuser
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=30

(each of those commands hangs, but if terminated with Ctrl+C the action is 
applied nonetheless)
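
For completeness, a minimal sketch of how the bridges themselves would be 
created (not shown above; this combines add-br with the datapath type, as in 
INSTALL.DPDK.md):

ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-br br1 -- set bridge br1 datapath_type=netdev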

I added 12 x 1 GB hugepages to the GRUB boot command line and isolated the cores 
used for the PMD threads and the VM like so:

"default_hugepagesz=1G hugepagesz=1G hugepages=12 isolcpus=0,1,2,3,4,5"

Note: the system has 24 cores: cores 0-5 and 12-17 are on NUMA node 0, cores 6-11 
and 18-23 are on NUMA node 1, and the NICs em49 and em50 are on NUMA node 0. 
Additionally, the system has 32 GB RAM per NUMA node.
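
Just to spell out how the pmd-cpu-mask above maps onto these cores (my own 
reading of the hex mask, not from the guide):

echo 'obase=2; ibase=16; 30' | bc   # 30 hex = 110000 binary -> cores 4 and 5

i.e. the PMD threads should land on cores 4 and 5, which are on NUMA node 0 
(same node as the NICs) and covered by the isolcpus list above.
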
I then proceeded to start up all necessary components like so:

mount -t hugetlbfs -o pagesize=1G none /dev/hugepages
modprobe uio
insmod $DPDK_BUILD/kmod/igb_uio.ko
insmod $DPDK_DIR/lib/librte_vhost/eventfd_link/eventfd_link.ko
$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio em49
$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio em50
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
ovs-vswitchd --dpdk -c 0x2 -n 4 --socket-mem 1024,0 -- unix:$DB_SOCK --pidfile --detach --monitor
chmod 777 /usr/local/var/run/openvswitch/vhost-user-*
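
A quick way to double-check the binding at this point (not part of the guide's 
steps, just a sanity check):

$DPDK_DIR/tools/dpdk_nic_bind.py --status

This should list em49 and em50 as using the igb_uio driver.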

When running the ovs-vswitchd command, the error/warning message below is shown 
and repeats indefinitely:

2016-04-01T09:16:48Z|00001|ovs_rcu(urcu3)|WARN|blocked 1000 ms waiting for 
vhost_thread1 to quiesce
2016-04-01T09:16:49Z|00002|ovs_rcu(urcu3)|WARN|blocked 2000 ms waiting for 
vhost_thread1 to quiesce
2016-04-01T09:16:49Z|00071|ovs_rcu|WARN|blocked 1000 ms waiting for 
vhost_thread1 to quiesce
2016-04-01T09:16:50Z|00082|ovs_rcu|WARN|blocked 2000 ms waiting for 
vhost_thread1 to quiesce
2016-04-01T09:16:51Z|00003|ovs_rcu(urcu3)|WARN|blocked 4000 ms waiting for 
vhost_thread1 to quiesce
2016-04-01T09:16:52Z|00083|ovs_rcu|WARN|blocked 4000 ms waiting for 
vhost_thread1 to quiesce
2016-04-01T09:16:55Z|00004|ovs_rcu(urcu3)|WARN|blocked 8000 ms waiting for 
vhost_thread1 to quiesce

If I kill the above command with killall ovs-vswitchd and restart it, the 
error/warning does not appear, but no traffic reaches the vhost-user interface.
In this case the traffic (say, an ARP request) can be observed on the brX 
internal interface but never reaches the vhost interface in the VM.
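
For reference, this is roughly how I observe it (tcpdump on the internal tap 
port plus the OVS port counters; exact invocations from memory):

tcpdump -ni br0 arp        # the ARP requests are visible here
ovs-ofctl dump-ports br0   # per-port rx/tx counters, including vhost-user-0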

The VM config (libvirt domain XML) snippet looks like this:

  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>
  <vcpu placement='static' cpuset='0-3'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
  </cputune>
  <cpu>
    <numa>
      <cell id='0' cpus='0-3' memory='2097152' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
[..]
  <interface type='vhostuser'>
    <mac address='52:54:00:e3:eb:b2'/>
    <source type='unix' path='/usr/local/var/run/openvswitch/vhost-user-0' mode='client'/>
    <model type='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
  </interface>
  <interface type='vhostuser'>
    <mac address='52:54:00:26:50:09'/>
    <source type='unix' path='/usr/local/var/run/openvswitch/vhost-user-1' mode='client'/>
    <model type='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
  </interface>
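
If it is relevant: with the hugepage memoryBacking and memAccess='shared' above, 
I would expect the generated qemu command line to contain something along the 
lines of the following (sketch from memory, exact option names and paths may 
differ):

-object memory-backend-file,id=ram-node0,mem-path=/dev/hugepages,share=yes,size=2147483648
-numa node,nodeid=0,cpus=0-3,memdev=ram-node0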


In case someone wonders why Ubuntu 15.10: it ships a reasonably current kernel 
with the latest improvements, and QEMU > 2.2.0 is provided via the repositories.

I already tested whether 2 MB hugepages and/or an IVSHMEM build solve the 
problem, but they didn't.

Any help is greatly appreciated!

Best regards
Felix
