Hi, thanks, this worked perfectly. VM communication also works!
Best regards,
Felix

-----Original Message-----
From: Ben Pfaff [mailto:[email protected]]
Sent: Friday, 1 April 2016 17:08
To: Felix Brucker <[email protected]>
Cc: [email protected]
Subject: Re: [ovs-discuss] [DPDK-OVS] blocked waiting for vhost_thread1 to quiesce

On Fri, Apr 01, 2016 at 09:53:15AM +0000, Felix Brucker wrote:
> Hi,
>
> I successfully compiled the x86_64-native-linuxapp-gcc target of the latest
> DPDK (2.2.0) and the latest OVS (2.5.0) on Ubuntu 15.10 with kernel
> 4.2.0-34-generic (default) as per
> http://openvswitch.org/support/dist-docs/INSTALL.DPDK.md.txt
> and set up the bridges like so:
>
> ovs-vsctl set bridge br0 datapath_type=netdev
> ovs-vsctl set bridge br1 datapath_type=netdev
> ovs-vsctl add-port br0 dpdk0
> ovs-vsctl set Interface dpdk0 type=dpdk
> ovs-vsctl add-port br1 dpdk1
> ovs-vsctl set Interface dpdk1 type=dpdk
> ovs-vsctl add-port br0 vhost-user-0
> ovs-vsctl set Interface vhost-user-0 type=dpdkvhostuser
> ovs-vsctl add-port br1 vhost-user-1
> ovs-vsctl set Interface vhost-user-1 type=dpdkvhostuser
> ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=30
>
> (every one of those commands hangs, but if terminated with Ctrl+C the
> action is applied anyway)
>
> I added 12 x 1 GB hugepages to the GRUB boot command line and isolated the
> cores used for the PMDs and the VM like so:
>
> "default_hugepagesz=1G hugepagesz=1G hugepages=12 isolcpus=0,1,2,3,4,5"
>
> Note: the system has 24 cores: cores 0-5 and 12-17 are on NUMA node 0,
> cores 6-11 and 18-23 are on NUMA node 1; NICs em49 and em50 are on NUMA
> node 0. Additionally, the system has 32 GB of RAM per NUMA node.
> I then proceeded to start up all necessary components like so:
>
> mount -t hugetlbfs -o pagesize=1G none /dev/hugepages
> modprobe uio
> insmod $DPDK_BUILD/kmod/igb_uio.ko
> insmod $DPDK_DIR/lib/librte_vhost/eventfd_link/eventfd_link.ko
> $DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio em49
> $DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio em50
> ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
>     --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
> ovs-vswitchd --dpdk -c 0x2 -n 4 --socket-mem 1024,0 \
>     -- unix:$DB_SOCK --pidfile --detach --monitor
> chmod 777 /usr/local/var/run/openvswitch/vhost-user-*
>
> On the ovs-vswitchd command the error/warning messages below are shown and
> repeat indefinitely:
>
> 2016-04-01T09:16:48Z|00001|ovs_rcu(urcu3)|WARN|blocked 1000 ms waiting for vhost_thread1 to quiesce
> 2016-04-01T09:16:49Z|00002|ovs_rcu(urcu3)|WARN|blocked 2000 ms waiting for vhost_thread1 to quiesce
> 2016-04-01T09:16:49Z|00071|ovs_rcu|WARN|blocked 1000 ms waiting for vhost_thread1 to quiesce
> 2016-04-01T09:16:50Z|00082|ovs_rcu|WARN|blocked 2000 ms waiting for vhost_thread1 to quiesce
> 2016-04-01T09:16:51Z|00003|ovs_rcu(urcu3)|WARN|blocked 4000 ms waiting for vhost_thread1 to quiesce
> 2016-04-01T09:16:52Z|00083|ovs_rcu|WARN|blocked 4000 ms waiting for vhost_thread1 to quiesce
> 2016-04-01T09:16:55Z|00004|ovs_rcu(urcu3)|WARN|blocked 8000 ms waiting for vhost_thread1 to quiesce

This was recently fixed on branch-2.5 with the following commit.

commit f519a72d9a3708fbc5f796f176e7c8bd3dcfb738
Author: Daniele Di Proietto <[email protected]>
Date:   Wed Mar 23 16:37:47 2016 -0700

    ovs-thread: Do not always end quiescent state in ovs_thread_create().

    A new thread must be started in a non quiescent state. There is a call
    to ovsrcu_quiesce_end() in ovsthread_wrapper(), to enforce this.

    ovs_thread_create(), instead, is executed in the parent thread.
    It must call ovsrcu_quiesce_end() on its first invocation, to put the
    main thread in a non quiescent state. On every other invocation, it
    doesn't make sense to alter the calling thread's state, so this commit
    wraps the call to ovsrcu_quiesce_end() in an ovsthread_once construct.

    This fixes a bug in ovs-rcu where the first call in the process to
    ovsrcu_quiesce_start() will not be honored, because the calling thread
    will need to create the 'urcu' thread (and creating a thread will
    wrongly end its quiescent state):

        ovsrcu_quiesce_start()
          ovs_rcu_quiesced()
            if (ovsthread_once_start(&once)) {
                ovs_thread_create("urcu") /* This will end the quiescent state. */
            }

    This bug affects in particular ovs-vswitchd with DPDK. In the DPDK
    case the first threads created are "vhost_thread" and "dpdk_watchdog".
    If dpdk_watchdog is the first to call ovsrcu_quiesce_start() (via
    xsleep()), the call is not honored and the RCU grace period lasts at
    least DPDK_PORT_WATCHDOG_INTERVAL (5 s on current master). If
    vhost_thread, on the other hand, is the first to call
    ovsrcu_quiesce_start(), the call is not honored and the RCU grace
    period lasts indefinitely, because no more calls to
    ovsrcu_quiesce_start() are issued from vhost_thread.

    For some reason (it's a race condition, after all), on current master
    dpdk_watchdog will always be the first to call ovsrcu_quiesce_start(),
    but with the upcoming DPDK database configuration changes, sometimes
    vhost_thread will issue the first call to ovsrcu_quiesce_start().
    Sample ovs-vswitchd.log:

    2016-03-23T22:34:28.532Z|00004|ovs_rcu(urcu3)|WARN|blocked 8000 ms waiting for vhost_thread2 to quiesce
    2016-03-23T22:34:30.501Z|00118|ovs_rcu|WARN|blocked 8000 ms waiting for vhost_thread2 to quiesce
    2016-03-23T22:34:36.532Z|00005|ovs_rcu(urcu3)|WARN|blocked 16000 ms waiting for vhost_thread2 to quiesce
    2016-03-23T22:34:38.501Z|00119|ovs_rcu|WARN|blocked 16000 ms waiting for vhost_thread2 to quiesce

    The commit also adds a test for the ovs-rcu module to make sure that:

    * A new thread is started in a non quiescent state.
    * The first call to ovsrcu_quiesce_start() is honored.
    * When a process becomes multithreaded, the main thread is put in an
      active state.

    Signed-off-by: Daniele Di Proietto <[email protected]>
    Acked-by: Ben Pfaff <[email protected]>

_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss
