Thank you, Ciara.
OVS: 2.5.0
DPDK: 2.2.0
QEMU: 2.2.1
Even after appending mrg_rxbuf=off to the '-device' parameter, the two VMs still cannot communicate.
While pinging from vm1 to vm2, the statistics on vm1 show that eth1's RX packet count stays at zero while its TX packet count keeps increasing.
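The counters can be checked roughly like this (assuming eth1 is the guest interface and ovsbr0 the host bridge, as in my setup):

ip -s link show eth1          # inside vm1: interface RX/TX packet counters
ovs-ofctl dump-ports ovsbr0   # on the host: per-port rx/tx/drop/error counters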
My build and setup commands are as follows:
#!/bin/bash
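## assumes the following are exported beforehand (values as used further down in this script):
# export DPDK_DIR=/root/workplane/dpdk
# export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc
# export DB_SOCK=/usr/local/var/run/openvswitch/db.sock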
################ config and compile dpdk ################
# cd dpdk
# make config T=x86_64-native-linuxapp-gcc
# make install T=x86_64-native-linuxapp-gcc
########################################################
################ config and compile ovs ################
# cd ovs
# ./boot.sh
# ./configure --localstatedir=/var --with-dpdk=/root/workplane/dpdk/x86_64-native-linuxapp-gcc
# make
# make install
########################################################
################ config and compile qemu ################
# cd qemu
# ./configure
# make
# make install
########################################################
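## optional sanity check: confirm which QEMU build ends up first in PATH
# qemu-system-x86_64 --version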
## set hugepage number, use boot cmdline or procfs
echo 8 > /proc/sys/vm/nr_hugepages
## insert the kernel modules
modprobe uio
insmod $DPDK_BUILD/kmod/igb_uio.ko
insmod $DPDK_BUILD/kmod/rte_kni.ko
insmod $DPDK_DIR/lib/librte_vhost/eventfd_link/eventfd_link.ko
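## optional sanity check: confirm all three modules actually loaded
lsmod | grep -E 'igb_uio|rte_kni|eventfd_link'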
# unbind the dpdk interface
$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio 01:00.0
$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio 01:00.1
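## optional sanity check: confirm both NICs now show up under the igb_uio driver
$DPDK_DIR/tools/dpdk_nic_bind.py --status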
# Mount hugetlbfs
mkdir -p /dev/hugepages
mount -t hugetlbfs -o pagesize=1G none /dev/hugepages
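## optional sanity check: confirm the hugepage reservation and the hugetlbfs mount took effect
grep -i huge /proc/meminfo
mount | grep hugetlbfs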
#first time
#ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
    --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach --log-file
# first time only
#ovs-vsctl --no-wait init
ovs-vswitchd --dpdk -c 0x77 -n 2 --socket-mem 2048,0 -- unix:$DB_SOCK \
    --pidfile --detach --log-file
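## optional sanity check: make sure ovsdb-server and ovs-vswitchd are up and
## answering before any ports are added
ovs-vsctl show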
##############################################################################
## Add bridge
/usr/local/bin/ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
## Add dpdk port
/usr/local/bin/ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk
/usr/local/bin/ovs-vsctl add-port ovsbr0 dpdk1 -- set Interface dpdk1 type=dpdk
# /usr/local/bin/ovs-vsctl add-port ovsbr0 dpdk2 -- set Interface dpdk2 type=dpdk
# /usr/local/bin/ovs-vsctl add-port ovsbr0 dpdk3 -- set Interface dpdk3 type=dpdk
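## optional sanity check: list the bridge's ports and their OpenFlow port numbers
ovs-ofctl show ovsbr0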
## Add vhost-user port
/usr/local/bin/ovs-vsctl add-port ovsbr0 vhost-user-0 -- set Interface vhost-user-0 type=dpdkvhostuser
/usr/local/bin/ovs-vsctl add-port ovsbr0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
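## optional sanity check: OVS creates one socket per dpdkvhostuser port in its
## run directory; these are the paths QEMU must point its chardevs at
ls -l /usr/local/var/run/openvswitch/vhost-user-*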
-----Original Message-----
From: Loftus, Ciara [mailto:[email protected]]
Sent: 13 April 2016 22:34
To: lifuqiong; [email protected]
Subject: RE: [ovs-discuss] ovs + dpdk vhost-user match flows but cannot execute
actions
> I want to test a dpdk vhost-user port on OVS, following
> https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200
> I created the OVS+DPDK environment following INSTALL.DPDK.md and created two
> VMs, then tried to ping between them, but got "Destination Host
> Unreachable". Dump-flows shows that packets match the flow, but they can't
> be output to port 4. Why? I can't get any useful error or warning info from
> ovs-vswitchd.log.
Do any statistics increment on the interface on the VM? Eg rx errors?
What versions of OVS / DPDK / QEMU are you using?
Some things to try if you haven't already:
- QEMU v2.5.0
- Disable mergeable rx buffers, like so:
  -device virtio-net-pci,mac=00:00:00:00:01:12,netdev=mynet1,mrg_rxbuf=off
Thanks,
Ciara
>
> ovs-ofctl dump-flows ovsbr0
> NXST_FLOW reply (xid=0x4):
> cookie=0x0, duration=836.946s, table=0, n_packets=628, n_bytes=26376, idle_age=0, in_port=3 actions=output:4
> cookie=0x0, duration=831.458s, table=0, n_packets=36, n_bytes=1512, idle_age=770, in_port=4 actions=output:3
>
> root@host152:/usr/local/var/run/openvswitch# ovs-vsctl show
> 03ae6f7d-3b71-45e3-beb0-09fa11292eaa
>     Bridge "ovsbr0"
>         Port "vhost-user-1"
>             Interface "vhost-user-1"
>                 type: dpdkvhostuser
>         Port "ovsbr0"
>             Interface "ovsbr0"
>                 type: internal
>         Port "dpdk1"
>             Interface "dpdk1"
>                 type: dpdk
>         Port "vhost-user-0"
>             Interface "vhost-user-0"
>                 type: dpdkvhostuser
>         Port "dpdk0"
>             Interface "dpdk0"
>                 type: dpdk
>
> Start VM info:
> qemu-system-x86_64 -m 1024 -smp 2 -hda /root/vm11.qcow2 -boot c -enable-kvm -vnc 0.0.0.0:1 \
>   -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-0 \
>   -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
>   -device virtio-net-pci,mac=00:00:00:00:01:12,netdev=mynet1 \
>   -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
>   -numa node,memdev=mem -mem-prealloc -d exec
>
> qemu-system-x86_64: -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce: chardev "char1" went up