Hi Andrey/Sergey,
I had posted about a problem I am facing on a similar topic here:

http://comments.gmane.org/gmane.linux.network.openvswitch.general/9105

I too observed the dpdkvhost0 MAC of addr:00:00:00:00:00:00 in OVS. I do not see packets sent by the VM on this port being seen by OVS (in fact, dump-flows also reports 0 packets).

I am looking for anyone's opinion on whether the dpdkvhost0 MAC reported here is expected
(and on how to quickly debug any other problem there).
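
A quick way I know to cross-check the counters from the OVS side (standard ovs-ofctl/ovs-appctl commands; the bridge name br10 is assumed from the thread below):

# ovs-ofctl dump-ports br10
# ovs-ofctl dump-flows br10
# ovs-appctl dpif/show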

I am on qemu 2.1.0 as well.

Thank you,
Gowrishankar

On Friday 05 June 2015 10:32 PM, Andrey Korolyov wrote:
On Fri, Jun 5, 2015 at 11:49 AM, Sergey Matov <[email protected]> wrote:
Hello.
I am trying to build a VM-to-VM scheme using Open vSwitch 2.3.9 and DPDK 2.0.0.
After building and installing OVS+DPDK, I'm running a single bridge with an extra port of type dpdkvhost.
I'm using a user-defined vhost device called vhost-net1.


#ovs-vswitchd --dpdk --cuse_dev_name vhost-net1 -c 0x01 -n 4 --socket-mem 1024,0 \
  -- unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach

014d4a3f-0a8a-4d8f-8fc3-222775266ebf
     Bridge "br10"
         Port "dpdkvhost0"
             Interface "dpdkvhost0"
                 type: dpdkvhost
         Port "br10"
             Interface "br10"
                 type: internal
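
(For reference, a minimal sketch of how such a bridge and port are typically created per INSTALL.DPDK; names assumed from the output above:

# ovs-vsctl add-br br10 -- set bridge br10 datapath_type=netdev
# ovs-vsctl add-port br10 dpdkvhost0 -- set Interface dpdkvhost0 type=dpdkvhost
)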

Then I'm trying to run a single KVM VM using qemu 2.0 (pretty old, with the
-mem-prealloc option):

#/usr/bin/qemu-system-x86_64 \
  -name qemu_smatov-0 -cpu host -m 2048 -smp 2 -boot once=c \
  -drive file=/var/lib/libvirt/images/qemu_smatov-0.qcow2 -enable-kvm -net none \
  -mem-path /mnt/huge -numa node,memdev=mem -mem-prealloc \
  -netdev type=tap,id=net1,script=no,downscript=no,ifname=dpdkvhost0,vhost=on,vhostfd=3 \
  -device virtio-net-pci,netdev=net1,mac=52:54:f1:61:4c:84,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
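
Note that vhostfd=3 assumes file descriptor 3 is already open on the cuse device when qemu starts, and -numa node,memdev=mem normally needs a matching -object memory-backend-file,id=mem,... to back it. A minimal sketch of the fd handoff, assuming the device node is /dev/vhost-net1:

# exec 3<>/dev/vhost-net1
# /usr/bin/qemu-system-x86_64 ... vhostfd=3 ...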

It runs the VM, but ovs-vswitchd CPU usage rises to 100%:
   PID USER  PR  NI    VIRT   RES  SHR S  %CPU %MEM    TIME+  COMMAND
  5895 root  20   0 4296120  9412 2144 S 100.1  0.1  0:52.50  ovs-vswitchd

And the dpdkvhost port stays down:
#ovs-ofctl show br10
OFPT_FEATURES_REPLY (xid=0x2): dpid:00003acf36cb6443
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src
mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
  1(dpdkvhost0): addr:00:00:00:00:00:00
      config:     PORT_DOWN
      state:      LINK_DOWN
      speed: 0 Mbps now, 0 Mbps max
  LOCAL(br10): addr:3a:cf:36:cb:64:43
      config:     PORT_DOWN
      state:      LINK_DOWN
      current:    10MB-FD COPPER
      speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
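
Note that config: PORT_DOWN is the administrative state at the OpenFlow layer, separate from the link state, and is worth ruling out by bringing the port up explicitly:

# ovs-ofctl mod-port br10 dpdkvhost0 up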

Reserved 6G of hugepages.
Running on Ubuntu 14.04, kernel 3.13.0-53-generic,
CPU i5 with 16 GB RAM.
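
(For reference, a sketch of reserving 6G as 2 MB hugepages and mounting them to match -mem-path /mnt/huge:

# echo 3072 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# mount -t hugetlbfs nodev /mnt/huge
)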

Is this CPU usage expected?

And, btw, do I need any extra guest VM configuration to run ping/iperf?
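
(For ping/iperf across the bridge, a single NORMAL flow gives MAC-learning forwarding; standalone mode behaves this way by default, but it can be set explicitly:

# ovs-ofctl add-flow br10 actions=NORMAL
)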
--

Sergey Matov



Yes, this is expected behavior. If you followed INSTALL.DPDK, no
additional steps are required if the switch itself is performing in
standalone mode.
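
The 100% CPU is the DPDK poll-mode loop busy-polling on the core selected by -c 0x01. One way to confirm it stays confined to that core, using standard tools:

# taskset -pc $(pidof ovs-vswitchd)
# top -H -p $(pidof ovs-vswitchd)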


