Re: [ovs-discuss] Rate limit based on IP

2017-10-22 Thread BALL SUN
Hi Guoshuai

Thanks for your suggestion, I will take a look. Are you currently using
this method?

On Fri, Oct 20, 2017 at 6:55 PM, Guoshuai Li  wrote:
>
> You can try the meter in the DPDK environment.
> It can be applied per flow.
>
> https://github.com/openvswitch/ovs/blob/fa10e59a6b9a894644c99f39035afc427d09a490/tests/dpif-netdev.at#L208
>
>> Hi
>>
>> I understand that OVS supports rate shaping per port; is there any way
>> to perform rate shaping at the IP level?
>>
>> - RBK
>> ___
>> discuss mailing list
>> disc...@openvswitch.org
>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
>
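As a rough sketch of the meter approach suggested above (OpenFlow meters require OpenFlow 1.3 or later; the bridge name br0, the rate, and the match below are all hypothetical):

```shell
# Rate-limit traffic from 10.0.0.1 to ~1 Mbps with an OpenFlow meter
# (userspace/DPDK datapath; names and addresses are illustrative).
ovs-ofctl -O OpenFlow13 add-meter br0 "meter=1 kbps band=type=drop rate=1000"
ovs-ofctl -O OpenFlow13 add-flow br0 "ip,nw_src=10.0.0.1,actions=meter:1,normal"
```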


Re: [ovs-discuss] Rate limit based on IP

2017-10-22 Thread BALL SUN
Hi Ben

Would you mind giving me some samples?

On Sat, Oct 21, 2017 at 1:45 AM, Ben Pfaff  wrote:
> On Fri, Oct 20, 2017 at 04:58:06PM +0800, BALL SUN wrote:
>> I understand that OVS supports rate shaping per port; is there any way
>> to perform rate shaping at the IP level?
>
> You can use the flow table to assign packets to queues based on
> destination IP, then assign different queues different rates with QoS
> configuration.
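A minimal sketch of that approach, assuming a bridge br0 with egress port eth0; all names, addresses, and rates below are hypothetical:

```shell
# Create a QoS record with two queues of different max rates on the
# egress port, then steer packets into them by destination IP.
ovs-vsctl set port eth0 qos=@newqos -- \
  --id=@newqos create qos type=linux-htb other-config:max-rate=1000000000 \
    queues:1=@q1 queues:2=@q2 -- \
  --id=@q1 create queue other-config:max-rate=10000000 -- \
  --id=@q2 create queue other-config:max-rate=50000000
ovs-ofctl add-flow br0 "ip,nw_dst=10.0.0.1,actions=set_queue:1,normal"
ovs-ofctl add-flow br0 "ip,nw_dst=10.0.0.2,actions=set_queue:2,normal"
```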


[ovs-discuss] Rate limit based on IP

2017-10-20 Thread BALL SUN
Hi

I understand that OVS supports rate shaping per port; is there any way
to perform rate shaping at the IP level?

- RBK


Re: [ovs-discuss] Route between bridges in OVS+DPDK

2017-10-18 Thread BALL SUN
I have two VLANs on the host, and a bridge is created for each VLAN. I
need to perform routing between them, so I believe I need a patch port,
right?
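For reference, a patch-port pair between two bridges can be sketched like this (bridge and port names are hypothetical):

```shell
# Connect br-vlan10 and br-vlan20 with a pair of patch ports; each
# patch interface names the other as its peer.
ovs-vsctl add-port br-vlan10 patch10-20 -- \
    set interface patch10-20 type=patch options:peer=patch20-10
ovs-vsctl add-port br-vlan20 patch20-10 -- \
    set interface patch20-10 type=patch options:peer=patch10-20
```

Note that patch ports only link the bridges at layer 2; actual IP routing between the VLANs still needs flows or some routing function on top.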

On Thu, Oct 19, 2017 at 4:44 AM, Aaron Conole  wrote:
> BALL SUN  writes:
>
>> Hi
>>
>> is it possible to route the packet from bridge interface 1 to bridge
>> interface 2 in OVS+DPDK environment?
>
> Are you looking for patch ports?  veth ports?  What are you trying to
> accomplish?
>
>> RBK


Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-10-16 Thread BALL SUN
Thanks for sharing.

I think I missed an important piece of information: our setup is on a VM
guest, so I am not sure whether that is related.

On Mon, Oct 16, 2017 at 7:26 PM, Guoshuai Li  wrote:
> I cannot answer your question, but I can share my environment:
>
>
> I have 32 cpu:
>
>
> [root@gateway1 ~]# cat /proc/cpuinfo | grep processor | wc -l
> 32
> [root@gateway1 ~]#
>
>
> I configure my pmd-cpu-mask as 0xff00.
>
> [root@gateway1 ~]# ovs-vsctl get Open_vSwitch . other_config
> {dpdk-init="true", pmd-cpu-mask="0xff00"}
>
>
> I configure my dpdk ports with "n_rxq=4"; this configuration is important:
>
> Bridge br-ext
> Port bond-ext
> Interface "ext-dpdk-2"
> type: dpdk
> options: {dpdk-devargs=":84:00.1", n_rxq="4"}
> Interface "ext-dpdk-1"
> type: dpdk
> options: {dpdk-devargs=":84:00.0", n_rxq="4"}
> Bridge br-agg
> Port bond-agg
> Interface "agg-dpdk-2"
> type: dpdk
> options: {dpdk-devargs=":07:00.1", n_rxq="4"}
> Interface "agg-dpdk-1"
> type: dpdk
> options: {dpdk-devargs=":07:00.0", n_rxq="4"}
>
> And then the CPU usage is 1600%:
>
>
> top - 19:24:27 up 18 days, 24 min,  6 users,  load average: 16.00, 16.00,
> 16.00
> Tasks: 419 total,   1 running, 418 sleeping,   0 stopped,   0 zombie
> %Cpu(s): 50.0 us,  0.0 sy,  0.0 ni, 50.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> KiB Mem : 26409787+total, 25773403+free,  5427996 used,   935844 buff/cache
> KiB Swap:  4194300 total,  4194300 free,0 used. 25799068+avail Mem
>
>   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
> 32426 openvsw+  10 -10 5772520 653044  14888 S  1599  0.2 2267:10
> ovs-vswitchd
>
>
>
> [root@gateway1 ~]# top
> top - 19:24:50 up 18 days, 25 min,  6 users,  load average: 16.00, 16.00,
> 16.00
> Tasks: 419 total,   1 running, 418 sleeping,   0 stopped,   0 zombie
> %Cpu0  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu1  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu2  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu3  :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu4  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu5  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu6  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu7  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu8  :  0.3 us,  0.3 sy,  0.0 ni, 99.3 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu9  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu10 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu11 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu12 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu13 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu14 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu15 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu16 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu17 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu18 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu19 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu20 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu21 :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu22 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu23 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu24 :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu25 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu26 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu27 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu28 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu29 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0
> st
> %Cpu30 :100.

[ovs-discuss] Route between bridges in OVS+DPDK

2017-10-16 Thread BALL SUN
Hi

is it possible to route the packet from bridge interface 1 to bridge
interface 2 in OVS+DPDK environment?

RBK


Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-10-16 Thread BALL SUN
Sorry for the late reply.

We have reinstalled OVS, but we are still having the same issue.

We tried setting pmd-cpu-mask=3, but only one CPU is occupied:
%Cpu0  :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

#  /usr/local/bin/ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", pmd-cpu-mask="3"}

# /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 0:
isolated : false
port: dpdk0 queue-id: 0
pmd thread numa_id 0 core_id 1:
isolated : false

Is it because there is only one NUMA node available?

#  numactl -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 8191 MB
node 0 free: 2633 MB
node distances:
node   0
  0:  10


On Fri, Sep 22, 2017 at 9:16 PM, Flavio Leitner  wrote:
> On Fri, 22 Sep 2017 15:02:20 +0800
> Sun Paul  wrote:
>
>> hi
>>
>> We have tried that. E.g., if we set it to 0x22, we are still only able
>> to see 2 CPUs at 100%. Why?
>
> Because that's what you told OVS to do.
> The mask 0x22 is 0010 0010 and each '1' there represents a CPU.
>
> --
> Flavio
>
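Flavio's point about the mask bits can be sketched with a short shell loop (the 0x22 value is from the thread; the loop itself is illustrative):

```shell
# Print which CPU indices a pmd-cpu-mask selects.
# 0x22 = binary 0010 0010, so bits 1 and 5 are set -> CPUs 1 and 5.
mask=0x22
cpus=""
for i in $(seq 0 31); do
    if [ $(( (mask >> i) & 1 )) -eq 1 ]; then
        cpus="$cpus $i"
    fi
done
echo "CPUs:$cpus"   # prints: CPUs: 1 5
```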


[ovs-discuss] packet flow cannot be detected

2017-10-15 Thread BALL SUN
Hi

I am using OVS+DPDK to perform dedicated packet routing on a Linux box.

My setup is shown below

10.10.0.240 <--> (10.10.0.236) OVS (10.10.10.236) <--> 10.10.10.229

flow1: ovs-ofctl  add-flow x1 "table=0, priority=121, dl_type=0x0800,
nw_proto=17, nw_dst=10.10.0.236, tp_dst=21213, in_port=1,
nw_dst=10.10.0.229, actions=mod_nw_dst=10.10.10.229, goto_table=10"

flow2: ovs-ofctl  add-flow x1 "table=0, priority=121, dl_type=0x0800,
nw_proto=17, nw_src=10.10.10.229, tp_src=21213, in_port=1,
actions=mod_nw_dst=10.10.0.236, goto_table=10"

We have tried sending a packet to 10.10.0.236 from 10.10.0.240, and we
observed that it hits the first flow, but when the packet is returned from
10.10.10.229 to 10.10.0.240, it never hits the second flow. Any idea why?

On host 10.10.10.229, we see the packet request and response as below.

11:31:33.145886 IP 10.10.0.240.21213 > 10.10.10.229.21213: UDP, length 120
11:31:33.146293 IP 10.10.10.229.21213 > 10.10.0.240.21213: UDP, length 84


[ovs-discuss] (no subject)

2017-10-12 Thread BALL SUN
We have created two dpdk interfaces on a KVM guest and added both to the
same OVS bridge. When we add the second dpdk interface, the error below
is printed repeatedly. Any idea?

- RBK

Oct 12 18:03:11 localhostovs-vswitchd[2250]:
ovs|00385|ovs_rcu|WARN|blocked 2001 ms waiting for pmd20 to quiesce
Oct 12 18:03:12 localhostovs-vswitchd[2250]:
ovs|01123|dpdk(pmd20)|ERR|PMD: virtio_xmit_pkts() tx: No free tx
descriptors to transmit
Oct 12 18:03:12 localhostovs-vswitchd[2250]:
ovs|01124|dpdk(pmd20)|ERR|PMD: virtio_xmit_pkts() tx: No free tx
descriptors to transmit
Oct 12 18:03:12 localhostovs-vswitchd[2250]:
ovs|01125|dpdk(pmd20)|ERR|PMD: virtio_xmit_pkts() tx: No free tx
descriptors to transmit
Oct 12 18:03:12 localhostovs-vswitchd[2250]:
ovs|01126|dpdk(pmd20)|ERR|PMD: virtio_xmit_pkts() tx: No free tx
descriptors to transmit
Oct 12 18:03:12 localhostovs-vswitchd[2250]:
ovs|01127|dpdk(pmd20)|ERR|PMD: virtio_xmit_pkts() tx: No free tx
descriptors to transmit
Oct 12 18:03:12 localhostovs-vswitchd[2250]:
ovs|01128|dpdk(pmd20)|ERR|PMD: virtio_xmit_pkts() tx: No free tx
descriptors to transmit
Oct 12 18:03:12 localhostovs-vswitchd[2250]:
ovs|01129|dpdk(pmd20)|ERR|PMD: virtio_xmit_pkts() tx: No free tx
descriptors to transmit
Oct 12 18:03:12 localhostovs-vswitchd[2250]:
ovs|01130|dpdk(pmd20)|ERR|PMD: virtio_xmit_pkts() tx: No free tx
descriptors to transmit
Oct 12 18:03:12 localhostovs-vswitchd[2250]:
ovs|01131|dpdk(pmd20)|ERR|PMD: virtio_xmit_pkts() tx: No free tx
descriptors to transmit
Oct 12 18:03:12 localhostovs-vswitchd[2250]:
ovs|01132|dpdk(pmd20)|ERR|PMD: virtio_xmit_pkts() tx: No free tx
descriptors to transmit
Oct 12 18:03:12 localhostovs-vswitchd[2250]:
ovs|01133|dpdk(pmd20)|ERR|PMD: virtio_xmit_pkts() tx: No free tx
descriptors to transmit
Oct 12 18:03:12 localhostovs-vswitchd[2250]:
ovs|01134|dpdk(pmd20)|ERR|PMD: virtio_xmit_pkts() tx: No free tx
descriptors to transmit


[ovs-discuss] Max number of tables and flows can be configured in OVS

2017-10-02 Thread BALL SUN
Hi

Does anyone know whether there is any limitation on the number of tables
and flows that can be configured in OVS?

- RBK


Re: [ovs-discuss] Problem on creating bridge interface on OVS_DPDK in a VMware VM guest

2017-09-27 Thread BALL SUN
I need some direction on "look into vmxnet3_recv_pkts with gdb".

Also, I am not able to compile vmxnet3-usermap; do you have any steps
for that?

On Thu, Sep 28, 2017 at 12:36 AM, Darrell Ball  wrote:
>
>
> On 9/26/17, 11:47 PM, "BALL SUN"  wrote:
>
> Tried 17.05.2; it seems to have the same problem.
>
>
> [Darrell]
> Can you look into vmxnet3_recv_pkts with gdb to see where the issue is?
>
> In addition, you mentioned in an earlier thread about
> http://dpdk.org/doc/vmxnet3-usermap
> I assume you are already using that driver now?
>
>
>
> # gdb /usr/local/sbin/ovs-vswitchd coredump
> GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-94.el7
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later 
> <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-redhat-linux-gnu".
> For bug reporting instructions, please see:
> <http://www.gnu.org/software/gdb/bugs/>...
> Reading symbols from /usr/local/sbin/ovs-vswitchd...done.
> [New LWP 2610]
> [New LWP 2607]
> [New LWP 2599]
> [New LWP 2601]
> [New LWP 2608]
> [New LWP 2606]
> [New LWP 2603]
> [New LWP 2605]
> [New LWP 2604]
> [New LWP 2602]
> [New LWP 2609]
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `/usr/local/sbin/ovs-vswitchd
> --pidfile=/root/run/ovs-vswitchd.pid'.
> Program terminated with signal 11, Segmentation fault.
> #0  0x00686165 in vmxnet3_recv_pkts ()
> (gdb) bt
> #0  0x00686165 in vmxnet3_recv_pkts ()
> #1  0x007cbad4 in rte_eth_rx_burst (nb_pkts=32,
> rx_pkts=0x7fa2c67fb7b0, queue_id=0, port_id=0 '\000')
> at 
> /data1/build/dpdk-stable-17.05.2/x86_64-native-linuxapp-gcc/include/rte_ethdev.h:2774
> #2  netdev_dpdk_rxq_recv (rxq=, batch=0x7fa2c67fb7a0)
> at lib/netdev-dpdk.c:1664
> #3  0x00729641 in netdev_rxq_recv (rx=rx@entry=0x7fa2f424a800,
> batch=batch@entry=0x7fa2c67fb7a0) at lib/netdev.c:701
> #4  0x00705bde in dp_netdev_process_rxq_port
> (pmd=pmd@entry=0x216e550, rx=0x7fa2f424a800, port_no=1) at
> lib/dpif-netdev.c:3114
> #5  0x00705e46 in pmd_thread_main (f_=) at
> lib/dpif-netdev.c:3854
> #6  0x00779584 in ovsthread_wrapper (aux_=) at
> lib/ovs-thread.c:348
> #7  0x7fa2f7d55dc5 in start_thread (arg=0x7fa2c67fc700) at
> pthread_create.c:308
> #8  0x7fa2f733973d in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
>
> On Wed, Sep 27, 2017 at 2:31 PM, BALL SUN  wrote:
> > so, it is suggested to try on  17.05.2?
> >
> > On Wed, Sep 27, 2017 at 12:23 PM, Darrell Ball  wrote:
> >>
> >>
> >> On 9/26/17, 9:19 PM, "ovs-discuss-boun...@openvswitch.org on behalf of 
> Darrell Ball"  db...@vmware.com> wrote:
> >>
> >>
> >>
> >>
> >>
> >> On 9/26/17, 8:26 PM, "BALL SUN"  wrote:
> >>
> >>
> >>
> >> below is the backtrace
> >>
> >>
> >>
> >> # gdb /usr/local/sbin/ovs-vswitchd coredump
> >>
> >> GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-94.el7
> >>
> >> Copyright (C) 2013 Free Software Foundation, Inc.
> >>
> >> License GPLv3+: GNU GPL version 3 or later 
> <http://gnu.org/licenses/gpl.html>
> >>
> >> This is free software: you are free to change and redistribut

Re: [ovs-discuss] Cannot dump packet capture using dpdk-pdump

2017-09-27 Thread BALL SUN
Retried on the setup. Once I try to run dpdk-pdump, the following
ovs-ofctl command hangs. Any idea?

#  ovs-ofctl show br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:001b21a7f597
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan
mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src
mod_tp_dst
 1(dpdk0): addr:00:1b:21:a7:f5:97
 config: 0
 state:  0
 current:1GB-FD AUTO_NEG
 speed: 1000 Mbps now, 0 Mbps max
 LOCAL(br0): addr:00:1b:21:a7:f5:97
 config: 0
 state:  0
 current:10MB-FD COPPER
 speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0


[root@dhost4 app]#
[roott@dhost app]#
[root@dhost app]#  ovs-ofctl show br0  <--- HANG


On Tue, Sep 26, 2017 at 8:30 PM, Sun Paul  wrote:
> Hi
>
> Whenever I run dpdk-pdump, the ovs-vswitchd daemon dies.
>
> Any idea?
>
>
>
> On Monday, September 25, 2017, Loftus, Ciara  wrote:
>>
>> > Hi
>> >
>> > I am trying to configure dpdk-pdump to dump packets on the OVS
>> > bridge; however, I failed: no packets are captured on ingress or
>> > egress. Any idea?
>>
>> Hi,
>>
>> You are capturing on port 1 but the link state is "down". Make sure your
>> link is up and that you can see packets being rx/tx in the statistics first
>> eg. via 'ovs-ofctl dump-ports '
>> The following might be useful as well:
>>
>> https://software.intel.com/en-us/articles/dpdk-pdump-in-open-vswitch-with-dpdk
>>
>> Thanks,
>> Ciara
>>
>> >
>> >
>> > the command I used is
>> > ./dpdk-pdump -- --pdump
>> > port=1,queue=*,rx-dev=/tmp/pkts_rx.pcap,tx-dev=/tmp/pkts_tx.pcap
>> > --server-socket-path=/usr/local/var/run/openvswitch
>> >
>> > the output on the screen is shown below.
>> >
>> > EAL: Detected 24 lcore(s)
>> > EAL: Probing VFIO support...
>> > EAL: WARNING: Address Space Layout Randomization (ASLR) is enabled in
>> > the kernel.
>> > EAL:This may cause issues with mapping memory into secondary
>> > processes
>> > EAL: PCI device :04:00.0 on NUMA socket 0
>> > EAL:   probe driver: 8086:150e net_e1000_igb
>> > EAL: PCI device :04:00.1 on NUMA socket 0
>> > EAL:   probe driver: 8086:150e net_e1000_igb
>> > EAL: PCI device :04:00.2 on NUMA socket 0
>> > EAL:   probe driver: 8086:150e net_e1000_igb
>> > EAL: PCI device :04:00.3 on NUMA socket 0
>> > EAL:   probe driver: 8086:150e net_e1000_igb
>> > EAL: PCI device :07:00.0 on NUMA socket 0
>> > EAL:   probe driver: 8086:1521 net_e1000_igb
>> > EAL: PCI device :07:00.1 on NUMA socket 0
>> > EAL:   probe driver: 8086:1521 net_e1000_igb
>> > EAL: PCI device :07:00.2 on NUMA socket 0
>> > EAL:   probe driver: 8086:1521 net_e1000_igb
>> > EAL: PCI device :07:00.3 on NUMA socket 0
>> > EAL:   probe driver: 8086:1521 net_e1000_igb
>> > PMD: Initializing pmd_pcap for net_pcap_rx_0
>> > PMD: Creating pcap-backed ethdev on numa socket 4294967295
>> > Port 2 MAC: 00 00 00 01 02 03
>> > PMD: Initializing pmd_pcap for net_pcap_tx_0
>> > PMD: Creating pcap-backed ethdev on numa socket 4294967295
>> > Port 3 MAC: 00 00 00 01 02 03
>> >
>> > the port assignment is
>> >
>> > # ovs-ofctl show xtp1
>> > OFPT_FEATURES_REPLY (xid=0x2): dpid:001b21a7f596
>> > n_tables:254, n_buffers:0
>> > capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS
>> > ARP_MATCH_IP
>> > actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan
>> > mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos
>> > mod_tp_src
>> > mod_tp_dst
>> >  1(dpdk1): addr:00:1b:21:a7:f5:97
>> >  config: 0
>> >  state:  LINK_DOWN
>> >  current:AUTO_NEG
>> >  speed: 0 Mbps now, 0 Mbps max
>> >  2(dpdk0): addr:00:1b:21:a7:f5:96
>> >  config: 0
>> >  state:  LINK_DOWN
>> >  current:AUTO_NEG
>> >  speed: 0 Mbps now, 0 Mbps max
>> >  LOCAL(gtp1): addr:00:1b:21:a7:f5:96
>> >  config: 0
>> >  state:  0
>> >  current:10MB-FD COPPER
>> >  speed: 10 Mbps now, 0 Mbps max
>> > OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0


Re: [ovs-discuss] Problem on creating bridge interface on OVS_DPDK in a VMware VM guest

2017-09-26 Thread BALL SUN
Tried 17.05.2; it seems to have the same problem.


# gdb /usr/local/sbin/ovs-vswitchd coredump
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-94.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/local/sbin/ovs-vswitchd...done.
[New LWP 2610]
[New LWP 2607]
[New LWP 2599]
[New LWP 2601]
[New LWP 2608]
[New LWP 2606]
[New LWP 2603]
[New LWP 2605]
[New LWP 2604]
[New LWP 2602]
[New LWP 2609]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/local/sbin/ovs-vswitchd
--pidfile=/root/run/ovs-vswitchd.pid'.
Program terminated with signal 11, Segmentation fault.
#0  0x00686165 in vmxnet3_recv_pkts ()
(gdb) bt
#0  0x00686165 in vmxnet3_recv_pkts ()
#1  0x007cbad4 in rte_eth_rx_burst (nb_pkts=32,
rx_pkts=0x7fa2c67fb7b0, queue_id=0, port_id=0 '\000')
at 
/data1/build/dpdk-stable-17.05.2/x86_64-native-linuxapp-gcc/include/rte_ethdev.h:2774
#2  netdev_dpdk_rxq_recv (rxq=, batch=0x7fa2c67fb7a0)
at lib/netdev-dpdk.c:1664
#3  0x00729641 in netdev_rxq_recv (rx=rx@entry=0x7fa2f424a800,
batch=batch@entry=0x7fa2c67fb7a0) at lib/netdev.c:701
#4  0x00705bde in dp_netdev_process_rxq_port
(pmd=pmd@entry=0x216e550, rx=0x7fa2f424a800, port_no=1) at
lib/dpif-netdev.c:3114
#5  0x00705e46 in pmd_thread_main (f_=) at
lib/dpif-netdev.c:3854
#6  0x00779584 in ovsthread_wrapper (aux_=) at
lib/ovs-thread.c:348
#7  0x7fa2f7d55dc5 in start_thread (arg=0x7fa2c67fc700) at
pthread_create.c:308
#8  0x7fa2f733973d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:113

On Wed, Sep 27, 2017 at 2:31 PM, BALL SUN  wrote:
> So, is it suggested to try 17.05.2?
>
> On Wed, Sep 27, 2017 at 12:23 PM, Darrell Ball  wrote:
>>
>>
>> On 9/26/17, 9:19 PM, "ovs-discuss-boun...@openvswitch.org on behalf of 
>> Darrell Ball" > db...@vmware.com> wrote:
>>
>>
>>
>>
>>
>> On 9/26/17, 8:26 PM, "BALL SUN"  wrote:
>>
>>
>>
>> below is the backtrace
>>
>>
>>
>> # gdb /usr/local/sbin/ovs-vswitchd coredump
>>
>> GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-94.el7
>>
>> Copyright (C) 2013 Free Software Foundation, Inc.
>>
>> License GPLv3+: GNU GPL version 3 or later 
>> <http://gnu.org/licenses/gpl.html>
>>
>> This is free software: you are free to change and redistribute it.
>>
>> There is NO WARRANTY, to the extent permitted by law.  Type "show 
>> copying"
>>
>> and "show warranty" for details.
>>
>> This GDB was configured as "x86_64-redhat-linux-gnu".
>>
>> For bug reporting instructions, please see:
>>
>> <http://www.gnu.org/software/gdb/bugs/>...
>>
>> Reading symbols from /usr/local/sbin/ovs-vswitchd...done.
>>
>> [New LWP 2716]
>>
>> [New LWP 2707]
>>
>> [New LWP 2712]
>>
>> [New LWP 2711]
>>
>> [New LWP 2713]
>>
>> [New LWP 2708]
>>
>> [New LWP 2714]
>>
>> [New LWP 2715]
>>
>> [New LWP 2706]
>>
>> [New LWP 2710]
>>
>> [New LWP 2709]
>>
>> [Thread debugging using libthread_db enabled]
>>
>> Using host libthread_db library "/lib64/libthread_db.so.1".
>>
>> Core was generated by `/usr/local/sbin/ovs-vswitchd
>>
>> --pidfile=/root/run/ovs-vswitchd.pid'.
>>
>> Program terminated with signal 11, Segmentation fault.
>>
>> #0  0x0068ad75 in vmxnet3_recv_pkts ()
>>
>> (gdb) next
>>
>> 

Re: [ovs-discuss] Problem on creating bridge interface on OVS_DPDK in a VMware VM guest

2017-09-26 Thread BALL SUN
So, is it suggested to try 17.05.2?

On Wed, Sep 27, 2017 at 12:23 PM, Darrell Ball  wrote:
>
>
> On 9/26/17, 9:19 PM, "ovs-discuss-boun...@openvswitch.org on behalf of 
> Darrell Ball"  db...@vmware.com> wrote:
>
>
>
>
>
> On 9/26/17, 8:26 PM, "BALL SUN"  wrote:
>
>
>
> below is the backtrace
>
>
>
> # gdb /usr/local/sbin/ovs-vswitchd coredump
>
> GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-94.el7
>
> Copyright (C) 2013 Free Software Foundation, Inc.
>
> License GPLv3+: GNU GPL version 3 or later 
> <http://gnu.org/licenses/gpl.html>
>
> This is free software: you are free to change and redistribute it.
>
> There is NO WARRANTY, to the extent permitted by law.  Type "show 
> copying"
>
> and "show warranty" for details.
>
> This GDB was configured as "x86_64-redhat-linux-gnu".
>
> For bug reporting instructions, please see:
>
> <http://www.gnu.org/software/gdb/bugs/>...
>
> Reading symbols from /usr/local/sbin/ovs-vswitchd...done.
>
> [New LWP 2716]
>
> [New LWP 2707]
>
> [New LWP 2712]
>
> [New LWP 2711]
>
> [New LWP 2713]
>
> [New LWP 2708]
>
> [New LWP 2714]
>
> [New LWP 2715]
>
> [New LWP 2706]
>
> [New LWP 2710]
>
> [New LWP 2709]
>
> [Thread debugging using libthread_db enabled]
>
> Using host libthread_db library "/lib64/libthread_db.so.1".
>
> Core was generated by `/usr/local/sbin/ovs-vswitchd
>
> --pidfile=/root/run/ovs-vswitchd.pid'.
>
> Program terminated with signal 11, Segmentation fault.
>
> #0  0x0068ad75 in vmxnet3_recv_pkts ()
>
> (gdb) next
>
> The program is not being run.
>
> (gdb) bt
>
> #0  0x0068ad75 in vmxnet3_recv_pkts ()
>
>
>
> hmm, interesting signature
>
> maybe you want to drill down on this function to see where the seg fault 
> was.
>
> Btw, ovs-dpdk master/2.8 just moved to 17.05.2 and I see a release note
>
> for vmxnet3 here
>
> http://dpdk.org/doc/guides-17.05/rel_notes/release_17_05.html
>
> “net/vmxnet3: fix filtering on promiscuous disabling
> net/vmxnet3: fix receive queue memory leak“
>
>
>
>
>
>
> #1  0x007d2252 in rte_eth_rx_burst (nb_pkts=32,
>
> rx_pkts=0x7f3e4bffe7b0, queue_id=0, port_id=0 '\000')
>
> at 
> /data1/build/dpdk-stable-17.05.1/x86_64-native-linuxapp-gcc/include/rte_ethdev.h:2774
>
> #2  netdev_dpdk_rxq_recv (rxq=, batch=0x7f3e4bffe7a0)
>
> at lib/netdev-dpdk.c:1664
>
> #3  0x0072e571 in netdev_rxq_recv (rx=rx@entry=0x7f3e5cc4a680,
>
> batch=batch@entry=0x7f3e4bffe7a0) at lib/netdev.c:701
>
> #4  0x0070ab0e in dp_netdev_process_rxq_port
>
> (pmd=pmd@entry=0x29e5e20, rx=0x7f3e5cc4a680, port_no=1) at
>
> lib/dpif-netdev.c:3114
>
> #5  0x0070ad76 in pmd_thread_main (f_=) at
>
> lib/dpif-netdev.c:3854
>
> #6  0x0077e4b4 in ovsthread_wrapper (aux_=) at
>
> lib/ovs-thread.c:348
>
> #7  0x7f3e5fe07dc5 in start_thread (arg=0x7f3e4bfff700) at
>
> pthread_create.c:308
>
> #8  0x7f3e5f3eb73d in clone () at
>
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
>
> (gdb) bt
>
>
>
> On Wed, Sep 27, 2017 at 9:15 AM, Sun Paul  wrote:
>
> > Hi
>
> >
>
> > I am trying to decode the core dump, however, I am not familiar with
>
> > the command to run the debug, can you please provide the steps?
>
> >
>
> > the linux version is Cen

Re: [ovs-discuss] Problem on creating bridge interface on OVS_DPDK in a VMware VM guest

2017-09-26 Thread BALL SUN
below is the backtrace

# gdb /usr/local/sbin/ovs-vswitchd coredump
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-94.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from /usr/local/sbin/ovs-vswitchd...done.
[New LWP 2716]
[New LWP 2707]
[New LWP 2712]
[New LWP 2711]
[New LWP 2713]
[New LWP 2708]
[New LWP 2714]
[New LWP 2715]
[New LWP 2706]
[New LWP 2710]
[New LWP 2709]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/local/sbin/ovs-vswitchd
--pidfile=/root/run/ovs-vswitchd.pid'.
Program terminated with signal 11, Segmentation fault.
#0  0x0068ad75 in vmxnet3_recv_pkts ()
(gdb) next
The program is not being run.
(gdb) bt
#0  0x0068ad75 in vmxnet3_recv_pkts ()
#1  0x007d2252 in rte_eth_rx_burst (nb_pkts=32,
rx_pkts=0x7f3e4bffe7b0, queue_id=0, port_id=0 '\000')
at 
/data1/build/dpdk-stable-17.05.1/x86_64-native-linuxapp-gcc/include/rte_ethdev.h:2774
#2  netdev_dpdk_rxq_recv (rxq=, batch=0x7f3e4bffe7a0)
at lib/netdev-dpdk.c:1664
#3  0x0072e571 in netdev_rxq_recv (rx=rx@entry=0x7f3e5cc4a680,
batch=batch@entry=0x7f3e4bffe7a0) at lib/netdev.c:701
#4  0x0070ab0e in dp_netdev_process_rxq_port
(pmd=pmd@entry=0x29e5e20, rx=0x7f3e5cc4a680, port_no=1) at
lib/dpif-netdev.c:3114
#5  0x0070ad76 in pmd_thread_main (f_=) at
lib/dpif-netdev.c:3854
#6  0x0077e4b4 in ovsthread_wrapper (aux_=) at
lib/ovs-thread.c:348
#7  0x7f3e5fe07dc5 in start_thread (arg=0x7f3e4bfff700) at
pthread_create.c:308
#8  0x7f3e5f3eb73d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) bt

On Wed, Sep 27, 2017 at 9:15 AM, Sun Paul  wrote:
> Hi
>
> I am trying to decode the core dump; however, I am not familiar with
> the commands to run the debugger. Can you please provide the steps?
>
> The Linux version is CentOS 7 (3.10.0-514.el7.x86_64), and we are
> currently trying to run OVS+DPDK as a VMware guest. The expected topology
> would connect another two nodes to this OVS+DPDK VM.
>
> On Wed, Sep 27, 2017 at 5:13 AM, Darrell Ball  wrote:
>>
>>
>> On 9/25/17, 11:18 PM, "Sun Paul"  wrote:
>>
>> hi
>>
>> I am trying to use the vmxnet3 (:03:00.0) as the dpdk0 interface.
>>
>> # ./dpdk-devbind.py --status
>>
>> Network devices using DPDK-compatible driver
>> 
>> :03:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=
>>
>> Network devices using kernel driver
>> ===
>> :02:01.0 '82545EM Gigabit Ethernet Controller (Copper) 100f'
>> if=ens33 drv=e1000 unused=igb_uio *Active*
>>
>> Other Network devices
>> =
>> :0b:00.0 'VMXNET3 Ethernet Controller 07b0' unused=igb_uio
>>
>>
>> so, when I try to execute "ovs-vsctl add-port g1 dpdk0 -- set
>> Interface dpdk0 type=dpdk options:dpdk-devargs=:03:00.0"
>>
>> I got core dump as below.
>>
>> [Darrell]
>> Pls decode your core dump (btw: probably you will get this request on some 
>> of the other threads you created)
>> What version of rhel is this ?
>> Can you describe your environment and your setup steps?
>>
>>
>> Sep 27 11:28:50 dlocalhost ovs-vsctl: ovs|1|vsctl|INFO|Called as
>> ovs-vsctl del-port g1 dpdk0
>> Sep 27 11:28:50 dlocalhost ovs-vsctl: ovs|2|db_ctl_base|ERR|no
>> port named dpdk0
>> Sep 27 11:28:51 dlocalhost ovs-vswitchd[2858]:
>> ovs|00071|memory|INFO|20076 kB peak resident set size after 10.3
>> seconds
>> Sep 27 11:28:51 dlocalhost ovs-vswitchd:
>> 2017-09-27T03:28:51Z|00071|memory|INFO|20076 kB peak resident set size
>> after 10.3 seconds
>> Sep 27 11:28:51 dlocalhost ovs-vswitchd:
>> 2017-09-27T03:28:51Z|00072|memory|INFO|handlers:2 ports:1
>> revalidators:2 rules:5
>> Sep 27 11:28:51 dlocalhost ovs-vswitchd[2858]:
>> ovs|00072|memory|INFO|handlers:2 ports:1 revalidators:2 rules:5
>> Sep 27 11:29:10 dlocalhost ovs-vsctl: ovs|1|vsctl|INFO|Called as
>> ovs-vsctl add-port g1 dpdk0 -- set Interface dpdk0 type=dpdk
>> options:dpdk-devargs=:03:00.0
>> Sep 27 11:29:10 dlocalhost ovs-vswitchd[2858]:
>> ovs|00073|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  0
>> created.
>> Sep 27 11:29:10 dlocalhost ovs-vswitchd:
>> 2017-09-27T03:29:10Z|00073|dpif_netdev|INFO|PMD thread on numa_id: 0,
>> core id:  0 created.
>> Sep 27 11:29:10 dlocalhost ovs-vswitchd:
>> 2017-09-27T03:29:10Z|00074|dpif_netdev|INFO|There are 1 pmd threads o