[ovs-discuss] How instance get metadata with OVN

2017-09-22 Thread Vikrant Aggarwal
Hi Folks,

I am trying to understand how instances get metadata when OVN is used as the
mechanism driver. I read the theory in [1] but was not able to understand its
practical implementation.

I created two private networks (internal1 and internal2); one of them
(internal1) is connected to a router and the other one (internal2) is isolated.

I tried to spin up CirrOS instances on both networks. Both instances are able
to get metadata.

List of metadata related processes running on devstack node.

~~~
stack@testuser-KVM:~/devstack$ ps -ef | grep -i metadata
stack     1067      1  0 Sep22 ?        00:00:39 /usr/bin/python /usr/local/bin/networking-ovn-metadata-agent --config-file /etc/neutron/networking_ovn_metadata_agent.ini
stack     1414   1067  0 Sep22 ?        00:00:17 /usr/bin/python /usr/local/bin/networking-ovn-metadata-agent --config-file /etc/neutron/networking_ovn_metadata_agent.ini
stack     1415   1067  0 Sep22 ?        00:00:17 /usr/bin/python /usr/local/bin/networking-ovn-metadata-agent --config-file /etc/neutron/networking_ovn_metadata_agent.ini
stack    25192      1  0 10:43 ?        00:00:00 haproxy -f /opt/stack/data/neutron/ovn-metadata-proxy/54f264d5-c2f5-409c-9bd2-dbcec52edffd.conf
stack    27424      1  0 11:24 ?        00:00:00 haproxy -f /opt/stack/data/neutron/ovn-metadata-proxy/86eefb22-1417-407a-b56f-a1f3f147ee4e.conf
~~~

Default content of neutron ovn metadata file.

~~~
stack@testuser-KVM:~/devstack$ egrep -v "^(#|$)" /etc/neutron/networking_ovn_metadata_agent.ini
[DEFAULT]
state_path = /opt/stack/data/neutron
metadata_workers = 2
nova_metadata_ip = 192.168.122.98
debug = True
[ovs]
ovsdb_connection = unix:/usr/local/var/run/openvswitch/db.sock
[agent]
root_helper_daemon = sudo /usr/local/bin/neutron-rootwrap-daemon
/etc/neutron/rootwrap.conf
[ovn]
ovn_sb_connection = tcp:192.168.122.98:6642
~~~

I don't see any NAT rule inside the network namespace that would redirect
requests for "169.254.169.254" to the nova metadata IP mentioned in the OVN
metadata configuration file.

~~~
stack@testuser-KVM:~/devstack$ sudo ip netns list
ovnmeta-86eefb22-1417-407a-b56f-a1f3f147ee4e (id: 1)
ovnmeta-54f264d5-c2f5-409c-9bd2-dbcec52edffd (id: 0)
stack@testuser-KVM:~/devstack$ sudo ip netns exec ovnmeta-86eefb22-1417-407a-b56f-a1f3f147ee4e iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target prot opt source   destination

Chain INPUT (policy ACCEPT)
target prot opt source   destination

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination

Chain POSTROUTING (policy ACCEPT)
target prot opt source   destination
~~~

Content of the haproxy configuration file.

~~~
root@testuser-KVM:~/devstack# cat /opt/stack/data/neutron/ovn-metadata-proxy/86eefb22-1417-407a-b56f-a1f3f147ee4e.conf

global
    log         /dev/log local0 debug
    user        stack
    group       stack
    maxconn     1024
    pidfile     /opt/stack/data/neutron/external/pids/86eefb22-1417-407a-b56f-a1f3f147ee4e.pid
    daemon

defaults
    log                     global
    mode                    http
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  forwardfor
    retries                 3
    timeout http-request    30s
    timeout connect         30s
    timeout client          32s
    timeout server          32s
    timeout http-keep-alive 30s

listen listener
    bind 0.0.0.0:80
    server metadata /opt/stack/data/neutron/metadata_proxy
    http-request add-header X-OVN-Network-ID 86eefb22-1417-407a-b56f-a1f3f147ee4e
~~~
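For what it's worth, the haproxy instance above listens on 0.0.0.0:80 inside the ovnmeta namespace and forwards requests to the agent over the Unix socket at /opt/stack/data/neutron/metadata_proxy, tagging each request with the network UUID. A minimal Python sketch of that header tagging (the function name is mine for illustration, not agent code):

```python
def tag_metadata_request(headers, network_id):
    # Mimics haproxy's "http-request add-header X-OVN-Network-ID <uuid>":
    # the request is tagged with the network UUID so the agent can tell
    # which network's proxy it came through.
    tagged = dict(headers)
    tagged["X-OVN-Network-ID"] = network_id
    return tagged

hdrs = tag_metadata_request({"Host": "169.254.169.254"},
                            "86eefb22-1417-407a-b56f-a1f3f147ee4e")
```

The agent can then combine this header with the client source IP to identify the requesting port before proxying on to Nova.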

It seems that the isolated-metadata option is enabled by default in my setup,
but I don't see such a setting in the neutron OVN configuration files. I
suspect it is enabled because even when a network is not connected to a router,
an instance spawned on that isolated network is still able to get metadata.

How is the instance able to get metadata in both cases, the isolated network
and the network connected to a router?

[1]
https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html


Thanks & Regards,
Vikrant Aggarwal
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-09-22 Thread Flavio Leitner
On Fri, 22 Sep 2017 15:02:20 +0800
Sun Paul  wrote:

> hi
> 
> we have tried that, e.g. if we set it to 0x22, we are still only able to
> see 2 CPUs at 100%. Why?

Because that's what you told OVS to do:
the mask 0x22 is binary 0010 0010, and each '1' there represents a CPU.
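To make the bit positions concrete, here is a small Python sketch (illustrative only, not OVS code) that expands a pmd-cpu-mask value into the CPU cores it selects:

```python
def cores_from_mask(mask_hex):
    # Each set bit in pmd-cpu-mask selects one CPU core; bit 0 is CPU 0.
    mask = int(mask_hex, 16)
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

print(cores_from_mask("0x22"))  # [1, 5] -> two PMD threads, on cores 1 and 5
print(cores_from_mask("0x6"))   # [1, 2]
```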

-- 
Flavio



[ovs-discuss] Cannot dump packet capture using dpdk-pdump

2017-09-22 Thread Sun Paul
Hi

I am trying to configure dpdk-pdump to capture packets on the OVS bridge,
but it is failing: no packets are captured on ingress or egress. Any idea?


the command I used is
./dpdk-pdump -- --pdump
port=1,queue=*,rx-dev=/tmp/pkts_rx.pcap,tx-dev=/tmp/pkts_tx.pcap
--server-socket-path=/usr/local/var/run/openvswitch

the output on the screen is shown below.

EAL: Detected 24 lcore(s)
EAL: Probing VFIO support...
EAL: WARNING: Address Space Layout Randomization (ASLR) is enabled in the kernel.
EAL:    This may cause issues with mapping memory into secondary processes
EAL: PCI device :04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:150e net_e1000_igb
EAL: PCI device :04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:150e net_e1000_igb
EAL: PCI device :04:00.2 on NUMA socket 0
EAL:   probe driver: 8086:150e net_e1000_igb
EAL: PCI device :04:00.3 on NUMA socket 0
EAL:   probe driver: 8086:150e net_e1000_igb
EAL: PCI device :07:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device :07:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device :07:00.2 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device :07:00.3 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
PMD: Initializing pmd_pcap for net_pcap_rx_0
PMD: Creating pcap-backed ethdev on numa socket 4294967295
Port 2 MAC: 00 00 00 01 02 03
PMD: Initializing pmd_pcap for net_pcap_tx_0
PMD: Creating pcap-backed ethdev on numa socket 4294967295
Port 3 MAC: 00 00 00 01 02 03

the port assignment is

# ovs-ofctl show gtp1
OFPT_FEATURES_REPLY (xid=0x2): dpid:001b21a7f596
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan
mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src
mod_tp_dst
 1(dpdk1): addr:00:1b:21:a7:f5:97
 config: 0
 state:  LINK_DOWN
 current:AUTO_NEG
 speed: 0 Mbps now, 0 Mbps max
 2(dpdk0): addr:00:1b:21:a7:f5:96
 config: 0
 state:  LINK_DOWN
 current:AUTO_NEG
 speed: 0 Mbps now, 0 Mbps max
 LOCAL(gtp1): addr:00:1b:21:a7:f5:96
 config: 0
 state:  0
 current:10MB-FD COPPER
 speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0


[ovs-discuss] CPU frequency is not max when running OVS-DPDK

2017-09-22 Thread 王志克
Hi All,

I am using OVS-DPDK, and the target CPUs are running at 100%. However, I
notice the CPU frequency does NOT reach its maximum value, so performance may
not reach its best. For now I have only 2 PMDs, each on a hyper-thread of one
physical core.

I am not sure of the reason, and going through the BIOS and searching online
gave me no idea. Can someone kindly indicate the root cause? I am using
CentOS 7. Appreciate your help.

# cat /sys/devices/system/cpu/cpu4/cpufreq/cpuinfo_cur_freq
2599980    <- I expect it would be 300
# cat /sys/devices/system/cpu/cpu4/cpufreq/cpuinfo_max_freq
300
# lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):64
On-line CPU(s) list:   0-63
Thread(s) per core:2
Core(s) per socket:16
Socket(s): 2
NUMA node(s):  2
Vendor ID: GenuineIntel
CPU family:6
Model: 79
Model name:Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
Stepping:  1
CPU MHz:   1199.953
BogoMIPS:  4195.88
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  256K
L3 cache:  40960K
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
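One common factor behind readings like the above (a hedged guess, not a confirmed diagnosis) is the cpufreq scaling governor on the PMD cores: a busy-polling core is only allowed to reach its maximum frequency when the governor permits it, e.g. "performance" rather than "powersave". A small Python sketch reading it from the standard Linux sysfs path:

```python
from pathlib import Path

def read_governor(cpu, root=Path("/sys/devices/system/cpu")):
    # "performance" lets a core run at its highest available frequency;
    # "powersave" (the intel_pstate default on some distros) may keep it lower.
    path = root / f"cpu{cpu}" / "cpufreq" / "scaling_governor"
    return path.read_text().strip() if path.exists() else None

print(read_governor(4))  # e.g. "powersave" or "performance"; None if cpufreq is absent
```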


Br,
Wang Zhike


Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-09-22 Thread Per-Erik Westerberg
Hi,

Using bit-mask 0x22 still has only two bits set, which results in two CPUs
being used; use 0x33 or 0x0f for four CPUs, and so on.
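Conversely, a quick Python sketch (illustrative only) for building a mask from the cores you want, and counting how many PMD threads a given mask allows:

```python
def mask_for_cores(cores):
    # OR together one bit per desired CPU core id.
    mask = 0
    for core in cores:
        mask |= 1 << core
    return hex(mask)

def pmd_count(mask_hex):
    # The number of PMD threads equals the number of set bits in the mask.
    return bin(int(mask_hex, 16)).count("1")

print(mask_for_cores([0, 1, 4, 5]))  # 0x33 -> four cores
print(pmd_count("0x22"))             # 2
print(pmd_count("0x0f"))             # 4
```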

  Regards / Per-Erik

On fre, 2017-09-22 at 15:02 +0800, Sun Paul wrote:
> hi
> 
> we have tried that, e.g. if we set it to 0x22, we are still only able to
> see 2 CPUs at 100%. Why?
> 
> # ovs-vsctl get Open_vSwitch . other_config
> {dpdk-init="true", n-dpdk-rxqs="2", pmd-cpu-mask="0x22"}
> 
> 
> Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
> ovs|00196|netdev_dpdk|WARN|Failed to enable flow control on device 0
> Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
> ovs|00197|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  2
> destroyed.
> Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
> ovs|00198|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  5
> created.
> Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
> ovs|00199|dpif_netdev|INFO|There are 2 pmd threads on numa node 0
> 
> 
> 
> On Wed, Sep 20, 2017 at 8:59 PM, Flavio Leitner  wrote:
> >
> > On Wed, 20 Sep 2017 09:13:55 +0800
> > Sun Paul  wrote:
> >
> > > sorry about that
> > >
> > > # ovs-vsctl get Open_vSwitch . other_config
> > > {dpdk-init="true", n-dpdk-rxqs="2", pmd-cpu-mask="0x6"}
> >
> > Have you tried to change pmd-cpu-mask? Because that is mask of bits
> > representing the CPUs you allow PMDs to be created.  In this case,
> > you are saying '0x6' (binary mask: 0110), so only two CPUs.
> >
> > Also check ovs-vswitchd.conf.db(5) man-page:
> >
> >    other_config : pmd-cpu-mask: optional string
> >       Specifies CPU mask for setting the cpu affinity of PMD (Poll Mode
> >       Driver) threads. Value should be in the form of hex string, similar
> >       to the dpdk EAL '-c COREMASK' option input or the 'taskset' mask
> >       input.
> >
> >       The lowest order bit corresponds to the first CPU core. A set bit
> >       means the corresponding core is available and a pmd thread will be
> >       created and pinned to it. If the input does not cover all cores,
> >       those uncovered cores are considered not set.
> >
> >       If not specified, one pmd thread will be created for each numa node
> >       and pinned to any available core on the numa node by default.
> >
> > fbl
> >
> > > On Tue, Sep 19, 2017 at 8:02 PM, Flavio Leitner  wrote:
> > > > On Tue, 19 Sep 2017 13:43:25 +0800
> > > > Sun Paul  wrote:
> > > >
> > > > > Hi
> > > > >
> > > > > below is the output. currently, I am only able to set to use
> > > > > two CPU for PMD.
> > > >
> > > > I was referring to the output of
> > > > ovs-vsctl get Open_vSwitch . other_config
> > > >
> > > > fbl
> > > >
> > > > > # ovs-vsctl show
> > > > > ea7f2b40-b7b3-4f11-a81f-cf25a56f8172
> > > > > Bridge "gtp1"
> > > > > Port "dpdk0"
> > > > > Interface "dpdk0"
> > > > > type: dpdk
> > > > > options: {dpdk-devargs=":04:00.2", n_rxq="4"}
> > > > > Port "gtp1"
> > > > > Interface "gtp1"
> > > > > type: internal
> > > > > Port "dpdk1"
> > > > > Interface "dpdk1"
> > > > > type: dpdk
> > > > > options: {dpdk-devargs=":04:00.3", n_rxq="4"}
> > > > >
> > > > > On Tue, Sep 19, 2017 at 4:09 AM, Flavio Leitner  wrote:
> > > > > > On Mon, 18 Sep 2017 16:51:33 +0800
> > > > > > Sun Paul  wrote:
> > > > > >
> > > > > > > Hi
> > > > > > >
> > > > > > > I have two interfaces mapped to DPDK, and run OVS on top of it. I
> > > > > > > tried to set the cpu mask, but I cannot allocate more than 2 CPUs
> > > > > > > for PMD threads. Any idea?
> > > > > > >
> > > > > > > # /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
> > > > > > > pmd thread numa_id 0 core_id 1:
> > > > > > > isolated : false
> > > > > > > port: dpdk0 queue-id: 0
> > > > > > > pmd thread numa_id 0 core_id 2:
> > > > > > > isolated : false
> > > > > > > port: dpdk1 queue-id: 0
> > > > > >
> > > > > > Could you post the DPDK configuration and what do you want?
> > > > > >
> > > > > > Thanks,
> > > > > > --
> > > > > > Flavio
> >
> > --
> > Flavio


Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-09-22 Thread Sun Paul
hi

we have tried that, e.g. if we set it to 0x22, we are still only able to
see 2 CPUs at 100%. Why?

# ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", n-dpdk-rxqs="2", pmd-cpu-mask="0x22"}


Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
ovs|00196|netdev_dpdk|WARN|Failed to enable flow control on device 0
Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
ovs|00197|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  2
destroyed.
Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
ovs|00198|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  5
created.
Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
ovs|00199|dpif_netdev|INFO|There are 2 pmd threads on numa node 0



On Wed, Sep 20, 2017 at 8:59 PM, Flavio Leitner  wrote:
> On Wed, 20 Sep 2017 09:13:55 +0800
> Sun Paul  wrote:
>
>> sorry about that
>>
>> # ovs-vsctl get Open_vSwitch . other_config
>> {dpdk-init="true", n-dpdk-rxqs="2", pmd-cpu-mask="0x6"}
>
> Have you tried to change pmd-cpu-mask? Because that is mask of bits
> representing the CPUs you allow PMDs to be created.  In this case,
> you are saying '0x6' (binary mask: 0110), so only two CPUs.
>
> Also check ovs-vswitchd.conf.db(5) man-page:
>
>other_config : pmd-cpu-mask: optional string
>   Specifies  CPU  mask for setting the cpu affinity of PMD (Poll 
> Mode
>   Driver) threads. Value should be in the form of hex string, 
> similar
>   to  the  dpdk  EAL ’-c COREMASK’ option input or the ’taskset’ 
> mask
>   input.
>
>   The lowest order bit corresponds to the first CPU core. A  set  
> bit
>   means  the corresponding core is available and a pmd thread 
> will be
>   created and pinned to it. If the input does not  cover  all  
> cores,
>   those uncovered cores are considered not set.
>
>   If not specified, one pmd thread will be created for each numa 
> node
>   and pinned to any available core on the numa node by default.
>
> fbl
>
>>
>> On Tue, Sep 19, 2017 at 8:02 PM, Flavio Leitner  wrote:
>> > On Tue, 19 Sep 2017 13:43:25 +0800
>> > Sun Paul  wrote:
>> >
>> >> Hi
>> >>
>> >> below is the output. currently, I am only able to set to use two CPU for 
>> >> PMD.
>> >
>> >
>> > I was referring to the output of
>> > ovs-vsctl get Open_vSwitch . other_config
>> >
>> > fbl
>> >
>> >>
>> >> # ovs-vsctl show
>> >> ea7f2b40-b7b3-4f11-a81f-cf25a56f8172
>> >> Bridge "gtp1"
>> >> Port "dpdk0"
>> >> Interface "dpdk0"
>> >> type: dpdk
>> >> options: {dpdk-devargs=":04:00.2", n_rxq="4"}
>> >> Port "gtp1"
>> >> Interface "gtp1"
>> >> type: internal
>> >> Port "dpdk1"
>> >> Interface "dpdk1"
>> >> type: dpdk
>> >> options: {dpdk-devargs=":04:00.3", n_rxq="4"}
>> >>
>> >>
>> >>
>> >> On Tue, Sep 19, 2017 at 4:09 AM, Flavio Leitner  wrote:
>> >> > On Mon, 18 Sep 2017 16:51:33 +0800
>> >> > Sun Paul  wrote:
>> >> >
>> >> >> Hi
>> >> >>
>> >> >> I have two interfaces mapped to DPDK, and run OVS on top of it. I
>> >> >> tried to set the cpu mask, but I cannot allocate more than 2 CPUs
>> >> >> for PMD threads. Any idea?
>> >> >>
>> >> >> # /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
>> >> >> pmd thread numa_id 0 core_id 1:
>> >> >> isolated : false
>> >> >> port: dpdk0 queue-id: 0
>> >> >> pmd thread numa_id 0 core_id 2:
>> >> >> isolated : false
>> >> >> port: dpdk1 queue-id: 0
>> >> >
>> >> > Could you post the DPDK configuration and what do you want?
>> >> >
>> >> > Thanks,
>> >> > --
>> >> > Flavio
>> >> >
>> >
>> >
>> >
>> > --
>> > Flavio
>> >
>
>
>
> --
> Flavio
>