[ovs-discuss] poor ovs-dpdk performance vs ovs

2018-11-15 Thread Robert Brooks
I have a pair of CentOS 7 hosts on a 10gig switch.

Using OVS without dpdk enabled I can get 9.4Gb/s with a simple iperf test.

After switching the receiving host to ovs-dpdk following guidance here:
http://docs.openvswitch.org/en/latest/intro/install/dpdk/

I get 1.02Gb/s with 1500 MTU and 5.75Gb/s with a 9000 MTU.
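For reference, the kind of measurement involved is a plain TCP throughput test, e.g. (iperf3 shown; the original tool and flags are an assumption):

```shell
# On the receiving (ovs-dpdk) host:
iperf3 -s

# On the sending host, a 30-second TCP test:
iperf3 -c <receiver-ip> -t 30
```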

Hardware is a Dell R630 with two 26-core E5-2680 CPUs @ 2.40GHz, 256GB
RAM, and an Intel 82599ES NIC.

I have confirmed the documented kernel boot options are set and 1GB
hugepages are in use.

# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-862.3.2.el7.x86_64
root=UUID=7d4edcad-0fd5-4224-a55f-ff2db81aac27 ro crashkernel=auto
rd.lvm.lv=vg01/swap
rd.lvm.lv=vg01/usr console=ttyS0,115200 iommu=pt intel_iommu=on
default_hugepagesz=1G hugepagesz=1G hugepages=8
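Those settings can be sanity-checked at runtime; a quick sketch (standard Linux/OVS commands; the dpdk-socket-mem value shown is an example, not taken from this setup):

```shell
# Confirm the 1 GiB pages were reserved and a hugetlbfs mount exists
grep -i hugepages /proc/meminfo
mount | grep hugetlbfs

# Confirm OVS was actually started in DPDK mode with memory per NUMA node
ovs-vsctl get Open_vSwitch . other_config
# expected to contain e.g. dpdk-init="true", dpdk-socket-mem="1024,1024"
```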

Software is openvswitch 2.10.1 built using the provided rpm spec file. I
have used both the CentOS-provided dpdk 17.11 and a rebuild of the latest
LTS release, 17.11.4.

I have tried various performance tweaks, including CPU pinning and isolated
CPUs.
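For what it's worth, the pinning tweaks usually boil down to a PMD CPU mask; a minimal sketch (the core numbers here are illustrative, not from the original post):

```shell
# Build the hex affinity mask for cores 1 and 2 (bit N set = core N)
mask=0
for c in 1 2; do
  mask=$((mask | (1 << c)))
done
printf '0x%x\n' "$mask"

# Apply it and confirm the PMD threads are busy polling:
#   ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
#   ovs-appctl dpif-netdev/pmd-stats-show
```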

Switching back to regular ovs returns the performance to close to wire
speed.

Under load the following is logged:

2018-11-15T21:59:24.306Z|00170|poll_loop|INFO|wakeup due to [POLLIN] on fd
74 (character device /dev/net/tun) at lib/netdev-linux.c:1347 (98% CPU
usage)

Config dump:

# ovsdb-client dump Open_vSwitch

AutoAttach, Controller, Flow_Sample_Collector_Set, Flow_Table, IPFIX,
Manager, Mirror and NetFlow tables: empty.

Bridge table (one row; the column layout was mangled in the archive):
_uuid 006f021a-b14d-49fb-a11d-48df2fa2bca1, datapath_id "001b21a6ddc4",
datapath_type netdev, name "br0", ports [3bd79dba-777c-40d0-b573-bf9e027326f4,
60b0661f-2177-4283-a6cb-a80336 (truncated in the archive)]

Interface table (two rows; most columns mangled in the archive):
271077eb-97af-4c2d-a5ee-2c63c9367312  admin_state up, duplex full
4c3f7cdb-fd16-44c7-bb82-4aa8cef0c136  admin_state up, duplex full

Re: [ovs-discuss] Raft issues while removing a node

2018-11-15 Thread ramteja tadishetti
Awesome, thanks!

On Thu, Nov 15, 2018, 9:17 AM Ben Pfaff  wrote:

> On Thu, Nov 08, 2018 at 04:17:03PM -0800, ramteja tadishetti wrote:
> > I am facing trouble in graceful removal of node in a 3 Node RAFT setup.
>
> Thanks for the report.  I followed up on it and found a number of bugs
> in the implementation of the "kick" request.  There is a patch series
> out that fixes all of the bugs that I identified:
>
> https://patchwork.ozlabs.org/project/openvswitch/list/?series=76115
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-15 Thread Gregory Rose

Hi Siva,

I have some updates but I am traveling today so I'll provide them tomorrow.

Thanks,

- Greg


On 11/13/2018 4:02 PM, Gregory Rose wrote:


On 11/13/2018 1:44 PM, Siva Teja ARETI wrote:

Hi Greg,

Did you happen to get a chance to investigate this further?


Unfortunately not.  The IT team replaced a switch in the lab over the
weekend and my access to the test machines is down.  I have a ticket in
to get it fixed and will resume debugging then.

Sorry for the delay.

- Greg



Siva Teja.

On Fri, Nov 9, 2018 at 1:26 PM Gregory Rose wrote:



On 11/8/2018 4:16 PM, Gregory Rose wrote:

On 11/8/2018 3:48 PM, Siva Teja ARETI wrote:



Siva,


When you see the error condition with the local_ip option
on vxlan can you provide me the output of
this command?

*# ip -s link show vxlan_sys_4789*
70: vxlan_sys_4789:  mtu
65470 qdisc noqueue master ovs-system state UNKNOWN mode
DEFAULT group default qlen 1000
    link/ether 0e:9b:58:4a:6e:44 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0  0    0   0 0   0
    TX: bytes  packets  errors  dropped carrier collsns
    0  0    99  8 99  0

Hi Greg,

Here is the output.

[root@vm1 ~]# ip -s link show vxlan_sys_4789
27: vxlan_sys_4789:  mtu 65000
qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT qlen
1000
    link/ether ca:8f:0d:13:08:1f brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0          0        0       0       0      0
    TX: bytes  packets  errors  dropped carrier collsns
    3666796    130957   0       0       0      0

 Siva Teja.

It will help me understand which error you're encountering.

Thanks!

- Greg



Well then obviously I still have errors in my own setup.

Back to the drawing board but I think it's a routing issue in my
case.

Thanks!



Siva,

I've made progress.  I misconfigured my network which led to the
errors you were seeing.  Now I've got that fixed up and I think
I'm reproducing the error you are seeing. When adding the local
IP option the packets are getting
delivered to the VXLAN port but not getting delivered over to the
bridge with the local ip address.

I have two machines A and B.  They are bare metal running OVS
with kvm virtual machines.  Here is the config:

A) IP 10.172.208.214
Bridge test-vxlan   <- ip=10.1.1.3
    Port test-vxlan
    Interface test-vxlan
    type: internal
    Port "vxlan0"
    Interface "vxlan0"
    type: vxlan
    options: {key="100", local_ip="10.1.1.3",
remote_ip="10.172.208.215"}
    Port "vnet4"
    Interface "vnet4"  <- VM 1 with IP 10.1.1.1


B) IP 10.172.208.215
Bridge test-vxlan <- ip=10.1.1.4
    Port "vxlan0"
    Interface "vxlan0"
    type: vxlan
    options: {key="100", local_ip="10.1.1.4",
remote_ip="10.172.208.214"}
    Port "vnet6"
    Interface "vnet6"  < VM 2 with IP 10.1.1.2
    Port test-vxlan
    Interface test-vxlan
    type: internal
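
For anyone trying to reproduce this, the machine-B config above corresponds roughly to these commands (the exact commands used are an assumption; the addresses come from the thread, the /24 netmask is assumed):

```shell
ovs-vsctl add-br test-vxlan
ovs-vsctl add-port test-vxlan vxlan0 -- set interface vxlan0 type=vxlan \
    options:key=100 options:local_ip=10.1.1.4 options:remote_ip=10.172.208.214

# Give the bridge-internal port the overlay address
ip addr add 10.1.1.4/24 dev test-vxlan
ip link set test-vxlan up
```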

From VM 2 on machine B I start a ping from 10.1.1.2 -> 10.1.1.1

roseg@ubuntu-1604-base:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
From 10.1.1.2 icmp_seq=1 Destination Host Unreachable
From 10.1.1.2 icmp_seq=2 Destination Host Unreachable
From 10.1.1.2 icmp_seq=3 Destination Host Unreachable
From 10.1.1.2 icmp_seq=4 Destination Host Unreachable

On machine B we can see the vxlan_sys_4789 tx counter increasing:

[root@sc2-hs2-b2515 ~]# ip -s link show vxlan_sys_4789
76: vxlan_sys_4789:  mtu 65470
qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT group
default qlen 1000
    link/ether f2:3a:d4:fd:b3:46 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0  0    0   0   0   0
    TX: bytes  packets  errors  dropped carrier collsns
    4200   150  0   8   0   0

On machine A we can see the vxlan_sys_4789 rx counter increasing:

53: vxlan_sys_4789:  mtu 65470
qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 06:4b:21:d8:af:8b brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    4200   150  0   0   0   0
    TX: bytes  packets  errors  dropped carrier collsns
    0  0    0   8   0   0

However, even though there is no indication of drops, the packets
are not getting over to the test-vxlan 

Re: [ovs-discuss] Raft issues while removing a node

2018-11-15 Thread Ben Pfaff
On Thu, Nov 08, 2018 at 04:17:03PM -0800, ramteja tadishetti wrote:
> I am facing trouble in graceful removal of node in a 3 Node RAFT setup.

Thanks for the report.  I followed up on it and found a number of bugs
in the implementation of the "kick" request.  There is a patch series
out that fixes all of the bugs that I identified:
https://patchwork.ozlabs.org/project/openvswitch/list/?series=76115
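
For reference, the node removal being discussed goes through ovsdb-server's cluster commands; roughly (the ctl socket path and server id below are placeholders):

```shell
# Inspect cluster membership and find the id of the server to remove
ovs-appctl -t /var/run/openvswitch/ovsdb-server.ctl cluster/status Open_vSwitch

# Ask the cluster to remove ("kick") that server
ovs-appctl -t /var/run/openvswitch/ovsdb-server.ctl cluster/kick Open_vSwitch <server-id>
```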


Re: [ovs-discuss] Physical interface vs interface under OVS bridge Performance/throughput impact

2018-11-15 Thread Grant Taylor via discuss

On 11/15/2018 07:48 AM, Ben Pfaff wrote:
> No.  (Why would running OVS in the simplest way increase the performance
> cost?)


Is it possible that there's slightly more overhead in the algorithm(s) 
of the simple L2 learning functionality of the "NORMAL" action? 
Compared to simple rules that explicitly match the frame and specify an 
action?  (Assuming that there aren't any other flows before the 
aforementioned flows would match and act.)




--
Grant. . . .
unix || die





Re: [ovs-discuss] Physical interface vs interface under OVS bridge Performance/throughput impact

2018-11-15 Thread Ben Pfaff
On Thu, Nov 15, 2018 at 10:03:30AM -0700, Grant Taylor via discuss wrote:
> On 11/15/2018 07:48 AM, Ben Pfaff wrote:
> >No.  (Why would running OVS in the simplest way increase the performance
> >cost?)
> 
> Is it possible that there's slightly more overhead in the algorithm(s) of
> the simple L2 learning functionality of the "NORMAL" action? Compared to
> simple rules that explicitly match the frame and specify an action?
> (Assuming that there aren't any other flows before the aforementioned flows
> would match and act.)

I think the difference would be lost in the noise.
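
Concretely, the two setups being compared look something like this (bridge name and port numbers illustrative):

```shell
# Explicit flows matching on ingress port:
ovs-ofctl add-flow br0 "priority=100,in_port=1,actions=output:2"
ovs-ofctl add-flow br0 "priority=100,in_port=2,actions=output:1"

# Versus the default MAC-learning behavior:
ovs-ofctl add-flow br0 "priority=0,actions=NORMAL"
```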


Re: [ovs-discuss] Supporting L2 VPN with MPLS

2018-11-15 Thread Ben Pfaff
On Wed, Nov 14, 2018 at 02:03:54PM +0530, Martin Varghese wrote:
> When using the OVS actions to push a label stack, the labels get added “in
> the middle”, between the customer’s ETH header and the original payload.
> This is fine for an L3 service where the outermost eth layer gets removed
> when pushing to the tunnel, but for L2 services, the MPLS stack should be
> “outside”  the original packet.
> 
> Does OVS support any alternate MPLS actions to support L2VPN

Not currently.
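
For context, the existing push behavior described above looks like this in OpenFlow terms (the flow match and label value are illustrative); the label lands after the Ethernet header, which is what makes it L3-only:

```shell
# Push one MPLS label (unicast ethertype 0x8847) onto IPv4 traffic:
ovs-ofctl add-flow br0 \
    "dl_type=0x0800,actions=push_mpls:0x8847,set_field:100->mpls_label,output:2"
```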


Re: [ovs-discuss] Physical interface vs interface under OVS bridge Performance/throughput impact

2018-11-15 Thread Ben Pfaff
On Wed, Nov 14, 2018 at 05:54:52PM +, Srinivas via discuss wrote:
> Hello all,
>
> Let's say I have a physical interface eth0 that I move under an
> ovs-bridge br0.
>
> a) What would be the performance / throughput impact as a result of the
> physical interface being part of the OVS bridge now?  The reason I ask
> is that there is probably an extra hop the packet will have to take to
> go from the physical interface through the OVS stack before it reaches
> the IP on the bridge.

There is definitely an extra hop and extra cost.  The difference is
usually minimal.

> b) Would this performance impact be greater if I run OVS as a plain L2
> learning switch, i.e. with no OpenFlow rules configured?

No.  (Why would running OVS in the simplest way increase the performance
cost?)
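
For reference, the move being described is the standard bridge attach (interface names and address illustrative):

```shell
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0

# Move the IP from the physical interface to the bridge-internal port
ip addr flush dev eth0
ip addr add 192.0.2.10/24 dev br0
ip link set br0 up
```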


[ovs-discuss] Could not Add Network Device p4p1 to ofproto (No Such Device)

2018-11-15 Thread Ramzah Rehman
I have an interface p4p1 on my machine. When I add this interface to a
kernel-based bridge it is added successfully, but when I change the
bridge to a userspace-based bridge and try to add p4p1 to it, I get a
"Could not Add Network Device p4p1 to ofproto (No Such Device)" error.

ovs-vsctl -- set bridge br1 datapath_type=netdev
ovs-vsctl add-port br1 p4p1 -- set Interface p4p1 ofport=1
 Error detected while setting up 'p4p1': Could not Add Network Device p4p1
to ofproto (No Such Device). See ovs-vswitchd log for details.
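
Not from the thread, but a common cause worth noting: with datapath_type=netdev, a physical NIC usually has to be bound to a DPDK-capable driver and added as type=dpdk with its PCI address, rather than by kernel name. A sketch assuming that is the intent (the PCI address below is a placeholder):

```shell
# Bind the NIC to a DPDK-capable driver
dpdk-devbind.py --bind=vfio-pci 0000:04:00.0

# Add it to the userspace bridge as a DPDK port (OVS >= 2.7 syntax)
ovs-vsctl add-port br1 p4p1 -- set Interface p4p1 type=dpdk \
    options:dpdk-devargs=0000:04:00.0
```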

Best Regards,
Ramzah Rehman