Re: [Openstack] poor bandwidth across instances running on same host

2017-04-18 Thread Manuel Sopena Ballesteros
Hi Tomas,

I would expect higher bandwidth. I was in contact with SamYaple from the
#openstack IRC channel, who got 40 Gbit/s in his lab. The main differences
between our environments:

Me:

· DPDK: NO
· SR-IOV: NO
· OS: CentOS 7.3 (kernel 3.10)

SamYaple:

· DPDK: YES
· SR-IOV: YES
· OS: Ubuntu (kernel 4.1)
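
As a side note, whether the NIC is even SR-IOV-capable can be checked from
sysfs. A minimal sketch, assuming the interface name from the ovs-vsctl dump
below:

[root@nova-compute ~]# cat /sys/class/net/eno50336512/device/sriov_totalvfs   # prints max VFs; the file is absent if the NIC lacks SR-IOV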

Do you think the kernel version could have a big impact on OVS performance?
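
One quick check before blaming the kernel, as a sketch: if aggregate
throughput scales with parallel iperf streams, a single flow is CPU-bound
rather than limited by the OVS datapath itself. Guest IP as in the iperf
results quoted below:

[centos@centos7 ~]$ iperf -c 192.168.1.105 -P 4 -t 30   # 4 parallel TCP streams, 30 seconds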

Thank you very much

Manuel

From: Tomáš Vondra [mailto:von...@homeatcloud.cz]
Sent: Tuesday, April 18, 2017 10:12 PM
To: Manuel Sopena Ballesteros
Cc: openstack@lists.openstack.org
Subject: RE: [Openstack] poor bandwidth across instances running on same host

Sorry to shatter your expectations, but those numbers are perfectly OK.
I was testing on an HPE DL380 Gen9 with Intel Xeon E5-2630 v3
<https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5-2630+v3+%40+2.40GHz&id=2386&cpuCount=2>
and I got these speeds between two KVM VMs on the same host using netperf:
28186 Mb/s with a Linux bridge
18552 Mb/s with Open vSwitch and the full Neutron setup with iptables.

How much would you like to achieve? I got 38686 Mb/s on dev lo on the physical
server and 47894 Mb/s on a VM. You could turn to OVS with DPDK as the data
path, but I doubt it will do much. SR-IOV might, but I never tried any of
this. I'm satisfied with the speed for my purposes.
Tomas from Homeatcloud


Re: [Openstack] poor bandwidth across instances running on same host

2017-04-18 Thread Chris Friesen

On 04/18/2017 01:11 AM, Manuel Sopena Ballesteros wrote:

Hi all,

I created 2 instances on the same compute node and tested the bandwidth between
them; surprisingly, iperf tells me I got only 16.1 Gbit/s. Then I changed the
firewall from hybrid iptables to OVS; the bandwidth improved a little, to
17.5 Gbit/s, but is still far below what I expected.


Just curious, did you constrain the two instances so they were running on the
same NUMA node of that host? If not, you're going to hit inter-socket NUMA overhead.
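
If they weren't, one way to confine both guests to a single NUMA node is via
Nova flavor extra specs. A minimal sketch (the flavor name is hypothetical,
and the host must expose NUMA topology to Nova):

openstack flavor set m1.iperf --property hw:numa_nodes=1         # confine the guest to one NUMA node
openstack flavor set m1.iperf --property hw:cpu_policy=dedicated # optionally pin vCPUs to host cores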


Chris



Re: [Openstack] poor bandwidth across instances running on same host

2017-04-18 Thread Tomáš Vondra
Sorry to shatter your expectations, but those numbers are perfectly OK.

I was testing on an HPE DL380 Gen9 with Intel Xeon E5-2630 v3
<https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5-2630+v3+%40+2.40GHz&id=2386&cpuCount=2>
and I got these speeds between two KVM VMs on the same host using netperf:

28186 Mb/s with a Linux bridge

18552 Mb/s with Open vSwitch and the full Neutron setup with iptables.
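
For reference, a netperf invocation of the kind used for these numbers might
look like the following. A sketch only; netserver must already be running in
the receiving VM, and the guest IP is taken from the iperf output below:

netserver                                      # in the receiving VM
netperf -H 192.168.1.105 -t TCP_STREAM -l 30   # in the sending VM: 30-second TCP stream test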

 

How much would you like to achieve? I got 38686 Mb/s on dev lo on the physical
server and 47894 Mb/s on a VM. You could turn to OVS with DPDK as the data
path, but I doubt it will do much. SR-IOV might, but I never tried any of
this. I'm satisfied with the speed for my purposes.
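
If you do try DPDK, the datapath switch itself is roughly the following on a
DPDK-enabled OVS build (2.6 or newer). A sketch only; a real deployment also
needs hugepages and PMD cores configured:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Bridge br-int datapath_type=netdev   # switch to the userspace (DPDK) datapath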

Tomas from Homeatcloud

 


[Openstack] poor bandwidth across instances running on same host

2017-04-18 Thread Manuel Sopena Ballesteros
Hi all,

I created 2 instances on the same compute node and tested the bandwidth between
them; surprisingly, iperf tells me I got only 16.1 Gbit/s. Then I changed the
firewall from hybrid iptables to OVS; the bandwidth improved a little, to
17.5 Gbit/s, but is still far below what I expected.

ml2_config.ini config file


[root@nova-compute ~]# docker exec -t neutron_openvswitch_agent vi 
/var/lib/kolla/config_files/ml2_config.ini

network_vlan_ranges =

[ml2_type_flat]
flat_networks = physnet1

[ml2_type_vxlan]
vni_ranges = 1:1000
vxlan_group = 239.1.1.1

[securitygroup]
firewall_driver = openvswitch
#firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[agent]
tunnel_types = vxlan
l2_population = true
arp_responder = true

[ovs]
bridge_mappings = physnet1:br-ex
ovsdb_connection = tcp:129.94.72.54:6640
local_ip = 10.1.0.12
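
After changing firewall_driver, the agent has to be restarted to pick it up.
In this kolla deployment that is presumably something like:

[root@nova-compute ~]# docker restart neutron_openvswitch_agent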



OVS config


[root@nova-compute ~]# docker exec openvswitch_vswitchd ovs-vsctl show

306d62c4-8e35-45e0-838e-53ebe81f1d06
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "eno50336512"
            Interface "eno50336512"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a01000b"
            Interface "vxlan-0a01000b"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.1.0.12", out_key=flow, remote_ip="10.1.0.11"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tapa26ee521-3b"
            tag: 2
            Interface "tapa26ee521-3b"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "tap1f76851b-ea"
            tag: 2
            Interface "tap1f76851b-ea"






Iperf results



[centos@centos7 ~]$ iperf -c 192.168.1.105
------------------------------------------------------------
Client connecting to 192.168.1.105, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.101 port 48522 connected with 192.168.1.105 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  20.3 GBytes  17.5 Gbits/sec



OVS info

[root@nova-compute ~]# docker exec openvswitch_vswitchd modinfo openvswitch
filename:       /lib/modules/3.10.0-514.el7.x86_64/kernel/net/openvswitch/openvswitch.ko
license:        GPL
description:    Open vSwitch switching datapath
rhelversion:    7.3
srcversion:     B31AE95554C9D9A0067F935
depends:        nf_conntrack,nf_nat,libcrc32c,nf_nat_ipv6,nf_nat_ipv4,nf_defrag_ipv6
intree:         Y
vermagic:       3.10.0-514.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        D4:88:63:A7:C1:6F:CC:27:41:23:E6:29:8F:74:F0:57:AF:19:FC:54
sig_hashalgo:   sha256


As far as I know the communication path is VM <--OVS--> VM and the Linux
bridge is not involved.
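
One way to confirm that, as a sketch: the hybrid iptables driver creates a
per-port qbrXXXX Linux bridge, so with the native OVS firewall none should
show up:

[root@nova-compute ~]# ip link show | grep qbr   # expect no output with firewall_driver = openvswitch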

What could be throttling the network traffic and what can I do to improve 
performance?

Thank you very much

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: 
manuel...@garvan.org.au

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack