Re: [Openstack] Very low bandwidth between instances and routers with OVS, GRE on Debian Jessie (Icehouse)

2014-11-10 Thread Akilesh K
Hi Alberto,
May I know the flavor and image you were using for this test? TSO is a
mechanism that offloads the work of TCP segmentation to the NIC.

I believe that in OpenStack the NIC is a tap interface that KVM attaches
your instance to, so the segmentation work ends up being done by the host
CPU anyway (I am only guessing, not sure).

I would like to know whether this is the case, and whether a larger flavor
and a different image would give you better results without tinkering with
the interface.
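
As a side note, the current offload setting can be checked from inside the
instance with ethtool (the interface name is just an example):

ethtool -k eth0 | grep tcp-segmentation-offload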

Thank you,
Ageeleshwar K

On Mon, Nov 10, 2014 at 5:41 PM, Alberto Molina Coballes 
alb.mol...@gmail.com wrote:

 2014-11-09 21:37 GMT+01:00 George Shuklin george.shuk...@gmail.com:

 Try to disable GRO on the interfaces. It's a well-known bug that causes a
 significant network performance drop with GRE tunnels.

 (Use ethtool)

 Hi George,

 Thanks for your reply, but as mentioned in the first message, disabling GRO
 on the physical interfaces doesn't improve performance (step 5); only
 disabling TSO on the virtual interfaces produces a significant improvement.
 The problem is that this change has to be made every time an instance is
 launched.

 Cheers

 Alberto



Re: [Openstack] Very low bandwidth between instances and routers with OVS, GRE on Debian Jessie (Icehouse)

2014-11-10 Thread Alberto Molina Coballes
2014-11-10 13:45 GMT+01:00 Akilesh K akilesh1...@gmail.com:

 Hi Alberto,
 May I know the flavor and image you were using for this test? TSO is a
 mechanism that offloads the work of TCP segmentation to the NIC.


Hi Akilesh,

I was using the m1.tiny flavor, with 512 MiB of RAM and 1 vCPU, for all the
previous tests.

Testing now with m1.small (2048 MiB of RAM and 1 vCPU), there is a slight
increase in the bandwidth between the instance and its router: ~300 Kbits/sec.
The increase is more noticeable with m1.medium (4096 MiB of RAM and 2 vCPUs),
where a bandwidth of ~700 Kbits/sec is achieved.

In both cases, a bandwidth of 700-800 Mbits/sec is measured after turning TSO off:

ubuntu@test4g:~$ iperf -c 10.0.0.1

Client connecting to 10.0.0.1, TCP port 5001
TCP window size: 85.0 KByte (default)

[  3] local 10.0.0.17 port 50855 connected with 10.0.0.1 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-11.9 sec  1.00 MBytes   704 Kbits/sec

ubuntu@test4g:~$ sudo ethtool -K eth0 tso off

ubuntu@test4g:~$ iperf -c 10.0.0.1

Client connecting to 10.0.0.1, TCP port 5001
TCP window size: 85.0 KByte (default)

[  3] local 10.0.0.17 port 50856 connected with 10.0.0.1 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec   938 MBytes   786 Mbits/sec



 I believe that in OpenStack the NIC is a tap interface that KVM attaches
 your instance to, so the segmentation work ends up being done by the host
 CPU anyway (I am only guessing, not sure).


Yes, KVM is using tap interfaces with the virtio driver.
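
For reference, this can be confirmed from inside the guest with ethtool
(illustrative, the interface name depends on the guest); with virtio the
reported driver is virtio_net:

$ ethtool -i eth0
driver: virtio_net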

 I would like to know whether this is the case, and whether a larger flavor
 and a different image would give you better results without tinkering with
 the interface.


The images used are Ubuntu Trusty, downloaded from
http://images.ubuntu.com/trusty/current, and a Debian Wheezy image tested
previously on another private cloud. No significant differences were found
between the Ubuntu and Debian images.

Thanks Akilesh!

Alberto


Re: [Openstack] Very low bandwidth between instances and routers with OVS, GRE on Debian Jessie (Icehouse)

2014-11-10 Thread Alberto Molina Coballes
Hi,

I'm continuing with the tests...

It seems this issue is related to the virtio driver. The virtio driver is
used by default via the option use_virtio_for_bridges=true in
nova-compute.conf. If this option is disabled, instances are created with an
RTL-8139 network interface, and the bandwidth between the instance and its
router is as expected without disabling TSO:

iperf -c 10.0.0.1

Client connecting to 10.0.0.1, TCP port 5001
TCP window size: 23.5 KByte (default)

[  3] local 10.0.0.19 port 51554 connected with 10.0.0.1 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec   145 MBytes   121 Mbits/sec
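
For reference, a minimal sketch of where that option lives on the compute
node; the exact file and section depend on the release and packaging (on
Icehouse the libvirt options normally sit in a [libvirt] section), so treat
this as illustrative only:

# /etc/nova/nova-compute.conf (illustrative)
[libvirt]
use_virtio_for_bridges=false

# restart the compute service afterwards, e.g.
# service nova-compute restart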

On the other hand, the bandwidth between instances running on the same
compute node is ~3 Gbits/sec with the virtio driver.

More info:
- OVS: 2.3.0
- Linux kernel 3.16-3
- In-tree version of kernel openvswitch module

Cheers

Alberto


[Openstack] Very low bandwidth between instances and routers with OVS, GRE on Debian Jessie (Icehouse)

2014-11-09 Thread Alberto Molina Coballes
Hi,

Debian testing (jessie) is now frozen, so it seems a good time to use it as
the base for an OpenStack deployment. Debian jessie provides OpenStack
Icehouse packages from the official repos, so backported repos are no longer
needed.

The installation procedure works fine in a setup with OVS and GRE tunnels,
but very low bandwidth is observed between instances and routers. I know this
is a frequently asked topic, but the related threads and bugs I have read
point to different causes and solutions.

In order to pin down the problem, several tests were made with iperf and the
proposed solutions applied:

1. Bandwidth between two instances running on the same compute node: 2.77
Gbits/sec
2. Bandwidth between the compute node and the network node (Gigabit
Ethernet): 941 Mbits/sec
3. Bandwidth between an instance and a router running on the network node:
200 Kbits/sec !!!
4. After applying the proposed solution of setting the instance MTU to 1454,
an identical result was obtained.
5. After applying the proposed solution of disabling GRO on the physical
interfaces (network and compute nodes), an identical result was obtained.

A solution was found here: https://bugs.launchpad.net/fuel/+bug/1256289

6. After turning TCP segmentation offload off on the router's internal
interface:

ip netns exec qrouter-XX ethtool -K qr-14460da5-08 tso off

the bandwidth increased to 27 Mbits/sec.

7. After turning TCP segmentation offload off on the instance's interface,
very good performance is achieved: 776 Mbits/sec.
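
For reference, the command used inside the instance (eth0 is the interface
name in my guest):

sudo ethtool -K eth0 tso off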

Solution found :), but the question is: what's the best way to implement it?
It isn't practical to modify the instance's Ethernet configuration by hand
every time an instance is launched.
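
One idea I have not tested yet is to disable TSO at first boot via cloud-init
user-data, roughly along these lines (assuming the image has or can install
ethtool and the NIC is named eth0):

#cloud-config
# untested sketch: disable TSO on the guest NIC at first boot
packages:
  - ethtool
runcmd:
  - [ ethtool, -K, eth0, tso, off ]

passed with nova boot --user-data, but that still means every instance has to
carry the workaround.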

Any tip on this?

Thanks!

Alberto


Re: [Openstack] Very low bandwidth between instances and routers with OVS, GRE on Debian Jessie (Icehouse)

2014-11-09 Thread George Shuklin
Try to disable GRO on the interfaces. It's a well-known bug that causes a
significant network performance drop with GRE tunnels.

(Use ethtool)
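
For example, on the physical interfaces of the compute and network nodes
(interface name is a placeholder):

ethtool -K eth0 gro off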
On Nov 9, 2014 11:17 AM, Alberto Molina Coballes alb.mol...@gmail.com
wrote:

 Hi,

 Debian testing (jessie) is now frozen, so it seems a good time to use it as
 the base for an OpenStack deployment. Debian jessie provides OpenStack
 Icehouse packages from the official repos, so backported repos are no longer
 needed.

 The installation procedure works fine in a setup with OVS and GRE tunnels,
 but very low bandwidth is observed between instances and routers. I know this
 is a frequently asked topic, but the related threads and bugs I have read
 point to different causes and solutions.

 In order to pin down the problem, several tests were made with iperf and the
 proposed solutions applied:

 1. Bandwidth between two instances running on the same compute node: 2.77
 Gbits/sec
 2. Bandwidth between the compute node and the network node (Gigabit
 Ethernet): 941 Mbits/sec
 3. Bandwidth between an instance and a router running on the network node:
 200 Kbits/sec !!!
 4. After applying the proposed solution of setting the instance MTU to 1454,
 an identical result was obtained.
 5. After applying the proposed solution of disabling GRO on the physical
 interfaces (network and compute nodes), an identical result was obtained.

 A solution was found here: https://bugs.launchpad.net/fuel/+bug/1256289

 6. After turning TCP segmentation offload off on the router's internal
 interface:

 ip netns exec qrouter-XX ethtool -K qr-14460da5-08 tso off

 the bandwidth increased to 27 Mbits/sec.

 7. After turning TCP segmentation offload off on the instance's interface,
 very good performance is achieved: 776 Mbits/sec.

 Solution found :), but the question is: what's the best way to implement it?
 It isn't practical to modify the instance's Ethernet configuration by hand
 every time an instance is launched.

 Any tip on this?

 Thanks!

 Alberto