Hi,
I think you're right, the NIC is not offloading: when testing the
instances I see a process, ksoftirqd/0, with high CPU on the compute hosts.
Doing iperf on bare metal I don't see this process with high CPU.
Is my assumption right?
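For what it's worth, a minimal way to double-check that reading (a sketch, assuming a Linux compute host; nothing OpenStack-specific):

```shell
# Sketch: check whether network-receive softirq work is pinned to one CPU.
# Each column of /proc/softirqs is a CPU; if only CPU0's NET_RX counter
# climbs between the two samples while iperf runs, that matches the
# ksoftirqd/0 symptom (all encapsulation/receive work on a single core).
grep NET_RX /proc/softirqs
sleep 1
grep NET_RX /proc/softirqs
```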
Thanks
On Wed, Jan 21, 2015 at 3:59 PM, Robert van Leeuwen wrote:
Hi Mathieu,
I use VXLAN, so I don't think the VLAN splinters workaround applies; I also
have GRO enabled.
Thanks
Pedro Sousa
On Fri, Jan 23, 2015 at 1:00 PM, Mathieu Rohon wrote:
Hi pedro,
This thread might interest you:
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054953.html
Mathieu
On Fri, Jan 23, 2015 at 12:07 PM, Pedro Sousa wrote:
Hi Slawek,
I've tried with 8950/9000 but I had problems communicating with external
hosts from the VM.
Regards,
Pedro Sousa
On Thu, Jan 22, 2015 at 9:36 PM, Sławek Kapłoński wrote:
As I wrote earlier, for me it is best to have 9000 on hosts and 8950 on
instances. Then I have full speed between instances. With a lower MTU on the
instances I get about 2-2.5 Gbps, and I see that the vhost-net process on the
host is using 100% of one CPU core. I'm using libvirt with KVM - maybe you
are using
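The 9000/8950 pairing follows from VXLAN's encapsulation overhead; a quick sketch of the arithmetic (assuming an IPv4 underlay and no VLAN tag on the inner frame):

```shell
# VXLAN-over-IPv4 overhead that must fit inside the host MTU:
#   outer IPv4 header (20) + UDP (8) + VXLAN header (8) + inner Ethernet (14)
HOST_MTU=9000
OVERHEAD=$((20 + 8 + 8 + 14))   # = 50 bytes
GUEST_MTU=$((HOST_MTU - OVERHEAD))
echo "$GUEST_MTU"               # prints 8950
```

With an IPv6 underlay the outer header is 40 bytes instead of 20, so the guest MTU would need to drop another 20 bytes.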
Hello,
Setting it in the dnsmasq file in Neutron will be fine. It will then force
DHCP option 26 (interface MTU) on the VMs.
You can also change it manually on the VMs to test.
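For reference, the usual way to push DHCP option 26 from Neutron is a dnsmasq extra-options file; the paths below are common defaults and may differ on your install, and 8950 assumes a 9000-byte host MTU as discussed:

```ini
# /etc/neutron/dnsmasq-neutron.conf (path is an assumption; adjust as needed)
# Force DHCP option 26 (interface MTU) on instances:
dhcp-option-force=26,8950
```

Then point the DHCP agent at that file via `dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf` in the `[DEFAULT]` section of dhcp_agent.ini, and restart the agent.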
Slawek Kaplonski
On 22.01.2015 at 17:06, Pedro Sousa wrote:
Hi Slawek,
I'll test this. Did you change the MTU in the dnsmasq file in /etc/neutron/?
Or do you need
> Hi Robert,
>
> how do I check that?
I would take a look at the spec sheet of the NIC.
Since VXLAN offload is a pretty recent feature, it is probably not supported
unless you specifically shopped for a card with support...
Cheers,
Robert
Hi Robert,
how do I check that?
Thanks,
Pedro Sousa
On Wed, Jan 21, 2015 at 3:59 PM, Robert van Leeuwen
<robert.vanleeu...@spilgames.com> wrote:
> Is there a way to improve network performance on my instances with VXLAN?
> I changed the MTU on the physical interfaces to 1600; still, performance is
> lower than on bare-metal hosts:
Do you have VXLAN hardware offloading on the NIC?
I think you are hitting the maximum speed at which you can do the encapsulation.
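On the "how do I check that?" question, besides the spec sheet, `ethtool -k` lists the offloads the driver actually exposes. A sketch below: the function and the captured sample line are illustrative (so the snippet runs anywhere); on a real compute host you would pipe `ethtool -k eth0` in instead, with `eth0` replaced by your NIC.

```shell
# Sketch: extract the UDP-tunnel (VXLAN) segmentation offload flag from
# `ethtool -k`-style output. "tx-udp_tnl-segmentation" is the feature name
# ethtool reports; "off [fixed]" means the hardware cannot do it at all.
check_vxlan_offload() {
  awk -F': ' '/tx-udp_tnl-segmentation/ {print $2}'
}

# Parsing a captured sample here; replace the here-doc with
#   ethtool -k eth0 | check_vxlan_offload
# on the host.
check_vxlan_offload <<'EOF'
tx-udp_tnl-segmentation: off [fixed]
EOF
```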
Hi all,
is there a way to improve network performance on my instances with VXLAN? I
changed the MTU on the physical interfaces to 1600, but performance is still
lower than on bare-metal hosts:
*On Instance:*
[root@vms6-149a71e8-1f2a-4d6e-bba4-e70dfa42b289 ~]# iperf3 -s