Hi Heiko,

I have tried editing dhcp-option-force in /etc/neutron/dnsmasq.conf.
Based on my tests, dhcp-option-force=26,${MTU} only affects the MTU inside the 
instances; it does not change any of the virtual ports on the compute node.
This means that even if I set dhcp-option-force=26,9000 and see MTU=9000 inside 
the instances, the virtual ports on the host still have MTU=1500, so the real 
packet size the instances can use is still limited to 1500.
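For experimenting, the host-side MTU could be raised by hand on the per-port devices. This is only a sketch: the port suffix and MTU value below are examples taken from the `ip link` output quoted later in this thread, and the physical NIC, the OVS bridges, and the GRE path would also have to carry the larger frames for it to help.

```shell
#!/bin/sh
# Sketch: raise the MTU on every host-side device belonging to one
# Neutron port. The suffix 053ac004-d6 is just the example from the
# quoted `ip link` output; substitute your own port's prefix.
# Printed as a dry run; drop the `echo` to actually apply it (needs root).
PORT=053ac004-d6
MTU=9000

for dev in "qbr$PORT" "qvb$PORT" "qvo$PORT" "tap$PORT"; do
    echo ip link set dev "$dev" mtu "$MTU"
done
```

Note that changes made this way do not survive a reboot or port re-creation, so this is only useful to confirm whether the host-side MTU is really the bottleneck.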

I have also restricted the instances' bandwidth by setting extra_specs on the flavor:
nova flavor-show test_flavor
+----------------------------+-----------------------------------------------------------------------------------+
| Property                   | Value                                                                             |
+----------------------------+-----------------------------------------------------------------------------------+
| name                       | test_flavor                                                                       |
| ram                        | 512                                                                               |
| OS-FLV-DISABLED:disabled   | False                                                                             |
| vcpus                      | 1                                                                                 |
| extra_specs                | {u'quota:vif_inbound_average': u'51200', u'quota:vif_outbound_average': u'51200'} |
| swap                       |                                                                                   |
| os-flavor-access:is_public | True                                                                              |
| rxtx_factor                | 1.0                                                                               |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                                                 |
| disk                       | 0                                                                                 |
| id                         | 7                                                                                 |
+----------------------------+-----------------------------------------------------------------------------------+
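For reference, extra_specs like these can be applied with `nova flavor-key`. This is a sketch only: the commands are printed rather than executed (applying them needs a live cloud), and the flavor name and 51200 KB/s rate mirror the flavor shown above.

```shell
#!/bin/sh
# Sketch: apply libvirt VIF bandwidth limits (average rate in KB/s)
# to a flavor via extra_specs. Printed as a dry run; remove the `echo`
# to run against a real deployment with credentials sourced.
FLAVOR=test_flavor
RATE=51200

for spec in quota:vif_inbound_average quota:vif_outbound_average; do
    echo nova flavor-key "$FLAVOR" set "$spec=$RATE"
done
```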

The reason I want to use jumbo frames is that I hope they will reduce CPU 
utilization, so that I can get higher aggregate bandwidth out of the whole 
compute node.

Thanks.
-chen


-----Original Message-----
From: Heiko Krämer [mailto:i...@honeybutcher.de] 
Sent: Monday, January 27, 2014 4:11 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] How to enable jumbo frames for instances ?

Hi Chen,

first of all, the default network driver for KVM only supports 1G.

You can configure your MTU in your /etc/neutron/dnsmasq.conf on network
node, like:
dhcp-option-force=26,1454

MTU = 1454

You need to kill all dnsmasq processes on your network node and restart the 
DHCP agent.
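One thing worth double-checking: a custom dnsmasq.conf only takes effect if the DHCP agent is pointed at it. The paths below are the conventional ones; verify them against your installation.

```ini
# /etc/neutron/dhcp_agent.ini
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq.conf

# /etc/neutron/dnsmasq.conf
dhcp-option-force=26,1454
```

After editing both files, kill the dnsmasq processes and restart the neutron-dhcp-agent service so that dnsmasq is re-spawned with the new option.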

In addition, you ought to consider that it may be a bad idea to allow each 
spawned instance to use the complete 10G connection. I use two 10G NICs per 
compute node, one for internal instance communication and the other for storage 
attachments, but I only ever give 1G to the instances.

Maybe it helps.


Cheers
Heiko

On 27.01.2014 08:39, Li, Chen wrote:
> Hi list,
>
> I'm working under CentOS 6.4 + Havana + Neutron + OVS + gre.
>
> I'm testing performance for gre.
>
> I have a 10Gb/s NIC for compute Node.
>
> However, the max bandwidth I can get is smaller than 3 Gb/s, even though I
> have enough instances.
> I noticed the bandwidth can't go higher because the utilization of one CPU
> core is already at 100%.
>
> So I want to test whether I can get higher bandwidth with a bigger MTU,
> since the default MTU is 1500.
>
> But after I set network_device_mtu=8500 in /etc/nova/nova.conf, restarted
> the openstack-nova-compute service, and created a new instance, the MTU of
> the devices is still 1500:
>
> 202: qbr053ac004-d6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>     link/ether da:c0:8d:c2:d5:1c brd ff:ff:ff:ff:ff:ff
> 203: qvo053ac004-d6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>     link/ether f6:0b:04:3f:9d:41 brd ff:ff:ff:ff:ff:ff
> 204: qvb053ac004-d6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>     link/ether da:c0:8d:c2:d5:1c brd ff:ff:ff:ff:ff:ff
> 205: tap053ac004-d6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UNKNOWN qlen 500
>     link/ether fe:18:3e:c2:e9:84 brd ff:ff:ff:ff:ff:ff
>
> Does anyone know why this happens?
> How can I solve it?
>
> Thanks.
> -chen
>
>
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



