From: Armando M. [mailto:arma...@gmail.com]
Sent: Tuesday, June 14, 2016 12:50 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][ovs] The way we deal with MTU



On 13 June 2016 at 22:22, Terry Wilson <twil...@redhat.com> wrote:
> So basically, as long as we try to plug ports with different MTUs into the 
> same bridge, we are utilizing a bug in Open vSwitch, that may break us any 
> time.
>
> I guess our alternatives are:
> - either redesign bridge setup for openvswitch to e.g. maintain a bridge per 
> network;
> - or talk to ovs folks on whether they may support that for us.
>
> I understand the former option is too scary. It opens lots of questions, 
> including upgrade impact since it will obviously introduce a dataplane 
> downtime. That would be a huge shift in paradigm, probably too huge to 
> swallow. The latter option may not fly with vswitch folks. Any better ideas?

I know I've heard from people who'd like to be able to support both
DPDK and non-DPDK workloads on the same node. The current
implementation with a single br-int (and thus a single datapath) makes
that impossible to pull off with good performance. So there may be
other reasons to consider introducing multiple isolated bridges: MTUs,
datapath_types, etc.
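
To make that concrete, a rough (untested) sketch of what isolated bridges
per datapath could look like, with purely illustrative bridge names and
assuming an OVS build with DPDK support:

    # kernel datapath bridge (default datapath_type)
    ovs-vsctl add-br br-int
    # userspace/DPDK datapath bridge
    ovs-vsctl add-br br-int-dpdk -- set Bridge br-int-dpdk datapath_type=netdev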

[Mooney, Sean K]
I just noticed this now, but I wanted to share some of the rationale as to
why we explicitly do not support running both datapaths on the same host today.
We experimented with using both datapaths during the Juno cycle when we were
first upstreaming support for ovs-dpdk.
To enable both datapaths efficiently, we determined that you would have to
duplicate all bridges for each datapath; otherwise there is a significant
performance penalty that degrades the performance of both datapaths.

The only way to interconnect bridges of different datapaths in ovs is to use
veth pairs. Even in the case of the kernel datapath, the use of veth pairs is
a significant performance hit compared to patch ports. Adding a veth interface
to the dpdk datapath is very costly from a dpdk perspective for rx/tx, as it
takes significantly more cpu cycles to process packets from veth interfaces
than from dpdk interfaces.
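
For reference, the difference looks roughly like this (untested sketch;
interface and bridge names are purely illustrative):

    # patch ports: only connect bridges on the same datapath
    ovs-vsctl add-port br-int patch-tun -- \
        set Interface patch-tun type=patch options:peer=patch-int
    ovs-vsctl add-port br-tun patch-int -- \
        set Interface patch-int type=patch options:peer=patch-tun

    # veth pair: the only way to cross datapaths, with the cost described above
    ip link add veth-int type veth peer name veth-dpdk
    ovs-vsctl add-port br-int veth-int
    ovs-vsctl add-port br-int-dpdk veth-dpdk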

What we determined at the time was that, to make this configuration work
effectively, you would need two copies of every bridge and either modify the
existing agent significantly or run two copies of the ovs agent on the same
host. If you use two agents on the same host with two config files specifying
different bridge names (e.g. br-int and br-int-dpdk, br-tun and br-tun-dpdk,
br-ex and br-ex-dpdk) it should be possible to support this today.
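
Something along these lines for the two agent configs (a rough sketch only;
the second file name, bridge names and socket dir are illustrative, not a
tested configuration):

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini  (kernel datapath agent)
    [ovs]
    integration_bridge = br-int
    tunnel_bridge = br-tun
    datapath_type = system

    # /etc/neutron/plugins/ml2/openvswitch_agent_dpdk.ini  (dpdk datapath agent)
    [ovs]
    integration_bridge = br-int-dpdk
    tunnel_bridge = br-tun-dpdk
    datapath_type = netdev
    vhostuser_socket_dir = /var/run/openvswitch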

You might need to make minor changes to the agent and server to ensure the
agents are both reported separately in the db, and you would need to provide
some mechanism to request the use of kernel vhost or vhost-user. Unfortunately
there is no construct currently in Neutron that can be used directly for that,
and the nova scheduler also does not currently have any idea of the vif-types
or networking backend supported on each compute host.

The scheduler side could be addressed by reusing the resource provider
framework that Jay Pipes is working on. In essence, each compute node would be
a provider of vif-types. When you boot a vm you would also pass a desired
vif-type, and when nova is scheduling it will filter to only hosts that provide
that type. When nova asks neutron to bind the port, it would pass the requested
vif-type to neutron, which would then use it for the port binding. Ian Wells
and I proposed a mechanism for this over the last few cycles that should be
possible to integrate cleanly with os-vif once nova and neutron have both
adopted it.
https://review.openstack.org/#/c/190917/7/specs/mitaka/approved/nova-neutron-binding-negotiation.rst
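
Purely to illustrate the kind of negotiation the spec describes (the
binding:profile key below is hypothetical, not an existing field):

    POST /v2.0/ports    (sketch only; "requested_vif_type" is a hypothetical key)
    {
        "port": {
            "network_id": "<net-uuid>",
            "binding:host_id": "compute-1",
            "binding:profile": {"requested_vif_type": "vhostuser"}
        }
    }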

While requesting a vif-type is somewhat of a leaky abstraction, it does not
mean that you will know what the neutron backend is. A vhost-user interface,
for example, could be ovs-dpdk, vpp, snabb switch or ovs-fastpath. So while it
leaks the capability to provide a vhost-user interface, it does not leak the
implementation, which still maintains some level of abstraction and
flexibility for an operator. For a tenant, other than the performance
difference, there is no way to detect whether they are using vhost-user or
kernel vhost, since all they see is a virtio-net interface in either case.
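
That is, from inside the guest both backends look identical; checking the NIC
driver shows virtio either way, e.g. (output trimmed):

    # inside the guest, regardless of vhost-user vs kernel vhost on the host
    $ ethtool -i eth0
    driver: virtio_net
    ...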

If there is interest in supporting both datapaths concurrently, and people are
open to having multiple copies of the ovs l2, and possibly l3/dhcp, agents on
the same host, then I would be happy to help with that effort. However, the
added complexity and operator overhead of managing two copies of the neutron
agents on each host is why we have not tried to enable this configuration to
date.


Incidentally this is something that Nova is already capable of handling (i.e.
wiring VMs into different bridges) thanks to [1], and with some minor additions
being discussed in the context of [2] vlan-aware-vms, we can open up the
possibility of this deployment model in the not so distant future.

[1] https://blueprints.launchpad.net/nova/+spec/neutron-ovs-bridge-name
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-June/097025.html
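
For context on [1]: the gist is that neutron can tell nova which bridge to
plug the VIF into through the port's binding:vif_details, roughly:

    # sketch from memory; key names and values are illustrative only
    "binding:vif_type": "ovs",
    "binding:vif_details": {
        "ovs_hybrid_plug": false,
        "bridge_name": "br-int-dpdk",
        "datapath_type": "netdev"
    }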


Terry

On Mon, Jun 13, 2016 at 11:49 AM, Ihar Hrachyshka <ihrac...@redhat.com> wrote:
> Hi all,
>
> in Mitaka, we introduced a bunch of changes to the way we handle MTU in 
> Neutron/Nova, making sure that the whole instance data path, starting from 
> the instance internal interface, through the hybrid bridge, into the br-int, 
> as well as the router data path (qr), has the proper MTU value set on all 
> participating devices. On the hypervisor side, both Nova and Neutron take 
> part in it, setting it with the ip-link tool based on what the Neutron 
> plugin calculates for us. So far so good.
>
> Turns out that for OVS, it does not work as expected in regards to br-int. 
> There was a bug reported lately: https://launchpad.net/bugs/1590397
>
> Briefly, when we try to set the MTU on a device that is plugged into a 
> bridge, and the bridge already has another port with a lower MTU, the bridge 
> itself inherits the MTU from that port, and the Linux kernel (?) does not 
> allow setting the MTU on the first device at all, making ip link calls 
> ineffective.
>
> AFAIU this behaviour is consistent with Linux bridging rules: you can’t have 
> ports of different MTU plugged into the same bridge.
>
> Now, that’s a huge problem for Neutron, because we plug ports that belong to 
> different networks (and that hence may have different MTUs) into the same 
> br-int bridge.
>
> So I played with the code locally a bit and spotted that currently, we set 
> MTU for router ports before we move their devices into router namespaces. And 
> once the device is in a namespace, ip-link actually works. So I wrote a fix 
> with a functional test that proves the point: 
> https://review.openstack.org/#/c/327651/ The fix was validated by the 
> reporter of the original bug and seems to fix the issue for him.
>
> It’s suspicious that it works from inside a namespace but not when the device 
> is still in the root namespace. So I reached out to Jiri Benc from our local 
> Open vSwitch team, and here is a quote:
>
> ===
>
> "It's a bug in ovs-vswitchd. It doesn't see the interface that's in
> other netns and thus cannot enforce the correct MTU.
>
> We'll hopefully fix it and disallow incorrect MTU setting even across
> namespaces. However, it requires significant effort and rework of ovs
> name space handling.
>
> You should not depend on the current buggy behavior. Don't set MTU of
> the internal interfaces higher than the rest of the bridge, it's not
> supported. Hacking this around by moving the interface to a netns is
> exploiting of a bug.
>
> We can certainly discuss whether this limitation could be relaxed.
> Honestly, I don't know, it's for a discussion upstream. But as of now,
> it's not supported and you should not do it.”
>
> So basically, as long as we try to plug ports with different MTUs into the 
> same bridge, we are utilizing a bug in Open vSwitch, that may break us any 
> time.
>
> I guess our alternatives are:
> - either redesign bridge setup for openvswitch to e.g. maintain a bridge per 
> network;
> - or talk to ovs folks on whether they may support that for us.
>
> I understand the former option is too scary. It opens lots of questions, 
> including upgrade impact since it will obviously introduce a dataplane 
> downtime. That would be a huge shift in paradigm, probably too huge to 
> swallow. The latter option may not fly with vswitch folks. Any better ideas?
>
> It’s also not clear whether we want to proceed with my immediate fix. Advice 
> is welcome.
>
> Thanks,
> Ihar


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
