Hi Luke,
Very impressive solution!

I do not think there is a problem with keeping the agent out of the tree in 
the short term, but I would highly recommend putting it upstream in the 
longer term. You will benefit from quite valuable community review. Most 
importantly, it will keep your code as closely aligned with the Neutron code 
base as possible: once other people make general changes, your code will be 
taken into account and won't be broken accidentally.
I would also like to mention the Modular L2 Agent initiative driven by the 
ML2 team, which you may be interested in following: 
https://etherpad.openstack.org/p/modular-l2-agent-outline

Best Regards,
Irena

From: luk...@gmail.com On Behalf Of Luke Gorrie
Sent: Tuesday, June 10, 2014 12:48 PM
To: Irena Berezovsky
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ml2] Too much "shim rest proxy" 
mechanism drivers in ML2

Hi Irena,

Thanks for the very interesting perspective!

On 10 June 2014 10:57, Irena Berezovsky <ire...@mellanox.com> wrote:
[IrenaB] The DB access approach was previously used by OVS and LinuxBridge 
Agents and at some point (~Grizzly Release) was changed to use RPC 
communication.

That is very interesting. I've been involved in OpenStack since the Havana 
cycle and was not familiar with the old design.

I'm optimistic about the scalability of our implementation. We have 
sanity-tested it with 300 compute nodes and a 300 ms sync interval. I am sure 
we will find some parts that need optimization effort, however.

The other scalability aspect we are being careful about is the cost of 
individual update operations. (In LinuxBridge that would be the iptables, 
ebtables, etc. commands.) In our implementation the compute nodes preprocess 
the Neutron config into a small config file for the local traffic plane and 
then load it in one atomic operation ("SIGHUP" style). Again, I am sure we 
will find cases that need optimization effort, but thanks to the atomicity 
the design seems scalable to me.
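To make the atomic-reload idea concrete, here is a minimal sketch (not our 
actual agent code; the config path and the target process are hypothetical):

# Sketch of a "SIGHUP-style" atomic config reload. The path and the
# target pid are hypothetical; the point is that the new config
# becomes visible in a single rename(2), never half-written.
import os
import signal
import tempfile

CONFIG_PATH = "/var/run/traffic-plane.conf"  # hypothetical path

def atomic_reload(new_config_text, pid):
    # Write to a temp file on the same filesystem, then rename over
    # the live file: rename() is atomic on POSIX.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(CONFIG_PATH))
    with os.fdopen(fd, "w") as f:
        f.write(new_config_text)
        f.flush()
        os.fsync(f.fileno())
    os.rename(tmp, CONFIG_PATH)
    # Tell the traffic plane to pick up the new file in one step.
    os.kill(pid, signal.SIGHUP)

However many ports changed since the last sync, the traffic plane always 
sees either the old config or the new one, which is why the per-update cost 
stays flat.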

For concreteness, here is the agent we are running on the DB node to make the 
Neutron config available:
https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-master

and here is the agent that pulls it onto the compute node:
https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-agent

TL;DR we snapshot the config with mysqldump and distribute it with git.
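In pseudocode terms, the master side amounts to something like this (a 
simplified sketch, not the real script; the repo path, DB name, interval, 
and omitted mysqldump credentials are illustrative):

# Rough sketch of the sync-master loop: snapshot the Neutron DB with
# mysqldump, commit the dump into a git repo, and let the compute-node
# agents "git pull" it. Values below are illustrative assumptions.
import subprocess
import time

REPO = "/var/lib/neutron-sync"  # hypothetical git work tree
INTERVAL = 0.3                  # e.g. the 300 ms sync interval

def sync_once():
    # Auth flags omitted for brevity.
    dump = subprocess.check_output(
        ["mysqldump", "--skip-comments", "neutron"])
    with open(REPO + "/neutron.sql", "wb") as f:
        f.write(dump)
    subprocess.call(["git", "-C", REPO, "add", "neutron.sql"])
    # Commit only if something changed; "git diff --cached --quiet"
    # exits non-zero when the index differs from HEAD.
    if subprocess.call(["git", "-C", REPO,
                        "diff", "--cached", "--quiet"]) != 0:
        subprocess.check_call(["git", "-C", REPO, "commit",
                               "-m", "neutron config snapshot"])

while True:
    sync_once()
    time.sleep(INTERVAL)

git gives us change detection, compression, and incremental transfer for 
free, so an idle cloud costs almost nothing to keep in sync.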

Here's the sanity test I referred to: 
https://groups.google.com/d/msg/snabb-devel/blmDuCgoknc/PP_oMgopiB4J

I will be glad to report on our experience and what we change based on our 
deployment experience during the Juno cycle.

[IrenaB] I think that for “Non SDN Controller” Mechanism Drivers there will 
be a need for some sort of agent to handle port update events, even though 
it might not be required in order to bind the port.

True. Indeed, we do have an agent running on the compute host, and we are 
keeping it synchronized with port updates via the mechanism described above.

Really what I mean is: can we keep our agent out-of-tree, apart from ML2, 
and decide for ourselves how to keep it synchronized (instead of using the 
MQ)? Is there a precedent for doing things this way in an ML2 mech driver 
(e.g. one of the SDNs)?
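To illustrate what I have in mind, the driver side could stay this small (a 
rough sketch against my reading of the Juno-era driver_api, which I have not 
run; the class name is made up and the VIF type is just a stand-in):

# Sketch of a "no MQ" ML2 mechanism driver: bind_port() succeeds from
# DB state alone, and port/agent synchronization happens entirely out
# of band (e.g. via the snapshot mechanism above).
from neutron.extensions import portbindings
from neutron.plugins.ml2 import driver_api as api

class OutOfTreeSyncMechanismDriver(api.MechanismDriver):

    def initialize(self):
        # Nothing to set up: no RPC topics, no agent heartbeats.
        pass

    def bind_port(self, context):
        # Bind the first segment; the compute-side agent learns about
        # the port later from the DB snapshot, not from the MQ.
        for segment in context.network.network_segments:
            context.set_binding(segment[api.ID],
                                portbindings.VIF_TYPE_OVS,
                                {portbindings.CAP_PORT_FILTER: False})
            return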

Cheers!
-Luke


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
