Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-24 Thread racha
Hi Ian,

Here are some details about integrating the L2 gateway (supporting multiple
plugins/MDs, not limited to the OVS agent implementation) as a trunking
gateway:

It's a building block that has multiple access ports, which are tenant
Neutron network ports (carrying the block type/uuid as
device_owner/device_id) each in a different Neutron network, and up to one
gateway port, which is a provider external network port.

Adding the following two constraints ensures that Neutron networks and
blocks are stubby and that there is no way to loop the networks, which very
simply provides one of several means of alleviating the concern that was
raised (a sketch of the checks follows the list):
1) Each Neutron network cannot have more than one port that could be
bound/added to any block as an access port.
2) Each block cannot own more than one gateway port that can be set/unset
to that block.
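
As a sketch, the two checks could look like this (Block and its fields are
names made up for illustration, not existing Neutron models):

    class Block(object):
        def __init__(self, block_id):
            self.id = block_id
            self.access_ports = []    # tenant Neutron ports, one network each
            self.gateway_port = None  # at most one provider external port

    class BlockConstraintError(Exception):
        pass

    def check_add_access_port(all_blocks, network_id):
        # Constraint 1: across all blocks, a Neutron network may
        # contribute at most one access port, so networks stay stubby.
        for block in all_blocks:
            for port in block.access_ports:
                if port['network_id'] == network_id:
                    raise BlockConstraintError(
                        'network %s already has an access port on block %s'
                        % (network_id, block.id))

    def check_set_gateway_port(block):
        # Constraint 2: a block owns at most one gateway port, so two
        # provider networks can never be bridged through one block.
        if block.gateway_port is not None:
            raise BlockConstraintError(
                'block %s already has a gateway port' % block.id)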

If the type of the block is "learning bridge", then the gateway port is a
Neutron port on a specific provider external network (with the segmentation
details provided as in the existing Neutron API), and the block forwards
between access ports and the gateway port in broadcast isolation (as with
private VLANs) or with broadcast merge (as with community VLANs). A
straightforward implementation of this was submitted for review quite a
while ago.

If the type of the block is "trunking bridge", then the gateway port is
either a trunk port as in the "VLAN-aware VMs" BP, or a dynamic collection
of Neutron ports as in a suggested extension of the "networks collection"
idea, with each port in a different provider external network (and a 1:1
transparent patching hook service between one access-port@tenant_net_x and
one external-port@provider_net_y, which could be the placeholder for a
cross-network, factorized security group for tenant networks, or whatever
else). We can then further abstract a trunk as a mix of VLANs, GREs,
VXLANs, etc. (i.e. Neutron networks) side by side on the same trunk, not
limited to the usual VLAN trunks. What happens to this trunk (match ->
block/forward/...) in the provider external networks, as well as in the
transparent patching hooks within the block, is up to the provider, I
guess. This is just a tiny abstract idea off the top of my head that I can
detail in the specs if there's a match with what is required; a sketch of
the model follows.
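
As a sketch of the model (all names are made up for illustration; nothing
here is an existing Neutron API):

    class PatchHook(object):
        """1:1 transparent patch between one access port on a tenant
        network and one external port on a provider network; a natural
        anchor for a cross-network, factorized security group."""
        def __init__(self, access_port_id, external_port_id):
            self.access_port_id = access_port_id
            self.external_port_id = external_port_id

    class TrunkingBlock(object):
        """A trunk abstracted as a dynamic collection of patch hooks,
        where each provider network may be a VLAN, GRE, VXLAN, etc."""
        def __init__(self, block_id):
            self.id = block_id
            self.hooks = []

        def add_patch(self, access_port_id, external_port_id):
            self.hooks.append(PatchHook(access_port_id, external_port_id))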


Thanks,

Best Regards,
Racha


On Thu, Oct 23, 2014 at 2:58 PM, Ian Wells  wrote:

> There are two categories of problems:
>
> 1. some networks don't pass VLAN tagged traffic, and it's impossible to
> detect this from the API
> 2. it's not possible to pass traffic from multiple networks to one port on
> one machine as (e.g.) VLAN tagged traffic
>
> (1) is addressed by the VLAN trunking network blueprint, XXX. Nothing else
> addresses this, particularly in the case that one VM is emitting tagged
> packets that another one should receive and Openstack knows nothing about
> what's going on.
>
> We should get this in, ideally quickly and in a simple form where
> it simply tells you if a network is capable of passing tagged traffic.  In
> general, this is possible to calculate but a bit tricky in ML2 - anything
> using the OVS mechanism driver won't pass VLAN traffic, anything using
> VLANs should probably also claim it doesn't pass VLAN traffic (though
> actually it depends a little on the switch), and combinations of L3 tunnels
> plus Linuxbridge seem to pass VLAN traffic just fine.  Beyond that, it's
> got a backward compatibility mode, so it's possible to ensure that any
> plugin that doesn't implement VLAN reporting is still behaving correctly
> per the specification.
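>
> A rough sketch of that per-network calculation (illustrative only, not
> the blueprint's actual code):
>
>     def passes_tagged_traffic(network_type, mechanism):
>         # The OVS mechanism driver's flows don't pass inner VLAN tags.
>         if mechanism == 'openvswitch':
>             return False
>         # VLAN segments should probably also claim non-transparency,
>         # though it really depends on the switch (QinQ support).
>         if network_type == 'vlan':
>             return False
>         # L3 tunnels plus Linuxbridge encapsulate the whole frame,
>         # inner tag included, so they pass VLAN traffic just fine.
>         if network_type in ('gre', 'vxlan') and mechanism == 'linuxbridge':
>             return True
>         # Unknown combinations claim non-transparent, to be safe.
>         return False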
>
> (2) is addressed by several blueprints, and these have overlapping ideas
> that all solve the problem.  I would summarise the possibilities as follows:
>
> A. Racha's L2 gateway blueprint,
> https://blueprints.launchpad.net/neutron/+spec/gateway-api-extension,
> which (at its simplest, though it's had features added on and is somewhat
> OVS-specific in its detail) acts as a concentrator to multiplex multiple
> networks onto one as a trunk.  This is a very simple approach and doesn't
> attempt to resolve any of the hairier questions like making DHCP work as
> you might want it to on the ports attached to the trunk network.
> B. Isaku's L2 gateway blueprint, https://review.openstack.org/#/c/100278/,
> which is more limited in that it refers only to external connections.
> C. Erik's VLAN port blueprint,
> https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms, which
> tries to solve the addressing problem mentioned above by having ports
> within ports (much as, on the VM side, interfaces passing trunk traffic
> tend to have subinterfaces that deal with the traffic streams).
> D. Not a blueprint, but an idea I've come 

Re: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core

2014-07-31 Thread racha
+1


On Thu, Jul 31, 2014 at 1:31 AM, Aaron Rosen  wrote:

> +1!
>
>
> On Thu, Jul 31, 2014 at 12:40 AM, Nikola Đipanov 
> wrote:
>
>> On 07/30/2014 11:02 PM, Michael Still wrote:
>> > Greetings,
>> >
>> > I would like to nominate Jay Pipes for the nova-core team.
>> >
>> > Jay has been involved with nova for a long time now.  He's previously
>> > been a nova core, as well as a glance core (and PTL). He's been around
>> > so long that there are probably other types of core status I have
>> > missed.
>> >
>> > Please respond with +1s or any concerns.
>> >
>>
>> +1
>>
>> > References:
>> >
>> >
>> https://review.openstack.org/#/q/owner:%22jay+pipes%22+status:open,n,z
>> >
>> >   https://review.openstack.org/#/q/reviewer:%22jay+pipes%22,n,z
>> >
>> >   http://stackalytics.com/?module=nova-group&user_id=jaypipes
>> >
>> > As a reminder, we use the voting process outlined at
>> > https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
>> > core team.
>> >
>> > Thanks,
>> > Michael
>> >
>>
>>


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread racha
Hi,
Does it make sense to also offer the choice between the ovs-ofctl CLI and a
direct OF1.3 connection in the ovs-agent?
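
For illustration, the CLI path today is essentially one shell-out per flow
change, e.g. (sketch only; the flow spec is just an example):

    import subprocess

    def add_flow_via_cli(bridge, flow):
        # '-O OpenFlow13' makes ovs-ofctl speak OF1.3 to the bridge.
        subprocess.check_call(
            ['ovs-ofctl', '-O', 'OpenFlow13', 'add-flow', bridge, flow])

    add_flow_via_cli('br-int', 'table=0,priority=0,actions=normal')

A direct OF1.3 connection would instead keep one persistent session to the
bridge's OpenFlow socket (e.g. via a controller library such as Ryu),
saving a process fork per flow operation and enabling richer error
reporting.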

Best Regards,
Racha



On Tue, Jun 17, 2014 at 10:25 AM, Narasimhan, Vivekanandan <
vivekanandan.narasim...@hp.com> wrote:

>
>
> Managing the ports and plumbing logic is today driven by L2 Agent, with
> little assistance
>
> from controller.
>
>
>
> If we plan to move that functionality to the controller,  the controller
> has to be more
>
> heavy weight (both hardware and software)  since it has to do the job of
> L2 Agent for all
>
> the compute servers in the cloud. We need to re-verify all scale numbers
> for the controller
>
> when PoC'ing such a change.
>
>
>
> That said, replacing CLI with direct OVSDB calls in the L2 Agent is
> certainly a good direction.
>
>
>
> Today, the OVS agent invokes flow calls of OVS-Lib but has no way to
> follow up
>
> on the success or failure of such invocations.  Nor is there any guarantee
> that all such
>
> flow invocations will be executed by the third process forked by OVS-Lib
> to run the CLI.
>
>
>
> When we transition to OVSDB calls, which are more programmatic in nature,
> we can
>
> enhance the Flow API (OVS-Lib) to provide more fine-grained errors/return
> codes (or content),
>
> and the ovs-agent (and even other components) can act on such return state
> more
>
> intelligently/appropriately.
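>
> For instance, the enhanced Flow API could return structured results
> along these lines (purely illustrative, not an existing OVS-Lib
> interface):
>
>     class FlowOpResult(object):
>         # Hypothetical fine-grained return value for flow operations.
>         def __init__(self, succeeded, error_code=None, detail=None):
>             self.succeeded = succeeded
>             self.error_code = error_code
>             self.detail = detail
>
>     def handle_flow_result(result):
>         # The agent (or other components) can then act on the returned
>         # state instead of firing and forgetting.
>         if not result.succeeded:
>             print('flow op failed: %s (%s)'
>                   % (result.error_code, result.detail))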
>
>
>
> --
>
> Thanks,
>
>
>
> Vivek
>
>
>
>
>
> *From:* Armando M. [mailto:arma...@gmail.com]
> *Sent:* Tuesday, June 17, 2014 10:26 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][ML2] Modular L2 agent
> architecture
>
>
>
> just a provocative thought: If we used the ovsdb connection instead, do we
> really need an L2 agent :P?
>
>
>
> On 17 June 2014 18:38, Kyle Mestery  wrote:
>
> Another area of improvement for the agent would be to move away from
> executing CLIs for port commands and instead use OVSDB. Terry Wilson
> and I talked about this, and re-writing ovs_lib to use an OVSDB
> connection instead of the CLI methods would be a huge improvement
> here. I'm not sure if Terry was going to move forward with this, but
> I'd be in favor of this for Juno if he or someone else wants to move
> in this direction.
>
> Thanks,
> Kyle
>
>
> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando 
> wrote:
> > We've started doing this in a slightly more reasonable way for icehouse.
> > What we've done is:
> > - remove unnecessary notification from the server
> > - process all port-related events, whether triggered via RPC or via the
> > monitor, in one place
> >
> > Obviously there is always a lot of room for improvement, and I agree
> > something along the lines of what Zang suggests would be more
> > maintainable and ensure faster event processing, as well as making it
> > easier to have some form of reliability on event processing.
> >
> > I was considering doing something for the ovs-agent again in Juno, but
> > since we're moving towards a unified agent, I think any new "big"
> > ticket should address this effort.
> >
> > Salvatore
> >
> >
> > On 17 June 2014 13:31, Zang MingJie  wrote:
> >>
> >> Hi:
> >>
> >> Awesome! We are currently suffering from lots of bugs in the
> >> ovs-agent, and we also intend to rebuild a more stable, flexible agent.
> >>
> >> Based on our experience with ovs-agent bugs, I think concurrency is
> >> also a very important problem: the agent gets lots of events from
> >> different greenlets (the RPC, the OVS monitor, and the main loop).
> >> I'd suggest serializing all events into a queue, then processing the
> >> events in a dedicated thread. The thread checks the events one by one,
> >> in order, resolves what has changed, and then applies the
> >> corresponding changes. If any error occurs in the thread, it discards
> >> the event currently being processed and does a fresh-start event,
> >> which resets everything and then applies the correct settings.
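> >>
> >> A minimal sketch of that model (process() and full_resync() are left
> >> abstract here):
> >>
> >>     import Queue
> >>     import threading
> >>
> >>     events = Queue.Queue()  # rpc, ovs monitor and main loop all put()
> >>
> >>     def worker(process, full_resync):
> >>         while True:
> >>             event = events.get()
> >>             try:
> >>                 # resolve what changed, then apply the changes
> >>                 process(event)
> >>             except Exception:
> >>                 # discard the failed event, reset everything and
> >>                 # reapply the correct settings from scratch
> >>                 full_resync()
> >>
> >>     # threading.Thread(target=worker,
> >>     #                  args=(process, full_resync)).start()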
> >>
> >> The threading model is so important, and may prevent tons of bugs in
> >> future development, that we should describe it clearly in the
> >> architecture.
> >>
> >>
> >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
> >> wrote:
> >> > Following the discussions in the ML2 subgroup weekly meetings, I have

Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-22 Thread racha
Hi Vikash,
   I am wondering why you need to have specs approved to have things
working the way you want. There's nothing that prevents you from having
OpenStack support whatever you want, except probably for vendor
proprietary plugins. Install OpenStack with Neutron, search for one of the
patches that enable this in Nova, and apply it to your installation; voila,
you can have nova boot VMs with multiple vNICs on the same Neutron network.
If you want to test your setup with a public cloud provider that allows
that, you can look into Amazon EC2. An example follows.
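
For example, with one of those patches applied, booting through
python-novaclient would look roughly like this (a sketch; all ids and
credentials are placeholders):

    from novaclient import client

    nova = client.Client('2', 'USER', 'API_KEY', 'PROJECT',
                         'http://KEYSTONE_HOST:5000/v2.0')
    # Two NICs on the same Neutron network: the VM comes up with two
    # interfaces holding IPs from that network's subnet.
    nova.servers.create(name='vm-two-nics',
                        image='IMAGE_ID',
                        flavor='FLAVOR_ID',
                        nics=[{'net-id': 'NET_ID'},
                              {'net-id': 'NET_ID'}])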

Best Regards,
Racha



On Wed, Apr 16, 2014 at 3:48 AM, Vikash Kumar <
vikash.ku...@oneconvergence.com> wrote:

> Hi,
>
>  I want to launch one VM which will have two Ethernet interfaces with
> IPs from a single subnet. Is this supported in OpenStack now? Any
> suggestions?
>
>
> Thanx
>


Re: [openstack-dev] [Neutron][ML2]

2014-03-18 Thread racha
Hi Bob,
Thanks a lot for answering my question regarding the simultaneous operation
of mechanism drivers (which probably wasn't clear enough in my first email).

Best Regards,
Racha


On Tue, Mar 18, 2014 at 12:52 PM, Robert Kukura wrote:

>
> On 3/18/14, 3:04 PM, racha wrote:
>
>  Hi Mathieu,
>    Sorry, I wasn't following the recent progress on ML2, and I was
> effectively missing the right abstractions of the MDs in my off-topic
> questions.
> If I understand correctly, there will be no priority between the MDs
> binding the same port, but an optional "port filter" could also be used so
> that the first responding MD matching the filter will assign itself.
>
> Hi Racha,
>
> The bug fix Mathieu referred to below that I am working on will move the
> attempt to bind outside the DB transaction that triggered the [re]binding,
> and thus will involve a separate DB transaction to commit the result of the
> binding. But the basic algorithm for binding ports will not be changing as
> part of this fix. The bind_port() method is called sequentially on each
> mechanism driver in the order they are listed in the mechanism_drivers
> config variable, until one succeeds in binding the port, or all have failed
> to bind the port. Since this will now be happening outside a DB
> transaction, it's possible that more than one thread could simultaneously
> try to bind the same port, and this concurrency is handled by having all
> such threads use the result that gets committed first.
>
> -Bob
>
>
>
>  Thanks for your answer.
>
>  Best Regards,
>  Racha
>
>
>
> On Mon, Mar 17, 2014 at 3:17 AM, Mathieu Rohon wrote:
>
>> Hi racha,
>>
>> I don't think your topic has anything to do with Nader's topics.
>> Please create another topic; it would be easier to follow.
>> FYI, Robert Kukura is currently refactoring the MD binding; please
>> have a look here: https://bugs.launchpad.net/neutron/+bug/1276391. As
>> I understand it, there won't be a priority between MDs that can bind the
>> same port. The first one to respond to the binding request will give its
>> vif_type.
>>
>> Best,
>>
>> Mathieu
>>
>> On Fri, Mar 14, 2014 at 8:14 PM, racha  wrote:
>> > Hi,
>> >   Is it possible (in the latest upstream) to partition the same
>> >   Is it possible (in the latest upstream) to partition the same
>> > integration bridge "br-int" into multiple isolated partitions (in terms
>> > of lvid ranges, patch ports, etc.) between the OVS mechanism driver and
>> > the ODL mechanism driver? And then how can we pass some details to the
>> > Neutron API (as in the provider segmentation type/id/etc.) so that ML2
>> > assigns a mechanism driver to the virtual network? The other alternative
>> > I guess is to create another integration bridge managed by a different
>> > Neutron instance? Probably I am missing something.
>> > Best Regards,
>> > Racha
>> >
>> >
>> > On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti wrote:
>> >>
>> >> 1) Does it mean an interim solution is to have our own plugin (and have
>> >> all the changes in it) and declare it as core_plugin instead of
>> >> Ml2Plugin?
>> >>
>> >> 2) The other issue, as I mentioned before, is that the extension(s)
>> >> are not showing up in the result, for instance when create_network is
>> >> called [result = super(Ml2Plugin, self).create_network(context,
>> >> network)], and as a result they cannot be used in the mechanism
>> >> drivers when needed.
>> >>
>> >> It looks like process_extensions was disabled when the fix for Bug
>> >> 1201957 was committed, and here is the change:
>> >> Any idea why it is disabled?
>> >>
>> >> --
>> >> Avoid performing extra query for fetching port security binding
>> >>
>> >> Bug 1201957
>> >>
>> >>
>> >> Add a relationship performing eager load in Port and Network
>> >>
>> >> models, thus preventing the 'extend' function from performing
>> >>
>> >> an extra database query.
>> >>
>> >> Also fixes a comment in securitygroups_db.py
>> >>
>> >>
>> >> Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa
>> >>
>> >> commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (parent ca421e7)
>

Re: [openstack-dev] [Neutron][ML2]

2014-03-18 Thread racha
Hi Mathieu,
   Sorry, I wasn't following the recent progress on ML2, and I was
effectively missing the right abstractions of the MDs in my off-topic
questions.
If I understand correctly, there will be no priority between the MDs
binding the same port, but an optional "port filter" could also be used so
that the first responding MD matching the filter will assign itself.
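
Roughly, as a simplified sketch of my understanding (not the actual
Ml2Plugin code):

    def bind_port(port_context, ordered_mech_drivers):
        # Each registered MD is tried in mechanism_drivers config order;
        # the first driver that successfully binds the port wins.
        for driver in ordered_mech_drivers:
            driver.bind_port(port_context)
            if port_context.is_bound():  # hypothetical helper
                return driver
        return None  # no driver could bind the port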

Thanks for your answer.

Best Regards,
Racha



On Mon, Mar 17, 2014 at 3:17 AM, Mathieu Rohon wrote:

> Hi racha,
>
> I don't think your topic has anything to do with Nader's topics.
> Please create another topic; it would be easier to follow.
> FYI, Robert Kukura is currently refactoring the MD binding; please
> have a look here: https://bugs.launchpad.net/neutron/+bug/1276391. As
> I understand it, there won't be a priority between MDs that can bind the
> same port. The first one to respond to the binding request will give its
> vif_type.
>
> Best,
>
> Mathieu
>
> On Fri, Mar 14, 2014 at 8:14 PM, racha  wrote:
> > Hi,
> >   Is it possible (in the latest upstream) to partition the same
> > integration bridge "br-int" into multiple isolated partitions (in terms
> > of lvid ranges, patch ports, etc.) between the OVS mechanism driver and
> > the ODL mechanism driver? And then how can we pass some details to the
> > Neutron API (as in the provider segmentation type/id/etc.) so that ML2
> > assigns a mechanism driver to the virtual network? The other alternative
> > I guess is to create another integration bridge managed by a different
> > Neutron instance? Probably I am missing something.
> >
> > Best Regards,
> > Racha
> >
> >
> > On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti 
> > wrote:
> >>
> >> 1) Does it mean an interim solution is to have our own plugin (and have
> >> all the changes in it) and declare it as core_plugin instead of
> >> Ml2Plugin?
> >>
> >> 2) The other issue, as I mentioned before, is that the extension(s)
> >> are not showing up in the result, for instance when create_network is
> >> called [result = super(Ml2Plugin, self).create_network(context,
> >> network)], and as a result they cannot be used in the mechanism
> >> drivers when needed.
> >>
> >> It looks like process_extensions was disabled when the fix for Bug
> >> 1201957 was committed, and here is the change:
> >> Any idea why it is disabled?
> >>
> >> --
> >> Avoid performing extra query for fetching port security binding
> >>
> >> Bug 1201957
> >>
> >>
> >> Add a relationship performing eager load in Port and Network
> >>
> >> models, thus preventing the 'extend' function from performing
> >>
> >> an extra database query.
> >>
> >> Also fixes a comment in securitygroups_db.py
> >>
> >>
> >> Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa
> >>
> >> commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (parent ca421e7)
> >> Salvatore Orlando (salv-orlando) authored 8 months ago
> >>
> >> neutron/db/db_base_plugin_v2.py:
> >>
> >> @@ -995,7 +995,7 @@ def create_network(self, context, network):
> >>                'status': constants.NET_STATUS_ACTIVE}
> >>            network = models_v2.Network(**args)
> >>            context.session.add(network)
> >> -          return self._make_network_dict(network)
> >> +          return self._make_network_dict(network,
> >> +                                         process_extensions=False)
> >>
> >>       def update_network(self, context, id, network):
> >>           n = network['network']
> >>
> >>
> >> ---
> >>
> >>
> >> Regards,
> >> Nader.
> >>
> >>
> >>
> >>
> >>
> >> On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura wrote:
> >>>
> >>>
> >>> On 3/7/14, 3:53 AM, Édouard Thuleau wrote:
> >>>
> >>> Yes, that sounds good to be able to load extensions from a mechanism
> >>> driver.
> >>>
> >>> But another problem I think we have with the ML2 plugin is the list
> >>> of extensions supported by default [1].
> >>

Re: [openstack-dev] [Neutron][ML2]

2014-03-14 Thread racha
Hi,
  Is it possible (in the latest upstream) to partition the same
integration bridge "br-int" into multiple isolated partitions (in terms of
lvid ranges, patch ports, etc.) between the OVS mechanism driver and the
ODL mechanism driver? And then how can we pass some details to the Neutron
API (as in the provider segmentation type/id/etc.) so that ML2 assigns a
mechanism driver to the virtual network? The other alternative I guess is
to create another integration bridge managed by a different Neutron
instance? Probably I am missing something. A configuration sketch follows.
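
For context, the only related knob I see today is the ordered driver list
in ml2_conf.ini, something like:

    [ml2]
    type_drivers = vlan,gre,vxlan
    tenant_network_types = vlan
    mechanism_drivers = opendaylight,openvswitch

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:100:199

but that list carries no per-network selection, so as far as I can tell the
segmentation details passed on net-create don't steer which MD ends up
binding the ports.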

Best Regards,
Racha


On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti wrote:

> 1) Does it mean an interim solution is to have our own plugin (and have
> all the changes in it) and declare it as core_plugin instead of Ml2Plugin?
>
> 2) The other issue, as I mentioned before, is that the extension(s) are
> not showing up in the result, for instance when create_network is called
> [result = super(Ml2Plugin, self).create_network(context, network)], and
> as a result they cannot be used in the mechanism drivers when needed.
>
> It looks like process_extensions was disabled when the fix for Bug 1201957
> was committed, and here is the change:
> Any idea why it is disabled?
>
> --
> Avoid performing extra query for fetching port security binding
>
> Bug 1201957
>
>
> Add a relationship performing eager load in Port and Network
>
> models, thus preventing the 'extend' function from performing
>
> an extra database query.
>
> Also fixes a comment in securitygroups_db.py
>
>
> Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa
>
> commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (parent ca421e7)
> Salvatore Orlando (salv-orlando) authored 8 months ago
>
> neutron/db/db_base_plugin_v2.py:
>
> @@ -995,7 +995,7 @@ def create_network(self, context, network):
>               'status': constants.NET_STATUS_ACTIVE}
>           network = models_v2.Network(**args)
>           context.session.add(network)
> -         return self._make_network_dict(network)
> +         return self._make_network_dict(network,
> +                                        process_extensions=False)
>
>      def update_network(self, context, id, network):
>          n = network['network']
>
> ---
>
>
> Regards,
> Nader.
>
>
>
>
>
> On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura wrote:
>
>>
>> On 3/7/14, 3:53 AM, Édouard Thuleau wrote:
>>
>> Yes, that sounds good to be able to load extensions from a mechanism
>> driver.
>>
>> But another problem I think we have with the ML2 plugin is the list of
>> extensions supported by default [1].
>> The extensions should only be loaded by MDs, and the ML2 plugin should
>> only implement the Neutron core API.
>>
>>
>> Keep in mind that ML2 supports multiple MDs simultaneously, so no single
>> MD can really control what set of extensions are active. Drivers need to be
>> able to load private extensions that only pertain to that driver, but we
>> also need to be able to share common extensions across subsets of drivers.
>> Furthermore, the semantics of the extensions need to be correct in the face
>> of multiple co-existing drivers, some of which know about the extension,
>> and some of which don't. Getting this properly defined and implemented
>> seems like a good goal for juno.
>>
>> -Bob
>>
>>
>>
>>  Any thoughts?
>> Édouard.
>>
>>  [1]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87
>>
>>
>>
>> On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki  wrote:
>>
>>> Hi,
>>>
>>> I think it is better to continue the discussion here. It is a good log
>>> :-)
>>>
>>> Eugene and I talked about a related topic (allowing drivers to load
>>> extensions) at the Icehouse Summit,
>>> but I could not find enough time to work on it during Icehouse.
>>> I am still interested in implementing it and will register a blueprint
>>> on it.
>>>
>>> The etherpad from the Icehouse summit has baseline thoughts on how to
>>> achieve it.
>>> https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
>>> I hope it is a good start point of the discussion.
>>>
>>> Thanks,
>>> Akihiro
>>>
>>> On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti 
>>> wrote:
>>> > Hi Kyle,
>>> >
>>> > Just wanted to clarify: Should I continue using this mailing list to
>>> post my
>