Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-07-02 Thread Mathieu Rohon
Hi, sorry for the late reply, I was out of office for 3 weeks.

I also love the idea of having a single thread in charge of writing
dataplane actions.
As Zang described, this thread would read events from a queue, which could
be populated by agent drivers.
The main goal would be to avoid desynchronization: while a first
greenthread yields during a dataplane action (for instance, while running
an ofctl command), another greenthread could process an action that should
only happen after the first one has finished. Enqueuing those actions and
processing them in a single thread avoids such behavior.
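A minimal sketch of this single-writer pattern (illustrative names only, using a plain thread and queue rather than the agent's greenthreads):

```python
import queue
import threading

# One queue, one consumer: dataplane actions are applied strictly in the
# order they were enqueued, so two actions can never interleave the way
# two yielding greenthreads can.
actions = queue.Queue()
applied = []  # stands in for the real dataplane (e.g. ofctl invocations)

def worker():
    while True:
        action = actions.get()
        if action is None:  # sentinel: shut the worker down
            break
        applied.append(action)  # a real agent would run the command here

t = threading.Thread(target=worker)
t.start()

# Any number of producers (agent drivers) may enqueue concurrently;
# only the worker ever touches the dataplane.
for a in ["add-flow table=1", "add-flow table=2", "del-flow table=1"]:
    actions.put(a)
actions.put(None)
t.join()
print(applied)
```

The ordering guarantee comes from the single consumer, not from any locking in the producers.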

I also think this is orthogonal to the agent/resource architecture: agent
drivers could populate the queue, and the singleton thread would call the
correct resource driver, depending on the impacted port, to interpret the
order placed in the queue.

regards
Mathieu


On Fri, Jun 20, 2014 at 10:38 PM, Mohammad Banikazemi  wrote:

> Zang, thanks for your comments.
>
> I think what you are suggesting is perhaps orthogonal to having Resource
> and Agent drivers. By that I mean we can have what you are suggesting and
> keep the Resource and Agent drivers. The reason for having Resource drivers
> is to provide the means for extending, in a modular way, what an agent does
> in response to, say, changes to a port. We can restrict access to Resource
> drivers to the events loop only. That restriction is not there in the
> current model, but would adding it address your concerns? What are your
> thoughts? As Salvatore has mentioned in his email in this thread, that is
> what the current OVS agent does wrt port updates. That is, updates to
> ports get processed from the events loop.
>
> As a separate but relevant issue, we can and should discuss whether having
> the Resource and Agent drivers is useful in making the agent more modular.
> The idea behind using these drivers is to have the agent use a collection
> of drivers rather than mixin classes, so we can more easily select which
> functionalities an agent supports (and how) and reuse as much as we can
> across L2 agents. Are there better ways of achieving this? Any thoughts?
>
> Best,
>
> Mohammad
>
>
>
>
> From: Zang MingJie 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>,
> Date: 06/19/2014 06:27 AM
>
> Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture
> --
>
>
>
> Hi:
>
> I don't like the idea of ResourceDriver and AgentDriver. I suggested
> using a singleton worker thread to manage all underlying setup, so the
> driver should do nothing other than fire an update event to the worker.
>
> The worker thread might look like this:
>
> # the only variable storing all local state that survives between
> # different events, including lvm, fdb or whatever
> state = {}
>
> # loop forever
> while True:
>     event = ev_queue.pop()
>     if not event:
>         sleep()  # may be interrupted when a new event comes
>         continue
>
>     old_state = state
>     new_state = event.merge_state(state)
>
>     if event.is_ovsdb_changed():
>         if event.is_tunnel_changed():
>             setup_tunnel(new_state, old_state, event)
>         if event.is_port_tags_changed():
>             setup_port_tags(new_state, old_state, event)
>
>     if event.is_flow_changed():
>         if event.is_flow_table_1_changed():
>             setup_flow_table_1(new_state, old_state, event)
>         if event.is_flow_table_2_changed():
>             setup_flow_table_2(new_state, old_state, event)
>         if event.is_flow_table_3_changed():
>             setup_flow_table_3(new_state, old_state, event)
>         if event.is_flow_table_4_changed():
>             setup_flow_table_4(new_state, old_state, event)
>
>     if event.is_iptable_changed():
>         if event.is_iptable_nat_changed():
>             setup_iptable_nat(new_state, old_state, event)
>         if event.is_iptable_filter_changed():
>             setup_iptable_filter(new_state, old_state, event)
>
>     state = new_state
>
> When any part has been changed by an event, the corresponding setup_xxx
> function rebuilds the whole part, then uses a restore command such as
> `iptables-restore` or `ovs-ofctl replace-flows` to reset the whole part.
>

Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-20 Thread Mohammad Banikazemi

Zang, thanks for your comments.

I think what you are suggesting is perhaps orthogonal to having Resource
and Agent drivers. By that I mean we can have what you are suggesting and
keep the Resource and Agent drivers. The reason for having Resource drivers
is to provide the means for extending, in a modular way, what an agent does
in response to, say, changes to a port. We can restrict access to Resource
drivers to the events loop only. That restriction is not there in the
current model, but would adding it address your concerns? What are your
thoughts? As Salvatore has mentioned in his email in this thread, that is
what the current OVS agent does wrt port updates. That is, updates to
ports get processed from the events loop.

As a separate but relevant issue, we can and should discuss whether having
the Resource and Agent drivers is useful in making the agent more modular.
The idea behind using these drivers is to have the agent use a collection
of drivers rather than mixin classes, so we can more easily select which
functionalities an agent supports (and how) and reuse as much as we can
across L2 agents. Are there better ways of achieving this? Any thoughts?

Best,

Mohammad





From:   Zang MingJie 
To: "OpenStack Development Mailing List (not for usage questions)"
,
Date:   06/19/2014 06:27 AM
Subject:    Re: [openstack-dev] [Neutron][ML2] Modular L2 agent
    architecture



Hi:

I don't like the idea of ResourceDriver and AgentDriver. I suggested
using a singleton worker thread to manage all underlying setup, so the
driver should do nothing other than fire an update event to the worker.

The worker thread might look like this:

# the only variable storing all local state that survives between
# different events, including lvm, fdb or whatever
state = {}

# loop forever
while True:
    event = ev_queue.pop()
    if not event:
        sleep()  # may be interrupted when a new event comes
        continue

    old_state = state
    new_state = event.merge_state(state)

    if event.is_ovsdb_changed():
        if event.is_tunnel_changed():
            setup_tunnel(new_state, old_state, event)
        if event.is_port_tags_changed():
            setup_port_tags(new_state, old_state, event)

    if event.is_flow_changed():
        if event.is_flow_table_1_changed():
            setup_flow_table_1(new_state, old_state, event)
        if event.is_flow_table_2_changed():
            setup_flow_table_2(new_state, old_state, event)
        if event.is_flow_table_3_changed():
            setup_flow_table_3(new_state, old_state, event)
        if event.is_flow_table_4_changed():
            setup_flow_table_4(new_state, old_state, event)

    if event.is_iptable_changed():
        if event.is_iptable_nat_changed():
            setup_iptable_nat(new_state, old_state, event)
        if event.is_iptable_filter_changed():
            setup_iptable_filter(new_state, old_state, event)

    state = new_state

When any part has been changed by an event, the corresponding setup_xxx
function rebuilds the whole part, then uses a restore command such as
`iptables-restore` or `ovs-ofctl replace-flows` to reset the whole part.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-19 Thread Isaku Yamahata
On Thu, Jun 19, 2014 at 11:05:53AM -0400,
Terry Wilson  wrote:

> - Original Message -
> > What's the progress by Terry Wilson?
> > If not much, I'm willing to file blueprint/spec and drive it.
> > 
> > thanks,
> 
> I've been working on some proof-of-concept code to help flesh out ideas for 
> writing the spec. I'd talked to Maru and he mentioned that he didn't think 
> that the official OVS python library was a good base for this (the one that 
> ryu uses). I don't remember what all of the reasons were, though. It isn't 
> particularly well documented, but that can always be remedied. Does anyone 
> else have any experience with the official OVS python API who could speak to 
> its quality/stability/usefulness? It looked fairly full-featured.

Interesting. Can you share the technical reason? Maru? I'm curious.
At least I fixed the issues I hit when I wrote ovs_vsctl.py.


Thanks,

> Terry
>  
> > On Wed, Jun 18, 2014 at 07:00:59PM +0900,
> > Isaku Yamahata  wrote:
> > 
> > > Hi. Ryu provides the ovs_vsctl.py library, which is a Python
> > > equivalent of the ovs-vsctl command. It speaks the OVSDB protocol.
> > > https://github.com/osrg/ryu/blob/master/ryu/lib/ovs/vsctl.py
> > > 
> > > So with the library, it's a mostly mechanical change to convert
> > > ovs_lib.py, I think.
> > > I'm not aware of any other similar library written in Python.
> 
> Most of ryu's library is implemented on top of the official OVS python stuff: 
> https://github.com/openvswitch/ovs/tree/master/python/ovs
> 
> Terry
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Isaku Yamahata 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-19 Thread Terry Wilson
- Original Message -
> What's the progress by Terry Wilson?
> If not much, I'm willing to file blueprint/spec and drive it.
> 
> thanks,

I've been working on some proof-of-concept code to help flesh out ideas for 
writing the spec. I'd talked to Maru and he mentioned that he didn't think that 
the official OVS python library was a good base for this (the one that ryu 
uses). I don't remember what all of the reasons were, though. It isn't 
particularly well documented, but that can always be remedied. Does anyone else 
have any experience with the official OVS python API who could speak to its 
quality/stability/usefulness? It looked fairly full-featured.

Terry
 
> On Wed, Jun 18, 2014 at 07:00:59PM +0900,
> Isaku Yamahata  wrote:
> 
> > Hi. Ryu provides the ovs_vsctl.py library, which is a Python
> > equivalent of the ovs-vsctl command. It speaks the OVSDB protocol.
> > https://github.com/osrg/ryu/blob/master/ryu/lib/ovs/vsctl.py
> > 
> > So with the library, it's a mostly mechanical change to convert
> > ovs_lib.py, I think.
> > I'm not aware of any other similar library written in Python.

Most of ryu's library is implemented on top of the official OVS python stuff: 
https://github.com/openvswitch/ovs/tree/master/python/ovs

Terry

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-19 Thread Zang MingJie
Hi:

I don't like the idea of ResourceDriver and AgentDriver. I suggested
using a singleton worker thread to manage all underlying setup, so the
driver should do nothing other than fire an update event to the worker.

The worker thread might look like this:

# the only variable storing all local state that survives between
# different events, including lvm, fdb or whatever
state = {}

# loop forever
while True:
    event = ev_queue.pop()
    if not event:
        sleep()  # may be interrupted when a new event comes
        continue

    old_state = state
    new_state = event.merge_state(state)

    if event.is_ovsdb_changed():
        if event.is_tunnel_changed():
            setup_tunnel(new_state, old_state, event)
        if event.is_port_tags_changed():
            setup_port_tags(new_state, old_state, event)

    if event.is_flow_changed():
        if event.is_flow_table_1_changed():
            setup_flow_table_1(new_state, old_state, event)
        if event.is_flow_table_2_changed():
            setup_flow_table_2(new_state, old_state, event)
        if event.is_flow_table_3_changed():
            setup_flow_table_3(new_state, old_state, event)
        if event.is_flow_table_4_changed():
            setup_flow_table_4(new_state, old_state, event)

    if event.is_iptable_changed():
        if event.is_iptable_nat_changed():
            setup_iptable_nat(new_state, old_state, event)
        if event.is_iptable_filter_changed():
            setup_iptable_filter(new_state, old_state, event)

    state = new_state

When any part has been changed by an event, the corresponding setup_xxx
function rebuilds the whole part, then uses a restore command such as
`iptables-restore` or `ovs-ofctl replace-flows` to reset the whole part.
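The rebuild-and-restore approach described above could look roughly like this for the NAT table. All helper names here are hypothetical, and the real agent would pipe the rendered ruleset to `iptables-restore` rather than returning it:

```python
# Hypothetical helpers sketching "rebuild the whole part": derive the
# complete NAT ruleset from the new state, then hand it to iptables-restore,
# which replaces the table in one shot instead of editing rule by rule.
def render_nat_rules(state):
    lines = ["*nat"]
    for src, ext_ip in sorted(state.get("snat", {}).items()):
        lines.append("-A POSTROUTING -s %s -j SNAT --to-source %s"
                     % (src, ext_ip))
    lines.append("COMMIT")
    return "\n".join(lines) + "\n"

def apply_nat(state, dry_run=True):
    ruleset = render_nat_rules(state)
    if not dry_run:
        # In a real agent (needs root and iptables installed):
        # subprocess.run(["iptables-restore", "--table", "nat"],
        #                input=ruleset, text=True, check=True)
        pass
    return ruleset

ruleset = apply_nat({"snat": {"10.0.0.0/24": "203.0.113.5"}})
print(ruleset)
```

Because the whole table is regenerated from `state` every time, a partially failed previous update cannot leave stale rules behind.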

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-18 Thread henry hly
The OVS agent manipulates not only the OVS flow table but also the Linux
network stack, which is not so easily replaced by a pure OpenFlow
controller today.
Fastpath/slowpath separation sounds good, but it is really a nightmare for
applications with highly concurrent connections if we push L4 flows into
OVS (in our testing, the vswitchd daemon always stopped working in this
case).

Someday, when OVS can handle all the L2-L4 rules in the kernel without
involving the userspace classifier, a pure OpenFlow controller will be
able to replace the agent-based solution. OVS hooking into netfilter
conntrack may come this year, but it is not enough yet.


On Wed, Jun 18, 2014 at 12:56 AM, Armando M.  wrote:

> just a provocative thought: If we used the ovsdb connection instead, do we
> really need an L2 agent :P?
>
>
> On 17 June 2014 18:38, Kyle Mestery  wrote:
>
>> Another area of improvement for the agent would be to move away from
>> executing CLIs for port commands and instead use OVSDB. Terry Wilson
>> and I talked about this, and re-writing ovs_lib to use an OVSDB
>> connection instead of the CLI methods would be a huge improvement
>> here. I'm not sure if Terry was going to move forward with this, but
>> I'd be in favor of this for Juno if he or someone else wants to move
>> in this direction.
>>
>> Thanks,
>> Kyle
>>
>> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando 
>> wrote:
>> > We've started doing this in a slightly more reasonable way for icehouse.
>> > What we've done is:
>> > - remove unnecessary notifications from the server
>> > - process all port-related events, whether triggered via RPC or via the
>> >   monitor, in one place
>> >
>> > Obviously there is always a lot of room for improvement, and I agree
>> > something along the lines of what Zang suggests would be more
>> > maintainable and ensure faster event processing, as well as making it
>> > easier to have some form of reliability on event processing.
>> >
>> > I was considering doing something for the ovs-agent again in Juno, but
>> > since we're moving towards a unified agent, I think any new "big" ticket
>> > should address this effort.
>> >
>> > Salvatore
>> >
>> >
>> > On 17 June 2014 13:31, Zang MingJie  wrote:
>> >>
>> >> Hi:
>> >>
>> >> Awesome! Currently we are suffering from lots of bugs in the
>> >> ovs-agent, and also intend to rebuild a more stable, flexible agent.
>> >>
>> >> Taking the experience of the ovs-agent bugs into account, I think the
>> >> concurrency problem is also very important: the agent gets lots of
>> >> events from different greenlets, the rpc, the ovs monitor and the main
>> >> loop. I'd suggest serializing all events into a queue, then processing
>> >> them in a dedicated thread. The thread checks the events one by one,
>> >> in order, resolves what has changed, then applies the corresponding
>> >> changes. If any error occurs in the thread, discard the event
>> >> currently being processed and issue a fresh-start event, which resets
>> >> everything and then applies the correct settings.
>> >>
>> >> The threading model is so important, and may prevent tons of bugs in
>> >> future development, that we should describe it clearly in the
>> >> architecture.
>> >>
>> >>
>> >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
>> >> wrote:
>> >> > Following the discussions in the ML2 subgroup weekly meetings, I have
>> >> > added
>> >> > more information on the etherpad [1] describing the proposed
>> >> > architecture
>> >> > for modular L2 agents. I have also posted some code fragments at [2]
>> >> > sketching the implementation of the proposed architecture. Please
>> have a
>> >> > look when you get a chance and let us know if you have any comments.
>> >> >
>> >> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
>> >> > [2] https://review.openstack.org/#/c/99187/
>> >> >
>> >> >
>> >> > ___
>> >> > OpenStack-dev mailing list
>> >> > OpenStack-dev@lists.openstack.org
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
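Zang's suggestion quoted above, serializing events into one thread and falling back to a full resync when one fails, can be sketched as follows (all names are illustrative, not agent code):

```python
import queue

class ResyncNeeded(Exception):
    pass

def process(event, state):
    # Stand-in for real event handling; a real handler would program the
    # dataplane and may raise on any failure.
    if event == "bad":
        raise ResyncNeeded()
    return state + [event]

def event_loop(events, full_resync):
    state = []
    while not events.empty():
        ev = events.get()
        try:
            state = process(ev, state)
        except Exception:
            # Discard the failing event and rebuild from scratch:
            # the "fresh start event" that resets everything.
            state = full_resync()
    return state

q = queue.Queue()
for ev in ["a", "b", "bad", "c"]:
    q.put(ev)
result = event_loop(q, full_resync=lambda: ["resynced"])
print(result)  # ['resynced', 'c']
```

The key property is that an error never leaves the agent in a half-applied state: either an event applies cleanly, or everything is rebuilt.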


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-18 Thread Isaku Yamahata
What's the progress by Terry Wilson?
If not much, I'm willing to file blueprint/spec and drive it.

thanks,

On Wed, Jun 18, 2014 at 07:00:59PM +0900,
Isaku Yamahata  wrote:

> Hi. Ryu provides the ovs_vsctl.py library, which is a Python
> equivalent of the ovs-vsctl command. It speaks the OVSDB protocol.
> https://github.com/osrg/ryu/blob/master/ryu/lib/ovs/vsctl.py
> 
> So with the library, it's a mostly mechanical change to convert
> ovs_lib.py, I think.
> I'm not aware of any other similar library written in Python.
> 
> thanks,
> Isaku Yamahata
> 
> 
> On Tue, Jun 17, 2014 at 11:38:36AM -0500,
> Kyle Mestery  wrote:
> 
> > Another area of improvement for the agent would be to move away from
> > executing CLIs for port commands and instead use OVSDB. Terry Wilson
> > and I talked about this, and re-writing ovs_lib to use an OVSDB
> > connection instead of the CLI methods would be a huge improvement
> > here. I'm not sure if Terry was going to move forward with this, but
> > I'd be in favor of this for Juno if he or someone else wants to move
> > in this direction.
> > 
> > Thanks,
> > Kyle
> > 
> > On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando  
> > wrote:
> > > We've started doing this in a slightly more reasonable way for icehouse.
> > > What we've done is:
> > > - remove unnecessary notification from the server
> > > - process all port-related events, either trigger via RPC or via monitor 
> > > in
> > > one place
> > >
> > > Obviously there is always a lot of room for improvement, and I agree
> > > something along the lines of what Zang suggests would be more maintainable
> > > and ensure faster event processing as well as making it easier to have 
> > > some
> > > form of reliability on event processing.
> > >
> > > I was considering doing something for the ovs-agent again in Juno, but
> > > since we're moving towards a unified agent, I think any new "big" ticket
> > > should address this effort.
> > >
> > > Salvatore
> > >
> > >
> > > On 17 June 2014 13:31, Zang MingJie  wrote:
> > >>
> > >> Hi:
> > >>
> > >> Awesome! Currently we are suffering from lots of bugs in the
> > >> ovs-agent, and also intend to rebuild a more stable, flexible agent.
> > >>
> > >> Taking the experience of the ovs-agent bugs into account, I think the
> > >> concurrency problem is also very important: the agent gets lots of
> > >> events from different greenlets, the rpc, the ovs monitor and the main
> > >> loop. I'd suggest serializing all events into a queue, then processing
> > >> them in a dedicated thread. The thread checks the events one by one,
> > >> in order, resolves what has changed, then applies the corresponding
> > >> changes. If any error occurs in the thread, discard the event
> > >> currently being processed and issue a fresh-start event, which resets
> > >> everything and then applies the correct settings.
> > >>
> > >> The threading model is so important, and may prevent tons of bugs in
> > >> future development, that we should describe it clearly in the
> > >> architecture.
> > >>
> > >>
> > >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
> > >> wrote:
> > >> > Following the discussions in the ML2 subgroup weekly meetings, I have
> > >> > added
> > >> > more information on the etherpad [1] describing the proposed
> > >> > architecture
> > >> > for modular L2 agents. I have also posted some code fragments at [2]
> > >> > sketching the implementation of the proposed architecture. Please have 
> > >> > a
> > >> > look when you get a chance and let us know if you have any comments.
> > >> >
> > >> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
> > >> > [2] https://review.openstack.org/#/c/99187/
> > >> >
> > >> >
> > >> > ___
> > >> > OpenStack-dev mailing list
> > >> > OpenStack-dev@lists.openstack.org
> > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >> >
> > >>
> > >> ___
> > >> OpenStack-dev mailing list
> > >> OpenStack-dev@lists.openstack.org
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> -- 
> Isaku Yamahata 

-- 
Isaku Yamahata 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-18 Thread Isaku Yamahata
Hi. Ryu provides the ovs_vsctl.py library, which is a Python
equivalent of the ovs-vsctl command. It speaks the OVSDB protocol.
https://github.com/osrg/ryu/blob/master/ryu/lib/ovs/vsctl.py

So with the library, it's a mostly mechanical change to convert
ovs_lib.py, I think.
I'm not aware of any other similar library written in Python.

thanks,
Isaku Yamahata


On Tue, Jun 17, 2014 at 11:38:36AM -0500,
Kyle Mestery  wrote:

> Another area of improvement for the agent would be to move away from
> executing CLIs for port commands and instead use OVSDB. Terry Wilson
> and I talked about this, and re-writing ovs_lib to use an OVSDB
> connection instead of the CLI methods would be a huge improvement
> here. I'm not sure if Terry was going to move forward with this, but
> I'd be in favor of this for Juno if he or someone else wants to move
> in this direction.
> 
> Thanks,
> Kyle
> 
> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando  
> wrote:
> > We've started doing this in a slightly more reasonable way for icehouse.
> > What we've done is:
> > - remove unnecessary notification from the server
> > - process all port-related events, either trigger via RPC or via monitor in
> > one place
> >
> > Obviously there is always a lot of room for improvement, and I agree
> > something along the lines of what Zang suggests would be more maintainable
> > and ensure faster event processing as well as making it easier to have some
> > form of reliability on event processing.
> >
> > I was considering doing something for the ovs-agent again in Juno, but since
> > we're moving towards a unified agent, I think any new "big" ticket should
> > address this effort.
> >
> > Salvatore
> >
> >
> > On 17 June 2014 13:31, Zang MingJie  wrote:
> >>
> >> Hi:
> >>
> >> Awesome! Currently we are suffering from lots of bugs in the
> >> ovs-agent, and also intend to rebuild a more stable, flexible agent.
> >>
> >> Taking the experience of the ovs-agent bugs into account, I think the
> >> concurrency problem is also very important: the agent gets lots of
> >> events from different greenlets, the rpc, the ovs monitor and the main
> >> loop. I'd suggest serializing all events into a queue, then processing
> >> them in a dedicated thread. The thread checks the events one by one,
> >> in order, resolves what has changed, then applies the corresponding
> >> changes. If any error occurs in the thread, discard the event
> >> currently being processed and issue a fresh-start event, which resets
> >> everything and then applies the correct settings.
> >>
> >> The threading model is so important, and may prevent tons of bugs in
> >> future development, that we should describe it clearly in the
> >> architecture.
> >>
> >>
> >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
> >> wrote:
> >> > Following the discussions in the ML2 subgroup weekly meetings, I have
> >> > added
> >> > more information on the etherpad [1] describing the proposed
> >> > architecture
> >> > for modular L2 agents. I have also posted some code fragments at [2]
> >> > sketching the implementation of the proposed architecture. Please have a
> >> > look when you get a chance and let us know if you have any comments.
> >> >
> >> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
> >> > [2] https://review.openstack.org/#/c/99187/
> >> >
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Isaku Yamahata 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
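The "mechanical change" Isaku describes, converting ovs_lib from shelling out to ovs-vsctl to calling an OVSDB library in-process, might look roughly like this. Both sides are simplified sketches, not the real ovs_lib or ryu interfaces:

```python
# Illustration of the CLI-to-library conversion: today's ovs_lib forks
# ovs-vsctl for every operation; with a library speaking OVSDB the same
# operation becomes an in-process transaction.
import subprocess

def add_port_cli(bridge, port, execute=subprocess.check_call):
    # CLI style: fork ovs-vsctl for every operation.
    cmd = ["ovs-vsctl", "--", "--may-exist", "add-port", bridge, port]
    execute(cmd)
    return cmd

# With an OVSDB library, the call would instead build a transaction against
# the Bridge/Port tables - no subprocess, and the result or error comes
# back programmatically (hypothetical interface):
#
#   txn = ovsdb.transaction()
#   txn.add(ovsdb.ports.insert(bridge=bridge, name=port))
#   result = txn.commit()

# Capture instead of executing, so this runs without Open vSwitch installed.
calls = []
add_port_cli("br-int", "tap0", execute=calls.append)
print(calls[0])
```

The `execute` hook also shows why the conversion is mostly mechanical: each CLI command maps to one table operation, so the call sites barely change.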


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-18 Thread Isaku Yamahata
No. ovs_lib invokes both ovs-vsctl and ovs-ofctl.
ovs-vsctl speaks OVSDB protocol, ovs-ofctl speaks OF-wire.

thanks,

On Tue, Jun 17, 2014 at 01:25:59PM -0500,
Kyle Mestery  wrote:

> I don't think so. Once we implement the OVSDB support, we will
> deprecate using the CLI commands in ovs_lib.
> 
> On Tue, Jun 17, 2014 at 12:50 PM, racha  wrote:
> > Hi,
> > Does it make sense also to have the choice between ovs-ofctl CLI and a
> > direct OF1.3 connection too in the ovs-agent?
> >
> > Best Regards,
> > Racha
> >
> >
> >
> > On Tue, Jun 17, 2014 at 10:25 AM, Narasimhan, Vivekanandan
> >  wrote:
> >>
> >>
> >>
> >> Managing the ports and plumbing logic is today driven by L2 Agent, with
> >> little assistance
> >>
> >> from controller.
> >>
> >>
> >>
> >> If we plan to move that functionality to the controller,  the controller
> >> has to be more
> >>
> >> heavy weight (both hardware and software)  since it has to do the job of
> >> L2 Agent for all
> >>
> >> the compute servers in the cloud. We need to re-verify all scale numbers
> >> for the controller
> >>
> >> on POC’ing of such a change.
> >>
> >>
> >>
> >> That said, replacing CLI with direct OVSDB calls in the L2 Agent is
> >> certainly a good direction.
> >>
> >>
> >>
> >> Today, the OVS Agent invokes flow calls of OVS-Lib but has no way to
> >> follow up on the success or failure of such invocations. Nor is there
> >> any guarantee that all such flow invocations will be executed by the
> >> third process forked by OVS-Lib to run the CLI.
> >>
> >>
> >>
> >> When we transition to OVSDB calls which are more programmatic in nature,
> >> we can
> >>
> >> enhance the Flow API (OVS-Lib) to provide more fine grained errors/return
> >> codes (or content)
> >>
> >> and ovs-agent (and even other components) can act on such return state
> >> more
> >>
> >> intelligently/appropriately.
> >>
> >>
> >>
> >> --
> >>
> >> Thanks,
> >>
> >>
> >>
> >> Vivek
> >>
> >>
> >>
> >>
> >>
> >> From: Armando M. [mailto:arma...@gmail.com]
> >> Sent: Tuesday, June 17, 2014 10:26 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture
> >>
> >>
> >>
> >> just a provocative thought: If we used the ovsdb connection instead, do we
> >> really need an L2 agent :P?
> >>
> >>
> >>
> >> On 17 June 2014 18:38, Kyle Mestery  wrote:
> >>
> >> Another area of improvement for the agent would be to move away from
> >> executing CLIs for port commands and instead use OVSDB. Terry Wilson
> >> and I talked about this, and re-writing ovs_lib to use an OVSDB
> >> connection instead of the CLI methods would be a huge improvement
> >> here. I'm not sure if Terry was going to move forward with this, but
> >> I'd be in favor of this for Juno if he or someone else wants to move
> >> in this direction.
> >>
> >> Thanks,
> >> Kyle
> >>
> >>
> >> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando 
> >> wrote:
> >> > We've started doing this in a slightly more reasonable way for icehouse.
> >> > What we've done is:
> >> > - remove unnecessary notification from the server
> >> > - process all port-related events, either trigger via RPC or via monitor
> >> > in
> >> > one place
> >> >
> >> > Obviously there is always a lot of room for improvement, and I agree
> >> > something along the lines of what Zang suggests would be more
> >> > maintainable
> >> > and ensure faster event processing as well as making it easier to have
> >> > some
> >> > form of reliability on event processing.
> >> >
> >> > I was considering doing something for the ovs-agent again in Juno, but
> >> > since we're moving towards a unified agent, I think any new "big" ticket
> >> > should address this effort.
> >> >

Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Armando M.
Mine wasn't really a serious suggestion; Neutron's controlling logic is
already bloated as it is, and my personal opinion would be in favor of a
leaner Neutron server rather than a more complex one. Adding more
controller-like logic to it certainly goes against that direction :)

Having said that, and as Vivek pointed out, using OVSDB gives us finer
control and the ability to react more effectively. However, with the
current server-agent RPC framework there is no way of leveraging that, so
in the grand scheme of things I'd rather see it prioritized lower rather
than higher, giving precedence to rearchitecting the framework first.

Armando
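The finer-grained control Vivek and Armando refer to, acting on per-operation results instead of fire-and-forget CLI calls, might be sketched like this. The `Result` type, the retry policy and `flaky_commit` are all illustrative assumptions; a real OVSDB transaction API would supply its own result objects:

```python
# Sketch of acting on per-operation return state, instead of the current
# fire-and-forget CLI invocations. Everything here is hypothetical.
from dataclasses import dataclass

@dataclass
class Result:
    ok: bool
    error: str = ""

def apply_flow(flow, commit):
    # Retry once on a transient error, then surface the failure so the
    # agent can schedule a full resync instead of silently diverging.
    res = Result(ok=False, error="not attempted")
    for attempt in range(2):
        res = commit(flow)
        if res.ok:
            return "applied"
    return "resync-needed:" + res.error

# A commit stub that fails on the first call and succeeds on the second,
# to exercise the retry path.
fail_once = {"n": 0}
def flaky_commit(flow):
    fail_once["n"] += 1
    return Result(ok=fail_once["n"] > 1, error="try again")

status = apply_flow("table=1,actions=normal", flaky_commit)
print(status)  # applied
```

The point is only that a programmatic return value gives the agent a decision point the CLI path never offered.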


On 17 June 2014 19:25, Narasimhan, Vivekanandan <
vivekanandan.narasim...@hp.com> wrote:

>
>
> Managing the ports and plumbing logic is today driven by L2 Agent, with
> little assistance
>
> from controller.
>
>
>
> If we plan to move that functionality to the controller,  the controller
> has to be more
>
> heavy weight (both hardware and software)  since it has to do the job of
> L2 Agent for all
>
> the compute servers in the cloud. We need to re-verify all scale numbers
> for the controller
>
> on POC’ing of such a change.
>
>
>
> That said, replacing CLI with direct OVSDB calls in the L2 Agent is
> certainly a good direction.
>
>
>
> Today, the OVS Agent invokes flow calls of OVS-Lib but has no way to
> follow up on the success or failure of such invocations. Nor is there
> any guarantee that all such flow invocations will be executed by the
> third process forked by OVS-Lib to run the CLI.
>
>
>
> When we transition to OVSDB calls which are more programmatic in nature,
> we can
>
> enhance the Flow API (OVS-Lib) to provide more fine grained errors/return
> codes (or content)
>
> and ovs-agent (and even other components) can act on such return state
> more
>
> intelligently/appropriately.
>
>
>
> --
>
> Thanks,
>
>
>
> Vivek
>
>
>
>
>
> *From:* Armando M. [mailto:arma...@gmail.com]
> *Sent:* Tuesday, June 17, 2014 10:26 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][ML2] Modular L2 agent
> architecture
>
>
>
> just a provocative thought: If we used the ovsdb connection instead, do we
> really need an L2 agent :P?
>
>
>
> On 17 June 2014 18:38, Kyle Mestery  wrote:
>
> Another area of improvement for the agent would be to move away from
> executing CLIs for port commands and instead use OVSDB. Terry Wilson
> and I talked about this, and re-writing ovs_lib to use an OVSDB
> connection instead of the CLI methods would be a huge improvement
> here. I'm not sure if Terry was going to move forward with this, but
> I'd be in favor of this for Juno if he or someone else wants to move
> in this direction.
>
> Thanks,
> Kyle
>
>
> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando 
> wrote:
> > We've started doing this in a slightly more reasonable way for icehouse.
> > What we've done is:
> > - remove unnecessary notification from the server
> > - process all port-related events, either trigger via RPC or via monitor
> in
> > one place
> >
> > Obviously there is always a lot of room for improvement, and I agree
> > something along the lines of what Zang suggests would be more
> maintainable
> > and ensure faster event processing as well as making it easier to have
> some
> > form of reliability on event processing.
> >
> > I was considering doing something for the ovs-agent again in Juno, but
> since
> > we've moving towards a unified agent, I think any new "big" ticket should
> > address this effort.
> >
> > Salvatore
> >
> >
> > On 17 June 2014 13:31, Zang MingJie  wrote:
> >>
> >> Hi:
> >>
> >> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
> >> intent to rebuild a more stable flexible agent.
> >>
> >> Taking the experience of ovs-agent bugs, I think the concurrency
> >> problem is also a very important problem, the agent gets lots of event
> >> from different greenlets, the rpc, the ovs monitor or the main loop.
> >> I'd suggest to serialize all event to a queue, then process events in
> >> a dedicated thread. The thread check the events one by one ordered,
> >> and resolve what has been changed, then apply the corresponding
> >> changes. If there is any error occurred in the thread, discard the
> >> current processing event, do a fresh start event, which reset
> >> everything, then apply the correct settings.

Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Kyle Mestery
I don't think so. Once we implement the OVSDB support, we will
deprecate using the CLI commands in ovs_lib.

On Tue, Jun 17, 2014 at 12:50 PM, racha  wrote:
> Hi,
> Does it make sense also to have the choice between ovs-ofctl CLI and a
> direct OF1.3 connection too in the ovs-agent?
>
> Best Regards,
> Racha
>
>
>
> On Tue, Jun 17, 2014 at 10:25 AM, Narasimhan, Vivekanandan
>  wrote:
>>
>>
>>
>> Managing the ports and plumbing logic is today driven by L2 Agent, with
>> little assistance
>>
>> from controller.
>>
>>
>>
>> If we plan to move that functionality to the controller,  the controller
>> has to be more
>>
>> heavy weight (both hardware and software)  since it has to do the job of
>> L2 Agent for all
>>
>> the compute servers in the cloud. , We need to re-verify all scale numbers
>> for the controller
>>
>> on POC’ing of such a change.
>>
>>
>>
>> That said, replacing CLI with direct OVSDB calls in the L2 Agent is
>> certainly a good direction.
>>
>>
>>
>> Today, OVS Agent invokes flow calls of OVS-Lib but has no idea (or
>> processing) to follow up
>>
>> on success or failure of such invocations.  Nor there is certain guarantee
>> that all such
>>
>> flow invocations would be executed by the third-process fired by OVS-Lib
>> to execute CLI.
>>
>>
>>
>> When we transition to OVSDB calls which are more programmatic in nature,
>> we can
>>
>> enhance the Flow API (OVS-Lib) to provide more fine grained errors/return
>> codes (or content)
>>
>> and ovs-agent (and even other components) can act on such return state
>> more
>>
>> intelligently/appropriately.
>>
>>
>>
>> --
>>
>> Thanks,
>>
>>
>>
>> Vivek
>>
>>
>>
>>
>>
>> From: Armando M. [mailto:arma...@gmail.com]
>> Sent: Tuesday, June 17, 2014 10:26 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture
>>
>>
>>
>> just a provocative thought: If we used the ovsdb connection instead, do we
>> really need an L2 agent :P?
>>
>>
>>
>> On 17 June 2014 18:38, Kyle Mestery  wrote:
>>
>> Another area of improvement for the agent would be to move away from
>> executing CLIs for port commands and instead use OVSDB. Terry Wilson
>> and I talked about this, and re-writing ovs_lib to use an OVSDB
>> connection instead of the CLI methods would be a huge improvement
>> here. I'm not sure if Terry was going to move forward with this, but
>> I'd be in favor of this for Juno if he or someone else wants to move
>> in this direction.
>>
>> Thanks,
>> Kyle
>>
>>
>> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando 
>> wrote:
>> > We've started doing this in a slightly more reasonable way for icehouse.
>> > What we've done is:
>> > - remove unnecessary notification from the server
>> > - process all port-related events, either trigger via RPC or via monitor
>> > in
>> > one place
>> >
>> > Obviously there is always a lot of room for improvement, and I agree
>> > something along the lines of what Zang suggests would be more
>> > maintainable
>> > and ensure faster event processing as well as making it easier to have
>> > some
>> > form of reliability on event processing.
>> >
>> > I was considering doing something for the ovs-agent again in Juno, but
>> > since
>> > we've moving towards a unified agent, I think any new "big" ticket
>> > should
>> > address this effort.
>> >
>> > Salvatore
>> >
>> >
>> > On 17 June 2014 13:31, Zang MingJie  wrote:
>> >>
>> >> Hi:
>> >>
>> >> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
>> >> intent to rebuild a more stable flexible agent.
>> >>
>> >> Taking the experience of ovs-agent bugs, I think the concurrency
>> >> problem is also a very important problem, the agent gets lots of event
>> >> from different greenlets, the rpc, the ovs monitor or the main loop.
>> >> I'd suggest to serialize all event to a queue, then process events in
>> >> a dedicated thread. The thread check the events one by one ordered,
>> >> and resolve what has been changed, then apply the corresponding changes.

Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread racha
Hi,
Does it make sense also to have the choice between ovs-ofctl CLI and a
direct OF1.3 connection too in the ovs-agent?

Best Regards,
Racha



On Tue, Jun 17, 2014 at 10:25 AM, Narasimhan, Vivekanandan <
vivekanandan.narasim...@hp.com> wrote:

>
>
> Managing the ports and plumbing logic is today driven by L2 Agent, with
> little assistance
>
> from controller.
>
>
>
> If we plan to move that functionality to the controller,  the controller
> has to be more
>
> heavy weight (both hardware and software)  since it has to do the job of
> L2 Agent for all
>
> the compute servers in the cloud. , We need to re-verify all scale numbers
> for the controller
>
> on POC’ing of such a change.
>
>
>
> That said, replacing CLI with direct OVSDB calls in the L2 Agent is
> certainly a good direction.
>
>
>
> Today, OVS Agent invokes flow calls of OVS-Lib but has no idea (or
> processing) to follow up
>
> on success or failure of such invocations.  Nor there is certain guarantee
> that all such
>
> flow invocations would be executed by the third-process fired by OVS-Lib
> to execute CLI.
>
>
>
> When we transition to OVSDB calls which are more programmatic in nature,
> we can
>
> enhance the Flow API (OVS-Lib) to provide more fine grained errors/return
> codes (or content)
>
> and ovs-agent (and even other components) can act on such return state
> more
>
> intelligently/appropriately.
>
>
>
> --
>
> Thanks,
>
>
>
> Vivek
>
>
>
>
>
> *From:* Armando M. [mailto:arma...@gmail.com]
> *Sent:* Tuesday, June 17, 2014 10:26 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][ML2] Modular L2 agent
> architecture
>
>
>
> just a provocative thought: If we used the ovsdb connection instead, do we
> really need an L2 agent :P?
>
>
>
> On 17 June 2014 18:38, Kyle Mestery  wrote:
>
> Another area of improvement for the agent would be to move away from
> executing CLIs for port commands and instead use OVSDB. Terry Wilson
> and I talked about this, and re-writing ovs_lib to use an OVSDB
> connection instead of the CLI methods would be a huge improvement
> here. I'm not sure if Terry was going to move forward with this, but
> I'd be in favor of this for Juno if he or someone else wants to move
> in this direction.
>
> Thanks,
> Kyle
>
>
> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando 
> wrote:
> > We've started doing this in a slightly more reasonable way for icehouse.
> > What we've done is:
> > - remove unnecessary notification from the server
> > - process all port-related events, either trigger via RPC or via monitor
> in
> > one place
> >
> > Obviously there is always a lot of room for improvement, and I agree
> > something along the lines of what Zang suggests would be more
> maintainable
> > and ensure faster event processing as well as making it easier to have
> some
> > form of reliability on event processing.
> >
> > I was considering doing something for the ovs-agent again in Juno, but
> since
> > we've moving towards a unified agent, I think any new "big" ticket should
> > address this effort.
> >
> > Salvatore
> >
> >
> > On 17 June 2014 13:31, Zang MingJie  wrote:
> >>
> >> Hi:
> >>
> >> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
> >> intent to rebuild a more stable flexible agent.
> >>
> >> Taking the experience of ovs-agent bugs, I think the concurrency
> >> problem is also a very important problem, the agent gets lots of event
> >> from different greenlets, the rpc, the ovs monitor or the main loop.
> >> I'd suggest to serialize all event to a queue, then process events in
> >> a dedicated thread. The thread check the events one by one ordered,
> >> and resolve what has been changed, then apply the corresponding
> >> changes. If there is any error occurred in the thread, discard the
> >> current processing event, do a fresh start event, which reset
> >> everything, then apply the correct settings.
> >>
> >> The threading model is so important and may prevent tons of bugs in
> >> the future development, we should describe it clearly in the
> >> architecture
> >>
> >>
> >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
> >> wrote:
> >> > Following the discussions in the ML2 subgroup weekly meetings, I have
> >> > added more information on the etherpad [1] describing the proposed
> >> > architecture for modular L2 agents.

Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Narasimhan, Vivekanandan


Managing the ports and plumbing logic is today driven by the L2 Agent, with
little assistance from the controller.

If we plan to move that functionality to the controller, the controller has
to be more heavyweight (both hardware and software), since it has to do the
job of the L2 Agent for all the compute servers in the cloud. We would need
to re-verify all controller scale numbers when POC’ing such a change.

That said, replacing the CLI with direct OVSDB calls in the L2 Agent is
certainly a good direction.

Today, the OVS Agent invokes flow calls in OVS-Lib but has no way of
following up on the success or failure of such invocations. Nor is there any
guarantee that all such flow invocations will actually be executed by the
third process that OVS-Lib spawns to run the CLI.

When we transition to OVSDB calls, which are more programmatic in nature, we
can enhance the Flow API (OVS-Lib) to provide more fine-grained errors/return
codes (or content), and the ovs-agent (and even other components) can act on
such return state more intelligently/appropriately.
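
To illustrate the kind of fine-grained return state described above, here is a
minimal sketch only; `FlowResult`, `apply_flow` and `sync_flow` are
hypothetical names for illustration, not the actual OVS-Lib API, and the
`transact` callable stands in for whatever programmatic OVSDB-style channel
the library would use.

```python
from dataclasses import dataclass


@dataclass
class FlowResult:
    """Structured outcome of one flow operation, in place of a
    fire-and-forget CLI invocation whose exit status is discarded."""
    ok: bool
    errno: int = 0
    detail: str = ""


def apply_flow(transact, bridge, flow):
    """Submit one flow mod through a programmatic channel and surface
    success or failure to the caller instead of silently dropping it."""
    try:
        transact(bridge, flow)
        return FlowResult(ok=True)
    except OSError as exc:
        return FlowResult(ok=False, errno=exc.errno or -1, detail=str(exc))


def sync_flow(transact, bridge, flow, resync):
    """The agent can now branch on the result instead of assuming success:
    on failure it schedules a recovery action (e.g. a full resync)."""
    result = apply_flow(transact, bridge, flow)
    if not result.ok:
        resync()
    return result
```

The point is not the particular shape of `FlowResult` but that every
invocation returns something the agent (and other components) can act on.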



--

Thanks,



Vivek





From: Armando M. [mailto:arma...@gmail.com]
Sent: Tuesday, June 17, 2014 10:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture



just a provocative thought: If we used the ovsdb connection instead, do we 
really need an L2 agent :P?



On 17 June 2014 18:38, Kyle Mestery <mest...@noironetworks.com> wrote:

Another area of improvement for the agent would be to move away from
executing CLIs for port commands and instead use OVSDB. Terry Wilson
and I talked about this, and re-writing ovs_lib to use an OVSDB
connection instead of the CLI methods would be a huge improvement
here. I'm not sure if Terry was going to move forward with this, but
I'd be in favor of this for Juno if he or someone else wants to move
in this direction.

Thanks,
Kyle


On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando <sorla...@nicira.com> wrote:
> We've started doing this in a slightly more reasonable way for icehouse.
> What we've done is:
> - remove unnecessary notification from the server
> - process all port-related events, either trigger via RPC or via monitor in
> one place
>
> Obviously there is always a lot of room for improvement, and I agree
> something along the lines of what Zang suggests would be more maintainable
> and ensure faster event processing as well as making it easier to have some
> form of reliability on event processing.
>
> I was considering doing something for the ovs-agent again in Juno, but since
> we've moving towards a unified agent, I think any new "big" ticket should
> address this effort.
>
> Salvatore
>
>
> On 17 June 2014 13:31, Zang MingJie <zealot0...@gmail.com> wrote:
>>
>> Hi:
>>
>> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
>> intent to rebuild a more stable flexible agent.
>>
>> Taking the experience of ovs-agent bugs, I think the concurrency
>> problem is also a very important problem, the agent gets lots of event
>> from different greenlets, the rpc, the ovs monitor or the main loop.
>> I'd suggest to serialize all event to a queue, then process events in
>> a dedicated thread. The thread check the events one by one ordered,
>> and resolve what has been changed, then apply the corresponding
>> changes. If there is any error occurred in the thread, discard the
>> current processing event, do a fresh start event, which reset
>> everything, then apply the correct settings.
>>
>> The threading model is so important and may prevent tons of bugs in
>> the future development, we should describe it clearly in the
>> architecture
>>
>>
>> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi <m...@us.ibm.com>
>> wrote:
>> > Following the discussions in the ML2 subgroup weekly meetings, I have
>> > added
>> > more information on the etherpad [1] describing the proposed
>> > architecture
>> > for modular L2 agents. I have also posted some code fragments at [2]
>> > sketching the implementation of the proposed architecture. Please have a
>> > look when you get a chance and let us know if you have any comments.
>> >
>> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
>> > [2] https://review.openstack.org/#/c/99187/
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Kyle Mestery
Not if you use ODL, and we don't want to reinvent that wheel. But skipping
CLI commands and instead speaking OVSDB programmatically from the agent to
ovs-vswitchd would be a decent improvement.

On Tue, Jun 17, 2014 at 11:56 AM, Armando M.  wrote:
> just a provocative thought: If we used the ovsdb connection instead, do we
> really need an L2 agent :P?
>
>
> On 17 June 2014 18:38, Kyle Mestery  wrote:
>>
>> Another area of improvement for the agent would be to move away from
>> executing CLIs for port commands and instead use OVSDB. Terry Wilson
>> and I talked about this, and re-writing ovs_lib to use an OVSDB
>> connection instead of the CLI methods would be a huge improvement
>> here. I'm not sure if Terry was going to move forward with this, but
>> I'd be in favor of this for Juno if he or someone else wants to move
>> in this direction.
>>
>> Thanks,
>> Kyle
>>
>> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando 
>> wrote:
>> > We've started doing this in a slightly more reasonable way for icehouse.
>> > What we've done is:
>> > - remove unnecessary notification from the server
>> > - process all port-related events, either trigger via RPC or via monitor
>> > in
>> > one place
>> >
>> > Obviously there is always a lot of room for improvement, and I agree
>> > something along the lines of what Zang suggests would be more
>> > maintainable
>> > and ensure faster event processing as well as making it easier to have
>> > some
>> > form of reliability on event processing.
>> >
>> > I was considering doing something for the ovs-agent again in Juno, but
>> > since
>> > we've moving towards a unified agent, I think any new "big" ticket
>> > should
>> > address this effort.
>> >
>> > Salvatore
>> >
>> >
>> > On 17 June 2014 13:31, Zang MingJie  wrote:
>> >>
>> >> Hi:
>> >>
>> >> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
>> >> intent to rebuild a more stable flexible agent.
>> >>
>> >> Taking the experience of ovs-agent bugs, I think the concurrency
>> >> problem is also a very important problem, the agent gets lots of event
>> >> from different greenlets, the rpc, the ovs monitor or the main loop.
>> >> I'd suggest to serialize all event to a queue, then process events in
>> >> a dedicated thread. The thread check the events one by one ordered,
>> >> and resolve what has been changed, then apply the corresponding
>> >> changes. If there is any error occurred in the thread, discard the
>> >> current processing event, do a fresh start event, which reset
>> >> everything, then apply the correct settings.
>> >>
>> >> The threading model is so important and may prevent tons of bugs in
>> >> the future development, we should describe it clearly in the
>> >> architecture
>> >>
>> >>
>> >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
>> >> wrote:
>> >> > Following the discussions in the ML2 subgroup weekly meetings, I have
>> >> > added
>> >> > more information on the etherpad [1] describing the proposed
>> >> > architecture
>> >> > for modular L2 agents. I have also posted some code fragments at [2]
>> >> > sketching the implementation of the proposed architecture. Please
>> >> > have a
>> >> > look when you get a chance and let us know if you have any comments.
>> >> >
>> >> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
>> >> > [2] https://review.openstack.org/#/c/99187/
>> >> >
>> >> >
>> >> > ___
>> >> > OpenStack-dev mailing list
>> >> > OpenStack-dev@lists.openstack.org
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Armando M.
just a provocative thought: If we used the ovsdb connection instead, do we
really need an L2 agent :P?


On 17 June 2014 18:38, Kyle Mestery  wrote:

> Another area of improvement for the agent would be to move away from
> executing CLIs for port commands and instead use OVSDB. Terry Wilson
> and I talked about this, and re-writing ovs_lib to use an OVSDB
> connection instead of the CLI methods would be a huge improvement
> here. I'm not sure if Terry was going to move forward with this, but
> I'd be in favor of this for Juno if he or someone else wants to move
> in this direction.
>
> Thanks,
> Kyle
>
> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando 
> wrote:
> > We've started doing this in a slightly more reasonable way for icehouse.
> > What we've done is:
> > - remove unnecessary notification from the server
> > - process all port-related events, either trigger via RPC or via monitor
> in
> > one place
> >
> > Obviously there is always a lot of room for improvement, and I agree
> > something along the lines of what Zang suggests would be more
> maintainable
> > and ensure faster event processing as well as making it easier to have
> some
> > form of reliability on event processing.
> >
> > I was considering doing something for the ovs-agent again in Juno, but
> since
> > we've moving towards a unified agent, I think any new "big" ticket should
> > address this effort.
> >
> > Salvatore
> >
> >
> > On 17 June 2014 13:31, Zang MingJie  wrote:
> >>
> >> Hi:
> >>
> >> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
> >> intent to rebuild a more stable flexible agent.
> >>
> >> Taking the experience of ovs-agent bugs, I think the concurrency
> >> problem is also a very important problem, the agent gets lots of event
> >> from different greenlets, the rpc, the ovs monitor or the main loop.
> >> I'd suggest to serialize all event to a queue, then process events in
> >> a dedicated thread. The thread check the events one by one ordered,
> >> and resolve what has been changed, then apply the corresponding
> >> changes. If there is any error occurred in the thread, discard the
> >> current processing event, do a fresh start event, which reset
> >> everything, then apply the correct settings.
> >>
> >> The threading model is so important and may prevent tons of bugs in
> >> the future development, we should describe it clearly in the
> >> architecture
> >>
> >>
> >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
> >> wrote:
> >> > Following the discussions in the ML2 subgroup weekly meetings, I have
> >> > added
> >> > more information on the etherpad [1] describing the proposed
> >> > architecture
> >> > for modular L2 agents. I have also posted some code fragments at [2]
> >> > sketching the implementation of the proposed architecture. Please
> have a
> >> > look when you get a chance and let us know if you have any comments.
> >> >
> >> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
> >> > [2] https://review.openstack.org/#/c/99187/
> >> >
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Kyle Mestery
Another area of improvement for the agent would be to move away from
executing CLIs for port commands and instead use OVSDB. Terry Wilson
and I talked about this, and re-writing ovs_lib to use an OVSDB
connection instead of the CLI methods would be a huge improvement
here. I'm not sure if Terry was going to move forward with this, but
I'd be in favor of this for Juno if he or someone else wants to move
in this direction.

Thanks,
Kyle

On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando  wrote:
> We've started doing this in a slightly more reasonable way for icehouse.
> What we've done is:
> - remove unnecessary notification from the server
> - process all port-related events, either trigger via RPC or via monitor in
> one place
>
> Obviously there is always a lot of room for improvement, and I agree
> something along the lines of what Zang suggests would be more maintainable
> and ensure faster event processing as well as making it easier to have some
> form of reliability on event processing.
>
> I was considering doing something for the ovs-agent again in Juno, but since
> we've moving towards a unified agent, I think any new "big" ticket should
> address this effort.
>
> Salvatore
>
>
> On 17 June 2014 13:31, Zang MingJie  wrote:
>>
>> Hi:
>>
>> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
>> intent to rebuild a more stable flexible agent.
>>
>> Taking the experience of ovs-agent bugs, I think the concurrency
>> problem is also a very important problem, the agent gets lots of event
>> from different greenlets, the rpc, the ovs monitor or the main loop.
>> I'd suggest to serialize all event to a queue, then process events in
>> a dedicated thread. The thread check the events one by one ordered,
>> and resolve what has been changed, then apply the corresponding
>> changes. If there is any error occurred in the thread, discard the
>> current processing event, do a fresh start event, which reset
>> everything, then apply the correct settings.
>>
>> The threading model is so important and may prevent tons of bugs in
>> the future development, we should describe it clearly in the
>> architecture
>>
>>
>> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
>> wrote:
>> > Following the discussions in the ML2 subgroup weekly meetings, I have
>> > added
>> > more information on the etherpad [1] describing the proposed
>> > architecture
>> > for modular L2 agents. I have also posted some code fragments at [2]
>> > sketching the implementation of the proposed architecture. Please have a
>> > look when you get a chance and let us know if you have any comments.
>> >
>> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
>> > [2] https://review.openstack.org/#/c/99187/
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Salvatore Orlando
We've started doing this in a slightly more reasonable way for Icehouse.
What we've done is:
- remove unnecessary notifications from the server
- process all port-related events, whether triggered via RPC or via the
monitor, in one place

Obviously there is always a lot of room for improvement, and I agree that
something along the lines of what Zang suggests would be more maintainable
and ensure faster event processing, as well as making it easier to have some
form of reliability on event processing.

I was considering doing something for the ovs-agent again in Juno, but
since we're moving towards a unified agent, I think any new "big" ticket
should address this effort.

Salvatore


On 17 June 2014 13:31, Zang MingJie  wrote:

> Hi:
>
> Awesome! Currently we are suffering lots of bugs in ovs-agent, also
> intent to rebuild a more stable flexible agent.
>
> Taking the experience of ovs-agent bugs, I think the concurrency
> problem is also a very important problem, the agent gets lots of event
> from different greenlets, the rpc, the ovs monitor or the main loop.
> I'd suggest to serialize all event to a queue, then process events in
> a dedicated thread. The thread check the events one by one ordered,
> and resolve what has been changed, then apply the corresponding
> changes. If there is any error occurred in the thread, discard the
> current processing event, do a fresh start event, which reset
> everything, then apply the correct settings.
>
> The threading model is so important and may prevent tons of bugs in
> the future development, we should describe it clearly in the
> architecture
>
>
> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi 
> wrote:
> > Following the discussions in the ML2 subgroup weekly meetings, I have
> added
> > more information on the etherpad [1] describing the proposed architecture
> > for modular L2 agents. I have also posted some code fragments at [2]
> > sketching the implementation of the proposed architecture. Please have a
> > look when you get a chance and let us know if you have any comments.
> >
> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
> > [2] https://review.openstack.org/#/c/99187/
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-17 Thread Zang MingJie
Hi:

Awesome! Currently we are suffering from lots of bugs in the ovs-agent, and
we also intend to rebuild a more stable, flexible agent.

Drawing on the experience of those ovs-agent bugs, I think concurrency is
also a very important problem: the agent receives lots of events from
different greenlets (the RPC handler, the OVS monitor, the main loop).
I'd suggest serializing all events into a queue and then processing them in
a dedicated thread. That thread checks the events one by one, in order,
resolves what has changed, and then applies the corresponding changes. If
any error occurs in the thread, discard the event currently being processed
and issue a fresh-start event, which resets everything and then applies the
correct settings.

The threading model is so important, and may prevent tons of bugs in future
development, that we should describe it clearly in the architecture.
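
The queue-and-dedicated-thread model described above can be sketched roughly
as follows. This is an illustrative sketch, not Neutron code: the names
`DataplaneWorker`, `apply_event` and `resync` are hypothetical. In the real
agent the producers would be the RPC handler, the OVS monitor and the main
loop, and `resync` would re-apply the full desired state.

```python
import queue
import threading


class DataplaneWorker:
    """Single consumer of dataplane events: any greenthread/thread may
    enqueue, but only one dedicated thread applies changes, in order."""

    def __init__(self, apply_event, resync):
        self._queue = queue.Queue()
        self._apply = apply_event   # applies one dataplane change
        self._resync = resync       # full "fresh start" reset on failure
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def submit(self, event):
        """Called from any producer: RPC, OVS monitor, main loop."""
        self._queue.put(event)

    def _run(self):
        while True:
            event = self._queue.get()
            if event is None:       # sentinel: stop the worker
                return
            try:
                self._apply(event)
            except Exception:
                # Discard the failed event, resynchronize everything,
                # then continue with the next queued event.
                self._resync()

    def stop(self):
        """Drain outstanding events, then stop the worker thread."""
        self._queue.put(None)
        self._thread.join()
```

Because all changes funnel through one thread, a long-running action can no
longer interleave with another action that must only run after it completes;
a failure simply triggers a full resync before the next event is handled.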


On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi  wrote:
> Following the discussions in the ML2 subgroup weekly meetings, I have added
> more information on the etherpad [1] describing the proposed architecture
> for modular L2 agents. I have also posted some code fragments at [2]
> sketching the implementation of the proposed architecture. Please have a
> look when you get a chance and let us know if you have any comments.
>
> [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
> [2] https://review.openstack.org/#/c/99187/
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-10 Thread Mohammad Banikazemi

Following the discussions in the ML2 subgroup weekly meetings, I have added
more information on the etherpad [1] describing the proposed architecture
for modular L2 agents. I have also posted some code fragments at [2]
sketching the implementation of the proposed architecture. Please have a
look when you get a chance and let us know if you have any comments.

[1] https://etherpad.openstack.org/p/modular-l2-agent-outline
[2] https://review.openstack.org/#/c/99187/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev