[openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Susanne Balle
I would like to discuss the pros and cons of putting Octavia into the
Neutron LBaaS incubator project right away. If it is going to be the
reference implementation for LBaaS v2, then I believe Octavia belongs in
the Neutron LBaaS v2 incubator.

The Pros:
* Octavia is in OpenStack incubation right away, along with the LBaaS v2
code. We do not have to apply for incubation later on.
* As an incubated project we have our own core team and should be able to
commit our code.
* We are starting out as an OpenStack incubated project.

The Cons:
* The velocity of the project is uncertain.
* The incubation process is not well defined.

If Octavia starts as a standalone stackforge project, we are assuming that
it would be looked upon favorably when the time comes to move it into
incubated status.

Susanne


[openstack-dev] [neutron-lbaas][octavia]

2017-01-03 Thread Genadi Chereshnya
When running the neutron_lbaas scenario tests with the latest tempest
version, we fail because of https://bugs.launchpad.net/octavia/+bug/1649083.

I would appreciate it if someone could review the patch that fixes the
problem and merge it, so that our automation succeeds.
The patch is https://review.openstack.org/#/c/411257/

Thanks in advance,
Genadi


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Susanne Balle
Just for us to learn about the incubator status, here is some of the
information on incubation:

https://wiki.openstack.org/wiki/Governance/Approved/Incubation
https://wiki.openstack.org/wiki/Governance/NewProjects

Susanne




Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Kevin Benton
I think we need some clarification here too about the difference between
the general OpenStack Incubation and the Neutron incubation. From my
understanding, the Neutron incubation isn't the path to a separate project
and independence from Neutron. It's a process to get into Neutron. So if
you want to keep it as a separate project with its own cores and a PTL,
Neutron incubation would not be the way to go.


-- 
Kevin Benton


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Adam Harwell
Yeah, I think I agree there. If we were to go the Neutron-incubator route,
we'd end up with "Neutron-Octavia", and I don't think that's what we want,
right? I believe that to be "OpenStack-Octavia" we need to be incubated as
a separate project.

--Adam

https://keybase.io/rm_you




Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Stefano Maffulli
On 08/28/2014 03:04 PM, Susanne Balle wrote:
> Just for us to learn about the incubator status, here is some of the
> information on incubation:
> 
> https://wiki.openstack.org/wiki/Governance/Approved/Incubation
> https://wiki.openstack.org/wiki/Governance/NewProjects

These are not the correct documents for the Neutron incubator.

You should look at this instead:

https://wiki.openstack.org/wiki/Network/Incubator

(which is modeled after the Oslo incubator
https://wiki.openstack.org/wiki/Oslo#Incubation)

/stef

-- 
Ask and answer questions on https://ask.openstack.org



Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Kyle Mestery
On Thu, Aug 28, 2014 at 5:55 PM, Kevin Benton  wrote:
> I think we need some clarification here too about the difference between the
> general OpenStack Incubation and the Neutron incubation. From my
> understanding, the Neutron incubation isn't the path to a separate project
> and independence from Neutron. It's a process to get into Neutron. So if you
> want to keep it as a separate project with its own cores and a PTL, Neutron
> incubation would not be the way to go.

That's not true; there are three ways out of incubation: 1) The project
withers and dies on its own. 2) The project is spun back into
Neutron. 3) The project is spun out into its own project.

However, it's worth noting that if the project is spun out into its
own entity, it would have to go through incubation to become a fully
functioning OpenStack project of its own.



Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Stephen Balukoff
Susanne--

I think you are conflating "OpenStack incubation" with the "Neutron
incubator." These are two very different matters and should be treated
separately. So, addressing each one individually:

*"OpenStack Incubation"*
I think this has been the end-goal of Octavia all along and continues to be
the end-goal. Under this scenario, Octavia is its own stand-alone project
with its own PTL and core developer team, its own governance, and should
eventually become part of the integrated OpenStack release. No project ever
starts out as "OpenStack incubated."

*"Neutron Incubator"*
This has only become a serious discussion in the last few weeks and has yet
to land, so many of the assumptions about it may not pan out (either
because of purposeful design and governance decisions, or because of how
this project actually ends up being implemented from a practical
standpoint). But given the inherent limitations of making statements with
so many unknowns, the following seem fairly clear from what has been
shared so far:

   - Neutron incubator is the on-ramp for projects which should eventually
   become a part of Neutron itself.
   - Projects which enter the Neutron incubator on-ramp should be fairly
   close to maturity in their final form. I think the intent here is for them
   to live in incubator for 1 or 2 cycles before either being merged into
   Neutron core, or being ejected (as abandoned, or as a separate project).
   - Neutron incubator projects effectively do not have their own PTL and
   core developer team, and do not have their own governance.

In addition we know the following about Neutron LBaaS and Octavia:

   - It's already (informally?) agreed that the ultimate long-term place
   for an LBaaS solution is probably to be spun out into its own project, which
   might appropriately live under a yet-to-be-defined master "Networking"
   project. (This would make Neutron, LBaaS, VPNaaS, FWaaS, etc. effectively
   "peer" projects under the Networking umbrella.)  Since this "Networking"
   umbrella project has even less defined about it than Neutron incubator,
   it's impossible to know whether being a part of Neutron incubator would be
   of any benefit to Octavia (or, conversely, to Neutron incubator) at all as
   an on-ramp to becoming part of "Networking." Presumably, Octavia *might* fit
   well under the "Networking" umbrella-- but, again, with nothing defined
   there it's impossible to draw any reasonable conclusions at this time.
   - When the LBaaS component spins out of Neutron, it will more than
   likely not be Octavia.  Octavia is *intentionally* less friendly to 3rd
   party load balancer vendors both because it's envisioned that Octavia would
   just be another implementation which lives alongside said 3rd party vendor
   products (plugging into a higher level LBaaS layer via a driver), and
   because we don't want to have to compromise certain design features of
   Octavia to meet the lowest common denominator 3rd party vendor product.
   (3rd party vendors are welcome, but we will not make design compromises to
   meet the needs of a proprietary product-- compatibility with available
   open-source products and standards trumps this.)
   - The end-game for the above point is: In the future I see "Openstack
   LBaaS" (or whatever the project calls itself) being a separate but
   complementary project to Octavia.
   - While it's true that we would like Octavia to become the reference
   implementation for Neutron LBaaS, we are nowhere near being able to deliver
   on that. Attempting to become a part of Neutron LBaaS right now is likely
   just to create frustration (and very little merged code) for both the
   Octavia and Neutron teams.



So given that the only code in Octavia right now is a few database
migrations, we are very, very far away from being ready for either
OpenStack incubation or the Neutron incubator project. I don't think it's
very useful to be spending time right now worrying about either of these
outcomes:  We should be working on Octavia!
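
For the curious, the kind of migration in question looks roughly like the
following (an Alembic-style sketch; the table and column names here are
illustrative, not Octavia's actual schema):

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # Create a bare-bones load balancer table; the real schema is
        # richer than this illustrative sketch.
        op.create_table(
            'load_balancer',
            sa.Column('id', sa.String(36), primary_key=True),
            sa.Column('name', sa.String(255), nullable=True),
            sa.Column('provisioning_status', sa.String(16), nullable=False),
        )

    def downgrade():
        op.drop_table('load_balancer')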

Please also understand:  I realize that the reason you're asking this right
now is probably that you have a mandate within your organization to use
only "official" OpenStack-branded components, and if Octavia doesn't fall
within that category, you won't be able to use it.  Of course everyone
working on this project wants to make that happen too, so we're doing
everything we can to make sure we don't jeopardize that possibility. And
there are enough voices in this project that want that to happen that, if
we strayed from the path to get there, there would be sufficient clangor to
make it hard to miss. But I don't think there's anyone at all at this time
who can honestly promise you that Octavia definitely will be incubated and
will definitely end up in the integrated OpenStack release.

If you want to increase the chances of that happening, please help push the
project forward.

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Kevin Benton
I see. Then if a group's ultimate goal is their own project, would the
Neutron incubator even make sense as a first step?





-- 
Kevin Benton


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-29 Thread Susanne Balle
Stephen,

See inline comments.

Susanne

-

Susanne--



I think you are conflating "OpenStack incubation" with the "Neutron
incubator." These are two very different matters and should be treated
separately. So, addressing each one individually:



*"OpenStack Incubation"*

I think this has been the end-goal of Octavia all along and continues to be
the end-goal. Under this scenario, Octavia is its own stand-alone project
with its own PTL and core developer team, its own governance, and should
eventually become part of the integrated OpenStack release. No project ever
starts out as "OpenStack incubated."



[Susanne] I totally agree that the end goal is for Neutron LBaaS to become
its own incubated project. I did miss the nuance, pointed out by Mestery in
an earlier email, that if a Neutron incubator project wants to become a
separate project, it will have to apply for incubation at that time. It was
my understanding that such a Neutron-incubated project would be
grandfathered in, but again, we do not have much detail on the process yet.



To me, Octavia is a driver, so it is very hard for me to think of it as a
standalone project. It needs the new Neutron LBaaS v2 to function, which is
why I think of them together. This of course can change, since we can add
whatever layers we want to Octavia.


*"Neutron Incubator"*

This has only become a serious discussion in the last few weeks and has yet
to land, so many of the assumptions about it may not pan out (either
because of purposeful design and governance decisions, or because of how
this project actually ends up being implemented from a practical
standpoint). But given the inherent limitations of making statements with
so many unknowns, the following seem fairly clear from what has been
shared so far:

·  Neutron incubator is the on-ramp for projects which should eventually
become a part of Neutron itself.

·  Projects which enter the Neutron incubator on-ramp should be fairly
close to maturity in their final form. I think the intent here is for them
to live in incubator for 1 or 2 cycles before either being merged into
Neutron core, or being ejected (as abandoned, or as a separate project).

·  Neutron incubator projects effectively do not have their own PTL and
core developer team, and do not have their own governance.

[Susanne] Ok, I missed the last point. In an earlier discussion, Mestery
implied that an incubated project would have at least one or two of its own
cores. Maybe that has changed since then.

In addition we know the following about Neutron LBaaS and Octavia:

·  It's already (informally?) agreed that the ultimate long-term place for
an LBaaS solution is probably to be spun out into its own project, which
might appropriately live under a yet-to-be-defined master "Networking"
project. (This would make Neutron, LBaaS, VPNaaS, FWaaS, etc. effectively
"peer" projects under the Networking umbrella.)  Since this "Networking"
umbrella project has even less defined about it than Neutron incubator,
it's impossible to know whether being a part of Neutron incubator would be
of any benefit to Octavia (or, conversely, to Neutron incubator) at all as
an on-ramp to becoming part of "Networking." Presumably, Octavia *might* fit
well under the "Networking" umbrella-- but, again, with nothing defined
there it's impossible to draw any reasonable conclusions at this time.

[Susanne] We are in agreement here. This was the reason we had the ad-hoc
meeting in Atlanta: to get a feel for how people felt about making Neutron
LBaaS its own project, and also for how we get an operator-grade,
large-scale LBaaS that fits most of our service provider requirements. I am
just worried because you keep on talking of Octavia as a standalone
project. To me it is an extension of Neutron LBaaS or of a new LBaaS…. I do
not see us (== me) using Octavia in a non-OpenStack context. And yes, it is
a driver that I am hoping we all expect to become the reference
implementation for LBaaS.

·  When the LBaaS component spins out of Neutron, it will more than likely
not be Octavia.  Octavia is *intentionally* less friendly to 3rd party load
balancer vendors, both because it's envisioned that Octavia would just be
another implementation which lives alongside said 3rd party vendor
products (plugging into a higher level LBaaS layer via a driver), and
because we don't want to have to compromise certain design features of
Octavia to meet the lowest common denominator 3rd party vendor product.
(3rd party vendors are welcome, but we will not make design compromises to
meet the needs of a proprietary product-- compatibility with available
open-source products and standards trumps this.)

[Susanne] Ok, now I am confused… But I agree with you that it needs to
focus on our use cases. I remember us discussing Octavia being the
reference implementation for OpenStack LBaaS (whatever that is). Has that
changed while I was on vacation?

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-29 Thread Eichberger, German
Kyle,

I am confused. So basically you (and Mark) are saying:

1) We deprecate Neutron LBaaS v1
2) We spin out Neutron LBaaS v2 into its own project in stackforge
3) Users don't have an OpenStack LBaaS any longer until we graduate from
OpenStack incubation (as opposed to Neutron incubation)

I am hoping you can clarify how this will be shaping up.

Thanks,
German




Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-29 Thread Kyle Mestery
On Fri, Aug 29, 2014 at 11:51 AM, Eichberger, German
 wrote:
> Kyle,
>
> I am confused. So basically you (and Mark) are saying:
>
> 1) We deprecate Neutron LBaaS v1
> 2) We spin out Neutron LBaaS v2 into its own project in stackforge
> 3) Users don't have an OpenStack LBaaS any longer until we graduate from
> OpenStack incubation (as opposed to Neutron incubation)
> I am hoping you can clarify how this will be shaping up -
> I am hoping you can clarify how this will be shaping up -
>
I think what is needed is this:

1) We incubate Neutron LBaaS V2 in the incubator.
2) It graduates into a project under the networking program.
3) We deprecate Neutron LBaaS v1.

To deprecate, we need the new API stable and ready, and then once V1
is deprecated it takes 2 cycles for us to remove it.

Hope that helps!

Thanks,
Kyle



Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-30 Thread Adam Harwell
Only really have comments on two of your related points:

[Susanne] To me, Octavia is a driver, so it is very hard for me to think of
it as a standalone project. It needs the new Neutron LBaaS v2 to function,
which is why I think of them together. This of course can change, since we
can add whatever layers we want to Octavia.

[Adam] I guess I've always shared Stephen's viewpoint — Octavia !=
LBaaS-v2. Octavia is a peer to F5 / Radware / A10 / etc. appliances, not to
an OpenStack API layer like Neutron-LBaaS. It's a little tricky to clearly
define this difference in conversation, and I have noticed that quite a few
people are having the same issue differentiating. In a small group, having
quite a few people not on the same page is a bit scary, so maybe we need to
really sit down and map this out so everyone is together, one way or the
other.

[Susanne] Ok, now I am confused… But I agree with you that it needs to
focus on our use cases. I remember us discussing Octavia being the
reference implementation for OpenStack LBaaS (whatever that is). Has that
changed while I was on vacation?

[Adam] I believe that having the Octavia "driver" (not the Octavia codebase 
itself, technically) become the reference implementation for Neutron-LBaaS is 
still the plan in my eyes. The Octavia Driver in Neutron-LBaaS is a separate 
bit of code from the actual Octavia project, similar to the way the A10 driver 
is a separate bit of code from the A10 appliance. To do that though, we need 
Octavia to be fairly close to fully functional. I believe we can do this 
because even though the reference driver would then require an additional 
service to run, what it requires is still fully-open-source and (by way of our 
plan) available as part of OpenStack core.
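
To make that split concrete, here is a rough sketch of the shape such a
driver shim could take: a thin class that would live in the API layer's
tree and forward calls to a separately running Octavia service over HTTP.
The class name, method name, endpoint, and payload are hypothetical, not
the actual Neutron-LBaaS driver interface:

    import json
    import urllib.request

    class OctaviaDriverSketch(object):
        """Hypothetical shim: this code lives with the API layer, while
        the Octavia service it talks to is a separate project/process."""

        def __init__(self, endpoint='http://127.0.0.1:9876/v1'):
            self.endpoint = endpoint

        def create_load_balancer(self, lb):
            # Translate the API-layer object into a call to the backend
            # service, just as a vendor driver would call its appliance.
            body = json.dumps({'name': lb['name'],
                               'vip_subnet_id': lb['vip_subnet_id']})
            req = urllib.request.Request(
                self.endpoint + '/loadbalancers',
                data=body.encode('utf-8'),
                headers={'Content-Type': 'application/json'})
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read().decode('utf-8'))

Everything above the HTTP call stays in the API layer; everything below it
is Octavia's own business, which is exactly the peer-to-an-appliance
relationship described above.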

--Adam

https://keybase.io/rm_you



Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-01 Thread Susanne Balle
Kyle, Adam,

Based on this thread, Kyle is suggesting the following moving-forward plan:

1) We incubate Neutron LBaaS V2 in the “Neutron” incubator “and freeze
LBaaS V1.0”
2) “Eventually” it graduates into a project under the networking program.
3) “At that point” we deprecate Neutron LBaaS v1.

The words in quotes are words I added to make sure I/we understand the
whole picture.

And as Adam mentions: Octavia != LBaaS-v2. Octavia is a peer to F5 /
Radware / A10 / etc. *appliances*, which is a definition I agree with, BTW.

What I am now trying to understand is how we will move Octavia into the new
LBaaS project.

If we do it later, rather than developing Octavia in-tree under the new
incubated LBaaS project, when do we plan to bring it in-tree from
stackforge? Kilo? Later? When LBaaS is a separate project under the
Networking program?

What are the criteria for bringing a driver into the LBaaS project, and
what do we need to do to replace the existing reference driver? Maybe
adding a software driver to the LBaaS source tree is less of a problem than
converting a whole project to an OpenStack project.
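
For what it is worth, the operator-visible part of swapping the reference
driver would presumably be little more than a service-provider
configuration change plus the in-tree driver code. Assuming the LBaaS v2
service_provider mechanism, something like the following (the class path is
illustrative):

    [service_providers]
    service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default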



Again, I am open to both directions; I just want to make sure we understand
why we are choosing to do one or the other, and that our decision is based
on data and not emotions.

I am assuming that keeping Octavia in stackforge will increase the velocity
of the project and allow us more freedom, which is goodness. We just need
to have a plan to make it part of the OpenStack LBaaS project.

Regards,
Susanne



Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-01 Thread Brandon Logan
Hi Susanne and everyone,

My opinion is that keeping it in stackforge until it gets mature is the
best solution.  I'm pretty sure we can all agree on that.  Whenever it is
mature, then and only then, we should try to get it into OpenStack one way
or another.  If Neutron LBaaS v2 is still incubated, then it should be
relatively easy to get it into that codebase.  If Neutron LBaaS has already
spun out, even easier for us.  If we want Octavia to just become an
OpenStack project all its own, then that will be the difficult part.

I think the best course of action is to get Octavia itself into the same
codebase as LBaaS (Neutron or spun out).  They do go together, and the
maintainers will almost always be the same for both.  This makes even more
sense when LBaaS is spun out into its own project.

I really think all of the answers to these questions will fall into place
when we actually deliver the product that we all want and are talking about
delivering with Octavia.  Once we prove that we can all come together as a
community and manage a product from inception to maturity, we will then
have the respect and trust to do what is best for an OpenStack LBaaS
product.

Thanks,
Brandon

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Avishay Balderman
+1




Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Susanne Balle
>>
>> >>
>> >>
>> >>
>> >> On Sat, Aug 30, 2014 at 2:09 PM, Adam Harwell
>> >>  wrote:
>> >> Only really have comments on two of your related points:
>> >>
>> >>
>> >> [Susanne] To me Octavia is a driver so it is very hard to me
>> >> to think of it as a standalone project. It needs the new
>> >> Neutron LBaaS v2 to function which is why I think of them
>> >> together. This of course can change since we can add whatever
>> >> layers we want to Octavia.
>> >>
>> >>
>> >> [Adam] I guess I've always shared Stephen's
>> >> viewpoint ‹ Octavia != LBaaS-v2. Octavia is a peer to F5 /
>> >> Radware / A10 / etcappliances, not to an Openstack API layer
>> >> like Neutron-LBaaS. It's a little tricky to clearly define
>> >> this difference in conversation, and I have noticed that quite
>> >> a few people are having the same issue differentiating. In a
>> >> small group, having quite a few people not on the same page is
>> >> a bit scary, so maybe we need to really sit down and map this
>> >> out so everyone is together one way or the other.
>> >>
>> >>
>> >> [Susanne] Ok now I am confused… But I agree with you that we
>> >> need to focus on our use cases. I remember us discussing
>> >> Octavia being the reference implementation for OpenStack LBaaS
>> >> (whatever that is). Has that changed while I was on vacation?
>> >>
>> >>
>> >> [Adam] I believe that having the Octavia "driver" (not the
>> >> Octavia codebase itself, technically) become the reference
>> >> implementation for Neutron-LBaaS is still the plan in my eyes.
>> >> The Octavia Driver in Neutron-LBaaS is a separate bit of code
>> >> from the actual Octavia project, similar to the way the A10
>> >> driver is a separate bit of code from the A10 appliance. To do
>> >> that though, we need Octavia to be fairly close to fully
>> >> functional. I believe we can do this because even though the
>> >> reference driver would then require an additional service to
>> >> run, what it requires is still fully-open-source and (by way
>> >> of our plan) available as part of OpenStack core.
>> >>
>> >>
>> >> --Adam
>> >>
>> >>
>> >> https://keybase.io/rm_you
>> >>
>> >>
>> >>
>> >>
>> >> From: Susanne Balle 
>> >> Reply-To: "OpenStack Development Mailing List (not for usage
>> >> questions)" 
>> >> Date: Friday, August 29, 2014 9:19 AM
>> >> To: "OpenStack Development Mailing List (not for usage
>> >> questions)" 
>> >>
>> >> Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
>> >>
>> >>
>> >>
>> >> Stephen
>> >>
>> >>
>> >>
>> >> See inline comments.
>> >>
>> >>
>> >>
>> >> Susanne
>> >>
>> >>
>> >>
>> >> -
>> >>
>> >>
>> >>
>> >> Susanne--
>> >>
>> >>
>> >>
>> >> I think you are conflating the difference between
>> >> "OpenStack incubation" and "Neutron incubator." These
>> >> are two very different matters and should be treated
>> >> separately. So, addressing each one individually:
>> >>
>> >>
>> >>
>> >> "OpenStack Incubation"
>> >>
>> >> I think this has been the end-goal of Octavia all
>> >> along and continues to be the end-goal. Under this
>> >> scenario, Octavia is its own stand-alone project with
>> >> its own PTL and core developer team, its own
>> >>  

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Susanne Balle
having the same issue differentiating. In a
> > small group, having quite a few people not on the same page is
> > a bit scary, so maybe we need to really sit down and map this
> > out so everyone is together one way or the other.
> >
> >
> > [Susanne] Ok now I am confused… But I agree with you that we
> > need to focus on our use cases. I remember us discussing
> > Octavia being the reference implementation for OpenStack LBaaS
> > (whatever that is). Has that changed while I was on vacation?
> >
> >
> > [Adam] I believe that having the Octavia "driver" (not the
> > Octavia codebase itself, technically) become the reference
> > implementation for Neutron-LBaaS is still the plan in my eyes.
> > The Octavia Driver in Neutron-LBaaS is a separate bit of code
> > from the actual Octavia project, similar to the way the A10
> > driver is a separate bit of code from the A10 appliance. To do
> > that though, we need Octavia to be fairly close to fully
> > functional. I believe we can do this because even though the
> > reference driver would then require an additional service to
> > run, what it requires is still fully-open-source and (by way
> > of our plan) available as part of OpenStack core.
> >
> >
> > --Adam
> >
> >
> > https://keybase.io/rm_you
> >
> >
> >
> >
> > From: Susanne Balle 
> > Reply-To: "OpenStack Development Mailing List (not for usage
> > questions)" 
> > Date: Friday, August 29, 2014 9:19 AM
> > To: "OpenStack Development Mailing List (not for usage
> > questions)" 
> >
> > Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
> >
> >
> >
> > Stephen
> >
> >
> >
> > See inline comments.
> >
> >
> >
> > Susanne
> >
> >
> >
> > -
> >
> >
> >
> > Susanne--
> >
> >
> >
> > I think you are conflating the difference between
> > "OpenStack incubation" and "Neutron incubator." These
> > are two very different matters and should be treated
> > separately. So, addressing each one individually:
> >
> >
> >
> > "OpenStack Incubation"
> >
> > I think this has been the end-goal of Octavia all
> > along and continues to be the end-goal. Under this
> > scenario, Octavia is its own stand-alone project with
> > its own PTL and core developer team, its own
> > governance, and should eventually become part of the
> > integrated OpenStack release. No project ever starts
> > out as "OpenStack incubated."
> >
> >
> >
> > [Susanne] I totally agree that the end goal is for
> > Neutron LBaaS to become its own incubated project. I
> > did miss the nuance that was pointed out by Mestery in
> > an earlier email that if a Neutron incubator project
> > wants to become a separate project it will have to
> > apply for incubation again or at that time. It was my
> > understanding that such a Neutron incubated project
> > would be grandfathered in but again we do not have
> > much details on the process yet.
> >
> >
> >
> > To me Octavia is a driver so it is very hard for me to
> > think of it as a standalone project. It needs the new
> > Neutron LBaaS v2 to function which is why I think of
> > them together. This of course can change since we can
> > add whatever layers we want to Octavia.
> >
> >
> >
> > "Neutron Incubator"
> >
> > This has only become a serious discussion in the last
> > few weeks and has yet to land, so there are many
> > assumptions about this which don't pan out (either
> > because of purposeful design and governance decisions,
> > or because of how this

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Salvatore Orlando
>>> >talking about delivering with Octavia.  Once we prove that we can all
>>> >come together as a community and manage a product from inception to
>>> >maturity, we will then have the respect and trust to do what is best for
>>> >an Openstack LBaaS product.
>>> >
>>> >Thanks,
>>> >Brandon
>>> >
>>> >On Mon, 2014-09-01 at 10:18 -0400, Susanne Balle wrote:
>>> >> Kyle, Adam,
>>> >>
>>> >>
>>> >>
>>> >> Based on this thread Kyle is suggesting the follow moving forward
>>> >> plan:
>>> >>
>>> >>
>>> >>
>>> >> 1) We incubate Neutron LBaaS V2 in the "Neutron" incubator "and freeze
>>> >> LBaaS V1.0"
>>> >> 2) "Eventually" it graduates into a project under the networking
>>> >> program.
>>> >> 3) "At that point" we deprecate Neutron LBaaS v1.
>>> >>
>>> >>
>>> >>
>>> >> The words in "xx" are words I added to make sure I/we understand the
>>> >> whole picture.
>>> >>
>>> >>
>>> >>
>>> >> And as Adam mentions: Octavia != LBaaS-v2. Octavia is a peer to F5 /
>>> >> Radware / A10 / etc appliances which is a definition I agree with BTW.
>>> >>
>>> >>
>>> >>
>>> >> What I am trying to now understand is how we will move Octavia into
>>> >> the new LBaaS project?
>>> >>
>>> >>
>>> >>
>>> >> If we do it later rather than develop Octavia in tree under the new
>>> >> incubated LBaaS project when do we plan to bring it in-tree from
>>> >> Stackforge? Kilo? Later? When LBaaS is a separate project under the
>>> >> Networking program?
>>> >
>>> >>
>>> >>
>>> >> What are the criteria to bring a driver into the LBaaS project and
>>> >> what do we need to do to replace the existing reference driver? Maybe
>>> >> adding a software driver to LBaaS source tree is less of a problem
>>> >> than converting a whole project to an OpenStack project.
>>> >
>>> >>
>>> >>
>>> >> Again I am open to both directions I just want to make sure we
>>> >> understand why we are choosing to do one or the other and that our
>>> >>  decision is based on data and not emotions.
>>> >>
>>> >>
>>> >>
>>> >> I am assuming that keeping Octavia in Stackforge will increase the
>>> >> velocity of the project and allow us more freedom which is goodness.
>>> >> We just need to have a plan to make it part of the Openstack LBaaS
>>> >> project.
>>> >>
>>> >>
>>> >>
>>> >> Regards Susanne
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> On Sat, Aug 30, 2014 at 2:09 PM, Adam Harwell
>>> >>  wrote:
>>> >> Only really have comments on two of your related points:
>>> >>
>>> >>
>>> >> [Susanne] To me Octavia is a driver so it is very hard for me
>>> >> to think of it as a standalone project. It needs the new
>>> >> Neutron LBaaS v2 to function which is why I think of them
>>> >> together. This of course can change since we can add whatever
>>> >> layers we want to Octavia.
>>> >>
>>> >>
>>> >> [Adam] I guess I've always shared Stephen's
>>> >> viewpoint — Octavia != LBaaS-v2. Octavia is a peer to F5 /
>>> >> Radware / A10 / etc appliances, not to an Openstack API layer
>>> >> like Neutron-LBaaS. It's a little tricky to clearly define
>>> >> this difference in conversation, and I have noticed that quite
>>> >> a few people are having the same issue differentiating. In a
>>> >> small group, having quite a few people not on the same page is
>>> >> a bit scary, so maybe we need to really sit down and map this
>>> >> out so everyone is together one way or the other.
>>> >>
>>> >>
>>> >> [Susanne] Ok now I am confused… But I agree with you that we
>>> >> need to focus

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Susanne Balle
Doug

I agree with you but I need to understand the options. Susanne

>> And I agree with Brandon’s sentiments.  We need to get something built
>> before I’m going to worry too much about where it should live.  Is this
>> a candidate to get sucked into LBaaS?  Sure.  Could the reverse happen?
>> Sure.  Let’s see how it develops.


On Tue, Sep 2, 2014 at 11:45 AM, Doug Wiegley  wrote:

>  Hi all,
>
>  > On the other hand one could also say that Octavia is the ML2
> equivalent of LBaaS. The equivalence here is very loose. Octavia would be a
> service-VM framework for doing load balancing using a variety of drivers.
> The drivers ultimately are in charge of using backends like haproxy or
> nginx running on the service VM to implement lbaas configuration.
>
>  This, exactly.  I think it’s much fairer to define Octavia as an LBaaS
> purpose-built service vm framework, which will use nova and haproxy
> initially to provide a highly scalable backend. But before we get into
> terminology misunderstandings, there are a bunch of different “drivers” at
> play here, exactly because this is a framework (a minimal interface
> sketch follows these lists):
>
>- Neutron lbaas drivers – what we all know and love
>- Octavia’s “network driver” - this is a piece of glue that exists to
>hide internal calls we have to make into Neutron until clean interfaces
>exist.  It might be a no-op in the case of an actual neutron lbaas driver,
>which could serve that function instead.
>- Octavia’s “vm driver” - this is a piece of glue between the octavia
>controller and the nova VMs that are doing the load balancing.
>- Octavia’s “compute driver” - you guessed it, an abstraction to Nova
>and its scheduler.
>
> Places that can be the “front-end” for Octavia:
>
>- Neutron LBaaS v2 driver
>- Neutron LBaaS v1 driver
>- It’s own REST API
>
> Things that could have their own VM drivers:
>
>- haproxy, running inside nova
>- Nginx, running inside nova
>- Anything else you want, running inside any hypervisor you want
>- Vendor soft appliances
>- Null-out the VM calls and go straight to some other backend?  Sure,
>though I’m not sure I’d see the point.
>
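
As a rough sketch of the driver seams being described here (the class and
method names below are illustrative assumptions, not Octavia's actual
interfaces):

    class NetworkDriver(object):
        """Hides the internal Neutron calls (ports, VIP plumbing) until
        clean interfaces exist; could be a no-op behind a real neutron
        lbaas driver."""
        def plug_vip(self, amphora, vip):
            raise NotImplementedError

    class VMDriver(object):
        """Glue between the Octavia controller and whatever backend
        (haproxy, nginx, a vendor soft appliance) does the balancing."""
        def deploy_config(self, amphora, listener_config):
            raise NotImplementedError

    class ComputeDriver(object):
        """Abstraction over Nova and its scheduler."""
        def spawn(self, image, flavor, user_data):
            raise NotImplementedError
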
> There are quite a few synergies with other efforts, and we’re monitoring
> them, but not waiting for any of them.
>
>  And I agree with Brandon’s sentiments.  We need to get something built
> before I’m going to worry too much about where it should live.  Is this a
> candidate to get sucked into LBaaS?  Sure.  Could the reverse happen?
>  Sure.  Let’s see how it develops.
>
>  Incidentally, we are currently having a debate over the use of the term
> “vm” (and “vm driver”) as the name to describe octavia’s backends.  Feel
> free to chime in here: https://review.openstack.org/#/c/117701/
>
>  Thanks,
> doug
>
>
>   From: Salvatore Orlando 
>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, September 2, 2014 at 9:05 AM
>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
>
>   Hi Susanne,
>
>  I'm just trying to gain a good understanding of the situation here.
> More comments and questions inline.
>
>  Salvatore
>
> On 2 September 2014 16:34, Susanne Balle  wrote:
>
>> Salvatore
>>
>>  Thanks for your clarification below around the blueprint.
>>
>>  > For LBaaS v2 therefore the relationship between it and Octavia should
>> be the same as with any other
>> > backend. I see Octavia has a blueprint for a "network driver" - and the
>> derivable of that should definitely be
>> > part of the LBaaS project.
>>
>>  > For the rest, it would seem a bit strange to me if the LBaaS project
>> incorporated a backend as well. After
>>  > all, LBaaS v1 did not incorporate haproxy!
>> > Also, as Adam points out, Nova does not incorporate an Hypervisor.
>>
>>  In my vision Octavia is a LBaaS framework that should not be tied to
>> ha-proxy. The interfaces should be clean and at a high enough level that we
>> can switch load-balancer. We should be able to switch the load-balancer to
>> nginx so to me the analogy is more Octavia+LBaaSV2 == nova and hypervisor
>> == load-balancer.
>>
>
>  Indeed I said that it would have been initially tied to haproxy
> considering the blueprints currently defined for octavia, but I'm sure the
> solution could leverage nginx or something else in the future.
>
>  I think however it is correct to say that LBaaS v2 will have a

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Brandon Logan
Yeah I've been worried about the term "driver" being overused here.
However, it might not be too bad if we get the other terminology correct
(network driver, vm/container/appliance driver, etc).

I was thinking of ML2 when I said Octavia living in the LBaaS tree might
be best.  I was also thinking that it makes sense if the end goal is for
Octavia to be in openstack.  Also, even if it goes into the LBaaS tree,
it doesn't mean it can't be spun out as its own openstack project,
though I do recognize the backwards-ness of that.

That said, I'm not strongly opposed to either option.  I just want
everyone involved to be happy, though that is not always going to
happen.

Thanks,
Brandon

On Tue, 2014-09-02 at 15:45 +, Doug Wiegley wrote:
> Hi all,
> 
> 
> > On the other hand one could also say that Octavia is the ML2
> equivalent of LBaaS. The equivalence here is very loose. Octavia would
> be a service-VM framework for doing load balancing using a variety of
> drivers. The drivers ultimately are in charge of using backends like
> haproxy or nginx running on the service VM to implement lbaas
> configuration.
> 
> 
> This, exactly.  I think it’s much fairer to define Octavia as an LBaaS
> purpose-built service vm framework, which will use nova and haproxy
> initially to provide a highly scalable backend. But before we get into
> terminology misunderstandings, there are a bunch of different
> “drivers” at play here, exactly because this is a framework:
>   * Neutron lbaas drivers – what we all know and love
>   * Octavia’s “network driver” - this is a piece of glue that
> exists to hide internal calls we have to make into Neutron
> until clean interfaces exist.  It might be a no-op in the case
> of an actual neutron lbaas driver, which could serve that
> function instead.
>   * Octavia’s “vm driver” - this is a piece of glue between the
> octavia controller and the nova VMs that are doing the load
> balancing.
>   * Octavia’s “compute driver” - you guessed it, an abstraction to
> Nova and its scheduler.
> Places that can be the “front-end” for Octavia:
>   * Neutron LBaaS v2 driver
>   * Neutron LBaaS v1 driver
>   * It’s own REST API
> Things that could have their own VM drivers:
>   * haproxy, running inside nova
>   * Nginx, running inside nova
>   * Anything else you want, running inside any hypervisor you want
>   * Vendor soft appliances
>   * Null-out the VM calls and go straight to some other backend?
>  Sure, though I’m not sure I’d see the point.
> There are quite a few synergies with other efforts, and we’re
> monitoring them, but not waiting for any of them.
> 
> 
> And I agree with Brandon’s sentiments.  We need to get something built
> before I’m going to worry too much about where it should live.  Is
> this a candidate to get sucked into LBaaS?  Sure.  Could the reverse
> happen?  Sure.  Let’s see how it develops.
> 
> 
> Incidentally, we are currently having a debate over the use of the
> term “vm” (and “vm driver”) as the name to describe octavia’s
> backends.  Feel free to chime in
> here: https://review.openstack.org/#/c/117701/
> 
> 
> Thanks,
> doug
> 
> 
> 
> 
> From: Salvatore Orlando 
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> Date: Tuesday, September 2, 2014 at 9:05 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
> 
> 
> 
> Hi Susanne,
> 
> 
> I'm just trying to gain a good understanding of the situation here.
> More comments and questions inline.
> 
> 
> Salvatore
> 
> On 2 September 2014 16:34, Susanne Balle 
> wrote:
> Salvatore 
> 
> 
> Thanks for your clarification below around the blueprint.
> 
> 
> > For LBaaS v2 therefore the relationship between it and
> Octavia should be the same as with any other
> > backend. I see Octavia has a blueprint for a "network
> driver" - and the derivable of that should definitely be
> > part of the LBaaS project.
> 
> 
> > For the rest, it would seem a bit strange to me if the LBaaS
> project incorporated a backend as well. After 
> 
> > all, LBaaS v1 did not incorporate haproxy!
> > Also, as Adam points out, Nova does not incorporate an
> Hypervisor.
> 
> 
> In my vision Octavia is a LBaaS framework that should not be
> tied to ha-proxy. The i

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Brandon Logan
Hi Susanne,

I believe the options for Octavia are:
1) Merge into the LBaaS tree (wherever LBaaS is)
2) Become its own openstack project
3) Remains in stackforge for eternity

#1 Is dependent on these options
1) LBaaS V2 graduates from the incubator into Neutron. V1 is deprecated.
2) LBaaS V2 remains in incubator until it can be spun out.  V1 in
Neutron is deprecated.
3) LBaaS V2 is abandoned in the incubator and LBaaS V1 remains.  (An
unlikely option)

I don't see any other feasible options.

On Tue, 2014-09-02 at 12:06 -0400, Susanne Balle wrote:
> Doug
> 
> 
> I agree with you but I need to understand the options. Susanne
> 
> 
> >> And I agree with Brandon’s sentiments.  We need to get something
> built before I’m going to worry too 
> >> much about where it should live.  Is this a candidate to get sucked
> into LBaaS?  Sure.  Could the reverse 
> >> happen?  Sure.  Let’s see how it develops.
> 
> 
> 
> On Tue, Sep 2, 2014 at 11:45 AM, Doug Wiegley 
> wrote:
> Hi all,
> 
> 
> > On the other hand one could also say that Octavia is the ML2
> equivalent of LBaaS. The equivalence here is very loose.
> Octavia would be a service-VM framework for doing load
> balancing using a variety of drivers. The drivers ultimately
> are in charge of using backends like haproxy or nginx running
> on the service VM to implement lbaas configuration.
> 
> 
> This, exactly.  I think it’s much fairer to define Octavia as
> an LBaaS purpose-built service vm framework, which will use
> nova and haproxy initially to provide a highly scalable
> backend. But before we get into terminology misunderstandings,
> there are a bunch of different “drivers” at play here, exactly
> because this is a framework:
>   * Neutron lbaas drivers – what we all know and love
>   * Octavia’s “network driver” - this is a piece of glue
> that exists to hide internal calls we have to make
> into Neutron until clean interfaces exist.  It might
> be a no-op in the case of an actual neutron lbaas
> driver, which could serve that function instead.
>   * Octavia’s “vm driver” - this is a piece of glue
> between the octavia controller and the nova VMs that
> are doing the load balancing.
>   * Octavia’s “compute driver” - you guessed it, an
> abstraction to Nova and its scheduler.
> Places that can be the “front-end” for Octavia:
>   * Neutron LBaaS v2 driver
>   * Neutron LBaaS v1 driver
>   * It’s own REST API
> Things that could have their own VM drivers:
>   * haproxy, running inside nova
>   * Nginx, running inside nova
>   * Anything else you want, running inside any hypervisor
> you want
>   * Vendor soft appliances
>   * Null-out the VM calls and go straight to some other
> backend?  Sure, though I’m not sure I’d see the point.
> There are quite a few synergies with other efforts, and we’re
> monitoring them, but not waiting for any of them.
> 
> 
> And I agree with Brandon’s sentiments.  We need to get
> something built before I’m going to worry too much about where
> it should live.  Is this a candidate to get sucked into
> LBaaS?  Sure.  Could the reverse happen?  Sure.  Let’s see how
> it develops.
> 
> 
> Incidentally, we are currently having a debate over the use of
> the term “vm” (and “vm driver”) as the name to describe
> octavia’s backends.  Feel free to chime in
> here: https://review.openstack.org/#/c/117701/
> 
> 
> Thanks,
> doug
> 
> 
> 
> 
> From: Salvatore Orlando 
> 
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> 
> Date: Tuesday, September 2, 2014 at 9:05 AM
> 
> To: "OpenStack Development Mailing List (not for usage
> questions)" 
> Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
> 
> 
> 
> Hi Susanne,
> 
> 
> I'm just trying to gain a good understanding of the situation
> here.
> More comments and questions inline.
> 
> 
> Salvatore
> 
>   

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Kyle Mestery
>>> > on the service VM to implement lbaas configuration.
>>> >
>>> >
>>> > This, exactly.  I think it’s much fairer to define Octavia as
>>> > an LBaaS purpose-built service vm framework, which will use
>>> > nova and haproxy initially to provide a highly scalable
>>> > backend. But before we get into terminology misunderstandings,
>>> > there are a bunch of different “drivers” at play here, exactly
>>> > because this is a framework:
>>> >   * Neutron lbaas drivers – what we all know and love
>>> >   * Octavia’s “network driver” - this is a piece of glue
>>> > that exists to hide internal calls we have to make
>>> > into Neutron until clean interfaces exist.  It might
>>> > be a no-op in the case of an actual neutron lbaas
>>> > driver, which could serve that function instead.
>>> >   * Octavia’s “vm driver” - this is a piece of glue
>>> > between the octavia controller and the nova VMs that
>>> > are doing the load balancing.
>>> >   * Octavia’s “compute driver” - you guessed it, an
>>> > abstraction to Nova and its scheduler.
>>> > Places that can be the “front-end” for Octavia:
>>> >   * Neutron LBaaS v2 driver
>>> >   * Neutron LBaaS v1 driver
>>> >   * It’s own REST API
>>> > Things that could have their own VM drivers:
>>> >   * haproxy, running inside nova
>>> >   * Nginx, running inside nova
>>> >   * Anything else you want, running inside any hypervisor
>>> > you want
>>> >   * Vendor soft appliances
>>> >   * Null-out the VM calls and go straight to some other
>>> > backend?  Sure, though I’m not sure I’d see the point.
>>> >         There are quite a few synergies with other efforts, and we’re
>>> > monitoring them, but not waiting for any of them.
>>> >
>>> >
>>> > And I agree with Brandon’s sentiments.  We need to get
>>> > something built before I’m going to worry too much about where
>>> > it should live.  Is this a candidate to get sucked into
>>> > LBaaS?  Sure.  Could the reverse happen?  Sure.  Let’s see how
>>> > it develops.
>>> >
>>> >
>>> > Incidentally, we are currently having a debate over the use of
>>> > the term “vm” (and “vm driver”) as the name to describe
>>> > octavia’s backends.  Feel free to chime in
>>> > here: https://review.openstack.org/#/c/117701/
>>> >
>>> >
>>> > Thanks,
>>> > doug
>>> >
>>> >
>>> >
>>> >
>>> > From: Salvatore Orlando 
>>> >
>>> > Reply-To: "OpenStack Development Mailing List (not for usage
>>> > questions)" 
>>> >
>>> > Date: Tuesday, September 2, 2014 at 9:05 AM
>>> >
>>> > To: "OpenStack Development Mailing List (not for usage
>>> > questions)" 
>>> > Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
>>> >
>>> >
>>> >
>>> > Hi Susanne,
>>> >
>>> >
>>> > I'm just trying to gain a good understanding of the situation
>>> > here.
>>> > More comments and questions inline.
>>> >
>>> >
>>> > Salvatore
>>> >
>>> > On 2 September 2014 16:34, Susanne Balle
>>> >  wrote:
>>> > Salvatore
>>> >
>>> >
>>> > Thanks for your clarification below around the
>>> > blueprint.
>>> >
>>> >
>>> > > For LBaaS v2 therefore the relationship between it
>>> > and Octavia should be the same as with any other
>>> > > backend. I see Octavia has a blueprint for a
>>> > "network driver" - and the derivable of that should
>>> > defi

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Eichberger, German
>>> > on the service VM to implement lbaas configuration.
>>> >
>>> >
>>> > This, exactly.  I think it’s much fairer to define Octavia as
>>> > an LBaaS purpose-built service vm framework, which will use
>>> > nova and haproxy initially to provide a highly scalable
>>> > backend. But before we get into terminology misunderstandings,
>>> > there are a bunch of different “drivers” at play here, exactly
>>> > because this is a framework:
>>> >   * Neutron lbaas drivers – what we all know and love
>>> >   * Octavia’s “network driver” - this is a piece of glue
>>> > that exists to hide internal calls we have to make
>>> > into Neutron until clean interfaces exist.  It might
>>> > be a no-op in the case of an actual neutron lbaas
>>> > driver, which could serve that function instead.
>>> >   * Octavia’s “vm driver” - this is a piece of glue
>>> > between the octavia controller and the nova VMs that
>>> > are doing the load balancing.
>>> >   * Octavia’s “compute driver” - you guessed it, an
>>> > abstraction to Nova and its scheduler.
>>> > Places that can be the “front-end” for Octavia:
>>> >   * Neutron LBaaS v2 driver
>>> >   * Neutron LBaaS v1 driver
>>> >   * It’s own REST API
>>> > Things that could have their own VM drivers:
>>> >   * haproxy, running inside nova
>>> >   * Nginx, running inside nova
>>> >   * Anything else you want, running inside any hypervisor
>>> > you want
>>> >   * Vendor soft appliances
>>> >   * Null-out the VM calls and go straight to some other
>>> > backend?  Sure, though I’m not sure I’d see the point.
>>> >         There are quite a few synergies with other efforts, and we’re
>>> > monitoring them, but not waiting for any of them.
>>> >
>>> >
>>> > And I agree with Brandon’s sentiments.  We need to get
>>> > something built before I’m going to worry too much about where
>>> > it should live.  Is this a candidate to get sucked into
>>> > LBaaS?  Sure.  Could the reverse happen?  Sure.  Let’s see how
>>> > it develops.
>>> >
>>> >
>>> > Incidentally, we are currently having a debate over the use of
>>> > the term “vm” (and “vm driver”) as the name to describe
>>> > octavia’s backends.  Feel free to chime in
>>> > here: https://review.openstack.org/#/c/117701/
>>> >
>>> >
>>> > Thanks,
>>> > doug
>>> >
>>> >
>>> >
>>> >
>>> > From: Salvatore Orlando 
>>> >
>>> > Reply-To: "OpenStack Development Mailing List (not for usage
>>> > questions)" 
>>> >
>>> > Date: Tuesday, September 2, 2014 at 9:05 AM
>>> >
>>> > To: "OpenStack Development Mailing List (not for usage
>>> > questions)" 
>>> > Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
>>> >
>>> >
>>> >
>>> > Hi Susanne,
>>> >
>>> >
>>> > I'm just trying to gain a good understanding of the situation
>>> > here.
>>> > More comments and questions inline.
>>> >
>>> >
>>> > Salvatore
>>> >
>>> > On 2 September 2014 16:34, Susanne Balle
>>> >  wrote:
>>> > Salvatore
>>> >
>>> >
>>> > Thanks for your clarification below around the
>>> > blueprint.
>>> >
>>> >
>>> > > For LBaaS v2 therefore the relationship between it
>>> > and Octavia should be the same as with any other
>>> > > backend. I see Octavia has a blueprint for a
>>> > "network driver" - and the der

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Stephen Balukoff
Hi Kyle,

IMO, that depends entirely on how the incubator project is run. For now,
I'm in favor of remaining separate and letting someone else be the guinea
pig. :/  I think we'll (all) be more productive this way.

Also keep in mind that the LBaaS v2 code is mostly there (just waiting on
reviews), so it's probably going to be ready for neutron-incubator
incubation well before Octavia is ready for anything like that.

Stephen

On Tue, Sep 2, 2014 at 12:52 PM, Kyle Mestery  wrote:

>
> To me what makes sense here is that we merge the Octavia code into the
> neutron-incubator when the LBaaS V2 code is merged there. If the end
> goal is to spin the LBaaS V2 stuff out into a separate git repository
> and project (under the networking umbrella), this would allow for the
> Octavia driver to be developed alongside the V2 API code, and in fact
> help satisfy one of the requirements around Neutron incubation
> graduation: Having a functional driver. And it also allows for the
> driver to continue to live on next to the API.
>
> What do people think about this?
>
> Thanks,
> Kyle
>
>

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Brandon Logan
I am not for this if Octavia is merged into the incubator when LBaaS V2
is, assuming LBaaS V2 will be merged into it before the summit.  I'd
rather Octavia get merged into whatever repository it is destined for
whenever it is much more mature.  If Octavia is merged into the
incubator too soon, I think its velocity will be much less than if it
were independent at first.

On Tue, 2014-09-02 at 13:45 -0700, Stephen Balukoff wrote:
> Hi Kyle,
> 
> 
> IMO, that depends entirely on how the incubator project is run. For
> now, I'm in favor of remaining separate and letting someone else be
> the guinea pig. :/  I think we'll (all) be more productive this way.
> 
> 
> Also keep in mind that the LBaaS v2 code is mostly there (just waiting
> on reviews), so it's probably going to be ready for neutron-incubator
> incubation well before Octavia is ready for anything like that.
> 
> 
> Stephen
> 
> On Tue, Sep 2, 2014 at 12:52 PM, Kyle Mestery 
> wrote:
> 
> 
> To me what makes sense here is that we merge the Octavia code
> into the
> neutron-incubator when the LBaaS V2 code is merged there. If
> the end
> goal is to spin the LBaaS V2 stuff out into a separate git
> repository
> and project (under the networking umbrella), this would allow
> for the
> Octavia driver to be developed alongside the V2 API code, and
> in fact
> help satisfy one of the requirements around Neutron incubation
> graduation: Having a functional driver. And it also allows for
> the
> driver to continue to live on next to the API.
> 
> What do people think about this?
> 
> Thanks,
> Kyle
> 
> 
> 
> 
> 
> -- 
> Stephen Balukoff 
> Blue Box Group, LLC 
> (800)613-4305 x807
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas][octavia]

2017-01-03 Thread Nir Magnezi
I would like to emphasize the importance of this issue.

Currently, all the LBaaS/Octavia gates are up and running (touch wood).
Nevertheless, this bug will become more apparent (aka broken gates) in the
next release of tempest (if we don't merge this fix beforehand).

The reason is that the issue occurs when you use tempest master,
while our gates currently use tempest tag 13.0.0 (as expected).

Nir

On Tue, Jan 3, 2017 at 11:04 AM, Genadi Chereshnya 
wrote:

> When running neutron_lbaas scenarios tests with the latest tempest version
> we fail because of https://bugs.launchpad.net/octavia/+bug/1649083.
>
> I would like if anyone can go over the patch that fixes the problem and
> merge it, so our automation will succeed.
> The patch is https://review.openstack.org/#/c/411257/
>
> Thanks in advance,
> Genadi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas][octavia]

2017-01-03 Thread Kosnik, Lubosz
In my opinion this patch should be changed. We should start using project_id
instead of keeping the tenant_id property.
All occurrences of tenant_id in [1] should be updated to project_id.

Lubosz

[1] neutron_lbaas/tests/tempest/v2/scenario/base.py
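
A purely hypothetical sketch of the kind of rename being requested (the
accessor below is an assumption, not the actual contents of the file in [1]):

    # Sketch only.  Newer tempest deprecates tenant_id in favour of
    # project_id, so a scenario base class would switch from something like:
    #
    #     owner = cls.manager.credentials.tenant_id    # deprecated upstream
    #
    # to:
    #
    #     owner = cls.manager.credentials.project_id   # assumed accessor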

From: Nir Magnezi 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, January 3, 2017 at 3:37 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [neutron-lbaas][octavia]

I would like to emphasize the importance of this issue.

Currently, all the LBaaS/Octavia gates are up and running (touch wood).
Nevertheless, this bug will become more apparent (aka broken gates) in the next 
release of tempest (if we don't merge this fix beforehand).

The reason is that the issue occurs when you use tempest master,
while our gates currently use tempest tag 13.0.0 (as expected).

Nir

On Tue, Jan 3, 2017 at 11:04 AM, Genadi Chereshnya
<gcher...@redhat.com> wrote:
When running neutron_lbaas scenarios tests with the latest tempest version we 
fail because of https://bugs.launchpad.net/octavia/+bug/1649083.
I would like if anyone can go over the patch that fixes the problem and merge 
it, so our automation will succeed.
The patch is https://review.openstack.org/#/c/411257/
Thanks in advance,
Genadi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-15 Thread Jorge Miramontes
Hey Octavia folks!


First off, yes, I'm still alive and kicking. :)

I'd like to start a conversation on usage requirements and have a few
suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
based protocols, we inherently enable connection logging for load
balancers for several reasons:

1) We can use these logs as the raw and granular data needed to track
usage. With logs, the operator has flexibility as to what usage metrics
they want to bill against. For example, bandwidth is easy to track and can
even be split into header and body data so that the provider can choose if
they want to bill on header data or not. Also, the provider can determine
if they will bill their customers for failed requests that were the fault
of the provider themselves. These are just a few examples; the point is
the flexible nature of logs (see the parsing sketch after this list).

2) Creating billable usage from logs is easy compared to other options
like polling. For example, in our current LBaaS iteration at Rackspace we
bill partly on "average concurrent connections". This is based on polling
and is not as accurate as it possibly can be. It's very close, but it
doesn't get more accurate than the logs themselves. Furthermore, polling
is more complex and uses up resources on the polling cadence.

3) Enabling logs for all load balancers can be used for debugging, support
and audit purposes. While the customer may or may not want their logs
uploaded to swift, operators and their support teams can still use this
data to help customers out with billing and setup issues. Auditing will
also be easier with raw logs.

4) Enabling logs for all load balancers will help mitigate uncertainty in
terms of capacity planning. Imagine if every customer suddenly enabled
logs without it ever being turned on. This could produce a spike in
resource utilization that will be hard to manage. Enabling logs from the
start means we are certain as to what to plan for other than the nature of
the customer's traffic pattern.
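
As a rough illustration of point 1, here is a minimal parsing sketch,
assuming HAProxy's default HTTP log format (status code followed by total
bytes sent to the client); the regex and field positions are assumptions,
not a vetted parser:

    import re
    from collections import defaultdict

    # Matches "... <frontend> <backend>/<server> <timers> <status> <bytes> ..."
    # e.g.    "... http-in pool1/srv2 10/0/30/69/109 200 2750 - - ..."
    LOG_RE = re.compile(r'(?P<frontend>\S+) (?P<backend>\S+)/(?P<server>\S+) '
                        r'\S+ (?P<status>\d{3}) (?P<bytes>\d+) ')

    def billable_bytes(lines):
        # Sum response bytes per frontend (one frontend per LB is assumed).
        totals = defaultdict(int)
        for line in lines:
            m = LOG_RE.search(line)
            if m:
                totals[m.group('frontend')] += int(m.group('bytes'))
        return totals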

Some Cons I can think of (please add more as I think the pros outweigh the
cons):

1) If we ever add UDP based protocols then this model won't work. < 1% of
our load balancers at Rackspace are UDP based so we are not looking at
using this protocol for Octavia. I'm more of a fan of building a really
good TCP/HTTP/HTTPS based load balancer because UDP load balancing solves
a different problem. For me different problem == different product.

2) I'm assuming HA Proxy. Thus, if we choose another technology for the
amphora then this model may break.


Also, and more generally speaking, I have categorized usage into three
categories:

1) Tracking usage - this is usage that will be used by operators and
support teams to gain insight into what load balancers are doing in an
attempt to monitor potential issues.
2) Billable usage - this is usage that is a subset of tracking usage used
to bill customers.
3) Real-time usage - this is usage that should be exposed via the API so
that customers can make decisions that affect their configuration (ex.
"Based off of the number of connections my web heads can handle when
should I add another node to my pool?").

These are my preliminary thoughts, and I'd love to gain insight into what
the community thinks. I have built about 3 usage collection systems thus
far (1 with Brandon) and have learned a lot. Some basic rules I have
discovered with collecting usage are:

1) Always collect granular usage as it "paints a picture" of what actually
happened. Massaged/un-granular usage == lost information.
2) Never imply, always be explicit. Implications usually stem from bad
assumptions.


Last but not least, we need to store every user and system load balancer
event such as creation, updates, suspension and deletion so that we may
bill on things like uptime and serve our customers better by knowing what
happened and when.


Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-06 Thread Phillip Toohill
Hello All, 

I wanted to start a discussion on floating IP management and ultimately
decide how the LBaaS group wants to handle the association. 

There is a need to utilize floating IPs (FLIPs) and their API calls to
associate a FLIP to the neutron port that we currently spin up.

See DOCS here:

> http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_create.html

Currently, LBaaS will make internal service calls (clean interface :/) to 
create and attach a Neutron port. 
The VIP from this port is added to the Loadbalancer object of the Load balancer 
configuration and returned to the user.

This creates a bit of a problem if we want to associate a FLIP with the port 
and display the FLIP to the user instead of
the port's VIP because the port is currently created and attached in the plugin
and there is no code anywhere to handle the FLIP
association. 
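
For concreteness, the FLIP association itself is a single Neutron API call
(see the docs link above); here is a minimal sketch using python-neutronclient,
where the credentials, external network ID and VIP port ID are placeholders
the caller would have to supply:

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='...', password='...',
                                    tenant_name='...', auth_url='...')

    EXT_NET_ID = 'REPLACE-WITH-EXTERNAL-NET-UUID'   # placeholder
    vip_port_id = 'REPLACE-WITH-VIP-PORT-UUID'      # placeholder

    # Create a floating IP on the external network and bind it to the
    # VIP port that LBaaS already creates.
    flip = neutron.create_floatingip({'floatingip': {
        'floating_network_id': EXT_NET_ID,
        'port_id': vip_port_id,
    }})['floatingip']
    print(flip['floating_ip_address'])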

To keep this short and to the point:

We need to discuss where and how we want to handle this association. I have a 
few questions to start it off. 

Do we want to add logic in the plugin to call the FLIP association API?

If we have logic in the plugin should we have configuration that identifies
whether to use/return the FLIP instead of the port VIP?

Would we rather have logic for FLIP association in the drivers?

If logic is in the drivers would we still return the port VIP to the user then 
later overwrite it with the FLIP? 
Or would we have configuration to not return the port VIP initially, with an
additional query showing the associated FLIP?


Is there an internal service call for this, and if so would we use it instead 
of API calls? 


There are plenty of other thoughts and questions to be asked and discussed in
regard to FLIP handling; hopefully this will get us going. I'm certain I may
not be completely understanding this, and my hope is that this email helps
clarify any uncertainties.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Stephen Balukoff
Hi Jorge!

Welcome back, eh! You've been missed.

Anyway, I just wanted to say that your proposal sounds great to me, and
it's good to finally be closer to having concrete requirements for logging,
eh. Once this discussion is nearing a conclusion, could you write up the
specifics of logging into a specification proposal document?

Regarding the discussion itself: I think we can ignore UDP for now, as
there doesn't seem to be high demand for it, and it certainly won't be
supported in v 0.5 of Octavia (and maybe not in v1 or v2 either, unless we
see real demand).

Regarding the 'real-time usage' information: I have some ideas regarding
getting this from a combination of iptables and / or the haproxy stats
interface. Were you thinking something different that involves on-the-fly
analysis of the logs or something?  (I tend to find that logs are great for
non-real time data, but can often be lacking if you need, say, a gauge like
'currently open connections' or something.)
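
For what it's worth, a minimal sketch of reading gauges like scur (current
sessions) from the HAProxy stats socket; the socket path is an assumption:

    import socket

    def haproxy_stats(sock_path='/var/run/haproxy.sock'):
        # 'show stat' returns CSV whose header line starts with '# pxname,...'
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        s.sendall(b'show stat\n')
        raw = b''
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            raw += chunk
        s.close()
        lines = [l for l in raw.decode('ascii', 'replace').splitlines() if l]
        fields = lines[0].lstrip('# ').split(',')
        return [dict(zip(fields, row.split(','))) for row in lines[1:]]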

One other thing: If there's a chance we'll be storing logs on the amphorae
themselves, then we need to have log rotation as part of the configuration
here. It would be silly to have an amphora failure just because its
ephemeral disk fills up, eh.
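
Purely as an illustration of capping that ephemeral disk (a real amphora would
more likely use logrotate or rsyslog rules than Python, so treat this as a
sketch of the principle only, with assumed paths and sizes):

    import logging
    from logging.handlers import RotatingFileHandler

    handler = RotatingFileHandler('/var/log/amphora/traffic.log',  # assumed path
                                  maxBytes=100 * 1024 * 1024,      # rotate at 100 MB
                                  backupCount=5)                   # cap at ~500 MB total
    logging.getLogger('amphora').addHandler(handler)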

Stephen

On Wed, Oct 15, 2014 at 4:03 PM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

> Hey Octavia folks!
>
>
> First off, yes, I'm still alive and kicking. :)
>
> I'd like to start a conversation on usage requirements and have a few
> suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
> based protocols, we inherently enable connection logging for load
> balancers for several reasons:
>
> 1) We can use these logs as the raw and granular data needed to track
> usage. With logs, the operator has flexibility as to what usage metrics
> they want to bill against. For example, bandwidth is easy to track and can
> even be split into header and body data so that the provider can choose if
> they want to bill on header data or not. Also, the provider can determine
> if they will bill their customers for failed requests that were the fault
> of the provider themselves. These are just a few examples; the point is
> the flexible nature of logs.
>
> 2) Creating billable usage from logs is easy compared to other options
> like polling. For example, in our current LBaaS iteration at Rackspace we
> bill partly on "average concurrent connections". This is based on polling
> and is not as accurate as it possibly can be. It's very close, but it
> doesn't get more accurate than the logs themselves. Furthermore, polling
> is more complex and uses up resources on the polling cadence.
>
> 3) Enabling logs for all load balancers can be used for debugging, support
> and audit purposes. While the customer may or may not want their logs
> uploaded to swift, operators and their support teams can still use this
> data to help customers out with billing and setup issues. Auditing will
> also be easier with raw logs.
>
> 4) Enabling logs for all load balancers will help mitigate uncertainty in
> terms of capacity planning. Imagine if every customer suddenly enabled
> logs without it ever being turned on. This could produce a spike in
> resource utilization that will be hard to manage. Enabling logs from the
> start means we are certain as to what to plan for other than the nature of
> the customer's traffic pattern.
>
> Some Cons I can think of (please add more as I think the pros outweigh the
> cons):
>
> 1) If we ever add UDP based protocols then this model won't work. < 1% of
> our load balancers at Rackspace are UDP based so we are not looking at
> using this protocol for Octavia. I'm more of a fan of building a really
> good TCP/HTTP/HTTPS based load balancer because UDP load balancing solves
> a different problem. For me different problem == different product.
>
> 2) I'm assuming HA Proxy. Thus, if we choose another technology for the
> amphora then this model may break.
>
>
> Also, and more generally speaking, I have categorized usage into three
> categories:
>
> 1) Tracking usage - this is usage that will be used by operators and
> support teams to gain insight into what load balancers are doing in an
> attempt to monitor potential issues.
> 2) Billable usage - this is usage that is a subset of tracking usage used
> to bill customers.
> 3) Real-time usage - this is usage that should be exposed via the API so
> that customers can make decisions that affect their configuration (ex.
> "Based off of the number of connections my web heads can handle when
> should I add another node to my pool?").
>
> These are my preliminary thoughts, and I'd love to gain insight into what
> the community thinks. I have built about 3 usage collection systems thus
> far (1 with Brandon) and have learned a lot. Some basic rules I have
> discovered with collecting usage are:
>
> 1) Always collect granular usage as it "paints a picture" of what actually
> happened. Massaged/un-granular usage == lost informati

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Robert van Leeuwen
> I,d like to start a conversation on usage requirements and have a few
> suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
> based protocols, we inherently enable connection logging for load
> balancers for several reasons:

Just a request from the operator side of things:
Please think about the scalability when storing all logs.

e.g. we are currently logging http requests to one load-balanced application
(that would be a fit for LBaaS).
It is about 500 requests per second, which adds up to 40GB per day (in
Elasticsearch).
Please make sure whatever solution is chosen it can cope with machines doing 
1000s of requests per second...
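
A back-of-envelope check of those figures (the ~1 KB indexed size per request
is an assumption):

    reqs_per_sec = 500
    bytes_per_entry = 1000
    gb_per_day = reqs_per_sec * 86400 * bytes_per_entry / 1e9
    print(round(gb_per_day))   # ~43 GB/day, in line with the ~40 GB observed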

Cheers,
Robert van Leeuwen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Jorge Miramontes
Hey Stephen (and Robert),

For real-time usage I was thinking something similar to what you are proposing. 
Using logs for this would be overkill IMO so your suggestions were what I was 
thinking of starting with.

As far as storing logs is concerned I was definitely thinking of offloading 
these onto separate storage devices. Robert, I totally hear you on the 
scalability part as our current LBaaS setup generates TB of request logs. I'll 
start planning out a spec and then I'll let everyone chime in there. I just 
wanted to get a general feel for the ideas I had mentioned. I'll also bring it 
up in today's meeting.

Cheers,
--Jorge

From: Stephen Balukoff <sbaluk...@bluebox.net>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Wednesday, October 22, 2014 4:04 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Jorge!

Welcome back, eh! You've been missed.

Anyway, I just wanted to say that your proposal sounds great to me, and it's 
good to finally be closer to having concrete requirements for logging, eh. Once 
this discussion is nearing a conclusion, could you write up the specifics of 
logging into a specification proposal document?

Regarding the discussion itself: I think we can ignore UDP for now, as there 
doesn't seem to be high demand for it, and it certainly won't be supported in v 
0.5 of Octavia (and maybe not in v1 or v2 either, unless we see real demand).

Regarding the 'real-time usage' information: I have some ideas regarding 
getting this from a combination of iptables and / or the haproxy stats 
interface. Were you thinking something different that involves on-the-fly 
analysis of the logs or something?  (I tend to find that logs are great for 
non-real time data, but can often be lacking if you need, say, a gauge like 
'currently open connections' or something.)

One other thing: If there's a chance we'll be storing logs on the amphorae 
themselves, then we need to have log rotation as part of the configuration 
here. It would be silly to have an amphora failure just because its ephemeral 
disk fills up, eh.

Stephen

On Wed, Oct 15, 2014 at 4:03 PM, Jorge Miramontes
<jorge.miramon...@rackspace.com> wrote:
Hey Octavia folks!


First off, yes, I'm still alive and kicking. :)

I'd like to start a conversation on usage requirements and have a few
suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
based protocols, we inherently enable connection logging for load
balancers for several reasons:

1) We can use these logs as the raw and granular data needed to track
usage. With logs, the operator has flexibility as to what usage metrics
they want to bill against. For example, bandwidth is easy to track and can
even be split into header and body data so that the provider can choose if
they want to bill on header data or not. Also, the provider can determine
if they will bill their customers for failed requests that were the fault
of the provider themselves. These are just a few examples; the point is
the flexible nature of logs.

2) Creating billable usage from logs is easy compared to other options
like polling. For example, in our current LBaaS iteration at Rackspace we
bill partly on "average concurrent connections". This is based on polling
and is not as accurate as it possibly can be. It's very close, but it
doesn't get more accurate that the logs themselves. Furthermore, polling
is more complex and uses up resources on the polling cadence.

3) Enabling logs for all load balancers can be used for debugging, support
and audit purposes. While the customer may or may not want their logs
uploaded to swift, operators and their support teams can still use this
data to help customers out with billing and setup issues. Auditing will
also be easier with raw logs.

4) Enabling logs for all load balancers will help mitigate uncertainty in
terms of capacity planning. Imagine if every customer suddenly enabled
logs without it ever being turned on. This could produce a spike in
resource utilization that will be hard to manage. Enabling logs from the
start means we are certain as to what to plan for other than the nature of
the customer's traffic pattern.

Some Cons I can think of (please add more as I think the pros outweigh the
cons):

1) If we ever add UDP based protocols then this model won't work. < 1% of
our load balancers at Rackspace are UDP based so we are not looking at
using this protocol for Octavia. I'm more of a fan of building a really
good TCP/HTTP/HTTPS based load balancer because UDP load balancing solves
a different problem. For me different problem == different product.

2) I'm

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Eichberger, German
Hi Jorge,

Good discussion so far + glad to have you back :)

I am not a big fan of using logs for billing information since ultimately (at
least at HP) we need to pump it into ceilometer. So I am envisioning either the
amphora (via a proxy) pumping it straight into that system, or us collecting it
on the controller and pumping it from there.

Allowing/enabling logging creates some requirements on the hardware, mainly
that it can handle the IO coming from logging. Some operators might choose to
hook up very cheap, low-performance disks which might not be able to deal
with the log traffic. So I would suggest that there is some rate limiting on
the log output to help with that.
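
A minimal token-bucket sketch of the rate limiting German suggests (the rate
and burst numbers are arbitrary assumptions an operator would tune):

    import time

    class LogRateLimiter(object):
        """Allow at most `rate` log lines/sec, with bursts up to `burst`."""
        def __init__(self, rate=1000.0, burst=5000.0):
            self.rate, self.burst = rate, burst
            self.tokens = burst
            self.last = time.time()

        def allow(self):
            now = time.time()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False   # caller drops or buffers the line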

Thanks,
German

From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Wednesday, October 22, 2014 6:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hey Stephen (and Robert),

For real-time usage I was thinking something similar to what you are proposing. 
Using logs for this would be overkill IMO so your suggestions were what I was 
thinking of starting with.

As far as storing logs is concerned I was definitely thinking of offloading 
these onto separate storage devices. Robert, I totally hear you on the 
scalability part as our current LBaaS setup generates TB of request logs. I'll 
start planning out a spec and then I'll let everyone chime in there. I just 
wanted to get a general feel for the ideas I had mentioned. I'll also bring it 
up in today's meeting.

Cheers,
--Jorge

From: Stephen Balukoff <sbaluk...@bluebox.net>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Wednesday, October 22, 2014 4:04 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Jorge!

Welcome back, eh! You've been missed.

Anyway, I just wanted to say that your proposal sounds great to me, and it's 
good to finally be closer to having concrete requirements for logging, eh. Once 
this discussion is nearing a conclusion, could you write up the specifics of 
logging into a specification proposal document?

Regarding the discussion itself: I think we can ignore UDP for now, as there 
doesn't seem to be high demand for it, and it certainly won't be supported in v 
0.5 of Octavia (and maybe not in v1 or v2 either, unless we see real demand).

Regarding the 'real-time usage' information: I have some ideas regarding 
getting this from a combination of iptables and / or the haproxy stats 
interface. Were you thinking something different that involves on-the-fly 
analysis of the logs or something?  (I tend to find that logs are great for 
non-real time data, but can often be lacking if you need, say, a gauge like 
'currently open connections' or something.)

One other thing: If there's a chance we'll be storing logs on the amphorae 
themselves, then we need to have log rotation as part of the configuration 
here. It would be silly to have an amphora failure just because its ephemeral 
disk fills up, eh.

Stephen

On Wed, Oct 15, 2014 at 4:03 PM, Jorge Miramontes
<jorge.miramon...@rackspace.com> wrote:
Hey Octavia folks!


First off, yes, I'm still alive and kicking. :)

I'd like to start a conversation on usage requirements and have a few
suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
based protocols, we inherently enable connection logging for load
balancers for several reasons:

1) We can use these logs as the raw and granular data needed to track
usage. With logs, the operator has flexibility as to what usage metrics
they want to bill against. For example, bandwidth is easy to track and can
even be split into header and body data so that the provider can choose if
they want to bill on header data or not. Also, the provider can determine
if they will bill their customers for failed requests that were the fault
of the provider themselves. These are just a few examples; the point is
the flexible nature of logs.

2) Creating billable usage from logs is easy compared to other options
like polling. For example, in our current LBaaS iteration at Rackspace we
bill partly on "average concurrent connections". This is based on polling
and is not as accurate as it possibly can be. It's very close, but it
doesn't get more accurate than the logs themselves. Furthermore, polling
is more complex and consumes resources on every polling cycle.

3) Enabling logs for all load balancers can be used for debugging, support
and audit purposes. While the customer may or may not want their logs
uploaded to Swift, operators and their support teams can still use this
data to help their customers.
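To make point 1 a bit more concrete, here is a rough sketch of the kind of
reduction I mean. The field offsets assume haproxy's default HTTP log format
behind a standard syslog prefix -- that's an assumption for illustration; a
real deployment would pin the format down with a custom log-format directive:

    import collections

    def billable_bytes(log_lines):
        # Reduce raw request log lines to total bytes per frontend.
        usage = collections.defaultdict(int)
        for line in log_lines:
            fields = line.split()
            try:
                frontend = fields[7]                 # frontend name
                usage[frontend] += int(fields[11])   # bytes_read
            except (IndexError, ValueError):
                continue                             # not a request line
        return dict(usage)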

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-23 Thread Jorge Miramontes
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into your usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple purposes, including
billing. Billing requirements are different from connection logging
requirements. However, connection logging is a very accurate mechanism to
capture billable metrics and thus, is related. My vision for this is
something like the following:

- Capture logs in a scalable way (i.e. capture logs and put them on a
separate scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and
send them on their merry way to ceilometer or whatever service an operator
will be using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything
from indefinitely to not at all. Rackspace is planning on keeping them for
a certain period of time for the following reasons:

A) We have connection logging as a planned feature. If a customer turns
on the connection logging feature for their load balancer it will already
have a history. One important aspect of this is that customers (at least
ours) tend to turn on logging after they realize they need it (usually
after a tragic lb event). By already capturing the logs I'm sure customers
will be extremely happy to see that there are already X days worth of logs
they can immediately sift through.
B) Operators and their support teams can leverage logs when providing
service to their customers. This is huge for finding issues and resolving
them quickly.
C) Albeit a minor point, building support for logs from the get-go
mitigates capacity management uncertainty. My example earlier was the
extreme case of every customer turning on logging at the same time. While
unlikely, I would hate to manage that!
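To sketch the hourly processing step above in code (simplified; it assumes
the raw logs have already been reduced to (loadbalancer_id, bytes) pairs,
and 'sink' stands in for whichever billing backend an operator configures):

    def process_hour(records, sink):
        # records: iterable of (loadbalancer_id, bytes) pairs pulled from
        # the log store for the last hour; sink: anything with a send().
        usage = {}
        for lb_id, nbytes in records:
            usage[lb_id] = usage.get(lb_id, 0) + nbytes
        for lb_id, total in usage.items():
            sink.send(lb_id, total)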

I agree that there are other ways to capture billing metrics but, from my
experience, those tend to be more complex than what I am advocating and
without the added benefits listed above. An understanding of HP's desires
on this matter will hopefully get this to a point where we can start
working on a spec.

Cheers,
--Jorge

P.S. Real-time stats is a different beast and I envision there being an
API call that returns "real-time" data such as this ==>
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.
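For illustration only, a hypothetical response shape for such a call (the
endpoint and field names here are made up, not a proposal of record):

    GET /v1/loadbalancers/{lb_id}/stats

    {
        "active_connections": 1278,
        "total_connections": 8294731,
        "bytes_in": 519283745,
        "bytes_out": 9174527340
    }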


From: Eichberger, German
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Wednesday, October 22, 2014 2:41 PM
To:  "OpenStack Development Mailing List (not for usage questions)"

Subject:  Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements


>Hi Jorge,
> 
>Good discussion so far + glad to have you back :-)
> 
>I am not a big fan of using logs for billing information since ultimately
>(at least at HP) we need to pump it into ceilometer. So I am envisioning
>either the
> amphora (via a proxy) to pump it straight into that system or we collect
>it on the controller and pump it from there.
> 
>Allowing/enabling logging creates some requirements on the hardware,
>mainly, that they can handle the IO coming from logging. Some operators
>might choose to
> hook up very cheap and non-performing disks which might not be able to
>deal with the log traffic. So I would suggest that there is some rate
>limiting on the log output to help with that.
>
> 
>Thanks,
>German
> 
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>
>Sent: Wednesday, October 22, 2014 6:51 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements
>
>
> 
>Hey Stephen (and Robert),
>
> 
>
>For real-time usage I was thinking something similar to what you are
>proposing. Using logs for this would be overkill IMO so your suggestions
>were what I was
> thinking of starting with.
>
> 
>
>As far as storing logs is concerned I was definitely thinking of
>offloading these onto separate storage devices. Robert, I totally hear
>you on the scalability
> part as our current LBaaS setup generates TB of request logs. I'll start
>planning out a spec and then I'll let everyone chime in there. I just
>wanted to get a general feel for the ideas I had mentioned. I'll also
>bring it up in today's meeting.
>
> 
>
>Cheers,
>
>--Jorge
>
>
>
>
> 
>
>From: Stephen Balukoff
>Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>
>Date: Wednesday, October 22, 2014 4:04 AM
>To: "OpenStack Development Mailing List (not for usage questions)"
>
>Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-24 Thread Eichberger, German
Hi Jorge,

I agree completely with the points you make about the logs. We still feel that 
metering and logging are two different problems. The Ceilometer community has 
a proposal on how to meter LBaaS (see 
http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/lbaas_metering.html)
 and we at HP think that those values will be sufficient for us for the time 
being. 
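To give a feel for how the two worlds could meet, here is a rough sketch
mapping haproxy stats-socket fields onto meter names along the lines of that
spec (the meter names reflect my reading of the spec and the transport into
ceilometer is deliberately elided):

    # haproxy CSV field -> candidate meter name (assumed from the spec)
    METER_MAP = {
        'scur': 'network.services.lb.active.connections',
        'stot': 'network.services.lb.total.connections',
        'bin':  'network.services.lb.incoming.bytes',
        'bout': 'network.services.lb.outgoing.bytes',
    }

    def to_samples(stats_row):
        # stats_row: one CSV row from the haproxy stats socket.
        return [{'meter': meter, 'volume': int(stats_row[field])}
                for field, meter in METER_MAP.items()
                if stats_row.get(field)]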

I think our discussion is mostly about connection logs, which are emitted in 
some way from the amphora (e.g. haproxy logs). Since they are the customer's 
logs we need to explore the privacy implications on our end (I assume at RAX 
you have controls in place to make sure that there is no violation :-). Also I 
need to check if our central logging system is scalable enough and whether we 
can send logs there without creating security holes.
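On the scalability piece, something as simple as rate-limited syslog
forwarding on the amphora may be enough. A hedged sketch in rsyslog syntax
(it assumes haproxy logs via the local0 facility through the local syslog
daemon, and the collector address is operator-supplied):

    $ModLoad imuxsock
    $SystemLogRateLimitInterval 5        # window, in seconds
    $SystemLogRateLimitBurst 20000       # max messages per window
    local0.* @@logs.example.com:10514    # '@@' forwards over TCP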

Another possibility is to ship our amphora agent logs via syslog to a central 
system to help with troubleshooting and debugging. Those could be sufficiently 
anonymized to avoid privacy issues. What are your thoughts on logging those?
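The anonymization itself could be as simple as scrubbing client addresses
before the lines leave the amphora. A minimal sketch (IPv4 only; a real
version would also handle IPv6 and perhaps keep a keyed hash so lines can
still be correlated):

    import re

    CLIENT_IP = re.compile(r'\b\d{1,3}(?:\.\d{1,3}){3}\b')

    def anonymize(line):
        # Replace every IPv4 literal in a log line with a fixed token.
        return CLIENT_IP.sub('x.x.x.x', line)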

Thanks,
German

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
Sent: Thursday, October 23, 2014 3:30 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide more 
insight into your usage requirements? Also, I'd like to clarify a few points 
related to using logging.

I am advocating that logs be used for multiple purposes, including billing. 
Billing requirements are different from connection logging requirements. 
However, connection logging is a very accurate mechanism to capture billable 
metrics and thus, is related. My vision for this is something like the 
following:

- Capture logs in a scalable way (i.e. capture logs and put them on a separate 
scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and send 
them on their merry way to ceilometer or whatever service an operator will be 
using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything from 
indefinitely to not at all. Rackspace is planning on keeping them for a certain 
period of time for the following reasons:

A) We have connection logging as a planned feature. If a customer turns 
on the connection logging feature for their load balancer it will already have 
a history. One important aspect of this is that customers (at least
ours) tend to turn on logging after they realize they need it (usually after a 
tragic lb event). By already capturing the logs I'm sure customers will be 
extremely happy to see that there are already X days worth of logs they can 
immediately sift through.
B) Operators and their support teams can leverage logs when providing 
service to their customers. This is huge for finding issues and resolving them 
quickly.
C) Albeit a minor point, building support for logs from the get-go 
mitigates capacity management uncertainty. My example earlier was the extreme 
case of every customer turning on logging at the same time. While unlikely, I 
would hate to manage that!

I agree that there are other ways to capture billing metrics but, from my 
experience, those tend to be more complex than what I am advocating and without 
the added benefits listed above. An understanding of HP's desires on this 
matter will hopefully get this to a point where we can start working on a spec.

Cheers,
--Jorge

P.S. Real-time stats is a different beast and I envision there being an API 
call that returns "real-time" data such as this ==> 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.


From: Eichberger, German
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Wednesday, October 22, 2014 2:41 PM
To:  "OpenStack Development Mailing List (not for usage questions)"

Subject:  Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements


>Hi Jorge,
> 
>Good discussion so far + glad to have you back :-)
> 
>I am not a big fan of using logs for billing information since 
>ultimately (at least at HP) we need to pump it into ceilometer. So I am 
>envisioning either the  amphora (via a proxy) to pump it straight into 
>that system or we collect it on the controller and pump it from there.
> 
>Allowing/enabling logging creates some requirements on the hardware, 
>mainly, that they can handle the IO coming from logging. Some operators 
>might choose to  hook up very cheap and non performing disks which 
>might not be able to deal with the log traffic. So I would suggest that 
>there is some rate limiting on the log output to help with that.
>
> 
>Thanks,
>German
> 
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-27 Thread Jorge Miramontes
Hey German,

I totally agree on the security/privacy aspect of logs, especially due to
the SSL/TLS Termination feature.

After looking at BP [1] and the spec [2] for metering, it looks like it is
proposing to send more than just billable usage to ceilometer. From my
previous email I considered this "tracking" usage ("billable" usage can be
a subset of tracking usage). It also appears to me that there is an
implied interface for ceilometer as we need to be able to capture metrics
from various lb devices (HAProxy, Nginx, Netscaler, etc.), standardize
them, and then send them off. That said, what type of implementation was
HP thinking of to gather these metrics? Instead of focusing on my idea of
using logging I'd like to change the discussion and get a picture as to
what you all are envisioning for a possible implementation direction.
Important items for Rackspace include accuracy of data, no lost data (i.e.
when sending to upstream system ensure it gets there), reliability of
cadence when sending usage to upstream system, and the ability to
backtrack and audit data whenever there seems to be a discrepancy in a
customer's monthly statement. Keep in mind that we need to integrate with
our current billing pipeline so we are not planning on using ceilometer at
the moment. Thus, we need to make this somewhat configurable for those not
using ceilometer.

Cheers,
--Jorge

[1] 
https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-meter-lbaas

[2] https://review.openstack.org/#/c/94958/12/specs/juno/lbaas_metering.rst


On 10/24/14 5:19 PM, "Eichberger, German"  wrote:

>Hi Jorge,
>
>I agree completely with the points you make about the logs. We still feel
>that metering and logging are two different problems. The Ceilometer
>community has a proposal on how to meter LBaaS (see
>http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/lbaas_met
>ering.html) and we at HP think that those values will be sufficient for us
>for the time being.
>
>I think our discussion is mostly about connection logs, which are emitted
>in some way from the amphora (e.g. haproxy logs). Since they are the
>customer's logs we need to explore the privacy implications on our end (I
>assume at RAX you have controls in place to make sure that there is no
>violation :-). Also I need to check if our central logging system is
>scalable enough and we can send logs there without creating security holes.
>
>Another possibility is to ship our amphora agent logs via syslog to a
>central system to help with troubleshooting and debugging. Those could be
>sufficiently anonymized to avoid privacy issues. What are your thoughts on
>logging those?
>
>Thanks,
>German
>
>-Original Message-
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>Sent: Thursday, October 23, 2014 3:30 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements
>
>Hey German/Susanne,
>
>To continue our conversation from our IRC meeting could you all provide
>more insight into your usage requirements? Also, I'd like to clarify a few
>points related to using logging.
>
>I am advocating that logs be used for multiple purposes, including
>billing. Billing requirements are different from connection logging
>requirements. However, connection logging is a very accurate mechanism to
>capture billable metrics and thus, is related. My vision for this is
>something like the following:
>
>- Capture logs in a scalable way (i.e. capture logs and put them on a
>separate scalable store somewhere so that it doesn't affect the amphora).
>- Every X amount of time (every hour, for example) process the logs and
>send them on their merry way to ceilometer or whatever service an
>operator will be using for billing purposes.
>- Keep logs for some configurable amount of time. This could be anything
>from indefinitely to not at all. Rackspace is planning on keeping them for
>a certain period of time for the following reasons:
>   
>   A) We have connection logging as a planned feature. If a customer turns
>on the connection logging feature for their load balancer it will already
>have a history. One important aspect of this is that customers (at least
>ours) tend to turn on logging after they realize they need it (usually
>after a tragic lb event). By already capturing the logs I'm sure
>customers will be extremely happy to see that there are already X days
>worth of logs they can immediately sift through.
>   B) Operators and their support teams can leverage logs when providing
>service to their customers. This is huge for finding issues and resolving
>them quickly.
>   C) Albeit a minor point, building support for logs from the get-go
>mitigates capacity management uncertainty.

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-27 Thread Angus Lees
On Wed, 22 Oct 2014 11:29:27 AM Robert van Leeuwen wrote:
> > I'd like to start a conversation on usage requirements and have a few
> > suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
> > based protocols, we inherently enable connection logging for load
> 
> > balancers for several reasons:
> Just a request from the operator side of things:
> Please think about the scalability when storing all logs.
> 
> e.g. we are currently logging HTTP requests to one load-balanced application
> (that would be a fit for LBaaS). It is about 500 requests per second, which
> adds up to 40GB per day (in Elasticsearch). Please make sure whatever
> solution is chosen can cope with machines doing 1000s of requests per
> second...

And to take this further, what happens during a DoS attack (either syn flood or 
full connections)?  How do we ensure that we don't lose our logging system 
and/or amplify the DoS attack?

One solution is sampling, with a tunable knob for the sampling rate - perhaps 
tunable per-vip.  This still increases linearly with attack traffic, unless you 
use time-based sampling (1-every-N-seconds rather than 1-every-N-packets).
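A minimal sketch of the time-based variant (the interval is the tunable knob;
at most one record is emitted every 'interval' seconds no matter how hard the
vip is being hit):

    import time

    class TimeSampler(object):
        def __init__(self, interval=1.0):
            self.interval = interval
            self._next_emit = 0.0

        def should_log(self):
            # True at most once per interval, so logging cost stays
            # fixed even while attack traffic grows.
            now = time.time()
            if now >= self._next_emit:
                self._next_emit = now + self.interval
                return True
            return False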

One of the advantages of (eg) polling the number of current sessions is that 
the cost of that monitoring is essentially fixed regardless of the number of 
connections passing through.  Numerous other metrics (rate of new connections, 
etc) also have this property and could presumably be used for accurate billing 
- without amplifying attacks.

I think we should be careful about whether we want logging or metrics for more 
accurate billing.  Both are useful, but full logging is only really required 
for ad-hoc debugging (important! but different).

-- 
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-28 Thread Jorge Miramontes
Thanks for the reply Angus,

DDoS attacks are definitely a concern we are trying to address here. My
assumptions are based on a solution that is engineered for this type of
thing. Are you more concerned with network I/O during a DoS attack or
storing the logs? Under the idea I had, I wanted to make the amount of
time logs are stored for configurable so that the operator can choose
whether they want the logs after processing or not. The network I/O of
pumping logs out is a concern of mine, however.

Sampling seems like the go-to solution for gathering usage but I was
looking for something different as sampling can get messy and can be
inaccurate for certain metrics. Depending on the sampling rate, this
solution has the potential to miss spikes in traffic if you are gathering
gauge metrics such as active connections/sessions. Using logs would be
100% accurate in this case. Also, I'm assuming LBaaS will have events so
combining sampling with events (CREATE, UPDATE, SUSPEND, DELETE, etc.)
gets complicated. Combining logs with events is arguably less complicated
as the granularity of logs is high. Due to this granularity, one can split
the logs based on the event times cleanly. Since sampling will have a
fixed cadence you will have to perform a "manual" sample at the time of
the event (i.e. add complexity).

At the end of the day there is no free lunch so more insight is
appreciated. Thanks for the feedback.

Cheers,
--Jorge




On 10/27/14 6:55 PM, "Angus Lees"  wrote:

>On Wed, 22 Oct 2014 11:29:27 AM Robert van Leeuwen wrote:
>> > I'd like to start a conversation on usage requirements and have a few
>> > suggestions. I advocate that, since we will be using TCP and
>>HTTP/HTTPS
>> > based protocols, we inherently enable connection logging for load
>> 
>> > balancers for several reasons:
>> Just a request from the operator side of things:
>> Please think about the scalability when storing all logs.
>> 
>> e.g. we are currently logging HTTP requests to one load-balanced
>>application
>> (that would be a fit for LBaaS). It is about 500 requests per second,
>>which
>> adds up to 40GB per day (in Elasticsearch). Please make sure whatever
>> solution is chosen can cope with machines doing 1000s of requests per
>> second...
>
>And to take this further, what happens during a DoS attack (either syn
>flood or 
>full connections)?  How do we ensure that we don't lose our logging
>system 
>and/or amplify the DoS attack?
>
>One solution is sampling, with a tunable knob for the sampling rate -
>perhaps 
>tunable per-vip.  This still increases linearly with attack traffic,
>unless you 
>use time-based sampling (1-every-N-seconds rather than 1-every-N-packets).
>
>One of the advantages of (eg) polling the number of current sessions is
>that 
>the cost of that monitoring is essentially fixed regardless of the number
>of 
>connections passing through.  Numerous other metrics (rate of new
>connections, 
>etc) also have this property and could presumably be used for accurate
>billing 
>- without amplifying attacks.
>
>I think we should be careful about whether we want logging or metrics for
>more 
>accurate billing.  Both are useful, but full logging is only really
>required 
>for ad-hoc debugging (important! but different).
>
>-- 
> - Gus
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-28 Thread Angus Lees
On Tue, 28 Oct 2014 04:42:27 PM Jorge Miramontes wrote:
> Thanks for the reply Angus,
> 
> DDoS attacks are definitely a concern we are trying to address here. My
> assumptions are based on a solution that is engineered for this type of
> thing. Are you more concerned with network I/O during a DoS attack or
> storing the logs? Under the idea I had, I wanted to make the amount of
> time logs are stored for configurable so that the operator can choose
> whether they want the logs after processing or not. The network I/O of
> pumping logs out is a concern of mine, however.

My primary concern was the generated network I/O, and the write bandwidth to 
storage media implied by that (not so much the accumulated volume of data).

We're in an era where 10Gb/s networking is now common for serving/loadbalancer 
infrastructure and as far as I can see the trend for networking is climbing 
more steeply than storage I/O, so it's only going to get worse. 10Gb/s of 
short-lived connections is a *lot* to try to write to reliable storage 
somewhere and later analyse.
It's a useful option for some users, but it would be a shame to have to limit 
loadbalancer throughput by the logging infrastructure just because we didn't 
have an alternative available.

I think you're right, that we don't have an obviously-correct choice here.  I 
think we need to expose both cheap sampling/polling of counters and more 
detailed logging of connections matching patterns (and indeed actual packet 
capture would be nice too).  Someone could then choose to base their billing 
on either datasource depending on their own accuracy-vs-cost-of-collection 
tradeoffs.  I don't see that either approach is going to be sufficiently 
universal to obsolete the other :(

Also: UDP. Most providers are all about HTTP now, but there are still some 
people that need to bill for UDP, SIP, VPN, etc traffic.

 - Gus

> Sampling seems like the go-to solution for gathering usage but I was
> looking for something different as sampling can get messy and can be
> inaccurate for certain metrics. Depending on the sampling rate, this
> solution has the potential to miss spikes in traffic if you are gathering
> gauge metrics such as active connections/sessions. Using logs would be
> 100% accurate in this case. Also, I'm assuming LBaaS will have events so
> combining sampling with events (CREATE, UPDATE, SUSPEND, DELETE, etc.)
> gets complicated. Combining logs with events is arguably less complicated
> as the granularity of logs is high. Due to this granularity, one can split
> the logs based on the event times cleanly. Since sampling will have a
> fixed cadence you will have to perform a "manual" sample at the time of
> the event (i.e. add complexity).
> 
> At the end of the day there is no free lunch so more insight is
> appreciated. Thanks for the feedback.
> 
> Cheers,
> --Jorge
> 
> On 10/27/14 6:55 PM, "Angus Lees"  wrote:
> >On Wed, 22 Oct 2014 11:29:27 AM Robert van Leeuwen wrote:
> >> > I'd like to start a conversation on usage requirements and have a few
> >> > suggestions. I advocate that, since we will be using TCP and
> >>
> >>HTTP/HTTPS
> >>
> >> > based protocols, we inherently enable connection logging for load
> >> 
> >> > balancers for several reasons:
> >> Just a request from the operator side of things:
> >> Please think about the scalability when storing all logs.
> >> 
> >> e.g. we are currently logging http requests to one load balanced
> >>
> >>application
> >>
> >> (that would be a fit for LBaaS). It is about 500 requests per second,
> >>
> >>which
> >>
> >> adds up to 40GB per day (in Elasticsearch). Please make sure whatever
> >> solution is chosen can cope with machines doing 1000s of requests per
> >> second...
> >
> >And to take this further, what happens during a DoS attack (either syn
> >flood or
> >full connections)?  How do we ensure that we don't lose our logging
> >system
> >and/or amplify the DoS attack?
> >
> >One solution is sampling, with a tunable knob for the sampling rate -
> >perhaps
> >tunable per-vip.  This still increases linearly with attack traffic,
> >unless you
> >use time-based sampling (1-every-N-seconds rather than 1-every-N-packets).
> >
> >One of the advantages of (eg) polling the number of current sessions is
> >that
> >the cost of that monitoring is essentially fixed regardless of the number
> >of
> >connections passing through.  Numerous other metrics (rate of new
> >connections,
> >etc) also have this property and could presumably be used for accurate
> >billing
> >- without amplifying attacks.
> >
> >I think we should be careful about whether we want logging or metrics for
> >more
> >accurate billing.  Both are useful, but full logging is only really
> >required
> >for ad-hoc debugging (important! but different).
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-04 Thread Susanne Balle
Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allows the logs to be
moved to various backends such as Elasticsearch, Hadoop HDFS, Swift,
etc., as well as, by default (but with the option to disable it), ceilometer.
Ceilometer is the de facto metering service for OpenStack so we need to
support it. We would like the integration with Ceilometer to be based on
Notifications. I believe German sent a reference to that in another email. The
pre-processing will need to be optional and the amount of data aggregation
configurable.

What you describe below to me is usage gathering/metering. The billing is
independent since companies with private clouds might not want to bill but
still need usage reports for capacity planning etc. Billing/Charging is
just putting a monetary value on the various forms of usage.

I agree with all points.

> - Capture logs in a scalable way (i.e. capture logs and put them on a
> separate scalable store somewhere so that it doesn't affect the amphora).

> - Every X amount of time (every hour, for example) process the logs and
> send them on their merry way to ceilometer or whatever service an operator
> will be using for billing purposes.

"Keep the logs": This is what we would use log forwarding to either Swift
or Elastic Search, etc.

>- Keep logs for some configurable amount of time. This could be anything
> from indefinitely to not at all. Rackspace is planning on keeping them for
> a certain period of time for the following reasons:

It looks like we are in agreement so I am not sure why it sounded like we
were in disagreement on the IRC. I am not sure why but it sounded like you
were talking about something else when you were talking about the real time
processing. If we are just talking about moving the logs to your Hadoop
cluster, or any backend, in a scalable way, we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

> Hey German/Susanne,
>
> To continue our conversation from our IRC meeting could you all provide
> more insight into your usage requirements? Also, I'd like to clarify a few
> points related to using logging.
>
> I am advocating that logs be used for multiple purposes, including
> billing. Billing requirements are different from connection logging
> requirements. However, connection logging is a very accurate mechanism to
> capture billable metrics and thus, is related. My vision for this is
> something like the following:
>
> - Capture logs in a scalable way (i.e. capture logs and put them on a
> separate scalable store somewhere so that it doesn't affect the amphora).
> - Every X amount of time (every hour, for example) process the logs and
> send them on their merry way to ceilometer or whatever service an operator
> will be using for billing purposes.
> - Keep logs for some configurable amount of time. This could be anything
> from indefinitely to not at all. Rackspace is planning on keeping them for
> a certain period of time for the following reasons:
>
> A) We have connection logging as a planned feature. If a customer
> turns
> on the connection logging feature for their load balancer it will already
> have a history. One important aspect of this is that customers (at least
> ours) tend to turn on logging after they realize they need it (usually
> after a tragic lb event). By already capturing the logs I'm sure customers
> will be extremely happy to see that there are already X days worth of logs
> they can immediately sift through.
> B) Operators and their support teams can leverage logs when
> providing
> service to their customers. This is huge for finding issues and resolving
> them quickly.
> C) Albeit a minor point, building support for logs from the get-go
> mitigates capacity management uncertainty. My example earlier was the
> extreme case of every customer turning on logging at the same time. While
> unlikely, I would hate to manage that!
>
> I agree that there are other ways to capture billing metrics but, from my
> experience, those tend to be more complex than what I am advocating and
> without the added benefits listed above. An understanding of HP's desires
> on this matter will hopefully get this to a point where we can start
> working on a spec.
>
> Cheers,
> --Jorge
>
> P.S. Real-time stats is a different beast and I envision there being an
> API call that returns "real-time" data such as this ==>
> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.
>
>
> From: Eichberger, German
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-04 Thread Jorge Miramontes
Hi Susanne,

Thanks for the reply. As Angus pointed out, the one big item that needs to be 
addressed with this method is network I/O of raw logs. One idea to mitigate 
this concern is to store the data locally for the operator-configured 
granularity, process it and THEN send it to ceilometer, etc. If we can't 
engineer a way to deal with the high network I/O that will inevitably occur we 
may have to move towards a polling approach. Thoughts?
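To illustrate what I mean by processing locally first (a rough sketch only;
the granularity and the ship() callback are placeholders for whatever an
operator configures):

    import time

    class LocalAggregator(object):
        def __init__(self, ship, granularity=3600):
            self.ship = ship            # callable taking (window_start, usage)
            self.granularity = granularity
            self.window_start = time.time()
            self.usage = {}

        def record(self, listener_id, nbytes):
            # Fold each log line into a local counter; only the small
            # aggregate ever crosses the network.
            self.usage[listener_id] = self.usage.get(listener_id, 0) + nbytes
            if time.time() - self.window_start >= self.granularity:
                self.ship(self.window_start, self.usage)
                self.usage = {}
                self.window_start = time.time()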

Cheers,
--Jorge

From: Susanne Balle <sleipnir...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, November 4, 2014 11:10 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the 
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allows the logs to be moved 
to various backends such as Elasticsearch, Hadoop HDFS, Swift, etc., as well 
as, by default (but with the option to disable it), ceilometer. Ceilometer is 
the de facto metering service for OpenStack so we need to support it. We would 
like the integration with Ceilometer to be based on Notifications. I believe 
German sent a reference to that in another email. The pre-processing will need to be 
optional and the amount of data aggregation configurable.

What you describe below to me is usage gathering/metering. The billing is 
independent since companies with private clouds might not want to bill but 
still need usage reports for capacity planning etc. Billing/Charging is just 
putting a monetary value on the various forms of usage.

I agree with all points.

> - Capture logs in a scalable way (i.e. capture logs and put them on a
> separate scalable store somewhere so that it doesn't affect the amphora).

> - Every X amount of time (every hour, for example) process the logs and
> send them on their merry way to ceilometer or whatever service an operator
> will be using for billing purposes.

"Keep the logs": This is what we would use log forwarding to either Swift or 
Elastic Search, etc.

>- Keep logs for some configurable amount of time. This could be anything
> from indefinitely to not at all. Rackspace is planning on keeping them for
> a certain period of time for the following reasons:

It looks like we are in agreement so I am not sure why it sounded like we were 
in disagreement on the IRC. I am not sure why but it sounded like you were 
talking about something else when you were talking about the real time 
processing. If we are just talking about moving the logs to your Hadoop 
cluster, or any backend, in a scalable way, we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes 
<jorge.miramon...@rackspace.com> wrote:
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into your usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple purposes, including
billing. Billing requirements are different from connection logging
requirements. However, connection logging is a very accurate mechanism to
capture billable metrics and thus, is related. My vision for this is
something like the following:

- Capture logs in a scalable way (i.e. capture logs and put them on a
separate scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and
send them on their merry way to ceilometer or whatever service an operator
will be using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything
from indefinitely to not at all. Rackspace is planning on keeping them for
a certain period of time for the following reasons:

A) We have connection logging as a planned feature. If a customer turns
on the connection logging feature for their load balancer it will already
have a history. One important aspect of this is that customers (at least
ours) tend to turn on logging after they realize they need it (usually
after a tragic lb event). By already capturing the logs I'm sure customers
will be extremely happy to see that there are already X days worth of logs
they can immediately sift through.
B) Operators and their support teams can leverage logs when providing
service to their customers. This is huge for finding issues and resolving
them quickly.
C) Albeit a minor point, building support for logs from the get-go
mitigates capacity management uncertainty. My example earlier was the
extreme case of every customer turning on logging at the same time. While
unlikely, I would hate to manage that!

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-05 Thread Eichberger, German
Hi Jorge,

I am still not convinced that we need to use logging for usage metrics. We can 
also use the haproxy stats interface (which the haproxy team is willing to 
improve based on our input) and/or iptables as Stephen suggested. That said 
this probably needs more exploration.

From an HP perspective the full logs on the load balancer are mostly 
interesting for the user of the load balancer - we only care about aggregates 
for our metering. That said we would be happy to just move them on demand to a 
place the user can access.

Thanks,
German


From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Tuesday, November 04, 2014 8:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Susanne,

Thanks for the reply. As Angus pointed out, the one big item that needs to be 
addressed with this method is network I/O of raw logs. One idea to mitigate 
this concern is to store the data locally for the operator-configured 
granularity, process it and THEN send it to ceilometer, etc. If we can't 
engineer a way to deal with the high network I/O that will inevitably occur we 
may have to move towards a polling approach. Thoughts?

Cheers,
--Jorge

From: Susanne Balle <sleipnir...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, November 4, 2014 11:10 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the 
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allows the logs to be moved 
to various backends such as Elasticsearch, Hadoop HDFS, Swift, etc., as well 
as, by default (but with the option to disable it), ceilometer. Ceilometer is 
the de facto metering service for OpenStack so we need to support it. We would 
like the integration with Ceilometer to be based on Notifications. I believe 
German sent a reference to that in another email. The pre-processing will need to be 
optional and the amount of data aggregation configurable.

What you describe below to me is usage gathering/metering. The billing is 
independent since companies with private clouds might not want to bill but 
still need usage reports for capacity planning etc. Billing/Charging is just 
putting a monetary value on the various forms of usage.

I agree with all points.

> - Capture logs in a scalable way (i.e. capture logs and put them on a
> separate scalable store somewhere so that it doesn't affect the amphora).

> - Every X amount of time (every hour, for example) process the logs and
> send them on their merry way to ceilometer or whatever service an operator
> will be using for billing purposes.

"Keep the logs": This is what we would use log forwarding to either Swift or 
Elastic Search, etc.

>- Keep logs for some configurable amount of time. This could be anything
> from indefinitely to not at all. Rackspace is planning on keeping them for
> a certain period of time for the following reasons:

It looks like we are in agreement so I am not sure why it sounded like we were 
in disagreement on the IRC. I am not sure why but it sounded like you were 
talking about something else when you were talking about the real time 
processing. If we are just talking about moving the logs to your Hadoop 
cluster, or any backend, in a scalable way, we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes 
<jorge.miramon...@rackspace.com> wrote:
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into your usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple purposes, including
billing. Billing requirements are different from connection logging
requirements. However, connection logging is a very accurate mechanism to
capture billable metrics and thus, is related. My vision for this is
something like the following:

- Capture logs in a scalable way (i.e. capture logs and put them on a
separate scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and
send them on their merry way to ceilometer or whatever service an operator
will be using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything
from indefinitely to not at all. Rackspace is planning on keeping them for
a certain period of time for the following reasons:

A) We have connection logging as a planned feature.

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-05 Thread Jorge Miramontes
Thanks German,

It looks like the conversation is going towards using the HAProxy stats 
interface and/or iptables. I just wanted to explore logging a bit. That said, 
can you and Stephen share your thoughts on how we might implement that 
approach? I'd like to get a spec out soon because I believe metric gathering 
can be worked on in parallel with the rest of the project. In fact, I was 
hoping to get my hands dirty on this one and contribute some code, but a 
strategy and spec are needed first before I can start that ;)

Cheers,
--Jorge

From: Eichberger, German <german.eichber...@hp.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, November 5, 2014 3:50 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Jorge,

I am still not convinced that we need to use logging for usage metrics. We can 
also use the haproxy stats interface (which the haproxy team is willing to 
improve based on our input) and/or iptables as Stephen suggested. That said 
this probably needs more exploration.

From an HP perspective the full logs on the load balancer are mostly 
interesting for the user of the load balancer - we only care about aggregates 
for our metering. That said we would be happy to just move them on demand to a 
place the user can access.

Thanks,
German


From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Tuesday, November 04, 2014 8:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Susanne,

Thanks for the reply. As Angus pointed out, the one big item that needs to be 
addressed with this method is network I/O of raw logs. One idea to mitigate 
this concern is to store the data locally for the operator-configured 
granularity, process it and THEN send it to ceilometer, etc. If we can't 
engineer a way to deal with the high network I/O that will inevitably occur we 
may have to move towards a polling approach. Thoughts?

Cheers,
--Jorge

From: Susanne Balle <sleipnir...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, November 4, 2014 11:10 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the 
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allows the logs to be moved 
to various backends such as Elasticsearch, Hadoop HDFS, Swift, etc., as well 
as, by default (but with the option to disable it), ceilometer. Ceilometer is 
the de facto metering service for OpenStack so we need to support it. We would 
like the integration with Ceilometer to be based on Notifications. I believe 
German sent a reference to that in another email. The pre-processing will need to be 
optional and the amount of data aggregation configurable.

What you describe below to me is usage gathering/metering. The billing is 
independent since companies with private clouds might not want to bill but 
still need usage reports for capacity planning etc. Billing/Charging is just 
putting a monetary value on the various forms of usage.

I agree with all points.

> - Capture logs in a scalable way (i.e. capture logs and put them on a
> separate scalable store somewhere so that it doesn't affect the amphora).

> - Every X amount of time (every hour, for example) process the logs and
> send them on their merry way to ceilometer or whatever service an operator
> will be using for billing purposes.

"Keep the logs": This is what we would use log forwarding to either Swift or 
Elastic Search, etc.

>- Keep logs for some configurable amount of time. This could be anything
> from indefinitely to not at all. Rackspace is planning on keeping them for
> a certain period of time for the following reasons:

It looks like we are in agreement so I am not sure why it sounded like we were 
in disagreement on the IRC. I am not sure why but it sounded like you were 
talking about something else when you were talking about the real time 
processing. If we are just talking about moving the logs to your Hadoop 
cluster, or any backend, in a scalable way, we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes 
<jorge.miramon...@rackspace.com> wrote:
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into your usage requirements?

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-06 Thread Stephen Balukoff
Hi Jorge,

So, one can query a pre-defined UNIX socket or "stats HTTP service" (which
can be an in-band service, by the way) and HAProxy will give all kinds of
useful stats on the current listener, its pools, its members, etc. We will
probably be querying this service in any case to detect things like members
going down, etc. for sending notifications upstream. The problem is this
interface presently resets state whenever haproxy is reloaded, which needs
to happen whenever there's a configuration change. I was able to meet with
the HAProxy team (including Willy Tarreau), and they're interested in
making improvements to HAProxy that we would find useful. Foremost on their
list was the ability to preserve this state information between restarts.

Until that's ready and in a stable release of haproxy, it's also pretty
trivial to parse out IP addresses and listening ports from the haproxy
config, and use these to populate a series of IPtables chains whose entire
purpose is to gather bandwidth I/O data. These tables won't give you things
like max connection counts, etc., but if you're billing on raw bandwidth
usage, these stats are guaranteed to be accurate and survive through
haproxy restarts. It also does not require one to scan logs, and is
available cheaply in real time. (This is how we bill for bandwidth on our
current software load balancer product.)
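For the record, the accounting setup is only a handful of rules per VIP (the
address and chain names below are illustrative). Each rule exists purely to
count; iptables keeps per-rule packet and byte counters that survive haproxy
reloads:

    iptables -N ACCT-IN-vip1
    iptables -A ACCT-IN-vip1 -j RETURN
    iptables -N ACCT-OUT-vip1
    iptables -A ACCT-OUT-vip1 -j RETURN
    iptables -I INPUT  -d 203.0.113.10 -p tcp --dport 80 -j ACCT-IN-vip1
    iptables -I OUTPUT -s 203.0.113.10 -p tcp --sport 80 -j ACCT-OUT-vip1

    # Read (and optionally zero with -Z) the counters in real time:
    iptables -L ACCT-IN-vip1 -n -v -x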

My vote would be to use the IPTables approach for now until HAProxy is able
to retain state between restarts. For other stats data (eg. max connection
counts, total number of requests), I would recommend gathering this data
from the haproxy daemon, and keeping an external state file that we update
immediately before restarting haproxy. (Yes, this means we lose some
information on connections that are still open when haproxy restarts, but
it gives us an "approximate" good value since we anticipate haproxy
restarts being relatively rare in comparison to serving actual requests).
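A sketch of that snapshot step, to be run immediately before each haproxy
restart (it assumes rows shaped like the stats socket's CSV output; the path
is illustrative):

    import json

    def snapshot(stats_rows, path='/var/lib/octavia/haproxy_stats.json'):
        # Persist the cumulative counters we care about so post-restart
        # readings can be added to the pre-restart totals.
        totals = {}
        for row in stats_rows:
            key = '%s/%s' % (row['pxname'], row['svname'])
            totals[key] = {'stot': int(row.get('stot') or 0),
                           'bin':  int(row.get('bin') or 0),
                           'bout': int(row.get('bout') or 0)}
        with open(path, 'w') as f:
            json.dump(totals, f)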

Logs are still very handy, and I agree that if extreme accuracy in billing
is required, this is the way to get that data. Logs are also very handy for
users to have for troubleshooting purposes. But I think logs are not well
suited to providing data which will be consumed in real time (eg. stuff
which will populate a dashboard.)

What do y'all think of this?

Stephen

On Wed, Nov 5, 2014 at 10:25 AM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

>   Thanks German,
>
>  It looks like the conversation is going towards using the HAProxy stats
> interface and/or iptables. I just wanted to explore logging a bit. That
> said, can you and Stephen share your thoughts on how we might implement
> that approach? I'd like to get a spec out soon because I believe metric
> gathering can be worked on in parallel with the rest of the project. In
> fact, I was hoping to get my hands dirty on this one and contribute some
> code, but a strategy and spec are needed first before I can start that ;)
>
>  Cheers,
> --Jorge
>
>   From: Eichberger, German
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, November 5, 2014 3:50 AM
>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements
>
>Hi Jorge,
>
>
>
> I am still not convinced that we need to use logging for usage metrics. We
> can also use the haproxy stats interface (which the haproxy team is willing
> to improve based on our input) and/or iptables as Stephen suggested. That
> said this probably needs more exploration.
>
>
>
> From an HP perspective the full logs on the load balancer are mostly
> interesting for the user of the loadbalancer – we only care about
> aggregates for our metering. That said we would be happy to just move them
> on demand to a place the user can access.
>
>
>
> Thanks,
>
> German
>
>
>
>
>
> From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
> Sent: Tuesday, November 04, 2014 8:20 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage
> Requirements
>
>
>
> Hi Susanne,
>
>
>
> Thanks for the reply. As Angus pointed out, the one big item that needs to
> be addressed with this method is network I/O of raw logs. One idea to
> mitigate this concern is to store the data locally for the
> operator-configured granularity, process it and THEN send it to ceilometer,
> etc. If we can't engineer a way to deal with the high network I/O that will
> inevitably occur we may have to move towards a polling approach. Thoughts?

[openstack-dev] [Neutron][LBaaS][Octavia] Logo for Octavia project

2015-04-14 Thread Eichberger, German
All,

Let's decide on a logo tomorrow so we can print stickers in time for Vancouver. 
Here are some designs to consider: http://bit.ly/Octavia_logo_vote

We will discuss more at tomorrow's meeting - Agenda: 
https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Meeting_2015-04-15
 - but please come prepared with one of your favorite designs...

Thanks,
German

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas][octavia] No Octavia meeting today

2015-05-06 Thread Eichberger, German
All,

In order to work on the demo for Vancouver we will be skipping today's (5/6/15) 
meeting. We will have another meeting on 5/13 to finalize for the summit --

If you have questions you can find us in the channel — and again please keep up 
the good work with reviews!

Thanks,
German


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-07 Thread Brandon Logan
I'll add some more info to this as well:

Neutron LBaaS creates the neutron port for the VIP in the plugin layer
before drivers ever have any control.  In the case of an async driver,
it will then call the driver's create method, and then return to the
user the vip info.  This means the user will know the VIP before the
driver even finishes creating the load balancer.

So if Octavia is just going to create a floating IP and then associate
that floating IP to the neutron port, there is the problem of the user
not ever seeing the correct VIP (which would be the floating IP).
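For reference, the association mechanics themselves are only a couple of
calls -- a hedged sketch using python-neutronclient, with placeholder IDs and
credentials:

    from neutronclient.v2_0 import client

    PUBLIC_NET_ID = 'replace-with-external-network-uuid'  # placeholder
    VIP_PORT_ID = 'replace-with-vip-port-uuid'            # placeholder

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')

    # Allocate a FLIP on the external network...
    flip = neutron.create_floatingip(
        {'floatingip': {'floating_network_id': PUBLIC_NET_ID}})

    # ...and point it at the VIP port. Where this call lives (plugin vs.
    # driver) is exactly the open question.
    neutron.update_floatingip(flip['floatingip']['id'],
                              {'floatingip': {'port_id': VIP_PORT_ID}})

The hard part isn't the calls; it's which layer owns them and what the user
sees in the meantime.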

So really, we need to have a very detailed discussion on what the
options are for us to get this to work for those of us intending to use
floating ips as VIPs while also working for those only requiring a
neutron port.  I'm pretty sure this will require changing the way V2
behaves, but there's more discussion points needed on that.  Luckily, V2
is in a feature branch and not merged into Neutron master, so we can
change it pretty easily.  Phil and I will bring this up in the meeting
tomorrow, which may lead to a meeting topic in the neutron lbaas
meeting.

Thanks,
Brandon


On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
> Hello All, 
> 
> I wanted to start a discussion on floating IP management and ultimately
> decide how the LBaaS group wants to handle the association. 
> 
> There is a need to utilize floating IPs (FLIPs) and their API calls to
> associate a FLIP to the neutron port that we currently spin up. 
> 
> See DOCS here:
> 
> > http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_create.html
> 
> Currently, LBaaS will make internal service calls (clean interface :/) to 
> create and attach a Neutron port. 
> The VIP from this port is added to the Loadbalancer object of the Load 
> balancer configuration and returned to the user.
> 
> This creates a bit of a problem if we want to associate a FLIP with the port 
> and display the FLIP to the user instead of
> the ports VIP because the port is currently created and attached in the 
> plugin and there is no code anywhere to handle the FLIP
> association. 
> 
> To keep this short and to the point:
> 
> We need to discuss where and how we want to handle this association. I have a 
> few questions to start it off. 
> 
> Do we want to add logic in the plugin to call the FLIP association API?
> 
> If we have logic in the plugin should we have configuration that identifies 
> whether to use/return the FLIP instead of the port VIP?
> 
> Would we rather have logic for FLIP association in the drivers?
> 
> If logic is in the drivers would we still return the port VIP to the user 
> then later overwrite it with the FLIP? 
> Or would we have configuration to not return the port VIP initially, but an 
> additional query would show the associated FLIP.
> 
> 
> Is there an internal service call for this, and if so would we use it instead 
> of API calls? 
> 
> 
> There's plenty of other thoughts and questions to be asked and discussed in 
> regards to FLIP handling; 
> hopefully this will get us going. I'm certain I may not be completely 
> understanding this, and 
> it is the hope of this email to clarify any uncertainties. 
> 
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-12 Thread Phillip Toohill
Hello all, 

Here's some additional diagrams and docs. Not incredibly detailed, but
should get the point across.

Feel free to edit if needed.

Once we come to some kind of agreement and understanding I can rewrite
these more to be thorough and get them in a more official place. Also, I
understand there are other use cases not shown in the initial docs, so this
is a good time to collaborate to make this more thought out.

Please feel free to ping me with any questions,

Thank you


Google DOCS link for FLIP folder:
https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWM&usp=sharing

-diagrams are draw.io based and can be opened from within Drive by
selecting the appropriate application.

On 10/7/14 2:25 PM, "Brandon Logan"  wrote:

>I'll add some more info to this as well:
>
>Neutron LBaaS creates the neutron port for the VIP in the plugin layer
>before drivers ever have any control.  In the case of an async driver,
>it will then call the driver's create method, and then return to the
>user the vip info.  This means the user will know the VIP before the
>driver even finishes creating the load balancer.
>
>So if Octavia is just going to create a floating IP and then associate
>that floating IP to the neutron port, there is the problem of the user
>not ever seeing the correct VIP (which would be the floating IP).
>
>So really, we need to have a very detailed discussion on what the
>options are for us to get this to work for those of us intending to use
>floating IPs as VIPs while also working for those only requiring a
>neutron port.  I'm pretty sure this will require changing the way V2
>behaves, but there's more discussion points needed on that.  Luckily, V2
>is in a feature branch and not merged into Neutron master, so we can
>change it pretty easily.  Phil and I will bring this up in the meeting
>tomorrow, which may lead to a meeting topic in the neutron lbaas
>meeting.
>
>Thanks,
>Brandon
>
>
>On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
>> Hello All, 
>> 
>> I wanted to start a discussion on floating IP management and ultimately
>> decide how the LBaaS group wants to handle the association.
>> 
>> There is a need to utilize floating IPs(FLIP) and its API calls to
>> associate a FLIP to the neutron port that we currently spin up.
>> 
>> See DOCS here:
>> 
>> > 
>>http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_cr
>>eate.html
>> 
>> Currently, LBaaS will make internal service calls (clean interface :/)
>>to create and attach a Neutron port.
>> The VIP from this port is added to the Loadbalancer object of the Load
>>balancer configuration and returned to the user.
>> 
>> This creates a bit of a problem if we want to associate a FLIP with the
>>port and display the FLIP to the user instead of
>> the ports VIP because the port is currently created and attached in the
>>plugin and there is no code anywhere to handle the FLIP
>> association. 
>> 
>> To keep this short and to the point:
>> 
>> We need to discuss where and how we want to handle this association. I
>>have a few questions to start it off.
>> 
>> Do we want to add logic in the plugin to call the FLIP association API?
>> 
>> If we have logic in the plugin should we have configuration that
>>identifies weather to use/return the FLIP instead the port VIP?
>> 
>> Would we rather have logic for FLIP association in the drivers?
>> 
>> If logic is in the drivers would we still return the port VIP to the
>>user then later overwrite it with the FLIP?
>> Or would we have configuration to not return the port VIP initially,
>>but an additional query would show the associated FLIP.
>> 
>> 
>> Is there an internal service call for this, and if so would we use it
>>instead of API calls?
>> 
>> 
>> Theres plenty of other thoughts and questions to be asked and discussed
>>in regards to FLIP handling,
>> hopefully this will get us going. I'm certain I may not be completely
>>understanding this and
>> is the hopes of this email to clarify any uncertainties.
>> 
>> 
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-14 Thread Susanne Balle
Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill <
phillip.tooh...@rackspace.com> wrote:

> Diagrams in jpeg format..
>
> On 10/12/14 10:06 PM, "Phillip Toohill" 
> wrote:
>
> >Hello all,
> >
> >Heres some additional diagrams and docs. Not incredibly detailed, but
> >should get the point across.
> >
> >Feel free to edit if needed.
> >
> >Once we come to some kind of agreement and understanding I can rewrite
> >these more to be thorough and get them in a more official place. Also, I
> >understand theres other use cases not shown in the initial docs, so this
> >is a good time to collaborate to make this more thought out.
> >
> >Please feel free to ping me with any questions,
> >
> >Thank you
> >
> >
> >Google DOCS link for FLIP folder:
> >
> https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWM&usp=sh
> >a
> >ring
> >
> >-diagrams are draw.io based and can be opened from within Drive by
> >selecting the appropriate application.
> >
> >On 10/7/14 2:25 PM, "Brandon Logan"  wrote:
> >
> >>I'll add some more info to this as well:
> >>
> >>Neutron LBaaS creates the neutron port for the VIP in the plugin layer
> >>before drivers ever have any control.  In the case of an async driver,
> >>it will then call the driver's create method, and then return to the
> >>user the vip info.  This means the user will know the VIP before the
> >>driver even finishes creating the load balancer.
> >>
> >>So if Octavia is just going to create a floating IP and then associate
> >>that floating IP to the neutron port, there is the problem of the user
> >>not ever seeing the correct VIP (which would be the floating iP).
> >>
> >>So really, we need to have a very detailed discussion on what the
> >>options are for us to get this to work for those of us intending to use
> >>floating ips as VIPs while also working for those only requiring a
> >>neutron port.  I'm pretty sure this will require changing the way V2
> >>behaves, but there's more discussion points needed on that.  Luckily, V2
> >>is in a feature branch and not merged into Neutron master, so we can
> >>change it pretty easily.  Phil and I will bring this up in the meeting
> >>tomorrow, which may lead to a meeting topic in the neutron lbaas
> >>meeting.
> >>
> >>Thanks,
> >>Brandon
> >>
> >>
> >>On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
> >>> Hello All,
> >>>
> >>> I wanted to start a discussion on floating IP management and ultimately
> >>> decide how the LBaaS group wants to handle the association.
> >>>
> >>> There is a need to utilize floating IPs(FLIP) and its API calls to
> >>> associate a FLIP to the neutron port that we currently spin up.
> >>>
> >>> See DOCS here:
> >>>
> >>> >
> >>>
> http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_c
> >>>r
> >>>eate.html
> >>>
> >>> Currently, LBaaS will make internal service calls (clean interface :/)
> >>>to create and attach a Neutron port.
> >>> The VIP from this port is added to the Loadbalancer object of the Load
> >>>balancer configuration and returned to the user.
> >>>
> >>> This creates a bit of a problem if we want to associate a FLIP with the
> >>>port and display the FLIP to the user instead of
> >>> the ports VIP because the port is currently created and attached in the
> >>>plugin and there is no code anywhere to handle the FLIP
> >>> association.
> >>>
> >>> To keep this short and to the point:
> >>>
> >>> We need to discuss where and how we want to handle this association. I
> >>>have a few questions to start it off.
> >>>
> >>> Do we want to add logic in the plugin to call the FLIP association API?
> >>>
> >>> If we have logic in the plugin should we have configuration that
> >>>identifies weather to use/return the FLIP instead the port VIP?
> >>>
> >>> Would we rather have logic for FLIP association in the drivers?
> >>>
> >>> If logic is in the drivers would we still return the port VIP to the
> >>>user then later overwrite it with the FLIP?
> >>> Or would we have configuration to not return the port VIP initially,
> >>>but an additional query would show the associated FLIP.
> >>>
> >>>
> >>> Is there an internal service call for this, and if so would we use it
> >>>instead of API calls?
> >>>
> >>>
> >>> Theres plenty of other thoughts and questions to be asked and discussed
> >>>in regards to FLIP handling,
> >>> hopefully this will get us going. I'm certain I may not be completely
> >>>understanding this and
> >>> is the hopes of this email to clarify any uncertainties.
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>___
> >>OpenStack-dev mailing list
> >>OpenStack-dev@lists.openstack.org
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >__

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-14 Thread Vijay B
Hi Phillip,


Adding my thoughts below. I’ll first answer the questions you raised with
what I think should be done, and then give my explanations to reason
through with those views.



1. Do we want to add logic in the plugin to call the FLIP association API?


 >> We should implement the logic in the new v2 extension and the plugin
layer as a single API call. We would need to add to the existing v2 API to
be able to do this. The best place to add this option of passing the FLIP
info/request to the VIP is in the VIP create and update API calls via new
parameters.


2. If we have logic in the plugin should we have configuration that
identifies whether to use/return the FLIP instead of the port VIP?


 >> Yes and no, in that we should return the complete result of the VIP
create/update/list/show API calls, in which we show the VIP internal IP,
but we also show the FLIP either as empty or having a FLIP uuid. External
users will anyway use only the FLIP, else they wouldn’t be able to reach
the LB and the VIP IP, but the APIs need to show both fields.
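
As an illustration, a VIP show result under this scheme might look like the
following sketch; the floating_ip field is the proposed addition, not
existing API, and the values are placeholders:

    {
        "vip": {
            "id": "<vip-id>",
            "address": "10.0.0.5",
            "port_id": "<vip-port-id>",
            "floating_ip": "<flip-uuid, or empty if none>"
        }
    }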


3. Would we rather have logic for FLIP association in the drivers?


 >> This is the hardest part to decide. To understand this, we need to look
at two important drivers of LBaaS design:


 I)  The Neutron core plugin we’re using.

II) The different types of LB devices - physical, virtual standalone, and
virtual controlled by a management plane. This leads to different kinds of
LBaaS drivers and different kinds of interaction or the lack of it between
them and the core neutron plugin.


The reason we need to take into account both of these is that port
provisioning as well as NATing for the FLIP to internal VIP IP will be
configured differently by the different network management/backend planes
that the plugins use, and the way drivers configure LBs can be highly
impacted by this.


For example, we can have an NSX infrastructure that will implement the FLIP
to internal IP conversion in the logical router module which sits pretty
much outside of Openstack’s realm, using openflow. Or we can use lighter
solutions directly on the hypervisor that still employ open flow entries
without actually having a dedicated logical routing module. Neither will
matter much if we are in a position to have them deploy our networking for
us, i.e., in the cases of us using virtual LBs sitting on compute nodes.
But if we have a physical LB, the neutron plugins cannot do much of the
network provisioning work for us, typically because physical LBs usually
sit outside of the cloud, and are among the earliest points of contact from
the external world.


This already nudges us to consider putting the FLIP provisioning
functionality in the driver. However, consider again more closely the major
ways in which LBaaS drivers talk to LB solutions today depending on II) :


 a) LBaaS drivers that talk to a virtual LB device on a compute node,
directly.

b) LBaaS drivers that talk to a physical LB device (or a virtual LB sitting
outside the cloud) directly.

c) LBaaS drivers that talk to a management plane like F5’s BigIQ, or
Netscaler’s NCC, or as in our case, Octavia, that try to provide tenant
based provisioning of virtual LBs.

d) The HAProxy reference namespace driver.


d) is really a PoC use case, and we can forget it. Let’s consider a), b)
and c).


If we use a) or b), we must assume that the required routing for the
virtual LB has been setup correctly, either already through nova or
manually. So we can afford to do our FLIP plumbing in the neutron plugin
layer, but, driven by the driver - how? - typically, after the VIP is
successfully created on the LB, and just before the driver updates the
VIP’s status as ACTIVE, it can create the FLIP. Of course, if the FLIP
provisioning fails for any reason, the VIP still stands. It’ll be empty in
the result, and the API will error out saying “VIP created but FLIP
creation failed”. It must be manually deleted by another delete VIP call.
We can’t afford to provision a FLIP before a VIP is active, for external
traffic shouldn’t be taken while the VIP isn’t up yet. If the lines are
getting hazy right now because of this callback model, let’s just focus on
the point that we’re initiating FLIP creation in the driver layer while the
code sits in the plugin layer because it will need to update the database.
But in absolute terms, we’re doing it in the driver.
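
To make the callback model concrete, here is a minimal Python sketch of that
driver-side step; only the create_floatingip call is real
python-neutronclient API, while the on_vip_active hook, the session
plumbing, and the external network setting are assumptions:

    from neutronclient.v2_0 import client as neutron_client

    def on_vip_active(vip, auth_session, external_net_id):
        # Hypothetical driver hook: runs once the backend reports the VIP
        # as ACTIVE, just before the driver writes the status back.
        neutron = neutron_client.Client(session=auth_session)
        try:
            # Allocate a FLIP on the external network and NAT it to the
            # VIP's fixed IP on the already-created VIP port.
            flip = neutron.create_floatingip({
                'floatingip': {
                    'floating_network_id': external_net_id,
                    'port_id': vip['port_id'],
                    'fixed_ip_address': vip['address'],
                }
            })['floatingip']
            vip['floating_ip'] = flip['id']
        except Exception:
            # Per the reasoning above: the VIP still stands, the FLIP field
            # stays empty, and the API reports the partial failure.
            vip['floating_ip'] = None
            raise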


It is use case c) that is interesting. In this case, we should do all
neutron-based provisioning neither in the driver nor in the plugin in
neutron; rather, we should do this in Octavia, and in the Octavia
controller to be specific. This is very important to note, because if
customers are using this deployment (which today has the potential to be
way greater in the near future than any other model simply because of the
sheer existing customer base), we can’t be creating the FLIP in the plugin
layer and have the controller reattempt it. Indeed, the controllers can
change their code to not attempt this,

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-14 Thread Vijay Venkatachalam
Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
mailto:phillip.tooh...@rackspace.com>> wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, "Phillip Toohill" 
mailto:phillip.tooh...@rackspace.com>>
wrote:

>Hello all,
>
>Heres some additional diagrams and docs. Not incredibly detailed, but
>should get the point across.
>
>Feel free to edit if needed.
>
>Once we come to some kind of agreement and understanding I can rewrite
>these more to be thorough and get them in a more official place. Also, I
>understand theres other use cases not shown in the initial docs, so this
>is a good time to collaborate to make this more thought out.
>
>Please feel free to ping me with any questions,
>
>Thank you
>
>
>Google DOCS link for FLIP folder:
>https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWM&usp=sh
>a
>ring
>
>-diagrams are draw.io<http://draw.io> based and can be opened from within 
>Drive by
>selecting the appropriate application.
>
>On 10/7/14 2:25 PM, "Brandon Logan" 
>mailto:brandon.lo...@rackspace.com>> wrote:
>
>>I'll add some more info to this as well:
>>
>>Neutron LBaaS creates the neutron port for the VIP in the plugin layer
>>before drivers ever have any control.  In the case of an async driver,
>>it will then call the driver's create method, and then return to the
>>user the vip info.  This means the user will know the VIP before the
>>driver even finishes creating the load balancer.
>>
>>So if Octavia is just going to create a floating IP and then associate
>>that floating IP to the neutron port, there is the problem of the user
>>not ever seeing the correct VIP (which would be the floating iP).
>>
>>So really, we need to have a very detailed discussion on what the
>>options are for us to get this to work for those of us intending to use
>>floating ips as VIPs while also working for those only requiring a
>>neutron port.  I'm pretty sure this will require changing the way V2
>>behaves, but there's more discussion points needed on that.  Luckily, V2
>>is in a feature branch and not merged into Neutron master, so we can
>>change it pretty easily.  Phil and I will bring this up in the meeting
>>tomorrow, which may lead to a meeting topic in the neutron lbaas
>>meeting.
>>
>>Thanks,
>>Brandon
>>
>>
>>On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
>>> Hello All,
>>>
>>> I wanted to start a discussion on floating IP management and ultimately
>>> decide how the LBaaS group wants to handle the association.
>>>
>>> There is a need to utilize floating IPs(FLIP) and its API calls to
>>> associate a FLIP to the neutron port that we currently spin up.
>>>
>>> See DOCS here:
>>>
>>> >
>>>http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_c
>>>r
>>>eate.html
>>>
>>> Currently, LBaaS will make internal service calls (clean interface :/)
>>>to create and attach a Neutron port.
>>> The VIP from this port is added to the Loadbalancer object of the Load
>>>balancer configuration and returned to the user.
>>>
>>> This creates a bit of a problem if we want to associate a FLIP with the
>>>port and display the FLIP to the user instead of
>>> the ports VIP because the port is currently created and attached in the
>>>plugin and there is no code anywhere to handle the FLIP
>>> association.
>>>
>>> To keep this short and to the point:
>>>
>>> We need to discuss where and how we want to handle this association. I
>>>have a few questions to start it off.
>>>
>>> Do we want to add logic in the plugin to call the FLIP association API?
>>>
>>> If we have logic in the plugin should we have configuration that
>>>identifies weather to use/return the FLIP instead the port VIP?
>>>
>>> Would we rather have logic for FLIP association in the drivers?
>>>
>>> If logic is in the drivers would we still return the port VIP to the
>>>user then later overwrite it with the FLIP?
>>> Or would we have configuration to not return

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Vijay Venkatachalam
I felt guilty after reading Vijay B.'s reply ☺.
My apologies for having replied in brief; here are my thoughts in detail.

Currently, LB configuration exposed via a floating IP is a 2-step operation.
The user has to first “create a VIP with a private IP” and then “create a FLIP and
assign the FLIP to the private VIP”, which would result in a DNAT in the gateway.
The proposal is to combine these 2 steps, i.e., make the DNATing operation
part of the VIP creation process.
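
For concreteness, the current 2-step flow with today's CLI looks roughly
like this (a sketch; IDs are placeholders):

    # Step 1: create the VIP with a private IP
    neutron lb-vip-create --name vip1 --protocol HTTP --protocol-port 80 \
        --subnet-id <private-subnet-id> <pool-id>

    # Step 2: allocate a FLIP and assign it to the VIP's port, which
    # results in a DNAT in the gateway
    neutron floatingip-create <external-net-id>
    neutron floatingip-associate <floatingip-id> <vip-port-id>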

While this seems to be a simple proposal, I think we should consider the finer
details.
The proposal assumes that the FLIP is implemented by the gateway through a DNAT.
We should also be open to creating a VIP with a FLIP rather than a private IP.
Meaning the FLIP will be hosted directly by the LB appliance.

In essence, LBaaS plugin will create the FLIP for the VIP and let the driver 
know about the FLIP that should get implemented.
The drivers should have a choice of implementing in its own way.
It could choose to

1.   Implement via the traditional route. The driver creates a private IP,
implements the VIP as a private IP in the LB appliance, and calls Neutron to
implement the DNAT (FLIP to private VIP).

2.   Implement VIP as a FLIP directly on the LB appliance. Here there is no 
private IP involved.

Pros:
Not much changes in the LBaaS API; there is only one IP as part of the VIP.
We can ensure backward compatibility with the drivers as well by having step (1)
implemented in the abstract driver.

Thanks,
Vjay

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: 15 October 2014 05:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
mailto:phillip.tooh...@rackspace.com>> wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, "Phillip Toohill" 
mailto:phillip.tooh...@rackspace.com>>
wrote:

>Hello all,
>
>Heres some additional diagrams and docs. Not incredibly detailed, but
>should get the point across.
>
>Feel free to edit if needed.
>
>Once we come to some kind of agreement and understanding I can rewrite
>these more to be thorough and get them in a more official place. Also, I
>understand theres other use cases not shown in the initial docs, so this
>is a good time to collaborate to make this more thought out.
>
>Please feel free to ping me with any questions,
>
>Thank you
>
>
>Google DOCS link for FLIP folder:
>https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWM&usp=sh
>a
>ring
>
>-diagrams are draw.io<http://draw.io> based and can be opened from within 
>Drive by
>selecting the appropriate application.
>
>On 10/7/14 2:25 PM, "Brandon Logan" 
>mailto:brandon.lo...@rackspace.com>> wrote:
>
>>I'll add some more info to this as well:
>>
>>Neutron LBaaS creates the neutron port for the VIP in the plugin layer
>>before drivers ever have any control.  In the case of an async driver,
>>it will then call the driver's create method, and then return to the
>>user the vip info.  This means the user will know the VIP before the
>>driver even finishes creating the load balancer.
>>
>>So if Octavia is just going to create a floating IP and then associate
>>that floating IP to the neutron port, there is the problem of the user
>>not ever seeing the correct VIP (which would be the floating iP).
>>
>>So really, we need to have a very detailed discussion on what the
>>options are for us to get this to work for those of us intending to use
>>floating ips as VIPs while also working for those only requiring a
>>neutron port.  I'm pretty sure this will require changing the way V2
>>behaves, but there's more discussion points needed on that.  Luckily, V2
>>is in a feature branch and not merged into Neutron master, so we can
>>change it pretty easily.  Phil and I will bring this up in the meeting
>>tomorrow, which may lead to a meeting topic in the neutron lbaas
>>meeting.
>>
>>Thanks,
>>Brandon
>>
>>
>>On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
>>> Hello All,
>>>
>>> I wanted to start a discussion on floating IP management and ultimately
>>> decide how the LBaaS group wants to handle the association.
>>>
>>> There is a need to utilize

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Phillip Toohill
Responses in-line:

From: Vijay B mailto:os.v...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, October 14, 2014 7:08 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management


Hi Phillip,


Adding my thoughts below. I’ll first answer the questions you raised with what 
I think should be done, and then give my explanations to reason through with 
those views.



1. Do we want to add logic in the plugin to call the FLIP association API?


>> We should implement the logic in the new v2 extension and the plugin layer 
>> as a single API call. We would need to add to the existing v2 API to be able 
>> to do this. The best place to add this option of passing the FLIP 
>> info/request to the VIP is in the VIP create and update API calls via new 
>> parameters.

>>>>Agreed, we would need to have another field for the FLIP itself, although
>>>>I was under the impression we would use the associate_floating_ip call and not
>>>>have to modify the VIP call at all. I may be misunderstanding where you
>>>>think this should happen.


2. If we have logic in the plugin should we have configuration that identifies 
whether to use/return the FLIP instead of the port VIP?


>> Yes and no, in that we should return the complete result of the VIP 
>> create/update/list/show API calls, in which we show the VIP internal IP, but 
>> we also show the FLIP either as empty or having a FLIP uuid. External users 
>> will anyway use only the FLIP, else they wouldn’t be able to reach the LB 
>> and the VIP IP, but the APIs need to show both fields.

>>>> By this, do you mean we would be altering the vip_port API calls? Again, I
>>>> was under the impression that we would utilize Neutron's FLIP calls. With
>>>> question #2 I was referring to 'replacing' the VIP on the LB object
>>>> with a configuration value rather than showing both or just the VIP. If by
>>>> VIP CRUD you are talking about the LBaaS API, then this makes sense to me and
>>>> we should indeed show them both.


3. Would we rather have logic for FLIP association in the drivers?


>> This is the hardest part to decide. To understand this, we need to look at 
>> two important drivers of LBaaS design:


I)  The Neutron core plugin we’re using.

II) The different types of LB devices - physical, virtual standalone, and 
virtual controlled by a management plane. This leads to different kinds of 
LBaaS drivers and different kinds of interaction or the lack of it between them 
and the core neutron plugin.


The reason we need to take into account both these is that port provisioning as 
well as NATing for the FLIP to internal VIP IP will be configured differently 
by the different network management/backend planes that the plugins use, and 
the way drivers configure LBs can be highly impacted by this.


For example, we can have an NSX infrastructure that will implement the FLIP to 
internal IP conversion in the logical router module which sits pretty much 
outside of Openstack’s realm, using openflow. Or we can use lighter solutions 
directly on the hypervisor that still employ open flow entries without actually 
having a dedicated logical routing module. Neither will matter much if we are 
in a position to have them deploy our networking for us, i.e., in the cases of 
us using virtual LBs sitting on compute nodes. But if we have a physical LB, 
the neutron plugins cannot do much of the network provisioning work for us, 
typically because physical LBs usually sit outside of the cloud, and are among 
the earliest points of contact from the external world.


This already nudges us to consider putting the FLIP provisioning functionality 
in the driver. However, consider again more closely the major ways in which 
LBaaS drivers talk to LB solutions today depending on II) :


a) LBaaS drivers that talk to a virtual LB device on a compute node, directly.

b) LBaaS drivers that talk to a physical LB device (or a virtual LB sitting 
outside the cloud) directly.

c) LBaaS drivers that talk to a management plane like F5’s BigIQ, or 
Netscaler’s NCC, or as in our case, Octavia, that try to provide tenant based 
provisioning of virtual LBs.

d) The HAProxy reference namespace driver.


d) is really a PoC use case, and we can forget it. Let’s consider a), b) and c).


If we use a) or b), we must assume that the required routing for the virtual LB 
has been setup correctly, either already through nova or manually. So we can 
afford to do our FLIP plumbing in the neutron plugin layer, but, driven by the 
driver - how? - typically, 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Phillip Toohill
No worries :)

Could you possibly clarify what you mean by the FLIP hosted directly on the LB?

I'm currently assuming the FLIP pools are designated in Neutron at this point
and we would simply be associating one with the VIP port; I'm unsure of the meaning of
hosting the FLIP directly on the LB.

Thank you for the responses! There is definitely a more thought-out discussion to
be had, and I'm glad these ideas are being brought up now rather than later.

From: Vijay Venkatachalam 
mailto:vijay.venkatacha...@citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, October 15, 2014 9:38 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

I felt guilty after reading Vijay B. ’s reply :).
My apologies to have replied in brief, here are my thoughts in detail.

Currently LB configuration exposed via floating IP is a 2 step operation.
User has to first “create a VIP with a private IP” and then “creates a FLIP and 
assigns FLIP to private VIP” which would result in a DNAT in the gateway.
The proposal is to combine these 2 steps, meaning make DNATing operation as 
part of VIP creation process.

While this seems to be a simple proposal I think we should consider finer 
details.
The proposal assumes that the FLIP is implemented by the gateway through a DNAT.
We should also be open to creating a VIP with a FLIP rather than a private IP.
Meaning FLIP will be hosted directly by the LB appliance.

In essence, LBaaS plugin will create the FLIP for the VIP and let the driver 
know about the FLIP that should get implemented.
The drivers should have a choice of implementing in its own way.
It could choose to

1.   Implement via the traditional route. Driver creates a private IP, 
implements VIP as a private IP in the lb appliance and calls Neutron to 
implement DNAT (FLIP to private VIP )

2.   Implement VIP as a FLIP directly on the LB appliance. Here there is no 
private IP involved.

Pros:
Not much changes in the LBaaS API, there is only one IP as part of the VIP.
We can ensure backward compatibility with the driver as well by having Step (1) 
implemented in abstract driver.

Thanks,
Vjay

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: 15 October 2014 05:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
mailto:phillip.tooh...@rackspace.com>> wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, "Phillip Toohill" 
mailto:phillip.tooh...@rackspace.com>>
wrote:

>Hello all,
>
>Heres some additional diagrams and docs. Not incredibly detailed, but
>should get the point across.
>
>Feel free to edit if needed.
>
>Once we come to some kind of agreement and understanding I can rewrite
>these more to be thorough and get them in a more official place. Also, I
>understand theres other use cases not shown in the initial docs, so this
>is a good time to collaborate to make this more thought out.
>
>Please feel free to ping me with any questions,
>
>Thank you
>
>
>Google DOCS link for FLIP folder:
>https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWM&usp=sh
>a
>ring
>
>-diagrams are draw.io<http://draw.io> based and can be opened from within 
>Drive by
>selecting the appropriate application.
>
>On 10/7/14 2:25 PM, "Brandon Logan" 
>mailto:brandon.lo...@rackspace.com>> wrote:
>
>>I'll add some more info to this as well:
>>
>>Neutron LBaaS creates the neutron port for the VIP in the plugin layer
>>before drivers ever have any control.  In the case of an async driver,
>>it will then call the driver's create method, and then return to the
>>user the vip info.  This means the user will know the VIP before the
>>driver even finishes creating the load balancer.
>>
>>So if Octavia is just going to create a floating IP and then associate
>>that floating IP to the neutron port, there is the problem of the user
>>not ever seeing the correct VIP (which would be the floating iP).
>>
>>So really, we need to have a very detailed discussion on what the
>>options are for us to get this t

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Vijay Venkatachalam
> I'm unsure the meaning of hosting FLIP directly on the LB.

There can be LB appliances (usually physical appliances) that sit at the edge
and are connected to receive floating IP traffic.

In such a case, the VIP/virtual server with the FLIP can be configured in the LB
appliance.
Meaning, the LB appliance is now the "owner" of the FLIP and will be responding to
ARPs.


Thanks,
Vijay V.

From: Phillip Toohill [mailto:phillip.tooh...@rackspace.com]
Sent: 15 October 2014 23:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

No worries :)

Could you possibly clarify what you mean by the FLIP hosted directly on the LB?

Im currently assuming the FLIP pools are designated in Neutron at this point 
and we would simply be associating with the VIP port, I'm unsure the meaning of 
hosting FLIP directly on the LB.

Thank you for responses! There is definitely a more thought out discussion to 
be had, and glad these ideas are being brought up now rather than later.

From: Vijay Venkatachalam 
mailto:vijay.venkatacha...@citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, October 15, 2014 9:38 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

I felt guilty after reading Vijay B. 's reply :).
My apologies to have replied in brief, here are my thoughts in detail.

Currently LB configuration exposed via floating IP is a 2 step operation.
User has to first "create a VIP with a private IP" and then "creates a FLIP and 
assigns FLIP to private VIP" which would result in a DNAT in the gateway.
The proposal is to combine these 2 steps, meaning make DNATing operation as 
part of VIP creation process.

While this seems to be a simple proposal I think we should consider finer 
details.
The proposal assumes that the FLIP is implemented by the gateway through a DNAT.
We should also be open to creating a VIP with a FLIP rather than a private IP.
Meaning FLIP will be hosted directly by the LB appliance.

In essence, LBaaS plugin will create the FLIP for the VIP and let the driver 
know about the FLIP that should get implemented.
The drivers should have a choice of implementing in its own way.
It could choose to

1.  Implement via the traditional route. Driver creates a private IP, 
implements VIP as a private IP in the lb appliance and calls Neutron to 
implement DNAT (FLIP to private VIP )

2.  Implement VIP as a FLIP directly on the LB appliance. Here there is no 
private IP involved.

Pros:
Not much changes in the LBaaS API, there is only one IP as part of the VIP.
We can ensure backward compatibility with the driver as well by having Step (1) 
implemented in abstract driver.

Thanks,
Vjay

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: 15 October 2014 05:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
mailto:phillip.tooh...@rackspace.com>> wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, "Phillip Toohill" 
mailto:phillip.tooh...@rackspace.com>>
wrote:

>Hello all,
>
>Heres some additional diagrams and docs. Not incredibly detailed, but
>should get the point across.
>
>Feel free to edit if needed.
>
>Once we come to some kind of agreement and understanding I can rewrite
>these more to be thorough and get them in a more official place. Also, I
>understand theres other use cases not shown in the initial docs, so this
>is a good time to collaborate to make this more thought out.
>
>Please feel free to ping me with any questions,
>
>Thank you
>
>
>Google DOCS link for FLIP folder:
>https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWM&usp=sh
>a
>ring
>
>-diagrams are draw.io<http://draw.io> based and can be opened from within 
>Drive by
>selecting the appropriate application.
>
>On 10/7/14 2:25 PM, "Brandon Logan" 
>mailto:brandon.lo...@rackspace.com>> wrote:
>
>>I'll add some more info to this as well:
>>
>>Neutron LBaaS creates the neutron port for the VIP in the plugin

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Phillip Toohill
Ah, this makes sense. I guess I'm wondering more about how that's configured and whether
it utilizes Neutron at all, and if it does, how it configures that.

I have some more research to do it seems ;)

Thanks for the clarification

From: Vijay Venkatachalam 
mailto:vijay.venkatacha...@citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, October 15, 2014 1:33 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

> I'm unsure the meaning of hosting FLIP directly on the LB.

There can be LB appliances (usually physical appliance) that sit at the edge 
and is connected to receive floating IP traffic .

In such a case, the VIP/Virtual Server with FLIP  can be configured in the LB 
appliance.
Meaning, LB appliance is now the “owner” of the FLIP and will be responding to 
ARPs.


Thanks,
Vijay V.

From: Phillip Toohill [mailto:phillip.tooh...@rackspace.com]
Sent: 15 October 2014 23:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

No worries :)

Could you possibly clarify what you mean by the FLIP hosted directly on the LB?

Im currently assuming the FLIP pools are designated in Neutron at this point 
and we would simply be associating with the VIP port, I'm unsure the meaning of 
hosting FLIP directly on the LB.

Thank you for responses! There is definitely a more thought out discussion to 
be had, and glad these ideas are being brought up now rather than later.

From: Vijay Venkatachalam 
mailto:vijay.venkatacha...@citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, October 15, 2014 9:38 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

I felt guilty after reading Vijay B. ’s reply :).
My apologies to have replied in brief, here are my thoughts in detail.

Currently LB configuration exposed via floating IP is a 2 step operation.
User has to first “create a VIP with a private IP” and then “creates a FLIP and 
assigns FLIP to private VIP” which would result in a DNAT in the gateway.
The proposal is to combine these 2 steps, meaning make DNATing operation as 
part of VIP creation process.

While this seems to be a simple proposal I think we should consider finer 
details.
The proposal assumes that the FLIP is implemented by the gateway through a DNAT.
We should also be open to creating a VIP with a FLIP rather than a private IP.
Meaning FLIP will be hosted directly by the LB appliance.

In essence, LBaaS plugin will create the FLIP for the VIP and let the driver 
know about the FLIP that should get implemented.
The drivers should have a choice of implementing in its own way.
It could choose to

1.  Implement via the traditional route. Driver creates a private IP, 
implements VIP as a private IP in the lb appliance and calls Neutron to 
implement DNAT (FLIP to private VIP )

2.  Implement VIP as a FLIP directly on the LB appliance. Here there is no 
private IP involved.

Pros:
Not much changes in the LBaaS API, there is only one IP as part of the VIP.
We can ensure backward compatibility with the driver as well by having Step (1) 
implemented in abstract driver.

Thanks,
Vjay

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: 15 October 2014 05:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
mailto:phillip.tooh...@rackspace.com>> wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, "Phillip Toohill" 
mailto:phillip.tooh...@rackspace.com>>
wrote:

>Hello all,
>
>Heres some additional diagrams and docs. Not incredibly detailed, but
>should get the point across.
>
>Feel free to edit if needed.
>
>Once we come to some kind of agreement and understanding I can rewrite
>these more to be thorough and get them in a more official place. Also, I
>understand theres other use cases not shown in the initial docs, so this
>is a good time to col

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Logo for Octavia project

2015-04-15 Thread Trevor Vardeman
I have a couple of proposals done up on paper that I'll have available
shortly; I'll reply with a link.

 - Trevor J. Vardeman
 - trevor.varde...@rackspace.com
 - (210) 312 - 4606




On 4/14/15, 5:34 PM, "Eichberger, German"  wrote:

>All,
>
>Let's decide on a logo tomorrow so we can print stickers in time for
>Vancouver. Here are some designs to consider:
>http://bit.ly/Octavia_logo_vote
>
>We will discuss more at tomorrow's meeting - Agenda:
>https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Meeting_2015
>-04-15 - but please come prepared with one of your favorite designs...
>
>Thanks,
>German
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas][octavia] About playing Neutron LBaaS

2016-11-18 Thread Michael Johnson
Hi Yipei,

A note: you probably want to use the tags [neutron-lbaas] and
[octavia] instead of [tricircle] to catch the LBaaS team's attention.

Since you are using the octavia driver, can you please include a link
to your o-cw.log?  This will tell us why the load balancer create
failed.

Also, I see that your two servers are on the lb-mgmt-net; this may
cause some problems with the load balancer when you add them as
members.  The lb-mgmt-net is intended to only be used for
communication between the octavia controller processes and the octavia
amphorae (service VMs).  Since you didn't get as far as adding members,
I'm sure this is not the root cause of the problem you are seeing.
The o-cw log will help us determine the root cause.
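
For example, something along these lines would put the member servers on an
ordinary tenant network instead (a sketch; names and IDs are placeholders):

    neutron net-create member-net
    neutron subnet-create --name member-subnet member-net 10.0.1.0/24
    nova boot --image <image> --flavor <flavor> \
        --nic net-id=<member-net-id> server1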

Michael


On Thu, Nov 17, 2016 at 11:48 PM, Yipei Niu  wrote:
> Hi, all,
>
> Recently I try to configure and play Neutron LBaaS in one OpenStack instance
> and have some trouble when creating a load balancer.
>
> I install devstack with neutron networking as well as LBaaS in one VM. The
> detailed configuration of local.conf is pasted in the link
> http://paste.openstack.org/show/589669/.
>
> Then I boot two VMs in the OpenStack instance, which can be reached via ping
> command from the host VM. The detailed information of the two VMs are listed
> in the following table.
>
> +--+-+++-+--+
> | ID   | Name| Status | Task State |
> Power State | Networks |
> +--+-+++-+--+
> | 4cf7527b-05cc-49b7-84f9-3cc0f061be4f | server1 | ACTIVE | -  |
> Running | lb-mgmt-net=192.168.0.6  |
> | bc7384a0-62aa-4987-89b6-8b98a6c467a9 | server2 | ACTIVE | -  |
> Running | lb-mgmt-net=192.168.0.12 |
> +--+-+++-+--+
>
> After building up the environment, I try to create a load balancer based on
> the guide in https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun. When
> executing the command "neutron lbaas-loadbalancer-create --name lb1
> private-subnet", the state of the load balancer remains "PENDING_CREATE" and
> finally becomes "ERROR". I checked q-agt.log and q-svc.log, the detailed
> info is pasted in http://paste.openstack.org/show/589676/.
>
> Look forward to your valuable comments. Thanks a lot!
>
> Best regards,
> Yipei
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas][octavia] About running Neutron LBaaS

2016-11-19 Thread Yipei Niu
Hi, Michael,

Thanks a lot for your comments.

Please find the errors of o-cw.log at http://paste.openstack.org/show/589806/.
Hope it will help.

About the lb-mgmt-net, I just followed the guide for running LBaaS. If I
create an ordinary subnet with neutron for the two VMs, will it prevent the
issue you mentioned from happening?

Best regards,
Yipei
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas][octavia] About running Neutron LBaaS

2016-11-21 Thread Michael Johnson
Hi Yipei,

That error means the controller worker process was not able to reach
the amphora REST API.

I am guessing this is the issue with diskimage-builder which we have
patches up for, but not all of them have merged yet [1][2].

Try running my script:
https://gist.github.com/michjohn/a7cd582fc19e0b4bc894eea6249829f9 to
rebuild the image and boot another amphora.
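
The script boils down to rebuilding and re-registering the amphora image;
roughly the following, assuming a devstack-style setup that tags the glance
image with "amphora" (the script itself is authoritative):

    cd octavia/diskimage-create
    ./diskimage-create.sh -o amphora-x64-haproxy
    openstack image create amphora-x64-haproxy \
        --disk-format qcow2 --container-format bare \
        --tag amphora --file amphora-x64-haproxy.qcow2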

Also, could you provide a link to the docs you used that booted the
web servers on the lb-mgmt-net?  I want to make sure we update that
and clarify it for future users.

Michael

[1] https://review.openstack.org/399272
[2] https://review.openstack.org/399276

On Sat, Nov 19, 2016 at 9:46 PM, Yipei Niu  wrote:
> Hi, Micheal,
>
> Thanks a lot for your comments.
>
> Please find the errors of o-cw.log in link
> http://paste.openstack.org/show/589806/. Hope it will help.
>
> About the lb-mgmt-net, I just follow the guide of running LBaaS. If I create
> a ordinary subnet with neutron for the two VMs, will it prevent the issue
> you mentioned happening?
>
> Best regards,
> Yipei
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas][octavia] About running Neutron LBaaS

2016-11-22 Thread Yipei Niu
Hi, Michael,

Thanks a lot for your help. I am trying your solution.

Best regards,
Yipei

On Sun, Nov 20, 2016 at 1:46 PM, Yipei Niu  wrote:

> Hi, Michael,
>
> Thanks a lot for your comments.
>
> Please find the errors of o-cw.log at
> http://paste.openstack.org/show/589806/. Hope it will
> help.
>
> About the lb-mgmt-net, I just followed the guide for running LBaaS. If I
> create an ordinary subnet with neutron for the two VMs, will it prevent the
> issue you mentioned from happening?
>
> Best regards,
> Yipei
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron-lbaas][octavia] Error when creating load balancer

2016-12-27 Thread Yipei Niu
Hi, All,

I failed to create a load balancer on a subnet. The detailed info from
o-cw.log is pasted at http://paste.openstack.org/show/593492/.

Look forward to your valuable comments. Thank you.

Best regards,
Yipei
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas][octavia] No Octavia meeting 5/20/15

2015-05-14 Thread Eichberger, German
All,

We won't have an Octavia meeting next week due to the OpenStack summit, but
we will have a few sessions there - so please make sure to say hi...

German


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-24 Thread Brandon Logan
With the recent talk about advanced services spinning out of Neutron,
and the fact that most of the LBaaS community has wanted LBaaS to spin out of
Neutron, I wanted to bring up a possibility and gauge interest and
opinion on it.

Octavia is going to have (and already has) an API.  The current thinking is that an
Octavia driver will be created in Neutron LBaaS that will make
requests to the Octavia API.  When LBaaS spins out of Neutron, it will
need a standalone API.  Octavia's API seems to be a good solution to
this.  It will support vendor drivers much like the current Neutron
LBaaS does.  It has a similar API to Neutron LBaaS v2, but it's not an
exact duplicate.  Octavia will be growing more mature in stackforge at a
higher velocity than an Openstack project, so I expect by the time Kilo
comes around its API will be very mature.

Octavia's API doesn't have to be called Octavia either.  It can be
separated out and it can be called Openstack LBaaS, and the rest of
Octavia (the actual brains of it) will just be another driver to
Openstack LBaaS, which would retain the Octavia name.

This is my PROS and CONS list to using Octavia's API as the spun out
LBaaS:

PROS
1. Time will need to be spent on a spun out LBaaS's API anyway.  Octavia
will already have this done.
2. Most of the same people working on Octavia have worked on Neutron
LBaaS v2.
3. It's out of Neutron faster, which is good for Neutron and LBaaS.

CONS
1. The Octavia API is dissimilar enough from Neutron LBaaS v2 to be yet
another version of an LBaaS API.
2. The Octavia API will also have a separate Operator API which will
most likely only work with Octavia, not any vendors.

The CONS are easily solvable, and IMHO the PROS greatly outweigh the
CONS.

This is just my opinion though and I'd like to hear back from as many as
possible.  Add on to the PROS and CONS if wanted.

If it is a direction we can agree on going, then we can add it as a talking
point in the advanced services spin-out meeting:

http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VEq66HWx3UY

Thanks,
Brandon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron-lbaas][octavia] Octavia request poll interval not respected

2018-02-01 Thread mihaela.balas
Hello,

I have the following setup:
Neutron - Newton version
Octavia - Ocata version

Neutron LBaaS had the following configuration in services_lbaas.conf:

[octavia]

..
# Interval in seconds to poll octavia when an entity is created, updated, or
# deleted. (integer value)
request_poll_interval = 2

# Time to stop polling octavia when a status of an entity does not change.
# (integer value)
request_poll_timeout = 300



However, neutron-lbaas seems not to respect the request poll interval and it 
takes about 15 minutes to create a load balancer+listener+pool+members+hm. 
Below, you have the timestamps for the API calls made by neutron towards 
Octavia (extracted with tcpdump when I create a load balancer from horizon GUI):

10.100.0.14 - - [01/Feb/2018 12:11:53] "POST /v1/loadbalancers HTTP/1.1" 202 437
10.100.0.14 - - [01/Feb/2018 12:11:54] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 430
10.100.0.14 - - [01/Feb/2018 12:11:58] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 447
10.100.0.14 - - [01/Feb/2018 12:12:00] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 447
10.100.0.14 - - [01/Feb/2018 12:14:12] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:16:23] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/listeners HTTP/1.1" 202 
445
10.100.0.14 - - [01/Feb/2018 12:16:23] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:18:32] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:18:37] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools HTTP/1.1" 202 318
10.100.0.14 - - [01/Feb/2018 12:18:37] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:20:46] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:23:00] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/members
 HTTP/1.1" 202 317
10.100.0.14 - - [01/Feb/2018 12:23:00] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:23:05] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:23:08] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/members
 HTTP/1.1" 202 316
10.100.0.14 - - [01/Feb/2018 12:23:08] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:25:20] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:25:23] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/healthmonitor
 HTTP/1.1" 202 215
10.100.0.14 - - [01/Feb/2018 12:27:30] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 437

It seems that, after 1 or 2 polls, it waits for more than two minutes until the 
next poll. Is it normal? Has anyone seen this behavior?
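
For reference, the documented semantics of those two options amount to a
loop like the following (a simplified sketch, not the actual neutron-lbaas
code):

    import time

    def poll_octavia(get_provisioning_status, interval=2, timeout=300):
        # Poll every `interval` seconds (request_poll_interval); give up
        # once the status has not changed for `timeout` seconds
        # (request_poll_timeout).
        last_status = None
        last_change = time.time()
        while True:
            status = get_provisioning_status()
            if status in ('ACTIVE', 'ERROR', 'DELETED'):
                return status
            if status != last_status:
                last_status, last_change = status, time.time()
            elif time.time() - last_change > timeout:
                raise RuntimeError('gave up; status stuck at %s' % status)
            time.sleep(interval)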

Thank you,
Mihaela Balas

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron-lbaas][octavia] New time proposal for weekly meeting

2016-12-08 Thread Kobi Samoray
Hi,
As some project members are based outside of the US, I'd like to propose a time
change for the weekly meeting, which will be more friendly to non-US-based members.
Please post your preferences/info in the etherpad below.

https://etherpad.openstack.org/p/octavia-weekly-meeting-time
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas][octavia] Error when creating load balancer

2016-12-29 Thread Kosnik, Lubosz
Based on these logs, I can tell you that the problem is with plugging the VIP
address. You also need to show us the n-cpu logs; there should be some info
about what happened, because we can see in the logs (line 22) that the client
failed with error 500 while attaching the network adapter. Maybe you're out
of IPs in this subnet?
Without the rest of the logs there is no way to tell exactly what happened.
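
To quickly check whether the subnet is exhausted, something along these
lines should work (a rough sketch with python-neutronclient; the credentials
and the subnet ID below are placeholders, not taken from your environment):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    subnet_id = 'VIP_SUBNET_ID'  # the VIP subnet of the failing load balancer
    subnet = neutron.show_subnet(subnet_id)['subnet']

    # Count the ports that already consume an address on that subnet.
    used = sum(1 for port in neutron.list_ports()['ports']
               for ip in port['fixed_ips'] if ip['subnet_id'] == subnet_id)

    print('allocation pools: %s' % subnet['allocation_pools'])
    print('addresses in use: %d' % used)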

Regards,
Lubosz.

From: Yipei Niu 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, December 27, 2016 at 9:16 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [neutron-lbaas][octavia] Error when creating load 
balancer

Hi, All,

I failed to create a load balancer on a subnet. The detailed info from
o-cw.log is pasted here: http://paste.openstack.org/show/593492/.

Look forward to your valuable comments. Thank you.

Best regards,
Yipei
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-15 Thread Brandon Logan
I filed a bug [1] a while ago arguing that subnet_id should be an optional
parameter for member creation.  Currently it is required.  Review [2]
makes it optional.

The original thinking was that if the load balancer is ever connected to a
subnet, be it by another member on that subnet or by the VIP, then the user
should not need to specify the subnet for a new member on one of those
subnets.
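
For concreteness, a sketch of the two cases with python-neutronclient
(illustrative only; the client setup, IDs, and addresses below are
placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    pool_id = 'POOL_UUID'
    subnet_id = 'SUBNET_UUID'

    # Today subnet_id is required on member create:
    neutron.create_lbaas_member(pool_id, {'member': {
        'address': '10.0.0.5', 'protocol_port': 80, 'subnet_id': subnet_id}})

    # With [2], it could be omitted whenever the load balancer already
    # touches that subnet (through its VIP or an existing member):
    neutron.create_lbaas_member(pool_id, {'member': {
        'address': '10.0.0.6', 'protocol_port': 80}})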

At the midcycle we discussed it and reached an informal agreement that it
required too many assumptions on the part of the end user, neutron-lbaas,
and the driver.

If anyone wants to voice their opinion on this matter, do so on the bug
report, on the review, or in response to this thread.  Otherwise, it'll
probably be abandoned at some point and left undone.

Thanks,
Brandon

[1] https://bugs.launchpad.net/neutron/+bug/1426248
[2] https://review.openstack.org/#/c/267935/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-24 Thread Stephen Balukoff
+1 to this, eh!

Though it sounds more like you're talking about spinning the Octavia user
API out of Octavia to become its own thing (i.e. "OpenStack LBaaS"), and
then ensuring a standardized driver interface that vendors (including
Octavia) will implement. It's sort of a six-of-one, half-a-dozen-of-the-other
kind of deal.
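
(Purely as an illustrative sketch of what such a standardized driver
interface might look like, and not anything that has been agreed upon;
every name below is hypothetical:)

    import abc

    class LoadBalancerDriver(object):
        """Hypothetical driver contract for a spun-out OpenStack LBaaS."""

        __metaclass__ = abc.ABCMeta  # Python 2 idiom, as used today

        @abc.abstractmethod
        def create_load_balancer(self, context, load_balancer):
            """Provision the LB on the backend (vendor box, Octavia, ...)."""

        @abc.abstractmethod
        def delete_load_balancer(self, context, load_balancer):
            """Tear the LB down and release any backend resources."""

        @abc.abstractmethod
        def create_listener(self, context, listener):
            """Attach a protocol/port listener to an existing LB."""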

To the pros, I would add: spinning out of Neutron ensures that LBaaS uses
"clean" interfaces to the networking layer, and the separation of concerns
here means that Neutron and LBaaS can evolve independently. (Testing,
failure modes, etc. also become easier with separated concerns.)

One other thing to consider (not sure if pro or con): I know at Atlanta
there was a lot of talk around using the Neutron flavor framework to allow
for multiple vendors in a single installation as well as differentiated
product offerings for Operators. If / when LBaaS is spun out of Neutron,
LBaaS will probably still need something like Neutron flavors, even if it
isn't an equivalent implementation. (Noting, of course, that no
implementation of Neutron flavors presently exists. XD)

Stephen


On Fri, Oct 24, 2014 at 2:47 PM, Brandon Logan 
wrote:

> With the recent talk about advanced services spinning out of Neutron,
> and the fact most of the LBaaS community has wanted LBaaS to spin out of
> Neutron, I wanted to bring up a possibility and gauge interest and
> opinion on this possibility.
>
> Octavia is going to (and has) an API.  The current thinking is that an
> Octavia driver will be created in Neutron LBaaS that will make a
> requests to the Octavia API.  When LBaaS spins out of Neutron, it will
> need a standalone API.  Octavia's API seems to be a good solution to
> this.  It will support vendor drivers much like the current Neutron
> LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
> exact duplicate.  Octavia will be growing more mature in stackforge at a
> higher velocity than an Openstack project, so I expect by the time Kilo
> comes around it's API will be very mature.
>
> Octavia's API doesn't have to be called Octavia either.  It can be
> separated out and it can be called Openstack LBaaS, and the rest of
> Octavia (the actual brains of it) will just be another driver to
> Openstack LBaaS, which would retain the Octavia name.
>
> This is my PROS and CONS list to using Octavia's API as the spun out
> LBaaS:
>
> PROS
> 1. Time will need to be spent on a spun out LBaaS's API anyway.  Octavia
> will already have this done.
> 2. Most of the same people working on Octavia have worked on Neutron
> LBaaS v2.
> 3. It's out of Neutron faster, which is good for Neutron and LBaaS.
>
> CONS
> 1. The Octavia API is dissimilar enough from Neutron LBaaS v2 to be yet
> another version of an LBaaS API.
> 2. The Octavia API will also have a separate Operator API which will
> most likely only work with Octavia, not any vendors.
>
> The CONS are easily solvable, and IMHO the PROS greatly outweigh the
> CONS.
>
> This is just my opinion though and I'd like to hear back from as many as
> possible.  Add on to the PROS and CONS if wanted.
>
> If it is direction we can agree on going then we can add as a talking
> point in the advanced services spin out meeting:
>
>
> http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VEq66HWx3UY
>
> Thanks,
> Brandon
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-26 Thread Doug Wiegley
Hi all,

Before we get into the details of which API goes where, I’d like to see us
answer the questions of:

1. Are we spinning out?
2. When?
3. With or without the rest of advanced services?
4. Do we want to wait until we (the royal “we” of “the Neutron team”) have
had the Paris summit discussions on vendor split-out and adv. services
spinout before we answer those questions?  (Yes, that question is leading.)

To me, the “where does the API live” is an implementation detail, and not
where the time will need to be spent.

For the record, my answers are:

1. Yes.
2. I don’t know.
3. I don’t know; this needs some serious discussion.
4. Yes.

Thanks,
doug

On 10/24/14, 3:47 PM, "Brandon Logan"  wrote:

>With the recent talk about advanced services spinning out of Neutron,
>and the fact most of the LBaaS community has wanted LBaaS to spin out of
>Neutron, I wanted to bring up a possibility and gauge interest and
>opinion on this possibility.
>
>Octavia is going to (and has) an API.  The current thinking is that an
>Octavia driver will be created in Neutron LBaaS that will make a
>requests to the Octavia API.  When LBaaS spins out of Neutron, it will
>need a standalone API.  Octavia's API seems to be a good solution to
>this.  It will support vendor drivers much like the current Neutron
>LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
>exact duplicate.  Octavia will be growing more mature in stackforge at a
>higher velocity than an Openstack project, so I expect by the time Kilo
>comes around it's API will be very mature.
>
>Octavia's API doesn't have to be called Octavia either.  It can be
>separated out and it can be called Openstack LBaaS, and the rest of
>Octavia (the actual brains of it) will just be another driver to
>Openstack LBaaS, which would retain the Octavia name.
>
>This is my PROS and CONS list to using Octavia's API as the spun out
>LBaaS:
>
>PROS
>1. Time will need to be spent on a spun out LBaaS's API anyway.  Octavia
>will already have this done.
>2. Most of the same people working on Octavia have worked on Neutron
>LBaaS v2.
>3. It's out of Neutron faster, which is good for Neutron and LBaaS.
>
>CONS
>1. The Octavia API is dissimilar enough from Neutron LBaaS v2 to be yet
>another version of an LBaaS API.
>2. The Octavia API will also have a separate Operator API which will
>most likely only work with Octavia, not any vendors.
>
>The CONS are easily solvable, and IMHO the PROS greatly outweigh the
>CONS.
>
>This is just my opinion though and I'd like to hear back from as many as
>possible.  Add on to the PROS and CONS if wanted.
>
>If it is direction we can agree on going then we can add as a talking
>point in the advanced services spin out meeting:
>
>http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.
>VEq66HWx3UY
>
>Thanks,
>Brandon
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-26 Thread Brandon Logan
Good questions Doug.  My answers are as follows:

1. Yes
2. Some time after Kilo (same as I don't know when)
3. The main reason a spin-out from Neutron makes sense is that Neutron's
scope is too large for advanced services to get the attention they need
from the Neutron core.  If all of the advanced services spin out together,
I see that repeating itself within an advanced services project.  More and
more "advanced services" would get added in and the scope would again
become too large.  There would definitely be benefits to it, but I think
we would end up right back where we are today.
4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.

Yes, the brunt of the time will not be spent on the API, but since this
seemed like an opportunity to kill two birds with one stone, I figured
it warranted a discussion.

Thanks,
Brandon

On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
> Hi all,
> 
> Before we get into the details of which API goes where, I’d like to see us
> answer the questions of:
> 
> 1. Are we spinning out?
> 2. When?
> 3. With or without the rest of advanced services?
> 4. Do we want to wait until we (the royal “we” of “the Neutron team”) have
> had the Paris summit discussions on vendor split-out and adv. services
> spinout before we answer those questions?  (Yes, that question is leading.)
> 
> To me, the “where does the API live” is an implementation detail, and not
> where the time will need to be spent.
> 
> For the record, my answers are:
> 
> 1. Yes.
> 2. I don’t know.
> 3. I don’t know; this needs some serious discussion.
> 4. Yes.
> 
> Thanks,
> doug
> 
> On 10/24/14, 3:47 PM, "Brandon Logan"  wrote:
> 
> >With the recent talk about advanced services spinning out of Neutron,
> >and the fact most of the LBaaS community has wanted LBaaS to spin out of
> >Neutron, I wanted to bring up a possibility and gauge interest and
> >opinion on this possibility.
> >
> >Octavia is going to (and has) an API.  The current thinking is that an
> >Octavia driver will be created in Neutron LBaaS that will make a
> >requests to the Octavia API.  When LBaaS spins out of Neutron, it will
> >need a standalone API.  Octavia's API seems to be a good solution to
> >this.  It will support vendor drivers much like the current Neutron
> >LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
> >exact duplicate.  Octavia will be growing more mature in stackforge at a
> >higher velocity than an Openstack project, so I expect by the time Kilo
> >comes around it's API will be very mature.
> >
> >Octavia's API doesn't have to be called Octavia either.  It can be
> >separated out and it can be called Openstack LBaaS, and the rest of
> >Octavia (the actual brains of it) will just be another driver to
> >Openstack LBaaS, which would retain the Octavia name.
> >
> >This is my PROS and CONS list to using Octavia's API as the spun out
> >LBaaS:
> >
> >PROS
> >1. Time will need to be spent on a spun out LBaaS's API anyway.  Octavia
> >will already have this done.
> >2. Most of the same people working on Octavia have worked on Neutron
> >LBaaS v2.
> >3. It's out of Neutron faster, which is good for Neutron and LBaaS.
> >
> >CONS
> >1. The Octavia API is dissimilar enough from Neutron LBaaS v2 to be yet
> >another version of an LBaaS API.
> >2. The Octavia API will also have a separate Operator API which will
> >most likely only work with Octavia, not any vendors.
> >
> >The CONS are easily solvable, and IMHO the PROS greatly outweigh the
> >CONS.
> >
> >This is just my opinion though and I'd like to hear back from as many as
> >possible.  Add on to the PROS and CONS if wanted.
> >
> >If it is direction we can agree on going then we can add as a talking
> >point in the advanced services spin out meeting:
> >
> >http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.
> >VEq66HWx3UY
> >
> >Thanks,
> >Brandon
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-26 Thread Doug Wiegley
Hi Brandon,

> 4. I brought this up now so that we can decide whether we want to
> discuss it at the advanced services spin out session.  I don't see the
> harm in opinions being discussed before the summit, during the summit,
> and more thoroughly after the summit.

I agree with this sentiment.  I’d just like to pull up to the decision
level, and if we can get some consensus on how we move forward, we can
bring a concrete plan to the summit instead of 40 minutes of Kumbaya.  We
love each other.  Check.  Things are going to change sometime.  Check.  We
might spin out someday.  Check.  Now, let’s jump to the interesting part.

> 3. The main reason a spin out makes sense from Neutron is that the scope
> for Neutron is too large for the attention advances services needs from
> the Neutron Core.  If all of advanced services spins out, I see that

There is merit here, but consider the sorts of things that an advanced
services framework should be doing:

- plugging into neutron ports, with all manner of topologies
- service VM handling
- plugging into nova-network
- service chaining
- applying things like security groups to services

… this is all stuff that Octavia is talking about implementing itself in a
basically defensive manner, instead of leveraging other work.  And there
are specific reasons for that.  But, maybe we can at least take steps to
not be incompatible about it.  Or maybe there is a hierarchy of Neutron ->
Services -> LB, where we’re still spun out, but not doing it in a way that
we have to re-implement the world all the time.  It’s at least worth a
conversation or three.

Thanks,
Doug




On 10/26/14, 6:35 PM, "Brandon Logan"  wrote:

>Good questions Doug.  My answers are as follows:
>
>1. Yes
>2. Some time after Kilo (same as I don't know when)
>3. The main reason a spin out makes sense from Neutron is that the scope
>for Neutron is too large for the attention advances services needs from
>the Neutron Core.  If all of advanced services spins out, I see that
>repeating itself within an advanced services project.  More and more
>"advanced services" will get added in and the scope will become too
>large.  There would definitely be benefits to it though, but I think we
>would end up being right where we are today.
>4. I brought this up now so that we can decide whether we want to
>discuss it at the advanced services spin out session.  I don't see the
>harm in opinions being discussed before the summit, during the summit,
>and more thoroughly after the summit.
>
>Yes the brunt of the time will not be spent on the API, but since it
>seemed like an opportunity to kill two birds with one stone, I figured
>it warranted a discussion.
>
>Thanks,
>Brandon
>
>On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
>> Hi all,
>> 
>> Before we get into the details of which API goes where, I’d like to see
>>us
>> answer the questions of:
>> 
>> 1. Are we spinning out?
>> 2. When?
>> 3. With or without the rest of advanced services?
>> 4. Do we want to wait until we (the royal “we” of “the Neutron team”)
>>have
>> had the Paris summit discussions on vendor split-out and adv. services
>> spinout before we answer those questions?  (Yes, that question is
>>leading.)
>> 
>> To me, the “where does the API live” is an implementation detail, and
>>not
>> where the time will need to be spent.
>> 
>> For the record, my answers are:
>> 
>> 1. Yes.
>> 2. I don’t know.
>> 3. I don’t know; this needs some serious discussion.
>> 4. Yes.
>> 
>> Thanks,
>> doug
>> 
>> On 10/24/14, 3:47 PM, "Brandon Logan" 
>>wrote:
>> 
>> >With the recent talk about advanced services spinning out of Neutron,
>> >and the fact most of the LBaaS community has wanted LBaaS to spin out
>>of
>> >Neutron, I wanted to bring up a possibility and gauge interest and
>> >opinion on this possibility.
>> >
>> >Octavia is going to (and has) an API.  The current thinking is that an
>> >Octavia driver will be created in Neutron LBaaS that will make a
>> >requests to the Octavia API.  When LBaaS spins out of Neutron, it will
>> >need a standalone API.  Octavia's API seems to be a good solution to
>> >this.  It will support vendor drivers much like the current Neutron
>> >LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
>> >exact duplicate.  Octavia will be growing more mature in stackforge at
>>a
>> >higher velocity than an Openstack project, so I expect by the time Kilo
>> >comes around it's API will be very mature.
>> >
>> >Octavia's API doesn't have to be called Octavia either.  It can be
>> >separated out and it can be called Openstack LBaaS, and the rest of
>> >Octavia (the actual brains of it) will just be another driver to
>> >Openstack LBaaS, which would retain the Octavia name.
>> >
>> >This is my PROS and CONS list to using Octavia's API as the spun out
>> >LBaaS:
>> >
>> >PROS
>> >1. Time will need to be spent on a spun out LBaaS's API anyway.
>>Octavia
>> >will already have this done.
>> >2. Most of the same people working on O

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-26 Thread Sridar Kandaswamy (skandasw)
Hi Doug:

On 10/26/14, 6:01 PM, "Doug Wiegley"  wrote:

>Hi Brandon,
>
>> 4. I brought this up now so that we can decide whether we want to
>> discuss it at the advanced services spin out session.  I don't see the
>> harm in opinions being discussed before the summit, during the summit,
>> and more thoroughly after the summit.
>
>I agree with this sentiment.  I’d just like to pull-up to the decision
>level, and if we can get some consensus on how we move forward, we can
>bring a concrete plan to the summit instead of 40 minutes of Kumbaya.  We
>love each other.  Check.  Things are going to change sometime.  Check.  We
>might spin-out someday.  Check.  Now, let’s jump to the interesting part.
>
>> 3. The main reason a spin out makes sense from Neutron is that the scope
>> for Neutron is too large for the attention advances services needs from
>> the Neutron Core.  If all of advanced services spins out, I see that
>
>There is merit here, but consider the sorts of things that an advanced
>services framework should be doing:
>
>- plugging into neutron ports, with all manner of topologies
>- service VM handling
>- plugging into nova-network
>- service chaining
>- applying things like security groups to services
>
>… this is all stuff that Octavia is talking about implementing itself in a
>basically defensive manner, instead of leveraging other work.  And there
>are specific reasons for that.  But, maybe we can at least take steps to
>not be incompatible about it.  Or maybe there is a hierarchy of Neutron ->
>Services -> LB, where we’re still spun out, but not doing it in a way that
>we have to re-implement the world all the time.  It’s at least worth a
>conversation or three.

I'm in total agreement, and I have heard these sentiments in multiple
conversations across multiple players. It would be really fruitful to have
a constructive conversation on this across the services; there are enough
similar issues to make it worthwhile.

Thanks

Sridar

>
>Thanks,
>Doug
>
>
>
>
>On 10/26/14, 6:35 PM, "Brandon Logan"  wrote:
>
>>Good questions Doug.  My answers are as follows:
>>
>>1. Yes
>>2. Some time after Kilo (same as I don't know when)
>>3. The main reason a spin out makes sense from Neutron is that the scope
>>for Neutron is too large for the attention advances services needs from
>>the Neutron Core.  If all of advanced services spins out, I see that
>>repeating itself within an advanced services project.  More and more
>>"advanced services" will get added in and the scope will become too
>>large.  There would definitely be benefits to it though, but I think we
>>would end up being right where we are today.
>>4. I brought this up now so that we can decide whether we want to
>>discuss it at the advanced services spin out session.  I don't see the
>>harm in opinions being discussed before the summit, during the summit,
>>and more thoroughly after the summit.
>>
>>Yes the brunt of the time will not be spent on the API, but since it
>>seemed like an opportunity to kill two birds with one stone, I figured
>>it warranted a discussion.
>>
>>Thanks,
>>Brandon
>>
>>On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
>>> Hi all,
>>> 
>>> Before we get into the details of which API goes where, I’d like to see
>>>us
>>> answer the questions of:
>>> 
>>> 1. Are we spinning out?
>>> 2. When?
>>> 3. With or without the rest of advanced services?
>>> 4. Do we want to wait until we (the royal “we” of “the Neutron team”)
>>>have
>>> had the Paris summit discussions on vendor split-out and adv. services
>>> spinout before we answer those questions?  (Yes, that question is
>>>leading.)
>>> 
>>> To me, the “where does the API live” is an implementation detail, and
>>>not
>>> where the time will need to be spent.
>>> 
>>> For the record, my answers are:
>>> 
>>> 1. Yes.
>>> 2. I don’t know.
>>> 3. I don’t know; this needs some serious discussion.
>>> 4. Yes.
>>> 
>>> Thanks,
>>> doug
>>> 
>>> On 10/24/14, 3:47 PM, "Brandon Logan" 
>>>wrote:
>>> 
>>> >With the recent talk about advanced services spinning out of Neutron,
>>> >and the fact most of the LBaaS community has wanted LBaaS to spin out
>>>of
>>> >Neutron, I wanted to bring up a possibility and gauge interest and
>>> >opinion on this possibility.
>>> >
>>> >Octavia is going to (and has) an API.  The current thinking is that an
>>> >Octavia driver will be created in Neutron LBaaS that will make a
>>> >requests to the Octavia API.  When LBaaS spins out of Neutron, it will
>>> >need a standalone API.  Octavia's API seems to be a good solution to
>>> >this.  It will support vendor drivers much like the current Neutron
>>> >LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
>>> >exact duplicate.  Octavia will be growing more mature in stackforge at
>>>a
>>> >higher velocity than an Openstack project, so I expect by the time
>>>Kilo
>>> >comes around it's API will be very mature.
>>> >
>>> >Octavia's API doesn't have to be called Octavia either.  It can be
>>> >

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-26 Thread Sumit Naiksatam
Several people have been requesting that we resume the Advanced
Services' meetings [1] to discuss some of the topics being mentioned
in this thread. Perhaps it might help people to have a focussed
discussion on the topic of "advanced services' spin-out" prior to the
design summit session [2] in Paris. So I propose that we resume our
weekly IRC meetings starting this Wednesday (Oct 29th), 17.30 UTC on
#openstack-meeting-3.

Thanks,
~Sumit.

[1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
[2] 
http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VE3Ukot4r4y

On Sun, Oct 26, 2014 at 7:55 PM, Sridar Kandaswamy (skandasw)
 wrote:
> Hi Doug:
>
> On 10/26/14, 6:01 PM, "Doug Wiegley"  wrote:
>
>>Hi Brandon,
>>
>>> 4. I brought this up now so that we can decide whether we want to
>>> discuss it at the advanced services spin out session.  I don't see the
>>> harm in opinions being discussed before the summit, during the summit,
>>> and more thoroughly after the summit.
>>
>>I agree with this sentiment.  I’d just like to pull-up to the decision
>>level, and if we can get some consensus on how we move forward, we can
>>bring a concrete plan to the summit instead of 40 minutes of Kumbaya.  We
>>love each other.  Check.  Things are going to change sometime.  Check.  We
>>might spin-out someday.  Check.  Now, let’s jump to the interesting part.
>>
>>> 3. The main reason a spin out makes sense from Neutron is that the scope
>>> for Neutron is too large for the attention advances services needs from
>>> the Neutron Core.  If all of advanced services spins out, I see that
>>
>>There is merit here, but consider the sorts of things that an advanced
>>services framework should be doing:
>>
>>- plugging into neutron ports, with all manner of topologies
>>- service VM handling
>>- plugging into nova-network
>>- service chaining
>>- applying things like security groups to services
>>
>>… this is all stuff that Octavia is talking about implementing itself in a
>>basically defensive manner, instead of leveraging other work.  And there
>>are specific reasons for that.  But, maybe we can at least take steps to
>>not be incompatible about it.  Or maybe there is a hierarchy of Neutron ->
>>Services -> LB, where we’re still spun out, but not doing it in a way that
>>we have to re-implement the world all the time.  It’s at least worth a
>>conversation or three.
>
> In total agreement and I have heard these sentiments in multiple
> conversations across multiple players.
> It would be really fruitful to have a constructive conversation on this
> across the services, and there are
> enough similar issues to make this worthwhile.
>
> Thanks
>
> Sridar
>
>>
>>Thanks,
>>Doug
>>
>>
>>
>>
>>On 10/26/14, 6:35 PM, "Brandon Logan"  wrote:
>>
>>>Good questions Doug.  My answers are as follows:
>>>
>>>1. Yes
>>>2. Some time after Kilo (same as I don't know when)
>>>3. The main reason a spin out makes sense from Neutron is that the scope
>>>for Neutron is too large for the attention advances services needs from
>>>the Neutron Core.  If all of advanced services spins out, I see that
>>>repeating itself within an advanced services project.  More and more
>>>"advanced services" will get added in and the scope will become too
>>>large.  There would definitely be benefits to it though, but I think we
>>>would end up being right where we are today.
>>>4. I brought this up now so that we can decide whether we want to
>>>discuss it at the advanced services spin out session.  I don't see the
>>>harm in opinions being discussed before the summit, during the summit,
>>>and more thoroughly after the summit.
>>>
>>>Yes the brunt of the time will not be spent on the API, but since it
>>>seemed like an opportunity to kill two birds with one stone, I figured
>>>it warranted a discussion.
>>>
>>>Thanks,
>>>Brandon
>>>
>>>On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
 Hi all,

 Before we get into the details of which API goes where, I’d like to see
us
 answer the questions of:

 1. Are we spinning out?
 2. When?
 3. With or without the rest of advanced services?
 4. Do we want to wait until we (the royal “we” of “the Neutron team”)
have
 had the Paris summit discussions on vendor split-out and adv. services
 spinout before we answer those questions?  (Yes, that question is
leading.)

 To me, the “where does the API live” is an implementation detail, and
not
 where the time will need to be spent.

 For the record, my answers are:

 1. Yes.
 2. I don’t know.
 3. I don’t know; this needs some serious discussion.
 4. Yes.

 Thanks,
 doug

 On 10/24/14, 3:47 PM, "Brandon Logan" 
wrote:

 >With the recent talk about advanced services spinning out of Neutron,
 >and the fact most of the LBaaS community has wanted LBaaS to spin out
of
 >Neutron, I wanted to bring up a possibility and gauge int

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Kyle Mestery
On Mon, Oct 27, 2014 at 12:15 AM, Sumit Naiksatam
 wrote:
> Several people have been requesting that we resume the Advanced
> Services' meetings [1] to discuss some of the topics being mentioned
> in this thread. Perhaps it might help people to have a focussed
> discussion on the topic of "advanced services' spin-out" prior to the
> design summit session [2] in Paris. So I propose that we resume our
> weekly IRC meetings starting this Wednesday (Oct 29th), 17.30 UTC on
> #openstack-meeting-3.
>
Given how important this is to Neutron in general, I would prefer NOT
to see this discussed in the Advanced Services meeting, but rather in
the regular Neutron meeting. These are the types of things which need
broader oversight and involvement. Let's please discuss this in the
regular Neutron meeting, which uses an on-demand meeting format, rather
than in a sub-team meeting.

> Thanks,
> ~Sumit.
>
> [1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
> [2] 
> http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VE3Ukot4r4y
>
> On Sun, Oct 26, 2014 at 7:55 PM, Sridar Kandaswamy (skandasw)
>  wrote:
>> Hi Doug:
>>
>> On 10/26/14, 6:01 PM, "Doug Wiegley"  wrote:
>>
>>>Hi Brandon,
>>>
 4. I brought this up now so that we can decide whether we want to
 discuss it at the advanced services spin out session.  I don't see the
 harm in opinions being discussed before the summit, during the summit,
 and more thoroughly after the summit.
>>>
>>>I agree with this sentiment.  I’d just like to pull-up to the decision
>>>level, and if we can get some consensus on how we move forward, we can
>>>bring a concrete plan to the summit instead of 40 minutes of Kumbaya.  We
>>>love each other.  Check.  Things are going to change sometime.  Check.  We
>>>might spin-out someday.  Check.  Now, let’s jump to the interesting part.
>>>
 3. The main reason a spin out makes sense from Neutron is that the scope
 for Neutron is too large for the attention advances services needs from
 the Neutron Core.  If all of advanced services spins out, I see that
>>>
>>>There is merit here, but consider the sorts of things that an advanced
>>>services framework should be doing:
>>>
>>>- plugging into neutron ports, with all manner of topologies
>>>- service VM handling
>>>- plugging into nova-network
>>>- service chaining
>>>- applying things like security groups to services
>>>
>>>… this is all stuff that Octavia is talking about implementing itself in a
>>>basically defensive manner, instead of leveraging other work.  And there
>>>are specific reasons for that.  But, maybe we can at least take steps to
>>>not be incompatible about it.  Or maybe there is a hierarchy of Neutron ->
>>>Services -> LB, where we’re still spun out, but not doing it in a way that
>>>we have to re-implement the world all the time.  It’s at least worth a
>>>conversation or three.
>>
>> In total agreement and I have heard these sentiments in multiple
>> conversations across multiple players.
>> It would be really fruitful to have a constructive conversation on this
>> across the services, and there are
>> enough similar issues to make this worthwhile.
>>
>> Thanks
>>
>> Sridar
>>
>>>
>>>Thanks,
>>>Doug
>>>
>>>
>>>
>>>
>>>On 10/26/14, 6:35 PM, "Brandon Logan"  wrote:
>>>
Good questions Doug.  My answers are as follows:

1. Yes
2. Some time after Kilo (same as I don't know when)
3. The main reason a spin out makes sense from Neutron is that the scope
for Neutron is too large for the attention advances services needs from
the Neutron Core.  If all of advanced services spins out, I see that
repeating itself within an advanced services project.  More and more
"advanced services" will get added in and the scope will become too
large.  There would definitely be benefits to it though, but I think we
would end up being right where we are today.
4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.

Yes the brunt of the time will not be spent on the API, but since it
seemed like an opportunity to kill two birds with one stone, I figured
it warranted a discussion.

Thanks,
Brandon

On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
> Hi all,
>
> Before we get into the details of which API goes where, I’d like to see
>us
> answer the questions of:
>
> 1. Are we spinning out?
> 2. When?
> 3. With or without the rest of advanced services?
> 4. Do we want to wait until we (the royal “we” of “the Neutron team”)
>have
> had the Paris summit discussions on vendor split-out and adv. services
> spinout before we answer those questions?  (Yes, that question is
>leading.)
>
> To me, the “where does the 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Kyle Mestery
On Sun, Oct 26, 2014 at 8:01 PM, Doug Wiegley  wrote:
> Hi Brandon,
>
>> 4. I brought this up now so that we can decide whether we want to
>> discuss it at the advanced services spin out session.  I don't see the
>> harm in opinions being discussed before the summit, during the summit,
>> and more thoroughly after the summit.
>
> I agree with this sentiment.  I’d just like to pull-up to the decision
> level, and if we can get some consensus on how we move forward, we can
> bring a concrete plan to the summit instead of 40 minutes of Kumbaya.  We
> love each other.  Check.  Things are going to change sometime.  Check.  We
> might spin-out someday.  Check.  Now, let’s jump to the interesting part.
>
I think we all know we want to spin these out; as Doug says, we just
need a plan for how we make that happen. I'm in agreement with Doug's
sentiment above.

>> 3. The main reason a spin out makes sense from Neutron is that the scope
>> for Neutron is too large for the attention advances services needs from
>> the Neutron Core.  If all of advanced services spins out, I see that
>
> There is merit here, but consider the sorts of things that an advanced
> services framework should be doing:
>
> - plugging into neutron ports, with all manner of topologies
> - service VM handling
> - plugging into nova-network
> - service chaining
> - applying things like security groups to services
>
> … this is all stuff that Octavia is talking about implementing itself in a
> basically defensive manner, instead of leveraging other work.  And there
> are specific reasons for that.  But, maybe we can at least take steps to
> not be incompatible about it.  Or maybe there is a hierarchy of Neutron ->
> Services -> LB, where we’re still spun out, but not doing it in a way that
> we have to re-implement the world all the time.  It’s at least worth a
> conversation or three.
>
Doug, can you document this on the etherpad for the "services spinout"
[1]? I've added some brief text at the top on what the objective for
this session is, but documenting more along the lines of what you have
here would be good.

Thanks,
Kyle

[1] https://etherpad.openstack.org/p/neutron-services

> Thanks,
> Doug
>
>
>
>
> On 10/26/14, 6:35 PM, "Brandon Logan"  wrote:
>
>>Good questions Doug.  My answers are as follows:
>>
>>1. Yes
>>2. Some time after Kilo (same as I don't know when)
>>3. The main reason a spin out makes sense from Neutron is that the scope
>>for Neutron is too large for the attention advances services needs from
>>the Neutron Core.  If all of advanced services spins out, I see that
>>repeating itself within an advanced services project.  More and more
>>"advanced services" will get added in and the scope will become too
>>large.  There would definitely be benefits to it though, but I think we
>>would end up being right where we are today.
>>4. I brought this up now so that we can decide whether we want to
>>discuss it at the advanced services spin out session.  I don't see the
>>harm in opinions being discussed before the summit, during the summit,
>>and more thoroughly after the summit.
>>
>>Yes the brunt of the time will not be spent on the API, but since it
>>seemed like an opportunity to kill two birds with one stone, I figured
>>it warranted a discussion.
>>
>>Thanks,
>>Brandon
>>
>>On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
>>> Hi all,
>>>
>>> Before we get into the details of which API goes where, I’d like to see
>>>us
>>> answer the questions of:
>>>
>>> 1. Are we spinning out?
>>> 2. When?
>>> 3. With or without the rest of advanced services?
>>> 4. Do we want to wait until we (the royal “we” of “the Neutron team”)
>>>have
>>> had the Paris summit discussions on vendor split-out and adv. services
>>> spinout before we answer those questions?  (Yes, that question is
>>>leading.)
>>>
>>> To me, the “where does the API live” is an implementation detail, and
>>>not
>>> where the time will need to be spent.
>>>
>>> For the record, my answers are:
>>>
>>> 1. Yes.
>>> 2. I don’t know.
>>> 3. I don’t know; this needs some serious discussion.
>>> 4. Yes.
>>>
>>> Thanks,
>>> doug
>>>
>>> On 10/24/14, 3:47 PM, "Brandon Logan" 
>>>wrote:
>>>
>>> >With the recent talk about advanced services spinning out of Neutron,
>>> >and the fact most of the LBaaS community has wanted LBaaS to spin out
>>>of
>>> >Neutron, I wanted to bring up a possibility and gauge interest and
>>> >opinion on this possibility.
>>> >
>>> >Octavia is going to (and has) an API.  The current thinking is that an
>>> >Octavia driver will be created in Neutron LBaaS that will make a
>>> >requests to the Octavia API.  When LBaaS spins out of Neutron, it will
>>> >need a standalone API.  Octavia's API seems to be a good solution to
>>> >this.  It will support vendor drivers much like the current Neutron
>>> >LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
>>> >exact duplicate.  Octavia will be growing more mature in stackforge at
>>>a
>>> >h

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Mandeep Dhami
Hi Kyle:

Are you scheduling an on-demand meeting, or are you proposing that the
agenda for the next Neutron meeting include this as an on-demand item?

Regards,
Mandeep


On Mon, Oct 27, 2014 at 6:56 AM, Kyle Mestery  wrote:

> On Mon, Oct 27, 2014 at 12:15 AM, Sumit Naiksatam
>  wrote:
> > Several people have been requesting that we resume the Advanced
> > Services' meetings [1] to discuss some of the topics being mentioned
> > in this thread. Perhaps it might help people to have a focussed
> > discussion on the topic of "advanced services' spin-out" prior to the
> > design summit session [2] in Paris. So I propose that we resume our
> > weekly IRC meetings starting this Wednesday (Oct 29th), 17.30 UTC on
> > #openstack-meeting-3.
> >
> Given how important this is to Neutron in general, I would prefer NOT
> to see this discussed in the Advanced Services meeting, but rather in
> the regular Neutron meeting. These are the types of things which need
> broader oversight and involvement. Lets please discuss this in the
> regular Neutron meeting, which is an on-demand meeting format, rather
> than in a sub-team meeting.
>
> > Thanks,
> > ~Sumit.
> >
> > [1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
> > [2]
> http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VE3Ukot4r4y
> >
> > On Sun, Oct 26, 2014 at 7:55 PM, Sridar Kandaswamy (skandasw)
> >  wrote:
> >> Hi Doug:
> >>
> >> On 10/26/14, 6:01 PM, "Doug Wiegley"  wrote:
> >>
> >>>Hi Brandon,
> >>>
>  4. I brought this up now so that we can decide whether we want to
>  discuss it at the advanced services spin out session.  I don't see the
>  harm in opinions being discussed before the summit, during the summit,
>  and more thoroughly after the summit.
> >>>
> >>>I agree with this sentiment.  I’d just like to pull-up to the decision
> >>>level, and if we can get some consensus on how we move forward, we can
> >>>bring a concrete plan to the summit instead of 40 minutes of Kumbaya.
> We
> >>>love each other.  Check.  Things are going to change sometime.  Check.
> We
> >>>might spin-out someday.  Check.  Now, let’s jump to the interesting
> part.
> >>>
>  3. The main reason a spin out makes sense from Neutron is that the
> scope
>  for Neutron is too large for the attention advances services needs
> from
>  the Neutron Core.  If all of advanced services spins out, I see that
> >>>
> >>>There is merit here, but consider the sorts of things that an advanced
> >>>services framework should be doing:
> >>>
> >>>- plugging into neutron ports, with all manner of topologies
> >>>- service VM handling
> >>>- plugging into nova-network
> >>>- service chaining
> >>>- applying things like security groups to services
> >>>
> >>>… this is all stuff that Octavia is talking about implementing itself
> in a
> >>>basically defensive manner, instead of leveraging other work.  And there
> >>>are specific reasons for that.  But, maybe we can at least take steps to
> >>>not be incompatible about it.  Or maybe there is a hierarchy of Neutron
> ->
> >>>Services -> LB, where we’re still spun out, but not doing it in a way
> that
> >>>we have to re-implement the world all the time.  It’s at least worth a
> >>>conversation or three.
> >>
> >> In total agreement and I have heard these sentiments in multiple
> >> conversations across multiple players.
> >> It would be really fruitful to have a constructive conversation on this
> >> across the services, and there are
> >> enough similar issues to make this worthwhile.
> >>
> >> Thanks
> >>
> >> Sridar
> >>
> >>>
> >>>Thanks,
> >>>Doug
> >>>
> >>>
> >>>
> >>>
> >>>On 10/26/14, 6:35 PM, "Brandon Logan" 
> wrote:
> >>>
> Good questions Doug.  My answers are as follows:
> 
> 1. Yes
> 2. Some time after Kilo (same as I don't know when)
> 3. The main reason a spin out makes sense from Neutron is that the
> scope
> for Neutron is too large for the attention advances services needs from
> the Neutron Core.  If all of advanced services spins out, I see that
> repeating itself within an advanced services project.  More and more
> "advanced services" will get added in and the scope will become too
> large.  There would definitely be benefits to it though, but I think we
> would end up being right where we are today.
> 4. I brought this up now so that we can decide whether we want to
> discuss it at the advanced services spin out session.  I don't see the
> harm in opinions being discussed before the summit, during the summit,
> and more thoroughly after the summit.
> 
> Yes the brunt of the time will not be spent on the API, but since it
> seemed like an opportunity to kill two birds with one stone, I figured
> it warranted a discussion.
> 
> Thanks,
> Brandon
> 
> On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
> > Hi all,
> >
> > Before we get into the details of which API goes w

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Kyle Mestery
On Mon, Oct 27, 2014 at 11:48 AM, Mandeep Dhami  wrote:
> Hi Kyle:
>
> Are you scheduling an on-demand meeting, or are you proposing that the
> agenda for next neutron meeting include this as an on-demand item?
>
Per my recent email to the list [1], the weekly rotating Neutron
meeting now uses an on-demand agenda, rather than a rollup of sub-team
status. I'm saying this particular topic (advanced services spinout)
will be discussed in Paris, and it's worth adding it to the weekly
Neutron meeting [2] agenda in the on-demand section. This is a pretty
large topic with many interested parties, thus the attention in the
broader neutron meeting.

Thanks,
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/048328.html
[2] https://wiki.openstack.org/wiki/Network/Meetings

> Regards,
> Mandeep
>
>
> On Mon, Oct 27, 2014 at 6:56 AM, Kyle Mestery  wrote:
>>
>> On Mon, Oct 27, 2014 at 12:15 AM, Sumit Naiksatam
>>  wrote:
>> > Several people have been requesting that we resume the Advanced
>> > Services' meetings [1] to discuss some of the topics being mentioned
>> > in this thread. Perhaps it might help people to have a focussed
>> > discussion on the topic of "advanced services' spin-out" prior to the
>> > design summit session [2] in Paris. So I propose that we resume our
>> > weekly IRC meetings starting this Wednesday (Oct 29th), 17.30 UTC on
>> > #openstack-meeting-3.
>> >
>> Given how important this is to Neutron in general, I would prefer NOT
>> to see this discussed in the Advanced Services meeting, but rather in
>> the regular Neutron meeting. These are the types of things which need
>> broader oversight and involvement. Lets please discuss this in the
>> regular Neutron meeting, which is an on-demand meeting format, rather
>> than in a sub-team meeting.
>>
>> > Thanks,
>> > ~Sumit.
>> >
>> > [1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
>> > [2]
>> > http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VE3Ukot4r4y
>> >
>> > On Sun, Oct 26, 2014 at 7:55 PM, Sridar Kandaswamy (skandasw)
>> >  wrote:
>> >> Hi Doug:
>> >>
>> >> On 10/26/14, 6:01 PM, "Doug Wiegley"  wrote:
>> >>
>> >>>Hi Brandon,
>> >>>
>>  4. I brought this up now so that we can decide whether we want to
>>  discuss it at the advanced services spin out session.  I don't see
>>  the
>>  harm in opinions being discussed before the summit, during the
>>  summit,
>>  and more thoroughly after the summit.
>> >>>
>> >>>I agree with this sentiment.  I’d just like to pull-up to the decision
>> >>>level, and if we can get some consensus on how we move forward, we can
>> >>>bring a concrete plan to the summit instead of 40 minutes of Kumbaya.
>> >>> We
>> >>>love each other.  Check.  Things are going to change sometime.  Check.
>> >>> We
>> >>>might spin-out someday.  Check.  Now, let’s jump to the interesting
>> >>> part.
>> >>>
>>  3. The main reason a spin out makes sense from Neutron is that the
>>  scope
>>  for Neutron is too large for the attention advances services needs
>>  from
>>  the Neutron Core.  If all of advanced services spins out, I see that
>> >>>
>> >>>There is merit here, but consider the sorts of things that an advanced
>> >>>services framework should be doing:
>> >>>
>> >>>- plugging into neutron ports, with all manner of topologies
>> >>>- service VM handling
>> >>>- plugging into nova-network
>> >>>- service chaining
>> >>>- applying things like security groups to services
>> >>>
>> >>>… this is all stuff that Octavia is talking about implementing itself
>> >>> in a
>> >>>basically defensive manner, instead of leveraging other work.  And
>> >>> there
>> >>>are specific reasons for that.  But, maybe we can at least take steps
>> >>> to
>> >>>not be incompatible about it.  Or maybe there is a hierarchy of Neutron
>> >>> ->
>> >>>Services -> LB, where we’re still spun out, but not doing it in a way
>> >>> that
>> >>>we have to re-implement the world all the time.  It’s at least worth a
>> >>>conversation or three.
>> >>
>> >> In total agreement and I have heard these sentiments in multiple
>> >> conversations across multiple players.
>> >> It would be really fruitful to have a constructive conversation on this
>> >> across the services, and there are
>> >> enough similar issues to make this worthwhile.
>> >>
>> >> Thanks
>> >>
>> >> Sridar
>> >>
>> >>>
>> >>>Thanks,
>> >>>Doug
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>On 10/26/14, 6:35 PM, "Brandon Logan" 
>> >>> wrote:
>> >>>
>> Good questions Doug.  My answers are as follows:
>> 
>> 1. Yes
>> 2. Some time after Kilo (same as I don't know when)
>> 3. The main reason a spin out makes sense from Neutron is that the
>>  scope
>> for Neutron is too large for the attention advances services needs
>>  from
>> the Neutron Core.  If all of advanced services spins out, I see that
>> repeating itself within an advanced services project.  More and more
>> "

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Mandeep Dhami
Got it. So we will be discussing this in the 2PM meeting today. Correct?

Regards,
Mandeep

On Mon, Oct 27, 2014 at 10:02 AM, Kyle Mestery  wrote:

> On Mon, Oct 27, 2014 at 11:48 AM, Mandeep Dhami 
> wrote:
> > Hi Kyle:
> >
> > Are you scheduling an on-demand meeting, or are you proposing that the
> > agenda for next neutron meeting include this as an on-demand item?
> >
> Per my email to the list recently [1], the weekly rotating Neutron
> meeting is now an on-demand agenda, rather than a rollup of sub-team
> status. I'm saying this particular topic (advanced services spinout)
> will be discussed in Paris, and it's worth adding it to the weekly
> Neutron meeting [2] agenda in the on-demand section. This is a pretty
> large topic with many interested parties, thus the attention in the
> broader neutron meeting.
>
> Thanks,
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-October/048328.html
> [2] https://wiki.openstack.org/wiki/Network/Meetings
>
> > Regards,
> > Mandeep
> >
> >
> > On Mon, Oct 27, 2014 at 6:56 AM, Kyle Mestery 
> wrote:
> >>
> >> On Mon, Oct 27, 2014 at 12:15 AM, Sumit Naiksatam
> >>  wrote:
> >> > Several people have been requesting that we resume the Advanced
> >> > Services' meetings [1] to discuss some of the topics being mentioned
> >> > in this thread. Perhaps it might help people to have a focussed
> >> > discussion on the topic of "advanced services' spin-out" prior to the
> >> > design summit session [2] in Paris. So I propose that we resume our
> >> > weekly IRC meetings starting this Wednesday (Oct 29th), 17.30 UTC on
> >> > #openstack-meeting-3.
> >> >
> >> Given how important this is to Neutron in general, I would prefer NOT
> >> to see this discussed in the Advanced Services meeting, but rather in
> >> the regular Neutron meeting. These are the types of things which need
> >> broader oversight and involvement. Lets please discuss this in the
> >> regular Neutron meeting, which is an on-demand meeting format, rather
> >> than in a sub-team meeting.
> >>
> >> > Thanks,
> >> > ~Sumit.
> >> >
> >> > [1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
> >> > [2]
> >> >
> http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VE3Ukot4r4y
> >> >
> >> > On Sun, Oct 26, 2014 at 7:55 PM, Sridar Kandaswamy (skandasw)
> >> >  wrote:
> >> >> Hi Doug:
> >> >>
> >> >> On 10/26/14, 6:01 PM, "Doug Wiegley"  wrote:
> >> >>
> >> >>>Hi Brandon,
> >> >>>
> >>  4. I brought this up now so that we can decide whether we want to
> >>  discuss it at the advanced services spin out session.  I don't see
> >>  the
> >>  harm in opinions being discussed before the summit, during the
> >>  summit,
> >>  and more thoroughly after the summit.
> >> >>>
> >> >>>I agree with this sentiment.  I’d just like to pull-up to the
> decision
> >> >>>level, and if we can get some consensus on how we move forward, we
> can
> >> >>>bring a concrete plan to the summit instead of 40 minutes of Kumbaya.
> >> >>> We
> >> >>>love each other.  Check.  Things are going to change sometime.
> Check.
> >> >>> We
> >> >>>might spin-out someday.  Check.  Now, let’s jump to the interesting
> >> >>> part.
> >> >>>
> >>  3. The main reason a spin out makes sense from Neutron is that the
> >>  scope
> >>  for Neutron is too large for the attention advances services needs
> >>  from
> >>  the Neutron Core.  If all of advanced services spins out, I see
> that
> >> >>>
> >> >>>There is merit here, but consider the sorts of things that an
> advanced
> >> >>>services framework should be doing:
> >> >>>
> >> >>>- plugging into neutron ports, with all manner of topologies
> >> >>>- service VM handling
> >> >>>- plugging into nova-network
> >> >>>- service chaining
> >> >>>- applying things like security groups to services
> >> >>>
> >> >>>… this is all stuff that Octavia is talking about implementing itself
> >> >>> in a
> >> >>>basically defensive manner, instead of leveraging other work.  And
> >> >>> there
> >> >>>are specific reasons for that.  But, maybe we can at least take steps
> >> >>> to
> >> >>>not be incompatible about it.  Or maybe there is a hierarchy of
> Neutron
> >> >>> ->
> >> >>>Services -> LB, where we’re still spun out, but not doing it in a way
> >> >>> that
> >> >>>we have to re-implement the world all the time.  It’s at least worth
> a
> >> >>>conversation or three.
> >> >>
> >> >> In total agreement and I have heard these sentiments in multiple
> >> >> conversations across multiple players.
> >> >> It would be really fruitful to have a constructive conversation on
> this
> >> >> across the services, and there are
> >> >> enough similar issues to make this worthwhile.
> >> >>
> >> >> Thanks
> >> >>
> >> >> Sridar
> >> >>
> >> >>>
> >> >>>Thanks,
> >> >>>Doug
> >> >>>
> >> >>>
> >> >>>
> >> >>>
> >> >>>On 10/26/14, 6:35 PM, "Brandon Logan" 
> >> >>> wrote:
> >> >>>
> >> Good questions Doug.  My answers are as follows:
> 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Jay Pipes
Sorry for top-posting, but where can the API working group see the 
proposed Octavia API specification or documentation? I'd love it if the 
API WG could be involved in reviewing the public REST API.


Best,
-jay

On 10/27/2014 10:01 AM, Kyle Mestery wrote:

On Sun, Oct 26, 2014 at 8:01 PM, Doug Wiegley  wrote:

Hi Brandon,


4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.


I agree with this sentiment.  I’d just like to pull-up to the decision
level, and if we can get some consensus on how we move forward, we can
bring a concrete plan to the summit instead of 40 minutes of Kumbaya.  We
love each other.  Check.  Things are going to change sometime.  Check.  We
might spin-out someday.  Check.  Now, let’s jump to the interesting part.


I think we all know we want to spin these out, as Doug says we just
need to have a plan around how we make that happen. I'm in agreement
with Doug's sentiment above.


3. The main reason a spin out makes sense from Neutron is that the scope
for Neutron is too large for the attention advances services needs from
the Neutron Core.  If all of advanced services spins out, I see that


There is merit here, but consider the sorts of things that an advanced
services framework should be doing:

- plugging into neutron ports, with all manner of topologies
- service VM handling
- plugging into nova-network
- service chaining
- applying things like security groups to services

… this is all stuff that Octavia is talking about implementing itself in a
basically defensive manner, instead of leveraging other work.  And there
are specific reasons for that.  But, maybe we can at least take steps to
not be incompatible about it.  Or maybe there is a hierarchy of Neutron ->
Services -> LB, where we’re still spun out, but not doing it in a way that
we have to re-implement the world all the time.  It’s at least worth a
conversation or three.


Doug, can you document this on the etherpad for the "services spinout"
[1]? I've added some brief text at the top on what the objective for
this session is, but documenting more along the lines of what you have
here would be good.

Thanks,
Kyle

[1] https://etherpad.openstack.org/p/neutron-services


Thanks,
Doug




On 10/26/14, 6:35 PM, "Brandon Logan"  wrote:


Good questions Doug.  My answers are as follows:

1. Yes
2. Some time after Kilo (same as I don't know when)
3. The main reason a spin out makes sense from Neutron is that the scope
for Neutron is too large for the attention advanced services need from
the Neutron Core.  If all of advanced services spins out, I see that
repeating itself within an advanced services project.  More and more
"advanced services" will get added in and the scope will become too
large.  There would definitely be benefits to it though, but I think we
would end up being right where we are today.
4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.

Yes the brunt of the time will not be spent on the API, but since it
seemed like an opportunity to kill two birds with one stone, I figured
it warranted a discussion.

Thanks,
Brandon

On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:

Hi all,

Before we get into the details of which API goes where, I’d like to see
us
answer the questions of:

1. Are we spinning out?
2. When?
3. With or without the rest of advanced services?
4. Do we want to wait until we (the royal “we” of “the Neutron team”)
have
had the Paris summit discussions on vendor split-out and adv. services
spinout before we answer those questions?  (Yes, that question is
leading.)

To me, the “where does the API live” is an implementation detail, and
not
where the time will need to be spent.

For the record, my answers are:

1. Yes.
2. I don’t know.
3. I don’t know; this needs some serious discussion.
4. Yes.

Thanks,
doug

On 10/24/14, 3:47 PM, "Brandon Logan" 
wrote:


With the recent talk about advanced services spinning out of Neutron,
and the fact most of the LBaaS community has wanted LBaaS to spin out

of

Neutron, I wanted to bring up a possibility and gauge interest and
opinion on this possibility.

Octavia is going to (and has) an API.  The current thinking is that an
Octavia driver will be created in Neutron LBaaS that will make
requests to the Octavia API.  When LBaaS spins out of Neutron, it will
need a standalone API.  Octavia's API seems to be a good solution to
this.  It will support vendor drivers much like the current Neutron
LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
exact duplicate.  Octavia will be growing more mature in stackforge at

a

higher velocity than an Openstack projec

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Doug Wiegley
Hi Jay,

Let’s add that as an agenda item at our Weekly IRC meeting.  Can you make
this timeslot?

https://wiki.openstack.org/wiki/Octavia#Meetings


Thanks,
Doug


On 10/27/14, 11:27 AM, "Jay Pipes"  wrote:

>Sorry for top-posting, but where can the API working group see the
>proposed Octavia API specification or documentation? I'd love it if the
>API WG could be involved in reviewing the public REST API.
>
>Best,
>-jay
>
>On 10/27/2014 10:01 AM, Kyle Mestery wrote:
>> On Sun, Oct 26, 2014 at 8:01 PM, Doug Wiegley 
>>wrote:
>>> Hi Brandon,
>>>
 4. I brought this up now so that we can decide whether we want to
 discuss it at the advanced services spin out session.  I don't see the
 harm in opinions being discussed before the summit, during the summit,
 and more thoroughly after the summit.
>>>
>>> I agree with this sentiment.  I’d just like to pull-up to the decision
>>> level, and if we can get some consensus on how we move forward, we can
>>> bring a concrete plan to the summit instead of 40 minutes of Kumbaya.
>>>We
>>> love each other.  Check.  Things are going to change sometime.  Check.
>>> We
>>> might spin-out someday.  Check.  Now, let’s jump to the interesting
>>>part.
>>>
>> I think we all know we want to spin these out, as Doug says we just
>> need to have a plan around how we make that happen. I'm in agreement
>> with Doug's sentiment above.
>>
 3. The main reason a spin out makes sense from Neutron is that the
scope
 for Neutron is too large for the attention advanced services need
from
 the Neutron Core.  If all of advanced services spins out, I see that
>>>
>>> There is merit here, but consider the sorts of things that an advanced
>>> services framework should be doing:
>>>
>>> - plugging into neutron ports, with all manner of topologies
>>> - service VM handling
>>> - plugging into nova-network
>>> - service chaining
>>> - applying things like security groups to services
>>>
>>> … this is all stuff that Octavia is talking about implementing itself
>>>in a
>>> basically defensive manner, instead of leveraging other work.  And
>>>there
>>> are specific reasons for that.  But, maybe we can at least take steps
>>>to
>>> not be incompatible about it.  Or maybe there is a hierarchy of
>>>Neutron ->
>>> Services -> LB, where we’re still spun out, but not doing it in a way
>>>that
>>> we have to re-implement the world all the time.  It’s at least worth a
>>> conversation or three.
>>>
>> Doug, can you document this on the etherpad for the "services spinout"
>> [1]? I've added some brief text at the top on what the objective for
>> this session is, but documenting more along the lines of what you have
>> here would be good.
>>
>> Thanks,
>> Kyle
>>
>> [1] https://etherpad.openstack.org/p/neutron-services
>>
>>> Thanks,
>>> Doug
>>>
>>>
>>>
>>>
>>> On 10/26/14, 6:35 PM, "Brandon Logan" 
>>>wrote:
>>>
 Good questions Doug.  My answers are as follows:

 1. Yes
 2. Some time after Kilo (same as I don't know when)
 3. The main reason a spin out makes sense from Neutron is that the
scope
 for Neutron is too large for the attention advanced services need
from
 the Neutron Core.  If all of advanced services spins out, I see that
 repeating itself within an advanced services project.  More and more
 "advanced services" will get added in and the scope will become too
 large.  There would definitely be benefits to it though, but I think
we
 would end up being right where we are today.
 4. I brought this up now so that we can decide whether we want to
 discuss it at the advanced services spin out session.  I don't see the
 harm in opinions being discussed before the summit, during the summit,
 and more thoroughly after the summit.

 Yes the brunt of the time will not be spent on the API, but since it
 seemed like an opportunity to kill two birds with one stone, I figured
 it warranted a discussion.

 Thanks,
 Brandon

 On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
> Hi all,
>
> Before we get into the details of which API goes where, I’d like to
>see
> us
> answer the questions of:
>
> 1. Are we spinning out?
> 2. When?
> 3. With or without the rest of advanced services?
> 4. Do we want to wait until we (the royal “we” of “the Neutron team”)
> have
> had the Paris summit discussions on vendor split-out and adv.
>services
> spinout before we answer those questions?  (Yes, that question is
> leading.)
>
> To me, the “where does the API live” is an implementation detail, and
> not
> where the time will need to be spent.
>
> For the record, my answers are:
>
> 1. Yes.
> 2. I don’t know.
> 3. I don’t know; this needs some serious discussion.
> 4. Yes.
>
> Thanks,
> doug
>
> On 10/24/14, 3:47 PM, "Brandon Logan" 
> wrote:
>
>> 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Jay Pipes

Yup, can do! :)

-jay

On 10/27/2014 01:55 PM, Doug Wiegley wrote:

Hi Jay,

Let’s add that as an agenda item at our Weekly IRC meeting.  Can you make
this timeslot?

https://wiki.openstack.org/wiki/Octavia#Meetings


Thanks,
Doug


On 10/27/14, 11:27 AM, "Jay Pipes"  wrote:


Sorry for top-posting, but where can the API working group see the
proposed Octavia API specification or documentation? I'd love it if the
API WG could be involved in reviewing the public REST API.

Best,
-jay

On 10/27/2014 10:01 AM, Kyle Mestery wrote:

On Sun, Oct 26, 2014 at 8:01 PM, Doug Wiegley 
wrote:

Hi Brandon,


4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.


I agree with this sentiment.  I’d just like to pull-up to the decision
level, and if we can get some consensus on how we move forward, we can
bring a concrete plan to the summit instead of 40 minutes of Kumbaya.
We
love each other.  Check.  Things are going to change sometime.  Check.
We
might spin-out someday.  Check.  Now, let’s jump to the interesting
part.


I think we all know we want to spin these out, as Doug says we just
need to have a plan around how we make that happen. I'm in agreement
with Doug's sentiment above.


3. The main reason a spin out makes sense from Neutron is that the
scope
for Neutron is too large for the attention advanced services need
from
the Neutron Core.  If all of advanced services spins out, I see that


There is merit here, but consider the sorts of things that an advanced
services framework should be doing:

- plugging into neutron ports, with all manner of topologies
- service VM handling
- plugging into nova-network
- service chaining
- applying things like security groups to services

… this is all stuff that Octavia is talking about implementing itself
in a
basically defensive manner, instead of leveraging other work.  And
there
are specific reasons for that.  But, maybe we can at least take steps
to
not be incompatible about it.  Or maybe there is a hierarchy of
Neutron ->
Services -> LB, where we’re still spun out, but not doing it in a way
that
we have to re-implement the world all the time.  It’s at least worth a
conversation or three.


Doug, can you document this on the etherpad for the "services spinout"
[1]? I've added some brief text at the top on what the objective for
this session is, but documenting more along the lines of what you have
here would be good.

Thanks,
Kyle

[1] https://etherpad.openstack.org/p/neutron-services


Thanks,
Doug




On 10/26/14, 6:35 PM, "Brandon Logan" 
wrote:


Good questions Doug.  My answers are as follows:

1. Yes
2. Some time after Kilo (same as I don't know when)
3. The main reason a spin out makes sense from Neutron is that the
scope
for Neutron is too large for the attention advanced services need
from
the Neutron Core.  If all of advanced services spins out, I see that
repeating itself within an advanced services project.  More and more
"advanced services" will get added in and the scope will become too
large.  There would definitely be benefits to it though, but I think
we
would end up being right where we are today.
4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.

Yes the brunt of the time will not be spent on the API, but since it
seemed like an opportunity to kill two birds with one stone, I figured
it warranted a discussion.

Thanks,
Brandon

On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:

Hi all,

Before we get into the details of which API goes where, I’d like to
see
us
answer the questions of:

1. Are we spinning out?
2. When?
3. With or without the rest of advanced services?
4. Do we want to wait until we (the royal “we” of “the Neutron team”)
have
had the Paris summit discussions on vendor split-out and adv.
services
spinout before we answer those questions?  (Yes, that question is
leading.)

To me, the “where does the API live” is an implementation detail, and
not
where the time will need to be spent.

For the record, my answers are:

1. Yes.
2. I don’t know.
3. I don’t know; this needs some serious discussion.
4. Yes.

Thanks,
doug

On 10/24/14, 3:47 PM, "Brandon Logan" 
wrote:


With the recent talk about advanced services spinning out of
Neutron,
and the fact most of the LBaaS community has wanted LBaaS to spin
out

of

Neutron, I wanted to bring up a possibility and gauge interest and
opinion on this possibility.

Octavia is going to (and has) an API.  The current thinking is that
an
Octavia driver will be created in Neutron LBaaS that will make
requests to the Octavia API.  When LBaaS spins out of Neutron, it
will
need a standalone API.  Octavia's API seems to b

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Brandon Logan
Hi Jay,
Just so you have some information on the API before the meeting, here is
the spec for it:

https://review.openstack.org/#/c/122338/

I'm sure there are a lot of details that might be missing, but it should
give you a decent idea.  Sorry for the markup/markdown being dumb if you
try to build with Sphinx.  Probably easier to just read the raw .rst
file.

Thanks,
Brandon

On Mon, 2014-10-27 at 14:05 -0400, Jay Pipes wrote:
> Yup, can do! :)
> 
> -jay
> 
> On 10/27/2014 01:55 PM, Doug Wiegley wrote:
> > Hi Jay,
> >
> > Let’s add that as an agenda item at our Weekly IRC meeting.  Can you make
> > this timeslot?
> >
> > https://wiki.openstack.org/wiki/Octavia#Meetings
> >
> >
> > Thanks,
> > Doug
> >
> >
> > On 10/27/14, 11:27 AM, "Jay Pipes"  wrote:
> >
> >> Sorry for top-posting, but where can the API working group see the
> >> proposed Octavia API specification or documentation? I'd love it if the
> >> API WG could be involved in reviewing the public REST API.
> >>
> >> Best,
> >> -jay
> >>
> >> On 10/27/2014 10:01 AM, Kyle Mestery wrote:
> >>> On Sun, Oct 26, 2014 at 8:01 PM, Doug Wiegley 
> >>> wrote:
>  Hi Brandon,
> 
> > 4. I brought this up now so that we can decide whether we want to
> > discuss it at the advanced services spin out session.  I don't see the
> > harm in opinions being discussed before the summit, during the summit,
> > and more thoroughly after the summit.
> 
>  I agree with this sentiment.  I’d just like to pull-up to the decision
>  level, and if we can get some consensus on how we move forward, we can
>  bring a concrete plan to the summit instead of 40 minutes of Kumbaya.
>  We
>  love each other.  Check.  Things are going to change sometime.  Check.
>  We
>  might spin-out someday.  Check.  Now, let’s jump to the interesting
>  part.
> 
> >>> I think we all know we want to spin these out, as Doug says we just
> >>> need to have a plan around how we make that happen. I'm in agreement
> >>> with Doug's sentiment above.
> >>>
> > 3. The main reason a spin out makes sense from Neutron is that the
> > scope
> > for Neutron is too large for the attention advanced services need
> > from
> > the Neutron Core.  If all of advanced services spins out, I see that
> 
>  There is merit here, but consider the sorts of things that an advanced
>  services framework should be doing:
> 
>  - plugging into neutron ports, with all manner of topologies
>  - service VM handling
>  - plugging into nova-network
>  - service chaining
>  - applying things like security groups to services
> 
>  … this is all stuff that Octavia is talking about implementing itself
>  in a
>  basically defensive manner, instead of leveraging other work.  And
>  there
>  are specific reasons for that.  But, maybe we can at least take steps
>  to
>  not be incompatible about it.  Or maybe there is a hierarchy of
>  Neutron ->
>  Services -> LB, where we’re still spun out, but not doing it in a way
>  that
>  we have to re-implement the world all the time.  It’s at least worth a
>  conversation or three.
> 
> >>> Doug, can you document this on the etherpad for the "services spinout"
> >>> [1]? I've added some brief text at the top on what the objective for
> >>> this session is, but documenting more along the lines of what you have
> >>> here would be good.
> >>>
> >>> Thanks,
> >>> Kyle
> >>>
> >>> [1] https://etherpad.openstack.org/p/neutron-services
> >>>
>  Thanks,
>  Doug
> 
> 
> 
> 
>  On 10/26/14, 6:35 PM, "Brandon Logan" 
>  wrote:
> 
> > Good questions Doug.  My answers are as follows:
> >
> > 1. Yes
> > 2. Some time after Kilo (same as I don't know when)
> > 3. The main reason a spin out makes sense from Neutron is that the
> > scope
> > for Neutron is too large for the attention advanced services need
> > from
> > the Neutron Core.  If all of advanced services spins out, I see that
> > repeating itself within an advanced services project.  More and more
> > "advanced services" will get added in and the scope will become too
> > large.  There would definitely be benefits to it though, but I think
> > we
> > would end up being right where we are today.
> > 4. I brought this up now so that we can decide whether we want to
> > discuss it at the advanced services spin out session.  I don't see the
> > harm in opinions being discussed before the summit, during the summit,
> > and more thoroughly after the summit.
> >
> > Yes the brunt of the time will not be spent on the API, but since it
> > seemed like an opportunity to kill two birds with one stone, I figured
> > it warranted a discussion.
> >
> > Thanks,
> > Brandon
> >
> > On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
> >> Hi all,
> >>

Re: [openstack-dev] [neutron-lbaas][octavia]Octavia request poll interval not respected

2018-02-01 Thread Michael Johnson
Hi Mihaela,

The polling logic that the neutron-lbaas octavia driver uses to update
the neutron database is as follows:

Once a Create/Update/Delete action is executed against a load balancer
using the Octavia driver, a polling thread is created.
On every request_poll_interval the thread queries the Octavia v1 API
to check the status of the modified object.
It will save the updated state in the neutron database and exit if the
object's provisioning status becomes one of "ACTIVE", "DELETED", or
"ERROR".
It will repeat this polling until one of those provisioning statuses
is reached, or the request_poll_timeout is exceeded.
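A minimal sketch of that control flow (not the actual driver code; the
octavia_client and update_neutron_db helpers here are hypothetical
stand-ins for the real plumbing):

    import time

    FINAL_STATUSES = ('ACTIVE', 'DELETED', 'ERROR')

    def poll_until_final(octavia_client, update_neutron_db, object_id,
                         poll_interval=3, poll_timeout=100):
        # Poll the Octavia v1 API until the object reaches a final
        # provisioning status or request_poll_timeout is exceeded.
        deadline = time.time() + poll_timeout
        while time.time() < deadline:
            time.sleep(poll_interval)
            status = octavia_client.get(object_id)['provisioning_status']
            if status in FINAL_STATUSES:
                update_neutron_db(object_id, status)  # save final state
                return status
        raise Exception('request_poll_timeout exceeded for %s' % object_id)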

My suspicion is that the GET requests you are seeing for those objects are
coming from another source.
You can test this by running neutron-lbaas in debug mode; it will then
log a debug message for every polling interval.
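Both knobs live in the [octavia] section of the neutron server
configuration; for example (values illustrative, option names taken from
the driver module linked below):

    [octavia]
    base_url = http://127.0.0.1:9876
    request_poll_interval = 3
    request_poll_timeout = 100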

The code for this thread is located here:
https://github.com/openstack/neutron-lbaas/blob/stable/ocata/neutron_lbaas/drivers/octavia/driver.py#L66

Michael

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-07 Thread Michael Johnson
Ocata LBaaS retrospective and next steps recap
--

This session lightly touched on the work in the Newton cycle, but
primarily focused on planning for the Ocata release and the LBaaS spin
out of neutron and merge into the octavia project [1].  Notes were
captured on the etherpad [2].

The focus of work for Ocata in neutron-lbaas and octavia will be on
the spin out/merge and not new features.

Work has started on merging neutron-lbaas into the octavia project
with API sorting/pagination, quota support, keystone integration,
neutron-lbaas driver shim, and documentation updates.  Work is still
needed for policy support, the API shim to handle capability gaps
(example: stats are by listener in octavia, but by load balancer in
neutron-lbaas), neutron api proxy, a database migration script from
the neutron database to the octavia database for existing non-octavia
load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
the octavia API server.
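As a sketch of the kind of shimming the stats gap needs (illustrative
only; the URL paths and the octavia_get helper are assumptions, not the
actual shim code), the per-listener Octavia stats could be summed into
the per-load-balancer shape neutron-lbaas expects:

    def lb_stats_shim(octavia_get, lb_id):
        # neutron-lbaas reports stats per load balancer while Octavia
        # reports them per listener, so aggregate across all listeners.
        totals = {'bytes_in': 0, 'bytes_out': 0,
                  'active_connections': 0, 'total_connections': 0}
        for listener in octavia_get('/loadbalancers/%s/listeners' % lb_id):
            stats = octavia_get('/loadbalancers/%s/listeners/%s/stats'
                                % (lb_id, listener['id']))
            for key in totals:
                totals[key] += stats.get(key, 0)
        return {'stats': totals}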

The room agreed that since we will have a shim/proxy in neutron for
some time, updating the OpenStack client can be deferred to a future
cycle.

There is a lot of concern about Ocata being a short cycle and the
amount of work to be done.  There is hope that additional resources
will help out with this task to allow us to complete the spin
out/merge for Ocata.

We discussed the current state of the active/active topology patches
and agreed that it is unlikely this will merge in Ocata.  There are a
lot of open comments and work to do on the patches.  It appears that
these patches may have been created against an old release and require
significant updating.

Finally there was a question about when octavia would implement
metadata tags.  When we dug into the need for the tags we found that
what was really wanted was a full implementation of the flavors
framework [3] [4].  Some vendors expressed interest in finishing the
flavors framework for Octavia.

Thank you to everyone that participated in our design session and etherpad.

Michael

[1] 
https://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
[2] https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
[3] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/neutron-flavor-framework-templates.html
[4] 
https://specs.openstack.org/openstack/neutron-specs/specs/liberty/neutron-flavor-framework.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-17 Thread Samuel Bercovici
+1
Subnet should be mandatory

The only downside is that this makes supporting load balancing of servers which 
are not running in the cloud more challenging.
But I do not see this as a huge user story (an LB in the cloud load balancing IPs 
outside the cloud)

-Sam.

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Saturday, January 16, 2016 6:56 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on 
member create?

I filed a bug [1] a while ago that subnet_id should be an optional parameter 
for member creation.  Currently it is required.  Review [2] makes it 
optional.

The original thinking was that if the load balancer is ever connected to that 
same subnet, be it by another member on that subnet or the vip on that subnet, 
then the user does not need to specify the subnet for new member if that new 
member is on one of those subnets.

At the midcycle we discussed it and we had an informal agreement that it 
required too many assumptions on the part of the end user, neutron lbaas, and 
driver.

If anyone wants to voice their opinion on this matter, do so on the bug report, 
review, or in response to this thread.  Otherwise, it'll probably be abandoned 
and not done at some point.

Thanks,
Brandon

[1] https://bugs.launchpad.net/neutron/+bug/1426248
[2] https://review.openstack.org/#/c/267935/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-17 Thread Samuel Bercovici
Btw.

I am still in favor of associating the subnets with the LB and then not specifying 
them per node at all.

-Sam.


-Original Message-
From: Samuel Bercovici [mailto:samu...@radware.com] 
Sent: Sunday, January 17, 2016 10:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
optional on member create?

+1
Subnet should be mandatory

The only downside is that this makes supporting load balancing of servers which 
are not running in the cloud more challenging.
But I do not see this as a huge user story (an LB in the cloud load balancing IPs 
outside the cloud)

-Sam.

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Saturday, January 16, 2016 6:56 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on 
member create?

I filed a bug [1] a while ago that subnet_id should be an optional parameter 
for member creation.  Currently it is required.  Review [2] makes it 
optional.

The original thinking was that if the load balancer is ever connected to that 
same subnet, be it by another member on that subnet or the vip on that subnet, 
then the user does not need to specify the subnet for new member if that new 
member is on one of those subnets.

At the midcycle we discussed it and we had an informal agreement that it 
required too many assumptions on the part of the end user, neutron lbaas, and 
driver.

If anyone wants to voice their opinion on this matter, do so on the bug report, 
review, or in response to this thread.  Otherwise, it'll probably be abandoned 
and not done at some point.

Thanks,
Brandon

[1] https://bugs.launchpad.net/neutron/+bug/1426248
[2] https://review.openstack.org/#/c/267935/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-18 Thread Jain, Vivek
If the member port (IP address) is allocated by neutron, then why do we need to 
specify it explicitly? It can be derived by the LBaaS driver implicitly.

Thanks,
Vivek






On 1/17/16, 1:05 AM, "Samuel Bercovici"  wrote:

>Btw.
>
>I am still in favor on associating the subnets to the LB and then not specify 
>them per node at all.
>
>-Sam.
>
>
>-Original Message-
>From: Samuel Bercovici [mailto:samu...@radware.com] 
>Sent: Sunday, January 17, 2016 10:14 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
>optional on member create?
>
>+1
>Subnet should be mandatory
>
>The only thing this makes supporting load balancing servers which are not 
>running in the cloud more challenging to support.
>But I do not see this as a huge user story (lb in cloud load balancing IPs 
>outside the cloud)
>
>-Sam.
>
>-Original Message-
>From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
>Sent: Saturday, January 16, 2016 6:56 AM
>To: openstack-dev@lists.openstack.org
>Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional 
>on member create?
>
>I filed a bug [1] a while ago that subnet_id should be an optional parameter 
>for member creation.  Currently it is required.  Review [2] makes it 
>optional.
>
>The original thinking was that if the load balancer is ever connected to that 
>same subnet, be it by another member on that subnet or the vip on that subnet, 
>then the user does not need to specify the subnet for new member if that new 
>member is on one of those subnets.
>
>At the midcycle we discussed it and we had an informal agreement that it 
>required too many assumptions on the part of the end user, neutron lbaas, and 
>driver.
>
>If anyone wants to voice their opinion on this matter, do so on the bug 
>report, review, or in response to this thread.  Otherwise, it'll probably be 
>abandoned and not done at some point.
>
>Thanks,
>Brandon
>
>[1] https://bugs.launchpad.net/neutron/+bug/1426248
>[2] https://review.openstack.org/#/c/267935/
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-18 Thread Stephen Balukoff
Vivek--

"Member" in this case refers to an IP address that (probably) lives on a
tenant back-end network. We can't specify just the IP address when talking
to such an IP since tenant subnets may use overlapping IP ranges (i.e. in
this case, the subnet is required). In the case of the namespace driver and
Octavia, we use the subnet parameter for all members to determine which
back-end networks the load balancing software needs a port on.

I think the original use case for making subnet optional was the idea that
sometimes a tenant would like to add a "member" IP that is not part of
their tenant networks at all--  this is more than likely an IP address that
lives outside the local cloud. The assumption, then, would be that this IP
address should be reachable through standard routing from wherever the load
balancer happens to live on the network. That is to say, the load balancer
will try to get to such an IP address via its default gateway, unless it
has a more specific route.

As far as I'm aware, this use case is still valid and being asked for by
tenants. Therefore, I'm in favor of making member subnet optional.
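For illustration, the two cases would then look something like this with
the LBaaS v2 CLI (pool name and addresses made up; the second form is the
proposed behavior, not what the API accepts today):

    # member on a tenant network: subnet specified explicitly
    neutron lbaas-member-create --subnet private-subnet \
        --address 10.0.0.10 --protocol-port 80 web-pool

    # external member: no subnet, reached via the load balancer's
    # default routing
    neutron lbaas-member-create --address 203.0.113.25 \
        --protocol-port 80 web-pool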

Stephen

On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek  wrote:

> If member port (IP address) is allocated by neutron, then why do we need
> to specify it explicitly? It can be derived by LBaaS driver implicitly.
>
> Thanks,
> Vivek
>
>
>
>
>
>
> On 1/17/16, 1:05 AM, "Samuel Bercovici"  wrote:
>
> >Btw.
> >
> >I am still in favor on associating the subnets to the LB and then not
> specify them per node at all.
> >
> >-Sam.
> >
> >
> >-Original Message-
> >From: Samuel Bercovici [mailto:samu...@radware.com]
> >Sent: Sunday, January 17, 2016 10:14 AM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
> optional on member create?
> >
> >+1
> >Subnet should be mandatory
> >
> >The only thing this makes supporting load balancing servers which are not
> running in the cloud more challenging to support.
> >But I do not see this as a huge user story (lb in cloud load balancing
> IPs outside the cloud)
> >
> >-Sam.
> >
> >-Original Message-
> >From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
> >Sent: Saturday, January 16, 2016 6:56 AM
> >To: openstack-dev@lists.openstack.org
> >Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
> optional on member create?
> >
> >I filed a bug [1] a while ago that subnet_id should be an optional
> parameter for member creation.  Currently it is required.  Review [2]
> makes it optional.
> >
> >The original thinking was that if the load balancer is ever connected to
> that same subnet, be it by another member on that subnet or the vip on that
> subnet, then the user does not need to specify the subnet for new member if
> that new member is on one of those subnets.
> >
> >At the midcycle we discussed it and we had an informal agreement that it
> required too many assumptions on the part of the end user, neutron lbaas,
> and driver.
> >
> >If anyone wants to voice their opinion on this matter, do so on the bug
> report, review, or in response to this thread.  Otherwise, it'll probably
> be abandoned and not done at some point.
> >
> >Thanks,
> >Brandon
> >
> >[1] https://bugs.launchpad.net/neutron/+bug/1426248
> >[2] https://review.openstack.org/#/c/267935/
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Stephen Balukoff
Principal Technologist
Blue Box, An IBM Company
www.blueboxcloud.com
sbaluk...@blueboxcloud.com
206-607-0660 x807
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-19 Thread Michael Johnson
I feel that the subnet should be mandatory as there are too many
ambiguity issues due to overlapping subnets and multiple routes.
In the case of an IP being outside of the tenant networks, the user
would specify an external network that has the appropriate routes.  We
cannot always assume which tenant network with an external (or VPN)
route is the appropriate one to use.

Michael

On Mon, Jan 18, 2016 at 2:45 PM, Stephen Balukoff  wrote:
> Vivek--
>
> "Member" in this case refers to an IP address that (probably) lives on a
> tenant back-end network. We can't specify just the IP address when talking
> to such an IP since tenant subnets may use overlapping IP ranges (ie. in
> this case, subnet is required). In the case of the namespace driver and
> Octavia, we use the subnet parameter for all members to determine which
> back-end networks the load balancing software needs a port on.
>
> I think the original use case for making subnet optional was the idea that
> sometimes a tenant would like to add a "member" IP that is not part of their
> tenant networks at all--  this is more than likely an IP address that lives
> outside the local cloud. The assumption, then, would be that this IP address
> should be reachable through standard routing from wherever the load balancer
> happens to live on the network. That is to say, the load balancer will try
> to get to such an IP address via its default gateway, unless it has a more
> specific route.
>
> As far as I'm aware, this use case is still valid and being asked for by
> tenants. Therefore, I'm in favor of making member subnet optional.
>
> Stephen
>
> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek  wrote:
>>
>> If member port (IP address) is allocated by neutron, then why do we need
>> to specify it explicitly? It can be derived by LBaaS driver implicitly.
>>
>> Thanks,
>> Vivek
>>
>>
>>
>>
>>
>>
>> On 1/17/16, 1:05 AM, "Samuel Bercovici"  wrote:
>>
>> >Btw.
>> >
>> >I am still in favor on associating the subnets to the LB and then not
>> > specify them per node at all.
>> >
>> >-Sam.
>> >
>> >
>> >-Original Message-
>> >From: Samuel Bercovici [mailto:samu...@radware.com]
>> >Sent: Sunday, January 17, 2016 10:14 AM
>> >To: OpenStack Development Mailing List (not for usage questions)
>> >Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
>> > optional on member create?
>> >
>> >+1
>> >Subnet should be mandatory
>> >
>> >The only thing this makes supporting load balancing servers which are not
>> > running in the cloud more challenging to support.
>> >But I do not see this as a huge user story (lb in cloud load balancing
>> > IPs outside the cloud)
>> >
>> >-Sam.
>> >
>> >-Original Message-
>> >From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
>> >Sent: Saturday, January 16, 2016 6:56 AM
>> >To: openstack-dev@lists.openstack.org
>> >Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
>> > optional on member create?
>> >
>> >I filed a bug [1] a while ago that subnet_id should be an optional
>> > parameter for member creation.  Currently it is required.  Review [2]
>> > makes it optional.
>> >
>> >The original thinking was that if the load balancer is ever connected to
>> > that same subnet, be it by another member on that subnet or the vip on that
>> > subnet, then the user does not need to specify the subnet for new member if
>> > that new member is on one of those subnets.
>> >
>> >At the midcycle we discussed it and we had an informal agreement that it
>> > required too many assumptions on the part of the end user, neutron lbaas,
>> > and driver.
>> >
>> >If anyone wants to voice their opinion on this matter, do so on the bug
>> > report, review, or in response to this thread.  Otherwise, it'll probably 
>> > be
>> > abandoned and not done at some point.
>> >
>> >Thanks,
>> >Brandon
>> >
>> >[1] https://bugs.launchpad.net/neutron/+bug/1426248
>> >[2] https://review.openstack.org/#/c/267935/
>>
>> > >__
>> >OpenStack Development Mailing List (not for usage questions)
>> >Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >http://lists.openstack.org/cgi-bin/ma

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-19 Thread Doug Wiegley
But, by requiring an external subnet, you are assuming that the packets always 
originate from inside a neutron network. That is not necessarily the case with 
a physical device.

doug


> On Jan 19, 2016, at 11:55 AM, Michael Johnson  wrote:
> 
> I feel that the subnet should be mandatory as there are too many
> ambiguity issues due to overlapping subnets and multiple routes.
> In the case of an IP being outside of the tenant networks, the user
> would specify an external network that has the appropriate routes.  We
> cannot always assume which tenant network with an external (or VPN)
> route is the appropriate one to use.
> 
> Michael
> 
> On Mon, Jan 18, 2016 at 2:45 PM, Stephen Balukoff  
> wrote:
>> Vivek--
>> 
>> "Member" in this case refers to an IP address that (probably) lives on a
>> tenant back-end network. We can't specify just the IP address when talking
>> to such an IP since tenant subnets may use overlapping IP ranges (ie. in
>> this case, subnet is required). In the case of the namespace driver and
>> Octavia, we use the subnet parameter for all members to determine which
>> back-end networks the load balancing software needs a port on.
>> 
>> I think the original use case for making subnet optional was the idea that
>> sometimes a tenant would like to add a "member" IP that is not part of their
>> tenant networks at all--  this is more than likely an IP address that lives
>> outside the local cloud. The assumption, then, would be that this IP address
>> should be reachable through standard routing from wherever the load balancer
>> happens to live on the network. That is to say, the load balancer will try
>> to get to such an IP address via its default gateway, unless it has a more
>> specific route.
>> 
>> As far as I'm aware, this use case is still valid and being asked for by
>> tenants. Therefore, I'm in favor of making member subnet optional.
>> 
>> Stephen
>> 
>> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek  wrote:
>>> 
>>> If member port (IP address) is allocated by neutron, then why do we need
>>> to specify it explicitly? It can be derived by LBaaS driver implicitly.
>>> 
>>> Thanks,
>>> Vivek
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On 1/17/16, 1:05 AM, "Samuel Bercovici"  wrote:
>>> 
>>>> Btw.
>>>> 
>>>> I am still in favor on associating the subnets to the LB and then not
>>>> specify them per node at all.
>>>> 
>>>> -Sam.
>>>> 
>>>> 
>>>> -Original Message-
>>>> From: Samuel Bercovici [mailto:samu...@radware.com]
>>>> Sent: Sunday, January 17, 2016 10:14 AM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
>>>> optional on member create?
>>>> 
>>>> +1
>>>> Subnet should be mandatory
>>>> 
>>>> The only thing this makes supporting load balancing servers which are not
>>>> running in the cloud more challenging to support.
>>>> But I do not see this as a huge user story (lb in cloud load balancing
>>>> IPs outside the cloud)
>>>> 
>>>> -Sam.
>>>> 
>>>> -Original Message-
>>>> From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
>>>> Sent: Saturday, January 16, 2016 6:56 AM
>>>> To: openstack-dev@lists.openstack.org
>>>> Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
>>>> optional on member create?
>>>> 
>>>> I filed a bug [1] a while ago that subnet_id should be an optional
>>>> parameter for member creation.  Currently it is required.  Review [2]
>>>> makes it optional.
>>>> 
>>>> The original thinking was that if the load balancer is ever connected to
>>>> that same subnet, be it by another member on that subnet or the vip on that
>>>> subnet, then the user does not need to specify the subnet for new member if
>>>> that new member is on one of those subnets.
>>>> 
>>>> At the midcycle we discussed it and we had an informal agreement that it
>>>> required too many assumptions on the part of the end user, neutron lbaas,
>>>> and driver.
>>>> 
>>>> If anyone wants to voice their opinion on this matter, do so on

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-19 Thread Brandon Logan
So it really comes down to the driver's (or driver appliance's)
implementation.  Here are some scenarios to consider:

1) vip on tenant network, members on tenant network
- if a user wants to add an external IP to this configuration, how do we
handle that?  If the subnet is optional and the member just uses default
routing, then it won't ever reach the external IP unless the backend
implementation sets up external routing from the load balancer.  I
think that is a bad idea because the tenant would probably want these
networks isolated.  But if the backend puts a load balancer with
external connectivity on them, they are not as isolated as they were.  So
to me, if subnet is optional the best choice is to use default routing,
which *SHOULD* fail in this scenario.  This of course is something a
tenant will have to realize.  The good thing about a required subnet_id
is that the tenant has explicitly stated they want external connectivity,
and the backend is not making assumptions as to whether they want it or
not.

2) vip on public network, members on tenant network
- the default route should be able to reach external IPs now, so if
subnet_id is optional it works.  If subnet_id is required then the
tenant would have to specify the public network again, which is less
than ideal and also has other issues brought up in this thread.

All other scenario permutations are similar to the above ones, so I don't
think I need to go through them.

Basically, I'm waffling on this and am currently on the optional
subnet_id side, but as the builders of Octavia I don't think we should
allow a load balancer external access unless the tenant has, in a way,
given permission through the configuration they've explicitly set.
Though, that too should be defined.

Thanks,
Brandon
On Tue, 2016-01-19 at 12:07 -0700, Doug Wiegley wrote:
> But, by requiring an external subnet, you are assuming that the packets 
> always originate from inside a neutron network. That is not necessarily the 
> case with a physical device.
> 
> doug
> 
> 
> > On Jan 19, 2016, at 11:55 AM, Michael Johnson  wrote:
> > 
> > I feel that the subnet should be mandatory as there are too many
> > ambiguity issues due to overlapping subnets and multiple routes.
> > In the case of an IP being outside of the tenant networks, the user
> > would specify an external network that has the appropriate routes.  We
> > cannot always assume which tenant network with an external (or VPN)
> > route is the appropriate one to use.
> > 
> > Michael
> > 
> > On Mon, Jan 18, 2016 at 2:45 PM, Stephen Balukoff  
> > wrote:
> >> Vivek--
> >> 
> >> "Member" in this case refers to an IP address that (probably) lives on a
> >> tenant back-end network. We can't specify just the IP address when talking
> >> to such an IP since tenant subnets may use overlapping IP ranges (ie. in
> >> this case, subnet is required). In the case of the namespace driver and
> >> Octavia, we use the subnet parameter for all members to determine which
> >> back-end networks the load balancing software needs a port on.
> >> 
> >> I think the original use case for making subnet optional was the idea that
> >> sometimes a tenant would like to add a "member" IP that is not part of 
> >> their
> >> tenant networks at all--  this is more than likely an IP address that lives
> >> outside the local cloud. The assumption, then, would be that this IP 
> >> address
> >> should be reachable through standard routing from wherever the load 
> >> balancer
> >> happens to live on the network. That is to say, the load balancer will try
> >> to get to such an IP address via its default gateway, unless it has a more
> >> specific route.
> >> 
> >> As far as I'm aware, this use case is still valid and being asked for by
> >> tenants. Therefore, I'm in favor of making member subnet optional.
> >> 
> >> Stephen
> >> 
> >> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek  wrote:
> >>> 
> >>> If member port (IP address) is allocated by neutron, then why do we need
> >>> to specify it explicitly? It can be derived by LBaaS driver implicitly.
> >>> 
> >>> Thanks,
> >>> Vivek
> >>> 
> >>> 
> >>> 
> >>> 
> >>> 
> >>> 
> >>> On 1/17/16, 1:05 AM, "Samuel Bercovici"  wrote:
> >>> 
> >>>> Btw.
> >>>> 
> >>>> I am still in favor on associating the subnets to the LB and then not
> >>>> specify them per node at all.
> >&g

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-19 Thread Stephen Balukoff
to specify the public network again, which is less
> than ideal and also has other issues brought up in this thread.
>
> All other scenario permutations are similar to the above ones so I don't
> think i need to go through them.
>
> Basically, I'm waffling on this and am currently on the optional
> subnet_id side but as the builders of octavia, I don't think we should
> allow a load balancer external access unless the tenant has in a way
> given permission by the configuration they've explicitly set.  Though,
> that too should be defined.
>
> Thanks,
> Brandon
> On Tue, 2016-01-19 at 12:07 -0700, Doug Wiegley wrote:
> > But, by requiring an external subnet, you are assuming that the packets
> always originate from inside a neutron network. That is not necessarily the
> case with a physical device.
> >
> > doug
> >
> >
> > > On Jan 19, 2016, at 11:55 AM, Michael Johnson 
> wrote:
> > >
> > > I feel that the subnet should be mandatory as there are too many
> > > ambiguity issues due to overlapping subnets and multiple routes.
> > > In the case of an IP being outside of the tenant networks, the user
> > > would specify an external network that has the appropriate routes.  We
> > > cannot always assume which tenant network with an external (or VPN)
> > > route is the appropriate one to use.
> > >
> > > Michael
> > >
> > > On Mon, Jan 18, 2016 at 2:45 PM, Stephen Balukoff <
> sbaluk...@bluebox.net> wrote:
> > >> Vivek--
> > >>
> > >> "Member" in this case refers to an IP address that (probably) lives
> on a
> > >> tenant back-end network. We can't specify just the IP address when
> talking
> > >> to such an IP since tenant subnets may use overlapping IP ranges (ie.
> in
> > >> this case, subnet is required). In the case of the namespace driver
> and
> > >> Octavia, we use the subnet parameter for all members to determine
> which
> > >> back-end networks the load balancing software needs a port on.
> > >>
> > >> I think the original use case for making subnet optional was the idea
> that
> > >> sometimes a tenant would like to add a "member" IP that is not part
> of their
> > >> tenant networks at all--  this is more than likely an IP address that
> lives
> > >> outside the local cloud. The assumption, then, would be that this IP
> address
> > >> should be reachable through standard routing from wherever the load
> balancer
> > >> happens to live on the network. That is to say, the load balancer
> will try
> > >> to get to such an IP address via its default gateway, unless it has a
> more
> > >> specific route.
> > >>
> > >> As far as I'm aware, this use case is still valid and being asked for
> by
> > >> tenants. Therefore, I'm in favor of making member subnet optional.
> > >>
> > >> Stephen
> > >>
> > >> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek 
> wrote:
> > >>>
> > >>> If member port (IP address) is allocated by neutron, then why do we
> need
> > >>> to specify it explicitly? It can be derived by LBaaS driver
> implicitly.
> > >>>
> > >>> Thanks,
> > >>> Vivek
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> On 1/17/16, 1:05 AM, "Samuel Bercovici"  wrote:
> > >>>
> > >>>> Btw.
> > >>>>
> > >>>> I am still in favor on associating the subnets to the LB and then
> not
> > >>>> specify them per node at all.
> > >>>>
> > >>>> -Sam.
> > >>>>
> > >>>>
> > >>>> -Original Message-
> > >>>> From: Samuel Bercovici [mailto:samu...@radware.com]
> > >>>> Sent: Sunday, January 17, 2016 10:14 AM
> > >>>> To: OpenStack Development Mailing List (not for usage questions)
> > >>>> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should
> subnet be
> > >>>> optional on member create?
> > >>>>
> > >>>> +1
> > >>>> Subnet should be mandatory
> > >>>>
> > >>>> The only thing this makes supporting load balancing servers which
> are not
> > >>>> running in th

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-25 Thread Brandon Logan
I think you'll like that there will soon be a single create call for the
entire graph/tree of a load balancer, so you can get those subnets up
front.  However, the API will still allow creating each entity
individually, which you don't like. I have a feeling most clients and UIs
will use the single create call once it's available over creating each
individual entity independently.  That should help out mostly.
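As a sketch of what that single call could look like (field names are
illustrative, loosely following the LBaaS v2 object model, and not a
final API):

    POST /v2.0/lbaas/loadbalancers
    {
        "loadbalancer": {
            "name": "web-lb",
            "vip_subnet_id": "<subnet-uuid>",
            "listeners": [{
                "protocol": "HTTP",
                "protocol_port": 80,
                "default_pool": {
                    "protocol": "HTTP",
                    "lb_algorithm": "ROUND_ROBIN",
                    "members": [{
                        "address": "10.0.0.10",
                        "protocol_port": 80,
                        "subnet_id": "<subnet-uuid>"
                    }]
                }
            }]
        }
    }

With a request like this, all of the subnets are known up front, before
anything is provisioned.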

Thanks,
Brandon

On Sun, 2016-01-17 at 09:05 +, Samuel Bercovici wrote:
> Btw.
> 
> I am still in favor on associating the subnets to the LB and then not specify 
> them per node at all.
> 
> -Sam.
> 
> 
> -Original Message-
> From: Samuel Bercovici [mailto:samu...@radware.com] 
> Sent: Sunday, January 17, 2016 10:14 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
> optional on member create?
> 
> +1
> Subnet should be mandatory
> 
> The only thing this makes supporting load balancing servers which are not 
> running in the cloud more challenging to support.
> But I do not see this as a huge user story (lb in cloud load balancing IPs 
> outside the cloud)
> 
> -Sam.
> 
> -Original Message-
> From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
> Sent: Saturday, January 16, 2016 6:56 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional 
> on member create?
> 
> I filed a bug [1] a while ago that subnet_id should be an optional parameter 
> for member creation.  Currently it is required.  Review [2] makes it 
> optional.
> 
> The original thinking was that if the load balancer is ever connected to that 
> same subnet, be it by another member on that subnet or the vip on that 
> subnet, then the user does not need to specify the subnet for new member if 
> that new member is on one of those subnets.
> 
> At the midcycle we discussed it and we had an informal agreement that it 
> required too many assumptions on the part of the end user, neutron lbaas, and 
> driver.
> 
> If anyone wants to voice their opinion on this matter, do so on the bug 
> report, review, or in response to this thread.  Otherwise, it'll probably be 
> abandoned and not done at some point.
> 
> Thanks,
> Brandon
> 
> [1] https://bugs.launchpad.net/neutron/+bug/1426248
> [2] https://review.openstack.org/#/c/267935/
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-25 Thread Brandon Logan
ing asked for by
> > >> tenants. Therefore, I'm in favor of making member subnet
> optional.
> > >>
> > >> Stephen
> > >>
>     > >> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek
>  wrote:
> > >>>
> > >>> If member port (IP address) is allocated by neutron,
> then why do we need
> > >>> to specify it explicitly? It can be derived by LBaaS
> driver implicitly.
> > >>>
> > >>> Thanks,
> > >>> Vivek
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> On 1/17/16, 1:05 AM, "Samuel Bercovici"
>  wrote:
> > >>>
> > >>>> Btw.
> > >>>>
>     > >>>> I am still in favor on associating the subnets to the
> LB and then not
> > >>>> specify them per node at all.
> > >>>>
> > >>>> -Sam.
> > >>>>
> > >>>>
> > >>>> -Original Message-
> > >>>> From: Samuel Bercovici [mailto:samu...@radware.com]
> > >>>> Sent: Sunday, January 17, 2016 10:14 AM
> > >>>> To: OpenStack Development Mailing List (not for usage
> questions)
> > >>>> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia]
> Should subnet be
> > >>>> optional on member create?
> > >>>>
> > >>>> +1
> > >>>> Subnet should be mandatory
> > >>>>
> > >>>> The only thing this makes supporting load balancing
> servers which are not
> > >>>> running in the cloud more challenging to support.
> > >>>> But I do not see this as a huge user story (lb in cloud
> load balancing
> > >>>> IPs outside the cloud)
> > >>>>
> > >>>> -Sam.
> > >>>>
> > >>>> -Original Message-
> > >>>> From: Brandon Logan
> [mailto:brandon.lo...@rackspace.com]
> > >>>> Sent: Saturday, January 16, 2016 6:56 AM
> > >>>> To: openstack-dev@lists.openstack.org
> > >>>> Subject: [openstack-dev] [Neutron][LBaaS][Octavia]
> Should subnet be
> > >>>> optional on member create?
> > >>>>
> > >>>> I filed a bug [1] a while ago that subnet_id should be
> an optional
> > >>>> parameter for member creation.  Currently it is
> required.  Review [2] is
> > >>>> makes it optional.
> > >>>>
> > >>>> The original thinking was that if the load balancer is
> ever connected to
> > >>>> that same subnet, be it by another member on that
> subnet or the vip on that
> > >>>> subnet, then the user does not need to specify the
> subnet for new member if
> > >>>> that new member is on one of those subnets.
> > >>>>
> > >>>> At the midcycle we discussed it and we had an informal
> agreement that it
> > >>>> required too many assumptions on the part of the end
> user, neutron lbaas,
> > >>>> and driver.
> > >>>>
> > >>>> If anyone wants to voice their opinion on this matter,
> do so on the bug
> > >>>> report, review, or in response to this thread.
> Otherwise, it'll probably be
> > >>>> abandoned and not done at some point.
> > >>>>
> > >>>> Thanks,
> > >>>> Brandon
> > >>>>
> > >>>> [1] https://bugs.launchpad.net/neutron/+bug/1426248
> > >>>> [2] https://review.openstack.org/#/c/267935/
> > &g

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-25 Thread Fox, Kevin M
We are using a neutron LBaaS v1 load balancer that has members external to the 
cloud, used by a particular tenant in production. It is working well. Hoping to 
do the same thing once we get to Octavia+LBaaSv2.

Being able to tweak the routes of the load balancer would be an interesting 
feature, though I don't think I'd ever need to. Maybe that should be an 
extension? I'm guessing a lot of lb plugins won't be able to support it at all.

Thanks,
Kevin


From: Brandon Logan [brandon.lo...@rackspace.com]
Sent: Monday, January 25, 2016 1:03 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
optional on member create?

Any additional thoughts and opinions people want to share on this.  I
don't have a horse in this race as long as we don't make dangerous
assumptions about what the user wants.  So I am fine with making
subnet_id optional.

Michael, how strong would your opposition for this be?

Thanks,
Brandon

On Tue, 2016-01-19 at 20:49 -0800, Stephen Balukoff wrote:
> Michael-- I think you're assuming that adding an external subnet ID
> means that the load balancing service will route requests out an
> interface with a route to said external subnet. However, the model we
> have is actually too simple to convey this information to the load
> balancing service. This is because while we know the member's IP and a
> subnet to which the load balancing service should connect in order
> to talk to said IP, we don't have any kind of actual
> routing information for the IP address (like, say a default route for
> the subnet).
>
>
> Consider this not far-fetched example: Suppose a tenant wants to add a
> back-end member which is reachable only over a VPN, the gateway for
> which lives on a tenant internal subnet. If we had a more feature-rich
> model to work with here, the tenant could specify the member IP, the
> subnet containing the VPN gateway and the gateway's IP address. In
> theory the load balancing service could add local routing rules to
> make sure that communication to that member happens on the tenant
> subnet and gets routed to the VPN gateway.
>
>
> If we want to support this use case, then we'd probably need to add an
> optional gateway IP parameter to the member object. (And I'd still be
> in favor of assuming the subnet_id on the member is optional, and that
> default routing should be used if not specified.)
>
>
> Let me see if I can break down several use cases we could support with
> this model. Let's assume the member model contains (among other
> things) the following attributes:
>
>
> ip_address (member IP, required)
> subnet_id (member or gateway subnet, optional)
> gateway_ip (VPN or other layer-3 gateway that should be used to access
> the member_ip. optional)
>
>
> Expected behaviors:
>
>
> Scenario 1:
> ip_address specified, subnet_id and gateway_ip are None:  Load
> balancing service assumes member IP address is reachable through
> default routing. Appropriate for members that are not part of the
> local cloud that are accessible from the internet.
>
>
>
> Scenario 2:
> ip_address and subnet_id specified, gateway_ip is None: Load balancing
> service assumes it needs an interface on subnet_id to talk directly to
> the member IP address. Appropriate for members that live on tenant
> networks. member_ip should exist within the subnet specified by
> subnet_id. This is the only scenario supported under the current model
> if we make subnet_id a required field and don't add a gateway_ip.
>
>
> Scenario 3:
> ip_address, subnet_id and gateway_ip are all specified:  Load
> balancing service assumes it needs an interface on subnet_id to talk
> to the gateway_ip. Load balancing service should add local routing
> rule (ie. to the host and / or local network namespace context of the
> load balancing service itself, not necessarily to Neutron or anything)
> to route any packets destined for member_ip to the gateway_ip.
> gateway_ip should exist within the subnet specified by subnet_id.
> Appropriate for members that are on the other side of a VPN links, or
> reachable via other local routing within a tenant network or local
> cloud.
>
>
> Scenario 4:
> ip_address and gateway_ip are specified, subnet_id is None: This is an
> invalid configuration.
>
>
> So what do y'all think of this? Am I smoking crack with how this
> should work?
>
>
> For what it's worth, I think the "member is on the other side of a
> VPN" scenario is not one our customers are champing at the bit to
> have, so I'm fine with not supporting that kind of topology if nobody
> else wants 
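
The four scenarios above reduce to a small dispatch, sketched below. In this
sketch plug_vif() and add_host_route() are hypothetical stand-ins for whatever
the load balancing service would actually do; nothing here is Octavia's real
code, just an illustration of the decision table Stephen describes.

# Sketch of the member-routing decision described by the four scenarios
# above. plug_vif() and add_host_route() are hypothetical stand-ins.

def plug_vif(subnet_id):
    """Hypothetical: ensure the LB has an interface on subnet_id."""
    print("plug interface on subnet %s" % subnet_id)

def add_host_route(dest, nexthop):
    """Hypothetical: e.g. 'ip route add <dest>/32 via <nexthop>' inside
    the load balancer's network namespace."""
    print("route %s/32 via %s" % (dest, nexthop))

def configure_member_access(ip_address, subnet_id=None, gateway_ip=None):
    if subnet_id is None and gateway_ip is None:
        # Scenario 1: member reachable via default routing (e.g. a
        # member outside the cloud with a public IP). Nothing to do.
        return
    if subnet_id is not None and gateway_ip is None:
        # Scenario 2: talk to the member directly on its tenant subnet.
        plug_vif(subnet_id)
        return
    if subnet_id is not None and gateway_ip is not None:
        # Scenario 3: reach the member through a gateway (e.g. a VPN
        # gateway) that lives on subnet_id.
        plug_vif(subnet_id)
        add_host_route(dest=ip_address, nexthop=gateway_ip)
        return
    # Scenario 4: gateway_ip without subnet_id is invalid.
    raise ValueError("gateway_ip requires subnet_id")

configure_member_access("10.0.1.5", subnet_id="<subnet-uuid>")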

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-27 Thread Brandon Logan
I could see it being interesting, but that would have to be something
vetted against other drivers and appliances, because they may not support
it.

On Mon, 2016-01-25 at 21:37 +, Fox, Kevin M wrote:
> We are using a neutron LBaaS v1 load balancer with members external to the cloud 
> for a particular tenant in production. It is working well. We are hoping to do 
> the same thing once we get to Octavia+LBaaSv2.
> 
> Being able to tweak the routes of the load balancer would be an interesting 
> feature, though I don't think I'd ever need to. Maybe that should be an 
> extension? I'm guessing a lot of lb plugins won't be able to support it at 
> all.
> 
> Thanks,
> Kevin
> 
> 
> From: Brandon Logan [brandon.lo...@rackspace.com]
> Sent: Monday, January 25, 2016 1:03 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
> optional on member create?
> 
> Any additional thoughts and opinions people want to share on this.  I
> don't have a horse in this race as long as we don't make dangerous
> assumptions about what the user wants.  So I am fine with making
> subnet_id optional.
> 
> Michael, how strong would your opposition for this be?
> 
> Thanks,
> Brandon
> 
> On Tue, 2016-01-19 at 20:49 -0800, Stephen Balukoff wrote:
> > Michael-- I think you're assuming that adding an external subnet ID
> > means that the load balancing service will route requests out an
> > interface with a route to said external subnet. However, the model we
> > have is actually too simple to convey this information to the load
> > balancing service. This is because while we know the member's IP and a
> > subnet to which the load balancing service should connect in order
> > to talk to said IP, we don't have any kind of actual
> > routing information for the IP address (like, say a default route for
> > the subnet).
> >
> >
> > Consider this not far-fetched example: Suppose a tenant wants to add a
> > back-end member which is reachable only over a VPN, the gateway for
> > which lives on a tenant internal subnet. If we had a more feature-rich
> > model to work with here, the tenant could specify the member IP, the
> > subnet containing the VPN gateway and the gateway's IP address. In
> > theory the load balancing service could add local routing rules to
> > make sure that communication to that member happens on the tenant
> > subnet and gets routed to the VPN gateway.
> >
> >
> > If we want to support this use case, then we'd probably need to add an
> > optional gateway IP parameter to the member object. (And I'd still be
> > in favor of assuming the subnet_id on the member is optional, and that
> > default routing should be used if not specified.)
> >
> >
> > Let me see if I can break down several use cases we could support with
> > this model. Let's assume the member model contains (among other
> > things) the following attributes:
> >
> >
> > ip_address (member IP, required)
> > subnet_id (member or gateway subnet, optional)
> > gateway_ip (VPN or other layer-3 gateway that should be used to access
> > the member_ip. optional)
> >
> >
> > Expected behaviors:
> >
> >
> > Scenario 1:
> > ip_address specified, subnet_id and gateway_ip are None:  Load
> > balancing service assumes member IP address is reachable through
> > default routing. Appropriate for members that are not part of the
> > local cloud but are accessible from the internet.
> >
> >
> >
> > Scenario 2:
> > ip_address and subnet_id specified, gateway_ip is None: Load balancing
> > service assumes it needs an interface on subnet_id to talk directly to
> > the member IP address. Appropriate for members that live on tenant
> > networks. member_ip should exist within the subnet specified by
> > subnet_id. This is the only scenario supported under the current model
> > if we make subnet_id a required field and don't add a gateway_ip.
> >
> >
> > Scenario 3:
> > ip_address, subnet_id and gateway_ip are all specified:  Load
> > balancing service assumes it needs an interface on subnet_id to talk
> > to the gateway_ip. Load balancing service should add local routing
> > rule (ie. to the host and / or local network namespace context of the
> > load balancing service itself, not necessarily to Neutron or anything)
> > to route any packets destined for member_ip to the gateway_ip.
> > gateway_ip should exist within the subnet specified
