Re: [openstack-dev] [watcher] Stepping down as Watcher spec core

2017-08-15 Thread Susanne Balle
Thanks for all the hard work. best of luck,

Susanne

On Fri, Jul 21, 2017 at 3:48 AM, Чадин Александр (Alexander Chadin) <
a.cha...@servionica.ru> wrote:

> Antoine,
>
> Congratulations on this new step in your life!
> You’ve set a high standard of project management, and it is a big honour
> for me to live up to it.
> Hope to see you in Vancouver!
>
> Best Regards,
> _
> Alexander Chadin
> OpenStack Developer
>
> On 21 Jul 2017, at 03:44, Hidekazu Nakamura 
> wrote:
>
> Hi Antoine,
>
> I am grateful for your support ever since I started contributing to Watcher.
> Thanks to you, I am contributing to Watcher actively now.
>
> I wish you a happy life and a successful career.
>
> Hidekazu Nakamura
>
>
> -Original Message-
> From: Antoine Cabot [mailto:antoinecabo...@gmail.com
> ]
> Sent: Thursday, July 20, 2017 6:35 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [watcher] Stepping down as Watcher spec core
>
> Hey guys,
>
> It's been a long time since the last summit and our last discussions!
> I hope Watcher is going well and that you are getting more traction
> every day in the OpenStack community!
>
> As you may guess, my last 2 months have been very busy with my
> relocation to Vancouver with my family. After 8 weeks of active job
> search in the cloud industry here in Vancouver, I've got a Senior
> Product Manager position at Parsable, a start-up leading the Industry
> 4.0 revolution. I will continue to deal with very large customers but
> in different industries (Oil & Gas, Manufacturing...) to build the
> best possible product, leveraging cloud and mobile technologies.
>
> It was a great pleasure to lead the Watcher initiative from its
> infancy to the OpenStack Big Tent and be able to work with all of you.
> I hope to be part of another open source community in the near future
> but now, given my new responsibilities, I need to step down as a core
> contributor to Watcher specs. Feel free to reach out to me if I
> still hold restricted rights on launchpad or anywhere else.
>
> I hope to see you all in Vancouver next year for the summit and be
> part of the traditional Watcher dinner (I will try to find the best
> place for you guys).
>
> Cheers,
>
> Antoine
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [watcher] nominate Aditi Sharma to Watcher Core group

2017-08-15 Thread Susanne Balle
+1

On Thu, Aug 3, 2017 at 7:40 PM, Hidekazu Nakamura <
hid-nakam...@vf.jp.nec.com> wrote:

> +1
>
> > -Original Message-
> > From: Чадин Александр (Alexander Chadin)
> > [mailto:a.cha...@servionica.ru]
> > Sent: Wednesday, August 02, 2017 9:48 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: [openstack-dev] [watcher] nominate Aditi Sharma to Watcher Core
> > group
> >
> > Aditi Sharma (adisky on IRC) has been working on OpenStack Watcher since
> > this March and has contributed some valuable patches [1], along with the
> > Action Plan cancelling blueprint (both spec and implementation have been
> > merged).
> > I’d like to nominate Aditi Sharma to the Watcher Core group and await
> > your vote.
> > Please, give +1/-1 in reply to this message.
> >
> > [1] :
> > https://review.openstack.org/#/q/owner:%22aditi+sharma+%253Caditi.s%25
> > 40nectechnologies.in%253E%22
> >
> > Best Regards,
> > __
> > Alexander Chadin
>


Re: [openstack-dev] [watcher] Nominate Yumeng Bao to the core team

2017-07-12 Thread Susanne Balle
+1

On Fri, Jun 30, 2017 at 4:31 AM, Hidekazu Nakamura <
hid-nakam...@vf.jp.nec.com> wrote:

> +1
>
> > -Original Message-
> > From: Чадин Александр (Alexander Chadin)
> > [mailto:a.cha...@servionica.ru]
> > Sent: Tuesday, June 27, 2017 10:44 PM
> > To: OpenStack Development Mailing List
> > 
> > Subject: [openstack-dev] [watcher] Nominate Yumeng Bao to the core team
> >
> > Hi watcher folks,
> >
> > I’d like to nominate Yumeng Bao to the core team. She has made a lot of
> > contributions, including specifications, features and bug fixes. Yumeng
> > has attended the PTG and Summit with a presentation related to Watcher.
> > Yumeng is active on the IRC channels and takes part in the weekly
> > meetings as well.
> >
> > Please, vote with +1/-1.
> >
> > Best Regards,
> > _
> > Alexander Chadin
> > OpenStack Developer
>


Re: [openstack-dev] [Watcher] End-of-Ocata core team updates

2017-02-28 Thread Susanne Balle
Congrats everybody! well deserved.

Jean-Emile thank you for all your hard work!

On Tue, Feb 21, 2017 at 12:06 AM, Shedimbi, Prudhvi Rao <
prudhvi.rao.shedi...@intel.com> wrote:

> Thank You for giving me this opportunity. I will try to fulfill my role in
> the Core team to the best of my ability. :)
>
> Thank You
> Prudhvi Rao Shedimbi
>
> From: Чадин Александр 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Monday, February 20, 2017 at 8:29 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Watcher] End-of-Ocata core team updates
>
> Hi Watcher Team!
>
> There are some changes to the Watcher Core group:
>
> 1. Li Canwei (licanwei) and Prudhvi Rao Shedimbi (pshedimb) have
> been nominated as Core Developers for Watcher.
> They received enough votes to be included in the Watcher Core group.
>
> 2. Jean-Emile DARTOIS has stepped down from Watcher Core since
> he has little time to keep up with core reviewer duties.
>
> 3. Hidekazu Nakamura is being nominated as a Core Developer for Watcher.
> Good luck!
>
> I want to congratulate our new Core Developers and to thank Jean-Emile
> for his work and project support. He has done a lot of architecture design
> reviews and implementation work and helped make Watcher what it is.
>
> Thank you, Jean-Emile, good luck, and remember that the
> Watcher Team is always open to you.
>
> Welcome aboard, Prudhvi Rao Shedimbi and Li Canwei!
>
> Best Regards,
> _
> Alexander Chadin
> OpenStack Developer
> Servionica LTD
> a.cha...@servionica.ru
> +7 (916) 693-58-81 <+7%20916%20693-58-81>
>
>


Re: [openstack-dev] [Watcher] Nominating licanwei to Watcher Core

2017-02-14 Thread Susanne Balle
+1

On Tue, Feb 14, 2017 at 5:37 AM, Vincent FRANÇOISE <
vincent.franco...@b-com.com> wrote:

> +1
>
> On 14/02/2017 11:27, ? ? wrote:
> > His activity will help Watcher Team to make this project better.


Re: [openstack-dev] [Watcher] Nominating Prudhvi Rao Shedimbi to Watcher Core

2017-02-14 Thread Susanne Balle
+1

On Tue, Feb 14, 2017 at 9:31 AM, Prashanth Hari  wrote:

> +1
>
> On Tue, Feb 14, 2017 at 9:22 AM, Joe Cropper 
> wrote:
>
>> +1 !
>>
>> > On Feb 14, 2017, at 4:05 AM, Vincent FRANÇOISE <
>> vincent.franco...@b-com.com> wrote:
>> >
>> > Team,
>> >
>> > I would like to promote Prudhvi Rao Shedimbi to the core team. He's done
>> > great work reviewing many patchsets [1], and I believe that he has a
>> > good vision of Watcher as a whole.
>> >
>> > I think he would make an excellent addition to the team.
>> >
>> > Please vote
>> >
>> > [1] http://stackalytics.com/report/contribution/watcher/90
>> >
>> > Vincent FRANCOISE
>> > B<>COM
>> >


[openstack-dev] [watcher] Barcelona summit summary

2016-11-18 Thread Susanne Balle
Hi



We had a super productive summit with many very active participants. Thanks
to everybody who participated.



Detailed minutes are available at:
https://etherpad.openstack.org/p/watcher-ocata-design-session



We had a talk and 4 Watcher sessions during the summit.



- The *first session* was a Fishbowl session where we discussed the list of
strategies available in Watcher as well as went over Watcher and its
charter for new people interested in Watcher.


- On Thursday, Jean-Emile (jed56), Joe (jwcroppe) and myself (sballe) gave *a
talk on “Watcher, the Infrastructure Optimization service for OpenStack:
Plans for the O-release and beyond”.* We had more than 50 people attending
the session.


- During the *second Watcher session*, we did a Watcher Newton
retrospective for the Newton release.


   - The great accomplishment was that we got Watcher accepted into the big
   tent in May.
   - AT&T described a use case around NFV placement where there is a need
   to optimize small clouds at the edge.
   - We talked about needing more people to do reviews both on spec and on
   code.

- During the *third Watcher session*, we validated Ocata priorities &
assignees. We discussed all the blueprints in detail and ensured that our
load for Ocata is reasonable and in line with previous releases.


   - As part of this session we discussed the deprecation of the Ceilometer
   APIs and the impact on Watcher. While no final decision was taken on
   this topic, the team felt that supporting Monasca as a backend for telemetry
   was a good strategy given the scalability issue with the current version of
   Ceilometer and Gnocchi’s early stages.

- The *fourth session* was a community meetup. We discussed
multi-datacenter (multi-region) workload optimization, how to bridge to
non-OpenStack public clouds and container clouds, etc.



Thanks again to everyone for a great summit.

Regards,

Susanne


Re: [openstack-dev] [watcher] Mascot final choice

2016-08-29 Thread Susanne Balle
When will we know about the mascot as well as what the design looks like?

Susanne

On Thu, Jul 28, 2016 at 6:46 PM, Joe Cropper  wrote:

> +2 to Jellyfish!
>
> > On Jul 28, 2016, at 4:08 PM, Antoine Cabot 
> wrote:
> >
> > Hi Watcher team,
> >
> > Last week during the mid-cycle, we came up with a list of possible
> mascots for Watcher. The only one which is in conflict with other projects
> is the bee.
> > So we have this final list :
> > 1. Jellyfish
> > 2. Eagle
> > 3. Hammerhead shark
> >
> > I'm going to confirm the jellyfish as the Watcher mascot by EOW unless
> > any contributor objects to this choice. Please let me know.
> >
> > Antoine
> > 


Re: [openstack-dev] [watcher] Nominate Prashanth Hari as core for watcher-specs

2016-08-24 Thread Susanne Balle
+1

Susanne

On Wed, Aug 17, 2016 at 7:10 AM, Чадин Александр 
wrote:

> +1
>
> ___
> Alexander Chadin,
> Engineer
> Software Solutions Department
> Servionica Ltd.
> Work email: a.cha...@servionica.ru
> Mobile: +7(916)693-58-81
>
>


[openstack-dev] [watcher] Newton mid-cycle meetup details

2016-06-01 Thread Susanne Balle
The Watcher Newton mid-cycle developer meetup will take place in Hillsboro,
OR on July 19-21 2016


For more details see the wiki [1]


[1] https://wiki.openstack.org/wiki/Watcher_newton_mid-cycle_meetup_agenda


Regards Susanne


[openstack-dev] [watcher] No IRC weekly meetings on December 23rd and 30th

2015-12-22 Thread Susanne Balle
Dear Watcher team

Our weekly IRC meetings on December 23rd and 30th are canceled due to the
holidays. Next meeting will take place on January 6th at the usual time on
#openstack-meeting-4 at 1400 UTC.

The team is still available to answer questions and help on
#openstack-watcher.

Regards Susanne


Re: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

2015-07-21 Thread Susanne Balle
correction: I think that discussing who should be in what group at next
week's meeting makes sense. Susanne

On Tue, Jul 21, 2015 at 3:18 PM, Susanne Balle sleipnir...@gmail.com
wrote:

 cool! thanks. I will request to be added to the correct groups.

 Susanne

 On Tue, Jul 21, 2015 at 2:53 PM, Hayes, Graham graham.ha...@hp.com
 wrote:

 Hi All,

 I have created a github org and 2 repos for us to get started in.

 https://github.com/gslb/ is the org, with https://github.com/gslb/gslb
 as the main code repo.

 There are 2 teams (GitHub's name for groups): gslb-core and gslb-admin.

 Core members have read/write access to the repos, and admins can add/remove
 projects.

 I also created https://github.com/gslb/gslb-specs which will
 automatically publish to https://gslb-specs.readthedocs.org

 There is also a launchpad project https://launchpad.net/gslb with 2
 teams:

 gslb-drivers - people who can target bugs / bps
 gslb-core - the maintainers of the project

 All the info is on the wiki: https://wiki.openstack.org/wiki/GSLB

 So, next question - who should be in what groups? I am open to
 suggestions... should it be an item for discussion next week?

 Thanks,

 Graham



Re: [openstack-dev] [GSLB][LBaaS] Launchpad project, git repos and more!

2015-07-21 Thread Susanne Balle
cool! thanks. I will request to be added to the correct groups.

Susanne

On Tue, Jul 21, 2015 at 2:53 PM, Hayes, Graham graham.ha...@hp.com wrote:

 Hi All,

 I have created a github org and 2 repos for us to get started in.

 https://github.com/gslb/ is the org, with https://github.com/gslb/gslb
 as the main code repo.

 There are 2 teams (GitHub's name for groups): gslb-core and gslb-admin.

 Core members have read/write access to the repos, and admins can add/remove
 projects.

 I also created https://github.com/gslb/gslb-specs which will
 automatically publish to https://gslb-specs.readthedocs.org

 There is also a launchpad project https://launchpad.net/gslb with 2
 teams:

 gslb-drivers - people who can target bugs / bps
 gslb-core - the maintainers of the project

 All the info is on the wiki: https://wiki.openstack.org/wiki/GSLB

 So, next question - who should be in what groups? I am open to
 suggestions... should it be an item for discussion next week?

 Thanks,

 Graham



Re: [openstack-dev] [neutron][lbaas] canceling meeting

2015-03-20 Thread Susanne Balle
Makes sense to me. Susanne

On Thu, Mar 19, 2015 at 5:49 PM, Doug Wiegley doug...@parksidesoftware.com
wrote:

 Hi lbaas'ers,

 Now that lbaasv2 has shipped, the need for a regular weekly meeting is
 greatly reduced. I propose that we cancel the regular meeting, and discuss
 neutron-y things during the neutron on-demand agenda, and octavia things in
 the already existing octavia meetings.

 Any objections/alternatives?

 Thanks,
 doug





Re: [openstack-dev] [neutron][lbaas] Kilo Midcycle Meetup

2014-12-10 Thread Susanne Balle
Cool! Thx

Susanne

On Wed, Dec 10, 2014 at 12:48 AM, Brandon Logan brandon.lo...@rackspace.com
 wrote:

 It's set.  We'll be having the meetup on Feb 2-6 in San Antonio at RAX
 HQ.  I'll add a list of hotels and the address on the etherpad.

 https://etherpad.openstack.org/p/lbaas-kilo-meetup

 Thanks,
 Brandon

 On Tue, 2014-12-02 at 17:27 +, Brandon Logan wrote:
  Per the meeting, put together an etherpad here:
 
  https://etherpad.openstack.org/p/lbaas-kilo-meetup
 
  I would like to get the location and dates finalized ASAP (preferably
  the next couple of days).
 
  We'll also try to do the same as the neutron and Octavia meetups for
  remote attendees.
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [neutron][lbaas] meeting day/time change

2014-11-10 Thread Susanne Balle
Works for me. Susanne

On Mon, Nov 10, 2014 at 10:57 AM, Brandon Logan brandon.lo...@rackspace.com
 wrote:

 https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting

 That is updated for lbaas and advanced services with the new times.

 Thanks,
 Brandon

 On Mon, 2014-11-10 at 11:07 +, Doug Wiegley wrote:
  #openstack-meeting-4
 
 
   On Nov 10, 2014, at 10:33 AM, Evgeny Fedoruk evge...@radware.com
 wrote:
  
   Thanks,
   Evg
  
   -Original Message-
   From: Doug Wiegley [mailto:do...@a10networks.com]
   Sent: Friday, November 07, 2014 9:04 PM
   To: OpenStack Development Mailing List
   Subject: [openstack-dev] [neutron][lbaas] meeting day/time change
  
   Hi all,
  
   Neutron LBaaS meetings are now going to be Tuesdays at 16:00 UTC.
  
   Safe travels.
  
   Thanks,
   Doug
  
  


Re: [openstack-dev] [Octavia] Mid-cycle hack-a-thon

2014-11-06 Thread Susanne Balle
Are we talking about a 5-day hackathon, or 3 days with 2 days (Mon & Fri) for
travel?

On Thu, Nov 6, 2014 at 10:10 AM, Adam Harwell adam.harw...@rackspace.com
wrote:

  Any chance it could actually be the week AFTER? Or is that too close to
 the holidays? _
 On Nov 6, 2014 7:21 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 I have just learned that there will be a Neutron hack-a-thon the week of
 Dec 8 in Salt Lake City. Since we don't want to conflict with that, I would
 like to do the Octavia hack-a-thon the previous week: Dec. 1 through 5 in
 Seattle.
 On Nov 5, 2014 11:05 PM, Adam Harwell adam.harw...@rackspace.com
 wrote:

   I can probably make it up there to attend.

   --Adam

  https://keybase.io/rm_you


   From: Stephen Balukoff sbaluk...@bluebox.net
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Tuesday, November 4, 2014 3:46 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Octavia] Mid-cycle hack-a-thon

   Howdy, folks!

 We are planning to have a mid-cycle hack-a-thon in Seattle from the 8th
 through the 12th of December. This will be at the HP corporate offices
 located in the Seattle convention center.

 During this week we will be concentrating on Octavia code and hope to
 make significant progress toward our v0.5 milestone.

 If you are interested in attending, please e-mail me. If you are
 interested in participating but can't travel to Seattle that week, please
 also let me know, and we will see about using other means to collaborate
 with you in real time.

 Thanks!
 Stephen




Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-04 Thread Susanne Balle
Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allows the logs to be
moved to various backends such as Elasticsearch, Hadoop HDFS, Swift,
etc., as well as, by default (but with the option to disable it), Ceilometer.
Ceilometer is the de facto metering service for OpenStack, so we need to
support it. We would like the integration with Ceilometer to be based on
notifications. I believe German sent a reference to that in another email.
The pre-processing will need to be optional and the amount of data
aggregation configurable.
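A pluggable forwarding interface of the kind described above might look roughly like the following sketch. This is illustration only: the class names, backend set, and method signatures are assumptions, not existing Watcher or Octavia code.

```python
import abc

class LogForwarder(abc.ABC):
    """Illustrative interface for shipping amphora logs to a backend."""

    @abc.abstractmethod
    def forward(self, log_lines):
        """Ship a batch of log lines; returns (backend, count) here."""

class SwiftForwarder(LogForwarder):
    def forward(self, log_lines):
        # A real implementation would PUT an object into a Swift
        # container; this sketch just records the batch size.
        return ("swift", len(log_lines))

class CeilometerForwarder(LogForwarder):
    def forward(self, log_lines):
        # Would emit notification-based samples rather than raw lines,
        # enabled by default but easy for an operator to switch off.
        return ("ceilometer", len(log_lines))

def ship(forwarders, log_lines):
    # Fan the same batch out to every backend the operator enabled.
    return [f.forward(log_lines) for f in forwarders]
```

An operator who does not want Ceilometer metering would simply leave the Ceilometer backend out of the configured forwarder list.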

What you describe below is, to me, usage gathering/metering. Billing is
independent, since companies with private clouds might not want to bill but
still need usage reports for capacity planning, etc. Billing/charging is
just putting a monetary value on the various forms of usage.

I agree with all points.

 - Capture logs in a scalable way (i.e. capture logs and put them on a
 separate scalable store somewhere so that it doesn't affect the amphora).

 - Every X amount of time (every hour, for example) process the logs and
 send them on their merry way to Ceilometer or whatever service an operator
 will be using for billing purposes.

Keep the logs: this is what we would use log forwarding for, to either Swift
or Elasticsearch, etc.

- Keep logs for some configurable amount of time. This could be anything
 from indefinitely to not at all. Rackspace is planning on keeping them for
 a certain period of time for the following reasons:

It looks like we are in agreement, so I am not sure why it sounded like we
were in disagreement on IRC. It sounded like you were talking about
something else when you mentioned the real-time processing. If we are just
talking about moving the logs to your Hadoop cluster or any backend in a
scalable way, we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:

 Hey German/Susanne,

 To continue our conversation from our IRC meeting could you all provide
 more insight into you usage requirements? Also, I'd like to clarify a few
 points related to using logging.

 I am advocating that logs be used for multiple purposes, including
 billing. Billing requirements are different than connection logging
 requirements. However, connection logging is a very accurate mechanism to
 capture billable metrics and thus, is related. My vision for this is
 something like the following:

 - Capture logs in a scalable way (i.e. capture logs and put them on a
 separate scalable store somewhere so that it doesn't affect the amphora).
 - Every X amount of time (every hour, for example) process the logs and
 send them on their merry way to Ceilometer or whatever service an operator
 will be using for billing purposes.
 - Keep logs for some configurable amount of time. This could be anything
 from indefinitely to not at all. Rackspace is planning on keeping them for
 a certain period of time for the following reasons:

 A) We have connection logging as a planned feature. If a customer
 turns
 on the connection logging feature for their load balancer it will already
 have a history. One important aspect of this is that customers (at least
 ours) tend to turn on logging after they realize they need it (usually
 after a tragic lb event). By already capturing the logs I'm sure customers
 will be extremely happy to see that there are already X days worth of logs
 they can immediately sift through.
 B) Operators and their support teams can leverage logs when
 providing
 service to their customers. This is huge for finding issues and resolving
 them quickly.
 C) Albeit a minor point, building support for logs from the get-go
 mitigates capacity management uncertainty. My example earlier was the
 extreme case of every customer turning on logging at the same time. While
 unlikely, I would hate to manage that!
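The capture/process/retain pipeline in the bullets above could, for illustration, reduce to an hourly aggregation step like this. The log field layout and metric names here are assumptions for the sketch, not a real haproxy log format.

```python
from collections import defaultdict

def aggregate(log_lines):
    """Collapse raw per-connection log lines into per-loadbalancer
    billable totals; only these totals would go to Ceilometer or the
    billing service, while the raw lines go to long-term storage for
    the configured retention window."""
    totals = defaultdict(lambda: {"connections": 0, "bytes_out": 0})
    for line in log_lines:
        # Assumed layout: "<lb_id> <client_ip> <status> <bytes_out>"
        lb_id, _client, _status, bytes_out = line.split()
        totals[lb_id]["connections"] += 1
        totals[lb_id]["bytes_out"] += int(bytes_out)
    return dict(totals)
```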

 I agree that there are other ways to capture billing metrics but, from my
 experience, those tend to be more complex than what I am advocating and
 without the added benefits listed above. An understanding of HP's desires
 on this matter will hopefully get this to a point where we can start
 working on a spec.

 Cheers,
 --Jorge

 P.S. Real-time stats is a different beast and I envision there being an
 API call that returns real-time data such as this ==
 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.


 From:  Eichberger, German german.eichber...@hp.com
 Reply-To:  OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date:  Wednesday, October 22, 2014 2:41 PM
 To:  OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements



Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-14 Thread Susanne Balle
Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:

 Diagrams in jpeg format..

 On 10/12/14 10:06 PM, Phillip Toohill phillip.tooh...@rackspace.com
 wrote:

 Hello all,
 
 Here are some additional diagrams and docs. Not incredibly detailed, but
 they should get the point across.
 
 Feel free to edit if needed.
 
 Once we come to some kind of agreement and understanding I can rewrite
 these more to be thorough and get them in a more official place. Also, I
 understand there are other use cases not shown in the initial docs, so this
 is a good time to collaborate to make this more thought out.
 
 Please feel free to ping me with any questions,
 
 Thank you
 
 
 Google DOCS link for FLIP folder:
 
 https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWM&usp=sharing
 
 - Diagrams are draw.io-based and can be opened from within Drive by
 selecting the appropriate application.
 
 On 10/7/14 2:25 PM, Brandon Logan brandon.lo...@rackspace.com wrote:
 
 I'll add some more info to this as well:
 
 Neutron LBaaS creates the neutron port for the VIP in the plugin layer
 before drivers ever have any control.  In the case of an async driver,
 it will then call the driver's create method, and then return to the
 user the vip info.  This means the user will know the VIP before the
 driver even finishes creating the load balancer.
 
 So if Octavia is just going to create a floating IP and then associate
 that floating IP to the neutron port, there is the problem of the user
 not ever seeing the correct VIP (which would be the floating IP).
 
 So really, we need to have a very detailed discussion on what the
 options are for us to get this to work for those of us intending to use
 floating ips as VIPs while also working for those only requiring a
 neutron port.  I'm pretty sure this will require changing the way V2
 behaves, but there's more discussion points needed on that.  Luckily, V2
 is in a feature branch and not merged into Neutron master, so we can
 change it pretty easily.  Phil and I will bring this up in the meeting
 tomorrow, which may lead to a meeting topic in the neutron lbaas
 meeting.
 
 Thanks,
 Brandon
 
 
 On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
  Hello All,
 
  I wanted to start a discussion on floating IP management and ultimately
  decide how the LBaaS group wants to handle the association.
 
  There is a need to utilize floating IPs (FLIP) and their API calls to
  associate a FLIP to the neutron port that we currently spin up.
 
  See DOCS here:
 
  
 
 http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_create.html
 
  Currently, LBaaS will make internal service calls (clean interface :/)
 to create and attach a Neutron port.
  The VIP from this port is added to the Loadbalancer object of the Load
 balancer configuration and returned to the user.
 
  This creates a bit of a problem if we want to associate a FLIP with the
 port and display the FLIP to the user instead of
  the ports VIP because the port is currently created and attached in the
 plugin and there is no code anywhere to handle the FLIP
  association.
 
  To keep this short and to the point:
 
  We need to discuss where and how we want to handle this association. I
 have a few questions to start it off.
 
  Do we want to add logic in the plugin to call the FLIP association API?
 
  If we have logic in the plugin should we have configuration that
 identifies whether to use/return the FLIP instead of the port VIP?
 
  Would we rather have logic for FLIP association in the drivers?
 
  If logic is in the drivers would we still return the port VIP to the
 user then later overwrite it with the FLIP?
  Or would we have configuration to not return the port VIP initially,
 but an additional query would show the associated FLIP.
 
 
  Is there an internal service call for this, and if so would we use it
 instead of API calls?
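For reference, the floating IP association the questions above revolve around boils down to request bodies like the following. This is only a sketch of the payloads matching the Neutron v2.0 docs linked earlier; the UUID-ish values are placeholders:

```python
import json

def flip_create_body(floating_network_id, port_id=None):
    """Body for POST /v2.0/floatingips.

    Passing port_id associates the FLIP at creation time; omitting it
    allocates an unassociated FLIP.
    """
    flip = {"floating_network_id": floating_network_id}
    if port_id:
        flip["port_id"] = port_id
    return {"floatingip": flip}

def flip_associate_body(port_id):
    """Body for PUT /v2.0/floatingips/{id}.

    A port_id of None disassociates the FLIP from its port.
    """
    return {"floatingip": {"port_id": port_id}}

# Placeholder IDs, purely illustrative
create = flip_create_body("ext-net-uuid", port_id="vip-port-uuid")
print(json.dumps(create))
```

Whether the plugin or the driver issues these calls is exactly the open question; the payloads are the same either way.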
 
 
  Theres plenty of other thoughts and questions to be asked and discussed
 in regards to FLIP handling,
 hopefully this will get us going. I'm certain I may not be completely
understanding this, and it is the hope of this email to clarify any
 uncertainties.
 
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Susanne Balle
I 100% agree with what Brandon wrote below and that is why IMHO they go
together and should be part of the same codebase.

Susanne

On Tue, Sep 2, 2014 at 1:12 AM, Brandon Logan brandon.lo...@rackspace.com
 wrote:

I think the best course of action is to get Octavia itself into the same
codebase as LBaaS (Neutron or spun out).  They do go together, and the
maintainers will almost always be the same for both.  This makes even
more sense when LBaaS is spun out into its own project.


On Tue, Sep 2, 2014 at 1:12 AM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 Hi Susanne and everyone,

 My opinions are that keeping it in stackforge until it gets mature is
 the best solution.  I'm pretty sure we can all agree on that.  Whenever
 it is mature then, and only then, we should try to get it into openstack
 one way or another.  If Neutron LBaaS v2 is still incubated then it
 should be relatively easy to get it in that codebase.  If Neutron LBaaS
 has already spun out, even easier for us.  If we want Octavia to just
 become an openstack project all its own then that will be the difficult
 part.

 I think the best course of action is to get Octavia itself into the same
 codebase as LBaaS (Neutron or spun out).  They do go together, and the
 maintainers will almost always be the same for both.  This makes even
 more sense when LBaaS is spun out into its own project.

 I really think all of the answers to these questions will fall into
 place when we actually deliver a product that we are all wanting and
 talking about delivering with Octavia.  Once we prove that we can all
 come together as a community and manage a product from inception to
 maturity, we will then have the respect and trust to do what is best for
 an Openstack LBaaS product.

 Thanks,
 Brandon

 On Mon, 2014-09-01 at 10:18 -0400, Susanne Balle wrote:
  Kyle, Adam,
 
 
 
  Based on this thread Kyle is suggesting the follow moving forward
  plan:
 
 
 
  1) We incubate Neutron LBaaS V2 in the “Neutron” incubator “and freeze
  LBaas V1.0”
  2) “Eventually” It graduates into a project under the networking
  program.
  3) “At that point” We deprecate Neutron LBaaS v1.
 
 
 
  The words in “xx” are words I added to make sure I/We understand the
  whole picture.
 
 
 
  And as Adam mentions: Octavia != LBaaS-v2. Octavia is a peer to F5 /
  Radware / A10 / etc appliances which is a definition I agree with BTW.
 
 
 
  What I am trying to now understand is how we will move Octavia into
  the new LBaaS project?
 
 
 
  If we do it later rather than develop Octavia in tree under the new
  incubated LBaaS project when do we plan to bring it in-tree from
  Stackforge? Kilo? Later? When LBaaS is a separate project under the
  Networking program?

 
 
  What are the criteria to bring a driver into the LBaaS project and
  what do we need to do to replace the existing reference driver? Maybe
  adding a software driver to LBaaS source tree is less of a problem
  than converting a whole project to an OpenStack project.

 
 
  Again I am open to both directions I just want to make sure we
  understand why we are choosing to do one or the other and that our
   decision is based on data and not emotions.
 
 
 
  I am assuming that keeping Octavia in Stackforge will increase the
  velocity of the project and allow us more freedom which is goodness.
  We just need to have a plan to make it part of the Openstack LBaaS
  project.
 
 
 
  Regards Susanne
 
 
 
 
  On Sat, Aug 30, 2014 at 2:09 PM, Adam Harwell
  adam.harw...@rackspace.com wrote:
  Only really have comments on two of your related points:
 
 
   [Susanne] To me Octavia is a driver so it is very hard for me
  to think of it as a standalone project. It needs the new
  Neutron LBaaS v2 to function which is why I think of them
  together. This of course can change since we can add whatever
  layers we want to Octavia.
 
 
  [Adam] I guess I've always shared Stephen's
  viewpoint — Octavia != LBaaS-v2. Octavia is a peer to F5 /
   Radware / A10 / etc appliances, not to an Openstack API layer
  like Neutron-LBaaS. It's a little tricky to clearly define
  this difference in conversation, and I have noticed that quite
  a few people are having the same issue differentiating. In a
  small group, having quite a few people not on the same page is
  a bit scary, so maybe we need to really sit down and map this
  out so everyone is together one way or the other.
 
 
   [Susanne] Ok, now I am confused… But I agree with you that it
   needs to focus on our use cases. I remember us discussing
   Octavia being the reference implementation for OpenStack LBaaS
  (whatever that is). Has that changed while I was on vacation?
 
 
  [Adam] I believe that having the Octavia driver (not the
  Octavia codebase itself, technically) become the reference
  implementation

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Susanne Balle
Doug

I agree with you but I need to understand the options. Susanne

 And I agree with Brandon’s sentiments.  We need to get something built
before I’m going to worry too
 much about where it should live.  Is this a candidate to get sucked into
LBaaS?  Sure.  Could the reverse
 happen?  Sure.  Let’s see how it develops.


On Tue, Sep 2, 2014 at 11:45 AM, Doug Wiegley do...@a10networks.com wrote:

  Hi all,

   On the other hand one could also say that Octavia is the ML2
 equivalent of LBaaS. The equivalence here is very loose. Octavia would be a
 service-VM framework for doing load balancing using a variety of drivers.
 The drivers ultimately are in charge of using backends like haproxy or
 nginx running on the service VM to implement lbaas configuration.

  This, exactly.  I think it’s much fairer to define Octavia as an LBaaS
 purpose-built service vm framework, which will use nova and haproxy
 initially to provide a highly scalable backend. But before we get into
 terminology misunderstandings, there are a bunch of different “drivers” at
 play here, exactly because this is a framework:

- Neutron lbaas drivers – what we all know and love
- Octavia’s “network driver” - this is a piece of glue that exists to
hide internal calls we have to make into Neutron until clean interfaces
exist.  It might be a no-op in the case of an actual neutron lbaas driver,
which could serve that function instead.
- Octavia’s “vm driver” - this is a piece of glue between the octavia
controller and the nova VMs that are doing the load balancing.
- Octavia’s “compute driver” - you guessed it, an abstraction to Nova
and its scheduler.

 Places that can be the “front-end” for Octavia:

- Neutron LBaaS v2 driver
- Neutron LBaaS v1 driver
- It’s own REST API

 Things that could have their own VM drivers:

- haproxy, running inside nova
- Nginx, running inside nova
- Anything else you want, running inside any hypervisor you want
- Vendor soft appliances
- Null-out the VM calls and go straight to some other backend?  Sure,
though I’m not sure I’d see the point.
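As a sketch only, the driver seams Doug describes could look something like the following. None of these class or method names exist in Octavia yet; they are purely illustrative of the framework shape:

```python
import abc

# Hypothetical interfaces -- not actual Octavia code; names are invented
# here to illustrate the "network driver" / "compute driver" / "vm driver"
# split described above.

class NetworkDriver(abc.ABC):
    """Glue hiding direct Neutron calls (could be a no-op when a Neutron
    LBaaS driver already does the plumbing)."""
    @abc.abstractmethod
    def plug_vip(self, load_balancer, vip):
        ...

class ComputeDriver(abc.ABC):
    """Abstraction over Nova and its scheduler."""
    @abc.abstractmethod
    def boot(self, image, flavor):
        ...

class VMDriver(abc.ABC):
    """Glue between the Octavia controller and the VMs doing the
    load balancing (haproxy, nginx, a vendor soft appliance, ...)."""
    @abc.abstractmethod
    def deploy_config(self, vm, config):
        ...

class NoopNetworkDriver(NetworkDriver):
    """The no-op case: a real Neutron LBaaS driver serves this function."""
    def plug_vip(self, load_balancer, vip):
        return None

driver = NoopNetworkDriver()
print(driver.plug_vip("lb-1", "10.0.0.5"))  # None
```

The point of the abstract seams is that each backend choice (nginx vs. haproxy, Nova vs. some other hypervisor manager) swaps out behind one interface without touching the controller.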

 There are quite a few synergies with other efforts, and we’re monitoring
 them, but not waiting for any of them.

  And I agree with Brandon’s sentiments.  We need to get something built
 before I’m going to worry too much about where it should live.  Is this a
 candidate to get sucked into LBaaS?  Sure.  Could the reverse happen?
  Sure.  Let’s see how it develops.

  Incidentally, we are currently having a debate over the use of the term
 “vm” (and “vm driver”) as the name to describe octavia’s backends.  Feel
 free to chime in here: https://review.openstack.org/#/c/117701/

  Thanks,
 doug


   From: Salvatore Orlando sorla...@nicira.com

 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, September 2, 2014 at 9:05 AM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron][lbaas][octavia]

   Hi Susanne,

  I'm just trying to gain a good understanding of the situation here.
 More comments and questions inline.

  Salvatore

 On 2 September 2014 16:34, Susanne Balle sleipnir...@gmail.com wrote:

 Salvatore

  Thanks for your clarification below around the blueprint.

   For LBaaS v2 therefore the relationship between it and Octavia should
 be the same as with any other
  backend. I see Octavia has a blueprint for a network driver - and the
 derivable of that should definitely be
  part of the LBaaS project.

   For the rest, it would seem a bit strange to me if the LBaaS project
 incorporated a backend as well. After
   all, LBaaS v1 did not incorporate haproxy!
  Also, as Adam points out, Nova does not incorporate an Hypervisor.

  In my vision Octavia is a LBaaS framework that should not be tied to
 ha-proxy. The interfaces should be clean and at a high enough level that we
 can switch load-balancer. We should be able to switch the load-balancer to
 nginx so to me the analogy is more Octavia+LBaaSV2 == nova and hypervisor
 == load-balancer.


  Indeed I said that it would have been initially tied to haproxy
 considering the blueprints currently defined for octavia, but I'm sure the
 solution could leverage nginx or something else in the future.

  I think however it is correct to say that LBaaS v2 will have an Octavia
 driver on par with A10, radware, nestscaler and others.
 (Correct me if I'm wrong) On the other hand Octavia, within its
 implementation, might use different drivers - for instance nginx or
 haproxy. And in theory it cannot be excluded that the same appliance might
 implement some vips using haproxy and others using nginx.


  I am not sure the group is in agreement on the definition I just wrote.
 Also going back the definition of Octavia being a backend then I agree that
 we should write a blueprint to incorporate Octavia

Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-09-02 Thread Susanne Balle
Just wanted to let you know that sahara has moved to server groups for
anti-affinity. This is IMHO the way we should do it as well.

Susanne

Jenkins (Code Review) rev...@openstack.org
5:46 PM (0 minutes ago)
to Andrew, Sahara, Alexander, Sergey, Michael, Sergey, Vitaly, Dmitry,
Trevor
Jenkins has posted comments on this change.

Change subject: Switched anti-affinity feature to server groups
..


Patch Set 15: Verified+1

Build succeeded.

- gate-sahara-pep8
http://logs.openstack.org/59/112159/15/check/gate-sahara-pep8/15869b3 :
SUCCESS in 3m 46s
- gate-sahara-docs
http://docs-draft.openstack.org/59/112159/15/check/gate-sahara-docs/dd9eecd/doc/build/html/
:
SUCCESS in 4m 20s
- gate-sahara-python26
http://logs.openstack.org/59/112159/15/check/gate-sahara-python26/027c775 :
SUCCESS in 4m 53s
- gate-sahara-python27
http://logs.openstack.org/59/112159/15/check/gate-sahara-python27/08f492a :
SUCCESS in 3m 36s
- check-tempest-dsvm-full
http://logs.openstack.org/59/112159/15/check/check-tempest-dsvm-full/e30530a :
SUCCESS in 59m 21s
- check-tempest-dsvm-postgres-full
http://logs.openstack.org/59/112159/15/check/check-tempest-dsvm-postgres-full/9e90341
:
SUCCESS in 1h 19m 32s
- check-tempest-dsvm-neutron-heat-slow
http://logs.openstack.org/59/112159/15/check/check-tempest-dsvm-neutron-heat-slow/70b1955
:
SUCCESS in 21m 30s
- gate-sahara-pylint
http://logs.openstack.org/59/112159/15/check/gate-sahara-pylint/55250e1 :
SUCCESS in 5m 18s (non-voting)

--
To view, visit https://review.openstack.org/112159
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I501438d84f3a486dad30081b05933f59ebab4858
Gerrit-PatchSet: 15
Gerrit-Project: openstack/sahara
Gerrit-Branch: master
Gerrit-Owner: Andrew Lazarev alaza...@mirantis.com
Gerrit-Reviewer: Alexander Ignatov aigna...@mirantis.com
Gerrit-Reviewer: Andrew Lazarev alaza...@mirantis.com
Gerrit-Reviewer: Dmitry Mescheryakov dmescherya...@mirantis.com
Gerrit-Reviewer: Jenkins
Gerrit-Reviewer: Michael McCune mimcc...@redhat.com
Gerrit-Reviewer: Sahara Hadoop Cluster CI elastic-hadoop...@mirantis.com
Gerrit-Reviewer: Sergey Lukjanov slukja...@mirantis.com
Gerrit-Reviewer: Sergey Reshetnyak sreshetn...@mirantis.com
Gerrit-Reviewer: Trevor McKay tmc...@redhat.com
Gerrit-Reviewer: Vitaly Gridnev vgrid...@mirantis.com
Gerrit-HasComments: No


On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 Nova scheduler has ServerGroupAffinityFilter and
 ServerGroupAntiAffinityFilter, which do the colocation and apolocation
 for VMs.  I think this is something we've discussed before about taking
 advantage of nova's scheduling.  I need to verify that this will work
 with what we (RAX) plan to do, but I'd like to get everyone else's
 thoughts.  Also, if we do decide this works for everyone involved,
 should we make it mandatory that the nova-compute services are running
 these two filters?  I'm also trying to see if we can use this to also do
 our own colocation and apolocation on load balancers, but it looks like
 it will be a bit complex if it can even work.  Hopefully, I can have
 something definitive on that soon.
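As an illustration of what those filters decide (a simplified stand-in, not Nova's actual filter code): with the anti-affinity policy, a candidate host is rejected if it already runs a member of the server group.

```python
def anti_affinity_hosts(candidate_hosts, group_members, instance_host_map):
    """Mimic ServerGroupAntiAffinityFilter's decision: drop any candidate
    host that already runs a member of the server group.

    instance_host_map maps instance name -> host it was scheduled to.
    """
    used = {instance_host_map[m] for m in group_members
            if m in instance_host_map}
    return [h for h in candidate_hosts if h not in used]

# Made-up example: two group members already placed on compute1/compute3,
# so only compute2 survives the filter for the next member.
hosts = ["compute1", "compute2", "compute3"]
members = ["amphora-a", "amphora-b"]
placements = {"amphora-a": "compute1", "amphora-b": "compute3"}
print(anti_affinity_hosts(hosts, members, placements))  # ['compute2']
```

The affinity filter is the mirror image (keep only hosts in `used`), which is why letting Nova's scheduler handle both is attractive compared to reimplementing placement logic in Octavia.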

 Thanks,
 Brandon
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-01 Thread Susanne Balle
Kyle, Adam,



Based on this thread Kyle is suggesting the follow moving forward plan:



1) We incubate Neutron LBaaS V2 in the “Neutron” incubator “and freeze
LBaas V1.0”
2) “Eventually” It graduates into a project under the networking program.
3) “At that point” We deprecate Neutron LBaaS v1.



The words in “xx” are words I added to make sure I/We understand the whole
picture.



And as Adam mentions: Octavia != LBaaS-v2. Octavia is a peer to F5 /
Radware / A10 / etc *appliances* which is a definition I agree with BTW.



What I am trying to now understand is how we will move Octavia into the new
LBaaS project?



If we do it later rather than develop Octavia in tree under the new
incubated LBaaS project when do we plan to bring it in-tree from
Stackforge? Kilo? Later? When LBaaS is a separate project under the
Networking program?



What are the criteria to bring a driver into the LBaaS project and what do
we need to do to replace the existing reference driver? Maybe adding a
software driver to LBaaS source tree is less of a problem than converting a
whole project to an OpenStack project.



Again I am open to both directions I just want to make sure we understand
why we are choosing to do one or the other and that our  decision is based
on data and not emotions.



I am assuming that keeping Octavia in Stackforge will increase the velocity
of the project and allow us more freedom which is goodness. We just need to
have a plan to make it part of the Openstack LBaaS project.



Regards Susanne


On Sat, Aug 30, 2014 at 2:09 PM, Adam Harwell adam.harw...@rackspace.com
wrote:

   Only really have comments on two of your related points:

  [Susanne] To me Octavia is a driver so it is very hard for me to think of
 it as a standalone project. It needs the new Neutron LBaaS v2 to function
 which is why I think of them together. This of course can change since we
 can add whatever layers we want to Octavia.

  [Adam] I guess I've always shared Stephen's viewpoint — Octavia !=
 LBaaS-v2. Octavia is a peer to F5 / Radware / A10 / etc appliances, not
 to an Openstack API layer like Neutron-LBaaS. It's a little tricky to
 clearly define this difference in conversation, and I have noticed that
 quite a few people are having the same issue differentiating. In a small
 group, having quite a few people not on the same page is a bit scary, so
 maybe we need to really sit down and map this out so everyone is together
 one way or the other.

  [Susanne] Ok, now I am confused… But I agree with you that it needs to
 focus on our use cases. I remember us discussing Octavia being the reference
 implementation for OpenStack LBaaS (whatever that is). Has that changed
 while I was on vacation?

  [Adam] I believe that having the Octavia driver (not the Octavia
 codebase itself, technically) become the reference implementation for
 Neutron-LBaaS is still the plan in my eyes. The Octavia Driver in
 Neutron-LBaaS is a separate bit of code from the actual Octavia project,
 similar to the way the A10 driver is a separate bit of code from the A10
 appliance. To do that though, we need Octavia to be fairly close to fully
 functional. I believe we can do this because even though the reference
 driver would then require an additional service to run, what it requires is
 still fully-open-source and (by way of our plan) available as part of
 OpenStack core.

   --Adam

  https://keybase.io/rm_you


   From: Susanne Balle sleipnir...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Friday, August 29, 2014 9:19 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [neutron][lbaas][octavia]

Stephen



 See inline comments.



 Susanne



 -



 Susanne--



 I think you are conflating the difference between OpenStack incubation
 and Neutron incubator. These are two very different matters and should be
 treated separately. So, addressing each one individually:



 *OpenStack Incubation*

 I think this has been the end-goal of Octavia all along and continues to
 be the end-goal. Under this scenario, Octavia is its own stand-alone
 project with its own PTL and core developer team, its own governance, and
 should eventually become part of the integrated OpenStack release. No
 project ever starts out as OpenStack incubated.



 [Susanne] I totally agree that the end goal is for Neutron LBaaS to become
 its own incubated project. I did miss the nuance that was pointed out by
 Mestery in an earlier email that if a Neutron incubator project wants to
 become a separate project it will have to apply for incubation again or at
 that time. It was my understanding that such a Neutron incubated project
 would be grandfathered in but again we do not have much details on the
 process yet.



 To me Octavia is a driver so it is very hard for me to think

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-29 Thread Susanne Balle
 be another implementation which lives along-side said 3rd party
vendor products (plugging into a higher level LBaaS layer via a driver),
and because we don't want to have to compromise certain design features of
Octavia to meet the lowest common denominator 3rd party vendor product.
(3rd party vendors are welcome, but we will not make design compromises to
meet the needs of a proprietary product-- compatibility with available
open-source products and standards trumps this.)
- The end-game for the above point is: In the future I see Openstack
LBaaS (or whatever the project calls itself) being a separate but
complimentary project to Octavia.
- While its true that we would like Octavia to become the reference
implementation for Neutron LBaaS, we are nowhere near being able to deliver
on that. Attempting to become a part of Neutron LBaaS right now is likely
just to create frustration (and very little merged code) for both the
Octavia and Neutron teams.



 So given that the only code in Octavia right now are a few database
 migrations, we are very, very far away from being ready for either
 OpenStack incubation or the Neutron incubator project. I don't think it's
 very useful to be spending time right now worrying about either of these
 outcomes:  We should be working on Octavia!

 Please also understand:  I realize that probably the reason you're asking
 this right now is because you have a mandate within your organization to
 use only official OpenStack branded components, and if Octavia doesn't
 fall within that category, you won't be able to use it.  Of course everyone
 working on this project wants to make that happen too, so we're doing
 everything we can to make sure we don't jeopardize that possibility. And
 there are enough voices in this project that want that to happen, so I
 think if we strayed from the path to get there, there would be sufficient
 clangor over this that it would be hard to miss. But I don't think there's
 anyone at all at this time that can honestly give you a promise that
 Octavia definitely will be incubated and will definitely end up in the
 integrated OpenStack release.

 If you want to increase the chances of that happening, please help push
 the project forward!

 Thanks,
 Stephen



 On Thu, Aug 28, 2014 at 2:57 PM, Susanne Balle sleipnir...@gmail.com
 wrote:

  I would like to discuss the pros and cons of putting Octavia into the
 Neutron LBaaS incubator project right away. If it is going to be the
 reference implementation for LBaaS v2 then I believe Octavia belongs in
 Neutron LBaaS v2 incubator.

 The Pros:
 * Octavia is in Openstack incubation right away along with the lbaas v2
 code. We do not have to apply for incubation later on.
 * As an incubation project we have our own core and should be able to commit
 our code
 * We are starting out as an OpenStack incubated project

 The Cons:
 * Not sure of the velocity of the project
 * Incubation not well defined.

 If Octavia starts as a standalone stackforge project we are assuming that
 it would be looked on favorably when it is time to move it into incubated
 status.

 Susanne



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] Design sessions for Neutron LBaaS. What do we want/need?

2014-08-28 Thread Susanne Balle
LBaaS team,

As we discussed in the Weekly LBaaS meeting this morning we should make
sure we get the design sessions scheduled that we are interested in.

We currently agreed on the following:

* Neutron LBaaS: we want to schedule 2 sessions. I am assuming that we want
to go over status and also the whole incubator thingy and how we will best
move forward.

* Octavia: We want to schedule 2 sessions.
---  During one of the sessions I would like to discuss the pros and cons
of putting Octavia into the Neutron LBaaS incubator project right away. If
it is going to be the reference implementation for LBaaS v2 then I believe
Octavia belongs in Neutron LBaaS v2 incubator.

* Flavors which should be coordinated with markmcclain and enikanorov.
--- https://review.openstack.org/#/c/102723/

Is this too many sessions given the constraints? I am assuming that we can
also meet at the pods like we did at the last summit.

thoughts?

Regards Susanne

Thierry Carrez thie...@openstack.org
Aug 27 (1 day ago)
to OpenStack
Hi everyone,

I've been thinking about what changes we can bring to the Design Summit
format to make it more productive. I've heard the feedback from the
mid-cycle meetups and would like to apply some of those ideas for Paris,
within the constraints we have (already booked space and time). Here is
something we could do:

Day 1. Cross-project sessions / incubated projects / other projects

I think that worked well last time. 3 parallel rooms where we can
address top cross-project questions, discuss the results of the various
experiments we conducted during juno. Don't hesitate to schedule 2 slots
for discussions, so that we have time to come to the bottom of those
issues. Incubated projects (and maybe other projects, if space allows)
occupy the remaining space on day 1, and could occupy pods on the
other days.

Day 2 and Day 3. Scheduled sessions for various programs

That's our traditional scheduled space. We'll have 33% fewer slots
available. So, rather than trying to cover all the scope, the idea would
be to focus those sessions on specific issues which really require
face-to-face discussion (which can't be solved on the ML or using spec
discussion) *or* require a lot of user feedback. That way, appearing in
the general schedule is very helpful. This will require us to be a lot
stricter on what we accept there and what we don't -- we won't have
space for courtesy sessions anymore, and traditional/unnecessary
sessions (like my traditional release schedule one) should just move
to the mailing-list.

Day 4. Contributors meetups

On the last day, we could try to split the space so that we can conduct
parallel midcycle-meetup-like contributors gatherings, with no time
boundaries and an open agenda. Large projects could get a full day,
smaller projects would get half a day (but could continue the discussion
in a local bar). Ideally that meetup would end with some alignment on
release goals, but the idea is to make the best of that time together to
solve the issues you have. Friday would finish with the design summit
feedback session, for those who are still around.


I think this proposal makes the best use of our setup: discuss clear
cross-project issues, address key specific topics which need
face-to-face time and broader attendance, then try to replicate the
success of midcycle meetup-like open unscheduled time to discuss
whatever is hot at this point.

There are still details to work out (is it possible to split the space,
should we use the usual design summit CFP website to organize the
scheduled time...), but I would first like to have your feedback on
this format. Also if you have alternative proposals that would make a
better use of our 4 days, let me know.

Cheers,
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBass] Design sessions for Neutron LBaaS. What do we want/need?

2014-08-28 Thread Susanne Balle
With a corrected Subject. Susanne


On Thu, Aug 28, 2014 at 10:49 AM, Susanne Balle sleipnir...@gmail.com
wrote:


 LBaaS team,

 As we discussed in the Weekly LBaaS meeting this morning we should make
 sure we get the design sessions scheduled that we are interested in.

 We currently agreed on the following:

 * Neutron LBaaS: we want to schedule 2 sessions. I am assuming that we
 want to go over status and also the whole incubator thingy and how we will
 best move forward.

 * Octavia: We want to schedule 2 sessions.
 ---  During one of the sessions I would like to discuss the pros and cons
 of putting Octavia into the Neutron LBaaS incubator project right away. If
 it is going to be the reference implementation for LBaaS v2 then I believe
 Octavia belongs in Neutron LBaaS v2 incubator.

 * Flavors which should be coordinated with markmcclain and enikanorov.
 --- https://review.openstack.org/#/c/102723/

 Is this too many sessions given the constraints? I am assuming that we can
 also meet at the pods like we did at the last summit.

 thoughts?

 Regards Susanne

 Thierry Carrez thie...@openstack.org
 Aug 27 (1 day ago)
  to OpenStack
  Hi everyone,

 I've been thinking about what changes we can bring to the Design Summit
 format to make it more productive. I've heard the feedback from the
 mid-cycle meetups and would like to apply some of those ideas for Paris,
 within the constraints we have (already booked space and time). Here is
 something we could do:

 Day 1. Cross-project sessions / incubated projects / other projects

 I think that worked well last time. 3 parallel rooms where we can
 address top cross-project questions, discuss the results of the various
 experiments we conducted during juno. Don't hesitate to schedule 2 slots
 for discussions, so that we have time to come to the bottom of those
 issues. Incubated projects (and maybe other projects, if space allows)
 occupy the remaining space on day 1, and could occupy pods on the
 other days.

 Day 2 and Day 3. Scheduled sessions for various programs

 That's our traditional scheduled space. We'll have 33% fewer slots
 available. So, rather than trying to cover all the scope, the idea would
 be to focus those sessions on specific issues which really require
 face-to-face discussion (which can't be solved on the ML or using spec
 discussion) *or* require a lot of user feedback. That way, appearing in
 the general schedule is very helpful. This will require us to be a lot
 stricter on what we accept there and what we don't -- we won't have
 space for courtesy sessions anymore, and traditional/unnecessary
 sessions (like my traditional release schedule one) should just move
 to the mailing-list.

 Day 4. Contributors meetups

 On the last day, we could try to split the space so that we can conduct
 parallel midcycle-meetup-like contributors gatherings, with no time
 boundaries and an open agenda. Large projects could get a full day,
 smaller projects would get half a day (but could continue the discussion
 in a local bar). Ideally that meetup would end with some alignment on
 release goals, but the idea is to make the best of that time together to
 solve the issues you have. Friday would finish with the design summit
 feedback session, for those who are still around.


 I think this proposal makes the best use of our setup: discuss clear
 cross-project issues, address key specific topics which need
 face-to-face time and broader attendance, then try to replicate the
 success of midcycle meetup-like open unscheduled time to discuss
 whatever is hot at this point.

 There are still details to work out (is it possible to split the space,
 should we use the usual design summit CFP website to organize the
 scheduled time...), but I would first like to have your feedback on
 this format. Also if you have alternative proposals that would make a
 better use of our 4 days, let me know.

 Cheers,

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Susanne Balle
Brandon

I am not sure how ready that nova feature is for general use and have asked
our nova lead about that. He is on vacation but should be back by the start
of next week. I believe this is the right approach for us moving forward.

 We cannot make it mandatory to run the 2 filters, but we can say in the
 documentation that if these two filters aren't set, we cannot guarantee
 anti-affinity or affinity.

 The other way we can implement this is by using availability zones and host
 aggregates. This is one technique we use to make sure we deploy our
 in-cloud services in an HA model. This would also assume that the operator
 is setting up availability zones, which we cannot rely on.

http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/

 Sahara is currently using the following filters to support host affinity,
 which is probably because they did the work before ServerGroups existed. I
 am not advocating the use of those filters, but just showing you that we
 can document the feature and leave it up to the operator to set it up to
 get the right behavior.

Regards

Susanne

Anti-affinity
 One of the problems with Hadoop running on OpenStack is that there is no
 ability to control where a machine is actually running. We cannot be sure
 that two new virtual machines are started on different physical machines.
 As a result, any replication within the cluster is not reliable because all
 replicas may turn up on one physical machine. The anti-affinity feature
 provides the ability to explicitly tell Sahara to run specified processes
 on different compute nodes. This is especially useful for the Hadoop
 datanode process, to make HDFS replicas reliable.

 The Anti-Affinity feature requires certain scheduler filters to be enabled
 on Nova. Edit your /etc/nova/nova.conf in the following way:

[DEFAULT]

...

scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters=DifferentHostFilter,SameHostFilter

This feature is supported by all plugins out of the box.
http://docs.openstack.org/developer/sahara/userdoc/features.html
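
To make the filter mechanics concrete, here is a small sketch (the function
name and IDs are invented for illustration; the `different_host` scheduler
hint itself is what DifferentHostFilter consumes) of how a caller would
build the boot parameters for an anti-affine VM:

```python
# Sketch, not Sahara/Octavia code: build novaclient-style boot kwargs that
# ask the scheduler to place a new VM on a different host than existing
# instances. DifferentHostFilter consumes the 'different_host' hint;
# SameHostFilter analogously consumes 'same_host' for affinity.

def build_boot_kwargs(image, flavor, avoid_instance_ids):
    """Return kwargs for a nova servers.create()-style call (sketch)."""
    kwargs = {"image": image, "flavor": flavor}
    if avoid_instance_ids:
        # Hint values are UUIDs of instances the new VM must not share
        # a host with.
        kwargs["scheduler_hints"] = {"different_host": list(avoid_instance_ids)}
    return kwargs

# Example: boot a second datanode away from the first one.
kwargs = build_boot_kwargs("ubuntu-image", "m1.small", ["uuid-of-datanode-1"])
print(kwargs["scheduler_hints"])  # {'different_host': ['uuid-of-datanode-1']}
```

Note that if the filters aren't enabled in nova.conf, the hint is silently
ignored, which is exactly why it can only be documented, not guaranteed.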



On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan brandon.lo...@rackspace.com
wrote:

  Nova scheduler has ServerGroupAffinityFilter and
  ServerGroupAntiAffinityFilter, which do the colocation and apolocation
 for VMs.  I think this is something we've discussed before about taking
 advantage of nova's scheduling.  I need to verify that this will work
 with what we (RAX) plan to do, but I'd like to get everyone else's
 thoughts.  Also, if we do decide this works for everyone involved,
 should we make it mandatory that the nova-compute services are running
 these two filters?  I'm also trying to see if we can use this to also do
 our own colocation and apolocation on load balancers, but it looks like
 it will be a bit complex if it can even work.  Hopefully, I can have
 something definitive on that soon.

 Thanks,
 Brandon


Re: [openstack-dev] Design sessions for Neutron LBaaS. What do we want/need?

2014-08-28 Thread Susanne Balle
Let's use a different email thread to discuss if Octavia should be part of
the Neutron incubator project right away or not. I would like to keep the
two discussions separate.

Susanne




Re: [openstack-dev] [Neutron][LBass] Design sessions for Neutron LBaaS. What do we want/need?

2014-08-28 Thread Susanne Balle
Let's use a different email thread to discuss if Octavia should be part of
the Neutron incubator project right away or not. I would like to keep the
two discussions separate.



Susanne


On Thu, Aug 28, 2014 at 3:20 PM, Stephen Balukoff sbaluk...@bluebox.net
wrote:

 Hi Susanne--

 Regarding the Octavia sessions:  I think we probably will have enough to
 discuss that we could use two design sessions.  However, I also think that
 we can probably come to conclusions on whether Octavia should become a part
 of Neutron Incubator right away via discussion on this mailing list.  Do we
 want to have that discussion in another thread, or should we use this one?

 Stephen


 On Thu, Aug 28, 2014 at 7:51 AM, Susanne Balle sleipnir...@gmail.com
 wrote:

 With a corrected Subject. Susanne







 --
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807


Re: [openstack-dev] [Octavia] Octavia VM image design

2014-08-28 Thread Susanne Balle
I agree with Michael. We need to use the OpenStack tooling.

Sahara is encountering some of the same issues we are as they build up
their Hadoop VMs/clusters.

See

http://docs.openstack.org/developer/sahara/userdoc/vanilla_plugin.html
http://docs.openstack.org/developer/sahara/userdoc/diskimagebuilder.html

for inspiration,

Susanne



On Wed, Aug 27, 2014 at 6:21 PM, Michael Johnson johnso...@gmail.com
wrote:

 I am investigating building scripts that use diskimage-builder
 (https://github.com/openstack/diskimage-builder) to create a purpose
 built image.  This should allow some flexibility in the base image
 and the output image format (including a path to docker).

 The definition of purpose built is open at this point.  I will
 likely try to have a minimal Ubuntu based VM image as a starting
 point/test case and we can add/change as necessary.
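
As a rough illustration of what such a script might invoke (the `haproxy`
element name and the output filename are assumptions, nothing is decided
yet), a diskimage-builder call could look like the following; the command
is assembled and echoed rather than executed, since a real build needs the
elements installed and takes a while:

```shell
# Sketch only: assemble a disk-image-create command line for a minimal
# Ubuntu-based image. 'ubuntu' and 'vm' are standard diskimage-builder
# elements (cloud image base plus bootloader/partitioning); the 'haproxy'
# element, which would install HAProxy 1.5, is hypothetical.
BASE_ELEMENTS="ubuntu vm"
EXTRA_ELEMENTS="haproxy"
OUTPUT="amphora-x64-haproxy"   # the tool appends .qcow2 by default
echo "disk-image-create -o $OUTPUT $BASE_ELEMENTS $EXTRA_ELEMENTS"
```

Swapping the base element (or the output format) is then a one-line change,
which is the flexibility Michael mentions above.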

 Michael


 On Wed, Aug 27, 2014 at 2:12 PM, Dustin Lundquist dus...@null-ptr.net
 wrote:
  It seems to me there are two major approaches to the Octavia VM design:
 
  1. Start with a standard Linux distribution (e.g. Ubuntu 14.04 LTS) and
  install HAProxy 1.5 and the Octavia control layer.
  2. Develop a minimal purpose-driven distribution (similar to m0n0wall)
  with just HAProxy, iproute2 and a Python runtime for the control layer.
 
  The primary difference here is additional development effort for option 2,
  versus the increased image size of option 1. Using Ubuntu and CirrOS
  images as representative of the two options, it looks like the image size
  difference is about a factor of 20 for a full-featured distribution. If
  one of the HA models is to spin up a replacement instance on failure, the
  image size could significantly affect fail-over time.
 
  For initial work I think starting with a standard distribution would be
  sensible, but we should target systemd (Debian adopted systemd as its new
  default, and Ubuntu is following suit). I wanted to find out if there is
  interest in a minimal Octavia image, and if so this may affect design
  decisions on the instance control plane component.
 
 
  -Dustin
 


Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Susanne Balle
We need to be careful. I believe that a user can use these filters to keep
requesting VMs from nova until scheduling fails, and in that way work out
the size of your cloud.

Also, given that nova now has ServerGroups, let's not make a quick decision
on using something that is being replaced with something better. I suggest
we investigate ServerGroups a little more before we discard them.

The operator should really decide how he/she wants Anti-affinity by setting
the right filters in nova.

Susanne
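
For reference while we investigate, the ServerGroups flow looks roughly
like the following sketch of the Nova v2 API request bodies (the names and
IDs are illustrative, and this is not a recommendation yet):

```python
# Sketch of ServerGroup-based anti-affinity: create a group with the
# 'anti-affinity' policy, then boot each VM with the group scheduler hint
# so ServerGroupAntiAffinityFilter places the members on distinct hosts.

def server_group_body(name):
    """Request body for POST /v2/{tenant_id}/os-server-groups."""
    return {"server_group": {"name": name, "policies": ["anti-affinity"]}}

def boot_body(name, image_ref, flavor_ref, group_id):
    """Request body for POST /v2/{tenant_id}/servers with the group hint."""
    return {
        "server": {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref},
        # ServerGroupAntiAffinityFilter reads the 'group' hint (group UUID).
        "os:scheduler_hints": {"group": group_id},
    }

group = server_group_body("lb-123-group")
vm = boot_body("lb-123-vm-1", "image-uuid", "2", "group-uuid")
print(vm["os:scheduler_hints"])  # {'group': 'group-uuid'}
```

Unlike the DifferentHostFilter approach, the caller does not have to track
which instance UUIDs to avoid; the group carries the policy.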


On Thu, Aug 28, 2014 at 5:12 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 Trevor and I just worked through some scenarios to make sure it can
  handle colocation and apolocation.  It looks like it does; however, not
  everything will be so simple, especially when we introduce horizontal
 scaling.  Trevor's going to write up an email about some of the caveats
 but so far just using a table to track what LB has what VMs and on what
 hosts will be sufficient.

 Thanks,
 Brandon

 On Thu, 2014-08-28 at 13:49 -0700, Stephen Balukoff wrote:
  I'm trying to think of a use case that wouldn't be satisfied using
  those filters and am not coming up with anything. As such, I don't see
  a problem using them to fulfill our requirements around colocation and
  apolocation.
 
 
  Stephen
 
 
  On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan
  brandon.lo...@rackspace.com wrote:
   Yeah we were looking at the SameHost and DifferentHost filters and that
   will probably do what we need.  Though I was hoping we could do a
   combination of both, we can make it work with those filters I believe.
 
  Thanks,
  Brandon
 
[openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Susanne Balle
 I would like to discuss the pros and cons of putting Octavia into the
Neutron LBaaS incubator project right away. If it is going to be the
reference implementation for LBaaS v2 then I believe Octavia belongs in
the Neutron LBaaS v2 incubator.

The Pros:
* Octavia is in OpenStack incubation right away along with the LBaaS v2
code. We do not have to apply for incubation later on.
* As an incubated project we have our own core team and should be able to
commit our code
* We are starting out as an OpenStack incubated project

The Cons:
* Not sure of the velocity of the project
* Incubation not well defined.

If Octavia starts as a standalone stackforge project we are assuming that
it would be looked on favorably when the time comes to move it into
incubated status.

Susanne


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-08-28 Thread Susanne Balle
Just for us to learn about the incubator status, here are some of the info
on incubation:

https://wiki.openstack.org/wiki/Governance/Approved/Incubation
https://wiki.openstack.org/wiki/Governance/NewProjects

Susanne




Re: [openstack-dev] [Neutron][LBaaS] Use cases with regards to VIP and routers

2014-08-12 Thread Susanne Balle
In the context of Octavia and Neutron LBaaS. Susanne


On Mon, Aug 11, 2014 at 5:44 PM, Stephen Balukoff sbaluk...@bluebox.net
wrote:

 Susanne,

 Are you asking in the context of Load Balancer services in general, or in
 terms of the Neutron LBaaS project or the Octavia project?

 Stephen


 On Mon, Aug 11, 2014 at 9:04 AM, Doug Wiegley do...@a10networks.com
 wrote:

 Hi Susanne,

 While there are a few operators involved with LBaaS that would have good
 input, you might want to also ask this on the non-dev mailing list, for a
 larger sample size.

 Thanks,
 doug

 On 8/11/14, 3:05 AM, Susanne Balle sleipnir...@gmail.com wrote:

 Gang,
 I was asked the following questions around our Neutron LBaaS use cases:
 1.  Will there be a scenario where the "VIP" port will be in a different
 node from all the member "VMs" in a pool?
 2.  Also how likely is it for the LBaaS configured subnet to not have a
 "router" and just use the "extra_routes" option?
 3.  Is there a valid use case where customers will be using the
 "extra_routes" with subnets instead of the "routers"?
 (It would be great if you have some use case picture for this.)
 Feel free to chime in here and I'll summarize the answers.
 Regards Susanne
 






 --
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807



Re: [openstack-dev] [Neutron][LBaaS] Improvements to current reviews

2014-08-11 Thread Susanne Balle
I agree with Doug as well. We should update the current patch. Susanne


On Sun, Aug 10, 2014 at 8:18 AM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 I took a look at https://review.openstack.org/#/c/105331/ and had one
 minor issue which I think can be addressed. Prior to approving we need to
 understand what the state of the V2 API will be.
 Thanks
 Gary

   From: Vijay Venkatachalam vijay.venkatacha...@citrix.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Sunday, August 10, 2014 at 2:57 PM
 To: OpenStack List openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [Neutron][LBaaS] Improvements to current
 reviews

   Thanks Brandon for constant  improvisation.

 I agree with Doug. Please update the current one. We already have a large
 number of reviews :-). It will be difficult to manage if we add more.

 Thanks,
 Vijay

 Sent using CloudMagic

 On Sun, Aug 10, 2014 at 3:23 AM, Doug Wiegley do...@a10networks.com
 wrote:

  I think you should update the current reviews (new patch set, not
 additional review.)

 Doug


  On Aug 9, 2014, at 3:34 PM, Brandon Logan brandon.lo...@rackspace.com
 wrote:
 
  So I've done some work on improving the code on the current pending
  reviews, and would like to get people's opinions on whether I should
  add another patch set to those reviews, or add the changes as another
  review dependent on the pending ones.
 
  To be clear, no matter what the first review in the chain will not
  change:
  https://review.openstack.org/#/c/105331/
 
  However, if adding another patch set is preferable then the plugin and db
  implementation review would get another patch set and then obviously
  anything depending on that.
 
  https://review.openstack.org/#/c/105609/
 
  My opinion is that I'd like to get both of these in as a new patch set.
  Reason being that the reviews don't have any +2's and there is
  uncertainty because of the GBP discussion.  So, I don't think it'd be a
  major issue if a new patch set was created.  Speak up if you think
  otherwise.  I'd like to get as many people's thoughts as possible.
 
  The changes are:
 
  1) Added data models, which are just plain python objects mimicking the
  sql alchemy models, but without the overhead or dynamic nature of
  being sql alchemy models.  These data models are now returned by the
  database methods, instead of the sql alchemy objects.  Also, I moved the
  definition of the sql alchemy models into its own module.  I've been
  wanting to do this but since I thought I was running out of time I left
  it for later.
 
  These shouldn't cause many merge/rebase conflicts, but it will probably
  cause a few because the sql alchemy models were moved to a different
  module.
 
 
  2) The LoadBalancerPluginv2 no longer inherits from the
  LoadBalancerPluginDbv2.  The database is now a composite attribute of
  the plugin (i.e. plugin.db.get_loadbalancer()).  This cleans up the code
  a bit and removes the necessity for _delete_db_entity methods that the
  drivers needed because now they can actually call the database methods.
  Also, this makes testing more clear, though I have not added any tests
  for this because the previous tests are sufficient for now.  Adding the
  appropriate tests would add 1k or 2k lines most likely.
 
  This will likely cause more conflicts because the _db_delete_entity
  methods have been removed.  However, the new driver interface/mixins
  defined a db_method for all drivers to use, so if that is being used
  there shouldn't be any problems.
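
  A tiny sketch of the composition pattern described in point 2 (class
  bodies reduced to a toy in-memory store; the real classes live in the
  pending neutron-lbaas reviews):

```python
# Toy sketch of making the db layer a composite attribute of the plugin
# instead of a base class; drivers can then call plugin.db.* directly,
# removing the need for _delete_db_entity pass-through methods.

class LoadBalancerPluginDbv2:
    """Stand-in for the real db class; stores entities in a dict."""
    def __init__(self):
        self._loadbalancers = {}

    def create_loadbalancer(self, lb_id, data):
        self._loadbalancers[lb_id] = data
        return data

    def get_loadbalancer(self, lb_id):
        return self._loadbalancers[lb_id]


class LoadBalancerPluginv2:
    def __init__(self):
        # Composition, not inheritance: callers use plugin.db.get_loadbalancer()
        self.db = LoadBalancerPluginDbv2()


plugin = LoadBalancerPluginv2()
plugin.db.create_loadbalancer("lb-1", {"name": "web-lb"})
print(plugin.db.get_loadbalancer("lb-1"))  # {'name': 'web-lb'}
```

  The testing benefit Brandon mentions follows directly: the db attribute
  can be swapped for a fake in plugin unit tests.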
 
  Thanks,
  Brandon
 
 
 
 


Re: [openstack-dev] [Octavia] Minutes from 8/6/2014 meeting

2014-08-11 Thread Susanne Balle
Great notes! thanks it helped me catch up after vacation. :)


On Thu, Aug 7, 2014 at 4:33 AM, Stephen Balukoff sbaluk...@bluebox.net
wrote:

 On where to capture notes like this long-term:  I would say the wiki is
 more searchable for now. When we make the transition to IRC meetings, then
 the meeting bots will capture minutes and transcripts in the usual way and
 we can link to these from the wiki.


 On Thu, Aug 7, 2014 at 1:29 AM, Stephen Balukoff sbaluk...@bluebox.net
 wrote:

 Wow, Trevor! Thanks for capturing all that!


 On Wed, Aug 6, 2014 at 9:47 PM, Trevor Vardeman 
 trevor.varde...@rackspace.com wrote:

 Agenda items are numbered, and topics, as discussed, are described
 beneath in list format.

 1) Octavia Constitution and Project Direction Documents (Road map)
  a) The Constitution and Road map will potentially be adopted after
  another couple of days, giving those who were busy more time to review the
  information

 2) Octavia Design Proposals
 a) Difference between version 0.5 and 1.0 isn't huge
 b) Version 2 has many network topology changes and Layer 4 routing
 + This includes N node Active-Active
 + Would like to avoid Layer 2 connectivity with Load Balancers
 (included in version 1 however)
 + Layer router driver
 + Layer router controller
 + Long term solution
 c) After refining Version 1 document (with some scrutiny) all
 changes will be propagated to the Version 2 document
 d) Version 0.5 is unpublished
  e) All of the control layer (anything connected to the intermediate
  message bus in version 1) will be collapsed down to 1 daemon.
  + No scalable control, but scalable service delivery
  + Version 1 will be the first large-operator-compatible version,
  which will have both scalable control and scalable service delivery
 + 0.5 will be a good start
 - laying out ground work
 - rough topology for the end users
 - must be approved by the networking teams for each
 contributing company
  f) The portions under the control of neutron lbaas are the User API and
  the driver (for neutron lbaas)
 g) If neutron LBaaS is a sufficient front-end (user API doesn't
 suck), then Octavia will be kept as a vendor driver
 h) Potentially including a REST API on top of Octavia
 + Octavia is initially just a vendor driver, no real desire for
 another API in front of Octavia
 + If someone wants it, the work is trivial and can be done in
 another project at another time
 i) Octavia should be loosely coupled with Neutron; use a shim for
 network connectivity (one specifically for Neutron communication at the
 start)
 + This is going to hold any dirty hacks that would be required
 to get something done, keeping Octavia clean
 - Example: changing the mac address on a port
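The shim idea in 2(i) can be sketched as a minimal driver interface. This is an illustration only: the class and method names (`NetworkDriver`, `plug_vip`) are assumptions for the sketch, not Octavia's actual interface.

```python
from abc import ABC, abstractmethod


class NetworkDriver(ABC):
    """Abstract shim isolating the core from any one networking service."""

    @abstractmethod
    def plug_vip(self, load_balancer, vip_address):
        """Attach the VIP address to the load balancer's network port."""

    @abstractmethod
    def unplug_vip(self, load_balancer, vip_address):
        """Detach the VIP address."""


class NeutronDriver(NetworkDriver):
    """Neutron-specific shim. Any dirty hacks (e.g. rewriting a port's
    MAC address) stay contained here, keeping the Octavia core clean."""

    def plug_vip(self, load_balancer, vip_address):
        return f"neutron: plugged {vip_address} for {load_balancer}"

    def unplug_vip(self, load_balancer, vip_address):
        return f"neutron: unplugged {vip_address} for {load_balancer}"
```

Swapping networking backends then means writing another `NetworkDriver` subclass, with no change to the core.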

 3) Operator Network Topology Requirements
 a) One requirement is floating IPs.
 b) IPv6 is in demand, but is currently not supported reliably on
 Neutron
 + IPv6 would be represented as a different load balancer entity,
 and possibly include co-location with another Load Balancer
 c) Network interface plug-ability (potentially)
 d) Sections concerning front-end connectivity should be forwarded to
 each company's network specialists for review
 + Share findings on the mailing list, dissect the proposals
 with that information, and comment on what requirements need to be added, etc.

 4) HA/Failover Options/Solutions
 a) Rackspace may have a solution to this, but the conversation will
 be pushed off to the next meeting (at least)
 + Will gather more information from another member in Rackspace
 to provide to the ML for initial discussions
 b) One option for HA:  Spare pool option (similar to Libra)
 + Poor recovery time is a big problem
 c) Another option for HA:  Active/Passive
 + Bluebox uses one active and one passive configuration, and has
 sub-second failover; however, it is not resource-efficient
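The Libra-style spare-pool option in 4(b) can be sketched as follows; `SparePool` and its method names are hypothetical, for illustration only:

```python
import collections


class SparePool:
    """Spare-pool HA: keep pre-built standby appliances; on failure,
    promote a spare rather than rebuilding from scratch. Recovery time
    is bounded by reconfiguration rather than boot time -- but is still
    far slower than an active/passive pair with sub-second failover."""

    def __init__(self, spares):
        self.spares = collections.deque(spares)

    def replace_failed(self, failed_lb_config):
        if not self.spares:
            raise RuntimeError("spare pool exhausted; must boot a new appliance")
        spare = self.spares.popleft()
        # Push the dead balancer's config onto the promoted spare.
        return {"appliance": spare, "config": failed_lb_config}
```

The "poor recovery time" noted above shows up in the reconfiguration step, which is on the critical path of every failover.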

 Questions:
 Q:  What is the expectation for a release time-frame?
 A:  Wishful thinking: Octavia version 0.5 beta for Juno (probably not,
 but it would be awesome to push for that)

 Notes:
  + We need to pressure the Neutron core reviewers to review the Neutron
 LBaaS changes to get merges.
  + Version 2 front-end topology is different from that of Version 1.  Please
 review each individually, and thoroughly


 PS.  I re-wrote most of the information from the recording (thanks again
 Doug).  I have one question for everyone: should I just email this out
 after each meeting to the Octavia mailing list, or should I also add it to
 a page in an Octavia wiki for Meeting Notes/Minutes or something for review
 by anyone?  What are your thoughts?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 

[openstack-dev] [Neutron][LBaaS] Use cases with regards to VIP and routers

2014-08-11 Thread Susanne Balle
Gang,

I was asked the following questions around our Neutron LBaaS use cases:

1.  Will there be a scenario where the “VIP” port is on a different
node from all the member “VMs” in a pool?

2.  Also, how likely is it for the LBaaS-configured subnet to not have a
“router” and to just use the “extra_routes” option?

3.  Is there a valid use case where customers will use
“extra_routes” with subnets instead of “routers”? (It would be great
if you had a use-case picture for this.)

Feel free to chime in here and I'll summarize the answers.

Regards Susanne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] topics for Thurday's Weekly LBaaS meeting's agenda

2014-07-08 Thread Susanne Balle
Hi

I would like to discuss what talks we plan to do at the Paris summit and
who will be submitting what. The deadline for submitting talks is July 28,
so it is approaching.

Also, how many working sessions do we need, and what prep work do we need
to do before the summit?

I am personally interested in co-presenting a talk on Octavia and operator
requirements with Stephen and whoever else wants to contribute.

Regards Susanne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

2014-07-08 Thread Susanne Balle
Will take a look :-) Thanks for the huge amount of work put into this.


On Tue, Jul 8, 2014 at 8:48 AM, Avishay Balderman avish...@radware.com
wrote:

 Hi Brandon
 I think the patch should be broken into a few standalone sub-patches.
 As of now it is huge and review is a challenge :)
 Thanks
 Avishay


 -Original Message-
 From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
 Sent: Tuesday, July 08, 2014 5:26 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] LBaaS API Version 2 WIP in gerrit

 https://review.openstack.org/#/c/105331

 It's a WIP and the shim layer still needs to be completed.  It's a lot of
 code, I know.  Please review it thoroughly and point out what needs to
 change.

 Thanks,
 Brandon


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-07 Thread Susanne Balle
+1 to QUEUED status.


On Fri, Jul 4, 2014 at 5:27 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 Hi German,

 That actually brings up another thing that needs to be done.  There is
 no DELETED state.  When an entity is deleted, it is deleted from the
 database.  I'd prefer a DELETED state, so that should be another feature
 we implement afterwards.

 Thanks,
 Brandon

 On Thu, 2014-07-03 at 23:02 +, Eichberger, German wrote:
  Hi Jorge,
 
  +1 for QUEUED and DETACHED
 
  I would suggest making the time we keep entities in the DELETED
 state configurable. We use something like 30 days, too, but we have made it
 configurable to adapt to changes...
 
  German
 
  -Original Message-
  From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
  Sent: Thursday, July 03, 2014 11:59 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do
 not exist in a driver backend
 
  +1 to QUEUED status.
 
 For entities that have the concept of being attached/detached, why not
have a 'DETACHED' status to indicate that the entity is not provisioned at
all (i.e. the config is just stored in the DB)? When it is attached during
provisioning, then we can set it to 'ACTIVE' or any of the other
provisioning statuses such as 'ERROR', 'PENDING_UPDATE', etc. Lastly, it
wouldn't make much sense to have a 'DELETED' status on these types of
entities until the user actually issues a DELETE API request (not to be
confused with detaching). That begs another question: when items are
deleted, how long should the API return responses for that resource? We have
a 90-day threshold for this in our current implementation, after which the
API returns 404s for the resource.
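The statuses discussed across this thread can be sketched as a small lifecycle model. The enum below merely collects the proposals from the various replies (QUEUED, DETACHED, a soft DELETED state with a configurable retention window); the names and the `is_visible` helper are illustrative assumptions, not an agreed design.

```python
import enum
from datetime import datetime, timedelta, timezone


class Status(enum.Enum):
    QUEUED = "QUEUED"                  # accepted and stored in the DB, not yet provisioned
    DETACHED = "DETACHED"              # config stored, not attached to a load balancer
    PENDING_CREATE = "PENDING_CREATE"
    PENDING_UPDATE = "PENDING_UPDATE"
    ACTIVE = "ACTIVE"
    ERROR = "ERROR"
    DELETED = "DELETED"                # soft-deleted; purged after a retention window


# Retention window before DELETED records start returning 404s, made
# configurable as suggested in the thread (90 days and 30 days are both
# mentioned as deployment choices).
RETENTION = timedelta(days=90)


def is_visible(status, deleted_at=None, now=None):
    """A deleted entity stays queryable until the retention window lapses."""
    if status is not Status.DELETED:
        return True
    now = now or datetime.now(timezone.utc)
    return now - deleted_at < RETENTION
```

Under this sketch, an entity created without a load balancer sits in QUEUED (or DETACHED), moves through PENDING_* once a driver picks it up, and a DELETE request soft-deletes it rather than removing the row.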
 
  Cheers,
  --Jorge
 
 
 
 
  On 7/3/14 10:39 AM, Phillip Toohill phillip.tooh...@rackspace.com
  wrote:
 
  If the objects remain in 'PENDING_CREATE' until provisioned, it would
  seem that the process got stuck in that status and may be in a bad
  state from the user's perspective. I like the idea of QUEUED or similar to
  reference that the object has been accepted but not provisioned.
  
  Phil
  
  On 7/3/14 10:28 AM, Brandon Logan brandon.lo...@rackspace.com
 wrote:
  
  With the new API and object model refactor there have been some issues
  arising dealing with the status of entities.  The main issue is that
  Listener, Pool, Member, and Health Monitor can exist independent of a
  Load Balancer.  The Load Balancer is the entity that will contain the
  information about which driver to use (through provider or flavor).
  If a Listener, Pool, Member, or Health Monitor is created without a
  link to a Load Balancer, then what status does it have?  At this point
  it only exists in the database and is really just waiting to be
  provisioned by a driver/backend.
  
  Some possibilities discussed:
  A new status of QUEUED, PENDING_ACTIVE, SCHEDULED, or some other name
  Entities just remain in PENDING_CREATE until provisioned by a driver
  Entities just remain in ACTIVE until provisioned by a driver
  
  Opinions and suggestions?
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Flavor framework: Conclusion

2014-07-03 Thread Susanne Balle
+1


On Wed, Jul 2, 2014 at 10:12 PM, Kyle Mestery mest...@noironetworks.com
wrote:

 We're coming down to the wire here with regards to Neutron BPs in
 Juno, and I wanted to bring up the topic of the flavor framework BP.
 This is a critical BP for things like LBaaS, FWaaS, etc. We need this
 work to land in Juno, as these other work items are dependent on it.
 There are still two proposals [1] [2], and after the meeting last week
 [3] it appeared we were close to conclusion on this. I now see a bunch
 of comments on both proposals.

 I'm going to again suggest we spend some time discussing this at the
 Neutron meeting on Monday to come to a closure on this. I think we're
 close. I'd like to ask Mark and Eugene to both look at the latest
 comments, hopefully address them before the meeting, and then we can
 move forward with this work for Juno.

 Thanks for all the work by all involved on this feature! I think we're
 close and I hope we can close on it Monday at the Neutron meeting!

 Kyle

 [1] https://review.openstack.org/#/c/90070/
 [2] https://review.openstack.org/102723
 [3]
 http://eavesdrop.openstack.org/meetings/networking_advanced_services/2014/networking_advanced_services.2014-06-27-17.30.log.html

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS Mid Cycle Sprint

2014-06-17 Thread Susanne Balle
We are now on: #openstack-lbaas

The #neutron-lbaas channel is now deprecated.


On Tue, Jun 17, 2014 at 11:10 AM, Dustin Lundquist dus...@null-ptr.net
wrote:

 Actually the channel name is #neutron-lbaas.


 On Tue, Jun 17, 2014 at 8:03 AM, Kyle Mestery mest...@noironetworks.com
 wrote:

 Also, pop into #openstack-lbaas on Freenode, we have people there
 monitoring the channel.

 On Tue, Jun 17, 2014 at 9:19 AM, Dustin Lundquist dus...@null-ptr.net
 wrote:
  We have an Etherpad going here:
  https://etherpad.openstack.org/p/juno-lbaas-mid-cycle-hackathon
 
 
  Dustin
 
 
  On Tue, Jun 17, 2014 at 4:05 AM, Avishay Balderman 
 avish...@radware.com
  wrote:
 
  Hi
  As the LBaaS mid-cycle sprint starts today, is there any way to track
  and understand the progress (without flying to Texas...)?
 
  Thanks
 
  Avishay
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-10 Thread Susanne Balle
What was discussed at yesterday's Neutron core meeting?



On Tue, Jun 10, 2014 at 3:38 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 Any core neutron people have a chance to give their opinions on this
 yet?

 Thanks,
 Brandon

 On Thu, 2014-06-05 at 15:28 +, Buraschi, Andres wrote:
  Thanks, Kyle. Great.
 
  -Original Message-
  From: Kyle Mestery [mailto:mest...@noironetworks.com]
  Sent: Thursday, June 05, 2014 11:27 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API
 
  On Wed, Jun 4, 2014 at 4:27 PM, Brandon Logan 
 brandon.lo...@rackspace.com wrote:
   Hi Andres,
   I've assumed (and we know how assumptions work) that the deprecation
   would take place in Juno and after a cycle or two it would totally be
   removed from the code.  Even if #1 is the way to go, the old /vips
   resource would be deprecated in favor of /loadbalancers and /listeners.
  
   I agree #2 is cleaner, but I don't want to start on an implementation
   (though I kind of already have) that will fail to be merged in because
   of the strategy.  The strategies are pretty different so one needs to
   be decided on.
  
   As for where LBaaS is intended to end up, I don't want to speak for
   Kyle, so this is my understanding; It will end up outside of the
   Neutron code base but Neutron and LBaaS and other services will all
   fall under a Networking (or Network) program.  That is my
   understanding and I could be totally wrong.
  
  That's my understanding as well, I think Brandon worded it perfectly.
 
   Thanks,
   Brandon
  
   On Wed, 2014-06-04 at 20:30 +, Buraschi, Andres wrote:
   Hi Brandon, hi Kyle!
   I'm a bit confused about the deprecation (btw, thanks for sending
 this Brandon!), as I (wrongly) assumed #1 would be the chosen path for the
 new API implementation. I understand the proposal and #2 sounds actually
 cleaner.
  
   Just out of curiosity, Kyle, where is LBaaS functionality intended to
 end up, if long-term plans are to remove it from Neutron?
  
   (Nit question, I must clarify)
  
   Thank you!
   Andres
  
   -Original Message-
   From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
   Sent: Wednesday, June 04, 2014 2:18 PM
   To: openstack-dev@lists.openstack.org
   Subject: Re: [openstack-dev] [Neutron] Implementing new LBaaS API
  
   Thanks for your feedback Kyle.  I will be at that meeting on Monday.
  
   Thanks,
   Brandon
  
   On Wed, 2014-06-04 at 11:54 -0500, Kyle Mestery wrote:
On Tue, Jun 3, 2014 at 3:01 PM, Brandon Logan
brandon.lo...@rackspace.com wrote:
 This is an LBaaS topic bud I'd like to get some Neutron Core
 members to give their opinions on this matter so I've just
 directed this to Neutron proper.

 The design for the new API and object model for LBaaS needs to be
 locked down before the hackathon in a couple of weeks and there
 are some questions that need answered.  This is pretty urgent to
 come on to a decision on and to get a clear strategy defined so
 we can actually do real code during the hackathon instead of
 wasting some of that valuable time discussing this.


 Implementation must be backwards compatible

 There are 2 ways that have come up on how to do this:

 1) New API and object model are created in the same extension and
 plugin as the old.  Any API requests structured for the old API
 will be translated/adapted to the into the new object model.
 PROS:
 -Only one extension and plugin
 -Mostly true backwards compatibility
 -Do not have to rename unchanged resources and models
 CONS:
 -May end up being confusing to an end-user.
 -Separation of old API and new API is less clear
 -Deprecating and removing the old API and object model will take a bit more work
 -This is basically API versioning the wrong way

 2) A new extension and plugin are created for the new API and
 object model.  Each API would live side by side.  New API would
 need to have different names for resources and object models from
 Old API resources and object models.
 PROS:
 -Clean demarcation point between old and new
 -No translation layer needed
 -Do not need to modify existing API and object model, no new bugs
 -Drivers do not need to be immediately modified
 -Easy to deprecate and remove old API and object model later
 CONS:
 -Separate extensions and object model will be confusing to end-users
 -Code reuse by copy-paste, since the old extension and plugin will be deprecated and removed.
 -This is basically API versioning the wrong way

 Now if #2 is chosen to be feasible and acceptable then there are
 a number of ways to actually do that.  I won't bring those up
 until a clear decision is made on which strategy above is the
 most acceptable.
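Strategy #2 can be made concrete with a small sketch: two independent resource maps living side by side, with the new API's resources renamed to avoid colliding with the old ones. The plugin and resource names below are illustrative assumptions, not the actual Neutron extension mechanism.

```python
# Old extension keeps /vips untouched; the new extension registers
# differently named resources, so both APIs coexist and the old one
# can later be deleted wholesale without a translation layer.
OLD_EXTENSION = {
    "vips": "LoadBalancerPluginV1",
    "pools": "LoadBalancerPluginV1",
}

NEW_EXTENSION = {
    "loadbalancers": "LoadBalancerPluginV2",
    "listeners": "LoadBalancerPluginV2",
    "pools_v2": "LoadBalancerPluginV2",  # renamed to avoid colliding with v1
}


def route(resource):
    """Dispatch a resource name to whichever plugin owns it."""
    merged = {**OLD_EXTENSION, **NEW_EXTENSION}
    return merged[resource]
```

Strategy #1 would instead keep a single map and translate old-style requests into the new object model behind it, which is where the deprecation work concentrates.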

Thanks for sending this out 

[openstack-dev] [Neutron][LBaaS] Neutron LBaaS operator scale service --- Next steps and goals.

2014-05-21 Thread Susanne Balle
We have had some discussions around how to move forward with the LBaaS
service in OpenStack.  I am trying to summarize the key points below.


Feel free to chime in if I misrepresented anything or if you disagree :-)



For simplicity in the rest of the email and so I can differentiate between
all the LBaaS’s e.g. Neutron LBaaS, etc… I will name the new OpenStack
LBaaS project (that we discussed at the summit): Octavia in the rest of
this email. Note that this doesn’t mean we have agreed on this name.



*Goal:*

We all want to create a best-in-class “operator scale” Octavia LBaaS
service for our customers.

Following requirements need to be considered (these are already listed in
some of the etherpads we have worked on)

· Provides scalability, failover, config management, and
provisioning.

· The architecture needs to be pluggable so we can offer support for
HAProxy, Nginx, LVS, etc.



*Some disagreements exist around the scope of the new project: *



Some of the participating companies including HP are interested in a best
in class standalone Octavia load-balancer service that is part of OpenStack
and with the “label” OpenStack. http://www.openstack.org/software/

· The Octavia LBaaS project needs to work well with OpenStack or
this effort is not worth doing. HP believes that this should be the primary
focus.

· In this case the end goal would be to have a clean interface
between Neutron and the standalone Octavia LBaaS project and have the
Octavia LBaaS project become an incubated and eventually graduated OpenStack
project.

o   We would start out as a driver to Neutron.

o   This project would deprecate Neutron LBaaS long term since part of the
Neutron LBaaS would move over to the Octavia LBaaS project.

o   This project would continue to support both vendor drivers and new
software drivers e.g. ha-proxy, etc.

· Dougwig created the following diagram, which gives a good overview
of my thinking: http://imgur.com/cJ63ts3 where Octavia is represented by
“New Driver Interface” and down. The whole picture shows how we could move
from the old driver interface to the new one.



Other participating companies want to create a best in class standalone
load-balancer service outside of OpenStack and only create a driver to
integrate with OpenStack Neutron LBaaS.

· The Octavia LBaaS driver would be part of Neutron LBaaS tree
whereas the Octavia LBaaS implementation would reside outside OpenStack
e.g. Stackforge or github, etc.



The main issue/confusion is that some of us (HP LBaaS team) do not think of
projects in StackForge as OpenStack-branded. HP developed Libra LBaaS,
which is open-sourced in StackForge, and when we tried to get it into
OpenStack we met resistance.



One person suggested the idea of designing the Octavia LBaaS service
totally independent of Neutron or any other service that calls it. This
might make sense for a general LBaaS service, but given that we are in the
context of OpenStack, this to me just makes the whole testing and
development a nightmare to maintain, and it is not necessary. Again, IMHO
we are developing and delivering Octavia in the context of OpenStack, so
Octavia LBaaS should just be super good at dealing with the OpenStack
environment. The architecture can still be designed to be pluggable, but
my experience tells me that we will have to make decisions and trade-offs,
and at that point we need to remember that we are doing this in the
context of OpenStack and not in the general context.



*How do we think we can do it?*



We have some agreement around the following approach:



· To start developing the driver/Octavia implementation in
StackForge which should allow us to increase the velocity of our
development using the OpenStack CI/CD tooling (incl. jenkins) to ensure
that we test any change. This will allow us to ensure that changes to
Neutron do not break our driver/implementation as well as the other way
around.

o   We would use Gerrit for blueprints so we have documented reviews and
comments archived somewhere.

o   Contribute patches regularly into the Neutron LBaaS tree:

§  Kyle has volunteered himself and one more core team member to review
and help move a larger patch into the Neutron tree when needed. It was also
suggested that we could do milestones of smaller patches to be merged into
Neutron LBaaS. The latter approach was preferred by most participants.



The main goal behind this approach is to make sure we increase velocity
while still maintaining good code/design quality. The OpenStack tooling
has been shown to work for large distributed virtual teams, so let's take
advantage of it.

Carefully planning the various transitions.



Regards Susanne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS operator scale service --- Next steps and goals.

2014-05-21 Thread Susanne Balle
Balaji

The plan is to work on the next version of the LBaaS APIs in parallel to
maintaining the current version of the APIs and at some point when
everything is ready have a plan to deprecate the old APIs.

Susanne


On Wed, May 21, 2014 at 7:31 AM, balaj...@freescale.com 
balaj...@freescale.com wrote:

  Hi Susanne,



 Was there any discussion on whether the LBaaS Neutron APIs [which are
 available now] will be modified while migrating to Octavia?



 Just want to understand the impact on folks using the current LBaaS
 implementation and migrating to Octavia.



 Regards,

 Balaji.P



 *From:* Susanne Balle [mailto:sleipnir...@gmail.com]
 *Sent:* Wednesday, May 21, 2014 4:36 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Cuddy, Tim; Balle, Susanne; vbhamidip...@paypal.com

 *Subject:* [openstack-dev] [Neutron][LBaaS] Neutron LBaaS operator
 scale service --- Next steps and goals.





 We have had some discussions around how to move forward with the LBaaS
 service in OpenStack.  I am trying to summarize the key points below.



 Feel free to chime in if I misrepresented anything or if you disagree :-)



 For simplicity in the rest of the email and so I can differentiate between
 all the LBaaS’s e.g. Neutron LBaaS, etc… I will name the new OpenStack
 LBaaS project (that we discussed at the summit): Octavia in the rest of
 this email. Note that this doesn’t mean we have agree on this name.



 *Goal:*

 We all want to create a best in class “operator scale” Octavia LBaaS
 service to our customers.

 Following requirements need to be considered (these are already listed in
 some of the etherpads we have worked on)

 · Provides scalability, failover, config management, and
 provisioning.

 · Architecture need to be pluggable so we can offer support for
 HAProxy, Nginx, LVS, etc.



 *Some disagreements exist around the scope of the new project: *



 Some of the participating companies including HP are interested in a best
 in class standalone Octavia load-balancer service that is part of OpenStack
 and with the “label” OpenStack. http://www.openstack.org/software/

 · The Octavia LBaaS project needs to work well with OpenStack or
 this effort is not worth doing. HP believes that this should be the primary
 focus.

 · In this case the end goal would be to have a clean interface
 between Neutron and the standalone Octavia LBaaS project and have the
 Octavia LBaaS project become an incubated and eventual graduated OpenStack
 project.

 o   We would start out as a driver to Neutron.

 o   This project would deprecate Neutron LBaaS long term since part of
 the Neutron LBaaS would move over to the Octavia LBaaS project.

 o   This project would continue to support both vendor drivers and new
 software drivers e.g. ha-proxy, etc.

 · Dougwig created the following diagram which gives a good
 overview of my thinking: http://imgur.com/cJ63ts3 where Octavia is
 represented by “New Driver Interface” and down. The whole picture shows how
 we could move from the old to the new driver interface



 Other participating companies want to create a best in class standalone
 load-balancer service outside of OpenStack and only create a driver to
 integrate with Openstack Neutron LBaaS.

 · The Octavia LBaaS driver would be part of Neutron LBaaS tree
 whereas the Octavia LBaaS implementation would reside outside OpenStack
 e.g. Stackforge or github, etc.



 The main issue/confusion is that some of us (HP LBaaS team) do not think
 of projects in StackForge as OpenStack branded. HP developed  Libra LBaaS
 which is open sourced in StackForge and when we tried to get it into
 OpenStack we met resistance.



 One person suggested the idea of designing the Octavia LBaaS service
 totally independent of Neutron or any other service that calls. This might
 makes sense for a general LBaaS service but given that we are in the
 context of OpenStack this to me just makes the whole testing and developing
 a nightmare to maintain and not necessary. Again IMHO we are developing and
 delivering Octavia in the context of OpenStack so the Octavia LBaaS  should
 just be super good at dealing with the OpenStack env. The architecture can
 still be designed to be pluggable but my experiences tell me that we will
 have to make decision and trade-offs and at that point we need to remember
 that we are doing this in the context of OpenStack and not in the general
 context.



 *How do we think we can do it?*



 We have some agreement around the following approach:



 · To start developing the driver/Octavia implementation in
 StackForge which should allow us to increase the velocity of our
 development using the OpenStack CI/CD tooling (incl. jenkins) to ensure
 that we test any change. This will allow us to ensure that changes to
 Neutron do not break our driver/implementation as well as the other way
 around.

 o   We would use Gerrit for blueprints so we have

[openstack-dev] [Neutron][LBaaS] Neutron LBaaS operator scale service --- Next steps and goals.

2014-05-20 Thread Susanne Balle
We have had some discussions around how to move forward with the LBaaS
service in OpenStack.  I am trying to summarize the key points below.


Feel free to chime in if I misrepresented anything or if you disagree :-)



For simplicity in the rest of the email and so I can differentiate between
all the LBaaS’s e.g. Neutron LBaaS, etc… I will name the new OpenStack
LBaaS project (that we discussed at the summit): Octavia in the rest of
this email. Note that this doesn’t mean we have agreed on this name.



*Goal:*

We all want to create a best-in-class “operator scale” Octavia LBaaS
service for our customers.

Following requirements need to be considered (these are already listed in
some of the etherpads we have worked on)

· Provides scalability, failover, config management, and
provisioning.

· The architecture needs to be pluggable so we can offer support for
HAProxy, Nginx, LVS, etc.



*Some disagreements exist around the scope of the new project:*



Some of the participating companies including HP are interested in a best
in class standalone Octavia load-balancer service that is part of OpenStack
and with the “label” OpenStack. http://www.openstack.org/software/

· The Octavia LBaaS project needs to work well with OpenStack or
this effort is not worth doing. HP believes that this should be the primary
focus.

· In this case the end goal would be to have a clean interface
between Neutron and the standalone Octavia LBaaS project and have the
Octavia LBaaS project become an incubated and eventually graduated OpenStack
project.

o   We would start out as a driver to Neutron.

o   This project would deprecate Neutron LBaaS long term since part of the
Neutron LBaaS would move over to the Octavia LBaaS project.

o   This project would continue to support both vendor drivers and new
software drivers e.g. ha-proxy, etc.

· Dougwig created the following diagram, which gives a good overview
of my thinking: http://imgur.com/cJ63ts3 where Octavia is represented by
“New Driver Interface” and down. The whole picture shows how we could move
from the old driver interface to the new one.



Other participating companies want to create a best in class standalone
load-balancer service outside of OpenStack and only create a driver to
integrate with OpenStack Neutron LBaaS.

· The Octavia LBaaS driver would be part of Neutron LBaaS tree
whereas the Octavia LBaaS implementation would reside outside OpenStack
e.g. Stackforge or github, etc.



The main issue/confusion is that some of us (HP LBaaS team) do not think of
projects in StackForge as OpenStack-branded. HP developed Libra LBaaS,
which is open-sourced in StackForge, and when we tried to get it into
OpenStack we met resistance.



One person suggested the idea of designing the Octavia LBaaS service
totally independent of Neutron or any other service that calls it. This
might make sense for a general LBaaS service, but given that we are in the
context of OpenStack, this to me just makes the whole testing and
development a nightmare to maintain, and it is not necessary. Again, IMHO
we are developing and delivering Octavia in the context of OpenStack, so
Octavia LBaaS should just be super good at dealing with the OpenStack
environment. The architecture can still be designed to be pluggable, but
my experience tells me that we will have to make decisions and trade-offs,
and at that point we need to remember that we are doing this in the
context of OpenStack and not in the general context.



*How do we think we can do it?*



We have some agreement around the following approach:



· To start developing the driver/Octavia implementation in
StackForge which should allow us to increase the velocity of our
development using the OpenStack CI/CD tooling (incl. jenkins) to ensure
that we test any change. This will allow us to ensure that changes to
Neutron do not break our driver/implementation as well as the other way
around.

o   We would use Gerrit for blueprints so we have documented reviews and
comments archived somewhere.

o   Contribute patches regularly into the Neutron LBaaS tree:

§  Kyle has volunteered himself and one more core team member to review
and help move a larger patch into the Neutron tree when needed. It was also
suggested that we could do milestones of smaller patches to be merged into
Neutron LBaaS. The latter approach was preferred by most participants.



The main goal behind this approach is to make sure we increase velocity
while still maintaining good code/design quality. The OpenStack tooling
has been shown to work for large distributed virtual teams, so let's take
advantage of it.

We will also need to carefully plan the various transitions.



Regards Susanne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Updated Object Model?

2014-05-19 Thread Susanne Balle
Great summit!! It was fantastic to meet you all in person.

We now have agreement on the object model. How do we turn that into
blueprints, and how do we start making progress on the rest of the
items we agreed upon at the summit?

Susanne


On Fri, May 16, 2014 at 2:07 AM, Brandon Logan
brandon.lo...@rackspace.com wrote:

  Yeah that’s a good point.  Thanks!

   From: Eugene Nikanorov enikano...@mirantis.com
 Reply-To: openstack-dev@lists.openstack.org 
 openstack-dev@lists.openstack.org
 Date: Thursday, May 15, 2014 at 10:38 PM

 To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
 
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Updated Object Model?

   Brandon,

  It's allowed right now just per API. It's up to a backend to decide the
 status of a node in case some monitors find it dead.

  Thanks,
 Eugene.



 On Fri, May 16, 2014 at 4:41 AM, Brandon Logan 
 brandon.lo...@rackspace.com wrote:

  I have concerns about multiple health monitors on the same pool.  Is
 this always going to be the same type of health monitor?  There’s also
 ambiguity in the case where one health monitor fails and another doesn’t.
  Is it an AND or OR that determines whether the member is down or not?

  Thanks,
 Brandon Logan

   From: Eugene Nikanorov enikano...@mirantis.com
 Reply-To: openstack-dev@lists.openstack.org 
 openstack-dev@lists.openstack.org
 Date: Thursday, May 15, 2014 at 9:55 AM
 To: openstack-dev@lists.openstack.org 
 openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [Neutron][LBaaS] Updated Object Model?

   Vijay,

  Pool-monitor associations are still many-to-many; if the picture shows
 otherwise, we'll fix it.
 I brought this up as an example of how we dealt with m:n via the API.

  Thanks,
 Eugene.


 On Thu, May 15, 2014 at 6:43 PM, Vijay Venkatachalam 
 vijay.venkatacha...@citrix.com wrote:

  Thanks for the clarification. Eugene.



 A tangential point since you brought healthmon and pool.



 There will be an additional entity called ‘PoolMonitorAssociation’ which
 results in a many to many relationship between pool and monitors. Right?



 Now, the model indicates that a pool can have only one monitor, so a
 minor correction is required to show the many-to-many relationship via
 PoolMonitorAssociation.
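The many-to-many relationship Vijay describes can be sketched with a plain
association table, independent of any particular ORM. (The names below are
illustrative only and do not reflect the actual Neutron schema.)

```python
# Illustrative sketch of the pool <-> health-monitor m:n relationship
# via an association entity (PoolMonitorAssociation). Hypothetical names.

class PoolMonitorAssociation:
    """One row per (pool, monitor) pair."""
    def __init__(self, pool_id, monitor_id):
        self.pool_id = pool_id
        self.monitor_id = monitor_id

associations = [
    PoolMonitorAssociation("pool-1", "mon-http"),
    PoolMonitorAssociation("pool-1", "mon-tcp"),   # pool-1 has two monitors
    PoolMonitorAssociation("pool-2", "mon-http"),  # mon-http guards two pools
]

def monitors_for(pool_id):
    return {a.monitor_id for a in associations if a.pool_id == pool_id}

def pools_for(monitor_id):
    return {a.pool_id for a in associations if a.monitor_id == monitor_id}
```

With this shape, neither side constrains the other to a single partner, which
is the correction being asked for in the model diagram.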



 Thanks,

 Vijay V.





 *From:* Eugene Nikanorov [mailto:enikano...@mirantis.com]
 *Sent:* Thursday, May 15, 2014 7:36 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Updated Object Model?



 Hi Vijay,




 When you say the API is not available, it means this should not be
 considered a resource/entity. Correct?



 But then, there would be an API, like a bind API, that accepts
 loadbalancer_id and listener_id and creates this object.

 And also, there would be an API that will be used to list the listeners
 of a LoadBalancer.



 Right?

  Right, that's the same way health monitors and pools work right now:
 there is a separate REST action to associate a health monitor with a pool.
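Brandon's AND-vs-OR question from earlier in the thread is exactly the policy
choice left to the backend here. A minimal illustration of the two possible
aggregation policies when a member is checked by several monitors (purely a
sketch, not Neutron code):

```python
# Two possible backend policies for combining the results of multiple
# health monitors attached to one pool. The API does not mandate either;
# as noted above, the backend decides. Illustrative only.

def member_up_all(results):
    """AND policy: the member is UP only if every monitor passes."""
    return all(results)  # note: all([]) is True (no monitors => assumed UP)

def member_up_any(results):
    """OR policy: the member is UP if at least one monitor passes."""
    return any(results)

# e.g. the HTTP monitor passed but the TCP monitor failed:
checks = [True, False]
```

Under the AND policy this member would be marked down; under the OR policy it
would stay up, which is the ambiguity Brandon points out.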





 Thanks,

 Eugene.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][FWaaS][VPNaaS] Minutes from meeting around Advanced Services (particularly LBaaS) and Neutron

2014-05-14 Thread Susanne Balle
All,

Thank you for attending the meeting around the Advanced Services and
Neutron yesterday in the Neutron pod.

We had a great meeting. The goal of this session was to discuss how the
current Neutron LBaaS doesn't fit the need of what is now defined as
operators e.g. large scale (private, public) cloud service providers.

Background info on Why we needed this session can be found at:
https://etherpad.openstack.org/p/AdvancedServices_and_Neutron

More than 60 people attended and we had representatives from many companies
incl. PayPal, Yahoo, Rackspace, BlueBox, HP, Comcast, Intel, F5, and many
more. We had several Neutron core team members, incl. the current PTL Kyle
Mestery, the previous PTL Mark McClain, and Maru Newby, as well as the leads
for the Advanced Services, Sumit Naiksatam and Nachi Ueno.

Several issues were discussed including the lack of velocity and progress
in the LBaaS project, the operator requirements and prioritization
documented on the Wiki, operator use cases documented on the wiki, lack of
architectural and design documentation, issues around being able to
contribute to the project, etc... A lot of frustration was aired and in the
end the neutron core team members signed up to help get LBaaS/Advanced
services back on track by educating the newcomers, by helping unblock
issues, and by helping speed up the review process.

Outcomes from the Meeting

Immediate action items:

* The Neutron team will assign a core team member to facilitate the dialog
with the LBaaS team to mentor the LBaaS newcomers through the various
processes including what it takes to become a core member.
-- Kyle Mestery signed up to be this liaison person/dedicated core reviewer
since he has already started attending the various LBaaS weekly meetings.
Thanks Kyle :-)

* The Neutron core team will provide a list of the plug-in integration
points that are problematic. Owner: Kyle


Short term goals (Juno)
* Create a better abstraction between the Advanced Services and Neutron
core.
* Well-defined and clean interfaces
* Clean REST API for User (will be discussed Thursday 11:00-11:40 during
the LBaaS Session)
* Mentorship

Long term goal (~3 release cycles)
* Spin-out LBaaS from Neutron (with all the things that include: clean
architecture, QA, etc...)

We plan to revisit this goal at each summit to track our progress against
it.

Regards Susanne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPNaaS] Advanced Services (particularly LBaaS) and Neutron

2014-05-12 Thread Susanne Balle
I apologize if you received this email already 

Reminder that we plan to meet tomorrow

Tuesday May 13 at 2pm at the Neutron pod on level 3.

Susanne

We are setting up a meeting to discuss if it makes sense to separate the
advanced services (LBaaS, FW, VPNaaS) from Neutron into separate projects.
We want a healthy discussion around  the pros and cons of separating the
advanced services from Neutron and its short or long term feasibility.



The meeting is planned for:

*Tuesday May 13th at 2pm in the Neutron pod.*


On Mon, May 12, 2014 at 12:40 PM, Balle, Susanne susanne.ba...@hp.com wrote:

 Reminder that we plan to meet tomorrow

 Tuesday May 13 at 2pm at the Neutron pod on level 3.

 Susanne

 Sent from my iPhone

 On May 7, 2014, at 7:45 AM, Susanne Balle sleipnir...@gmail.com wrote:

 Hi Advanced Services/LBaaS Stackers,

 We are setting up a meeting to discuss if it makes sense to separate the
 advanced services (LBaaS, FW, VPNaaS) from Neutron into separate projects.
 We want a healthy discussion around  the pros and cons of separating the
 advanced services from Neutron and its short or long term feasibility.

 The meeting is planned for:
 Tuesday May 13th at 2pm in the Neutron pod.

 There will be a designated pod for each of the official programs at:
 https://wiki.openstack.org/wiki/Programs
 Some programs share a pod. There will be a map at the center of the space,
 as well as signage up to help find the relevant pod.

 Based on discussions with Rackspace, Mirantis, and others it is clear that
 the advanced services (i.e. LBaaS) in Neutron are not getting the attention
 and the support to move forward and create a first in class load-balancer
 service; from a service provider or operator's perspective. We currently
 have a lot of momentum and energy behind the LBaaS effort but are being
 told that the focus for Neutron is bug fixing given the instability in
 Neutron itself. While the latter is totally understandable, as a high
 priority for Neutron it leaves the advanced services out in the cold with
 no way to make progress in developing features that are needed to support
 the many companies that rely on LBaaS for large scale deployments.

 The current Neutron LB API and feature set meets minimum requirements for
 small-medium private cloud deployments, but does not meet the needs of
 larger, provider (or operator) deployments that include hundreds if not
 thousands of load balancers and multiple domain users (discrete customer
 organizations). The OpenStack LBaaS community looked at requirements and
 noted that the following operator-focused requirements are currently
 missing:

 • Scalability
 • SSL Certificate management – for an operator-based service, SSL
 certificate management is a much more important function that is currently
 not addressed in the current API or blueprint
 • Metrics Collection – a very limited set of metrics are currently
 provided by the current API.
 • Separate admin API for NOC and support operations
 • Minimal downtime when migrating to newer versions
 • Ability to migrate load balancers (SW to HW, etc.)
 • Resiliency functions like HA and failover
 • Operator-based load balancer health checks
 • Support multiple, simultaneous drivers.

 We have had great discussions on the LBaaS mailing list and on IRC about
 all the things we want to do, the new APIs, the User use cases,
 requirements and priorities, the operator requirements for LBaaS, etc. and
 I am at this point wondering if Neutron LBaaS as a sub-project of Neutron
 can fulfill our requirements.

 I would like this group to discuss the pros and cons of separating the
 advanced services, including LB, VPN, and FW, out of Neutron and allow for
 each of the three currently existing advanced services to become
 stand-alone projects or one standalone project.

 This should be done under the following assumptions:

 • Keep backwards compatibility with the current Neutron LBaaS
 plugin/driver API (to some point) so that existing drivers/plug-ins
 continue to work for people who have already invested in Neutron LBaaS

 • Migration strategy.

 We have precedents in OpenStack for splitting up services that are becoming
 too big or where a sub-service deserves to become an entity of its own,
 e.g. baremetal Nova and Ironic, nova-network and Neutron, and
 nova-scheduler being worked into the Gantt project.

 At a high-level I see the following steps/blueprints needing to be carried
 out:

 • Identify and create a library similar in concept to OpenStack
 core that contains the common components pieces needed by the advanced
 services in order to minimize code duplication between the advanced
 services and Neutron. This library should be consumable by external
 projects and will allow for cleaner code reuse by not only the three
 existing advanced services

[openstack-dev] [Neutron][LBaaS][FWaaS][VPNaaS] Advanced Services (particularly LBaaS) and Neutron

2014-05-07 Thread Susanne Balle
Hi Advanced Services/LBaaS Stackers,



We are setting up a meeting to discuss if it makes sense to separate the
advanced services (LBaaS, FW, VPNaaS) from Neutron into separate projects.
We want a healthy discussion around  the pros and cons of separating the
advanced services from Neutron and its short or long term feasibility.



The meeting is planned for:

*Tuesday May 13th at 2pm in the Neutron pod.*



There will be a designated pod for each of the official programs at:
https://wiki.openstack.org/wiki/Programs

Some programs share a pod. There will be a map at the center of the space,
as well as signage up to help find the relevant pod.



Based on discussions with Rackspace, Mirantis, and others it is clear that
the advanced services (i.e. LBaaS) in Neutron are not getting the attention
and the support to move forward and create a first in class load-balancer
service; from a service provider or operator's perspective. We currently
have a lot of momentum and energy behind the LBaaS effort but are being
told that the focus for Neutron is bug fixing given the instability in
Neutron itself. While the latter is totally understandable, as a high
priority for Neutron it leaves the advanced services out in the cold with
no way to make progress in developing features that are needed to support
the many companies that rely on LBaaS for large scale deployments.



The current Neutron LB API and feature set meets minimum requirements for
small-medium private cloud deployments, but does not meet the needs of
larger, provider (or operator) deployments that include hundreds if not
thousands of load balancers and multiple domain users (discrete customer
organizations). The OpenStack LBaaS community looked at requirements and
noted that the following operator-focused requirements are currently
missing:



· Scalability

· SSL Certificate management – for an operator-based service, SSL
certificate management is a much more important function that is currently
not addressed in the current API or blueprint

· Metrics Collection – a very limited set of metrics are currently
provided by the current API.

· Separate admin API for NOC and support operations

· Minimal downtime when migrating to newer versions

· Ability to migrate load balancers (SW to HW, etc.)

· Resiliency functions like HA and failover

· Operator-based load balancer health checks

· Support multiple, simultaneous drivers.



We have had great discussions on the LBaaS mailing list and on IRC about
all the things we want to do, the new APIs, the User use cases,
requirements and priorities, the operator requirements for LBaaS, etc. and
I am at this point wondering if Neutron LBaaS as a sub-project of Neutron
can fulfill our requirements.



I would like this group to discuss the pros and cons of separating the
advanced services, including LB, VPN, and FW, out of Neutron and allow for
each of the three currently existing advanced services to become
stand-alone projects or one standalone project.



This should be done under the following assumptions:

· Keep backwards compatibility with the current Neutron LBaaS
plugin/driver API (to some point) so that existing drivers/plug-ins
continue to work for people who have already invested in Neutron LBaaS

· Migration strategy.



We have precedents in OpenStack for splitting up services that are becoming
too big or where a sub-service deserves to become an entity of its own,
e.g. baremetal Nova and Ironic, nova-network and Neutron, and
nova-scheduler being worked into the Gantt project.



At a high-level I see the following steps/blueprints needing to be carried
out:

· Identify and create a library similar in concept to OpenStack
core that contains the common components pieces needed by the advanced
services in order to minimize code duplication between the advanced
services and Neutron. This library should be consumable by external
projects and will allow for cleaner code reuse by not only the three
existing advanced services but by new services as well.

· Start a new repo for the standalone LBaaS

o   http://git.openstack.org/cgit/openstack-dev/cookiecutter/tree/

· Write a patch to bridge Neutron LBaaS with the standalone LBaaS
for backwards compatibility. Longer term we can deprecate Neutron LBaaS
which will be possible once the new LBaaS service is a graduated OpenStack
service.



Some of the background reasoning for suggesting this is available at:

https://etherpad.openstack.org/p/AdvancedServices_and_Neutron



Hope to see you there to discuss how we best make sure that the advanced
services can support the many companies that rely on LBaaS or other
advanced services for large scale deployment.



Regards Susanne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPNaaS] Advanced Services (particularly LBaaS) and Neutron

2014-05-07 Thread Susanne Balle
Sam,

Perfect. I saw Eugene added something too. Let's get more of the known
facts and issues down on the etherpad so we are better prep'ed for the Tues
meeting.

Susanne




On Wed, May 7, 2014 at 9:01 AM, Samuel Bercovici samu...@radware.com wrote:

  Hi,

 I have added to
 https://etherpad.openstack.org/p/AdvancedServices_and_Neutron a note
 recalling two technical challenges that do not exist when LBaaS runs as a
 Neutron extension.

 -Sam.





 *From:* Susanne Balle [mailto:sleipnir...@gmail.com]
 *Sent:* Wednesday, May 07, 2014 2:45 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Balle, Susanne
 *Subject:* [openstack-dev] [Neutron][LBaaS][FWaaS][VPNaaS] Advanced
 Services (particularly LBaaS) and Neutron



 Hi Advanced Services/LBaaS Stackers,



 We are setting up a meeting to discuss if it makes sense to separate the
 advanced services (LBaaS, FW, VPNaaS) from Neutron into separate projects.
 We want a healthy discussion around  the pros and cons of separating the
 advanced services from Neutron and its short or long term feasibility.



 The meeting is planned for:

 *Tuesday May 13th at 2pm in the Neutron pod.*



 There will be a designated pod for each of the official programs at:
 https://wiki.openstack.org/wiki/Programs

 Some programs share a pod. There will be a map at the center of the space,
 as well as signage up to help find the relevant pod.



 Based on discussions with Rackspace, Mirantis, and others it is clear that
 the advanced services (i.e. LBaaS) in Neutron are not getting the attention
 and the support to move forward and create a first in class load-balancer
 service; from a service provider or operator's perspective. We currently
 have a lot of momentum and energy behind the LBaaS effort but are being
 told that the focus for Neutron is bug fixing given the instability in
 Neutron itself. While the latter is totally understandable, as a high
 priority for Neutron it leaves the advanced services out in the cold with
 no way to make progress in developing features that are needed to support
 the many companies that rely on LBaaS for large scale deployments.



  The current Neutron LB API and feature set meets minimum requirements for
 small-medium private cloud deployments, but does not meet the needs of
 larger, provider (or operator) deployments that include hundreds if not
 thousands of load balancers and multiple domain users (discrete customer
 organizations). The OpenStack LBaaS community looked at requirements and
 noted that the following operator-focused requirements are currently
 missing:



 · Scalability

 · SSL Certificate management – for an operator-based service, SSL
 certificate management is a much more important function that is currently
 not addressed in the current API or blueprint

 · Metrics Collection – a very limited set of metrics are
 currently provided by the current API.

 · Separate admin API for NOC and support operations

 · Minimal downtime when migrating to newer versions

 · Ability to migrate load balancers (SW to HW, etc.)

 · Resiliency functions like HA and failover

 · Operator-based load balancer health checks

 · Support multiple, simultaneous drivers.



 We have had great discussions on the LBaaS mailing list and on IRC about
 all the things we want to do, the new APIs, the User use cases,
 requirements and priorities, the operator requirements for LBaaS, etc. and
 I am at this point wondering if Neutron LBaaS as a sub-project of Neutron
 can fulfill our requirements.



 I would like this group to discuss the pros and cons of separating the
 advanced services, including LB, VPN, and FW, out of Neutron and allow for
 each of the three currently existing advanced services to become
 stand-alone projects or one standalone project.



 This should be done under the following assumptions:

 · Keep backwards compatibility with the current Neutron LBaaS
 plugin/driver API (to some point) so that existing drivers/plug-ins
  continue to work for people who have already invested in Neutron LBaaS

 · Migration strategy.



  We have precedents in OpenStack for splitting up services that are becoming
  too big or where a sub-service deserves to become an entity of its own,
  e.g. baremetal Nova and Ironic, nova-network and Neutron, and
  nova-scheduler being worked into the Gantt project.



 At a high-level I see the following steps/blueprints needing to be carried
 out:

 · Identify and create a library similar in concept to OpenStack
 core that contains the common components pieces needed by the advanced
 services in order to minimize code duplication between the advanced
 services and Neutron. This library should be consumable by external
 projects and will allow for cleaner code reuse by not only the three
 existing advanced services but by new services as well.

 · Start a new repo

Re: [openstack-dev] [neutron] Design Summit Sessions

2014-05-06 Thread Susanne Balle
Kyle

Will the Neutron pod location be made known to attendees? Will there be
signs, etc?

Thanks Susanne


On Tue, May 6, 2014 at 10:10 AM, Kyle Mestery mest...@noironetworks.com wrote:

 On Mon, May 5, 2014 at 9:39 PM, Tina TSOU tina.tsou.zout...@huawei.com
 wrote:
  Dear Kyle,
 
  Thanks for leading this.
 
  We filed a BP per new process
 
 https://blueprints.launchpad.net/neutron/+spec/scaling-network-performance
 
  Hope we can have a talk in the Pod area.
 
 Absolutely! The pod is available for Neutron anytime during the
 Summit, per my understanding. It would be good to advertise a time you
 want to discuss this on the ML prior to the Summit so people can plan
 to attend.

 Thanks,
 Kyle

 
  Thank you,
  Tina
 
  On Apr 25, 2014, at 9:19 PM, Kyle Mestery mest...@noironetworks.com
  wrote:
 
  Hi everyone:
 
  I've pushed out the Neutron Design Summit Schedule to sched.org [1].
  Like the other projects, it was tough to fit everything in. If your
  proposal didn't make it, there will still be opportunities to talk
  about it at the Summit in the project Pod area. Also, I encourage
  you to still file a BP using the new Neutron BP process [2].
 
  I expect some slight juggling of the schedule may occur as the entire
  Summit schedule is set, but this should be approximately where things
  land.
 
  Thanks!
  Kyle
 
  [1] http://junodesignsummit.sched.org/overview/type/neutron
  [2] https://wiki.openstack.org/wiki/Blueprints#Neutron
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Juno-Summit] availability of the project project pod rooms on Monday May 12th?

2014-05-06 Thread Susanne Balle
Thierry

How do I reserve a pod for a specific time and day. I am getting ready to
setup a meeting.

Susanne


On Tue, May 6, 2014 at 3:53 PM, Thierry Carrez thie...@openstack.org wrote:

 Carl Baldwin wrote:
  Is there a map, a list, or some other official reference?  I may like
  to use a pod for a cross-project discussion about DNS between Nova,
  Neutron, and Designate.  Not a big deal but it might be nice to know
  more about what we're looking for when we get there.

 There will be a designated pod for each of the official programs at:
 https://wiki.openstack.org/wiki/Programs

 Some programs share a pod. There will be a map at the center of the
 space, as well as signage up to help find the relevant pod.

 For a cross-project discussion you'd have to pick a pod where to have
 it. In your case I'd recommend the Neutron pod since Nova shares its pod
 with Glance and there is no Designate pod.

 Cheers,

 --
 Thierry Carrez (ttx)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] HA functionality discussion

2014-04-17 Thread Susanne Balle
I agree that the HA should be hidden from the user/tenant. IMHO a tenant
should just use a load-balancer as a “managed” black box where the service
is resilient in itself.



Our current Libra/LBaaS implementation in the HP public cloud uses a pool
of standby LBs to replace failing tenants' LBs. Our LBaaS service monitors
itself and replaces LBs when they fail. This is done via a set of Admin API
servers.



http://libra.readthedocs.org/en/latest/admin_api/index.html

The Admin server spawns several scheduled threads to run tasks such as
building new devices for the pool, monitoring load balancer devices and
maintaining IP addresses.



http://libra.readthedocs.org/en/latest/pool_mgm/about.html
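The pool-manager behaviour described above (scheduled tasks that keep a pool
of standby devices topped up and swap them in for failed tenant LBs) can be
sketched roughly as follows. The class and task names are made up for
illustration and are not Libra's actual code.

```python
# Rough sketch of a Libra-style admin pool manager: periodic tasks keep a
# standby pool of load-balancer devices at a target size and replace failed
# devices. All names are illustrative, not Libra's real API.

class DevicePool:
    def __init__(self, target_size):
        self.target_size = target_size
        self.standby = []       # pre-built, unassigned devices
        self.active = {}        # device_id -> healthy flag

    def build_devices(self):
        """Scheduled task: top the standby pool up to target_size."""
        while len(self.standby) < self.target_size:
            new_id = "device-%d" % (len(self.standby) + len(self.active))
            self.standby.append(new_id)

    def monitor_devices(self):
        """Scheduled task: swap a standby device in for each failed one."""
        for device_id, healthy in list(self.active.items()):
            if not healthy and self.standby:
                del self.active[device_id]
                replacement = self.standby.pop()
                self.active[replacement] = True

pool = DevicePool(target_size=2)
pool.build_devices()            # standby pool now holds 2 devices
pool.active["lb-1"] = False     # a tenant's LB fails...
pool.monitor_devices()          # ...and is replaced from the standby pool
```

In Libra the analogous tasks run on scheduled threads inside the Admin API
server, per the pool_mgm documentation linked above.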


Susanne


On Thu, Apr 17, 2014 at 6:49 PM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 Heyas, y'all!

 So, given both the prioritization and usage info on HA functionality for
 Neutron LBaaS here:
 https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWcusp=sharing

 It's clear that:

 A. HA seems to be a top priority for most operators
 B. Almost all load balancer functionality deployed is done so in an
 Active/Standby HA configuration

 I know there's been some round-about discussion about this on the list in
 the past (which usually got stymied in implementation details
 disagreements), but it seems to me that with so many players putting a high
 priority on HA functionality, this is something we need to discuss and
 address.

 This is also apropos, as we're talking about doing a major revision of the
 API, and it probably makes sense to seriously consider if or how HA-related
 stuff should make it into the API. I'm of the opinion that almost all the
 HA stuff should be hidden from the user/tenant, but that the admin/operator
 at the very least is going to need to have some visibility into HA-related
 functionality. The hope here is to discover what things make sense to have
 as a least common denominator and what will have to be hidden behind a
 driver-specific implementation.



 I certainly have a pretty good idea how HA stuff works at our
 organization, but I have almost no visibility into how this is done
 elsewhere, leastwise not enough detail to know what makes sense to write
 API controls for.

 So! Since gathering data about actual usage seems to have worked pretty
 well before, I'd like to try that again. Yes, I'm going to be asking about
 implementation details, but this is with the hope of discovering any least
 common denominator factors which make sense to build API around.

 For the purposes of this document, when I say load balancer devices I
 mean either physical or virtual appliances, or software executing on a host
 somewhere that actually does the load balancing. It need not directly
 correspond with anything physical... but probably does. :P

 And... all of these questions are meant to be interpreted from the
 perspective of the cloud operator.

 Here's what I'm looking to learn from those of you who are allowed to
 share this data:

 1. Are your load balancer devices shared between customers / tenants, not
 shared, or some of both?

 1a. If shared, what is your strategy to avoid or deal with collisions of
 customer rfc1918 address space on back-end networks? (For example, I know
 of no load balancer device that can balance traffic for both customer A and
 customer B if both are using the 10.0.0.0/24 subnet for their back-end
 networks containing the nodes to be balanced, unless an extra layer of
 NATing is happening somewhere.)
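The collision in 1a is easy to demonstrate: two tenants who independently
pick the same RFC 1918 subnet are indistinguishable to a shared device that
routes by destination IP alone, which is why an extra NAT layer (or
per-tenant isolation) is needed. A small sketch using Python's stdlib
ipaddress module:

```python
import ipaddress

# Two tenants who each picked 10.0.0.0/24 for their back-end networks.
tenant_a = ipaddress.ip_network("10.0.0.0/24")
tenant_b = ipaddress.ip_network("10.0.0.0/24")

# A shared load balancer cannot tell the tenants' members apart by IP
# once the subnets overlap completely.
print(tenant_a.overlaps(tenant_b))  # True: identical member address space

# Non-overlapping allocations would remain distinguishable:
tenant_c = ipaddress.ip_network("10.0.1.0/24")
print(tenant_a.overlaps(tenant_c))  # False
```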

 2. What kinds of metrics do you use in determining load balancing capacity?

 3. Do you operate with a pool of unused load balancer device capacity
 (which a cloud OS would need to keep track of), or do you spin up new
 capacity (in the form of virtual servers, presumably) on the fly?

 3a. If you're operating with a availability pool, can you describe how new
 load balancer devices are added to your availability pool?  Specifically,
 are there any steps in the process that must be manually performed (ie. so
 no API could help with this)?

 4. How are new devices 'registered' with the cloud OS? How are they
 removed or replaced?

 5. What kind of visibility do you (or would you) allow your user base to
 see into the HA-related aspects of your load balancing services?

 6. What kind of functionality and visibility do you need into the
 operations of your load balancer devices in order to maintain your
 services, troubleshoot, etc.? Specifically, are you managing the
 infrastructure outside the purview of the cloud OS? Are there certain
 aspects which would be easier to manage if done within the purview of the
 cloud OS?

 7. What kind of network topology is used when deploying load balancing
 functionality? (ie. do your load balancer devices live inside or outside
 customer firewalls, directly on tenant networks? Are you using layer-3
 routing? etc.)

 8. Is there any other data you can share which would be useful in
 considering 

Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases. Data from Operators needed.

2014-04-09 Thread Susanne Balle
Hi



I wasn't able to get % for the spreadsheet but our Product Manager
prioritized the features:



Function                              Priority (0 = highest)
------------------------------------  ----------------------
HTTP+HTTPS on one device              5
L7 Switching                          2
SSL Offloading                        1
High Availability                     0
IP4 & IPV6 Address Support            6
Server Name Indication (SNI) Support  3
UDP Protocol                          7
Round Robin Algorithm                 4



 Susanne


On Thu, Apr 3, 2014 at 9:32 AM, Vijay Venkatachalam 
vijay.venkatacha...@citrix.com wrote:



 The document has a Vendor column; should it be Cloud Operator
 instead?



 Thanks,

 Vijay V.





 *From:* Eugene Nikanorov [mailto:enikano...@mirantis.com]
 *Sent:* Thursday, April 3, 2014 11:23 AM
 *To:* OpenStack Development Mailing List (not for usage questions)

 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases.
 Data from Operators needed.



 Stephen,



  Agree with you. Basically the page is starting to look like a requirements
 page.

  I think we need to move to a Google spreadsheet, where a table is organized
 more easily.

 Here's the doc that may do a better job for us:


 https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWcusp=sharing



 Thanks,

 Eugene.



 On Thu, Apr 3, 2014 at 5:34 AM, Prashanth Hari hvpr...@gmail.com wrote:

  More additions to the use cases (
 https://wiki.openstack.org/wiki/Neutron/LBaaS/Usecases).

 I have updated some of the features we are interested in.







 Thanks,

 Prashanth





 On Wed, Apr 2, 2014 at 8:12 PM, Stephen Balukoff sbaluk...@bluebox.net
 wrote:

  Hi y'all--



 Looking at the data in the page already, it looks more like a feature
 wishlist than actual usage data. I thought we agreed to provide data based
 on percentage usage of a given feature, the end result of the data
 collection being that it would become more obvious which features are the
 most relevant to the most users, and therefore are more worthwhile targets
 for software development.



 Specifically, I was expecting to see something like the following (using
 hypothetical numbers of course, and where technical people from Company A
  etc. fill out the data for their organization):



 == L7 features ==



 Company A (Cloud operator serving external customers): 56% of
 load-balancer instances use

 Company B (Cloud operator serving external customers): 92% of
 load-balancer instances use

 Company C (Fortune 100 company serving internal customers): 0% of
 load-balancer instances use



 == SSL termination ==



 Company A (Cloud operator serving external customers): 95% of
 load-balancer instances use

 Company B (Cloud operator serving external customers): 20% of
 load-balancer instances use

 Company C (Fortune 100 company serving internal customers): 50% of
 load-balancer instances use.



 == Racing stripes ==



 Company A (Cloud operator serving external customers): 100% of
 load-balancer instances use

 Company B (Cloud operator serving external customers): 100% of
 load-balancer instances use

 Company C (Fortune 100 company serving internal customers): 100% of
 load-balancer instances use





 In my mind, a wish-list of features is only going to be relevant to this
 discussion if (after we agree on what the items under consideration ought
 to be) each technical representative presents a prioritized list for their
 organization. :/ A wish-list is great for brain-storming what ought to be
 added, but is less relevant for prioritization.



 In light of last week's meeting, it seems useful to list the features most
 recently discussed in that meeting and on the mailing list as being points
 on which we want to gather actual usage data (ie. from what people are
 actually using on the load balancers in their organization right now).
 Should we start a new page that lists actual usage percentages, or just
 re-vamp the one above?  (After all, a wish-list can be useful for discovering
 things we're missing, especially if we get people new to the discussion to
 add their $0.02.)
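To make the aggregation step concrete, here is a minimal sketch (company
names, features, and numbers are illustrative only, not real operator data)
of how per-operator usage percentages could be rolled up into a ranked
feature list:

```python
# Hypothetical usage data: feature -> {operator: % of LB instances using it}.
usage = {
    "L7 features":     {"Company A": 56, "Company B": 92, "Company C": 0},
    "SSL termination": {"Company A": 95, "Company B": 20, "Company C": 50},
}

def rank_features(usage):
    """Return (feature, mean %) pairs sorted by mean usage, highest first."""
    means = {
        feature: sum(pcts.values()) / len(pcts)
        for feature, pcts in usage.items()
    }
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

for feature, mean_pct in rank_features(usage):
    print(f"{feature}: {mean_pct:.1f}% average usage")
```

A weighted mean (e.g. by each operator's instance count) would be a natural
refinement once the raw numbers are collected.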



 Thanks,

 Stephen







 On Wed, Apr 2, 2014 at 3:46 PM, Jorge Miramontes 
 jorge.miramon...@rackspace.com wrote:

   Thanks Eugene,



 I added our data onto the requirements page since I was hoping to
 prioritize requirements based on the operator data that gets provided. We
 can move it over to the other page if you think that makes sense. See
 everyone on the weekly meeting tomorrow!



 Cheers,

 --Jorge



 *From: *Susanne Balle sleipnir...@gmail.com
 *Reply-To: *OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 *Date: *Tuesday, April 1, 2014 4:09 PM
 *To: *OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 *Subject: *Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases.
 Data from Operators needed.



 I added two more. I am still working on our HA use cases. Susanne



 On Tue, Apr 1, 2014 at 4:16 PM, Fox, Kevin M kevin@pnnl.gov wrote:

  I added our priorities. I hope its

Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-09 Thread Susanne Balle
Ditto. I am interested in contributing as well.

Does Gant work with Devstack? I am assuming the link will give me
directions on how to test it and contribute to the project.

Susanne


On Wed, Apr 9, 2014 at 12:44 PM, Henrique Truta 
henriquecostatr...@gmail.com wrote:

 @Oleg, @Sylvain, @Leandro, Thanks. I'll check the Gantt project and the
 blueprint


 2014-04-09 12:59 GMT-03:00 Sylvain Bauza sylvain.ba...@gmail.com:




 2014-04-09 17:47 GMT+02:00 Jay Lau jay.lau@gmail.com:

 @Oleg, I'm still not sure about the target of Gantt: is it for initial
 placement policy, for run-time policy, or for both? Can you help clarify?


 I don't want to talk on behalf of Oleg, but Gantt is targeted to be the
 forklift of the current Nova scheduler. So, a placement decision based on
 dynamic metrics would be worth it.
 That said, as Gantt is not targeted to be delivered until Juno at least
 (with Nova sched deprecated), I think any progress on a BP should target
 Nova with respect to the forklift efforts, so it would automatically be
 ported to Gantt once the actual fork happens.

 -Sylvain

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 --
 Ítalo Henrique Costa Truta



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases. Data from Operators needed.

2014-04-01 Thread Susanne Balle
I added two more. I am still working on our HA use cases. Susanne


On Tue, Apr 1, 2014 at 4:16 PM, Fox, Kevin M kevin@pnnl.gov wrote:

  I added our priorities. I hope it's formatted well enough. I just took a
 stab in the dark.

 Thanks,
 Kevin
  --
 *From:* Eugene Nikanorov [enikano...@mirantis.com]
 *Sent:* Tuesday, April 01, 2014 3:02 AM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [Neutron][LBaaS] Load balancing use cases.
 Data from Operators needed.

   Hi folks,

  On the last meeting we decided to collect usage data so we could
 prioritize features and see what is demanded most.

  Here's the blank page to do that (in a free form). I'll structure it
 once we have some data.
 https://wiki.openstack.org/wiki/Neutron/LBaaS/Usecases

  Please fill with the data you have.

  Thanks,
 Eugene.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-27 Thread Susanne Balle
Geoff

I noticed the following two blueprints:


https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms


This blueprint defines a framework for creating, managing and deploying
Neutron advanced services implemented as virtual machines. The goal is to
enable advanced network services (e.g. Load Balancing, Security,
Monitoring) that may be supplied by third party vendors, are deployed as
virtual machines, and are launched and inserted into the tenant network on
demand.

https://blueprints.launchpad.net/neutron/+spec/dynamic-network-resource-mgmt


This blueprint proposes the addition to OpenStack of a framework for
dynamic network resource management (DNRM). This framework includes a new
OpenStack resource management and provisioning service, a refactored scheme
for Neutron API extensions, a policy-based resource allocation system, and
dynamic mapping of resources to plugins. It is intended to address a number
of use cases, including multivendor environments, policy-based resource
scheduling, and virtual appliance provisioning. We are proposing this as a
single blueprint in order to create an efficiently integrated
implementation.


the latter was submitted by you. This sounds like a step in the right
direction, and I would like to understand the design/scope/limitations in a
little more detail.


What is the status of your blueprint? Any early designs/use cases that you
would be willing to share?


Regards Susanne




On Tue, Mar 25, 2014 at 10:07 AM, Geoff Arnold ge...@geoffarnold.com wrote:

 There are (at least) two ways of expressing differentiation:
 - through an API extension, visible to the tenant
 - through an internal policy mechanism, with specific policies inferred
 from tenant or network characteristics

 Both have their place. Please don't fall into the trap of thinking that
 differentiation requires API extension.

 Sent from my iPhone - please excuse any typos or creative spelling
 corrections!

 On Mar 25, 2014, at 1:36 PM, Eugene Nikanorov enikano...@mirantis.com
 wrote:

 Hi John,


 On Tue, Mar 25, 2014 at 7:26 AM, John Dewey j...@dewey.ws wrote:

  I have a similar concern.  The underlying driver may support different
 functionality, but the differentiators need to be exposed through the
 top-level API.

 Not really; the whole point of the service is to abstract the user from the
 specifics of the backend implementation. So for any feature there is a common
 API, not specific to any implementation.

 There could be some exceptions to this guideline in the area of the admin
 API, but that's yet to be discussed.


 I see the SSL work is well underway, and I am in the process of defining
 L7 scripting requirements.  However, I will definitely need L7 scripting
 prior to the API being defined.
  Is this where vendor extensions come into play?  I kinda like the route
  the Ironic guys are taking with a vendor passthru API.

  I may say that the core team has rejected the 'vendor extensions' idea due
  to the potential for a non-uniform user API experience. That becomes even
  worse with flavors introduced, because users don't know which vendor is
  backing the service they have created.

 Thanks,
 Eugene.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-26 Thread Susanne Balle
Jorge: I agree with you about ensuring that different drivers support the API
contract and about avoiding vendor lock-in.

All: How do we move this forward? It sounds like we have agreement that
this is worth investigating.

How do we move forward with the investigation and how to best architect
this? Is this a topic for tomorrow's LBaaS weekly meeting? or should I
schedule a hang-out meeting for us to discuss?

Susanne




On Tue, Mar 25, 2014 at 6:16 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:

   Hey Susanne,

  I think it makes sense to group drivers by LB software. For
 example, there would be a driver for HAProxy, one for Citrix's NetScaler,
 one for Riverbed's Stingray, etc. One important aspect of Openstack that
 I don't want us to forget, though, is that a tenant should be able to move
 between cloud providers at will (no vendor lock-in). The API
 contract is what allows this. The challenging aspect is ensuring that
 different drivers support the API contract in the same way. What components
 drivers should share is also an interesting conversation to be had.
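The per-software driver grouping discussed here amounts to one common
contract with a concrete driver per backend. The sketch below is purely
illustrative: the class and method names are hypothetical, not the actual
Neutron LBaaS driver interface.

```python
import abc

class LoadBalancerDriver(abc.ABC):
    """Common API contract every backend driver must honour, so tenants
    get the same behaviour regardless of which LB software backs them."""

    @abc.abstractmethod
    def create_pool(self, pool): ...

    @abc.abstractmethod
    def create_vip(self, vip): ...

    @abc.abstractmethod
    def delete_pool(self, pool_id): ...

class HAProxyDriver(LoadBalancerDriver):
    """One driver per LB software (HAProxy here); others would implement
    the same contract for NetScaler, Stingray, etc."""

    def create_pool(self, pool):
        return {"backend": "haproxy", "pool": pool}

    def create_vip(self, vip):
        return {"backend": "haproxy", "vip": vip}

    def delete_pool(self, pool_id):
        return True
```

The abstract base enforces the contract at class-definition time: a driver
that omits a required operation cannot even be instantiated.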

  Cheers,
 --Jorge

   From: Susanne Balle sleipnir...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, March 25, 2014 6:59 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and
 managed services

   John, Brandon,

I agree that we cannot have a multitude of drivers doing the same thing, or
close to it, because then we end up in the same situation as we are in today,
with duplicated effort and technical debt.

  The goal here would be to build a framework around the
 drivers that would allow for resiliency, failover, etc.

  If the differentiators are in higher-level APIs, then we can have (in the
 best case) a single driver for each software LB, e.g. HAProxy, nginx,
 etc.

  Thoughts?

  Susanne


 On Mon, Mar 24, 2014 at 11:26 PM, John Dewey j...@dewey.ws wrote:

 I have a similar concern.  The underlying driver may support different
 functionality, but the differentiators need to be exposed through the
 top-level API.

  I see the SSL work is well underway, and I am in the process of
 defining L7 scripting requirements.  However, I will definitely need L7
 scripting prior to the API being defined.
 Is this where vendor extensions come into play?  I kinda like the route
 the Ironic guys are taking with a vendor passthru API.

  John

 On Monday, March 24, 2014 at 3:17 PM, Brandon Logan wrote:

   Creating a separate driver for every new need brings up a concern I
 have had.  If we are to implement a separate driver for every need, then the
 permutations are endless and may produce a lot of drivers and technical debt.
  If someone wants an ha-haproxy driver, then great.  What if they want it to
 be scalable and/or HA: is there supposed to be a scalable-ha-haproxy,
 scalable-haproxy, and ha-haproxy driver?  Then what if, instead of
 spinning up processes on the host machine, we want a nova VM or a container
 to house it?  As you can see, the permutations will begin to grow
 exponentially.  I'm not sure there is an easy answer for this.  Maybe I'm
 worrying too much about it, because hopefully most cloud operators will use
 the same driver that addresses those basic needs, but in the worst-case
 scenario we have a ton of drivers that do a lot of similar things but are
 just different enough to warrant separate drivers.
  --
 *From:* Susanne Balle [sleipnir...@gmail.com]
 *Sent:* Monday, March 24, 2014 4:59 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and
 managed services

   Eugene,

  Thanks for your comments,

  See inline:

  Susanne


  On Mon, Mar 24, 2014 at 4:01 PM, Eugene Nikanorov 
 enikano...@mirantis.com wrote:

  Hi Susanne,

  a couple of comments inline:




 We would like to discuss adding the concept of managed services to the
 Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA
 proxy. The latter could be a second approach for some of the software
 load-balancers e.g. HA proxy since I am not sure that it makes sense to
 deploy Libra within Devstack on a single VM.



 Currently users would have to deal with HA, resiliency, monitoring and
 managing their load-balancers themselves.  As a service provider we are
 taking a more managed service approach allowing our customers to consider
 the LB as a black box and the service manages the resiliency, HA,
 monitoring, etc. for them.



   As far as I understand these two abstracts, you're talking about
 making LBaaS API more high-level than it is right now.
 I think that was not on our roadmap because another project (Heat) is
 taking care of more abstracted service.
 The LBaaS goal is to provide vendor-agnostic

Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-25 Thread Susanne Balle
John, Brandon,

I agree that we cannot have a multitude of drivers doing the same thing, or
close to it, because then we end up in the same situation as we are in today,
with duplicated effort and technical debt.

The goal here would be to build a framework around the drivers
that would allow for resiliency, failover, etc.

If the differentiators are in higher-level APIs, then we can have (in the best
case) a single driver for each software LB, e.g. HAProxy, nginx,
etc.

Thoughts?

Susanne


On Mon, Mar 24, 2014 at 11:26 PM, John Dewey j...@dewey.ws wrote:

   I have a similar concern.  The underlying driver may support different
  functionality, but the differentiators need to be exposed through the
  top-level API.

 I see the SSL work is well underway, and I am in the process of defining
 L7 scripting requirements.  However, I will definitely need L7 scripting
 prior to the API being defined.
  Is this where vendor extensions come into play?  I kinda like the route
  the Ironic guys are taking with a vendor passthru API.

 John

 On Monday, March 24, 2014 at 3:17 PM, Brandon Logan wrote:

   Creating a separate driver for every new need brings up a concern I have
  had.  If we are to implement a separate driver for every need, then the
  permutations are endless and may produce a lot of drivers and technical debt.
   If someone wants an ha-haproxy driver, then great.  What if they want it to
  be scalable and/or HA: is there supposed to be a scalable-ha-haproxy,
  scalable-haproxy, and ha-haproxy driver?  Then what if, instead of
  spinning up processes on the host machine, we want a nova VM or a container
  to house it?  As you can see, the permutations will begin to grow
  exponentially.  I'm not sure there is an easy answer for this.  Maybe I'm
  worrying too much about it, because hopefully most cloud operators will use
  the same driver that addresses those basic needs, but in the worst-case
  scenario we have a ton of drivers that do a lot of similar things but are
  just different enough to warrant separate drivers.
  --
 *From:* Susanne Balle [sleipnir...@gmail.com]
 *Sent:* Monday, March 24, 2014 4:59 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and
 managed services

   Eugene,

  Thanks for your comments,

  See inline:

  Susanne


  On Mon, Mar 24, 2014 at 4:01 PM, Eugene Nikanorov 
 enikano...@mirantis.com wrote:

 Hi Susanne,

  a couple of comments inline:




 We would like to discuss adding the concept of managed services to the
 Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA
 proxy. The latter could be a second approach for some of the software
 load-balancers e.g. HA proxy since I am not sure that it makes sense to
 deploy Libra within Devstack on a single VM.



 Currently users would have to deal with HA, resiliency, monitoring and
 managing their load-balancers themselves.  As a service provider we are
 taking a more managed service approach allowing our customers to consider
 the LB as a black box and the service manages the resiliency, HA,
 monitoring, etc. for them.



   As far as I understand these two abstracts, you're talking about making
 LBaaS API more high-level than it is right now.
 I think that was not on our roadmap because another project (Heat) is
 taking care of more abstracted service.
  The LBaaS goal is to provide vendor-agnostic management of load-balancing
  capabilities at a quite fine-grained level.
 Any higher level APIs/tools can be built on top of that, but are out of
 LBaaS scope.


  [Susanne] Yes. Libra currently has some internal APIs that get triggered
 when an action needs to happen. We would like similar functionality in
 Neutron LBaaS so the user doesn't have to manage the load-balancers but can
 consider them as black-boxes. Would it make sense to maybe consider
 integrating Neutron LBaaS with heat to support some of these use cases?




 We like where Neutron LBaaS is going with regards to L7 policies and SSL
 termination support which Libra is not currently supporting and want to
 take advantage of the best in each project.

 We have a draft on how we could make Neutron LBaaS take advantage of Libra
 in the back-end.

 The details are available at:
 https://wiki.openstack.org/wiki/Neutron/LBaaS/LBaaS%2Band%2BLibra%2Bintegration%2BDraft


  I looked at the proposal briefly, it makes sense to me. Also it seems to
 be the simplest way of integrating LBaaS and Libra - create a Libra driver
 for LBaaS.


   [Susanne] Yes, that would be the short-term solution to get us where we
  need to be. But we do not want to continue to enhance Libra; we would like
  to move to Neutron LBaaS and not duplicate efforts.




  While this would allow us to fill a gap short term we would like to
 discuss the longer term strategy since we believe that everybody would
 benefit from having such managed services artifacts built directly

Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-25 Thread Susanne Balle
On Tue, Mar 25, 2014 at 9:24 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:





  That can certainly be implemented. I would only recommend implementing
  such a management system out of the Neutron/LBaaS tree, e.g. to have only a
  client within the Libra driver that communicates with the management
  backend.


 [Susanne] Again this would only be a short term solution since as we move
 forward and want to contribute new features it would result in duplication
 of efforts because the features might need to be done in Libra and not
 Neutron LBaaS.


 That seems to be the approach other vendors are taking right now. Regarding
 the features, could you point to a description of them?


Our end goal is to be able to move to using just Neutron LBaaS. For example,
SSL termination is not in Libra, and we don't want to have to implement it
when it is already in Neutron LBaaS. The same goes for L7 policies.

Having the service be resilient beyond just a pair of HA proxies is a biggy
for us. We cannot expect our customers to manage the LB themselves.

Susanne




 Thanks,
 Eugene.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-25 Thread Susanne Balle
On Tue, Mar 25, 2014 at 9:36 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:

 Hi John,


 On Tue, Mar 25, 2014 at 7:26 AM, John Dewey j...@dewey.ws wrote:

   I have a similar concern.  The underlying driver may support different
  functionality, but the differentiators need to be exposed through the
  top-level API.

  Not really; the whole point of the service is to abstract the user from the
  specifics of the backend implementation. So for any feature there is a
  common API, not specific to any implementation.

  There could be some exceptions to this guideline in the area of the admin
  API, but that's yet to be discussed.


Admin APIs would make sense.



 I see the SSL work is well underway, and I am in the process of defining
 L7 scripting requirements.  However, I will definitely need L7 scripting
 prior to the API being defined.
  Is this where vendor extensions come into play?  I kinda like the route
  the Ironic guys are taking with a vendor passthru API.

  I may say that the core team has rejected the 'vendor extensions' idea due
  to the potential for a non-uniform user API experience. That becomes even
  worse with flavors introduced, because users don't know which vendor is
  backing the service they have created.

 Thanks,
 Eugene.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Glossary

2014-03-24 Thread Susanne Balle
Looks good, Thanks Susanne


On Mon, Mar 24, 2014 at 6:55 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:

 Hi,

  Here's the wiki page with a list of terms we usually operate with when
  discussing the lbaas object model:
 https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary

 Feel free to add/modify/ask questions.

 Thanks,
 Eugene.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services concept

2014-03-24 Thread Susanne Balle
Hi Neutron LBaaS folks,


I have been getting up to speed on the Neutron LBaaS implementation and
have been wondering how to make it fit our needs in the HP public cloud as
well as an enterprise-grade load-balancer service for HP OpenStack
implementations. We are currently using Libra as our LBaaS implementation
and are interested in moving to the Neutron LBaaS service in the future.


I have been looking at the LBaaS requirements posted by Jorge at:

https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements


When we started looking at existing packages for our LBaaS service we had a
focus on requirements needed to create a managed service where the user
would just interact with the service APIs and not have to deal with
resiliency, HA, monitoring, and reporting functions themselves. Andrew
Hutchings became the HP Tech Lead for the open source Libra project. For
historical reasons around why we decided to contribute to Libra see:

http://openstack.10931.n7.nabble.com/Neutron-Relationship-between-Neutron-LBaaS-and-Libra-td29562.html


We would like to discuss adding the concept of managed services to the
Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA
proxy. The latter could be a second approach for some of the software
load-balancers e.g. HA proxy since I am not sure that it makes sense to
deploy Libra within Devstack on a single VM.


Currently users would have to deal with HA, resiliency, monitoring and
managing their load-balancers themselves.  As a service provider we are
taking a more managed service approach allowing our customers to consider
the LB as a black box and the service manages the resiliency, HA,
monitoring, etc. for them.


We like where Neutron LBaaS is going with regard to L7 policies and SSL
termination support, which Libra does not currently provide, and we want to
take advantage of the best of each project.

We have a draft on how we could make Neutron LBaaS take advantage of Libra
in the back-end.

The details are available at:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LBaaS%2Band%2BLibra%2Bintegration%2BDraft


While this would allow us to fill a gap short term, we would like to discuss
the longer-term strategy, since we believe that everybody would benefit from
having such managed-service artifacts built directly into Neutron LBaaS.


There are blueprints on high availability for the HAProxy software
load balancer, and we would like to suggest implementations that fit our
needs as service providers.


One example where the managed service approach for the HAProxy load
balancer differs from the current Neutron LBaaS roadmap is around HA
and resiliency. The 2-LB HA setup proposed (
https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy) isn't
appropriate for service providers, in that users would have to pay for the
extra load balancer even though it is not being actively used.  An
alternative approach is to implement resiliency using a pool of standby,
preconfigured load balancers owned by e.g. an LBaaS tenant, and to assign
load balancers from the pool to tenant environments. We currently use
this approach in the public cloud with Libra, and it takes
approximately 80 seconds for the service to decide that a load balancer has
failed, swap the floating IP, update the DB, etc., and have a new LB
running.
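As an illustrative sketch only (all names are hypothetical; this is not
Libra's actual internals), the standby-pool failover described above could
look like:

```python
from collections import deque

class StandbyPoolManager:
    """Warm pool of preconfigured LBs; failover repoints the floating IP."""

    def __init__(self, standby_ids):
        self.pool = deque(standby_ids)   # preconfigured, idle load balancers
        self.active = {}                 # floating_ip -> active LB id

    def assign(self, floating_ip):
        # Hand a warm LB to a tenant by pointing the floating IP at it.
        lb = self.pool.popleft()
        self.active[floating_ip] = lb
        return lb

    def failover(self, floating_ip):
        # On failure detection: take the next standby, repoint the floating
        # IP, and return the failed instance for recycling or rebuild.
        failed = self.active[floating_ip]
        replacement = self.pool.popleft()
        self.active[floating_ip] = replacement
        return failed, replacement
```

The point of the pool is that LB boot/configuration time is removed from the
critical path; the recovery window is then dominated by failure detection
plus the floating-IP swap and DB update.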


Regards Susanne


--

Susanne Balle

HP Cloud
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-24 Thread Susanne Balle
My apologies if you receive this twice. I seem to be having problems with my
gmail account.



Hi Neutron LBaaS folks,



I have been getting up to speed on the Neutron LBaaS implementation and
have been wondering how to make it fit our needs in the HP public cloud as
well as an enterprise-grade load-balancer service for HP OpenStack
implementations. We are currently using Libra as our LBaaS implementation
and are interested in moving to the Neutron LBaaS service in the future.



I have been looking at the LBaaS requirements posted by Jorge at:

https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements



When we started looking at existing packages for our LBaaS service we had a
focus on requirements needed to create a managed service where the user
would just interact with the service APIs and not have to deal with
resiliency, HA, monitoring, and reporting functions themselves. Andrew
Hutchings became the HP Tech Lead for the open source Libra project. For
historical reasons around why we decided to contribute to Libra see:

http://openstack.10931.n7.nabble.com/Neutron-Relationship-between-Neutron-LBaaS-and-Libra-td29562.html



We would like to discuss adding the concept of managed services to the
Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA
proxy. The latter could be a second approach for some of the software
load-balancers e.g. HA proxy since I am not sure that it makes sense to
deploy Libra within Devstack on a single VM.



Currently users would have to deal with HA, resiliency, monitoring and
managing their load-balancers themselves.  As a service provider we are
taking a more managed service approach allowing our customers to consider
the LB as a black box and the service manages the resiliency, HA,
monitoring, etc. for them.



We like where Neutron LBaaS is going with regard to L7 policies and SSL
termination support, which Libra does not currently provide, and we want to
take advantage of the best of each project.

We have a draft on how we could make Neutron LBaaS take advantage of Libra
in the back-end.

The details are available at:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LBaaS%2Band%2BLibra%2Bintegration%2BDraft



While this would allow us to fill a gap short term, we would like to discuss
the longer-term strategy, since we believe that everybody would benefit from
having such managed-service artifacts built directly into Neutron LBaaS.



There are blueprints on high availability for the HAProxy software
load balancer, and we would like to suggest implementations that fit our
needs as service providers.



One example where the managed service approach for the HAProxy load
balancer differs from the current Neutron LBaaS roadmap is around HA
and resiliency. The 2-LB HA setup proposed (
https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy) isn't
appropriate for service providers, in that users would have to pay for the
extra load balancer even though it is not being actively used.  An
alternative approach is to implement resiliency using a pool of standby,
preconfigured load balancers owned by e.g. an LBaaS tenant, and to assign
load balancers from the pool to tenant environments. We currently use
this approach in the public cloud with Libra, and it takes
approximately 80 seconds for the service to decide that a load balancer has
failed, swap the floating IP, update the DB, etc., and have a new LB
running.



Regards Susanne

---

Susanne M. Balle
Hewlett-Packard
HP Cloud Services

Please consider the environment before printing this email.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-24 Thread Susanne Balle
Eugene,

Thanks for your comments,

See inline:

Susanne


On Mon, Mar 24, 2014 at 4:01 PM, Eugene Nikanorov
enikano...@mirantis.com wrote:

 Hi Susanne,

 a couple of comments inline:




 We would like to discuss adding the concept of managed services to the
 Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA
 proxy. The latter could be a second approach for some of the software
 load-balancers e.g. HA proxy since I am not sure that it makes sense to
 deploy Libra within Devstack on a single VM.



 Currently users would have to deal with HA, resiliency, monitoring and
 managing their load-balancers themselves.  As a service provider we are
 taking a more managed service approach allowing our customers to consider
 the LB as a black box and the service manages the resiliency, HA,
 monitoring, etc. for them.



 As far as I understand these two abstracts, you're talking about making
 LBaaS API more high-level than it is right now.
 I think that was not on our roadmap because another project (Heat) is
 taking care of more abstracted service.
 The LBaaS goal is to provide vendor-agnostic management of load balancing
 capabilities at a fairly fine-grained level.
 Any higher level APIs/tools can be built on top of that, but are out of
 LBaaS scope.


[Susanne] Yes. Libra currently has some internal APIs that get triggered
when an action needs to happen. We would like similar functionality in
Neutron LBaaS so the user doesn't have to manage the load-balancers but can
consider them as black boxes. Would it make sense to consider integrating
Neutron LBaaS with Heat to support some of these use cases?




 We like where Neutron LBaaS is going with regards to L7 policies and SSL
 termination support which Libra is not currently supporting and want to
 take advantage of the best in each project.

 We have a draft on how we could make Neutron LBaaS take advantage of
 Libra in the back-end.

 The details are available at:
 https://wiki.openstack.org/wiki/Neutron/LBaaS/LBaaS%2Band%2BLibra%2Bintegration%2BDraft


 I looked at the proposal briefly, it makes sense to me. Also it seems to
 be the simplest way of integrating LBaaS and Libra - create a Libra driver
 for LBaaS.


[Susanne] Yes, that would be the short-term solution to get us where we need
to be. But we do not want to continue to enhance Libra; we would like to
move to Neutron LBaaS and not have duplicate efforts.
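The "Libra driver for LBaaS" Eugene suggests might look roughly like the
sketch below. The base class is a simplified stand-in for the Neutron LBaaS
driver interface of that era, and `LibraClient`'s methods are hypothetical;
only the delegation pattern itself is the point.

```python
class LoadBalancerAbstractDriver:
    """Simplified stand-in for the Neutron LBaaS driver base class."""

    def create_pool(self, context, pool):
        raise NotImplementedError

    def delete_pool(self, context, pool):
        raise NotImplementedError


class LibraDriver(LoadBalancerAbstractDriver):
    """Forwards LBaaS operations to a (hypothetical) Libra API client.

    Libra handles HA, monitoring, and resiliency internally, so the
    driver only translates Neutron requests into Libra calls.
    """

    def __init__(self, libra_client):
        self.client = libra_client

    def create_pool(self, context, pool):
        # Forward the request and return Libra's device identifier so
        # Neutron can record the mapping.
        return self.client.create_loadbalancer(name=pool["id"])

    def delete_pool(self, context, pool):
        self.client.delete_loadbalancer(pool["id"])
```

This keeps the managed-service behavior inside the backend, which is the
simplest integration path while the longer-term strategy is discussed.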




  While this would allow us to fill a gap short term we would like to
 discuss the longer term strategy since we believe that everybody would
 benefit from having such managed services artifacts built directly into
 Neutron LBaaS.

 I'm not sure about building it directly into LBaaS, although we can
 discuss it.


[Susanne] The idea is that the managed-services aspects/extensions would be
reusable for other software load balancers.


 For instance, HA is definitely on roadmap and everybody seems to agree
 that HA should not require user/tenant to do any specific configuration
 other than choosing HA capability of LBaaS service. So as far as I see it,
 requirements for HA in LBaaS look very similar to requirements for HA in
 Libra.


[Susanne] Yes. Libra works well for us in the public cloud but we would
like to move to Neutron LBaaS and not have duplicate efforts: Libra and
Neutron LBaaS. We were hoping to be able to take the best of Libra and add
it to Neutron LBaaS and help shape Neutron LBaaS to fit a wider spectrum of
customers/users.



 There are blueprints on high-availability for the HA proxy software
 load-balancer and we would like to suggest implementations that fit our
 needs as services providers.



 One example where the managed service approach for the HA proxy load
 balancer is different from the current Neutron LBaaS roadmap is around HA
 and resiliency. The 2 LB HA setup proposed (
 https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy) isn't
 appropriate for service providers in that users would have to pay for the
 extra load-balancer even though it is not being actively used.

 One important idea of the HA is that its implementation is
 vendor-specific, so each vendor or cloud provider can implement it in the
 way that suits their needs. So I don't see why particular HA solution for
 haproxy should be considered as a common among other vendors/providers.


[Susanne] Are you saying that we should create a driver that would be a
peer to the current loadbalancer/haproxy driver? So for example
loadbalancer/managed-ha-proxy (please don't get hung up on the name I
picked) would be a driver we would implement to handle our interaction with
a pool of standby, preconfigured load balancers instead of the 2-LB HA
setup? And it would be part of the Neutron LBaaS branch?
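The pool-based driver idea sketched in this question could look roughly
like the following. Every name here is an invented illustration (not a
Neutron LBaaS or Libra API); it only shows how a provider-owned pool of
preconfigured LBs would be assigned to tenant environments so tenants
never manage the devices themselves.

```python
class ManagedLBPool:
    """Provider-owned pool of preconfigured load balancers that are
    assigned to tenant environments on demand (hypothetical sketch)."""

    def __init__(self, lb_ids):
        self._free = list(lb_ids)   # idle, preconfigured LBs
        self._assigned = {}         # tenant_id -> lb_id

    def assign(self, tenant_id):
        """Hand an idle LB to a tenant; the tenant sees a black box."""
        if not self._free:
            raise RuntimeError("no preconfigured load balancers available")
        lb_id = self._free.pop(0)
        self._assigned[tenant_id] = lb_id
        return lb_id

    def release(self, tenant_id):
        """Return a tenant's LB to the free pool for reuse."""
        self._free.append(self._assigned.pop(tenant_id))
```

Compared with the 2-LB HA blueprint, no per-tenant standby sits idle;
spare capacity is shared across all tenants via the pool.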



I am assuming that blueprints need to be approved before the feature is
accepted into a release. Then the feature is implemented and accepted by
the core members into the main repo. What process would we have to follow
if we wanted to 

Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-10 Thread Susanne Balle
+1. Count me in. Susanne


On Fri, Mar 7, 2014 at 10:06 AM, Stephen Wong s3w...@midokura.com wrote:

 +1 - that is a good idea! Having it several days before the J-Summit in
 Atlanta would be great.

 - Stephen


 On Fri, Mar 7, 2014 at 1:33 AM, Eugene Nikanorov 
 enikano...@mirantis.com wrote:

 I think mini summit is no worse than the summit itself.
 Everyone who wants to participate can join.
 In fact what we really need is a certain time span of focused work.
 ML, meetings are ok, it's just that dedicated in person meetings (design
 sessions) could be more productive.
 I'm thinking what if such mini-summit is held in Atlanta 1-2-3 days prior
 to the OS summit?
 That could save attendees a lot of time/money.

 Thanks,
 Eugene.



 On Fri, Mar 7, 2014 at 9:51 AM, Mark McClain mmccl...@yahoo-inc.com wrote:


 On Mar 6, 2014, at 4:31 PM, Jay Pipes jaypi...@gmail.com wrote:

  On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
  +1
 
  I think if we can have it before the Juno summit, we can take
  concrete, well thought-out proposals to the community at the summit.
 
  Unless something has changed starting at the Hong Kong design summit
  (which unfortunately I was not able to attend), the design summits have
  always been a place to gather to *discuss* and *debate* proposed
  blueprints and design specs. It has never been about a gathering to
  rubber-stamp proposals that have already been hashed out in private
  somewhere else.

 You are correct that is the goal of the design summit.  While I do think
 it is wise to discuss the next steps with LBaaS at this point in time, I am
 not a proponent of in-person mini design summits.  Many contributors to
 LBaaS are distributed all over the globe, and scheduling a mini summit
 on short notice will exclude valuable contributors to the team.  I'd
 prefer to see an open process with discussions on the mailing list and
 specially scheduled IRC meetings to discuss the ideas.

 mark




Re: [openstack-dev] Proposed Logging Standards

2014-01-27 Thread Susanne Balle
What I'd like to find out now:

1) who's interested in this topic?

Please include me.

2) who's interested in helping flesh out the guidelines for various log
levels?

Please include me.

3) who's interested in helping get these kinds of patches into various
projects in OpenStack?
4) which projects are interested in participating (i.e. interested in
prioritizing landing these kinds of UX improvements)

This is going to be progressive and iterative, and it will require lots of
people involved.

Regards, Susanne


