Re: [openstack-dev] [kolla] Stepping down from core

2016-07-08 Thread Angus Salkeld
On Fri, Jul 8, 2016 at 10:01 AM Michal Rostecki 
wrote:

> Hi,
>
> I'd like to announce that I'm leaving the Kolla core team and I will not
> work on this project anymore (at least in the near future). I'm fully
> focusing on working in Kubernetes upstream directly. That said, I want
> to thank you for working together!
>

Steve: I must do this too (Michal and I are working together upstream in
Kubernetes); I just don't have the time to do any reviews in Kolla. Thanks
for encouraging us; the kolla-mesos stuff was fun and I learned a lot.

-Angus


>
> Cheers,
> Michal
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-02 Thread Angus Salkeld
On Sun, May 1, 2016 at 12:54 AM Steven Dake (stdake) 
wrote:

> Fellow core reviewers,
>
> We had a fantastic turnout at our fishbowl "kubernetes as an underlay for
> Kolla" session.  The etherpad documents the folks interested and the
> discussion at summit [1].
>
> This proposal is mostly based upon a combination of several discussions at
> open design meetings coupled with the kubernetes underlay discussion.
>
> The proposal (and what we are voting on) is as follows:
>
> Folks in the following list will be added to a kolla-k8s-core group.
>
>  This kolla-k8s-core group will be responsible for code reviews and code
> submissions to the kolla repository for the /kubernetes top-level
> directory.  Individuals in kolla-k8s-core who consistently approve (+2) or
> disapprove (-2) changes to top-level directories other than kubernetes will
> be handled on a case-by-case basis, with several "training warnings"
> followed by removal from the kolla-k8s-core group.  The kolla-k8s-core group
> will be added as a subgroup of the kolla-core reviewer team, which means
> they in effect have all of the ACL access of the existing kolla
> repository.  I think it is better in this case to trust these individuals
> to do the right thing and only approve changes for the kubernetes directory
> until they are proposed for the kolla-core reviewer group, where they can
> gate changes to any part of the repository.
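
For context, Gerrit access control in OpenStack is granted per repository
(per ref), not per top-level directory, which is why the paragraph above
has to rely on trust rather than tooling.  A project ACL of roughly the
following shape - an illustrative sketch, not the actual kolla ACL file in
project-config - is what gives a core group its +2/-2 and workflow rights
over every file in the repository:

    # Illustrative sketch of a Gerrit project ACL; the group name and
    # option values are examples, not the real kolla configuration.
    [access "refs/heads/*"]
        abandon = group kolla-core
        label-Code-Review = -2..+2 group kolla-core
        label-Workflow = -1..+1 group kolla-core

    [receive]
        requireChangeId = true

    [submit]
        mergeContent = true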
>
>
>    - Britt Houser
>    - mark casey
>    - Steven Dake (delta-alpha-kilo-echo)
>    - Michael Schmidt
>    - Marian Schwarz
>    - Andrew Battye
>    - Kevin Fox (kfox)
>    - Sidharth Surana (ssurana)
>    - Michal Rostecki (mrostecki)
>    - Swapnil Kulkarni (coolsvap)
>    - MD NADEEM (mail2nadeem92)
>    - Vikram Hosakote (vhosakot)
>    - Jeff Peeler (jpeeler)
>    - Martin Andre (mandre)
>    - Ian Main (Slower)
>    - Hui Kang (huikang)
>    - Serguei Bezverkhi (sbezverk)
>    - Alex Polvi (polvi)
>    - Rob Mason
>    - Alicja Kwasniewska
>    - sean mooney (sean-k-mooney)
>    - Keith Byrne (kbyrne)
>    - Zdenek Janda (xdeu)
>    - Brandon Jozsa (v1k0d3n)
>    - Rajath Agasthya (rajathagasthya)
>
>
> If you already are in the kolla-core review team, you won't be added to
> the kolla-k8s-core team as you will already have the necessary ACLs to do
> the job.  If you feel you would like to join this initial bootstrapping
> process, please add your name to the etherpad in [1].
>
> After 8 weeks (July 15th), folks that have not been actively reviewing or
> committing code will be removed from the kolla-k8s-core group.  We will use
> the governance repository metrics for team size [2], which is either 30
> reviews over 6 months (in this case, 10 reviews) or 6 commits over 6
> months (in this case, 2 commits) to the repository.  Folks that don't meet
> the qualifications are still welcome to commit to the repository and
> contribute code or documentation but will lose approval rights on patches.
>
> The kubernetes codebase will be maintained in the
> https://github.com/openstack/kolla repository
> under the kubernetes top-level directory.  Contributors that become active
> in the kolla repository itself will be proposed over time to the kolla-core
> group.  Only kolla-core members will be permitted to participate in policy
> decisions and voting thereof, so there is some minimal extra responsibility
> involved in joining the kolla-core ACL team for those folks wanting to move
> into the kolla core team over time.  The goal is, over time, to entirely
> remove the kolla-k8s-core team and make one core reviewer team in the
> kolla-core ACL.
>
> Members in the kolla-k8s-core group will have the ability to +2 or -2 any
> change to the main kolla repository via ACLs; however, I propose we trust
> these folks to only +2/-2 changes related to the kubernetes directory in
> the kolla repository and remove folks that consistently break this
> agreement.  Initial errors as folks learn the system will be tolerated and
> commits reverted as makes sense.
>
> I feel we made a couple of errors with the creation of kolla-mesos that
> need correction.  The first error was that the kolla-mesos-core team lacked
> a diversely affiliated team membership developing the code base; the above
> list has significant diversity.  The second error is that the repository
> was split in the first place.  This resulted in a separate ABI to the
> containers being implemented, which was a sore spot for me personally.  We
> did our
>

I did this for you ages ago:
https://etherpad.openstack.org/p/kolla-set_configs

My opinion on this is that kolla images should provide mechanisms
to make configuration easy, and the deployment tools should use
these in whichever way makes sense to them.
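
The mechanism referred to above is the configuration contract between a
kolla image and whatever deploys it: the deployment tool drops a small JSON
file into the container, and the image's start script copies the referenced
files into place (and sets ownership/permissions) before exec'ing the
service.  A sketch of roughly what that file looks like is below; the
service, paths and permissions are illustrative rather than taken from the
thread:

    {
        "command": "/usr/bin/nova-compute",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/nova.conf",
                "dest": "/etc/nova/nova.conf",
                "owner": "nova",
                "perm": "0600"
            }
        ]
    }

Because the contract is just a file, Ansible, Mesos or Kubernetes can all
drive the same images without caring how the others do it, which is the
point being made above.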


> best to build both sides of the bridge here, but this time I'd like the
> bridge between these two interests 

Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Angus Salkeld
On Mon, May 2, 2016 at 7:07 AM Steven Dake (stdake) 
wrote:

> Ryan had rightly pointed out that when we made the original proposal at
> 9am that morning, we had asked folks if they wanted to participate in a
> separate repository.
>
> I don't think a separate repository is the correct approach, based upon
> one-off private conversations with folks at summit.  Many people from that list
> approached me and indicated they would like to see the work integrated in
> one repository as outlined in my vote proposal email.  The reasons I heard
> were:
>
>- Better integration of the community
>- Better integration of the code base
>- Doesn't present an us vs them mentality that one could argue
>happened during kolla-mesos
>- A second repository makes k8s a second class citizen deployment
>architecture without a voice in the full deployment methodology
>- Two gating methods versus one
>- No going back to a unified repository while preserving git history
>
> In favor of the separate repository, I heard:
>
>- It presents a unified workspace for kubernetes alone
>- Packaging without ansible is simpler as the ansible directory need
>not be deleted
>
> There were other complaints but not many pros.  Unfortunately I failed to
> communicate these complaints to the core team prior to the vote, so now is
> the time for fixing that.
>
> I'll leave it open to the new folks that want to do the work if they want
> to work on an offshoot repository and open us up to the possible problems
> above.
>

+1 to the separate repo

I think the separate repo worked very well for us and would encourage you
to replicate that again. Having one repo doing one thing makes the goal of
the repo obvious and makes the API between the images and deployment
clearer (also the stability of that API and things like permissions
*cough* drop-root).

-Angus


>
> If you are on this list:
>
>
>    - Ryan Hallisey
>    - Britt Houser
>    - mark casey
>    - Steven Dake (delta-alpha-kilo-echo)
>    - Michael Schmidt
>    - Marian Schwarz
>    - Andrew Battye
>    - Kevin Fox (kfox)
>    - Sidharth Surana (ssurana)
>    - Michal Rostecki (mrostecki)
>    - Swapnil Kulkarni (coolsvap)
>    - MD NADEEM (mail2nadeem92)
>    - Vikram Hosakote (vhosakot)
>    - Jeff Peeler (jpeeler)
>    - Martin Andre (mandre)
>    - Ian Main (Slower)
>    - Hui Kang (huikang)
>    - Serguei Bezverkhi (sbezverk)
>    - Alex Polvi (polvi)
>    - Rob Mason
>    - Alicja Kwasniewska
>    - sean mooney (sean-k-mooney)
>    - Keith Byrne (kbyrne)
>    - Zdenek Janda (xdeu)
>    - Brandon Jozsa (v1k0d3n)
>    - Rajath Agasthya (rajathagasthya)
>    - Jinay Vora
>    - Hui Kang
>    - Davanum Srinivas
>
>
>
> Please speak up if you are in favor of a separate repository or a unified
> repository.
>
> The core reviewers will still take responsibility for determining if we
> proceed on the action of implementing kubernetes in general.
>
> Thank you
> -steve
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Place kolla-mesos in openstack-attic

2016-04-24 Thread Angus Salkeld
+1

On Sat, 23 Apr 2016 3:08 am Michał Rostecki 
wrote:

> +1
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] kolla-mesos development being scaled back

2016-04-20 Thread Angus Salkeld
On Wed, Apr 20, 2016 at 4:43 PM Steven Dake (stdake) 
wrote:

> Hey folks,
>
> I know a lot of folks are already aware of this, but as we head into
> summit, in the interest of an Open Community, it is important to share this
> information widely.  The major implementors of the kolla-mesos repository
> listed in this specification [1] don't intend to continue its development
> within the Kolla project governance.  I suspect we will end up placing this
> repository in the attic and removing it as a deliverable for Kolla under
> Kolla governance and TC oversight.
>
> Heading into summit, I want folks to understand this fact - kolla-mesos
> was started as an experimental repository to re-use much of the design of
> Kolla, and the project has had some success at achieving this goal.
> Unfortunately the project implementors don't intend to continue its
> development in the Open, but instead take it "internal" and work on it
> privately.  I disagree with this approach, but as the PTL of Kolla I have
> done everything I can to provide an inviting positive working environment
> for the folks working on kolla-mesos.  It is such a shame, as kolla-mesos
> is making good progress.  The core-reviewers will have to vote (after
> summit concludes) what to do with the repository and all of the good work
> produced so far, but I don't believe there is enough support to continue
> the repository development.
>

Hi

So yeah, this was a business decision (out of my hands) and not related to
anything in the community. In fact I must thank everyone
who has helped out with reviews and encouragement along the way.

Sorry that it will not be continued; I have really enjoyed working on
it. Personally, I'll be working on Murano and Heat from now on.

-Angus


>
> Regards,
> -steve
>
> [1]
> https://github.com/openstack/kolla/blob/master/specs/mesos-deployment.rst
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for Core Reviewer

2016-03-29 Thread Angus Salkeld
+1

On Wed, Mar 30, 2016 at 6:45 AM Michał Jastrzębski  wrote:

> +1
>
> On 29 March 2016 at 11:39, Ryan Hallisey  wrote:
> > +1
> >
> > - Original Message -
> > From: "Paul Bourke" 
> > To: openstack-dev@lists.openstack.org
> > Sent: Tuesday, March 29, 2016 12:10:38 PM
> > Subject: Re: [openstack-dev] [kolla][vote] Nominating Vikram Hosakot for
> Core Reviewer
> >
> > +1
> >
> > On 29/03/16 17:07, Steven Dake (stdake) wrote:
> >> Hey folks,
> >>
> >> Consider this proposal a +1 in favor of Vikram joining the core reviewer
> >> team.  His reviews are outstanding.  If he doesn’t have anything useful
> >> to add to a review, he doesn't pile on the review with more -1s, which
> >> are slightly disheartening to people.  Vikram has started a trend
> >> amongst the core reviewers of actually diagnosing gate failures in
> >> peoples patches as opposed to saying gate failed please fix.  He does
> >> this diagnosis in nearly every review I see, and if he is stumped  he
> >> says so.  His 30-day review stats [1] place him in pole position and his
> >> 90-day review stats [2] place him in second position.  Of critical note is
> >> that Vikram is ever-present on IRC which in my professional experience
> >> is the #1 indicator of how well a core reviewer will perform long term.
> >> Besides IRC and review requirements, we also have code requirements
> >> for core reviewers.  Vikram has implemented only 10 patches so far, but I
> >> feel he could amp this up if he had feature work to do.  At the moment
> >> we are in a holding pattern on master development because we need to fix
> >> Mitaka bugs.  That said Vikram is actively working on diagnosing root
> >> causes of people's bugs in the IRC channel pretty much 12-18 hours a day
> >> so we can ship Mitaka in a working bug-free state.
> >>
> >> Our core team consists of 11 people.  Vikram requires at minimum 6 +1
> >> votes, with no veto -2 votes, within a 7-day voting window ending on
> >> April 7th.  If there is a veto vote prior to April 7th I will close
> >> voting.  If there is a unanimous vote prior to April 7th, I will make
> >> appropriate changes in gerrit.
> >>
> >> Regards
> >> -steve
> >>
> >> [1] http://stackalytics.com/report/contribution/kolla-group/30
> >> [2] http://stackalytics.com/report/contribution/kolla-group/90
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Proposing Alicja Kwasniewska for core reviewer

2016-03-06 Thread Angus Salkeld
+1

On Sat, 5 Mar 2016 2:58 am Steven Dake (stdake)  wrote:

> Core Reviewers,
>
> Alicja has been instrumental in our work around jinja2 docker file
> creation, removing our symlink madness.  She has also been instrumental in
> actually getting Diagnostics implemented in a sanitary fashion.  She has
> also done a bunch of other work that folks in the community already know
> about that I won't repeat here.
>
> I had always hoped she would start reviewing so we could invite her to the
> core review team, and over the last several months she has reviewed quite a
> bit!  Her 90 day stats[1] place her at #9 with a solid ratio of 72%.  Her
> 30 day stats[2] are even better and place her at #6 with an improving ratio
> of 67%.  She also just doesn't rubber stamp reviews or jump in reviews at
> the end; she sticks with them from beginning to end and finds real
> problems, not trivial things.  Finally Alicja is full time on Kolla as
> funded by her employer so she will be around for the long haul and always
> available.
>
> Please consider my proposal to be a +1 vote.
>
> To be approved for the core reviewer team, Alicja requires a majority vote
> of 6 total votes with no veto within the one-week period beginning now and
> ending Friday March 11th.  If you're on the fence, you can always abstain.
> If the vote is unanimous before the voting ends, I will make appropriate
> changes to Gerrit's ACLs.  If there is a veto vote, voting will close prior
> to March 11th.
>
> Regards,
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla-group/90
> [2] http://stackalytics.com/report/contribution/kolla-group/30
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Proposing Angus Salkeld for kolla-core

2016-02-28 Thread Angus Salkeld
On Mon, Feb 29, 2016 at 5:04 AM Steven Dake (stdake) <std...@cisco.com>
wrote:

> Angus,
>
> You received 7 +1 votes with no veto votes in the voting window and we
> require 6 votes or more to join the core team at the time you were
> proposed.  As a result, I have removed you from kolla-mesos-core and added
> you to kolla-core.
>
> Welcome to the core reviewer team!
>

Thank you! I'll also pay more attention to main kolla reviews (I realise
that I have been somewhat focused on the kolla-mesos stuff).

-Angus


>
> Regards
> -steve
>
>
> From: Steven Dake <std...@cisco.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Friday, February 19, 2016 at 11:04 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [kolla][vote] Proposing Angus Salkeld for
> kolla-core
>
> Angus is already in kolla-mesos-core but doesn't have the broad ability to
> approve changes for all of kolla.  We agreed by majority vote in Tokyo
> that folks in kolla-mesos-core that integrated well with the project would
> be moved from kolla-mesos-core to kolla-core.  Once kolla-mesos-core is
> empty, we will deprecate that group.
>
> Angus has clearly shown his commitment to Kolla:
> He is #9 in reviews for Mitaka and #3 in commits(!), and shows a
> solid PDE of 64 (meaning 64 days of interaction with either reviews,
> commits, or mailing list participation).
>
> Count my vote as a +1.  If you're on the fence, feel free to abstain.  A
> vote of -1 is a VETO vote, which terminates the voting process.  If there
> is unanimous approval prior to February 26, or a veto vote, the voting will
> be closed and appropriate changes made.
>
> Remember, we agreed it takes a majority vote to approve a core
> reviewer, which means Angus needs a +1 support from at least 6 core
> reviewers with no veto votes.
>
> Regards,
> -steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] kolla-mesos IRC meeting

2016-01-15 Thread Angus Salkeld
On Fri, Jan 15, 2016 at 7:44 AM Steven Dake (stdake) 
wrote:

> I am -2 on a separate meeting.
>
> If there are problems with the current approach to how the agenda is
> managed as it relates to mesos topics, let's fix that.  If there are
> timezone problems, let's try to fix that.  Let's fix the problems you point
>

The current meeting is 16:30 UTC, which is 2:30am for me :-(
I suspect that I may be the only person in this situation, so basically I
can't attend meetings. So that kinda sucks...

-A


> out rather than create another meeting for our core team to have to try to
> make.  Even if the core team is not actively writing code, they are
> actively reviewing code (I hope:) and it is helpful for the core team to
> act as one unit when it comes to decision making.  This includes
> informational meetings like kolla-mesos.
>
> I find that, as the typical meeting goes, I usually have over half
> the agenda time available for open topics unrelated to normal project
> business (such as figuring out what will be discussed at the midcycle or
> summit as an example).  It would be a real shame to have two shorter
> meetings when we can just jam the one topic of Kolla-related stuff into one
> meeting.
>
> Regards
> -steve
>
>
>
> On 1/15/16, 4:38 AM, "Michal Rostecki"  wrote:
>
> >Hi,
> >
> >Currently we're discussing stuff about kolla-mesos project on kolla IRC
> >meetings[1]. We have an idea of creating the separate meeting for
> >kolla-mesos. I see the following reasons for that:
> >
> >- kolla-mesos has some contributors which aren't able to attend kolla
> >meeting because of timezone reasons
> >- kolla meetings had a lot of topics recently and there was a short time
> >for discussing kolla-mesos things
> >- in most of the kolla meetings, we treated the whole kolla-mesos as one
> >topic, which is bad in terms of analyzing single problems inside this
> >project
> >
> >The things I would like to know from you are:
> >- whether you're +1 or -1 to the whole idea of having separate meeting
> >- what is your preferred time of meeting - please use this etherpad[2]
> >(I already added there some names of most active contributors from who
> >I'd like to hear an opinion, so if you're interested - please "override
> >color"; if not, remove the corresponding line)
> >
> >About the time of meeting and possible conflicts - I think that in case
> >of conflicting times and the equal number of votes, opinion of core
> >reviewers and people who are already contributing to the project
> >(reviews and commits) will be more important. You can see the
> >contributors here[3][4].
> >
> >[1] https://wiki.openstack.org/wiki/Meetings/Kolla
> >[2] https://etherpad.openstack.org/p/kolla-mesos-irc-meeting
> >[3] http://stackalytics.com/?module=kolla-mesos
> >[4] http://stackalytics.com/?module=kolla-mesos&metric=commits
> >
> >Cheers,
> >Michal
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-02 Thread Angus Salkeld
On Tue, Nov 3, 2015 at 3:04 AM Steven Dake (stdake) <std...@cisco.com>
wrote:

> Hey folks,
>
> We had an informal vote at the mid cycle from the core reviewers, and it
> was a majority vote, so we went ahead and started the process of the
> introduction of mesos orchestration into Kolla.
>
> For background for our few core reviewers that couldn’t make it and the
> broader community, Angus Salkeld has committed himself and 3 other Mirantis
> engineers full time to investigate if Mesos could be used as an
> orchestration engine in place of Ansible.  We are NOT dropping our
> Ansible implementation in the short or long term.  Kolla will continue to
> lead with Ansible.  At some point in Mitaka or the N cycle we may move the
> ansible bits to a repository called “kolla-ansible” and the kolla
> repository would end up containing the containers only.
>
> The general consensus was that if folks wanted to add additional
> orchestration systems for Kolla, they were free to do so if they did the
> development and made a commitment to maintaining one core reviewer team
> with broad expertise among the core reviewer team of how these various
> systems work.
>
> Angus has agreed to the following
>
>1. A new team called “kolla-mesos-core” with 2 members.  One of the
>members is Angus Salkeld, the other is selected by Angus Salkeld since this
>is a cookie cutter empty repository.  This is typical of how new projects
>would operate, but we don’t want a code dump and instead want an integrated
>core team.  To prevent a situation in which the current Ansible experts shy
>away from the Mesos implementation, the core reviewer team has committed to
>reviewing the mesos code to get a feel for it.
>2. Over the next 6-8 weeks these two folks will strive to join the
>Kolla core team by typical means 1) irc participation 2) code generation 3)
>effective and quality reviews 4) mailing list participation
>3. Angus will create a technical specification which will be
>roll-call voted and only accepted once a majority of the core review team is
>satisfied with the solution.
>
> I'll get stuck into this now.


>
>1. The kolla-mesos deliverable will be under Kolla governance and be
>managed by the Kolla core reviewer team after the kolla-mesos-core team is
>deprecated.
>2. If the experiment fails, kolla-mesos will be placed in the attic.
>There is no specific window for the experiments, it is really up to Angus
>to decide if the technique is viable down the road.
>3. For the purpose of voting, the kolla-mesos-core team won’t be
>permitted to vote (on things like this or other roll-call votes in the
>community) until they are “promoted” to the kolla-core reviewer team.
>
>
> The core reviewer team has agreed to the following
>
>1. Review patches in kolla-mesos repository
>2. Actively learn how the mesos orchestration system works in the
>context of Kolla
>3. Actively support Angus’s effort in the existing Kolla code base as
>long as it is not harmful to the Kolla code base
>
> We all believe this will lead to a better outcome than Mirantis developing
> some code on their own and later dumping it into the Kolla governance or
> operating as a fork.
>

Absolutely.


>
> I’d like to give the core reviewers another chance to vote since the
> voting was semi-rushed.
>
> I am +1 given the above constraints.  I think this will help Kolla grow
> and potentially provide a better (or arguably different) orchestration
> system and is worth the investigation.  At no time will we put the existing
> Kolla Ansible + Docker goodness into harms way, so I see no harm in an
> independent repository especially if the core reviewer team strives to work
> as one team (rather than two independent teams with the same code base).
>
> Abstaining is the same as voting -1, so please vote one way or another
> with a couple line blob about your thoughts on the idea.
>
> Note: of the core reviewers there, we had 7 +1 votes (and we have a
> 9-member core reviewer team, so there is already a majority, but I'd like
> to give everyone an opportunity to weigh in).
>
>
Thanks for doing this Steve; we want to do this as much as possible in the
open (we only have very basic bits of PoC code; we will get stuck into
getting this code up ASAP and pushing it forward).

Here is the review for the new repo:
 https://review.openstack.org/#/c/240433

Angus


> Regards
> -steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.

Re: [openstack-dev] [heat] resource_registry base_url

2015-10-22 Thread Angus Salkeld
On Fri, Oct 23, 2015 at 8:10 AM Steve Baker  wrote:

> On 23/10/15 09:35, Jay Dobies wrote:
> > In looking through the environment file loading code, I keep seeing a
> > check for base_url in the resource registry. It looks like a way to
> > have the registry entries only be the filenames (I suppose relative
> > filenames work as well) instead of having to enter the full path every
> > time. The base_url would be used as the root URL for those filenames
> > when loading them.
> >
> > Thing is, I can't find any reference to that in the docs. I did a
> > search for resource_registry, but the only thing I can find is [1]
> > which doesn't talk about base_url.
>

Hi

I think the problem is that this is a heatclient-only feature, meaning that
the docs for the API and service will not have any details of it.

There are other features in this boat too. I am not sure where the docs for
them should live; I suspect in the heatclient tree.


> >
> > Is this something that's still supported or was it "turned off" (so to
> > speak) by removing the docs about it so users didn't know to use it?
> > Is the syntax to just sit it side by side with the resource
> > definitions, similar to:
> >
> > resource_registry:
> >   "base_url": /home/jdob/my_templates
> >   "OS::Nova::Server": my_nova.yaml
> >
>

I think this is correct usage, tho' it's been ages since I wrote it and the
code has changed
heaps since then.

-Angus
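
Putting the two suggestions in this thread together, a client-side
environment file using base_url looks roughly like the sketch below (the
paths and template name are the ones from the quoted example, and the exact
behaviour should be checked against the heatclient docs/source, since this
is a client-only feature); the alternative Steve suggests further down is
to drop base_url and keep the environment file next to the templates so
relative paths resolve:

    # Hedged sketch of a heatclient environment file; base_url is resolved
    # by the client before the request reaches the Heat API.
    resource_registry:
      base_url: /home/jdob/my_templates
      "OS::Nova::Server": my_nova.yaml

    # Alternative without base_url: put this file in the same directory as
    # my_nova.yaml and rely on relative paths.
    # resource_registry:
    #   "OS::Nova::Server": my_nova.yaml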


> > Or am I just totally missing where in the docs it's talked about
> > (which is also terribly possible)?
> >
> > [1]
> >
> http://docs.openstack.org/developer/heat/template_guide/environment.html?highlight=resource_registry
> >
> > Thanks :)
>
> I'm not sure, since I've never used an explicit base_url. I just put the
> environment file which defines the resource_registry in the same
> directory as my_nova.yaml and the relative paths will resolve. Is that
> an option for you?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] core team nomination

2015-10-20 Thread Angus Salkeld
+1 to both

On Tue, 20 Oct 2015 11:42 pm Sergey Kraynev  wrote:

I'd like to propose new candidates for the heat core team:
Rabi Mishra
Peter Razumovsky

According to the statistics, both candidates have made a big effort in Heat
as reviewers and as contributors [1][2].
They have been involved in Heat community work during the last several
releases and have shown a good understanding of the Heat code.
I think they are ready to become core reviewers.

Heat-cores, please vote with +/- 1.

[1] http://stackalytics.com/report/contribution/heat-group/180
[2] http://stackalytics.com/?module=heat-group&metric=person-day
--
Regards,
Sergey.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] convergence rally test results (so far)

2015-09-03 Thread Angus Salkeld
On Thu, Sep 3, 2015 at 3:53 AM Zane Bitter <zbit...@redhat.com> wrote:

> On 02/09/15 04:55, Steven Hardy wrote:
> > On Wed, Sep 02, 2015 at 04:33:36PM +1200, Robert Collins wrote:
> >> On 2 September 2015 at 11:53, Angus Salkeld <asalk...@mirantis.com>
> wrote:
> >>
> >>> 1. limit the number of resource actions in parallel (maybe base on the
> >>> number of cores)
> >>
> >> I'm having trouble mapping that back to 'and heat-engine is running on
> >> 3 separate servers'.
> >
> > I think Angus was responding to my test feedback, which was a different
> > setup, one 4-core laptop running heat-engine with 4 worker processes.
> >
> > In that environment, the level of additional concurrency becomes a
> problem
> > because all heat workers become so busy that creating a large stack
> > DoSes the Heat services, and in my case also the DB.
> >
> > If we had a configurable option, similar to num_engine_workers, which
> > enabled control of the number of resource actions in parallel, I probably
> > could have controlled that explosion in activity to a more managable
> series
> > of tasks, e.g I'd set num_resource_actions to (num_engine_workers*2) or
> > something.
>
> I think that's actually the opposite of what we need.
>
> The resource actions are just sent to the worker queue to get processed
> whenever. One day we will get to the point where we are overflowing the
> queue, but I guarantee that we are nowhere near that day. If we are
> DoSing ourselves, it can only be because we're pulling *everything* off
> the queue and starting it in separate greenthreads.
>

The worker does not use a greenthread per job like service.py does.
The issue is that if you have actions that are fast, you can hit the DB hard.

QueuePool limit of size 5 overflow 10 reached, connection timed out,
timeout 30

It seems like it's not very hard to hit this limit. It comes from simply
loading
the resource in the worker:
"/home/angus/work/heat/heat/engine/worker.py", line 276, in check_resource
"/home/angus/work/heat/heat/engine/worker.py", line 145, in _load_resource
"/home/angus/work/heat/heat/engine/resource.py", line 290, in load
resource_objects.Resource.get_obj(context, resource_id)
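
The "size 5 overflow 10 ... timeout 30" figures in that error are the
default SQLAlchemy/oslo.db connection pool settings.  One way to buy
headroom while investigating - an illustrative heat.conf tweak, not
something agreed in this thread, and it only hides the symptom if sessions
are being held too long - is to raise the pool limits:

    # Hedged sketch: enlarge the DB connection pool in heat.conf.
    # Values are examples only; this does not fix session handling.
    [database]
    max_pool_size = 20
    max_overflow = 40
    pool_timeout = 60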



>
> In an ideal world, we might only ever pull one task off that queue at a
> time. Any time the task is sleeping, we would use for processing stuff
> off the engine queue (which needs a quick response, since it is serving
> the ReST API). The trouble is that you need a *huge* number of
> heat-engines to handle stuff in parallel. In the reductio-ad-absurdum
> case of a single engine only processing a single task at a time, we're
> back to creating resources serially. So we probably want a higher number
> than 1. (Phase 2 of convergence will make tasks much smaller, and may
> even get us down to the point where we can pull only a single task at a
> time.)
>
> However, the fewer engines you have, the more greenthreads we'll have to
> allow to get some semblance of parallelism. To the extent that more
> cores means more engines (which assumes all running on one box, but
> still), the number of cores is negatively correlated with the number of
> tasks that we want to allow.
>
> Note that all of the greenthreads run in a single CPU thread, so having
> more cores doesn't help us at all with processing more stuff in parallel.
>

Except, as I said above, we are not creating greenthreads in worker.

-A


>
> cheers,
> Zane.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] convergence rally test results (so far)

2015-09-03 Thread Angus Salkeld
On Fri, Sep 4, 2015 at 12:48 AM Zane Bitter <zbit...@redhat.com> wrote:

> On 03/09/15 02:56, Angus Salkeld wrote:
> > On Thu, Sep 3, 2015 at 3:53 AM Zane Bitter <zbit...@redhat.com
> > <mailto:zbit...@redhat.com>> wrote:
> >
> > On 02/09/15 04:55, Steven Hardy wrote:
> >  > On Wed, Sep 02, 2015 at 04:33:36PM +1200, Robert Collins wrote:
> >  >> On 2 September 2015 at 11:53, Angus Salkeld
> > <asalk...@mirantis.com <mailto:asalk...@mirantis.com>> wrote:
> >  >>
> >  >>> 1. limit the number of resource actions in parallel (maybe base
> > on the
> >  >>> number of cores)
> >  >>
> >  >> I'm having trouble mapping that back to 'and heat-engine is
> > running on
> >  >> 3 separate servers'.
> >  >
> >  > I think Angus was responding to my test feedback, which was a
> > different
> >  > setup, one 4-core laptop running heat-engine with 4 worker
> processes.
> >  >
> >  > In that environment, the level of additional concurrency becomes
> > a problem
> >  > because all heat workers become so busy that creating a large
> stack
> >  > DoSes the Heat services, and in my case also the DB.
> >  >
> >  > If we had a configurable option, similar to num_engine_workers,
> which
> >  > enabled control of the number of resource actions in parallel, I
> > probably
> >  > could have controlled that explosion in activity to a more
> > managable series
> >  > of tasks, e.g I'd set num_resource_actions to
> > (num_engine_workers*2) or
> >  > something.
> >
> > I think that's actually the opposite of what we need.
> >
> > The resource actions are just sent to the worker queue to get
> processed
> > whenever. One day we will get to the point where we are overflowing
> the
> > queue, but I guarantee that we are nowhere near that day. If we are
> > DoSing ourselves, it can only be because we're pulling *everything*
> off
> > the queue and starting it in separate greenthreads.
> >
> >
> > worker does not use a greenthread per job like service.py does.
> > This issue is if you have actions that are fast you can hit the db hard.
> >
> > QueuePool limit of size 5 overflow 10 reached, connection timed out,
> > timeout 30
> >
> > It seems like it's not very hard to hit this limit. It comes from simply
> > loading
> > the resource in the worker:
> > "/home/angus/work/heat/heat/engine/worker.py", line 276, in
> check_resource
> > "/home/angus/work/heat/heat/engine/worker.py", line 145, in
> _load_resource
> > "/home/angus/work/heat/heat/engine/resource.py", line 290, in load
> > resource_objects.Resource.get_obj(context, resource_id)
>
> This is probably me being naive, but that sounds strange. I would have
> thought that there is no way to exhaust the connection pool by doing
> lots of actions in rapid succession. I'd have guessed that the only way
> to exhaust a connection pool would be to have lots of connections open
> simultaneously. That suggests to me that either we are failing to
> expeditiously close connections and return them to the pool, or that we
> are - explicitly or implicitly - processing a bunch of messages in
> parallel.
>

I suspect we are leaking sessions. I have updated this bug to make sure we
focus on figuring out the root cause of this before jumping to conclusions:
https://bugs.launchpad.net/heat/+bug/1491185

-A


>
> > In an ideal world, we might only ever pull one task off that queue
> at a
> > time. Any time the task is sleeping, we would use for processing
> stuff
> > off the engine queue (which needs a quick response, since it is
> serving
> > the ReST API). The trouble is that you need a *huge* number of
> > heat-engines to handle stuff in parallel. In the reductio-ad-absurdum
> > case of a single engine only processing a single task at a time,
> we're
> > back to creating resources serially. So we probably want a higher
> number
> > than 1. (Phase 2 of convergence will make tasks much smaller, and may
> > even get us down to the point where we can pull only a single task
> at a
> > time.)
> >
> > However, the fewer engines you have, the more greenthreads we'll
> have to
> > allow to get some semblance of parallelism. To the extent that more
> > cores means more engine

Re: [openstack-dev] [trove] [heat] Multi region support

2015-09-01 Thread Angus Salkeld
On Wed, Sep 2, 2015 at 8:30 AM Lowery, Mathew  wrote:

> Thank you Zane for the clarifications!
>
> I misunderstood #2 and that led to the other misunderstandings.
>
> Further questions:
> * Are nested stacks aware of their nested-ness? In other words, given any
> nested stack (colocated with parent stack or not), can I trace it back to
> the parent stack? (On a possibly related note, I see that adopting a stack
>

Yes, there is a link (URL) to the parent stack in the links section of
stack show.
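
For illustration only - this is an assumed, abbreviated shape of a nested
stack's stack-show response, not copied from the Heat API reference, so the
exact field and rel names should be checked against the API docs - the idea
is that the links carry a pointer back to the parent:

    {
      "stack": {
        "stack_name": "region-one-child",
        "links": [
          {"rel": "self",
           "href": "http://heat:8004/v1/TENANT_ID/stacks/region-one-child/CHILD_ID"},
          {"rel": "parent",
           "href": "http://heat:8004/v1/TENANT_ID/stacks/global-parent/PARENT_ID"}
        ]
      }
    }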


> is an option to reassemble a new parent stack from its regional parts in
> the event that the old parent stack is lost.)
> * Has this design met the users' needs? In other words, are there any
> plans to make major modifications to this design?
>

AFAIK we have had zero feedback on the multi-region feature.
There are no more plans, but we would obviously love feedback and suggestions
on how to improve region support.

-Angus


>
> Thanks!
>
> On 9/1/15, 1:47 PM, "Zane Bitter"  wrote:
>
> >On 01/09/15 11:41, Lowery, Mathew wrote:
> >> This is a Trove question but including Heat as they seem to have solved
> >> this problem.
> >>
> >> Summary: Today, it seems that Trove is not capable of creating a cluster
> >> spanning multiple regions. Is that the case and, if so, are there any
> >> plans to work on that? Also, are we aware of any precedent solutions
> >> (e.g. remote stacks in Heat) or even partially completed spec/code in
> >>Trove?
> >>
> >> More details:
> >>
> >> I found this nice diagram
> >>
> >><
> https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for
> >>_Heat/The_Missing_Diagram> created
> >> for Heat. As far as I understand it,
> >
> >Clarifications below...
> >
> >> #1 is the absence of multi-region
> >> support (i.e. what we have today). #2 seems to be a 100% client-based
> >> solution. In other words, the Heat services never know about the other
> >> stacks.
> >
> >I guess you could say that.
> >
> >> In fact, there is nothing tying these stacks together at all.
> >
> >I wouldn't go that far. The regional stacks still appear as resources in
> >their parent stack, so they're tied together by whatever inputs and
> >outputs are connected up in that stack.
> >
> >> #3
> >> seems to show a "master" Heat server that understands "remote stacks"
> >> and simply converts those "remote stacks" into calls on regional Heats.
> >> I assume here the master stack record is stored by the master Heat.
> >> Because the "remote stacks" are full-fledged stacks, they can be managed
> >> by their regional Heats if availability of master or other regional
> >> Heats is lost.
> >
> >Yeah.
> >
> >> #4, the diagram doesn't seem to match the description
> >> (instead of one global Heat, it seems the diagram should show two
> >> regional Heats).
> >
> >It does (they're the two orange boxes).
> >
> >> In this one, a single arbitrary region becomes the
> >> owner of the stack and remote (low-level not stack) resources are
> >> created as needed. One problem is the manageability is lost if the Heat
> >> in the owning region is lost. Finally, #5. In #5, it's just #4 but with
> >> one and only one Heat.
> >>
> >> It seems like Heat solved this  >
> >> using #3 (Master Orchestrator)
> >
> >No, we implemented #2.
> >
> >> but where there isn't necessarily a
> >> separate master Heat. Remote stacks can be created by any regional
> >>stack.
> >
> >Yeah, that was the difference between #3 and #2 :)
> >
> >cheers,
> >Zane.
> >
> >> Trove questions:
> >>
> >>  1. Having sub-clusters (aka remote clusters aka nested clusters) seems
> >> to be useful (i.e. manageability isn't lost when a region is lost).
> >> But then again, does it make sense to perform a cluster operation on
> >> a sub-cluster?
> >>  2. You could forego sub-clusters and just create full-fledged remote
> >> standalone Trove instances.
> >>  3. If you don't create full-fledged remote Trove instances (but instead
> >> just call remote Nova), then you cannot do simple things like
> >> getting logs from a node without going through the owning region's
> >> Trove. This is an extra hop and a single point of failure.
> >>  4. Even with sub-clusters, the only record of them being related lives
> >> only in the "owning" region. Then again, some ID tying them all
> >> together could be passed to the remote regions.
> >>  5. Do we want to allow the piecing together of clusters (sort of like
> >> Heat's "adopt")?
> >>
> >> These are some questions floating around my head and I'm sure there are
> >> plenty more. Any thoughts on any of this?
> >>
> >> Thanks,
> >> Mat
> >>
> >>
> >>
> >>_
> >>_
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> 

Re: [openstack-dev] [Heat] convergence rally test results (so far)

2015-09-01 Thread Angus Salkeld
On Tue, Sep 1, 2015 at 10:45 PM Steven Hardy <sha...@redhat.com> wrote:

> On Fri, Aug 28, 2015 at 01:35:52AM +0000, Angus Salkeld wrote:
> >Hi
> >I have been running some rally tests against convergence and our existing
> >implementation to compare.
> >So far I have done the following:
> > 1. defined a template with a resource group
> >    https://github.com/asalkeld/convergence-rally/blob/master/templates/resource_group_test_resource.yaml.template
> > 2. the inner resource looks like this:
> >    https://github.com/asalkeld/convergence-rally/blob/master/templates/server_with_volume.yaml.template
> >    (it uses TestResource to attempt to be a reasonable simulation of a
> >    server+volume+floatingip)
> > 3. defined a rally job:
> >    https://github.com/asalkeld/convergence-rally/blob/master/increasing_resources.yaml
> >    that creates X resources then updates to X*2 then deletes.
> > 4. I then ran the above with/without convergence and with 2,4,8
> >    heat-engines
> >Here are the results compared:
> >https://docs.google.com/spreadsheets/d/12kRtPsmZBl_y78aw684PTBg3op1ftUYsAEqXBtT800A/edit?usp=sharing
> >Some notes on the results so far:
> >  * convergence with only 2 engines does suffer from RPC overload (it
> >    gets message timeouts on larger templates). I wonder if this is the
> >    problem in our convergence gate...
> >  * convergence does very well with a reasonable number of engines running.
> >  * delete is slightly slower on convergence
> >Still to test:
> >  * the above, but measure memory usage
> >  * many small templates (run concurrently)
>
> So, I tried running my many-small-templates here with convergence enabled:
>
> https://bugs.launchpad.net/heat/+bug/1489548
>
> In heat.conf I set:
>
> max_resources_per_stack = -1
> convergence_engine = true
>
> Most other settings (particularly RPC and DB settings) are defaults.
>
> Without convergence (but with max_resources_per_stack disabled) I see the
> time to create a ResourceGroup of 400 nested stacks (each containing one
> RandomString resource) is about 2.5 minutes (core i7 laptop w/SSD, 4 heat
> workers e.g the default for a 4 core machine).
>
> With convergence enabled, I see these errors from sqlalchemy:
>
> File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 652, in
> _checkout\nfairy = _ConnectionRecord.checkout(pool)\n', u'  File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 444, in
> checkout\nrec = pool._do_get()\n', u'  File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 980, in
> _do_get\n(self.size(), self.overflow(), self._timeout))\n',
> u'TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection
> timed out, timeout 30\n'].
>
> I assume this means we're loading the DB much more in the convergence case
> and overflowing the QueuePool?
>

Yeah, looks like it.


>
> This seems to happen when the RPC call from the ResourceGroup tries to
> create some of the 400 nested stacks.
>
> Interestingly after this error, the parent stack moves to CREATE_FAILED,
> but the engine remains (very) busy, to the point of being partially
> responsive, so it looks like maybe the cancel-on-fail isn't working (I'm
> assuming it isn't error_wait_time because the parent stack has been marked
> FAILED and I'm pretty sure it's been more than 240s).
>
> I'll dig a bit deeper when I get time, but for now you might like to try
> the stress test too.  It's a bit of a synthetic test, but it turns out to
> be a reasonable proxy for some performance issues we observed when creating
> large-ish TripleO deployments (which also create a large number of nested
> stacks concurrently).
>

Thanks a lot for testing, Steve! I'll make 2 bugs for what you have raised:
1. limit the number of resource actions in parallel (maybe based on the
number of cores)
2. the cancel-on-fail error

-Angus
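
For bug 1, one possible shape of such a limit - a sketch only, sized from
the core count as suggested above, and not necessarily what Heat ended up
implementing - is a shared semaphore around resource actions:

    # Sketch: cap concurrent resource actions per engine process.
    # Not Heat code; the 2x-cores sizing heuristic is an assumption.
    import multiprocessing

    import eventlet

    _MAX_ACTIONS = multiprocessing.cpu_count() * 2
    _action_slots = eventlet.semaphore.Semaphore(_MAX_ACTIONS)

    def run_resource_action(action, *args, **kwargs):
        """Run one resource action, blocking while too many are in flight."""
        with _action_slots:
            return action(*args, **kwargs)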


>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Deployment of Lock / Unlock Operation like MUTEX

2015-08-30 Thread Angus Salkeld
On Sat, Aug 29, 2015 at 10:33 AM Shinobu Kinjo ski...@redhat.com wrote:

 Hello,

 Here is a situation which I faced, that can be reproduced.

 I did heat resource-signal multiple times simultaneously to scale an
 instance.
 As a result, 3 resources were made in a scaling group having max_size=2.

 stack-list -n showed me 3 stacks in one parent.

 Heat itself seems not to actually check the resource data in the database.

 Is there any lock/unlock mechanism, like a mutex, in the heat implementation,
 such as locking the database when the auto-scaling feature is triggered?


Hi

In master there is such a check:
https://github.com/openstack/heat/blob/master/heat/scaling/cooldown.py#L33-L50

This makes sure that there are no concurrent scaling actions.
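
In outline, the check works by recording an in-progress flag and a
timestamp in the scaling group's metadata and refusing to start a new
action while either blocks it.  The following is a simplified sketch of
that idea only - the names and storage details differ from the real
heat/scaling/cooldown.py linked above:

    # Simplified sketch of a cooldown / in-progress guard; not Heat code.
    import datetime

    def scaling_allowed(metadata, cooldown_seconds):
        """Allow scaling only if nothing is in progress and cooldown expired."""
        if metadata.get('scaling_in_progress'):
            return False
        last = metadata.get('last_scaling_at')  # ISO 8601 string or None
        if last is None:
            return True
        started = datetime.datetime.strptime(last, '%Y-%m-%dT%H:%M:%SZ')
        elapsed = (datetime.datetime.utcnow() - started).total_seconds()
        return elapsed >= cooldown_seconds

    def mark_scaling_started(metadata):
        """Record a scaling action; Heat persists the group metadata
        atomically, which is what prevents the concurrent-signal race."""
        metadata['scaling_in_progress'] = True
        metadata['last_scaling_at'] = (
            datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ'))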

-Angus



 Or is there a plan to deploy such a mutex mechanism?
 What I'm more concerned about is that Ceilometer also has a feature for
 triggering auto-scaling.

 So I would like to make sure that there is a mechanism to keep data
 consistency across each component.

 Please let me know, if I've missed anything.

 Shinobu

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] convergence rally test results (so far)

2015-08-28 Thread Angus Salkeld
On Fri, Aug 28, 2015 at 6:35 PM Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi,

 great, it seems like migration to convergence could happen soon.

 How many times were you running each test case? Does the time change with
 the number of iterations? Are you planning to test parallel stack creation?


Given the test matrix convergence/non-convergence and 2,4,8 engines, I have
not done a lot of iterations - it's just time consuming. I might kill off
the 2-engine case to gain more iterations.
But from what I have observed the duration does not vary significantly.

I'll test smaller stacks with lots of iterations and with a high
concurrency. All this testing is currently on just one host so it is
somewhat limited. Hopefully this is at least giving a useful comparison
with these limitations.

-Angus



 Thanks.

 On Fri, Aug 28, 2015 at 10:17 AM, Sergey Kraynev skray...@mirantis.com
 wrote:

 Angus!

 it's Awesome!  Thank you for the investigation.
 I had a talk with the guys from the Sahara team and we decided to start
 testing convergence with Sahara after the L release.
 I suppose that Murano can also join this process.

 Also, AFAIK the Sahara team plans to create functional tests with the Heat
 engine. We may add it as a non-voting job for our gate.
 Probably it will be good to have two different types of this job: with
 convergence and with default Heat.

 On 28 August 2015 at 04:35, Angus Salkeld asalk...@mirantis.com wrote:

 Hi

 I have been running some rally tests against convergence and our
 existing implementation to compare.

 So far I have done the following:

1. defined a template with a resource group

 https://github.com/asalkeld/convergence-rally/blob/master/templates/resource_group_test_resource.yaml.template
2. the inner resource looks like this:

 https://github.com/asalkeld/convergence-rally/blob/master/templates/server_with_volume.yaml.template
  (it
uses TestResource to attempt to be a reasonable simulation of a
server+volume+floatingip)
3. defined a rally job:

 https://github.com/asalkeld/convergence-rally/blob/master/increasing_resources.yaml
  that
creates X resources then updates to X*2 then deletes.
4. I then ran the above with/without convergence and with 2,4,8
heat-engines

 Here are the results compared:

 https://docs.google.com/spreadsheets/d/12kRtPsmZBl_y78aw684PTBg3op1ftUYsAEqXBtT800A/edit?usp=sharing


 Results look pretty nice (especially for create) :)
 The strange thing for me: why do 8 engines show worse results on update
 than 4 engines? (Maybe a mistake in the graph?)





 Some notes on the results so far:

-  convergence with only 2 engines does suffer from RPC overload (it
   gets message timeouts on larger templates). I wonder if this is the
   problem in our convergence gate...

 Good spotting. If it's true, probably we should try to change the number of
 engines... (not sure how the gate hardware will react to it).


- convergence does very well with a reasonable number of engines
running.
- delete is slightly slower on convergence


 Also, about delete - maybe we can optimize it later, when the convergence
 path gets more feedback.


 Still to test:

- the above, but measure memory usage
- many small templates (run concurrently)
- we need to ask projects using Heat to try with convergence
(Murano, TripleO, Magnum, Sahara, etc..)

 Any feedback welcome (suggestions on what else to test).

 -Angus


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 Regards,
 Sergey.



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] convergence rally test results (so far)

2015-08-27 Thread Angus Salkeld
Hi

I have been running some rally tests against convergence and our existing
implementation to compare.

So far I have done the following:

   1. defined a template with a resource group
   
https://github.com/asalkeld/convergence-rally/blob/master/templates/resource_group_test_resource.yaml.template
   2. the inner resource looks like this:
   
https://github.com/asalkeld/convergence-rally/blob/master/templates/server_with_volume.yaml.template
(it
   uses TestResource to attempt to be a reasonable simulation of a
   server+volume+floatingip)
   3. defined a rally job:
   
https://github.com/asalkeld/convergence-rally/blob/master/increasing_resources.yaml
that
   creates X resources then updates to X*2 then deletes.
   4. I then ran the above with/without convergence and with 2,4,8
   heat-engines

Here are the results compared:
https://docs.google.com/spreadsheets/d/12kRtPsmZBl_y78aw684PTBg3op1ftUYsAEqXBtT800A/edit?usp=sharing

Some notes on the results so far:

   -  convergence with only 2 engines does suffer from RPC overload (it
   gets message timeouts on larger templates). I wonder if this is the problem
   in our convergence gate...
   - convergence does very well with a reasonable number of engines running.
   - delete is slightly slower on convergence


Still to test:

   - the above, but measure memory usage
   - many small templates (run concurrently)
   - we need to ask projects using Heat to try with convergence (Murano,
   TripleO, Magnum, Sahara, etc..)

Any feedback welcome (suggestions on what else to test).

-Angus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposing Kanagaraj Manickam and Ethan Lynn for heat-core

2015-07-30 Thread Angus Salkeld
On 31 Jul 2015 2:37 pm, Steve Baker sba...@redhat.com wrote:

 I believe the heat project would benefit from Kanagaraj Manickam and
Ethan Lynn having the ability to approve heat changes.

 Their reviews are valuable[1][2] and numerous[3], and both have been
submitting useful commits in a variety of areas in the heat tree.

 Heat cores, please express your approval with a +1 / -1.

+1 to both.

Angus

 [1] http://stackalytics.com/?user_id=kanagaraj-manickammetric=marks
 [2] http://stackalytics.com/?user_id=ethanlynnmetric=marks
 [3] http://stackalytics.com/report/contribution/heat-group/90

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps

2015-07-17 Thread Angus Salkeld
On Fri, Jul 17, 2015 at 8:52 PM, Julien Danjou jul...@danjou.info wrote:

 On Thu, Jul 16 2015, Angus Salkeld wrote:

 Hi Angus,

  I just saw this, and I am concerned this is going to kill Heat's gate
 (and
  user's templates).
 
  Will this be hidden within the client so that as long as we have aodh
  enabled in our gate's devstack
  this will just work?

 As Gordon said, don't worry, we'll do everything to not break Heat's
 gate, managing transition. We're setting up a plan and I think Mehdi
 already has patches doing magic so it's transparent and painless for
 everyone.


OK, thanks - phew.

-Angus



 --
 Julien Danjou
 # Free Software hacker
 # http://julien.danjou.info

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps

2015-07-15 Thread Angus Salkeld
On Tue, Jun 30, 2015 at 6:09 PM, Julien Danjou jul...@danjou.info wrote:

 On Mon, Jun 29 2015, Ildikó Váncsa wrote:

  I think removing options from the API requires version bump. So if we
 plan to
  do this, that should be introduced in v3 as opposed to v2, which should
 remain
  the same and maintained for two cycles (assuming that we still have this
 policy
  in OpenStack). It this is achievable by removing the old code and
 relying on
  the new repo that would be the best, if not then we need to figure out
 how to
  freeze the old code.

 This is not an API change as we're not modifying anything in the API.
 It's just the endpoint *potentially* split in two. But you can also
 merge them as they are 2 separate entities (/v2/alarms and /v2/*).
 So there's no need for a v3 here.


Hi Julien,

I just saw this, and I am concerned this is going to kill Heat's gate (and
user's templates).

Will this be hidden within the client so that as long as we have aodh
enabled in our gate's devstack
this will just work?

-Angus



 As the consensus goes toward removal, I'll work on a patch for that.

 --
 Julien Danjou
 /* Free Software hacker
http://julien.danjou.info */

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Heat Autoscaling and Instance Names

2015-07-09 Thread Angus Salkeld
On Thu, Jul 9, 2015 at 10:41 PM, Paul Seymour paul.seym...@ig.com wrote:

 Hello,

 Is it possible to create a heat template with autoscaling groups with
 named instances, so they have a prefix and are then suffixed with an
 incrementing id/number?


Only with ResourceGroups.



 I have looked at the various examples out there and nothing leaps out. I
 have tried it with:

 hosts:
 depends_on: external_router_interface
 type: OS::Heat::AutoScalingGroup
 properties:
   desired_capacity: {get_param: node_count}
   cooldown: 60
   min_size: 0
   max_size: 100
   resource:
 type: host.yaml
 properties:
   image: {get_param: server_image}
   flavor: {get_param: flavor}
   key_name: {get_param: ssh_key_name}
   external_network: {get_param: external_network}
   fixed_network: {get_resource: fixed_network}
   fixed_subnet: {get_resource: fixed_subnet}

 With this being called as host.yaml:-
 resources:
   indexed_group:
 type: OS::Heat::ResourceGroup
 properties:
   index_var: suffix
   resource_def:
 type: OS::Nova::Server
 properties:
   name: instanceprefix_suffix


This should work; try with the default index_var, %index%.
If it is still not working for you, please raise a bug
(https://bugs.launchpad.net/heat).
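
For example, a minimal sketch using the default index variable (the image and
flavor parameters here are just placeholders, not taken from your template):

  heat_template_version: 2014-10-16
  parameters:
    image: {type: string}
    flavor: {type: string}
  resources:
    indexed_group:
      type: OS::Heat::ResourceGroup
      properties:
        count: 3
        resource_def:
          type: OS::Nova::Server
          properties:
            # %index% is replaced per member: instanceprefix-0, -1, -2, ...
            name: instanceprefix-%index%
            image: {get_param: image}
            flavor: {get_param: flavor}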


   image: {get_param: image}
   flavor: {get_param: flavor}
   key_name: {get_param: key_name}
   networks:
 - port: {get_resource: port}
   admin_user: cloud-user
   user_data_format: RAW

 But this creates with the same instance name.


 Thanks
 Paul

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates

2015-06-29 Thread Angus Salkeld
On Tue, Jun 30, 2015 at 8:23 AM, Fox, Kevin M kevin@pnnl.gov wrote:

 Needing to fork templates to tweak things is a very common problem.

 Adding conditionals to Heat was discussed at the Summit. (
 https://etherpad.openstack.org/p/YVR-heat-liberty-template-format). I
 want to say, someone was going to prototype it using YAQL, but I don't
 remember who.


I was going to do that, but I would not expect that to be ready in a very
short time frame. It needs some investigation and agreement from others.
I'd suggest making your decision based on what we have now.

-Angus



 Would it be reasonable to keep if conditionals worked?

 Thanks,
 Kevin
 
 From: Hongbin Lu [hongbin...@huawei.com]
 Sent: Monday, June 29, 2015 3:01 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates

 Agree. The motivation for pulling templates out of the Magnum tree was the hope
 that these templates could be leveraged by a larger community and get more
 feedback. However, that is unlikely to be the case in practice, because
 different people have their own versions of templates addressing
 different use cases. It has proven hard to consolidate different
 templates even if they share a large amount of duplicated code
 (recall that we had to copy-and-paste the original template to add support
 for Ironic and CoreOS). So, +1 for stopping usage of heat-coe-templates.

 Best regards,
 Hongbin

 -Original Message-
 From: Tom Cammann [mailto:tom.camm...@hp.com]
 Sent: June-29-15 11:16 AM
 To: openstack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Magnum] Continuing with heat-coe-templates

 Hello team,

 I've been doing work in Magnum recently to align our templates with the
 upstream templates from larsks/heat-kubernetes[1]. I've also been porting
 these changes to the stackforge/heat-coe-templates[2] repo.

 I'm currently not convinced that maintaining a separate repo for Magnum
 templates (stackforge/heat-coe-templates) is beneficial for Magnum or the
 community.

 Firstly it is very difficult to draw a line on what should be allowed into
 the heat-coe-templates. We are currently taking out changes[3] that
 introduced useful autoscaling capabilities in the templates but that
 didn't fit the Magnum plan. If we are going to treat the heat-coe-templates
 in that way then this extra repo will not allow organic development of new
 and old container engine templates that are not tied into Magnum.
 Another recent change[4] in development is smart autoscaling of bays which
 introduces parameters that don't make a lot of sense outside of Magnum.

 There are also difficult interdependency problems between the templates
 and the Magnum project such as the parameter fields. If a required
 parameter is added into the template the Magnum code must be also updated
 in the same commit to avoid functional test failures. This can be avoided
 using Depends-On:
 #xx
 feature of gerrit, but it is an additional overhead and will require some
 CI setup.

 Additionally we would have to version the templates, which I assume would
 be necessary to allow for packaging. This brings with it is own problems.

 As far as I am aware there are no other people using the
 heat-coe-templates beyond the Magnum team; if we want independent growth of
 this repo it will need to be adopted by other people rather than Magnum
 committers.

 I don't see the heat templates as a dependency of Magnum, I see them as a
 truly fundamental part of Magnum which is going to be very difficult to cut
 out and make reusable without compromising Magnum's development process.

 I would propose to delete/deprecate the usage of heat-coe-templates and
 continue with the usage of the templates in the Magnum repo. How does the
 team feel about that?

 If we do continue with the large effort required to try and pull out the
 templates as a dependency then we will need increase the visibility of repo
 and greatly increase the reviews/commits on the repo. We also have a fairly
 significant backlog of work to align the heat-coe-templates with the
 templates in the Magnum tree.

 Thanks,
 Tom

 [1] https://github.com/larsks/heat-kubernetes
 [2] https://github.com/stackforge/heat-coe-templates
 [3] https://review.openstack.org/#/c/184687/
 [4] https://review.openstack.org/#/c/196505/

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [heat] Can't make it to the summit.

2015-05-20 Thread Angus Salkeld
On Wed, May 20, 2015 at 1:08 AM, Anant Patil anant.pa...@hp.com wrote:

 Hi,

 Due to some visa issues, my travel to Vancouver summit is canceled.


Sorry to hear it, we will try and fill you in as best as possible.

-Angus


 I will miss meeting the Heat team and the planned session on convergence
 phase 2. Kanagaraj M has some context on the phase 2 plans and he should
 be able to drive it. Hope you will have a good discussion.

 - Anant

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][heat] sqlalchemy-migrate tool to alembic

2015-05-14 Thread Angus Salkeld
On Fri, May 15, 2015 at 4:46 AM, Mike Bayer mba...@redhat.com wrote:



 On 5/14/15 11:58 AM, Doug Hellmann wrote:


 At one point we were exploring having both sqlalchemy-migrate and
 alembic run, one after the other, so that we only need to create new
 migrations with alembic and do not need to change any of the existing
 migrations. Was that idea dropped?


 to my knowledge the idea wasn't dropped.   If a project wants to implement
 that using the oslo.db system, that is fine, however from my POV I'd prefer
 to just port the SQLA-migrate files over and drop the migrate dependency
 altogether.  Whether or not a project does the run both step as an
 interim step doesn't affect that effort very much.


Hi Mike

Just a quick question: how would the alembic scripts know where to start
the migration from if the current installation had, up until that
point, been using migrate (I believe both alembic and migrate write the
current version to a small table; would you look for that?)?

-Angus





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Question about Heat Tempest tests

2015-05-14 Thread Angus Salkeld
On Fri, May 15, 2015 at 1:53 AM, Yaroslav Lobankov yloban...@mirantis.com
wrote:

 Hello everyone,

 I have a question about Heat Tempest tests. Is there any dsvm job that
 runs these tests? At first glance no dsvm job runs them.


We are using in tree functional tests now:
https://github.com/openstack/heat/tree/master/heat_integrationtests

And heat has a tempest check job too (for the API tests):
check-tempest-dsvm-heat
for example: https://review.openstack.org/#/c/182971/

-Angus

Thank you!

 Regards,
 Yaroslav Lobankov.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][heat] sqlalchemy-migrate tool to alembic

2015-05-14 Thread Angus Salkeld
On Fri, May 15, 2015 at 11:54 AM, Mike Bayer mba...@redhat.com wrote:



 On 5/14/15 7:12 PM, Angus Salkeld wrote:



 On Fri, May 15, 2015 at 4:46 AM, Mike Bayer mba...@redhat.com wrote:



 On 5/14/15 11:58 AM, Doug Hellmann wrote:


 At one point we were exploring having both sqlalchemy-migrate and
 alembic run, one after the other, so that we only need to create new
 migrations with alembic and do not need to change any of the existing
 migrations. Was that idea dropped?


  to my knowledge the idea wasn't dropped.   If a project wants to
 implement that using the oslo.db system, that is fine, however from my POV
 I'd prefer to just port the SQLA-migrate files over and drop the migrate
 dependency altogether.  Whether or not a project does the run both step
 as an interim step doesn't affect that effort very much.


  Hi Mike

  Just a quick question: How would the alembic scripts know where to start
 the migration from if the current installation had been up until that
 point been using migrate (I believe both alembic and migrate write to a
 small table what the current version is, would you look for that?)?

 Thinking about that issue, the most controllable and clean-break way to do
 it would be to add support to Alembic itself that augments its own handling
 of the alembic_version table to transfer data from an existing
 sqlalchemy_migrate table.  I can even see using traditional alembic
 hex-style version numbers in migration files which then also can refer to
 their previous numerically-based migrate file.

 It's not unreasonable that Alembic would support some standard upgrade
 path from Migrate, the only other migration tool SQLAlchemy has ever had,
 so I'd just add that as a feature.


Thanks. Would you suggest we hold off moving to alembic (in Heat) until you
have this ironed out? I just want to make sure we
don't do this prematurely.

-Angus






  -Angus





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-05-13 Thread Angus Salkeld
On Tue, May 12, 2015 at 8:22 AM, Steve Baker sba...@redhat.com wrote:

 On 12/05/15 09:57, Joe Gordon wrote:

 When learning about how a project works one of the first things I look
 for is a brief architecture description along with a diagram. For most
 OpenStack projects, all I can find is a bunch of random third party slides
 and diagrams.

 Most Individual OpenStack projects have either no architecture diagram or
 ascii art. Searching for 'OpenStack X architecture' where X is any of the
 OpenStack projects turns up pretty sad results. For example heat [0] an
 Keystone [1] have no diagram. Nova on the other hand does have a diagram,
 but its ascii art [2]. I don't think ascii art makes for great user facing
 documentation (for any kind of user).

 So how can we do better than ascii art architecture diagrams?

  How about ascii source diagrams?

 [0] http://docs.openstack.org/developer/heat/architecture.html
 [1] http://docs.openstack.org/developer/keystone/architecture.html
 [2] http://docs.openstack.org/developer/nova/devref/architecture.html


 These are all sphinx generated documents, so we could use something like
 blockdiag to generate all manner of diagrams
 https://pypi.python.org/pypi/sphinxcontrib-blockdiag
 http://blockdiag.com/en/


+1 maybe we can trial it in Heat specs?

-Angus




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] removing Angus Salkeld and Nick Barcet from ceilometer-core‏

2015-05-07 Thread Angus Salkeld
On Fri, May 8, 2015 at 3:56 AM, gordon chung g...@live.ca wrote:

 hi folks,

 as both have moved on to other endeavours, today we will be removing two
 founding contributors of Ceilometer from the core team. thanks to both of
 you for guiding the project in it's early days!


+1 from me; it's somewhat overdue. I appreciate having been core - thank
you all.

-Angus



 cheers,
 *gord*

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Heat-engine fails to start

2015-05-07 Thread Angus Salkeld
On Thu, May 7, 2015 at 5:47 PM, Murali B mbi...@gmail.com wrote:

 Hi

 I installed heat on juno version.

 When I start heat-engine it fails and I am seeing the below error

 cal/lib/python2.7/dist-packages/stevedore/extension.py:156
 2015-05-07 13:06:36.076 10670 DEBUG stevedore.extension [-] found
 extension EntryPoint.parse('routing =
 oslo.messaging.notify._impl_routing:RoutingDriver') _load_plugins
 /usr/local/lib/python2.7/dist-packages/stevedore/extension.py:156
 2015-05-07 13:06:36.077 10670 DEBUG stevedore.extension [-] found
 extension EntryPoint.parse('test =
 oslo.messaging.notify._impl_test:TestDriver') _load_plugins
 /usr/local/lib/python2.7/dist-packages/stevedore/extension.py:156
 2015-05-07 13:06:36.077 10670 DEBUG stevedore.extension [-] found
 extension EntryPoint.parse('messaging =
 oslo.messaging.notify._impl_messaging:MessagingDriver') _load_plugins
 /usr/local/lib/python2.7/dist-packages/stevedore/extension.py:156
 2015-05-07 13:06:36.077 10670 DEBUG stevedore.extension [-] found
 extension EntryPoint.parse('AWSTemplateFormatVersion.2010-09-09 =
 heat.engine.cfn.template:CfnTemplate') _load_plugins
 /usr/local/lib/python2.7/dist-packages/stevedore/extension.py:156
 2015-05-07 13:06:36.084 10670 CRITICAL heat.engine [-] Could not load
 AWSTemplateFormatVersion.2010-09-09: (sqlalchemy-migrate 0.9.6
 (/usr/local/lib/python2.7/dist-packages),
 Requirement.parse('sqlalchemy-migrate==0.9.1'))


Hi

Heat template formats are stevedore plugins. You have an
AWSTemplateFormatVersion.2010-09-09 plugin that was installed (with pip install)
and that needs 'sqlalchemy-migrate==0.9.1', but your system has
'sqlalchemy-migrate 0.9.6' (so it refuses to run).

I'd suggest reinstalling heat either with a proper package manager or pip
install.

-Angus




 your help is appreciated



 Thanks
 -Murali





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][heat] Versioned objects backporting

2015-05-04 Thread Angus Salkeld
On Thu, Apr 30, 2015 at 9:25 PM, Jastrzebski, Michal 
michal.jastrzeb...@intel.com wrote:

 Hello,

 After discussions, we've spotted possible gap in versioned objects:
 backporting of too-new versions in RPC.
 Nova does that by conductor, but not every service has something like
 that. I want to propose another approach:

 1. Milestone pinning - we need to make single reference to versions of
 various objects - for example heat in version 15.1 will mean stack in
 version 1.1 and resource in version 1.5.
 2. Compatibility mode - this will add flag to service
 --compatibility=15.1, that will mean that every outgoing RPC communication
 will be backported before sending to object versions bound to this
 milestone.

 With this 2 things landed we'll achieve rolling upgrade like that:
 1. We have N nodes in version V
 2. We take down 1 node and upgrade code to version V+1
 3. Run code in ver V+1 with --compatibility=V
 4. Repeat 2 and 3 until every node will have version V+1
 5. Restart each service without compatibility flag

 This approach has one big disadvantage - 2 restarts are required - but it
 should solve the problem of backporting too-new versions.
 Any ideas? Alternatives?


AFAIK if nova gets a message that is too new, it just forwards it on (and a
newer server will handle it).

With that this *should* work, shouldn't it?
1. rolling upgrade of heat-engine
2. db sync
3. rolling upgrade of heat-api

-Angus



 Regards,
 Michał

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][heat] Versioned objects backporting

2015-05-04 Thread Angus Salkeld
On Mon, May 4, 2015 at 6:33 PM, Jastrzebski, Michal 
michal.jastrzeb...@intel.com wrote:

 On 5/4/2015 at 8:21 AM, Angus Salkeld wrote: On Thu, Apr 30, 2015 at
 9:25 PM, Jastrzebski, Michal
  michal.jastrzeb...@intel.com mailto:michal.jastrzeb...@intel.com
 wrote:
 
  Hello,
 
  After discussions, we've spotted possible gap in versioned objects:
  backporting of too-new versions in RPC.
  Nova does that by conductor, but not every service has something
  like that. I want to propose another approach:
 
  1. Milestone pinning - we need to make single reference to versions
  of various objects - for example heat in version 15.1 will mean
  stack in version 1.1 and resource in version 1.5.
  2. Compatibility mode - this will add flag to service
  --compatibility=15.1, that will mean that every outgoing RPC
  communication will be backported before sending to object versions
  bound to this milestone.
 
  With this 2 things landed we'll achieve rolling upgrade like that:
  1. We have N nodes in version V
  2. We take down 1 node and upgrade code to version V+1
  3. Run code in ver V+1 with --compatibility=V
  4. Repeat 2 and 3 until every node will have version V+1
  5. Restart each service without compatibility flag
 
  This approach has one big disadvantage - 2 restarts required, but
  should solve problem of backporting of too-new versions.
  Any ideas? Alternatives?
 
 
  AFAIK if nova gets a message that is too new, it just forwards it on
  (and a newer server will handle it).
 
  With that this *should* work, shouldn't it?
  1. rolling upgrade of heat-engine

 That will be the hard part. When we have only one engine of a given
 version, we lose HA. Also, since we never know where a given task lands, we
 might end up with one task bouncing from old version to old version, making
 the call indefinitely long. Of course, with each upgraded engine we lessen the
 chance of that happening, but I think we should aim for the lowest possible
 downtime. That being said, it might be a good idea to solve this problem
 in a not-too-clean but quick way.


I don't think losing HA in the time it takes some heat-engines to stop,
install new software and restart the heat-engines is a big deal (IMHO).

-Angus



  2. db sync
  3. rolling upgrade of heat-api
 
  -Angus
 
 
  Regards,
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Stack/Resource updated_at conventions

2015-04-28 Thread Angus Salkeld
On Tue, Apr 28, 2015 at 1:46 AM, Steven Hardy sha...@redhat.com wrote:

 Hi all,

 I've been looking into $subject recently, I raised this bug:

 https://bugs.launchpad.net/heat/+bug/1448155

 Basically we've got some historically weird and potentially inconsistent
 behavior around updated_at, and I'm trying to figure out the best way to
 proceed.

 Now, we selectively update updated_at only on the transition to
 UPDATE_COMPLETE, where we store the time that we moved into
 UPDATE_IN_PROGRESS.  During the update, there's no way to derive the
 time we started the update.

 Also, we inconsistently store the time associated with the transition into
 IN_PROGRESS for suspend, resume, snapshot, restore and check actions (even
 though many of these don't modify the stack definition).

 The reason I need this is the hook/breakpoint API - the only way to detect
 if you've hit a breakpoint is via events, and to detect you've hit a hook
 during multiple sequential updates (some of which may fail or time out with
 hooks pending), you need to filter the events to only consider those with a
 timestamp newer than the transition of the stack to the update IN_PROGRESS.

 AFAICT there's two options:

 1. Update the stack.Stack so we store "now" at every transition (e.g. in
 state_set)

 2. Stop trying to explicitly control updated_at, and just allow the oslo
 TimestampMixin to do it's job and update updated_at every time the DB model
 is updated.

 What are peoples thoughts?  Either will solve my problem, but I'm leaning
 towards (2) as the cleanest and most technically correct solution.


Just beware:
https://github.com/openstack/heat/blob/master/heat/engine/resources/stack_resource.py#L328-L346
and
https://review.openstack.org/#/c/173045/

This is currently our only way of knowing if something has changed between two
updates.

-A


 Similar problems exist for resource.Resource AFAICT.

 Steve

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Question for the TC candidates

2015-04-23 Thread Angus Salkeld
On Fri, Apr 24, 2015 at 2:14 AM, Chris Dent chd...@redhat.com wrote:


 This might be a bit presumptuous, but why not give it a try...

 This cycle's TC elections didn't come with a set of prepackaged
 questions and though the self-nomination messages have included some
 very interesting stuff I think it would be useful to get answers
 from the candidates on at least one topical but open-ended
 question. Maybe other people have additional questions they think
 are important but this one is the one that matters to me and also
 captures the role that I wish the TC filled more strongly. Here's
 the preamble:

 There are lots of different ways to categorize the various
 stakeholders in the OpenStack community, no list is complete. For
 the sake of this question the people I'm concerned with are the
 developers, end-users and operators of OpenStack: the individuals
 who are actively involved with it on a daily basis. I'm intentionally
 leaving out things like the downstream.

 There are many different ways to define quality. For the sake of
 this question feel free to use whatever definition you like but take
 it as given that quality needs to be improved.

 Here's the question:

 What can and should the TC at large, and you specifically, do to ensure
 quality improves for the developers, end-users and operators of
 OpenStack as a full system, both as a project being developed and a
 product being used?


I think this can be broader still - How can the TC herd cats :-O

Jokes aside, OpenStack is an open source project and it's not easy for
TC members or PTLs to make developers do their bidding.

I'd like to think we at least need a better feedback loop from the user
survey (or consumers of OpenStack in general) into what development actually
happens. At the moment we have user feedback, but it doesn't always result in
developers working on exactly those things. I think we need a number of tools
for PTLs and the TC to be able to set priorities for particular themes
OpenStack wide.

1. I think the TC and PTLs need a place to say what the priorities are
(that is visible to everyone).
2. Then follow up with assigning blueprints and bug priorities accordingly
3. Be open to saying no to work that is not within these themes if it is
creating
a distraction.
4. Recognition of work in these themes, perhaps on stackalytics (or
elsewhere).

With some version of the above, we can hopefully get better at turning
user requests into solutions. (or better quality).

Regards
Angus




 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][novaclient] heat gate is broken because of new novaclient

2015-04-22 Thread Angus Salkeld
On Wed, Apr 22, 2015 at 11:49 PM, Sean Dague s...@dague.net wrote:

 On 04/22/2015 07:07 AM, Sean Dague wrote:
  On 04/22/2015 04:09 AM, Angus Salkeld wrote:
  Hi
 
  Can we please have a new release of novaclient (after the below fix)?
 
  Heat's unit tests pass fine with:
  python-novaclient (2.23.0)
 
  but python-novaclient 2.24.0 introduces this bug:
  https://bugs.launchpad.net/python-novaclient/+bug/1437244
 
  We still need this patch in: https://review.openstack.org/176228
 
  We should also update requirements to exclude 2.24.0
 
  I've got an alternate fix here -
 https://review.openstack.org/#/c/176252/
 
  I was -1 for a long time on the complex repr stuff in the introducing
  patch, and I think that's just going to get us into other problems down
  the road. The repr fall back for Resource is totally fine, and we should
  just use that.

 python-novaclient 2.24.1 has been released with
 https://review.openstack.org/#/c/176252/ in place. Hopefully that fixes
 things for Heat.


Thanks Sean!

Shouldn't we also exclude 2.24.0 from requirements?

-Angus



 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][novaclient] heat gate is broken because of new novaclient

2015-04-22 Thread Angus Salkeld
Hi

Can we please have a new release of novaclient (after the below fix)?

Heat's unit tests pass fine with:
python-novaclient (2.23.0)

but python-novaclient 2.24.0 introduces this bug:
https://bugs.launchpad.net/python-novaclient/+bug/1437244

We still need this patch in: https://review.openstack.org/176228

We should also update requirements to exclude 2.24.0

Thanks
Angus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC candidacy

2015-04-21 Thread Angus Salkeld
I'd like to announce my candidacy for the Technical Committee elections.


This last (Kilo) cycle I have been the PTL for Heat, but will not be for
Liberty.

Because of this, if I get elected to the TC, I can dedicate a significant
amount of time to TC duties.

I have an interest in projects that exist in the upper layers in OpenStack
(Heat, Murano, Mistral, Ceilometer/Monasca, Zaqar), so this gives me a
slightly different viewpoint than the existing TC members (I hope this will
be complementary).


Topics that I am interested in pursuing:


1. How can the TC/PTL's better influence what developers work on?

We currently have working groups and project tags, but are there other ways
that we can encourage developers to work on global issues that users and
operators are asking for?

So if the TC came up with say 2 global themes every cycle, let's say
ease of upgrades and scale, what can we use as a carrot to encourage a
developer to work on these topics rather than something else. Clearly this
has limits, but if the user survey is screaming we need X, it would be
great to have a tool to help focus developers onto this.

One idea might be to use the stackalytics main page to show the work done on
TC-selected blueprints/bugs. We could tag particular blueprints/bugs and
highlight the developers that have worked on these (i.e. can we use
stackalytics as a tool to motivate developers to work on
TC-supported themes?).

A note on this (some motivation for taking advantage of stackalytics):
It looks like this commit changed the default view on stackalytics from
commits to reviews
https://github.com/stackforge/stackalytics/commit/eb3bebbaa66a718e284d3095dfa8f496bb1d298c
(14 Apr 2014)
http://stackalytics.com/?release=allmetric=commits
http://stackalytics.com/?release=all

See the sudden plateau of commits and increase in reviews after that time
on the top graph (I am sure there are other factors at play too).


2. How can we better support new foundational projects that we expect other
projects to consume.

Under the new big tent model, we are welcoming more projects into the
openstack/ code namespace while simultaneously shifting the determination
of production-worthiness to distributors and operators. This poses a
problem for projects like Zaqar, Mistral, and even Heat, as these projects
may be consumed/used by other upper-layer projects. Consuming projects end
up with lots of conditional code as they must take into account that the
deployed cloud may not have these dependent projects installed.

How can we make the end user (and inter-project) experience richer and
friendlier with a consistent method of negotiating and discovering the
underlying features a deployed cloud offers?

Could tenant-based services/endpoints be helpful in allowing end users to
install optional projects?


3. Getting notification messages to the end user.

Based on this:
http://lists.openstack.org/pipermail/openstack-dev/2015-April/060748.html

Other clouds provide better messaging feedback on what the infrastructure
is doing. We are really missing this and I'd like to help to get us to the
point where the user (that doesn't have access to the logs) can understand
what has happened to their resources.


I'm interested in either leading or participating in working groups from
within the TC to address these issues.


Thanks
Angus Salkeld
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Call for adoption (or exclusion?)

2015-04-20 Thread Angus Salkeld
On Tue, Apr 21, 2015 at 8:38 AM, Fox, Kevin M kevin@pnnl.gov wrote:

  As an Op, a few things that come to mind in that category are:
  * RDO packaging (stated earlier). If it's not easy to install, it's not
 going to be deployed as much. I haven't installed it yet, because I haven't
 had time to do much other than yum install it...
  * Horizon UI
  * Heat Resources. (Some basic stuff like create/delete queue to go along
 with the stack. also link #1 below)


Here you go:
https://github.com/openstack/heat/tree/master/contrib/heat_zaqar
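
A queue in a template then looks roughly like this (a minimal sketch based on
the contrib OS::Zaqar::Queue resource; the name and metadata values are just
examples):

  heat_template_version: 2014-10-16
  resources:
    my_queue:
      type: OS::Zaqar::Queue
      properties:
        name: my-app-events
        metadata:
          description: events for my application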



 Horizon has a discovery aspect to it. If users don't know a service is
 available, it's hard for them to use it. Even with the most simple UI of
 Create/Delete/List queues, discovery is handled. The user knows it exists,
 and can look for documentation on how to make it work, can ask an admin how
 to use it, or start poking at the cli for advanced features.

 Thanks,
 Kevin

 1. https://blueprints.launchpad.net/heat/+spec/software-config-zaqar
  --
 *From:* Vipul Sabhaya [vip...@gmail.com]
 *Sent:* Monday, April 20, 2015 1:39 PM
 *To:* OpenStack Development Mailing List (not for usage questions)

 *Subject:* Re: [openstack-dev] [Zaqar] Call for adoption (or exclusion?)



 On Mon, Apr 20, 2015 at 12:07 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 Another parallel is Manila vs Swift. Both provide something like a
 share for users to store files.

 The former is a multitenant api to provision non multitenant file shares.
 The latter is a multitenant api to provide file sharing.

 Cue is a multitenant api to provision non multitenant queues.
 Zaqar is an api for a multitenant queueing system.

 They are complimentary services.


  Agreed, it’s not an either/or, there is room for both.  While Cue could
 provision Zaqar, it doesn’t make sense, since it is already multi-tenant.
 As has been said, Cue’s goal is to bring non-multi-tenant message brokers
 to the cloud.

  On the question of adoption, what confuses me is why the measurement of
 success of a project is whether other OpenStack services are integrating or
 not.  Zaqar exposes an API that seems best fit for application workloads
 running on an OpenStack cloud.  The question should be raised to operators
 as to what’s preventing them from running Zaqar in their public cloud,
 distro, or whatever.

  Looking at other services that we consider to be successful, such as
 Trove, we did not attempt to integrate with other OpenStack projects.
 Rather, we solved the concerns that operators had.



 Thanks,
 Kevin
 
 From: Ryan Brown [rybr...@redhat.com]
 Sent: Monday, April 20, 2015 11:38 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Zaqar] Call for adoption (or exclusion?)

 On 04/20/2015 02:22 PM, Michael Krotscheck wrote:
  What's the difference between openstack/zaqar and stackforge/cue?
  Looking at the projects, it seems like zaqar is a ground-up
  implementation of a queueing system, while cue is a provisioning api for
  queuing systems that could include zaqar, but could also include rabbit,
  zmq, etc...
 
  If my understanding of the projects is correct, the latter is far more
  versatile, and more in line with similar openstack approaches like
  trove. Is there a use case nuance I'm not aware of that warrants
  duplicating efforts? Because if not, one of the two should be retired
  and development focused on the other.
 
  Note: I do not have a horse in this race. I just feel it's strange that
  we're building a thing that can be provisioned by the other thing.
 

 Well, with Trove you can provision databases, but the MagnetoDB project
 still provides functionality that trove won't.


 The Trove : MagnetoDB and Cue : Zaqar comparison fits well.

 Trove provisions one instance of X (some database) per tenant, where
 MagnetoDB is one instance (collection of hosts to do database things)
 that serves many tenants.

 Cue's goal is I have a not-very-multitenant message bus (rabbit, or
 whatever) and makes that multitenant by provisioning one per tenant,
 while Zaqar has a single install (of as many machines as needed) to
 support messaging for all cloud tenants. This enables great stuff like
 cross-tenant messaging, better physical resource utilization in
 sparse-tenant cases, etc.

 As someone who wants to adopt Zaqar, I'd really like to see it continue
 as a project because it provides things other message broker approaches
 don't.

 --
 Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:

Re: [openstack-dev] [heat] is cloudwatch really deprecated?

2015-04-20 Thread Angus Salkeld
On Tue, Apr 21, 2015 at 12:33 PM, x Lyn xuanlangj...@gmail.com wrote:

 Since cloudwatch is deprecated, is there any plan to remove it from the
 code? Or do some functions still depend on this service?


No plans as yet, but all you have to do is not run the binary (heat-api-cw).



 2015-04-18 2:29 GMT+08:00 Zane Bitter zbit...@redhat.com:

 On 17/04/15 13:54, Matt Fischer wrote:

 On Fri, Apr 17, 2015 at 11:03 AM, Zane Bitter zbit...@redhat.com
 mailto:zbit...@redhat.com wrote:

 On 17/04/15 12:46, Matt Fischer wrote:

 The wiki for Using Cloudwatch states:

 This feature will be deprecated or removed during the Havana
 cycle as
 we move to using Ceilometer as a metric/alarm service instead.
 [1]

 However it seems that cloudwatch is still being developed.


 It doesn't seem that way to me, and without at least some kind of
 hint I'm not in a position to speculate on why it might seem that
 way to you.

 So is it deprecated or not?


 Yes, it's very deprecated.

 In fact, we should go ahead and disable it in the default config.

 - ZB


 I was just looking at the dates in the commit log for the cloudwatch
 folder and seeing things from 2015. If it's truly deprecated, that's
 great, I'll remove it from my environment.


 OK, that's what I looked at too, and there were a lot of recent changes
 but they all appeared to be global cleanups that went across the whole Heat
 codebase. The last thing that looked like active development was in July
 2013. You definitely won't regret removing it ;)


So before we get too excited, ceilometer AFAIK does not have a native guest
client that doesn't use a token.
ATM heat-api-cw bounces metrics to ceilometer that were associated with
ceilometer alarms.

-Angus


 cheers,
 Zane.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] autoscaling and load balancers

2015-04-08 Thread Angus Salkeld
On Thu, Apr 9, 2015 at 8:03 AM, Miguel Grinberg miguel.s.grinb...@gmail.com
 wrote:

 Zane, replies inline.

 On Wed, Apr 8, 2015 at 3:46 PM, Zane Bitter zbit...@redhat.com wrote:

 On 07/04/15 22:02, Miguel Grinberg wrote:

 Hi,

 The OS::Heat::AutoScalingGroup resource is somewhat limited at this
 time, because when a scaling event occurs it does not notify dependent
 resources, such as a load balancer, that the pool of instances has
 changed.


 As Thomas mentioned, the 'approved' way to solve this is to make your
 scaled unit a stack, and include a Neutron PoolMember resource in it.


 LBAAS is an optional, now even external component, not part of the Neutron
 API. Many installations don't have it. Allowing the use of custom load
 balancers is a desirable option, in my opinion, more so while LBAAS is not
 core neutron functionality.




  The AWS::AutoScaling::AutoScalingGroup resource, on the other side, has
 a LoadBalancerNames property that takes a list of
 AWS::ElasticLoadBalancing::LoadBalancer resources that get updated
 anytime the size of the ASG changes.


 Which is an appalling hack.

 Yes. This is hacky, but it seems it models the AWS load balancing APIs,
 so there isn't much that can be done here, right?


 (If it called the Neutron LBaaS API, like the equivalent in
 CloudFormation does with ELB, it would be OK. But in reality, as you know,
 it's a hack that makes calls directly to another resource plugin within
 Heat.)

  I'm trying to implement this notification mechanism for HOT templates,
 but there are a few aspects that I hope to do better.

 1. A HOT template can have get_attr function calls that invoke
 attributes of the ASG. None of these update when the ASG resizes at this
 time; a scaling event does a partial update that only affects the ASG
 resource. I would like to address this.


 In the medium-term I think this is something that I believe Convergence
 will be able to solve for us. I'm not sure that it's worth putting in a
 short-term work-around for.


 Here is where we disagree. In my opinion this is broken functionality.
 After a scaling event there are resources that go stale because they are
 never told that the ASG resized. This to me is clearly a bug that deserves
 fixing, even if in the future a better/nicer fix can be crafted.


So the problem is that the result of get_attr is dynamic, and we do not support
triggering stack updates on changes to those results.

As Zane suggested, you should think of autoscaling as being in a different
service.

A possible solution:
You have a top-level template that has a StackUpdatePolicy (a new thing),
and it gets triggered by a Ceilometer Alarm based
on the following notification:
https://github.com/openstack/heat/blob/master/heat/engine/resources/aws/autoscaling/autoscaling_group.py#L338-L344

This then runs an update to refresh the stack.
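
(For reference, the "scaled unit as a nested stack" pattern mentioned earlier
in this thread would look roughly like the sketch below; the file name,
parameters and port are illustrative, not taken from the thread.)

  # top-level template (parameters image/flavor/pool_id omitted for brevity)
  resources:
    asg:
      type: OS::Heat::AutoScalingGroup
      properties:
        min_size: 1
        max_size: 5
        resource:
          type: lb_member.yaml
          properties:
            image: {get_param: image}
            flavor: {get_param: flavor}
            pool_id: {get_param: pool_id}

  # lb_member.yaml (the scaled unit)
  parameters:
    image: {type: string}
    flavor: {type: string}
    pool_id: {type: string}
  resources:
    server:
      type: OS::Nova::Server
      properties:
        image: {get_param: image}
        flavor: {get_param: flavor}
    member:
      type: OS::Neutron::PoolMember
      properties:
        pool_id: {get_param: pool_id}
        address: {get_attr: [server, first_address]}
        protocol_port: 80

Each scaled member then adds/removes itself from the load balancer pool as the
group grows and shrinks, without the group having to notify anything.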

-Angus







  2. The AWS solution relies on the well known LoadBalancer resource, but
 often load balancers are just regular instances that get loaded with a
 load balancer such as haproxy in a custom way. I'd like custom load
 balancers to also update when the ASG resizes.


 TBH the correct answer for load balancers specifically is use the Neutron
 LBaaS API, end of story.


 This does not help me, as I don't have LBAAS. But as a said above, even if
 I had it, I may want to use my own load balancer, why not let me use my own
 if that is what I need for my project? Or what if I had another resource
 type that is not a load balancer, maybe a custom resource from a plugin
 that wants to be notified when the ASG resizes? If this can be done for
 regular stack updates, my opinion is that it should also work for these
 special signal-triggered updates to the ASG.


 But you're right that there are many uses for a more generic notification
 mechanism. (For example, in OpenShift we need to notify the controller when
 we add or remove nodes.) The design goal for ASG was always that we would
 have an arbitrary scaled unit (defined by a template) and an arbitrary
 non-scaled unit that could receive notifications about changes to the
 scaling group. So far we have delivered on only the first part of that
 promise.

 My vision for the second part has always been that we'd use hooks, the
 initial implementation of which Tomas has landed in Kilo. We'll need to
 implement some more hook types to do it - post-create, post-update and
 pre-delete at a minimum. We also need some way of notifying the user
 asynchronously about when the hooks are triggered, so that they can take
 whatever action (e.g. add to load balancer) before calling the API to clear
 the hook. (At the moment the only way to find out when your hook should run
 is by polling the Heat API.)


 I'm not really sure I understand how this would work. If I have a resource
 that sets one of its properties to { get_attr: [my_asg, size] }, then on a
 stack-update I don't need a hook to update my resource, it automatically
 updates. On an alarm triggered 

Re: [openstack-dev] [all] how to send messages (and events) to our users

2015-04-08 Thread Angus Salkeld
On Thu, Apr 9, 2015 at 2:15 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Angus Salkeld's message of 2015-04-06 19:55:37 -0700:
  Hi all
 
  For quite some time we (Heat team) have wanted to be able to send
 messages
  to our
  users (by user I do not mean the Operator, but the User that is
 interacting
  with the client).
 
  What do I mean by user messages, and how do they differ from our
 current
  log messages
  and notifications?
  - Our current logs are for the operator and have information that the
 user
  should not have
(ip addresses, hostnames, configuration options, other tenant info
 etc..)
  - Our notifications (that Ceilometer uses) *could* be used, but I am not
  sure if it quite fits.
(they seem a bit heavy weight for a log message and aimed at higher
 level
  events)
 
  These messages could be (based on Heat's use case):
 
  - Specific user oriented log messages (distinct from our normal operator
  logs)

 These currently go in the Heat events API, yes?


Well I wanted a replacement for our events table that could also take
smaller messages
and fulfil some other use cases (be a stream, that the user doesn't have to
poll).

We are sending RPC notifications at the moment:
stack.action.begin/end/error
autoscaling.begin/end/error

We need to add:
per resource notifications (to replace the event table)

Add some useful log-like messages:
x property is deprecated, please update your template to use Y

There are also a bunch of other log messages that should really be going to
the user, but
just end up in the Operator's log file.


  - Deprecation messages (if they are using old resource
 properties/template
  features)

 I think this could fit with the bits above.

  - Progress and resource state changes (an application doesn't want to
 poll
  an api for a state change)

 These also go in the current Heat events.

  - Automated actions (autoscaling events, time based actions)

 As do these?

  - Potentially integrated server logs (from in guest agents)
 
  I wanted to raise this to [all] as it would be great to have a general
  solution that
  all projects can make use of.
 
  What do we have now:
  - The user can not get any kind of log message from services. The closest
  thing
ATM is the notifications in Ceilometer, but I have the feeling that
 these
  have a different aim.
  - nova console log
  - Heat has a DB event table for users (we have long wanted to get rid
 of
  this)

 So if we forget the DB part of it, the API is also lacking things like
 pagination and search that one would want in an event/logging API.


I'd almost rather spend the time getting an OpenStack-wide solution and then
switch our client (and API) to use that.



 
  What do other clouds provide:
  - https://devcenter.heroku.com/articles/logging
  - https://cloud.google.com/logging/docs/
  - https://aws.amazon.com/blogs/aws/cloudwatch-log-service/
  - http://aws.amazon.com/cloudtrail/
  (other examples...)
 
  What are some options we could investigate:
  1. remote syslog
  The user provides a rsyslog server IP/port and we send their messages
  to that.
  [pros] simple, and the user could also send their server's log
 messages
  to the same
rsyslog - great visibility into what is going on.
 
There are great tools like loggly/logstash/papertrailapp
 that
  source logs from remote syslog
It leaves the user in control of what tools they get to
 use.
 
  [cons] Would we become a spam agent (just sending traffic to an
  IP/Port) - I guess that's how remote syslog
 works. I am not sure if this is an issue or not?
 
This might be a lesser solution for the use case of an
  application doesn't want to poll an api for a state change
 
I am not sure how we would integrate this with horizon.
 

 I think this one puts too much burden on the user to setup a good
 receiver.

  2. Zaqar
  We send the messages to a queue in Zaqar.
  [pros] multi tenant OpenStack project for messaging!
 
  [cons] I don't think Zaqar is installed in most installations (tho'
  please correct me here if this
 is wrong). I know Mirantis does not currently support
 Zaqar,
  so that would be a problem for me.
 
There is not the level of external tooling like in option
 1
  (logstash and friends)
 

 I agree with your con, and would also add that after the long
 discussions we had in the past we had some concerns about scaling.

  3. Other options:
 Please chip in with suggestions/links!
 

 There's this:

 https://wiki.openstack.org/wiki/Cue

 I think that could be a bit like 1, but provide the user with an easy
 target for the messages.

 I also want to point out that what I'd actually rather see is that all
 of the services provide functionality like this. Users would be served
 by having an event stream from Nova telling them when their instances
 are active, deleted, stopped, etc.

Re: [openstack-dev] [heat][ceilometer]: scale up/ down based on number of instances in a group

2015-04-06 Thread Angus Salkeld
On Fri, Apr 3, 2015 at 8:51 PM, Daniel Comnea comnea.d...@gmail.com wrote:

 Hi all,

 Does anyone know if the above use case has made it into the convergence
 project and in which release was/ is going to be merged?


Hi

Phase one of convergence should be merged in early L (we have some of the
patches merged now).
Phase two is to separate the checking of actual state into a new RPC
service within Heat.
Then you *could* run that checker periodically (or receive RPC
notifications) to learn of changes
to the stack's state during the lifetime of the stack. That *might* get
done in L - we will just have to see
how things go.

-Angus


 Thanks,
 Dani

 On Tue, Oct 28, 2014 at 5:40 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

 Thanks all for reply.

 I have spoken with Qiming and @Shardy (IRC nickname) and they confirmed
 this is not possible as of today. Someone else - sorry, I forgot his nickname
 on IRC - suggested writing a Ceilometer query to count the number of
 instances, but what @ZhiQiang said is true and this is what we have seen via
 the instance sample

 *@Clint - *that is the case indeed

 *@ZhiQiang* - what do you mean by *count of resource should be queried
 from specific service's API*? Is it related to Ceilometer's event types
 configuration?

 *@Mike - *my use case is very simple: I have a group of instances and in
 case the # of instances reaches the minimum number I set, I would like a new
 instance to be spun up - think of it like a cluster where I want to maintain a
 minimum number of members

 With regards to the proposal you made -
 https://review.openstack.org/#/c/127884/ that works but only in a
 specific use case hence is not generic because the assumption is that my
 instances are hooked behind a LBaaS which is not always the case.

 Looking forward to see the 'convergence' in action.


 Cheers,
 Dani

 On Tue, Oct 28, 2014 at 3:06 AM, Mike Spreitzer mspre...@us.ibm.com
 wrote:

 Daniel Comnea comnea.d...@gmail.com wrote on 10/27/2014 07:16:32 AM:

  Yes i did but if you look at this example
 
 
 https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
 

  the flow is simple:

  CPU alarm in Ceilometer triggers the type: OS::Heat::ScalingPolicy
  which then triggers the type: OS::Heat::AutoScalingGroup

 Actually the ScalingPolicy does not trigger the ASG.  BTW,
 ScalingPolicy is mis-named; it is not a full policy, it is only an action
 (the condition part is missing --- as you noted, that is in the Ceilometer
 alarm).  The so-called ScalingPolicy does the action itself when
 triggered.  But it respects your configured min and max size.
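
For reference, a minimal HOT sketch of the wiring described above (the alarm fires the policy, and the policy adjusts the group within min/max). It loosely follows the autoscaling.yaml example linked earlier; the image and flavor values are placeholders:

  heat_template_version: 2014-10-16

  resources:
    asg:
      type: OS::Heat::AutoScalingGroup
      properties:
        min_size: 2
        max_size: 5
        resource:
          type: OS::Nova::Server
          properties:
            image: cirros      # placeholder image
            flavor: m1.small   # placeholder flavor

    scaleup_policy:
      type: OS::Heat::ScalingPolicy
      properties:
        adjustment_type: change_in_capacity
        auto_scaling_group_id: {get_resource: asg}
        scaling_adjustment: 1

    cpu_alarm_high:
      type: OS::Ceilometer::Alarm
      properties:
        meter_name: cpu_util
        statistic: avg
        period: 600
        evaluation_periods: 1
        threshold: 50
        comparison_operator: gt
        # the alarm triggers the policy; the policy does the adjustment,
        # never taking the group outside min_size/max_size
        alarm_actions:
          - {get_attr: [scaleup_policy, alarm_url]}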

 What are you concerned about making your scaling group smaller than your
 configured minimum?  Just checking here that there is not a
 misunderstanding.

 As Clint noted, there is a large-scale effort underway to make Heat
 maintain what it creates despite deletion of the underlying resources.

 There is also a small-scale effort underway to make ASGs recover from
 members stopping proper functioning for whatever reason.  See
 https://review.openstack.org/#/c/127884/ for a proposed interface and
 initial implementation.

 Regards,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] how to send messages (and events) to our users

2015-04-06 Thread Angus Salkeld
Hi all

For quite some time we (Heat team) have wanted to be able to send messages
to our
users (by user I do not mean the Operator, but the User that is interacting
with the client).

What do I mean by user messages, and how do they differ from our current
log messages
and notifications?
- Our current logs are for the operator and have information that the user
should not have
  (ip addresses, hostnames, configuration options, other tenant info etc..)
- Our notifications (that Ceilometer uses) *could* be used, but I am not
sure if it quite fits.
  (they seem a bit heavy weight for a log message and aimed at higher level
events)

These messages could be (based on Heat's use case):

- Specific user oriented log messages (distinct from our normal operator
logs)
- Deprecation messages (if they are using old resource properties/template
features)
- Progress and resource state changes (an application doesn't want to poll
an api for a state change)
- Automated actions (autoscaling events, time based actions)
- Potentially integrated server logs (from in guest agents)

I wanted to raise this to [all] as it would be great to have a general
solution that
all projects can make use of.

What do we have now:
- The user can not get any kind of log message from services. The closest
thing
  ATM is the notifications in Ceilometer, but I have the feeling that these
have a different aim.
- nova console log
- Heat has a DB event table for users (we have long wanted to get rid of
this)

What do other clouds provide:
- https://devcenter.heroku.com/articles/logging
- https://cloud.google.com/logging/docs/
- https://aws.amazon.com/blogs/aws/cloudwatch-log-service/
- http://aws.amazon.com/cloudtrail/
(other examples...)

What are some options we could investigate:
1. remote syslog
The user provides a rsyslog server IP/port and we send their messages
to that.
[pros] simple, and the user could also send their server's log messages
to the same
  rsyslog - great visibility into what is going on.

  There are great tools like loggly/logstash/papertrailapp that
source logs from remote syslog
  It leaves the user in control of what tools they get to use.

[cons] Would we become a spam agent (just sending traffic to an
IP/Port) - I guess that's how remote syslog
   works. I am not sure if this is an issue or not?

  This might be a lesser solution for the use case of an
application doesn't want to poll an api for a state change

  I am not sure how we would integrate this with horizon.

2. Zaqar
We send the messages to a queue in Zaqar.
[pros] multi tenant OpenStack project for messaging!

[cons] I don't think Zaqar is installed in most installations (tho'
please correct me here if this
   is wrong). I know Mirantis does not currently support Zaqar,
so that would be a problem for me.

  There is not the level of external tooling like in option 1
(logstash and friends)

3. Other options:
   Please chip in with suggestions/links!


https://blueprints.launchpad.net/heat/+spec/user-visible-logs


Regards
Angus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] how to send messages (and events) to our users

2015-04-06 Thread Angus Salkeld
On Tue, Apr 7, 2015 at 1:53 PM, Chris Friesen chris.frie...@windriver.com
wrote:

 On 04/06/2015 08:55 PM, Angus Salkeld wrote:

 Hi all

 For quite some time we (Heat team) have wanted to be able to send
 messages to our
 users (by user I do not mean the Operator, but the User that is
 interacting with
 the client).

 What do I mean by user messages, and how do they differ from our
 current log
 messages and notifications?
 - Our current logs are for the operator and have information that the user
 should not have
(ip addresses, hostnames, configuration options, other tenant info
 etc..)
 - Our notifications (that Ceilometer uses) *could* be used, but I am not
 sure if
 it quite fits.
(they seem a bit heavy weight for a log message and aimed at higher
 level events)


 snip

  What are some options we could investigate:
 1. remote syslog
 2. Zaqar
 3. Other options:
 Please chip in with suggestions/links!


 What about a per-user notification topic using the existing notification
 backend?


Wouldn't that require the Operator to provide the end user with access to
the message bus?
Seems scary to me.

-Angus



 Chris

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Decoupling Heat integration tests from Heat tree

2015-03-26 Thread Angus Salkeld
On Fri, Mar 27, 2015 at 6:26 AM, Zane Bitter zbit...@redhat.com wrote:

 On 26/03/15 10:38, Pavlo Shchelokovskyy wrote:

 Hi all,

 following IRC discussion here is a summary of what I propose can be done
 in this regard, in the order of increased decoupling:

 1) make a separate requirements.txt for integration tests and modify the
 tox job to use it. The code of these tests is pretty much decoupled
 already, not using any modules from the main heat tree. The actual
 dependencies are mostly api clients and test framework. Making this
 happen should decrease the time needed to setup the tox env and thus
 speed up the test run somewhat.


 +1

  2) provide a separate distutils setup.py/setup.cfg
 to ease packaging and installing this test
 suite to run it against an already deployed cloud (especially scenario
 tests seem to be valuable in this regard).


 +1

  3) move the integration tests to a separate repo and use it as git
 submodule in the main tree. The main reasons not to do it as far as I've
 collected are not being able to provide code change and test in the same
 (or dependent) commits, and lesser reviewers' attention to a separate
 repo.


 -0

 I'm not sure what the advantage is here, and there are a bunch of
 downsides (basically, I agree with Ryan). Unfortunately I missed the IRC
 discussion, can you elaborate on how decoupling to this degree might help
 us?


I think the overall goal is to make it easier for an operator to run tests
against their cloud to make sure
everything is working. We should really have a common approach to this so
they don't have to do something
different for each project. Any opinions from the QA team?

Maybe have it as its own package, then you can install it and run
something like:
os-functional-tests-run <package-name> <auth args here>

-A




 cheers,
 Zane.

  What do you think about it? Please share your comments.

 Best regards,

 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] PTL Candidacy

2015-03-17 Thread Angus Salkeld
There have been no other candidates within the allowed time, so
congratulations Steve on being the new Kolla PTL.

Regards
Angus Salkeld



On Thu, Mar 12, 2015 at 8:13 PM, Angus Salkeld asalk...@mirantis.com
wrote:

 Candidacy confirmed.

 -Angus

 On Thu, Mar 12, 2015 at 6:54 PM, Steven Dake (stdake) std...@cisco.com
 wrote:

  I am running for PTL for the Kolla project.  I have been executing in
 an unofficial PTL capacity for the project for the Kilo cycle, but I feel
 it is important for our community to have an elected PTL and have asked
 Angus Salkeld, who has no stake in the outcome of the election, to officiate the
 election [1].

  For the Kilo cycle our community went from zero LOC to a fully working
 implementation of most of the services based upon Kubernetes as the
 backend.  Recently I led the effort to remove Kubernetes as a backend and
 provide container contents, building, and management on bare metal using
 docker-compose which is nearly finished.  At the conclusion of Kilo, it
 should be possible from one shell script to start an AIO full deployment of
 all of the current OpenStack git-namespaced services using containers built
 from RPM packaging.

  For Liberty, I’d like to take our community and code to the next
 level.  Since our containers are fairly solid, I’d like to integrate with
 existing projects such as TripleO, os-ansible-deployment, or Fuel.
 Alternatively the community has shown some interest in creating a
 multi-node HA-ified installation toolchain.

  I am deeply committed to leading the community where the core
 developers want the project to go, wherever that may be.

  I am strongly in favor of adding HA features to our container
 architecture.

  I would like to add .deb package support and from-source support to our
 docker container build system.

  I would like to implement a reference architecture where our containers
 can be used as a building block for deploying a reference platform of 3
 controller nodes, ~100 compute nodes, and ~10 storage nodes.

  I am open to expanding our scope to address full deployment, but would
 prefer to merge our work with one or more existing upstreams such as
 TripleO, os-ansible-deployment, and Fuel.

  Finally I want to finish the job on functional testing, so all of our
 containers are functionally checked and gated per commit on Fedora, CentOS,
 and Ubuntu.

  I am experienced as a PTL, leading the Heat Orchestration program from
 zero LOC through OpenStack integration for 3 development cycles.  I write
 code as a PTL and was instrumental in getting the Magnum Container Service
 code-base kicked off from zero LOC where Adrian Otto serves as PTL.  My
 past experiences include leading Corosync from zero LOC to a stable
 building block of High Availability in Linux.  Prior to that I was part of
 a team that implemented Carrier Grade Linux.  I have a deep and broad
 understanding of open source, software development, high performance team
 leadership, and distributed computing.

  I would be pleased to serve as PTL for Kolla for the Liberty cycle and
 welcome your vote.

  Regards
 -steve

  [1] https://wiki.openstack.org/wiki/Kolla/PTL_Elections_March_2015

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] PTL Candidacy

2015-03-12 Thread Angus Salkeld
Candidacy confirmed.

-Angus

On Thu, Mar 12, 2015 at 6:54 PM, Steven Dake (stdake) std...@cisco.com
wrote:

  I am running for PTL for the Kolla project.  I have been executing in an
 unofficial PTL capacity for the project for the Kilo cycle, but I feel it
 is important for our community to have an elected PTL and have asked Angus
 Salkeld, who has no stake in the outcome of the election, to officiate the election [1].

  For the Kilo cycle our community went from zero LOC to a fully working
 implementation of most of the services based upon Kubernetes as the
 backend.  Recently I led the effort to remove Kubernetes as a backend and
 provide container contents, building, and management on bare metal using
 docker-compose which is nearly finished.  At the conclusion of Kilo, it
 should be possible from one shell script to start an AIO full deployment of
 all of the current OpenStack git-namespaced services using containers built
 from RPM packaging.

  For Liberty, I’d like to take our community and code to the next level.
 Since our containers are fairly solid, I’d like to integrate with existing
 projects such as TripleO, os-ansible-deployment, or Fuel.  Alternatively
 the community has shown some interest in creating a multi-node HA-ified
 installation toolchain.

  I am deeply committed to leading the community where the core developers
 want the project to go, wherever that may be.

  I am strongly in favor of adding HA features to our container
 architecture.

  I would like to add .deb package support and from-source support to our
 docker container build system.

  I would like to implement a reference architecture where our containers
 can be used as a building block for deploying a reference platform of 3
 controller nodes, ~100 compute nodes, and ~10 storage nodes.

  I am open to expanding our scope to address full deployment, but would
 prefer to merge our work with one or more existing upstreams such as
 TripleO, os-ansible-deployment, and Fuel.

  Finally I want to finish the job on functional testing, so all of our
 containers are functionally checked and gated per commit on Fedora, CentOS,
 and Ubuntu.

  I am experienced as a PTL, leading the Heat Orchestration program from
 zero LOC through OpenStack integration for 3 development cycles.  I write
 code as a PTL and was instrumental in getting the Magnum Container Service
 code-base kicked off from zero LOC where Adrian Otto serves as PTL.  My
 past experiences include leading Corosync from zero LOC to a stable
 building block of High Availability in Linux.  Prior to that I was part of
 a team that implemented Carrier Grade Linux.  I have a deep and broad
 understanding of open source, software development, high performance team
 leadership, and distributed computing.

  I would be pleased to serve as PTL for Kolla for the Liberty cycle and
 welcome your vote.

  Regards
 -steve

  [1] https://wiki.openstack.org/wiki/Kolla/PTL_Elections_March_2015

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][heat] Autoscaling parameters blueprint

2015-03-11 Thread Angus Salkeld
On Wed, Mar 11, 2015 at 7:01 PM, ELISHA, Moshe (Moshe) 
moshe.eli...@alcatel-lucent.com wrote:

  Hey,



 Can someone please share the current status of the “Autoscaling signals to
 allow parameter passing for UserData” blueprint -
 https://blueprints.launchpad.net/heat/+spec/autoscaling-parameters.



 We have a very concrete use case that requires passing parameters on scale
 out.

 What is the best way to revive this blueprint?


Hi

You can remove a particular instance from a resource group by doing an
update and
adding the instance to be removed to the removal_list.

See the section removal_policies here:
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup
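
For illustration, a minimal sketch of that approach (count, image and flavor are placeholders; '0' is the name of the group member to drop on the next update):

  resources:
    group:
      type: OS::Heat::ResourceGroup
      properties:
        count: 3
        # members named in resource_list are removed first when the
        # group is updated/shrunk
        removal_policies:
          - resource_list: ['0']
        resource_def:
          type: OS::Nova::Server
          properties:
            image: cirros      # placeholder
            flavor: m1.small   # placeholder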

Let us know if this could do the job for you.

Regards
Angus




 Thanks.



 Moshe Elisha.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Unknown resource OS::Heat::ScaledResource

2015-03-10 Thread Angus Salkeld
On Wed, Mar 11, 2015 at 3:04 AM, Zane Bitter zbit...@redhat.com wrote:

 On 10/03/15 12:26, Manickam, Kanagaraj wrote:

 Hi,

 I observed that in one of the patches mentioned below, OS::Heat::ScaledResource
 is reported as unknown. Could anyone help here to resolve the issue?
 Thanks.

 http://logs.openstack.org/76/157376/8/check/check-heat-
 dsvm-functional-mysql/c9a1be3/logs/screen-h-eng.txt.gz


   reports OS::Heat::ScaledResource as unknown


 Some more context for anyone looking at this:

 * The resource type mapping is stored in the environment (for all
 autoscaling group nested stacks).
 * The error is happening when deleting the autoscaling group - i.e. upon
 loading the stack from the database the mapping is no longer present in the
 environment
 * https://review.openstack.org/#/c/157376/8 is the patch in question,
 which is changing the way the environment is stored (evidently incorrectly)


Yeah, I'll sort it out. It's in stack_resource.py (it's not passing the env
into the template constructor).

-Angus


 cheers,
 Zane.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Heat] Expression of Bay Status

2015-03-09 Thread Angus Salkeld
On Tue, Mar 10, 2015 at 8:53 AM, Adrian Otto adrian.o...@rackspace.com
wrote:

 Magnum Team,

 In the following review, we have the start of a discussion about how to
 tackle bay status:

 https://review.openstack.org/159546

 I think a key issue here is that we are not subscribing to an event feed
 from Heat to tell us about each state transition, so we have a low degree
 of confidence that our state will match the actual state of the stack in
 real-time. At best, we have an eventually consistent state for Bay
 following a bay creation.


Hi Adrian

Currently Heat does not have an event stream, but instead an event table and
REST resource. This sucks as you have to poll it.
We have long been wanting some integration with Zaqar - we are all
convinced, it just needs someone to do the work.
So the idea here is we send user-related events via a Zaqar queue and the
user (Magnum) subscribes and gets events.
From last summit https://etherpad.openstack.org/p/kilo-heat-summit-topics
(see line 73).


 Here are some options for us to consider to solve this:

 1) Propose enhancements to Heat (or learn about existing features) to emit
 a set of notifications upon state changes to stack resources so the state
 can be mirrored in the Bay resource.


See above - do you have anyone to drive this?



 2) Spawn a task to poll the Heat stack resource for state changes, and
 express them in the Bay status, and allow that task to exit once the stack
 reaches its terminal (completed) state.

 3) Don’t store any state in the Bay object, and simply query the heat
 stack for status as needed.


If it's not too frequent then this might be your best bet until we get 1).

Hope this helps
-Angus



 Are each of these options viable? Are there other options to consider?
 What are the pro/con arguments for each?

 Thanks,

 Adrian



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-02 Thread Angus Salkeld
On Tue, Mar 3, 2015 at 9:45 AM, James Bottomley 
james.bottom...@hansenpartnership.com wrote:

 On Tue, 2015-02-24 at 12:05 +0100, Thierry Carrez wrote:
  Daniel P. Berrange wrote:
   [...]
   The key observations
   
  
   The first key observation from the schedule is that although we have
   a 6 month release cycle, we in fact make 4 releases in that six
   months because there are 3 milestone releases approx 6-7 weeks apart
   from each other, in addition to the final release. So one of the key
   burdens of a more frequent release cycle is already being felt, to
   some degree.
  
   The second observation is that thanks to the need to support a
   continuous deployment models, the GIT master branches are generally
   considered to be production ready at all times. The tree does not
   typically go through periods of major instability that can be seen
   in other projects, particular those which lack such comprehensive
   testing infrastructure.
  
   The third observation is that due to the relatively long cycle, and
   increasing amounts of process, the work accomplished during the
   cycles is becoming increasingly bursty. This is in turn causing
   unacceptably long delays for contributors when their work is unlucky
   enough to not get accepted during certain critical short windows of
   opportunity in the cycle.
  
   The first two observations strongly suggest that the choice of 6
   months as a cycle length is a fairly arbitrary decision that can be
   changed without unreasonable pain. The third observation suggests a
   much shorter cycle length would smooth out the bumps and lead to a
   more efficient and satisfying development process for all involved.
 
  I think you're judging the cycle from the perspective of developers
  only. 6 months was not an arbitrary decision. Translations and
  documentation teams basically need a month of feature/string freeze in
  order to complete their work. Since we can't reasonably freeze one month
  every 2 months, we picked 6 months.

 Actually, this is possible: look at Linux, it freezes for 10 weeks of a
 12 month release cycle (or 6 weeks of an 8 week one).  More on this
 below.

  It's also worth noting that we were on a 3-month cycle at the start of
  OpenStack. That was dropped after a cataclysmic release that managed the
  feat of (a) not having anything significant done, and (b) having out of
  date documentation and translations.
 
  While I agree that the packagers and stable teams can opt to skip a
  release, the docs, translations or security teams don't really have that
  luxury... Please go beyond the developers needs and consider the needs
  of the other teams.
 
  Random other comments below:
 
   [...]
   Release schedule
   
  
   First the releases would probably be best attached to a set of
   pre-determined fixed dates that don't ever vary from year to year.
   e.g. releases happen Feb 1st, Apr 1st, Jun 1st, Aug 1st, Oct 1st, and
   Dec 1st. If a particular release slips, don't alter following release
   dates, just shorten the length of the dev cycle, so it becomes fully
   self-correcting. The even numbered months are suggested to avoid a
   release landing in xmas/new year :-)
 
  The Feb 1 release would probably be pretty empty :)
 
   [...]
   Stable branches
   ---
  
   The consequences of a 2 month release cycle appear fairly severe for
   the stable branch maint teams at first sight. This is not, however,
   an insurmountable problem. The linux kernel shows an easy way forward
   with their approach of only maintaining stable branches for a subset
   of major releases, based around user / vendor demand. So it is still
   entirely conceivable that the stable team only provide stable branch
   releases for 2 out of the 6 yearly releases. ie no additional burden
   over what they face today. Of course they might decide they want to
   do more stable branches, but maintain each for a shorter time. So I
   could equally see them choosing todo 3 or 4 stable branches a year.
   Whatever is most effective for those involved and those consuming
   them is fine.
 
  Stable branches may have the luxury of skipping releases and designate a
  stable one from time to time (I reject the Linux comparison because
  the kernel is at a very different moment in software lifecycle). The
  trick being, making one release special is sure to recreate the peak
  issues you're trying to solve.

 I don't disagree with the observation about different points in the
 lifecycle, but perhaps it might be instructive to ask if the linux
 kernel ever had a period in its development history that looks somewhat
 like OpenStack does now.  I would claim it did: before 2.6, we had the
 odd/even develop/stabilise cycle.  The theory driving it was that we
 needed a time for everyone to develop then a time for everyone to help
 make stable.  You yourself said this in the other thread:

  Joe Gordon wrote:
   [...]
   I think 

Re: [openstack-dev] [horizon][heat]Vote for Openstack L summit topic The Heat Orchestration Template Builder: A demonstration

2015-02-23 Thread Angus Salkeld
On Mon, Feb 23, 2015 at 9:18 PM, Aggarwal, Nikunj nikunj.aggar...@hp.com
wrote:

  Hi Angus,

  I am working with Timur and Drago on the HOT builder. I am using this -
 https://github.com/rackerlabs/hotbuilder - as the base, have made some
 improvements over the existing code, and wish to give a demonstration of how
 easy it is to create a HOT template from Horizon.


That's awesome, looking forward to see it!

-Angus




 Regards,

 Nikunj



 *From:* Angus Salkeld [mailto:asalk...@mirantis.com]
 *Sent:* Monday, February 23, 2015 4:52 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [horizon][heat]Vote for Openstack L summit
 topic The Heat Orchestration Template Builder: A demonstration



 On Sat, Feb 21, 2015 at 4:21 AM, Aggarwal, Nikunj nikunj.aggar...@hp.com
 wrote:

  Hi,



 I have submitted a presentation for the OpenStack L summit:



 The Heat Orchestration Template Builder: A demonstration
 https://www.openstack.org/vote-vancouver/Presentation/the-heat-orchestration-template-builder-a-demonstration





 Hi

 Nice to see work on a HOT builder progressing, but..

 are you planning on integrating this with the other HOT builder efforts?

 Is the code public (link)?


 This is more of a framework to make these easier to build:
 https://github.com/stackforge/merlin
 https://wiki.openstack.org/wiki/Merlin

 Timur (who works on Merlin) is working with Rackers to build this upstream
 - I am not sure on the progress.
 https://github.com/rackerlabs/hotbuilder
 https://github.com/rackerlabs/foundry



 It would be nice if we could all work together (I am hoping you are
 already)?

 Hopefully some of the others that are working on this can chip in and say
 where they

 are.


 -Angus





  Please cast your vote if you feel it is worth presenting.



  Thanks & Regards,

 Nikunj




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][heat]Vote for Openstack L summit topic The Heat Orchestration Template Builder: A demonstration

2015-02-22 Thread Angus Salkeld
On Sat, Feb 21, 2015 at 4:21 AM, Aggarwal, Nikunj nikunj.aggar...@hp.com
wrote:

  Hi,



 I have submitted a presentation for the OpenStack L summit:



 The Heat Orchestration Template Builder: A demonstration
 https://www.openstack.org/vote-vancouver/Presentation/the-heat-orchestration-template-builder-a-demonstration




Hi

Nice to see work on a HOT builder progressing, but..
are you planning on integrating this with the other HOT builder efforts?
Is the code public (link)?

This is more of a framework to make these easier to build:
https://github.com/stackforge/merlin
https://wiki.openstack.org/wiki/Merlin

Timur (who works on Merlin) is working with Rackers to build this upstream
- I am not sure on the progress.
https://github.com/rackerlabs/hotbuilder
https://github.com/rackerlabs/foundry

It would be nice if we could all work together (I am hoping you are
already)?
Hopefully some of the others that are working on this can chip in and say
where they
are.

-Angus



  Please cast your vote if you feel it is worth presenting.



  Thanks & Regards,

 Nikunj



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Changing expression delimiters in Mistral DSL

2015-02-18 Thread Angus Salkeld
On Tue, Feb 17, 2015 at 7:06 AM, Dmitri Zimine dzim...@stackstorm.com
wrote:

 SUMMARY:
 

 We are changing the syntax for inlining YAQL expressions in Mistral YAML
 from {1+$.my.var} (or “{1+$.my.var}”) to <% 1+$.my.var %>

 Below I explain the rationale and the criteria for the choice. Comments
 and suggestions welcome.

 DETAILS:
 -

 We faced a number of problems with using YAQL expressions in Mistral DSL:
 [1] must handle any YAQL, not only the ones started with $; [2] must
 preserve types and [3] must comply with YAML. We fixed these problems by
 applying Ansible-style syntax, requiring quotes around delimiters (e.g.
 "{1+$.my.yaql.var}"). However, it led to unbearable confusion in DSL
 readability, with regard to types:

 publish:
   intvalue1: "{1+1}" # Confusing: you expect quotes to mean a string.
   intvalue2: "{int(1+1)}" # Even this doesn't clear up the confusion
   whatisthis: "{$.x + $.y}" # What type would this return?

 We got a very strong push back from users in the field on this syntax.

 The crux of the problem is using { } as delimiters in YAML. It is plain wrong
 to use the reserved character. The clean solution is to find a delimiter
 that won’t conflict with YAML.

 Criteria for selecting best alternative are:
 1) Consistently applies to all cases of using YAQL in the DSL
 2) Complies with YAML
 3) Familiar to target user audience - openstack and devops

 We prefer using two-char delimiters to avoid requiring extra escaping
 within the expressions.

 The current winner is <% %>. It fits YAML well. It is familiar to
 openstack/devops as this is used for embedding Ruby expressions in Puppet
 and Chef (for instance, [4]). It plays relatively well across all cases of
 using expressions in Mistral (see examples in [5]):


A really long time ago I posted this patch for Heat:
https://review.openstack.org/#/c/41858/2/doc/source/template_guide/functions.rst
(adds a jinja2 function to Heat http://jinja.pocoo.org/docs/dev/)

I also used <% %>, it seems to be what people use when using jinja2 on yaml.

This was rejected because of security concerns of Jinja2.



 ALTERNATIVES considered:
 --

 1) Use Ansible-like syntax:
 http://docs.ansible.com/YAMLSyntax.html#gotchas
 Rejected for confusion around types. See above.

 2) Use functions, like Heat HOT or TOSCA:

 HOT templates and TOSCA don't seem to have a concept of typed variables
 to borrow from (please correct me if I missed it). But they have functions:
 function: { function_name: {foo: [parameter1, parameter2], bar: "xxx"}}.
 Applied to Mistral, it would look like:

 publish:
  - bool_var: { yaql: "1+1+$.my.var < 100" }


You *could* have the expression as a list, like this (but might not work in
all cases):
{ yaql: [1, +, 1, $.my.var, <, 100] }

Generally in Heat we make the functions and args a natural part of the yaml
so it's not one big string that gets parsed separately.
Tho' it would be nice to have a common approach to this, so I am partial to
the one you have here.
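
For comparison, a rough side-by-side sketch of the two forms (the expressions are illustrative only, mirroring the examples above):

  publish:
    # delimiter form, with the proposed <% %> syntax
    bool_var1: <% 1 + 1 + $.my.var < 100 %>
    # function form, with the expression broken up into a YAML list
    bool_var2: { yaql: [1, +, 1, $.my.var, <, 100] }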

-Angus



 Not bad, but currently rejected as it reads worse than delimiter-based
 syntax, especially in simplified one-line action invocation.

 3) < > paired with other symbols: php-style <? .. ?>


 *REFERENCES: *
 --

 [1] Allow arbitrary YAQL expressions, not just ones started with $ :
 https://github.com/stackforge/mistral/commit/5c10fb4b773cd60d81ed93aec33345c0bf8f58fd
 [2] Use Ansible-like syntax to make YAQL expressions YAML complient
 https://github.com/stackforge/mistral/commit/d9517333b1fc9697d4847df33d3b774f881a111b
 [3] Preserving types in YAQL
 https://github.com/stackforge/mistral/blob/d9517333b1fc9697d4847df33d3b774f881a111b/mistral/tests/unit/test_expressions.py#L152-L184
 [4]Using % % in Puppet
 https://docs.puppetlabs.com/guides/templating.html#erb-is-plain-text-with-embedded-ruby

 [5] Etherpad with discussion
 https://etherpad.openstack.org/p/mistral-YAQL-delimiters
 [6] Blueprint
 https://blueprints.launchpad.net/mistral/+spec/yaql-delimiters


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-03 Thread Angus Salkeld
On Tue, Feb 3, 2015 at 10:52 AM, Steve Baker sba...@redhat.com wrote:

 A spec has been raised to add a config option to allow operators to choose
 whether to use the new convergence engine for stack operations. For some
 context you should read the spec first [1]

 Rather than doing this, I would like to propose the following:
 * Users can (optionally) choose which engine to use by specifying an
 engine parameter on stack-create (choice of classic or convergence)
 * Operators can set a config option which determines which engine to use
 if the user makes no explicit choice
 * Heat developers will set the default config option from classic to
 convergence when convergence is deemed sufficiently mature

 I realize it is not ideal to expose this kind of internal implementation
 detail to the user, but choosing convergence _will_ result in different
 stack behaviour (such as multiple concurrent update operations) so there is
 an argument for giving the user the choice. Given enough supporting
 documentation they can choose whether convergence might be worth trying for
 a given stack (for example, a large stack which receives frequent updates)

 Operators likely won't feel they have enough knowledge to make the call
 that a heat install should be switched to using all convergence, and users
 will never be able to try it until the operators do (or the default
 switches).

 Finally, there are also some benefits to heat developers. Creating a whole
 new gate job to test convergence-enabled heat will consume its share of CI
 resource. I'm hoping to make it possible for some of our functional tests
 to run against a number of scenarios/environments. Being able to run tests
 under classic and convergence scenarios in one test run will be a great
 help (for performance profiling too).


Hi

I didn't have a good initial response to this, but it's growing on me. One
issue is the specific option that we expose: it's not nice having
a dead option once we totally switch over and remove classic. So is it
worth coming up with a real feature that convergence-phase-1 enables
and using that (like enable-concurrent-updates)? Then we need to think about whether we
would actually want to keep that feature around (as in,
once classic is gone, is it possible to maintain
disable-concurrent-update?).

Regards
Angus



 If there is enough agreement then I'm fine with taking over and updating
 the convergence-config-option spec.

 [1] https://review.openstack.org/#/c/152301/2/specs/kilo/
 convergence-config-option.rst

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] core team changes

2015-01-28 Thread Angus Salkeld
On Thu, Jan 29, 2015 at 4:47 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Angus Salkeld's message of 2015-01-27 17:36:41 -0800:
  Hi all
 
  After having a look at the stats:
  http://stackalytics.com/report/contribution/heat-group/90
  http://stackalytics.com/?module=heat-groupmetric=person-day
 
  I'd like to propose the following changes to the Heat core team:
 
  Add:
  Qiming Teng
  Huang Tianhua


Congrats Qiming and Huang!


 
  Remove:
  Bartosz Górski (Bartosz has indicated that he is happy to be removed and
  doesn't have the time to work on heat ATM).
 
  Core team please respond with +/- 1.
 

 I think I'm far enough removed from day to day reviewing of Heat that you
 can remove me from the core team as well. I really haven't been able to
 focus on Heat directly lately, and thus I don't have any idea of Qiming
 and Huang are suitable for core reviewer status.


OK, sad to see you go, but if you refocus on Heat we can fast track you to
core.

Regards
Angus



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] core team changes

2015-01-27 Thread Angus Salkeld
Hi all

After having a look at the stats:
http://stackalytics.com/report/contribution/heat-group/90
http://stackalytics.com/?module=heat-groupmetric=person-day

I'd like to propose the following changes to the Heat core team:

Add:
Qiming Teng
Huang Tianhua

Remove:
Bartosz Górski (Bartosz has indicated that he is happy to be removed and
doesn't have the time to work on heat ATM).

Core team please respond with +/- 1.

Thanks
Angus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][hot]

2015-01-27 Thread Angus Salkeld
On Tue, Jan 27, 2015 at 7:00 PM, Dmitry mey...@gmail.com wrote:

 I have another question, is it possible to get the stack name in the hot
 script?
 E.g.
 params:
  $stack_name: {get_global_variable: $stack.name}


See:
http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#pseudo-parameters
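
For example, a minimal snippet using the OS::stack_name pseudo parameter (the image, flavor and $STACK_NAME key are placeholders):

  resources:
    server:
      type: OS::Nova::Server
      properties:
        image: cirros      # placeholder
        flavor: m1.small   # placeholder
        user_data:
          str_replace:
            template: |
              #!/bin/sh
              echo "deployed as part of stack $STACK_NAME"
            params:
              # OS::stack_name (like OS::stack_id) is available to
              # get_param in every template without declaring it
              $STACK_NAME: {get_param: 'OS::stack_name'}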

Regards
Angus



 On Tue, Jan 27, 2015 at 3:53 AM, Qiming Teng teng...@linux.vnet.ibm.com
 wrote:

 On Mon, Jan 26, 2015 at 07:44:25PM +0200, Dmitry wrote:
  thanks, exactly what I was looking for:
  curl http://169.254.169.254/1.0/meta-data/instance-id

 or, /var/lib/cloud/data/instance-id, if cloud-init is there.

 Regards,
   Qiming

  On Mon, Jan 26, 2015 at 7:31 PM, Zane Bitter zbit...@redhat.com
 wrote:
 
   On 25/01/15 10:41, Dmitry wrote:
  
   Hello,
   I need to receive instance id as part of the instance installation
 script.
   Something like:
   params:
   $current_id: {get_param: $this.id}
  
  
   I have no idea what this is supposed to mean, sorry.
  
Is it possible?
  
  
   The get_resource function will return the server UUID for a server
   resource, but you can't use it from within that resource itself (it
 would
   be a circular reference).
  
   The UUID of a server is provided to the server through the Nova
 metadata;
   you should retrieve it from there in your user_data script.
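
A minimal sketch of that suggestion, using the same metadata URL mentioned earlier in this thread (image/flavor are placeholders):

  resources:
    server:
      type: OS::Nova::Server
      properties:
        image: cirros      # placeholder
        flavor: m1.small   # placeholder
        user_data: |
          #!/bin/sh
          # ask the Nova metadata service for this instance's own id
          INSTANCE_ID=$(curl -s http://169.254.169.254/1.0/meta-data/instance-id)
          echo "running as ${INSTANCE_ID}"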
  
   cheers,
   Zane.
  
  
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  

 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][heat] potentially breaking release of oslo.messaging Tuesday 27th

2015-01-26 Thread Angus Salkeld
On Tue, Jan 27, 2015 at 8:05 AM, Doug Hellmann d...@doughellmann.com
wrote:

 We’ve held up the oslo.messaging release with the namespace package work
 for a while now while we work with the nova, designate, and heat teams to
 fix things up so their tests won’t break. We think the one remaining issue
 is in heat, where some tests are mocking private parts of oslo.messaging.

 There’s a bug filed at https://bugs.launchpad.net/heat/+bug/1412836

 I think asalkeld fixed similar tests in
 https://review.openstack.org/#/c/145094/1/heat/tests/test_stack_lock.py
 but missed these at the time.

 I would propose a fix, but I really don’t understand the test suite or
 what’s going on there. If someone else does propose a fix, please ping me
 in #openstack-oslo on IRC and I’ll test the pre-release against it to see
 if the issue is resolved.


This should sort it out: https://review.openstack.org/150185

-Angus


 I plan to release the new version of oslo.messaging around 15:00 UTC on 27
 Jan, but I will wait if there’s a fix in heat’s merge queue.

 Doug


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence Phase 1 implementation plan

2015-01-26 Thread Angus Salkeld
On Sat, Jan 24, 2015 at 7:00 AM, Zane Bitter zbit...@redhat.com wrote:

 I've mentioned this in passing a few times, but I want to lay it out here
 in a bit more detail for comment. Basically we're implementing convergence
 at a time when we still have a lot of 'unit' tests that are really
 integration tests, and we don't want to have to rewrite them to anticipate
 this new architecture, nor wait until they have all been converted into
 functional tests. And of course it goes without saying that we have to land
 all of these changes without breaking anything for users.

 To those ends, my proposal is that we (temporarily) support two code
 paths: the existing, legacy in-memory path and the new, distributed
 convergence path. Stacks will contain a field indicating which code path
 they were created with, and each stack will be operated on only by that
 same code path throughout its lifecycle (i.e. a stack created in legacy
 mode will always use the legacy code). We'll add a config option, off by
 default, to enable the new code path. That way users can switch over at a
 time of their choosing. When we're satisfied that it's stable enough we can
 flip the default (note: IMHO this would have to happen before kilo-3 in
 order to make it for the Kilo release).


+1



 Based on this plan, I had a go at breaking the work down into discrete
 tasks, and because it turned out to be really long I put it in an etherpad
 rather than include it here:

 https://etherpad.openstack.org/p/heat-convergence-tasks


Thanks Zane, this looks great.


 If anyone has additions/changes then I suggest adding them to that
 etherpad and replying to this message to flag your changes.

 To be clear, it's unlikely I will have time to do the implementation work
 on any of these tasks myself (although I will be trying to review as many
 of them as possible). So the goal here is to get as many contributors
 involved in doing stuff in parallel as we can.

 There are obviously dependencies between many of these tasks, so my plan
 is to raise each one as a blueprint so we can see the magic picture that
 Launchpad shows. I want to get feedback first though, because there are 18
 of them so far, and rejigging everything in response to feedback would be a
 bit of a pain.

 I'm also prepared to propose specs for all of these _if_ people think that
 would be helpful. I see three options here:
  - Propose 18 fairly minimal specs (maybe in a single review?)


This sounds fine, but if possible group them a bit - 18 sounds like a lot and
many of these look like small jobs.
I am also open to using bugs for smaller items. Basically this is just the
red tape, so whatever is the least effort
and makes things easier to divide the work up.

-Angus


  - Propose 1 large spec (essentially the contents of that etherpad)
  - Just discuss in the etherpad rather than Gerrit


 Obviously that's in decreasing order of the amount of work required, but
 I'll do whatever folks think best for discussion.

 cheers,
 Zane.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [Heat] Heat supports OpenStack operation

2015-01-22 Thread Angus Salkeld
On Thu, Jan 22, 2015 at 4:06 PM, Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
li-gong.d...@hp.com wrote:

  Hi Angus,



  We are currently fleshing out a spec for Mistral resource types that
 should do
  what you suggest above:
 https://review.openstack.org/#/c/143989/12/specs/kilo/mistral-resources.rst

  (see the example on line 108).

 Thank you for provide this info.

 Mistral seems to provide workflow/workbook functions which OpenStack lacks.

 Will Mistral go into OpenStack, or will it become an OpenStack
 integrated or incubated project?


Hopefully!

-Angus




 Regards,

 Gary





 *From:* Angus Salkeld [mailto:asalk...@mirantis.com]
 *Sent:* Wednesday, January 21, 2015 8:00 AM
 *To:* Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
 *Cc:* Steven Hardy; openstack@lists.openstack.org

 *Subject:* Re: [Openstack] [Heat] Heat supports OpenStack operation



 On Tue, Jan 20, 2015 at 5:54 PM, Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
 li-gong.d...@hp.com wrote:

 Hi Steven,

 Thanks for your explanation on Heat.

  -Original Message-
  From: Steven Hardy [mailto:sha...@redhat.com]
  Sent: Friday, January 16, 2015 5:12 PM
  To: Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
  Cc: openstack@lists.openstack.org
  Subject: Re: [Openstack] [Heat] Heat supports OpenStack operation
 
  On Fri, Jan 16, 2015 at 08:34:47AM +, Duan, Li-Gong (Gary@HPServers-
  Core-OE-PSC) wrote:
  Does Heat support executing an OpenStack operation, such as
 migrating
  an
  Nova instance, powering off a Nova instance?
 
  We call state changes which don't affect the definition of the stack an
  action, and we only support suspend and resume at present (e.g. heat
  action-suspend <stack name>)

 It does make sense in that Heat centers around stacks and does a great job
 in stack operations.

  It may be possible to add support for more actions (until now nobody has
  asked for them), but note it only really makes sense to drive such
 actions via
  heat when dependencies/ordering are important.

  For example, when suspending then resuming a stack containing a
  WebServer instance and a DatabaseServer instance, you want
  DatabaseServer to be resumed before WebServer (same order as on stack
  create)
 
  So, it might make sense to have an action which can power-off a whole
 stack,
  turning off each nova node in the right order (you could write a little
 script
  which does the same thing quite easily though..).

 It is an easy job to write a script to implement such actions as turning
 off each
 Nova node. But I would like to figure out a more elegant way to do it, for
 example, automatically turning off each Nova node at a specified condition.
 This can be implemented by using Ceilometer::Alarm to implement triggering
 this operation, but it seems that there is no appropriate place/component
 to
 implement the action (say, turning off each nova node). That's the reason
 why
 I want to see whether Heat can supports such operations.



 Hi

 We are currently fleshing out a spec for Mistral resource types that
 should do
 what you suggest above:
 https://review.openstack.org/#/c/143989/12/specs/kilo/mistral-resources.rst

 (see the example on line 108).
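
Very roughly, the kind of template the spec sketches (the resource type and property names follow the draft spec under review and may well change - treat this as a hypothetical illustration; my_server is assumed to be another resource in the template, not shown here):

  resources:
    stop_server_workflow:
      type: OS::Mistral::Workflow   # proposed resource type from the spec
      properties:
        type: direct
        tasks:
          - name: stop_server
            # nova.servers_stop is one of the actions Mistral generates
            # from the nova client
            action: nova.servers_stop
            input:
              server: {get_resource: my_server}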

 -Angus




  It probably doesn't make much sense for heat to support things like
  migrating an instance, since it's an operation which isn't scoped to the
 stack
  and its dependencies; it's likely an operator wants to migrate a
 workload off
  a specific compute node, which is something Heat has no visibility of at
 all.
 
  I know currently Heat does a good job on launching cloud cluster or
  application, such as deploying a Nova instance with specified
 network
  configuration, but not sure how to control (not launch or delete) a
 Nova
  instance or cinder volume.
 
  Right now, the easiest answer is to write a little script which uses
 information
  from heat (e.g. to get the UUIDs for the resources you want to
  control) and then e.g. calls nova.

 As mentioned above, the basic idea is to trigger a specified set of
 operations
 once a specified condition is reached. In this case, monitoring the
 condition
 and trigger action can be implemented with Ceilometer::Alarm, but I want to
 see whether it is possible to implement the set of operations in Heat
 template.

 Considering that Heat is dealing with sets of resources(stack), I am
 wondering
 whether it can deal with sets of operations, too.

  If Heat does not support these OpenStack operation, what is the best
  Heat
  way if we want to execute some operations, such as powering off a
 Nova
  instance, in Heat template?
 
  As mentioned above, these lifecycle operations affect the stack state,
 but
  not it's definition, so it probably doesn't make sense to expose actions
 like
  powering off an instance in the heat template.

 Now I understand that it doesn't make much sense to implement operation set
 in heat template and I need to figure other way to implement the set of
 operations

Re: [openstack-dev] [Heat] query property on heat OS::Ceilometer::Alarm for juno/stable

2015-01-21 Thread Angus Salkeld
On Thu, Jan 22, 2015 at 5:16 AM, Karolyn Chambers chamb...@us.ibm.com
wrote:

 In Kilo, with the bug fix from 1326721, I can create an alarm like the
 following with the query property. I use this alarm to monitor the health
 of a server instance in the pool. In juno/stable, without the query
 property, ceilometer alarms can only be created via heat with matching
 metadata. This limits the metrics I can create alarms for, as not every
 resource has metadata associated with it. Without this bug fix, I've been
 unable to find a way to monitor the health of my server in Juno via heat. I'd
 like thoughts on backporting 1326721 to juno/stable, so this can work the
 same as it does in Kilo: https://review.openstack.org/#/c/146624/ . Thanks


Hi

I appreciate that it is a useful feature for you, but people rely on stable
branches being stable. This is a reasonably big patch (163) for a back port.
So I understand Zane's -1. That said, it is contained within ceilometer
alarm resource and the code now matches master. And the feature has been in
ceilometer for
ages.

I am OK with this going in, if stable branch folk are OK with it. (I'll put
a +1 on the review).

-Angus


 gone_alarm:
   type: OS::Ceilometer::Alarm
   properties:
     description: Detect server being unresponsive
     repeat_actions: False
     meter_name: network.services.lb.member
     statistic: avg
     period: 600
     evaluation_periods: 1
     threshold: 1
     alarm_actions: [ {get_attr: [restart, AlarmUrl]} ]
     query:
     - field: resource_id
       op: eq
       value: {get_resource: member}
     comparison_operator: lt

 member:
   type: OS::Neutron::PoolMember
   properties:
     pool_id: {get_param: pool_id}
     address: {get_attr: [server_node, first_address]}
     protocol_port: { get_param: loadbalance_port }


 Karolyn Chambers



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [Heat] Heat supports OpenStack operation

2015-01-20 Thread Angus Salkeld
On Tue, Jan 20, 2015 at 5:54 PM, Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
li-gong.d...@hp.com wrote:

 Hi Steven,

 Thanks for your explanation on Heat.

  -Original Message-
  From: Steven Hardy [mailto:sha...@redhat.com]
  Sent: Friday, January 16, 2015 5:12 PM
  To: Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
  Cc: openstack@lists.openstack.org
  Subject: Re: [Openstack] [Heat] Heat supports OpenStack operation
 
  On Fri, Jan 16, 2015 at 08:34:47AM +, Duan, Li-Gong (Gary@HPServers-
  Core-OE-PSC) wrote:
  Does Heat support executing an OpenStack operation, such as migrating
  a Nova instance or powering off a Nova instance?
 
  We call state changes which don't affect the definition of the stack an
  action, and we only support suspend and resume at present (e.g heat
  action-suspend stack name)

 It does make sense in that Heat centers around stacks and does a great job
 in stack operations.

  It may be possible to add support for more actions (until now nobody has
  asked for them), but note it only really makes sense to drive such
 actions via
  heat when dependencies/ordering are important.

  For example, when suspending then resuming a stack containing a
  WebServer instance and a DatabaseServer instance, you want
  DatabaseServer to be resumed before WebServer (same order as on stack
  create)
 
  So, it might make sense to have an action which can power-off a whole
 stack,
  turning off each nova node in the right order (you could write a little
 script
  which does the same thing quite easily though..).

 It is easy to write a script to implement such actions as turning off each
 Nova node, but I would like to figure out a more elegant way to do it, for
 example automatically turning off each Nova node when a specified condition
 is met. The triggering can be implemented with Ceilometer::Alarm, but it
 seems there is no appropriate place/component to implement the action itself
 (say, turning off each Nova node). That's the reason why I want to see
 whether Heat can support such operations.


Hi

We are currently fleshing out a spec for Mistral resource types that should
do
what you suggest above:
https://review.openstack.org/#/c/143989/12/specs/kilo/mistral-resources.rst
(see the example on line 108).

-Angus



  It probably doesn't make much sense for heat to support things like
  migrating an instance, since it's an operation which isn't scoped to the
 stack
  and it's dependencies, it's likely an operator wants to migrate a
 workload off
  a specific compute node, which is something Heat has no visibility of at
 all.
 
  I know currently Heat does a good job on launching cloud cluster or
  application, such as deploying a Nova instance with specified
 network
  configuration, but not sure how to control(not launch or delete) a
 Nova
  instance or cinder volume.
 
  Right now, the easiest answer is write a little script which uses
 information
  from heat (e.g to get the UUID's for the resources you want to
  control) then e.g calls nova.
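For what it's worth, a minimal sketch of such a script might look like the
following (hypothetical helper; it assumes already-authenticated
python-heatclient and python-novaclient instances and only covers
OS::Nova::Server resources):

def power_off_stack_servers(heat, nova, stack_id):
    # Ask Heat for the stack's resources, then stop the Nova servers it owns.
    for res in heat.resources.list(stack_id):
        if res.resource_type == 'OS::Nova::Server' and res.physical_resource_id:
            server = nova.servers.get(res.physical_resource_id)
            if server.status == 'ACTIVE':
                nova.servers.stop(server)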

 As mentioned above, the basic idea is to trigger a specified set of
 operations once a specified condition is reached. In this case, monitoring
 the condition and triggering the action can be implemented with
 Ceilometer::Alarm, but I want to see whether it is possible to implement the
 set of operations in a Heat template.

 Considering that Heat deals with sets of resources (stacks), I am wondering
 whether it can deal with sets of operations, too.

  If Heat does not support these OpenStack operation, what is the best
  Heat
  way if we want to execute some operations, such as powering off a
 Nova
  instance, in Heat template?
 
  As mentioned above, these lifecycle operations affect the stack state,
 but
  not it's definition, so it probably doesn't make sense to expose actions
 like
  powering off an instance in the heat template.

 Now I understand that it doesn't make much sense to implement an operation
 set in a Heat template, and I need to figure out another way to implement
 the set of operations.

 Regards,
 Gary


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [heat] Remove deprecation properties

2015-01-18 Thread Angus Salkeld
On Fri, Jan 16, 2015 at 11:10 PM, Sergey Kraynev skray...@mirantis.com
wrote:

 Steve, Thanks for the feedback.

 On 16 January 2015 at 15:09, Steven Hardy sha...@redhat.com wrote:

 On Thu, Dec 25, 2014 at 01:52:43PM +0400, Sergey Kraynev wrote:
 Hi all.
 Recently we got several patches on review which remove old deprecated
 properties [1], and one of mine [2].
 The aim is to delete deprecated code and redundant tests. It looks simple,
 but the main problem we met is backward compatibility.
 For example, a user has created a resource (FIP) with the old property
 schema, i.e. using SUBNET_ID instead of SUBNET. At first glance nothing bad
 will happen, because:

 FWIW I think it's too soon to remove the Neutron subnet_id/network_id
 properties, they were only deprecated in Juno [1], and it's evident that
 users are still using them on icehouse [2]

 I thought the normal deprecation cycle was at least two releases, but I
 can't recall where I read that.  Considering the overhead of maintaining
 these is small, I'd favour leaning towards giving more time for users to
 update their templates, rather than breaking them via very aggressive
 removal of deprecated interfaces.


 Honestly I thought that we use one release cycle, but I have no objections
 to doing it after two releases.
 I will be glad to know what the desired deprecation period is.



 I'd suggest some or all of the following:

 - Add a planned for removal in $release to the SupportStatus string
   associated with the deprecation, so we can document the planned removal.
 - Wait for at least two releases between deprecation and removal, and
   announce the interfaces which will be removed in the release notes for
   the release before removal e.g:
 - Deprecated in Juno
 - Announce planned removal in Kilo release notes
 - Remove in J


 I like this idea, IMO it will make our deprecation process clear.
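To make the suggestion above concrete, here is a rough sketch of how a
deprecated property could carry that information. The names follow
heat.engine.support and heat.engine.properties only loosely and are not
guaranteed to match the exact signatures in any given release:

from heat.engine import properties
from heat.engine import support

SUBNET_ID = 'subnet_id'

properties_schema = {
    SUBNET_ID: properties.Schema(
        properties.Schema.STRING,
        'ID of the subnet (DEPRECATED - use "subnet" instead).',
        support_status=support.SupportStatus(
            status=support.DEPRECATED,
            message='Use the "subnet" property instead; '
                    'planned for removal two releases after Juno.',
            version='2014.2'),
    ),
}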





 [1] https://review.openstack.org/#/c/82853/
 [2]
 http://lists.openstack.org/pipermail/openstack/2015-January/011156.html

 1. handle_delete uses resource_id, and any changes in the property schema
 do not affect other actions.
 2. If a user wants to use an old template, he will get an adequate error
 message that this property is not present in the schema. After that he just
 has to switch to the new property and update the stack using it.
 At the same time we have one big issue with shadow dependencies, which is
 relevant for Neutron resources. The simplest approach will not work [3],
 because the old properties were deleted from the property schema.
 Why is it bad?
 - we will get back all the bugs related to such dependencies.
 - how to verify:
     - create a stack with the old property (my template [4])
     - open Horizon and look at the topology
     - download patch [2] and restart the engine
     - reload the Horizon page with the topology
     - as a result it will be different
 I have some ideas about how to solve this, but none of them is good enough
 for me:
 - getting such information from self.properties.data is bad, because we
 will skip all the validations mentioned in properties.__getitem__
 - renaming the old key in data to the new one, or creating a copy with the
 new key, is not correct for me, because in this case we actually change the
 properties (resource representation) invisibly to the user.
 - we could possibly leave the old deprecated property and mark it as
 something like (removed), which would behave similarly to implemented=False.
 I do not like that, because it means we never remove this support code,
 since we want to stay compatible with old resources. (The user is probably
 not too lazy to do a simple update or something else ...)
 - the last way, which I have not tried yet, is using _stored_properties_data
 for extracting the necessary information.
 So now I have the questions:
 Should we support such cases for backward compatibility?
 If yes, what is the best way to do it for us and the user?
 Maybe we should create some strategy for removing deprecated properties?

 Yeah, other than the process issues I mentioned above, Angus has pointed
 out some technical challenges which may mean property removal breaks
 existing stacks.  IMHO this is something we *cannot* do - folks must be
 able to upgrade heat over multiple versions without breaking their stacks.

 As you say, delete may work, but it's likely several scenarios around
 update will break if the stored stack definition doesn't match the schema
 of the resource, and maintaining the internal references to removed or
 obsolete properties doesn't seem like a good plan long term.

 Could we provide some sort of migration tool, which re-writes the
 definition of existing stacks (via a special patch stack update maybe?)
 before upgrading heat?


 Yeah, I thought about it. Probably it's a good solution ...

Re: [openstack-dev] [Heat] Support status of Heat resource types

2015-01-18 Thread Angus Salkeld
On Sun, Jan 18, 2015 at 10:41 PM, Qiming Teng teng...@linux.vnet.ibm.com
wrote:

 Dear all,
 One question we constantly get from Heat users is about the support
 status of resource types. Some users are not well informed of this
 information so that is something we can improve.

 Though some resource types are already labelled with support status,
 there are quite a few of them not identified yet. Help is needed to
 complete the list.


I think the only way to figure these out is via the git history. This is
going to be tedious work :-(

-Angus
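If anyone wants to script part of that digging, a small hypothetical helper
along these lines could produce a first guess for the empty "since" cells in
the table quoted below; it just dates the commit that added each resource
plugin file (the path glob is an assumption about the tree layout):

import glob
import subprocess


def first_commit_date(path):
    # Date of the commit that added the file, following renames.
    out = subprocess.check_output(
        ['git', 'log', '--follow', '--diff-filter=A',
         '--format=%ad', '--date=short', '--', path])
    dates = out.decode().split()
    return dates[-1] if dates else 'unknown'


for path in sorted(glob.glob('heat/engine/resources/*.py')):
    print(path, first_commit_date(path))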


 +--+++
 | name | support_status | since  |
 +--+++
 | AWS::AutoScaling::AutoScalingGroup   || 2014.1 |
 | AWS::AutoScaling::LaunchConfiguration|||
 | AWS::AutoScaling::ScalingPolicy  |||
 | AWS::CloudFormation::Stack   |||
 | AWS::CloudFormation::WaitCondition   || 2014.1 |
 | AWS::CloudFormation::WaitConditionHandle || 2014.1 |
 | AWS::CloudWatch::Alarm   |||
 | AWS::EC2::EIP|||
 | AWS::EC2::EIPAssociation |||
 | AWS::EC2::Instance   |||
 | AWS::EC2::InternetGateway|||
 | AWS::EC2::NetworkInterface   |||
 | AWS::EC2::RouteTable || 2014.1 |
 | AWS::EC2::SecurityGroup  |||
 | AWS::EC2::Subnet |||
 | AWS::EC2::SubnetRouteTableAssociation|||
 | AWS::EC2::VPC|||
 | AWS::EC2::VPCGatewayAttachment   |||
 | AWS::EC2::Volume |||
 | AWS::EC2::VolumeAttachment   |||
 | AWS::ElasticLoadBalancing::LoadBalancer  |||
 | AWS::IAM::AccessKey  |||
 | AWS::IAM::User   |||
 | AWS::RDS::DBInstance |||
 | AWS::S3::Bucket  |||
 | My::TestResource |||
 | OS::Ceilometer::Alarm|||
 | OS::Ceilometer::CombinationAlarm || 2014.1 |
 | OS::Cinder::Volume   |||
 | OS::Cinder::VolumeAttachment |||
 | OS::Glance::Image|| 2014.2 |
 | OS::Heat::AccessPolicy   |||
 | OS::Heat::AutoScalingGroup   || 2014.1 |
 | OS::Heat::CloudConfig|| 2014.1 |
 | OS::Heat::HARestarter| DEPRECATED ||
 | OS::Heat::InstanceGroup  |||
 | OS::Heat::MultipartMime  || 2014.1 |
 | OS::Heat::RandomString   || 2014.1 |
 | OS::Heat::ResourceGroup  || 2014.1 |
 | OS::Heat::ScalingPolicy  |||
 | OS::Heat::SoftwareComponent  || 2014.2 |
 | OS::Heat::SoftwareConfig || 2014.1 |
 | OS::Heat::SoftwareDeployment || 2014.1 |
 | OS::Heat::SoftwareDeployments|| 2014.2 |
 | OS::Heat::Stack  |||
 | OS::Heat::StructuredConfig   || 2014.1 |
 | OS::Heat::StructuredDeployment   || 2014.1 |
 | OS::Heat::StructuredDeployments  || 2014.2 |
 | OS::Heat::SwiftSignal|| 2014.2 |
 | OS::Heat::SwiftSignalHandle  || 2014.2 |
 | OS::Heat::UpdateWaitConditionHandle  || 2014.1 |
 | OS::Heat::WaitCondition  || 2014.2 |
 | OS::Heat::WaitConditionHandle|| 2014.2 |
 | OS::Neutron::Firewall|||
 | OS::Neutron::FirewallPolicy  |||
 | OS::Neutron::FirewallRule|||
 | OS::Neutron::FloatingIP  |||
 | OS::Neutron::FloatingIPAssociation   |||
 | OS::Neutron::HealthMonitor   |   

Re: [openstack-dev] [heat] Remove deprecation properties

2015-01-18 Thread Angus Salkeld
On Sat, Jan 17, 2015 at 5:01 AM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi,

 Murano uses Heat templates with almost all available resources. Neutron
 resources are definitely used.
 I think Murano can update our Heat resources handling properly, but there
 are at least two scenarios which should be considered:
 1) Murano generated stacks are long lasting. Murano uses stack update to
 modify stacks so it is expected that stack update process is not affected
 by Heat upgrade and resource schema deprecation.
 2) Murano uses application packages which contain HOT snippets.
 Application authors heavily rely on backward compatibility so that
 applications written on Icehouse version should work on later OpenStack
 versions. If it is not the case there should be some mechanism to
 automatically translate old resource schema to a new one.

 I hope all the changes will be documented somewhere. I think it will be
 good to have a wiki page with a list of schema versions and changes. This
 will help Heat users to modify their templates accordingly.

 Another potential issue I see is that quite often multiple versions of
 OpenStack are used in data centers: for example, the previous version in
 production and newer versions of OpenStack in staging and dev environments,
 which are used to prepare for the production upgrade and for current
 development. If these different versions of OpenStack require different
 versions of Heat templates it might be a problem, as instead of upgrading
 just the infrastructure services one will need to synchronously upgrade the
 different external components which rely on Heat templates.


Thank Georgy,

We will tread carefully here. Once we add a property, I don't see how we
can ever totally remove support for it.

-Angus



 Thanks
 Georgy


 On Fri, Jan 16, 2015 at 5:10 AM, Sergey Kraynev skray...@mirantis.com
 wrote:

 Steve, Thanks for the feedback.

 On 16 January 2015 at 15:09, Steven Hardy sha...@redhat.com wrote:

 On Thu, Dec 25, 2014 at 01:52:43PM +0400, Sergey Kraynev wrote:
 Hi all.
 Recently we got several patches on review which remove old deprecated
 properties [1], and one of mine [2].
 The aim is to delete deprecated code and redundant tests. It looks simple,
 but the main problem we met is backward compatibility.
 For example, a user has created a resource (FIP) with the old property
 schema, i.e. using SUBNET_ID instead of SUBNET. At first glance nothing bad
 will happen, because:

 FWIW I think it's too soon to remove the Neutron subnet_id/network_id
 properties, they were only deprecated in Juno [1], and it's evident that
 users are still using them on icehouse [2]

 I thought the normal deprecation cycle was at least two releases, but I
 can't recall where I read that.  Considering the overhead of maintaining
 these is small, I'd favour leaning towards giving more time for users to
 update their templates, rather than breaking them via very aggressive
 removal of deprecated interfaces.


 Honestly I thought that we use one release cycle, but I have no objections
 to doing it after two releases.
 I will be glad to know what the desired deprecation period is.



 I'd suggest some or all of the following:

 - Add a planned for removal in $release to the SupportStatus string
   associated with the deprecation, so we can document the planned
 removal.
 - Wait for at least two releases between deprecation and removal, and
   announce the interfaces which will be removed in the release notes for
   the release before removal e.g:
 - Deprecated in Juno
 - Announce planned removal in Kilo release notes
 - Remove in J


 I like this idea, IMO it will make our deprecation process clear.





 [1] https://review.openstack.org/#/c/82853/
 [2]
 http://lists.openstack.org/pipermail/openstack/2015-January/011156.html

 1. handle_delete uses resource_id, and any changes in the property schema
 do not affect other actions.
 2. If a user wants to use an old template, he will get an adequate error
 message that this property is not present in the schema. After that he just
 has to switch to the new property and update the stack using it.
 At the same time we have one big issue with shadow dependencies, which is
 relevant for Neutron resources. The simplest approach will not work [3],
 because the old properties were deleted from the property schema.
 Why is it bad?
 - we will get back all the bugs related to such dependencies.
 - how to verify:
     - create a stack with the old property (my template [4])
     - open Horizon and look at the topology
     - download patch [2] and restart the engine
     - reload the Horizon page with the topology
     - as a result it will be different
 I have some ideas about how to solve this, but none of them is good enough
 for me:
 - getting such information from self.properties.data is bad, because we
 

Re: [openstack-dev] Future-based api for openstack clients calls, that starts background tasks

2015-01-12 Thread Angus Salkeld
On Mon, Jan 12, 2015 at 10:17 PM, Konstantin Danilov kdani...@mirantis.com
wrote:

 Boris,

 Moving from sync HTTP to something like websockets requires a lot of work
 and is not directly connected with the API issue. When OpenStack API servers
 begin to support websockets it would be easy to change the implementation of
 the monitoring thread without breaking compatibility.
 At the moment periodic polling from an additional thread looks reasonable to
 me, and it creates the same amount of HTTP requests as the current
 implementation.

 The BP is not about improving performance, but about providing a convenient
 and common API to handle background tasks.

  So we won't need to retrieve 100500 times information about object.
 As I said before, this API creates the same amount of load as any code we
 currently use to check background tasks.
 It can even decrease the load due to request aggregation in some cases (but
 there are points to discuss).

  As well this pattern doesn't look great.
  I would prefer to see something like:
  vm = novaclient.servers.create(, sync=True)

 This is a completely different pattern. It is a blocking call, which doesn't
 allow you to start two (or more) background tasks from the same thread and
 do some calculations while they run in the background.


Except if you use threads (eventlet or other) - I am still struggling to
enjoy Futures/yield based flow control, lost battle I guess :(.







 On Mon, Jan 12, 2015 at 1:42 PM, Boris Pavlovic bpavlo...@mirantis.com
 wrote:

 Konstantin,


 I believe it's better to work on server side, and use some modern
 approach like web sockets for async operations. So we won't need to
 retrieve 100500 times information about object. And then use this feature
 in clients.

  create_future = novaclient.servers.create_async()
 .
 vm = create_future.result()


 As well this pattern doesn't look great.

 I would prefer to see something like:

   vm = novaclient.servers.create(, sync=True)


 Best regards,
 Boris Pavlovic





 On Mon, Jan 12, 2015 at 2:30 PM, Konstantin Danilov 
 kdani...@mirantis.com wrote:

 Hi all.

 There is a set of OpenStack API functions which start background actions
 and return preliminary results - like 'novaclient.create'. Those functions
 require periodically checking results and handling timeouts/errors
 (and often cleanup + restart helps to fix an error).

 Check/retry/cleanup code is duplicated across a lot of core projects -
 for example heat, tempest, rally, etc., and definitely in many
 third-party scripts.


We have some very similar code at the moment, but we are keen to move away
from it to
something like making use of rpc .{start,end} notifications to reduce the
load we put on keystone and
friends.



 I propose to provide a common high-level API for such functions, which uses
 'futures' (http://en.wikipedia.org/wiki/Futures_and_promises) as a way to
 represent a background task.

 The idea is to add to each background-task-starter function a complementary
 call that returns a 'future' object. E.g.

 create_future = novaclient.servers.create_async()
 .
 vm = create_future.result()


Is that going to return on any state change or do you pass in a list of
acceptable states?

-Angus
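To make the discussion a bit more concrete, here is a minimal sketch of the
kind of wrapper being proposed, built with concurrent.futures around today's
blocking novaclient calls. create_server_async and the ACTIVE-only success
condition are assumptions for illustration; this is not part of novaclient or
the proposed os_api library:

import time
from concurrent import futures

_executor = futures.ThreadPoolExecutor(max_workers=10)


def create_server_async(nova, name, image, flavor,
                        poll_interval=5, timeout=600):
    """Start a server build and return a Future resolving to the ACTIVE server."""

    def _create_and_wait():
        server = nova.servers.create(name, image, flavor)
        deadline = time.time() + timeout
        while time.time() < deadline:
            server = nova.servers.get(server.id)
            if server.status == 'ACTIVE':
                return server
            if server.status == 'ERROR':
                raise RuntimeError('server %s went to ERROR' % server.id)
            time.sleep(poll_interval)
        raise RuntimeError('timed out waiting for server %s' % server.id)

    return _executor.submit(_create_and_wait)


# Usage: start two builds, do other work, block only when the results are needed.
# f1 = create_server_async(nova, 'vm1', image, flavor)
# f2 = create_server_async(nova, 'vm2', image, flavor)
# vm1, vm2 = f1.result(), f2.result()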



 This allows us to unify (and optimize) monitoring cycles, retries, etc.
 Please find the complete BP at
 https://github.com/koder-ua/os_api/blob/master/README.md

 Thanks
 --
 Kostiantyn Danilov aka koder http://koder.ua
 Principal software engineer, Mirantis

 skype:koder.ua
 http://koder-ua.blogspot.com/
 http://mirantis.com









 --
 Kostiantyn Danilov aka koder.ua
 Principal software engineer, Mirantis

 skype:koder.ua
 http://koder-ua.blogspot.com/
 http://mirantis.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence

2015-01-08 Thread Angus Salkeld
On Fri, Jan 9, 2015 at 3:22 PM, Murugan, Visnusaran 
visnusaran.muru...@hp.com wrote:

  Steve,



 My reasoning to have a “--continue” like functionality was to run it as a
 periodic task and substitute continuous observer for now.


I am not in favor of the --continue as an API. I'd suggest responding to
resource timeouts and if there is no response from the task, then re-start
(continue)
the task.

-Angus



 “--continue” based command should work on realized vs. actual graph and
 issue a stack update.



 I completely agree that user action should not be needed to realize a
 partially completed stack.



 Your thoughts.



 *From:* vishnu [mailto:ckmvis...@gmail.com]
 *Sent:* Friday, January 9, 2015 10:08 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence



 Steve,



 Auto recovery is the plan. Engine failure should be detected by way of a
 heartbeat, or the partially realised stack recovered on engine startup in
 the single-engine scenario.



 The --continue command was just an additional helper API.














 *Visnusaran Murugan*

 about.me/ckmvishnu









 On Thu, Jan 8, 2015 at 11:29 PM, Steven Hardy sha...@redhat.com wrote:

 On Thu, Jan 08, 2015 at 09:53:02PM +0530, vishnu wrote:
 Hi Zane,
 I was wondering if we could push changes relating to backup stack
 removal
 and to not load resources as part of the stack. There needs to be a
 capability to restart jobs left over by dead engines -
 something like heat stack-operation --continue [git rebase --continue]

 To me, it's pointless if the user has to restart the operation, they can do
 that already, e.g by triggering a stack update after a failed stack create.

 The process needs to be automatic IMO, if one engine dies, another engine
 should detect that it needs to steal the lock or whatever and continue
 whatever was in-progress.

 Had a chat with shady regarding this. IMO this would be a valuable
 enhancement. Notification based lead sharing can be taken up upon
 completion.

 I was referring to a capability for the service to transparently recover
 if, for example, a heat-engine is restarted during a service upgrade.

 Currently, users will be impacted in this situation, and making them
 manually restart failed operations doesn't seem like a super-great solution
 to me (like I said, they can already do that to some extent)

 Steve






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][oslo] heat unit tests mocking private parts of oslo.messaging

2015-01-05 Thread Angus Salkeld
On Tue, Jan 6, 2015 at 9:03 AM, Doug Hellmann d...@doughellmann.com wrote:

 As part of updating oslo.messaging to move it out of the oslo namespace
 package I ran into some issues with heat. While debugging, I tried running
 the heat unit tests using the modified version of oslo.messaging and ran
 into test failures because the tests are mocking private parts of the
 library that are moving to have new names.

 Mocking internal parts of Oslo libraries isn’t supported, and so I need
 someone from the heat team to work with me to fix the heat tests and
 possibly add missing fixtures to oslo.messaging to avoid breaking heat when
 we release the updated oslo.messaging. I tried raising attention on IRC in
 #heat but I think I’m in the wrong timezone compared to most of the heat
 devs.


Hi Doug,

This should help you along: https://review.openstack.org/#/c/145094/

-Angus
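For anyone hitting the same class of failure (one example is quoted below),
the general shape of the fix is to stub only the public oslo.messaging
surface instead of the private rpc.client._CallContext. A rough, hypothetical
illustration, not the code from the review above (the engine_alive helper and
target/topic names are made up for the example):

import mock
import oslo_messaging as messaging
from oslo_config import cfg


def engine_alive(client, ctxt, engine_id):
    # Stand-in for the code under test: ask an engine whether it is listening.
    return client.call(ctxt, 'listening', engine_id=engine_id)


def test_engine_alive_mocks_public_api():
    transport = messaging.get_transport(cfg.CONF, url='fake://')
    target = messaging.Target(topic='engine', version='1.0')
    client = messaging.RPCClient(transport, target)

    with mock.patch.object(client, 'call', return_value=True) as rpc_call:
        assert engine_alive(client, {}, 'engine-1') is True
        rpc_call.assert_called_once_with({}, 'listening', engine_id='engine-1')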



 Here’s an example of one of the failing tests:

 ==
 FAIL:
 heat.tests.test_stack_lock.StackLockTest.test_failed_acquire_existing_lock_engine_alive
 tags: worker-3
 --
 Traceback (most recent call last):
   File heat/tests/test_stack_lock.py, line 84, in
 test_failed_acquire_existing_lock_engine_alive
 self.m.StubOutWithMock(messaging.rpc.client._CallContext, call)
 AttributeError: 'module' object has no attribute '_CallContext'
 Traceback (most recent call last):
 _StringException: Empty attachments:
   pythonlogging:'alembic'
   pythonlogging:'cliff'
   pythonlogging:'heat-provision'
   pythonlogging:'heat_integrationtests'
   pythonlogging:'heatclient'
   pythonlogging:'iso8601'
   pythonlogging:'keystoneclient'
   pythonlogging:'migrate'
   pythonlogging:'neutronclient'
   pythonlogging:'novaclient'
   pythonlogging:'oslo'
   pythonlogging:'oslo_config'
   pythonlogging:'oslo_messaging'
   pythonlogging:'requests'
   pythonlogging:'routes'
   pythonlogging:'saharaclient'
   pythonlogging:'sqlalchemy'
   pythonlogging:'stevedore'
   pythonlogging:'swiftclient'
   pythonlogging:'troveclient'

 pythonlogging:'': {{{WARNING [heat.engine.environment] Changing
 AWS::CloudWatch::Alarm from OS::Heat::CWLiteAlarm to
 OS::Heat::CWLiteAlarm}}}

 Traceback (most recent call last):
   File heat/tests/test_stack_lock.py, line 84, in
 test_failed_acquire_existing_lock_engine_alive
 self.m.StubOutWithMock(messaging.rpc.client._CallContext, call)
 AttributeError: 'module' object has no attribute '_CallContext'

 Traceback (most recent call last):
 _StringException: Empty attachments:
   pythonlogging:'alembic'
   pythonlogging:'cliff'
   pythonlogging:'heat-provision'
   pythonlogging:'heat_integrationtests'
   pythonlogging:'heatclient'
   pythonlogging:'iso8601'
   pythonlogging:'keystoneclient'
   pythonlogging:'migrate'
   pythonlogging:'neutronclient'
   pythonlogging:'novaclient'
   pythonlogging:'oslo'
   pythonlogging:'oslo_config'
   pythonlogging:'oslo_messaging'
   pythonlogging:'requests'
   pythonlogging:'routes'
   pythonlogging:'saharaclient'
   pythonlogging:'sqlalchemy'
   pythonlogging:'stevedore'
   pythonlogging:'swiftclient'
   pythonlogging:'troveclient'

 pythonlogging:'': {{{WARNING [heat.engine.environment] Changing
 AWS::CloudWatch::Alarm from OS::Heat::CWLiteAlarm to
 OS::Heat::CWLiteAlarm}}}

 Traceback (most recent call last):
   File heat/tests/test_stack_lock.py, line 84, in
 test_failed_acquire_existing_lock_engine_alive
 self.m.StubOutWithMock(messaging.rpc.client._CallContext, call)
 AttributeError: 'module' object has no attribute '_CallContext'


 That class _CallContext isn’t part of the public API for oslo.messaging,
 and so it is not being exposed through the redirect modules I’m creating
 for backwards compatibility. We need to look for a way to create a fixture
 to do whatever it is these tests are trying to do — I don’t understand the
 tests, which is why I need a heat developer to help out.

 Doug



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] cancel the next 2 weekly meetings

2014-12-23 Thread Angus Salkeld
Hi

Lets cancel the next 2 weekly meetings as they neatly fall on
Christmas eve and new years day.

Happy holidays!

Regards
Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Application level HA via Heat

2014-12-22 Thread Angus Salkeld
On Tue, Dec 23, 2014 at 6:42 AM, Zane Bitter zbit...@redhat.com wrote:

 On 22/12/14 13:21, Steven Hardy wrote:

 Hi all,

 So, lately I've been having various discussions around $subject, and I
 know
 it's something several folks in our community are interested in, so I
 wanted to get some ideas I've been pondering out there for discussion.

 I'll start with a proposal of how we might replace HARestarter with
 AutoScaling group, then give some initial ideas of how we might evolve
 that
 into something capable of a sort-of active/active failover.

 1. HARestarter replacement.

 My position on HARestarter has long been that equivalent functionality
 should be available via AutoScalingGroups of size 1.  Turns out that
 shouldn't be too hard to do:

   resources:
server_group:
  type: OS::Heat::AutoScalingGroup
  properties:
min_size: 1
max_size: 1
resource:
  type: ha_server.yaml

server_replacement_policy:
  type: OS::Heat::ScalingPolicy
  properties:
# FIXME: this adjustment_type doesn't exist yet
adjustment_type: replace_oldest
auto_scaling_group_id: {get_resource: server_group}
scaling_adjustment: 1


 One potential issue with this is that it is a little bit _too_ equivalent
 to HARestarter - it will replace your whole scaled unit (ha_server.yaml in
 this case) rather than just the failed resource inside.

  So, currently our ScalingPolicy resource can only support three adjustment
 types, all of which change the group capacity.  AutoScalingGroup already
 supports batched replacements for rolling updates, so if we modify the
 interface to allow a signal to trigger replacement of a group member, then
 the snippet above should be logically equivalent to HARestarter AFAICT.

 The steps to do this should be:

   - Standardize the ScalingPolicy-AutoScaling group interface, so
 aynchronous adjustments (e.g signals) between the two resources don't use
 the adjust method.

   - Add an option to replace a member to the signal interface of
 AutoScalingGroup

   - Add the new replace adjustment type to ScalingPolicy


 I think I am broadly in favour of this.


  I posted a patch which implements the first step, and the second will be
 required for TripleO, e.g we should be doing it soon.

 https://review.openstack.org/#/c/143496/
 https://review.openstack.org/#/c/140781/

 2. A possible next step towards active/active HA failover

 The next part is the ability to notify before replacement that a scaling
 action is about to happen (just like we do for LoadBalancer resources
 already) and orchestrate some or all of the following:

 - Attempt to quiesce the currently active node (may be impossible if it's
in a bad state)

 - Detach resources (e.g volumes primarily?) from the current active node,
and attach them to the new active node

 - Run some config action to activate the new node (e.g run some config
script to fsck and mount a volume, then start some application).

 The first step is possible by putting a SoftwareConfig/SoftwareDeployment
 resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the
 node is too bricked to respond and specifying DELETE action so it only
 runs
 when we replace the resource).

 The third step is possible either via a script inside the box which polls
 for the volume attachment, or possibly via an update-only software config.
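As a rough illustration of that in-instance script (the device path, mount
point and service name here are purely illustrative assumptions):

import os
import subprocess
import time

DEVICE = '/dev/vdb'         # assumed device node the attachment appears as
MOUNT_POINT = '/mnt/data'   # assumed mount point for the application data


def wait_for_volume(timeout=600, interval=5):
    # Poll until the attached volume's device node shows up, or give up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(DEVICE):
            return True
        time.sleep(interval)
    return False


if wait_for_volume():
    subprocess.check_call(['fsck', '-y', DEVICE])
    if not os.path.isdir(MOUNT_POINT):
        os.makedirs(MOUNT_POINT)
    subprocess.check_call(['mount', DEVICE, MOUNT_POINT])
    subprocess.check_call(['systemctl', 'start', 'myapp.service'])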

 The second step is the missing piece AFAICS.

 I've been wondering if we can do something inside a new heat resource,
 which knows what the current active member of an ASG is, and gets
 triggered on a replace signal to orchestrate e.g deleting and creating a
 VolumeAttachment resource to move a volume between servers.

 Something like:

   resources:
server_group:
  type: OS::Heat::AutoScalingGroup
  properties:
min_size: 2
max_size: 2
resource:
  type: ha_server.yaml

server_failover_policy:
  type: OS::Heat::FailoverPolicy
  properties:
auto_scaling_group_id: {get_resource: server_group}
resource:
  type: OS::Cinder::VolumeAttachment
  properties:
  # FIXME: refs is a ResourceGroup interface not currently
  # available in AutoScalingGroup
  instance_uuid: {get_attr: [server_group, refs, 1]}

server_replacement_policy:
  type: OS::Heat::ScalingPolicy
  properties:
# FIXME: this adjustment_type doesn't exist yet
adjustment_type: replace_oldest
auto_scaling_policy_id: {get_resource: server_failover_policy}
scaling_adjustment: 1


 This actually fails because a VolumeAttachment needs to be updated in
 place; if you try to switch servers but keep the same Volume when replacing
 the attachment you'll get an error.

 TBH {get_attr: [server_group, refs, 1]} is doing most of the heavy lifting
 here, so in theory you could just have an OS::Cinder::VolumeAttachment
 instead of the 

Re: [openstack-dev] [Mistral] For-each

2014-12-18 Thread Angus Salkeld
On Mon, Dec 15, 2014 at 8:00 PM, Nikolay Makhotkin nmakhot...@mirantis.com
wrote:

 Hi,

 Here is the doc with suggestions on specification for for-each feature.

 You are free to comment and ask questions.


 https://docs.google.com/document/d/1iw0OgQcU0LV_i3Lnbax9NqAJ397zSYA3PMvl6F_uqm0/edit?usp=sharing



Just as a drive by comment, there is a Heat spec for a for-each:
https://review.openstack.org/#/c/140849/
(there hasn't been a lot of feedback for it yet tho')

Nice to have these somewhat consistent.

-Angus



 --
 Best Regards,
 Nikolay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Max Complexity Check Considered Harmful

2014-12-09 Thread Angus Salkeld
On Wed, Dec 10, 2014 at 8:43 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Mon, Dec 8, 2014 at 5:03 PM, Brant Knudson b...@acm.org wrote:


 Not too long ago projects added a maximum complexity check to tox.ini,
 for example keystone has max-complexity=24. Seemed like a good idea at
 the time, but in a recent attempt to lower the maximum complexity check in
 keystone[1][2], I found that the maximum complexity check can actually lead
 to less understandable code. This is because the check includes an embedded
 function's complexity in the function that it's in.


 This behavior is expected.

 Nested functions cannot be unit tested on their own.  Part of the issue is
 that nested functions can access variables scoped to the outer function, so
 the following function is valid:

  def outer():
 var = outer
 def inner():
 print var
 inner()


 Because nested functions cannot easily be unit tested, and can be harder
 to reason about since they can access variables that are part of the outer
 function, I don't think they are easier to understand (there are still
 cases where a nested function makes sense though).


I think the improvement in ease of unit testing is a huge plus from my
point of view (when splitting the function to the same level).
This seems in the balance to be far more helpful than harmful.

-Angus
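A tiny, hypothetical illustration of that trade-off: the extracted helper can
be imported and tested directly, while the nested version can only be reached
through its outer function (and its complexity is charged to it).

def _normalize_name(name):
    # Extracted helper: importable and unit-testable on its own.
    return name.strip().lower()


def register_user(name):
    return {'name': _normalize_name(name)}


def register_user_nested(name):
    # Same behaviour with a nested helper: tests can only exercise it
    # indirectly, and the complexity check counts it against this function.
    def _normalize(value):
        return value.strip().lower()
    return {'name': _normalize(name)}


assert register_user('  Alice ') == register_user_nested('  Alice ')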



 The way I would have lowered the complexity of the function in keystone
 is to extract the complex part into a new function. This can make the
 existing function much easier to understand for all the reasons that one
 defines a function for code. Since this new function is obviously only
 called from the function it's currently in, it makes sense to keep the new
 function inside the existing function. It's simpler to think about an
 embedded function because then you know it's only called from one place.
 The problem is, because of the existing complexity check behavior, this
 doesn't lower the complexity according to the complexity check, so you
 wind up putting the function as a new top-level, and now a reader is has to
 assume that the function could be called from anywhere and has to be much
 more cautious about changes to the function.


 Since the complexity check can lead to code that's harder to understand,
 it must be considered harmful and should be removed, at least until the
 incorrect behavior is corrected.


 Why do you think the max complexity check is harmful? Because it prevents
 large numbers of nested functions?




 [1] https://review.openstack.org/#/c/139835/
 [2] https://review.openstack.org/#/c/139836/
 [3] https://review.openstack.org/#/c/140188/

 - Brant







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-09 Thread Angus Salkeld
On Wed, Dec 10, 2014 at 5:11 AM, Stefano Maffulli stef...@openstack.org
wrote:

 On 12/09/2014 06:04 AM, Jeremy Stanley wrote:
  We already have a solution for tracking the contributor-IRC
  mapping--add it to your Foundation Member Profile. For example, mine
  is in there already:
 
  http://www.openstack.org/community/members/profile/5479

 I recommend updating the openstack.org member profile and add IRC
 nickname there (and while you're there, update your affiliation history).

 There is also a search engine on:

 http://www.openstack.org/community/members/


Except that info doesn't appear nicely in review. Some people put their
nick in their Full Name in
gerrit. Hopefully Clint doesn't mind:

https://review.openstack.org/#/q/owner:%22Clint+%27SpamapS%27+Byrum%22+status:open,n,z

I *think* that's done here: https://review.openstack.org/#/settings/contact

At least with that it is really obvious without having to go to another
site what your nick is.

-Angus


 /stef


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Rework auto-scaling support in Heat

2014-12-01 Thread Angus Salkeld
On Tue, Dec 2, 2014 at 8:15 AM, Zane Bitter zbit...@redhat.com wrote:

 On 28/11/14 02:33, Qiming Teng wrote:

 Dear all,

 Auto-Scaling is an important feature supported by Heat and needed by
 many users we talked to.  There are two flavors of AutoScalingGroup
 resources in Heat today: the AWS-based one and the Heat native one.  As
 more requests come in, the team has proposed to split auto-scaling
 support into a separate service so that people who are interested in it
 can jump onto it.
 type code) will be drastically simplified.  The separated AS service
 could move forward more rapidly and efficiently.

 This work was proposed a while ago with the following wiki and
 blueprints (mostly approved during Havana cycle), but the progress is
 slow.  A group of developers now volunteer to take over this work and
 move it forward.


 Thank you!


  wiki: https://wiki.openstack.org/wiki/Heat/AutoScaling
 BPs:
   - https://blueprints.launchpad.net/heat/+spec/as-lib-db
   - https://blueprints.launchpad.net/heat/+spec/as-lib
   - https://blueprints.launchpad.net/heat/+spec/as-engine-db
   - https://blueprints.launchpad.net/heat/+spec/as-engine
   - https://blueprints.launchpad.net/heat/+spec/autoscaling-api
   - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-client
   - https://blueprints.launchpad.net/heat/+spec/as-api-group-resource
   - https://blueprints.launchpad.net/heat/+spec/as-api-policy-resource
   - https://blueprints.launchpad.net/heat/+spec/as-api-webhook-
 trigger-resource
   - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources

 Once this whole thing lands, Heat engine will talk to the AS engine in
 terms of ResourceGroup, ScalingPolicy, Webhooks.  Heat engine won't care
 how auto-scaling is implemented although the AS engine may in turn ask
 Heat to create/update stacks for scaling's purpose.  In theory, AS
 engine can create/destroy resources by directly invoking other OpenStack
 services.  This new AutoScaling service may eventually have its own DB,
 engine, API, api-client.  We can definitely aim high while work hard on
 real code.

 After reviewing the BPs/Wiki and some communication, we get two options
 to push forward this.  I'm writing this to solicit ideas and comments
 from the community.

 Option A: Top-Down Quick Split
 --

 This means we will follow a roadmap shown below, which is not 100%
 accurate yet and very rough:

1) Get the separated REST service in place and working
2) Switch Heat resources to use the new REST service

 Pros:
- Separate code base means faster review/commit cycle
- Less code churn in Heat
 Cons:
- A new service need to be installed/configured/launched
- Need commitments from dedicated, experienced developers from very
  beginning


 Anything that involves a kind of flag-day switchover like this
 (maintaining the implementation in two different places) will be very hard
 to land, and if by some miracle it does will likely cause a lot of
 user-breaking bugs.


Well we can use the environment to provide the two options for a cycle
(like the cloud watch lite) and the operator can switch when they feel
comfortable.
The reason I'd like to keep the door somewhat open to this is the huge
burden of work we will put on Qiming and his
team for option B (and the load on the core team). As you know this has
been thought of before and fizzled out, I don't want that to happen again.
If we can make this more manageable for the team doing this, then I think
that is a good thing. We could implement the guts of the AS in
a library and import it from both places (to prevent duplicate
implementations).



  Option B: Bottom-Up Slow Growth
 ---

 The roadmap is more conservative, with many (yes, many) incremental
 patches to migrate things carefully.

1) Separate some of the autoscaling logic into libraries in Heat
2) Augment heat-engine with new AS RPCs
3) Switch AS related resource types to use the new RPCs
4) Add new REST service that also talks to the same RPC
   (create new GIT repo, API endpoint and client lib...)

 Pros:
- Less risk breaking user lands with each revision well tested
- More smooth transition for users in terms of upgrades


I think this is only true up until 4), at that point it's the same pain
as option A
(the operator needs a new REST endpoint, daemons to run, etc) - so delayed
pain.


 Cons:
- A lot of churn within Heat code base, which means long review cycles
- Still need commitments from cores to supervise the whole process


 I vote for option B (surprise!), and I will sign up right now to as many
 nagging emails as you care to send when you need reviews if you will take
 on this work :)

  There could be option C, D... but the two above are what we came up with
 during the discussion.


I'd suggest a combination between A and B.

1) Separate 

Re: [openstack-dev] [Heat] Rework auto-scaling support in Heat

2014-11-28 Thread Angus Salkeld
On Fri, Nov 28, 2014 at 5:33 PM, Qiming Teng teng...@linux.vnet.ibm.com
wrote:

 Dear all,

 Auto-Scaling is an important feature supported by Heat and needed by
 many users we talked to.  There are two flavors of AutoScalingGroup
 resources in Heat today: the AWS-based one and the Heat native one.  As
 more requests come in, the team has proposed to split auto-scaling
 support into a separate service so that people who are interested in it
 can jump onto it.
 type code) will be drastically simplified.  The separated AS service
 could move forward more rapidly and efficiently.

 This work was proposed a while ago with the following wiki and
 blueprints (mostly approved during Havana cycle), but the progress is
 slow.  A group of developers now volunteer to take over this work and
 move it forward.


Thank you for taking on this big project!



 wiki: https://wiki.openstack.org/wiki/Heat/AutoScaling
 BPs:
  - https://blueprints.launchpad.net/heat/+spec/as-lib-db
  - https://blueprints.launchpad.net/heat/+spec/as-lib
  - https://blueprints.launchpad.net/heat/+spec/as-engine-db
  - https://blueprints.launchpad.net/heat/+spec/as-engine
  - https://blueprints.launchpad.net/heat/+spec/autoscaling-api
  - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-client
  - https://blueprints.launchpad.net/heat/+spec/as-api-group-resource
  - https://blueprints.launchpad.net/heat/+spec/as-api-policy-resource
  -
 https://blueprints.launchpad.net/heat/+spec/as-api-webhook-trigger-resource
  - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources

 Once this whole thing lands, Heat engine will talk to the AS engine in
 terms of ResourceGroup, ScalingPolicy, Webhooks.  Heat engine won't care
 how auto-scaling is implemented although the AS engine may in turn ask
 Heat to create/update stacks for scaling's purpose.  In theory, AS
 engine can create/destroy resources by directly invoking other OpenStack
 services.  This new AutoScaling service may eventually have its own DB,
 engine, API, api-client.  We can definitely aim high while work hard on
 real code.


Yes, I think AS is the last major bit of code that needs to be moved out
into its own
service. Tho' hopefully still using Heat to orchestrate.



 After reviewing the BPs/Wiki and some communication, we get two options
 to push forward this.  I'm writing this to solicit ideas and comments
 from the community.

 Option A: Top-Down Quick Split
 --

 This means we will follow a roadmap shown below, which is not 100%
 accurate yet and very rough:

   1) Get the separated REST service in place and working
   2) Switch Heat resources to use the new REST service

 Pros:
   - Separate code base means faster review/commit cycle
   - Less code churn in Heat
 Cons:
   - A new service need to be installed/configured/launched
   - Need commitments from dedicated, experienced developers from very
 beginning

 Option B: Bottom-Up Slow Growth
 ---

 The roadmap is more conservative, with many (yes, many) incremental
 patches to migrate things carefully.

   1) Separate some of the autoscaling logic into libraries in Heat
   2) Augment heat-engine with new AS RPCs
   3) Switch AS related resource types to use the new RPCs
   4) Add new REST service that also talks to the same RPC
  (create new GIT repo, API endpoint and client lib...)

 Pros:
   - Less risk breaking user lands with each revision well tested
   - More smooth transition for users in terms of upgrades

 Cons:
   - A lot of churn within Heat code base, which means long review cycles
   - Still need commitments from cores to supervise the whole process



At summit people were leaning towards B, but I am very tempted by the
potential speed of development of A and the reduced code churn on the heat
repo (assuming we pull it out into a new repo). Given the other code churn
going on in our code base (convergence) it might make non stop AS rework
difficult to manage.



 There could be option C, D... but the two above are what we came up with
 during the discussion.

 Another important thing we talked about is about the open discussion on
 this.  OpenStack Wiki seems a good place to document settled designs but
 not for interactive discussions.  Probably we should leverage etherpad
 and the mailinglist when moving forward.  Suggestions on this are also
 welcomed.


I think a mixture of here (the mailing list) and the weekly meeting should be
OK for getting some consensus about the way forward.

-Angus



 Thanks.

 Regards,
  Qiming



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-11-26 Thread Angus Salkeld
On Thu, Nov 27, 2014 at 12:20 PM, Zane Bitter zbit...@redhat.com wrote:

 A bunch of us have spent the last few weeks working independently on proof
 of concept designs for the convergence architecture. I think those efforts
 have now reached a sufficient level of maturity that we should start
 working together on synthesising them into a plan that everyone can forge
 ahead with. As a starting point I'm going to summarise my take on the three
 efforts; hopefully the authors of the other two will weigh in to give us
 their perspective.


 Zane's Proposal
 ===

 https://github.com/zaneb/heat-convergence-prototype/tree/distributed-graph

 I implemented this as a simulator of the algorithm rather than using the
 Heat codebase itself in order to be able to iterate rapidly on the design,
 and indeed I have changed my mind many, many times in the process of
 implementing it. Its notable departure from a realistic simulation is that
 it runs only one operation at a time - essentially giving up the ability to
 detect race conditions in exchange for a completely deterministic test
 framework. You just have to imagine where the locks need to be.
 Incidentally, the test framework is designed so that it can easily be
 ported to the actual Heat code base as functional tests so that the same
 scenarios could be used without modification, allowing us to have
 confidence that the eventual implementation is a faithful replication of
 the simulation (which can be rapidly experimented on, adjusted and tested
 when we inevitably run into implementation issues).

 This is a complete implementation of Phase 1 (i.e. using existing resource
 plugins), including update-during-update, resource clean-up, replace on
 update and rollback; with tests.

 Some of the design goals which were successfully incorporated:
 - Minimise changes to Heat (it's essentially a distributed version of the
 existing algorithm), and in particular to the database
 - Work with the existing plugin API
 - Limit total DB access for Resource/Stack to O(n) in the number of
 resources
 - Limit overall DB access to O(m) in the number of edges
 - Limit lock contention to only those operations actually contending (i.e.
 no global locks)
 - Each worker task deals with only one resource
 - Only read resource attributes once

 Open questions:
 - What do we do when we encounter a resource that is in progress from a
 previous update while doing a subsequent update? Obviously we don't want to
 interrupt it, as it will likely be left in an unknown state. Making a
 replacement is one obvious answer, but in many cases there could be serious
 down-sides to that. How long should we wait before trying it? What if it's
 still in progress because the engine processing the resource already died?


Also, how do we implement resource level timeouts in general?



 Michał's Proposal
 =

 https://github.com/inc0/heat-convergence-prototype/tree/iterative

 Note that a version modified by me to use the same test scenario format
 (but not the same scenarios) is here:

 https://github.com/zaneb/heat-convergence-prototype/tree/iterative-adapted

 This is based on my simulation framework after a fashion, but with
 everything implemented synchronously and a lot of handwaving about how the
 actual implementation could be distributed. The central premise is that at
 each step of the algorithm, the entire graph is examined for tasks that can
 be performed next, and those are then started. Once all are complete (it's
 synchronous, remember), the next step is run. Keen observers will be asking
 how we know when it is time to run the next step in a distributed version
 of this algorithm, where it will be run and what to do about resources that
 are in an intermediate state at that time. All of these questions remain
 unanswered.


Yes, I was struggling to figure out how it could manage an IN_PROGRESS
state as it's stateless. So you end up treading on the other action's toes.
Assuming we use the resource's state (IN_PROGRESS) you could get around
that. Then you kick off a converge whenever an action completes (if there is
nothing new to be done then do nothing).
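For readers who haven't looked at the prototypes, here is a toy, purely
schematic sketch of the synchronous "examine the whole graph each step" loop
described above; it is not code from either prototype and the graph shape is
invented for the example:

def converge(graph, start_task):
    # graph maps each node to the set of nodes it depends on.
    done = set()
    while len(done) < len(graph):
        ready = [n for n, deps in graph.items()
                 if n not in done and deps <= done]
        if not ready:
            raise RuntimeError('dependency cycle or stuck graph')
        for node in ready:      # a real implementation would run these in parallel
            start_task(node)
            done.add(node)      # and only mark them done when they complete


converge({'net': set(), 'subnet': {'net'}, 'server': {'subnet'}},
         start_task=lambda n: None)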



 A non-exhaustive list of concerns I have:
 - Replace on update is not implemented yet
 - AFAIK rollback is not implemented yet
 - The simulation doesn't actually implement the proposed architecture
 - This approach is punishingly heavy on the database - O(n^2) or worse


Yes, re-reading the state of all resources whenever we run a new converge is
worrying, but I think Michal had some ideas to minimize this.


 - A lot of phase 2 is mixed in with phase 1 here, making it difficult to
 evaluate which changes need to be made first and whether this approach
 works with existing plugins
 - The code is not really based on how Heat works at the moment, so there
 would be either a major redesign required or lots of radical changes in
 Heat or both

 I think there's a fair chance that given another 3-4 weeks to work on

Re: [openstack-dev] [Telco] [NFV] [Heat] Telco Orchestration

2014-11-25 Thread Angus Salkeld
On Tue, Nov 25, 2014 at 7:27 PM, Marc Koderer m...@koderer.com wrote:

 Hi all,

 as discussed during our summit sessions we would like to expand the scope
 of the Telco WG (aka OpenStack NFV group) and start working
 on the orchestration topic (ETSI MANO).

 Therefore we started with an etherpad [1] to collect ideas, use-cases and
 requirements.


Hi Marc,

You have quite a high acronym-per-sentence ratio going on in that etherpad ;)

From Heat's perspective, we have a lot going on already, but we would love
to support
what you are doing.

You need to start getting specific about what you need and where the gaps
are.
I see you are already looking at higher layers (TOSCA); also check out
Murano.


Regards
-Angus


 Goal is to discuss this document and move it onto the Telco WG wiki [2]
 when
 it becomes stable.

 Feedback welcome ;)

 Regards
 Marc
 Deutsche Telekom

 [1] https://etherpad.openstack.org/p/telco_orchestration
 [2] https://wiki.openstack.org/wiki/TelcoWorkingGroup

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Versioned objects cross project sessions next steps

2014-11-24 Thread Angus Salkeld
On Tue, Nov 25, 2014 at 7:06 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 11/24/2014 03:11 PM, Joshua Harlow wrote:

 Dan Smith wrote:

  3. vish brought up one drawback of versioned objects: the difficulty in
  cherry-picking commits for stable branches - Is this a show stopper?


 After some discussion with some of the interested parties, we're
 planning to add a third .z element to the version numbers and use that
 to handle backports in the same way that we do for RPC:

 https://review.openstack.org/#/c/134623/
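
The gist, as a standalone toy illustration (not the real nova.objects/oslo
API; the Flavor object and its field are made up here), is that a backport
bumps only the .z component and compatibility code keys off it:

    def version_tuple(version):
        return tuple(int(part) for part in version.split('.'))

    class Flavor(object):
        # Hypothetical object: 1.1.1 backports an 'extra_specs' field onto
        # the 1.1 series without claiming the 1.2 interface.
        VERSION = '1.1.1'

        def __init__(self, name, extra_specs=None):
            self.name = name
            self.extra_specs = extra_specs or {}

        def make_compatible(self, primitive, target_version):
            # Strip fields that the target version does not know about.
            if version_tuple(target_version) < (1, 1, 1):
                primitive.pop('extra_specs', None)
            return primitive

    flavor = Flavor('m1.small', {'hw:cpu_policy': 'dedicated'})
    old = flavor.make_compatible(
        {'name': flavor.name, 'extra_specs': flavor.extra_specs}, '1.1')
    # old == {'name': 'm1.small'}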

  Next steps:
 - Jay suggested making a second spec that would lay out what it would
 look like if we used google protocol buffers.
 - Dan: do you need some help in making this happen, do we need some
 volunteers?


 I'm not planning to look into this, especially since we discussed it a
 couple years ago when deciding to do what we're currently doing. If
 someone else does, creates a thing that is demonstrably more useful than
 what we have, and provides a migration plan, then cool. Otherwise, I'm
 not really planning to stop what I'm doing at the moment.

  - Are there any other concrete things we can do to get this usable by
 other projects in a timely manner?


 To be honest, since the summit, I've not done anything with the current
 oslo spec, given the potential for doing something different that was
 raised. I know that cinder folks (at least) are planning to start
 copying code into their tree to get moving.

 I think we need a decision to either (a) dump what we've got into the
 proposed library (or incubator) and plan to move forward incrementally
 or (b) each continue doing our own thing(s) in our own trees while we
 wait for someone to create something based on GPB that does what we want.


 I'd prefer (a); although I hope there is a owner/lead for this library
 (dan?) and it's not just dumped on the oslo folks as that won't work out
 so well I think. It'd be nice if said owner could also look into (b) but
 that's at there own (or other library supporter) time I suppose (I
 personally think (b) would probably allow for a larger community of
 folks to get involved in this library, would potentially reduce the
 amount of custom/overlapping code and other similar benefits...).


 I gave some comments at the very end of the summit session on this, and I
 want to be clear about something. I definitely like GPB, and there's
 definite overlap with some things that GPB does and things that
 nova.objects does.

 That said, I don't think it's wise to make oslo-versionedobjects be a
 totally new thing. I think we should use nova.objects as the base of a new
 oslo-versionedobjects library, and we should evolve oslo-versionedobjects
 slowly over time, eventually allowing for nova, ironic, and whomever else
 is currently using nova/objects, to align with an Oslo library vision for
 this.

 So, in short, I also think a) is the appropriate path to take.


Yeah, my concern with (b) is the time it will take for other projects to
get to use it, esp. since no one is
jumping to take the work on.

-Angus



 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Online midcycle meetup

2014-11-23 Thread Angus Salkeld
On Fri, Nov 21, 2014 at 1:31 AM, Brad Topol bto...@us.ibm.com wrote:

 Angus,

 This may sound crazy but  what if in addition to having the online meetup
 you denoted two different locations as an optional physical meetup?

  That way you would get some of the benefits of having folks meet together
 in person while not forcing everyone to have to travel across the globe. So
 for example, if you had one location in Raleigh and one wherever else folks
 are co-located  you could still get the benefits of having some group of
 folks collaborating face to face.


Hi Brad

Yeah, that might help.
I'll leave it to people in these locations to chip in (I am in Brisbane, AU
and there are not too many close-by Heat hackers).

-Angus



 Just a thought.

 --Brad

 Brad Topol, Ph.D.
 IBM Distinguished Engineer
 OpenStack
 (919) 543-0646
 Internet:  bto...@us.ibm.com
 Assistant: Kendra Witherspoon (919) 254-0680



 From:Angus Salkeld asalk...@mirantis.com
 To:OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date:11/19/2014 06:56 PM
 Subject:[openstack-dev] [Heat] Online midcycle meetup
 --



 Hi all

 As agreed from our weekly meeting we are going to try an online meetup.

 Why?

 We did a poll (https://doodle.com/b9m4bf8hvm3mna97#table) and it is
 split quite evenly by location. The story I am getting from the community
 is:

 We want a midcycle meetup if it is nearby, but are having trouble getting
 finance
 to travel far.

 Given that the Heat community is evenly spread across the globe this
 becomes
 impossible to hold without excluding a significant group.

 So let's try and figure out how to do an online meetup!
 (but let's not spend 99% of the time arguing about the software to use
 please)

 I think more interesting is:

 1) How do we minimize the time zone pain?
 2) Can we make each session really focused so we are productive?
 3) If we do this right it does not have to be midcycle but whenever we
 want.

 I'd be interested in feedback from others that have tried this too.

 -Angus
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] LP/review cleanup day

2014-11-19 Thread Angus Salkeld
Hi all

As an action from our meeting I'd like to announce a cleanup day on the 2nd
of December.

What does this mean?

1) We have been noticing a lot of old and potentially out-of-date bugs that
need some attention (re-test/triage/mark invalid). Also, we have 97 bugs
in-progress; I wonder if that is real? Maybe some have partial fixes and
have been left in-progress.

2) We probably need to do a manual abandon on some really old reviews
so they don't clutter up the review queue.

3) We have a lot of out of date blueprints that basically need deleting.
We need to go through and agree on a list of them to kill.

Regards
Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Online midcycle meetup

2014-11-19 Thread Angus Salkeld
Hi all

As agreed from our weekly meeting we are going to try an online meetup.

Why?

We did a poll (https://doodle.com/b9m4bf8hvm3mna97#table) and it is
split quite evenly by location. The story I am getting from the community
is:

We want a midcycle meetup if it is nearby, but are having trouble getting
finance
to travel far.

Given that the Heat community is evenly spread across the globe this becomes
impossible to hold without excluding a significant group.

So let's try and figure out how to do an online meetup!
(but let's not spend 99% of the time arguing about the software to use
please)

I think more interesting is:

1) How do we minimize the time zone pain?
2) Can we make each session really focused so we are productive?
3) If we do this right it does not have to be midcycle but whenever we
want.

I'd be interested in feedback from others that have tried this too.

-Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Conditionals, was: New function: first_nonnull

2014-11-13 Thread Angus Salkeld
On Thu, Nov 13, 2014 at 4:00 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Zane Bitter's message of 2014-11-12 08:42:44 -0800:
  On 12/11/14 10:10, Clint Byrum wrote:
   Excerpts from Zane Bitter's message of 2014-11-11 13:06:17 -0800:
   On 11/11/14 13:34, Ryan Brown wrote:
   I am strongly against allowing arbitrary Javascript functions for
   complexity reasons. It's already difficult enough to get meaningful
   errors when you  up your YAML syntax.
  
   Agreed, and FWIW literally everyone that Clint has pitched the JS idea
   to thought it was crazy ;)
  
  
   So far nobody has stepped up to defend me,
 
  I'll defend you, but I can't defend the idea :)
 
   so I'll accept that maybe
   people do think it is crazy. What I'm really confused by is why we have
   a new weird ugly language like YAQL (sorry, it, like JQ, is hideous),
 
  Agreed, and appealing to its similarity with Perl or PHP (or BASIC!) is
  probably not the way to win over Python developers :D
 
   and that would somehow be less crazy than a well known mature language
   that has always been meant for embedding such as javascript.
 
  JS is a Turing-complete language, it's an entirely different kettle of
  fish to a domain-specific language that is inherently safe to interpret
  from user input. Sure, we can try to lock it down. It's a very tricky
  job to get right. (Plus it requires a new external dependency of unknown
  quality... honestly if you're going to embed a Turing-complete language,
  Python is a much more obvious choice than JS.)
 

 There's a key difference though. Python was never designed to be run
 from untrusted sources. Javascript was _from the beginning_. There are
 at least two independent javascript implementations which both have been
 designed from the ground up to run code from websites in the local
 interpreter. From the standpoint of Heat, it would be even easier to do
 this.

 Perhaps I can carve out some of that negative-1000-days of free time I
 have and I can make it a resource plugin, with the properties being code
 and references to other resources, and the attributes being the return.

   Anyway, I'd prefer YAQL over trying to get the intrinsic functions in
   HOT just right. Users will want to do things we don't expect. I say,
 let
   them, or large sections of the users will simply move on to something
   else.
 
  The other side of that argument is that users are doing one of two
  things with data they have obtained from resources in the template:
 
  1) Passing data to software deployments
  2) Passing data to other resources
 
  In case (1) they can easily transform the data into whatever format they
  want using their own scripts, running on their own server.
 
  In case (2), if it's not easy for them to just do what they want without
  having to perform this kind of manipulation, we have failed to design
  good resources. And if we give people the tools to just paper over the
  problem, we'll never hear about it so we can correct it at the source,
  just launch a thousand hard-to-maintain hacks into the world.
 


Case (3) is trying to write a half-useful template resource. With what we
have, this is very difficult. I think for non-trivial templates people very
quickly run into the limitations of HOT.



 I for one would rather serve the users than ourselves, and preventing
 them from papering over the problems so they have to whine at us is a
 self-serving agenda.

 As a primary whiner about Heat for a long time, I respect a lot that
 this development team _bends over backwards_ to respond to user
 requests. It's amazing that way.

 However, I think to grow beyond open source savvy, deeply integrated
 users like me, one has to let the users solve their own problems. They'll
 know that their javascript or YAQL is debt sometimes, and they can
 come to Heat's development community with suggestions like If you had
 a coalesce function I wouldn't need to write it in javascript. But if
 you don't give them _something_, they'll just move on.


Agreed, I think we need to get this done. We can't just keep ignoring users
when they are begging for the same feature, because supposedly they are
doing it wrong.

-Angus



 Anyway, probably looking further down the road than I need to, but
 please keep an open mind for this idea, as users tend to use tools that
 solve their problem _and_ get out of their way in all other cases.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Angus Salkeld
On Thu, Nov 13, 2014 at 6:29 PM, Murugan, Visnusaran 
visnusaran.muru...@hp.com wrote:

  Hi all,



 Convergence-POC distributes stack operations by sending resource actions
 over RPC for any heat-engine to execute. Entire stack lifecycle will be
 controlled by worker/observer notifications. This distributed model has its
 own advantages and disadvantages.



 Any stack operation has a timeout and a single engine will be responsible
 for it. If that engine goes down, timeout is lost along with it. So a
 traditional way is for other engines to recreate timeout from scratch. Also
 a missed resource action notification will be detected only when stack
 operation timeout happens.



 To overcome this, we will need the following capability:

 1.   Resource timeout (can be used for retry)

We will shortly have a worker job; can't we have a job that just sleeps,
started in parallel with the job that is doing the work?
It gets to the end of the sleep and then runs a check.
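
i.e. something like this toy sketch (not real Heat code; a local thread and
sleep stand in for whatever worker/dispatch mechanism we end up with):

    import threading
    import time

    # The watchdog is started alongside the real work and checks the
    # outcome once the timeout has elapsed.
    def timeout_watchdog(stack_id, timeout, is_complete, mark_failed):
        time.sleep(timeout)
        if not is_complete(stack_id):   # e.g. query the DB for the final state
            mark_failed(stack_id)       # e.g. mark the stack FAILED or retry

    def start_watchdog(stack_id, timeout, is_complete, mark_failed):
        # In the real thing this would be dispatched as another worker job
        # (e.g. over RPC), not a local thread.
        threading.Thread(target=timeout_watchdog,
                         args=(stack_id, timeout, is_complete, mark_failed),
                         daemon=True).start()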

  2.   Recover from engine failure (loss of stack timeout, resource
 action notification)




My suggestion above could catch failures as long as it was run in a
different process.

-Angus




 Suggestion:

 1.   Use task queue like celery to host timeouts for both stack and
 resource.

 2.   Poll database for engine failures and restart timers/ retrigger
 resource retry (IMHO: This would be a traditional and weighs heavy)

 3.   Migrate heat to use TaskFlow. (Too many code change)



 I am not suggesting we use Task Flow. Using celery will have very minimum
 code change. (decorate appropriate functions)





 Your thoughts.



 -Vishnu

 IRC: ckmvishnu

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] HA cross project session summary and next steps

2014-11-13 Thread Angus Salkeld
On Tue, Nov 11, 2014 at 12:13 PM, Angus Salkeld asalk...@mirantis.com
wrote:

 Hi all

 The HA session was really well attended and I'd like to give some feedback
 from the session.

 Firstly there is some really good content here:
 https://etherpad.openstack.org/p/kilo-crossproject-ha-integration

 1. We SHOULD provide better health checks for OCF resources (
 http://linux-ha.org/wiki/OCF_Resource_Agents).
 These should be fast and reliable. We should probably bike shed on some
 convention like project-manage healthcheck
 and then roll this out for each project.

 2. We should really move
 https://github.com/madkiss/openstack-resource-agents to stackforge or
 openstack if the author is agreeable to it (it's referred to in our
 official docs).


I have chatted to the author of this repo and he is happy for it to live
under stackforge or openstack, or for each OCF resource agent to go into
its respective project.
Does anyone have any particular preference? I suspect stackforge will be
the path of least resistance.

-Angus


 3. All services SHOULD support Active/Active configurations
 (better scaling and it's always tested)

 4. We should be testing HA (there are a number of ideas on the etherpad
 about this)

 5. Many services do not recover in the case of failure mid-task.
 This seems like a big problem to me (some leave the DB in a mess).
 Someone linked to an interesting article on crash-only software
 (http://lwn.net/Articles/191059/) that suggests that if we do this
 correctly we should not need the concept of clean shutdown.
  (
 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/service.py#L459-L471
 )
  I'd be interested in how people think this needs to be approached
 (just raise bugs for each?).

 Regards
 Angus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] midcycle meetup venue selection

2014-11-12 Thread Angus Salkeld
Hi

As promised at the dev sessions I have made a doodle so we can vote for
a venue for the midcycle meetup.

Here is the link:
https://doodle.com/b9m4bf8hvm3mna97

Note: I have added one more option to the doodle: "no meetup". I understand
that this is a big financial burden and I'd like to get an idea of who just
can't make it. Maybe we can give an electronic Skype/Hangout meetup a go if
the costs are too high.

Regards
Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Conditionals, was: New function: first_nonnull

2014-11-12 Thread Angus Salkeld
On Wed, Nov 12, 2014 at 9:33 PM, Alexis Lee alex...@hp.com wrote:

 Zane Bitter said on Tue, Nov 11, 2014 at 04:06:17PM -0500:
  FWIW literally everyone that Clint has pitched the JS
  idea to thought it was crazy ;)

 +1

  YAQL ... looks like line noise to me

 YAML representing function call stacks (an AST) looks pretty bad to me
 :)

 The YAQL doc is not great at the moment but the language is not
 difficult. It's pretty similar to XPath, jq and JSP EL, i.e. dotted
 syntax with square brackets for attribute selection and slicing. That's
 also pretty similar to Python btw.

 Is there another expression language you prefer?

  From the beginning we've tried to have the absolute minimum
  number of intrinsic functions in HOT.

 This is understandable but there are things I want to do (as an operator
 - like first_nonnull) which I cannot. The options as I see them:

   * Add a bunch of operators and a LOT of functions
   * Add a bunch of operators, a few functions and lambda
   * Add a bunch of functions and use an expression language for
 arithmetic, comparison etc
   * Use an expression language for all complex processing



I think it's pretty clear we need something more powerful than adding
heaps of intrinsic functions. IMHO, as long as we can prove it is safe
we should add YAQL (it's nice that there would be a consistent user
experience between these projects - Mistral/Heat/Murano).

-Angus


  I would much prefer to resurrect the previous proposal for adding
  conditionals and then see what is still missing than to just dive
  straight in to embedding a whole other language in HOT.

 Do you mean this? https://blueprints.launchpad.net/heat/+spec/intrinsics
 This only provides string equality and boolean logic. It's insufficient
 to implement first_nonnull for example.

 I'd like to emphasise that we don't have to offer the whole YAQL stdlib
 in HOT initially. We can use it just for its operator suite and a few
 key functions then add others as usecases are discovered.


+1


  On 11/11/14 13:34, Ryan Brown wrote:
  Vendored HOT
  the real issue is that it's under the control of the operator and not
  the template author.

 Yep, the solution has to be through the template as that's 99% of the
 user interface.


 Alexis
 --
 Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] APIImpact flag for specs

2014-11-12 Thread Angus Salkeld
On Sat, Nov 1, 2014 at 6:45 AM, Everett Toews everett.to...@rackspace.com
wrote:

 Hi All,

 Chris Yeoh started the use of an APIImpact flag in commit messages for
 specs in Nova. It adds a requirement for an APIImpact flag in the commit
 message for a proposed spec if it proposes changes to the REST API. This
 will make it much easier for people such as the API Working Group who want
 to review API changes across OpenStack to find and review proposed API
 changes.

 For example, specifications with the APIImpact flag can be found with the
 following query:


 https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:apiimpact,n,z

 Chris also proposed a similar change to many other projects and I did the
 rest. Here’s the complete list if you’d like to review them.

 Barbican: https://review.openstack.org/131617
 Ceilometer: https://review.openstack.org/131618
 Cinder: https://review.openstack.org/131620
 Designate: https://review.openstack.org/131621
 Glance: https://review.openstack.org/131622
 Heat: https://review.openstack.org/132338
 Ironic: https://review.openstack.org/132340
 Keystone: https://review.openstack.org/132303
 Neutron: https://review.openstack.org/131623
 Nova: https://review.openstack.org/#/c/129757
 Sahara: https://review.openstack.org/132341
 Swift: https://review.openstack.org/132342
 Trove: https://review.openstack.org/132346
 Zaqar: https://review.openstack.org/132348

 There are even more projects in stackforge that could use a similar
 change. If you know of a project in stackforge that would benefit from
 using an APIImapct flag in its specs, please propose the change and let us
 know here.


I seem to have missed this, so I'll place my review comment here too.

I like the general idea of getting a more consistent and better API. But is
reviewing every spec across all projects just going to introduce a new
non-scalable bottleneck into our workflow (given the increasing move away
from this approach: moving functional tests to projects, getting projects
to do more of their own docs, etc.)? Wouldn't a better approach be to have
an API liaison in each project who can keep track of new guidelines and
catch potential problems?

I see a new section has been added here:
https://wiki.openstack.org/wiki/CrossProjectLiaisons

Isn't that enough?

Regards
Angus


 Thanks,
 Everett


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Conditionals, was: New function: first_nonnull

2014-11-11 Thread Angus Salkeld
On Wed, Nov 12, 2014 at 1:35 AM, Alexis Lee alex...@hp.com wrote:

 Alexis Lee said on Mon, Nov 10, 2014 at 05:34:13PM +:
  How about we support YAQL expressions? https://github.com/ativelkov/yaql
  Plus some HOFs (higher-order functions) like cond, map, filter, foldleft
  etc?

 We could also use YAQL to provide the HOFs.

  Here's first_nonnull:
 
config:
  Fn::Select
- 0
filter:
  - yaql: $.0 != null
  - item1
  - itemN

   config:
 yaql: $[$ != null][0]
 - item1
 - itemN

 This approach requires less change to Heat, at the price of learning
 more YAQL.


+1 to YAQL


-Angus




 Alexis
 --
 Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Orchestration Template] Hierarchical Heat templates

2014-11-11 Thread Angus Salkeld
On Wed, Nov 12, 2014 at 1:43 PM, Pradip Mukhopadhyay 
pradip.inte...@gmail.com wrote:

 Would like to ask the forum if something has already been considered to
 make the HOT (Heat Orchestration Template) hierarchical.

 Basically the template file might consist of different personas putting
 things together. So for manageability of the templates - it will be a good
 idea to separate it out based on different personas and have a master file
 that will 'instantiate' the respective templates.personas.


 If not considered already - any thought on this?


 If already considered - any cons that was discussed earlier?



Hi

I am not sure what you mean about personas, but you can have hierarchical
templates.
see:
http://docs.openstack.org/developer/heat/template_guide/composition.html

Regards
Angus




 Thanks,
 Pradip



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Versioned objects cross project sessions next steps

2014-11-10 Thread Angus Salkeld
Hi all

I just wanted to make sure we are all under the same understanding of the
outcomes and what the next steps for the versioned objects session are.

1. There is a lot of interest in other projects using oslo versioned
objects and it is worth progressing with this (
https://review.openstack.org/#/c/127532).
2. jpipes and jharlow suggested experimenting/investigating google protocol
buffers (https://developers.google.com/protocol-buffers/) instead of  the
custom serialization and version code. This *could* be an implementation
detail, but also could make the adoption by nova more complicated (as it
has a different mechanism in place).
3. vish brought up one drawback of versioned objects: the difficulty in
cherry-picking commits for stable branches - Is this a show stopper?

Next steps:
- Jay suggested making a second spec that would lay out what it would look
like if we used google protocol buffers.
- Dan: do you need some help in making this happen, do we need some
volunteers?
- Are there any other concrete things we can do to get this usable by other
projects in a timely manner?

Regards
Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] HA cross project session summary and next steps

2014-11-10 Thread Angus Salkeld
Hi all

The HA session was really well attended and I'd like to give some feedback
from the session.

Firstly there is some really good content here:
https://etherpad.openstack.org/p/kilo-crossproject-ha-integration

1. We SHOULD provide better health checks for OCF resources (
http://linux-ha.org/wiki/OCF_Resource_Agents).
These should be fast and reliable. We should probably bike-shed on some
convention like "project-manage healthcheck" and then roll this out for
each project (a rough sketch of what such a command could do follows
after this list).

2. We should really move
https://github.com/madkiss/openstack-resource-agents to stackforge or
openstack if the author is agreeable to it (it's referred to in our
official docs).

3. All services SHOULD support Active/Active configurations
(better scaling and it's always tested)

4. We should be testing HA (there are a number of ideas on the etherpad
about this)

5. Many services do not recover in the case of failure mid-task.
This seems like a big problem to me (some leave the DB in a mess).
Someone linked to an interesting article on crash-only software
(http://lwn.net/Articles/191059/) that suggests that if we do this
correctly we should not need the concept of clean shutdown.
 (
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/service.py#L459-L471
)
 I'd be interested in how people think this needs to be approached
(just raise bugs for each?).
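
Coming back to point 1: a rough, entirely hypothetical sketch of what a
per-project healthcheck command could do (here just a cheap TCP check
against the service's API port; the port number and the actual convention
are up for bike-shedding):

    import socket
    import sys

    def healthcheck(host='127.0.0.1', port=8004, timeout=2.0):
        # Return 0 (healthy) if the API endpoint accepts connections
        # quickly, non-zero otherwise, so an OCF monitor action can call
        # this cheaply and often.
        try:
            socket.create_connection((host, port), timeout=timeout).close()
        except OSError:
            return 1
        return 0

    if __name__ == '__main__':
        sys.exit(healthcheck())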

Regards
Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] New function: first_nonnull

2014-11-09 Thread Angus Salkeld
On 06/11/2014 8:32 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Lee, Alexis's message of 2014-11-05 15:46:43 +0100:
  I'm considering adding a function which takes a list and returns the
first
  non-null, non-empty value in that list.
 
  So you could do EG:
 
  some_thing:
  config:
  ControlVIP:
  first_nonnull:
  - {get_param: ControlVIP}
  - {get_attr: [ControlVirtualIP, fixed_ips, 0,
ip_address]}]}
 
  I'm open to other names, EG some, or, fallback_list etc.
 
  Steve Hardy suggested building this into get_attr or Fn::Select. My
feeling
  is that those each do one job well right now, I'm happy to take a steer
  though.
 
  What do you think please?
 

 Yes this is super useful for writing responsive, reusable templates.

 I'd like to suggest that this be called 'coalesce' as that is what SQL
 calls it.

Although I have no clue why they called it that (coalesce means join/merge,
not "get first non-null"). I'd rather it be called what it does;
first_nonnull() seems more obvious to me. We could also try the
conditional as Zane suggested.
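
For reference, the semantics being debated boil down to something like this
(a plain-Python illustration, not a proposed Heat implementation):

    def first_nonnull(values):
        # Return the first value in the list that is neither null nor empty.
        for value in values:
            if value not in (None, ''):
                return value
        return None

    first_nonnull([None, '', '10.0.0.5'])  # -> '10.0.0.5'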

-Angus


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] no meeting this week

2014-11-04 Thread Angus Salkeld
Hi

As we are all (mostly) at summit, let's cancel this week's meeting.

Regards
Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-30 Thread Angus Salkeld
On Thu, Oct 30, 2014 at 5:48 PM, Eoghan Glynn egl...@redhat.com wrote:


IIRC, there is no method for removing foundation members. So there
are likely a number of people listed who have moved on to other
activities and are no longer involved with OpenStack. I'd actually
be quite interested to see the turnout numbers with voters who
missed the last two elections prior to this one filtered out.
  
   Well, the base electorate for the TC are active contributors with
   patches landed to official projects within the past year, so these
   are devs getting their code merged but not interested in voting.
   This is somewhat different from (though potentially related to) the
   dead weight foundation membership on the rolls for board
   elections.
  
   Also, foundation members who have not voted in two board elections
   are being removed from the membership now, from what I understand
   (we just needed to get to the point where we had two years worth of
   board elections in the first place).
 
  Thanks, I lost my mind here and confused the board with the TC.
 
  So then my next question is, of those who did not vote, how many are
  from under-represented companies? A higher percentage there might point
  to disenfranchisement.

 Well, that we don't know, because the ballots are anonymized.

 So we can only make a stab at detecting partisan voting patterns, in
 the form a strict preference for candidates from one company over all
 others, but we've no way of knowing whether voters from those same
 companies actually cast the ballots in question.


I'd love to see a rule that says you can't vote for people from your own
company.
That would turn things around :-)

-A



 ... i.e. from these data, the conclusion that the preferred pairs of
 candidates were just more popular across-the-board would be equally
 valid.

 Conversely, we've no way of knowing if the voters employed by those
 under-represented companies you mention had a higher or lower turnout
 than the average.

 If there is a concern about balanced representation, then the biggest
 single change we could make to address this, IMO, would be to contest
 all TC seats at all elections.

 Staggered terms optimize for continuity, but by amplifying the majority
 voice (if such a thing exists in our case), they tend to pessimize for
 balanced representation.

 Cheers,
 Eoghan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] database hoarding

2014-10-30 Thread Angus Salkeld
On Fri, Oct 31, 2014 at 8:30 AM, Abel Lopez alopg...@gmail.com wrote:

 It seems that every release, there is more and more emphasis on
 upgradability. This is a good thing; I'd love to see production users
 easily go from old to new.

 As an operator, I've seen first hand the results of neglecting the
 databases that openstack creates. If we intend to have users go
 year-over-year with upgrades, we're going to expect them to carry huge
 databases around.

 Just in my lab, I have over 10 deleted instances in the last two
 months.
 Frankly, I'm not convinced that simply archiving deleted rows is the best
 idea. Sure, gets your production databases and tables to a manageable size,
 but you're simply hoarding old data.

 As an operator, I'd prefer to have time based criteria over number of
 rows, too.
 I envision something like `nova-manage db purge [days]` where we can leave
 it up to the administrator to decide how much of their old data (if any)
 they'd be OK losing.


Heat has something like this:
http://docs.openstack.org/developer/heat/man/heat-manage.html

heat-manage purge_deleted [-g {days,hours,minutes,seconds}] [age]
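
For example, following the synopsis above, heat-manage purge_deleted -g days 30
would purge rows that were soft-deleted more than 30 days ago.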

Although I am not sure it is popular:
https://bugs.launchpad.net/heat/+bug/1239377

-Angus


 Think about data destruction guidelines too, some companies require old
 data be destroyed when not needed, others require maintaining it.
 We can easily provide both here.

 I've drafted a simple blueprint
 https://blueprints.launchpad.net/nova/+spec/database-purge

 I'd love some input from the community.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-30 Thread Angus Salkeld
On Fri, Oct 31, 2014 at 5:39 AM, Eoghan Glynn egl...@redhat.com wrote:



  The low (and dropping) level of turnout is worrying, particularly in
  light of that analysis showing the proportion of drive-by contributors
  is relatively static, but it is always going to be hard to discern the
  motives of people who didn't vote from the single bit of data we have on
  them.
 
  There is, however, another metric that we can pull from the actual
  voting data: the number of candidates actually ranked by each voter:
 
  Candidates
  ranked        Frequency

       0        8    2%
       1       17    3%
       2       24    5%
       3       20    4%
       4       31    6%
       5       36    7%
       6       68   13%
       7       39    8%
       8       17    3%
       9        9    2%
      10       21    4%
      11        -    -
      12      216   43%
 
  (Note that it isn't possible to rank exactly n-1 candidates.)
 
  So even amongst the group of people who were engaged enough to vote,
  fewer than half ranked all of the candidates. A couple of hypotheses
  spring to mind:
 
  1) People don't understand the voting system.
 
  Under Condorcet, there is no such thing as tactical voting by an
  individual. So to the extent that these figures might reflect deliberate
  'tactical' voting, it means people don't understand Condorcet. The size
  of the spike at 6 (the number of positions available - the same spike
  appeared at 7 in the previous election) strongly suggests that lack of
  understanding of the voting system is at least part of the story. The
  good news is that this problem is eminently addressable.

 Addressable by educating the voters on the subtleties of Condorcet, or
 by switching to another model such as the single-transferable vote?

 I can see the attractions of Condorcet, in particular it tends to favor
 consensus over factional candidates. Which could be seen as A Good Thing.

 But in our case, seems to me, we're doubling down on consensus.

 By combining Condorcet with staggered terms and no term limits, seems
 we're favoring both consensus in general and the tendency to return the
 *same* consensus candidates. (No criticism of the individual candidates
 intended, just the sameness)

 STV on the other hand combined with simultaneous terms, is actually
 used in the real world[1] and has the advantage of ensuring factions
 get some proportional representation and hence don't feel excluded
 or disenfranchised.


+1 to this, with a term limit.

-Angus


 Just a thought ... given that we're in the mood, as a community, to
 consider radical structural reforms.

 Cheers,
 Eoghan

 [1] so at least would be familiar to the large block of Irish and
 Australian voters ... though some centenarian citizens of
 Marquette, Michigan, may be a tad more comfortable with Condorcet ;)


  2) People aren't familiar with the candidates
 
  This is the one that worries me - it looks a lot like most voters are
  choosing not to rank many of the candidates because they don't feel they
  know enough about them to have an opinion. It seems to me that the TC
  has failed to engage the community enough on the issues of the day to
  move beyond elections as a simple name-recognition contest. (Kind of
  like how I imagine it is when you have to vote for your local
  dog-catcher here in the US. I have to imagine because they don't let me
  vote.) It gets worse, because the less the TC tries to engage the
  community on the issues and the less it attempts to actually lead (as
  opposed to just considering checklists and voting to ask for more time
  to consider checklists), the more entrenched the current revolving-door
  membership becomes. So every election serves to reinforce the TC
  members' perception that everything is going great, and also to
  reinforce the perception of those whose views are not represented that
  the TC is an echo chamber from which their views are more or less
  structurally excluded. That's a much harder problem to address.
 
  cheers,
  Zane.
 
  
   Cheers,
   Eoghan
  
   So the proportion of single-patch committers is creeping up slowly,
 but
   not at a rate that would account for the decline in voter turnout.
  
   And since we've no way of knowing if voting patterns among the
   single-patch
   committers differs in any way from the norm, these data don't tell us
   much.
  
   If we're serious about improving participation rates, then I think we
   should consider factors what would tend to drive interest levels and
   excitement around election time. My own suspicion is that open races
   where the outcome is in doubt are more likely to garner attention from
   voters, than contests that feel like a foregone conclusion. That would
   suggest un-staggering the terms as a first step.
  
   Cheers,
   Eoghan
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   

Re: [openstack-dev] [all] How can we get more feedback from users?-- Great idea Angus!

2014-10-26 Thread Angus Salkeld
On Sat, Oct 25, 2014 at 1:20 AM, Brad Topol bto...@us.ibm.com wrote:

 +100!   Angus this is awesome!!!   Any way to get one of these for each
 project?


Anyone can make an etherpad, but as Tim suggested I think we need to work
with the Application Ecosystem WG
to do this in a consistent way. I'll look into doing that.
https://wiki.openstack.org/wiki/Application_Ecosystem_Working_Group

-Angus


 Thanks,

 Brad


 Brad Topol, Ph.D.
 IBM Distinguished Engineer
 OpenStack
 (919) 543-0646
 Internet:  bto...@us.ibm.com
 Assistant: Kendra Witherspoon (919) 254-0680



 From:Sandy Walsh sandy.wa...@rackspace.com
 To:OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date:10/24/2014 09:46 AM
 Subject:Re: [openstack-dev] [all] How can we get more feedback
 from users?
 --



 Nice work Angus ... great idea. Would love to see more of this.

 -S

 --

 From: Angus Salkeld [asalk...@mirantis.com]
 Sent: Friday, October 24, 2014 1:32 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [all] How can we get more feedback from users?

 Hi all

 I have felt some grumblings about usability issues with Heat
 templates/client/etc..
 and wanted a way that users could come and give us feedback easily (low
 barrier). I started an etherpad
 (https://etherpad.openstack.org/p/heat-useablity-improvements) - the
 first win is it is spelt wrong :-O

 We now have some great feedback there in a very short time, most of this
 we should be able to solve.

 This lead me to think, should OpenStack have a more general mechanism for
 users to provide feedback. The idea is this is not for bugs or support,
 but for users to express pain points, requests for features and docs/howtos.

 It's not easy to improve your software unless you are listening to your
 users.

 Ideas?

 -Angus
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-24 Thread Angus Salkeld
On Fri, Oct 24, 2014 at 4:00 PM, Stefano Maffulli stef...@openstack.org
wrote:

 Hi Angus,

 quite a noble intent, one that requires lots of attempts like this you
 have started.

 On 10/23/2014 09:32 PM, Angus Salkeld wrote:
  I have felt some grumblings about usability issues with Heat
  templates/client/etc..
  and wanted a way that users could come and give us feedback easily (low
  barrier). I started an etherpad
  (https://etherpad.openstack.org/p/heat-useablity-improvements) - the
  first win is it is spelt wrong :-O

 :)

  We now have some great feedback there in a very short time, most of this
  we should be able to solve.
 
  This lead me to think, should OpenStack have a more general mechanism
  for users to provide feedback. The idea is this is not for bugs or
  support, but for users to express pain points, requests for features and
  docs/howtos.

 One place to start is to pay attention to what happens on the operators
 mailing list. Posting this message there would probably help since lots
 of users hang out there.

 In Paris there will be another operators mini-summit, the fourth IIRC,
 one every 3 months more or less (I can't find the details at the moment,
 I assume they'll be published soon -- Ideas are being collected on
 https://etherpad.openstack.org/p/PAR-ops-meetup).


Thanks for those pointers; we're very interested in feedback from operators,
but in this case I am talking more about end users, not operators (people
who actually use our API).

-Angus


 Another effort to close this 'feedback loop' is the new working group
 temporarily named 'influencers' that will meet in Paris for the first
 time:

 https://openstacksummitnovember2014paris.sched.org/event/268a9853812c22ca8d0636b9d8f0c831

 It's great to see lots of efforts going in the same direction. Keep 'em
 coming.

 /stef

 --
 Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   3   >