Re: [Openstack-operators] [openstack-dev] [Openstack-sigs] Dropping lazy translation support

2018-11-06 Thread Rochelle Grober
I seem to recall list discussion on this quite a ways back.  I think most of it 
happened on the Docs ml, though. Maybe Juno/Kilo timeframe?  If possible, it 
would be good to search over the code bases for places it was called to see its 
current footprint.  I'm pretty sure it was the docs folks working with the oslo 
folks to make it work.  But then the question was put to the ops folks about 
translations of logs (maybe the New York midcycle) and ops don't use 
translation.  The ops input was broadcast to dev and docs and most efforts 
stopped at that point.  But, I believe some projects had already done some work 
on lazy translation.  I suspect the amount done, though, was pretty low.

Maybe the fastest way to get info would be to turn it off and see where the 
code barfs in a long run (to catch as many projects as possible)?

--rocky

> From: Ben Nemec
> Sent: Monday, November 05, 2018 1:40 PM
> 
> On 11/5/18 3:13 PM, Matt Riedemann wrote:
> > On 11/5/2018 1:36 PM, Doug Hellmann wrote:
> >> I think the lazy stuff was all about the API responses. The log
> >> translations worked a completely different way.
> >
> > Yeah maybe. And if so, I came across this in one of the blueprints:
> >
> > https://etherpad.openstack.org/p/disable-lazy-translation
> >
> > Which says that because of a critical bug, the lazy translation was
> > disabled in Havana to be fixed in Icehouse but I don't think that ever
> > happened before IBM developers dropped it upstream, which is further
> > justification for nuking this code from the various projects.
> >
> 
> It was disabled last-minute, but I'm pretty sure it was turned back on (hence
> why we're hitting issues today). I still see coercion code in oslo.log that was
> added to fix the problem[1] (I think). I could be wrong about that since this
> code has undergone significant changes over the years, but it looks to me like
> we're still forcing things to be unicode.[2]
> 
> 1: https://review.openstack.org/#/c/49230/3/openstack/common/log.py
> 2: https://github.com/openstack/oslo.log/blob/a9ba6c544cbbd4bd804dcd5e38d72106ea0b8b8f/oslo_log/formatters.py#L414
> 
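For context, a rough sketch of what "forcing things to be unicode" amounts to
in a log formatter -- illustrative only, not the actual oslo.log code, which is
at link [2] above:

    def _ensure_unicode(msg):
        """Coerce a log message to text before formatting."""
        if isinstance(msg, bytes):
            # Assumed decode policy; oslo.log's real rules differ in detail.
            return msg.decode('utf-8', errors='replace')
        # str() also forces lazily-translated message objects to render.
        return str(msg)

The net effect is that by the time a record is formatted, everything has been
coerced to text, which is consistent with Ben's reading of [2].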

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Rochelle Grober
Oh, very definitely +1000



--
Rochelle Grober
M: +1-6508889722 (preferred)
E: rochelle.gro...@huawei.com
2012 Laboratories - Silicon Valley Technology Planning & Cooperation, Silicon Valley Research Center

From: Mathieu Gagné
To: openstack-s...@lists.openstack.org
Cc: OpenStack Development Mailing List (not for usage questions), OpenStack Operators
Date: 2018-09-26 12:41:24
Subject: Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

+1 Yes please!

--
Mathieu

On Wed, Sep 26, 2018 at 2:56 PM Tim Bell  wrote:
>
>
> Doug,
>
> Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
> python-*client CLIs to python-openstackclient" from the etherpad and propose 
> this for a T/U series goal.
>
> To give it some context and the motivation:
>
> At CERN, we have more than 3000 users of the OpenStack cloud. We write
> extensive end user facing documentation which explains how to use
> OpenStack along with CERN specific features (such as workflows for requesting
> projects/quotas/etc.).
>
> One regular problem we come across is that the end user experience is 
> inconsistent. In some cases, we find projects which are not covered by the 
> unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
> the function which require the native project client.
>
> I would strongly support a goal which targets
>
> - All new projects should have the end user facing functionality fully 
> exposed via the unified client
> - Existing projects should aim to close the gap within 'N' cycles (N to be 
> defined)
> - Many administrator actions would also benefit from integration (reader 
> roles are end users too so list and show need to be covered too)
> - Users should be able to use a single openrc for all interactions with the 
> cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)
>
> The end user perception of a solution will be greatly enhanced by a single 
> command line tool with consistent syntax and authentication framework.
>
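A concrete illustration of that last point -- one credential setup serving
every service -- using openstacksdk (a sketch; it assumes a 'mycloud' entry in
clouds.yaml, the same consistency a single openrc is meant to give the CLI):

    import openstack

    # One authentication configuration...
    conn = openstack.connect(cloud='mycloud')

    # ...reused for any service the unified tooling covers.
    for server in conn.compute.servers():
        print(server.name)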
> It may be a multi-release goal but it would really benefit the cloud 
> consumers and I feel that goals should include this audience also.
>
> Tim
>
> -Original Message-
> From: Doug Hellmann 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Wednesday, 26 September 2018 at 18:00
> To: openstack-dev , openstack-operators 
> , openstack-sigs 
> 
> Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series
>
> It's time to start thinking about community-wide goals for the T series.
>
> We use community-wide goals to achieve visible common changes, push for
> basic levels of consistency and user experience, and efficiently improve
> certain areas where technical debt payments have become too high -
> across all OpenStack projects. Community input is important to ensure
> that the TC makes good decisions about the goals. We need to consider
> the timing, cycle length, priority, and feasibility of the suggested
> goals.
>
> If you are interested in proposing a goal, please make sure that before
> the summit it is described in the tracking etherpad [1] and that you
> have started a mailing list thread on the openstack-dev list about the
> proposal so that everyone in the forum session [2] has an opportunity to
> consider the details.  The forum session is only one step in the
> selection process. See [3] for more details.
>
> Doug
>
> [1] https://etherpad.openstack.org/p/community-goals
> [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
> [3] https://governance.openstack.org/tc/goals/index.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

___
openstack-sigs mailing list
openstack-s...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction

2018-09-06 Thread Rochelle Grober
Sounds like an important discussion to have with the operators in Denver. 
Should put this on the schedule for the Ops meetup.

--Rocky

> -Original Message-
> From: Matt Riedemann [mailto:mriede...@gmail.com]
> Sent: Thursday, September 06, 2018 1:59 PM
> To: OpenStack Development Mailing List (not for usage questions);
> openstack-operat...@lists.openstack.org
> Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific
> news on extraction
> 
> I wanted to recap some upgrade-specific stuff from today outside of the
> other [1] technical extraction thread.
> 
> Chris has a change up for review [2] which prompted the discussion.
> 
> That change makes placement only work with placement.conf, not
> nova.conf, but does get a passing tempest run in the devstack patch [3].
> 
> The main issue here is upgrades. If you think of this like deprecating config
> options, the old config options continue to work for a release and then are
> dropped after a full release (or 3 months across boundaries for CDers) [4].
> Given that, Chris's patch would break the standard deprecation policy. Clearly
> one simple way outside of code to make that work is just copy and rename
> nova.conf to placement.conf and voila. But that depends on *all*
> deployment/config tooling to get that right out of the gate.
> 
> The other obvious thing is the database. The placement repo code as-is
> today still has the check for whether or not it should use the placement
> database but falls back to using the nova_api database [5]. So technically you
> could point the extracted placement at the same nova_api database and it
> should work. However, at some point deployers will clearly need to copy the
> placement-related tables out of the nova_api DB to a new placement DB and
> make sure the 'migrate_version' table is dropped so that placement DB
> schema versions can reset to 1.
> 
> With respect to grenade and making this work in our own upgrade CI testing,
> we have I think two options (which might not be mutually
> exclusive):
> 
> 1. Make placement support using nova.conf if placement.conf isn't found for
> Stein with lots of big warnings that it's going away in T. Then Rocky nova.conf
> with the nova_api database configuration just continues to work for
> placement in Stein. I don't think we then have any grenade changes to make,
> at least in Stein for upgrading *from* Rocky. Assuming fresh devstack installs
> in Stein use placement.conf and a placement-specific database, then
> upgrades from Stein to T should also be OK with respect to grenade, but
> likely punts the cut-over issue for all other deployment projects (because we
> don't CI with grenade doing
> Rocky->Stein->T, or FFU in other words).
> 
> 2. If placement doesn't support nova.conf in Stein, then grenade will require
> an (exceptional) [6] from-rocky upgrade script which will (a) write out
> placement.conf fresh and (b) run a DB migration script, likely housed in the
> placement repo, to create the placement database and copy the placement-
> specific tables out of the nova_api database. Any script like this is likely
> needed regardless of what we do in grenade because deployers will need to
> eventually do this once placement would drop support for using nova.conf (if
> we went with option 1).
> 
> That's my attempt at a summary. It's going to be very important that
> operators and deployment project contributors weigh in here if they have
> strong preferences either way, and note that we can likely do both options
> above - grenade could do the fresh cutover from rocky to stein but we allow
> running with nova.conf and nova_api DB in placement in stein with plans to
> drop that support in T.
> 
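To illustrate option 1, a minimal sketch of the compatibility shim (paths and
logic are illustrative, not the actual placement code):

    import os
    import warnings

    def find_placement_config():
        # Prefer the new placement.conf; fall back to nova.conf for one
        # cycle, with a loud deprecation warning (support dropped in T).
        for path in ('/etc/placement/placement.conf', '/etc/nova/nova.conf'):
            if os.path.exists(path):
                if path.endswith('nova.conf'):
                    warnings.warn('placement is running from nova.conf; '
                                  'this fallback will be removed in T')
                return path
        raise RuntimeError('no placement configuration found')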
> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/subject.html#134184
> [2] https://review.openstack.org/#/c/600157/
> [3] https://review.openstack.org/#/c/600162/
> [4] https://governance.openstack.org/tc/reference/tags/assert_follows-standard-deprecation.html#requirements
> [5] https://github.com/openstack/placement/blob/fb7c1909/placement/db_api.py#L27
> [6] https://docs.openstack.org/grenade/latest/readme.html#theory-of-upgrade
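And a rough sketch of option 2's cut-over step, assuming MySQL (the table list
is a guess at the placement-related tables; the real script would live in the
placement repo):

    import subprocess

    # Placement-related tables to copy out of nova_api (illustrative list).
    TABLES = ['resource_providers', 'inventories', 'allocations',
              'consumers', 'resource_classes', 'traits',
              'resource_provider_traits', 'resource_provider_aggregates',
              'placement_aggregates', 'projects', 'users']

    # Dump from nova_api and load into the new placement database.
    dump = subprocess.run(['mysqldump', 'nova_api'] + TABLES,
                          check=True, capture_output=True).stdout
    subprocess.run(['mysql', 'placement'], input=dump, check=True)

    # Drop 'migrate_version' so placement schema versions can reset to 1.
    subprocess.run(['mysql', 'placement', '-e',
                    'DROP TABLE IF EXISTS migrate_version;'], check=True)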
> 
> --
> 
> Thanks,
> 
> Matt
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-04 Thread Rochelle Grober

Zane Bitter wrote:
> On 31/05/18 14:35, Julia Kreger wrote:
> > Back to the topic of nitpicking!
> >
> > I virtually sat down with Doug today and we hammered out the positive
> > aspects that we feel like are the things that we as a community want
> > to see as part of reviews coming out of this effort. The principles
> > change[1] in governance has been updated as a result.
> >
> > I think we are at a point where we have to state high level
> > principles, and then also update guidelines or other context providing
> > documentation to reinforce some of the items covered in this
> > discussion... not just to educate new contributors, but to serve as a
> > checkpoint for existing reviewers when making the decision as to how
> > to vote on a change set. The question then becomes where would such
> > guidelines or documentation best fit?
> 
> I think the contributor guide is the logical place for it. Kendall pointed out
> this existing section:
> 
> https://docs.openstack.org/contributors/code-and-documentation/using-gerrit.html#reviewing-changes
> 
> It could go in there, or perhaps we separate out the parts about when to use
> which review scores into a separate page from the mechanics of how to use
> Gerrit.
> 
> > Should we explicitly detail the
> > cause/effect that occurs? Should we convey contributor perceptions, or
> > maybe even just link to this thread as there has been a massive amount
> > of feedback raising valid cases, points, and frustrations.
> >
> > Personally, I'd lean towards a blended approach, but the question of
> > where is one I'm unsure of. Thoughts?
> 
> Let's crowdsource a set of heuristics that reviewers and contributors should
> keep in mind when they're reviewing or having their changes reviewed. I
> made a start on collecting ideas from this and past threads, as well as my own
> reviewing experience, into a document that I've presumptuously titled "How
> to Review Changes the OpenStack Way" (but might be more accurately called
> "The Frank Sinatra Guide to Code Review"
> at the moment):
> 
> https://etherpad.openstack.org/p/review-the-openstack-way
> 
> It's in an etherpad to make it easier for everyone to add their suggestions
> and comments (folks in #openstack-tc have made some tweaks already).
> After a suitable interval has passed to collect feedback, I'll turn this into a
> contributor guide change.

I offer the suggestion that there be some real examples of Good/Not Good in 
the document, or maybe an addendum.  Since we have many non-native speakers in 
our community, examples are like pictures -- worth a thousand foreign words ;-)

Maybe Zhipeng has a few favorites to supply.  I would suggest showing both the 
score and the comment that goes with it.  In some cases, the example would show 
how to score and avoid nitpicking; in others, valid scores, but comments that 
are or are not reasonable for the score.
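For instance, something along these lines (made up for illustration, not taken
from a real review):

    Not so good:  -1  "nit: s/recieve/receive/ in a code comment"
                  (a lone typo nit blocking the whole patch)

    Better:       +1  "Works as described and the tests cover it.  One
                  spelling nit inline -- fine to fix here or in a
                  follow-up."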

--Rocky
 
> Have at it!
> 
> cheers,
> Zane.
> 
> > -Julia
> >
> > [1]: https://review.openstack.org/#/c/570940/
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [User-committee] [Forum] [all] [Stable] OpenStack is "mature" -- time to get serious on Maintainers -- Session etherpad and food for thought for discussion

2018-05-18 Thread Rochelle Grober
Thanks, Lance!

Also, the more I think about it, the more I think Maintainer has too much 
baggage to use that term for this role.  It really is “continuity” that we are 
looking for.  Continuous important fixes, continuous updates of tools used to 
produce the SW.

Keep this in the back of your minds for the discussion.  And yes, this is a 
discussion to see if we are interested, and only if there is interest, how to 
move forward.

--Rocky

From: Lance Bragstad [mailto:lbrags...@gmail.com]
Sent: Friday, May 18, 2018 2:03 PM
To: Rochelle Grober <rochelle.gro...@huawei.com>; openstack-dev 
<openstack-...@lists.openstack.org>; openstack-operators 
<openstack-operators@lists.openstack.org>; user-committee 
<user-commit...@lists.openstack.org>
Subject: Re: [User-committee] [Forum] [all] [Stable] OpenStack is "mature" -- 
time to get serious on Maintainers -- Session etherpad and food for thought for 
discussion

Here is the link to the session in case you'd like to add it to your schedule 
[0].

[0] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21759/openstack-is-mature-time-to-get-serious-on-maintainers
On 05/17/2018 07:55 PM, Rochelle Grober wrote:
Folks,

TL;DR
The last session related to extended releases is: OpenStack is "mature" -- time 
to get serious on Maintainers
It will be in room 220 at 11:00-11:40
The etherpad for the last session in the series on Extended releases is here:
https://etherpad.openstack.org/p/YVR-openstack-maintainers-maint-pt3

There are links to info on other communities’ maintainer 
process/role/responsibilities also, as reference material on how others have 
made it work (or not).

The nitty gritty details:

The upcoming Forum is filled with sessions that are focused on issues needed to 
improve and maintain the sustainability of OpenStack projects for the long 
term.  We have discussions on reducing technical debt, extended releases, fast 
forward installs, bringing Ops and User communities closer together, etc.  The 
community is showing it is now invested in activities that are often part of 
“Sustaining Engineering” teams (corporate speak) or “Maintainers” (OSS speak).  
We are doing this; we are thinking about the moving parts to do this; let’s 
think about the contributors who want to do these and bring some clarity to 
their roles and the processes they need to be successful.  I am hoping you read 
this and keep these ideas in mind as you participate in the various Forum 
sessions.  Then you can bring the ideas generated during all these discussions 
to the Maintainers session near the end of the Summit to brainstorm how to 
visualize and define this new(ish) component of our technical community.

So, who has been doing the maintenance work so far?  Mostly, (mostly) unsung 
heroes like the Stable Release team, Release team, Oslo team, project liaisons 
and the community goals champions (yes, moving to py3 is a 
sustaining/maintenance type of activity).  And some operators (Hi, mnaser!).  
We need to lean on their experience and what we think the community will need 
to reduce that technical debt to outline what the common tasks of maintainers 
should be, what else might fall in their purview, and how to partner with them 
to better serve them.

With API lower limits, new tool versions, placement, py3, and even projects 
reaching “code complete” or “maintenance mode,” there is a lot of work for 
maintainers to do (I really don’t like that term, but is there one that fits 
OpenStack’s community?).  It would be great if we could find a way to share the 
load such that we can have part time contributors here.  We know that operators 
know how to cherrypick, test in there clouds, do bug fixes.  How do we pair 
with them to get fixes upstreamed without requiring them to be full on 
developers?  We have a bunch of alumni who have stopped being “cores” and 
sometimes even developers, but who love our community and might be willing and 
able to put in a few hours a week, maybe reviewing small patches, providing 
help with user/ops submitted patch requests, or whatever.  They were trusted 
with +2 and +W in the past, so we should at least be able to trust they know 
what they know.  We  would need some way to identify them to Cores, since they 
would be sort of 1.5 on the voting scale, but……

So, burn out is high in other communities for maintainers.  We need to find a 
way to make sustaining the stable parts of OpenStack sustainable.

Hope you can make the talk, or add to the etherpad, or both.  The etherpad is 
very much still a work in progress (trying to organize it to make sense).  If 
you want to jump in now, go for it; otherwise it should be in reasonable shape 
for use at the session.  I hope we get a good mix of community and a good 
collection of those who are already doing the job without title.

Thanks and see you next week.
--rocky




Huawei Technologies Co., Ltd.

Re: [openstack-dev] [tc] [all] TC Report 18-20

2018-05-17 Thread Rochelle Grober

Thierry Carrez [mailto:thie...@openstack.org]
> 
> Graham Hayes wrote:
> > Any additional background on why we allowed LCOO to operate like this
> > would help a lot.
> 
The group was started back when OPNFV was first getting involved with 
OpenStack.  Many of the members came from that community.  They had a "vision" 
that the members would have to commit to provide developers to address the 
feature gaps the group was concerned with.  There was some interaction between 
them and the Product WG, and I at least attempted to get them to meet and talk 
with the Large Deployment Team(?) (an ops group that met at the Ops midcycles 
and discussed their issues, workarounds, gaps, etc.)

Are they still active?  Is anyone aware of any docs/code/bugfixes/features that 
came out of the group?

--Rocky

> We can't prevent any group of organizations from working in any way they
> prefer -- we can, however, deny them the right to be called an OpenStack
> workgroup if they fail at openly collaborating. We can raise the topic, but in
> the end it is a User Committee decision though, since the LCOO is a User
> Committee-blessed working group.
> 
> Source: https://governance.openstack.org/uc/
> 
> --
> Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Forum] [all] [Stable] OpenStack is "mature" -- time to get serious on Maintainers -- Session etherpad and food for thought for discussion

2018-05-17 Thread Rochelle Grober
Folks,

TL;DR
The last session related to extended releases is: OpenStack is "mature" -- time 
to get serious on Maintainers
It will be in room 220 at 11:00-11:40
The etherpad for the last session in the series on Extended releases is here:
https://etherpad.openstack.org/p/YVR-openstack-maintainers-maint-pt3

There are links to info on other communities’ maintainer 
process/role/responsibilities also, as reference material on how others have 
made it work (or not).

The nitty gritty details:

The upcoming Forum is filled with sessions that are focused on issues needed to 
improve and maintain the sustainability of OpenStack projects for the long 
term.  We have discussions on reducing technical debt, extended releases, fast 
forward installs, bringing Ops and User communities closer together, etc.  The 
community is showing it is now invested in activities that are often part of 
“Sustaining Engineering” teams (corporate speak) or “Maintainers” (OSS speak).  
We are doing this; we are thinking about the moving parts to do this; let’s 
think about the contributors who want to do these and bring some clarity to 
their roles and the processes they need to be successful.  I am hoping you read 
this and keep these ideas in mind as you participate in the various Forum 
sessions.  Then you can bring the ideas generated during all these discussions 
to the Maintainers session near the end of the Summit to brainstorm how to 
visualize and define this new(ish) component of our technical community.

So, who has been doing the maintenance work so far?  Mostly, (mostly) unsung 
heroes like the Stable Release team, Release team, Oslo team, project liaisons 
and the community goals champions (yes, moving to py3 is a 
sustaining/maintenance type of activity).  And some operators (Hi, mnaser!).  
We need to lean on their experience and what we think the community will need 
to reduce that technical debt to outline what the common tasks of maintainers 
should be, what else might fall in their purview, and how to partner with them 
to better serve them.

With API lower limits, new tool versions, placement, py3, and even projects 
reaching “code complete” or “maintenance mode,” there is a lot of work for 
maintainers to do (I really don’t like that term, but is there one that fits 
OpenStack’s community?).  It would be great if we could find a way to share the 
load such that we can have part time contributors here.  We know that operators 
know how to cherrypick, test in their clouds, and do bug fixes.  How do we pair 
with them to get fixes upstreamed without requiring them to be full on 
developers?  We have a bunch of alumni who have stopped being “cores” and 
sometimes even developers, but who love our community and might be willing and 
able to put in a few hours a week, maybe reviewing small patches, providing 
help with user/ops submitted patch requests, or whatever.  They were trusted 
with +2 and +W in the past, so we should at least be able to trust they know 
what they know.  We would need some way to identify them to Cores, since they 
would be sort of 1.5 on the voting scale, but……

So, burn out is high in other communities for maintainers.  We need to find a 
way to make sustaining the stable parts of OpenStack sustainable.

Hope you can make the talk, or add to the etherpad, or both.  The etherpad is 
very much still a work in progress (trying to organize it to make sense).  If 
you want to jump in now, go for it; otherwise it should be in reasonable shape 
for use at the session.  I hope we get a good mix of community and a good 
collection of those who are already doing the job without title.

Thanks and see you next week.
--rocky




Huawei Technologies Co., Ltd.
Rochelle Grober
Sr. Staff Architect, Open Source
Office Phone: 408-330-5472
Email: rochelle.gro...@huawei.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-23 Thread Rochelle Grober
Doug Hellmann wrote:
> I would like for us to collect some more data about what efforts teams are
> making with encouraging new contributors, and what seems to be working or
> not. In the past we've done pretty well at finding new techniques by
> experimenting within one team and then adapting the results to scale them
> out to other teams.
> 
> Does anyone have any examples of things that we ought to be trying more
> of?
> 

Okay, here I am sticking my foot in it after reading all the other excellent 
replies.  Lots of good suggestions from Matt, Zane, Chris, Rico, etc.  Here is 
another one:

I've noticed that as the projects mature, they have developed some new 
processes that are regular, but not daily.  Some are baked into the schedule, 
others are scheduled on a semi-recurring basis but not "official".  One that 
I've seen a few times is the "bug swat day".  Some projects are scheduling 
triage and fix days throughout the cycle.  One project just decided to make it 
monthly.  This is great.  Invite Ops and users to participate.  Invite the 
folks who filed the bugs you might fix to participate.  Use IRC, paste and 
etherpad to develop the fixes and show the symptoms.  Maybe to develop the test 
to demonstrate the fix works, too.  If an operator really wants to see a bug 
fixed, they let the project know and let them know when she will turn up in IRC 
to help.  If they help enough, add them as co-owner of the patch.  Don't make 
them get all the accounts (if that's possible with Gerrit), just put their name 
on it.  They'll be overjoyed to both have the bug fixed *and* get some credit 
for stepping up.  This gets devs, users, and ops all on the same IRC page, 
focusing on enduser problems and collaborating on solutions in a regularly 
scheduled day(time) slot.  And the "needs more info" problem for bugs gets 
solved.

You can also invite everyone to Spec review days, or test writing days, or 
documentation days.  And you can invite students, academicians, etc.  If 
people know that *if* they show up *and* they are willing to discuss symptoms, 
ask questions, provide logs, whatever, then some pain in their butt will be 
closer to getting fixed, some will show up.  You give them 
credit and they'll feel even better showing up.

Not quite drive-by contributors, but it means pain points get addressed based 
on participation and existing contributors partner with the people who know the 
pain points to solve them.  On a regularly scheduled basis.  Oh, and you can 
put these days on the OpenStack event schedule, too.

--Rocky
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding "not docs" banner to specs website?

2018-03-21 Thread Rochelle Grober
It could be *really* useful if you could include the date (month/year would be 
good enough) of the last significant patch (not including the reformat to 
Openstackdocstheme).  That could give folks a great stick in the mud for what 
"past" is for the spec.  It might even incent some to see if there are newer, 
conflicting or enhancing specs or docs to reference.

--Rocky

Doug Hellmann wrote:
> 
> Excerpts from Jim Rollenhagen's message of 2018-03-19 19:06:38 +:
> > On Mon, Mar 19, 2018 at 3:46 PM, Jeremy Stanley wrote:
> >
> > > On 2018-03-19 14:57:58 + (+), Jim Rollenhagen wrote:
> > > [...]
> > > > What do folks think about a banner at the top of the specs website
> > > > (or each individual spec) that points this out? I'm happy to do
> > > > the work if we agree it's a good thing to do.
> > > [...]
> > >
> > > Sounds good in principle, but the execution may take a bit of work.
> > > Specs sites are independently generated Sphinx documents stored in
> > > different repositories managed by different teams, and don't
> > > necessarily share a common theme or configuration.
> >
> >
> > Huh, I had totally thought there was a theme for the specs site that
> > most/all projects use. I may try to accomplish this anyway, but will
> > likely be more work that I thought. I'll poke around at options (small
> > sphinx plugin, etc).
> 
> We want them all to use the openstackdocstheme so you could look into
> creating a "subclass" of that one with the extra content in the header, then
> ensure all of the specs repos use it.  We would have to land a small patch to
> trigger a rebuild, but the patch switching them from oslosphinx to
> openstackdocstheme would serve for that and a small change to the readme
> or another file would do it for any that are already using the theme.
> 
> Doug
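To make Doug's suggestion concrete, one low-tech sketch -- using Sphinx's
rst_prolog in each specs repo's conf.py instead of a theme subclass
(illustrative, not a finished patch):

    # doc/source/conf.py (sketch)
    rst_prolog = """
    .. warning::
       Specs are point-in-time design documents, not maintained
       documentation.  See the project docs for current behavior.
    """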
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests

2018-03-16 Thread Rochelle Grober
Submission is no longer anonymous, but the results are still not public.  The 
submitter decides whether the guideline results are public, and if they do, 
only the guideline tests are made public.  If the submitter does not actively 
select public availability for the test results, all results default to 
private.

--Rocky

> -Original Message-
> From: Jeremy Stanley [mailto:fu...@yuggoth.org]
> Sent: Thursday, March 15, 2018 7:41 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [refstack] Full list of API Tests versus
> 'OpenStack Powered' Tests
> 
> On 2018-03-15 14:16:30 + (+), arkady.kanev...@dell.com wrote:
> [...]
> > This can be submitted anonymously if you like.
> 
> Anonymous submissions got disabled (and the existing set of data from them
> deleted). See the announcement from a month ago for
> details:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2018-February/127103.html
> 
> --
> Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DriverLog] DriverLog future

2018-03-01 Thread Rochelle Grober


> -Original Message-
> From: Matt Riedemann [mailto:mriede...@gmail.com]
> Sent: Thursday, March 01, 2018 4:19 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [DriverLog] DriverLog future
> 
> On 3/1/2018 10:44 AM, Ilya Shakhat wrote:
> >
> > For those who do not know, DriverLog is a community registry of
> > 3rd-party drivers for OpenStack hosted together with Stackalytics [1].
> > The project started 4 years ago and by now contains information about
> > 220 drivers. The data from DriverLog is also consumed by official
> > Marketplace [2].
> >
> > Here I would like to discuss directions for DriverLog and 3rd-party
> > driver registry as general.
> >
> > 1) Being a single community-wide registry was good initially; it
> > allowed us to quickly collect descriptions for most drivers in a single
> > place.
> > But in the long term this approach stopped working - not many projects
> > remember to update the information stored in some random place, right?
> >
> > Mike already pointed to this problem a year ago [3] and the idea was
> > to move driver list to projects (and thus move responsibility to them
> > too) and have an aggregated list of drivers produced by infra. Do we
> > have any progress in this direction? Is it a time to start deprecation
> > of DriverLog and consider transition during Rocky release?
> >
> > 2) As a project with a 4-year history, DriverLog's list has only grown
> > over time, with quite few removals. Now it still has drivers with
> > the latest version Liberty or drivers for non-maintained projects (e.g.
> > Fuel). While it maybe makes sense to keep all of them for operators
> > who run older versions, it may produce a feeling that the majority of
> > drivers are old. One of solutions for this is to show by default
> > drivers for active releases only (Pike and ahead). If done this will
> > apply to both DriverLog and Marketplace.

If you want to default to showing only drivers for active releases, you have to 
provide a method for users to find which drivers are available for *any* 
specific release no matter how old (although Juno is likely the furthest back 
we would want to go).  There are lots of people who haven't upgraded to 
"living" releases, but still need to maintain their clouds, which might mean 
getting an as yet not acquired driver for their cloud release.  Remember, even 
Interop certification goes back three releases.

You can unclutter the pages a bit by defaulting to displaying current drivers, 
but you must still provide the historical lists.

--Rocky

> >
> > Any other ideas or suggestions?
> 
> Having recently gone through that repo to update some of the nova driver
> maintainers, I noted the very old status of several of them.
> 
> I agree this information should live in the per-project repo documentation,
> not in a centralized location. Nova does a decent job about keeping the virt
> driver feature support matrix up to date, but definitely not when it's a
> separate repo. This is a similar problem to the centralized docs issue
> addressed as a community in Pike.
> 
> The OSIC team tried working on a feature classification effort [1] for a few
> releases which was similar to the driver log, specifically for showing which
> drivers and features had CI coverage. That work is *very* incomplete and no
> longer maintained, and I've actually been suggesting lately that we drop it
> since misinformation is almost worse than no information.
> 
> I suggested to Mike the other day that at the very least, the driver log docs
> should put a big red warning, like in [1], that the information may be old.
> 
> [1] https://docs.openstack.org/nova/latest/user/feature-classification.html
> 
> --
> 
> Thanks,
> 
> Matt
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stepping down from core

2017-12-15 Thread Rochelle Grober
Armando,

You’ve been great for Neutron.  It’s sad to see you have to cut back, but it’s 
great to hear you aren’t totally leaving.

Thank you for all of your hard work.  You’ve brought Neutron along quite 
nicely.  I’d also like to thank you for all of your help with the stadium 
projects.  Your mentorship has been invaluable.

And thanks for all the fish,
--Rocky

From: Armando M. [mailto:arma...@gmail.com]
Sent: Friday, December 15, 2017 11:01 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [neutron] Stepping down from core

Hi neutrinos,

To some of you this email may not come as a surprise.

During the past few months my upstream community engagements have been more and 
more sporadic. While I tried hard to stay committed and fulfill my core 
responsibilities I feel like I failed to retain the level of quality and 
consistency that I would have liked ever since I stepped down from being the 
Neutron PTL back at the end of Ocata.

I stated many times when talking to other core developers that being core is a 
duty rather than a privilege, and I personally feel like it's way overdue for 
me to recognize on the mailing list that it's time for me to state officially 
my intention to step down due to other commitments.

This does not mean that I will disappear tomorrow. I'll continue to be on 
neutron IRC channels, support the neutron team, being the release liaison for 
Queens, participate at meetings, and be open to providing feedback to anyone 
who thinks my opinion is still valuable, especially when dealing with the 
neutron quirks for which I might be (git) blamed :)

Cheers,
Armando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help needed with Fusion SPhere

2017-11-20 Thread Rochelle Grober
MISTAKE!!!

Ooops.  I’m sorry.  Please ignore.

--Rocky

From: Rochelle Grober
Sent: Monday, November 20, 2017 5:23 PM
To: Farhad Sunavala <farhad.sunav...@huawei.com>; Zhiqiang Yang 
<zhiqiang.y...@huawei.com>
Cc: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: RE: Help needed with Fusion SPhere

Hey, Farhad and Henry.

I am shipping your request to the OpenStack email list.  Though many of these 
people are in China, there are both upstream (OpenStack) and downstream 
(FusionSphere) people on this list.

I hope someone will speak up and work with Henry to accomplish the installation.

Thanks,

--Rocky

From: Farhad Sunavala
Sent: Monday, November 20, 2017 4:54 PM
To: Zhiqiang Yang <zhiqiang.y...@huawei.com<mailto:zhiqiang.y...@huawei.com>>
Cc: Rochelle Grober 
<rochelle.gro...@huawei.com<mailto:rochelle.gro...@huawei.com>>
Subject: Help needed with Fusion SPhere

Hi Rocky,

Henry from SW lab wanted to know who has worked with Fusion Sphere and can help 
him with the installation.
Would you know who can help him out?

Thanks,
Farhad.


Farhad Sunavala
Principal Engineer I Cloud Computing
Tel: 408-330-4499 I Mobile: 408-330-4499 I E-mail: 
farhad.sunav...@huawei.com<mailto:farhad.sunav...@huawei.com>
Company: Huawei I Address: Santa Clara, CA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help needed with Fusion SPhere

2017-11-20 Thread Rochelle Grober
Hey, Farhad and Henry.

I am shipping your request to the OpenStack email list.  Though many of these 
people are in China, there are both upstream (OpenStack) and downstream 
(FusionSphere) people on this list.

I hope someone will speak up and work with Henry to accomplish the installation.

Thanks,

--Rocky

From: Farhad Sunavala
Sent: Monday, November 20, 2017 4:54 PM
To: Zhiqiang Yang <zhiqiang.y...@huawei.com>
Cc: Rochelle Grober <rochelle.gro...@huawei.com>
Subject: Help needed with Fusion SPhere

Hi Rocky,

Henry from SW lab wanted to know who has worked with Fusion Sphere and can help 
him with the installation.
Would you know who can help him out?

Thanks,
Farhad.


Farhad Sunavala
Principal Engineer I Cloud Computing
Tel: 408-330-4499 I Mobile: 408-330-4499 I E-mail: 
farhad.sunav...@huawei.com<mailto:farhad.sunav...@huawei.com>
Company: Huawei I Address: Santa Clara, CA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [Openstack-sigs] [QA] Proposal for a QA SIG

2017-11-17 Thread Rochelle Grober
First off, let me say I think this is a tremendous idea.  And, it's perfect for 
the SIG concept.

Next, see inline:

Thierry Carrez wrote:
> Andrea Frittoli wrote:
> > [...]
> > during the last summit in Sydney we discussed the possibility of
> > creating an OpenStack quality assurance special interest group (OpenStack
> QA SIG).
> > The proposal was discussed during the QA feedback session [0] and it
> > received positive feedback there; I would like to bring now the
> > proposal to a larger audience via the SIG, dev and operators mailing
> > lists.
> > [...]
> 
> I think this goes with the current trends of re-centering upstream "project
> teams" on the production of software, while using SIGs as communities of
> practice (beyond the governance boundaries), even if they happen to
> produce (some) software as the result of their work.
> 
> One question I have is whether we'd need to keep the "QA" project team at
> all. Personally I think it would create confusion to keep it around, for no 
> gain.
> SIGs code contributors get voting rights for the TC anyway, and SIGs are free
> to ask for space at the PTG... so there is really no reason (imho) to keep a
> "QA" project team in parallel to the SIG ?

Well, you can get rid of the "QA Project Team" but you would then need to 
replace it with something like the Tempest Project, or perhaps the Test 
Project.  You still need a PTL and cores to write, review and merge tempest 
fixes and upgrades, along with some of the tests.  The Interop Guideline tests 
are part of Tempest because being there provides oversight on the style and 
quality of the code of those tests.  We still need that.

--Rocky

> In the same vein we are looking into turning the Security project team into a
> SIG, and could consider turning other non-purely-upstream teams (like I18n)
> in the future.
> 
> --
> Thierry Carrez (ttx)
> 
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Openstack-sigs] [Openstack-operators] [QA] Proposal for a QA SIG

2017-11-17 Thread Rochelle Grober
First off, let me say I think this is a tremendous idea.  And, it's perfect for 
the SIG concept.

Next, see inline:

Thierry Carrez wrote:
> Andrea Frittoli wrote:
> > [...]
> > during the last summit in Sydney we discussed the possibility of
> > creating an OpenStack quality assurance special interest group (OpenStack
> QA SIG).
> > The proposal was discussed during the QA feedback session [0] and it
> > received positive feedback there; I would like to bring now the
> > proposal to a larger audience via the SIG, dev and operators mailing
> > lists.
> > [...]
> 
> I think this goes with the current trends of re-centering upstream "project
> teams" on the production of software, while using SIGs as communities of
> practice (beyond the governance boundaries), even if they happen to
> produce (some) software as the result of their work.
> 
> One question I have is whether we'd need to keep the "QA" project team at
> all. Personally I think it would create confusion to keep it around, for no 
> gain.
> SIGs code contributors get voting rights for the TC anyway, and SIGs are free
> to ask for space at the PTG... so there is really no reason (imho) to keep a
> "QA" project team in parallel to the SIG ?

Well, you can get rid of the "QA Project Team" but you would then need to 
replace it with something like the Tempest Project, or perhaps the Test 
Project.  You still need a PTL and cores to write, review and merge tempest 
fixes and upgrades, along with some of the tests.  The Interop Guideline tests 
are part of Tempest because being there provides oversight on the style and 
quality of the code of those tests.  We still need that.

--Rocky

> In the same vein we are looking into turning the Security project team into a
> SIG, and could consider turning other non-purely-upstream teams (like I18n)
> in the future.
> 
> --
> Thierry Carrez (ttx)
> 
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [LTS] Upstream LTS Releases

2017-11-14 Thread Rochelle Grober
Well, moving this discussion is easy.  All that takes is everyone posting 
responses to the openstack-...@lists.openstack.org mailing list instead of dev 
and ops lists.  I've cc'ed all here.  I've also added [LTS] to the subject 
(sorry to break all the threaders). So that the sig list knows what the general 
topic is.  Yeah.  It's not really a topic, but everyone is used to parsing 
those things, even if the mailserver sw isn't.

But, are the two groups willing to move this discussion to the sigs list?  If 
they are, great.  If not,  hmmm.

Anyway, here's my attempt to serve

--Rocky

> -Original Message-
> From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
> Sent: Tuesday, November 14, 2017 4:25 PM
> To: Rochelle Grober <rochelle.gro...@huawei.com>
> Cc: OpenStack Development Mailing List (not for usage questions)
> <openstack-...@lists.openstack.org>; openstack-oper. <operat...@lists.openstack.org>
> Subject: Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases
> 
> On Tue, Nov 14, 2017 at 4:10 PM, Rochelle Grober
> <rochelle.gro...@huawei.com> wrote:
> > Folks,
> >
> > This discussion and the people interested in it seem like a perfect
> application of the SIG process.  By turning LTS into a SIG, everyone can
> discuss the issues on the SIG mailing list and the discussion shouldn't end up
> split.  If it turns into a project, great.  If a solution is found that 
> doesn't need a
> new project, great.  Even once  there is a decision on how to move forward,
> there will still be implementation issues and enhancements, so the SIG could
> very well be long-lived.  But the important aspect of this is:  keeping the
> discussion in a place where both devs and ops can follow the whole thing and
> act on recommendations.
> >
> > Food for thought.
> >
> > --Rocky
> >
> Just to add more legs to the spider that is this thread: I think the SIG idea 
> is a
> good one. It may evolve into a project team some day, but for now it's a
> free-for-all polluting 2 mailing lists, and multiple etherpads. How do we go
> about creating one?
> 
> -Erik
> 
> >> -Original Message-
> >> From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com]
> >> Sent: Tuesday, November 14, 2017 8:31 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> <openstack-...@lists.openstack.org>; openstack-oper. <operat...@lists.openstack.org>
> >> Subject: Re: [openstack-dev] Upstream LTS Releases
> >>
> >> Hi all - please note this conversation has been split variously
> >> across -dev and - operators.
> >>
> >> One small observation from the discussion so far is that it seems as
> >> though there are two issues being discussed under the one banner:
> >> 1) maintain old releases for longer
> >> 2) do stable releases less frequently
> >>
> >> It would be interesting to understand if the people who want longer
> >> maintenance windows would be helped by #2.
> >>
> >> On 14 November 2017 at 09:25, Doug Hellmann
> <d...@doughellmann.com>
> >> wrote:
> >> > Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
> >> >> >> The concept, in general, is to create a new set of cores from
> >> >> >> these groups, and use 3rd party CI to validate patches. There
> >> >> >> are lots of details to be worked out yet, but our amazing UC
> >> >> >> (User
> >> >> Committee) will begin working out the details.
> >> >> >
> >> >> > What is the most worrying is the exact "take over" process. Does
> >> >> > it mean that the teams will give away the +2 power to a
> >> >> > different team? Or will our (small) stable teams still be
> >> >> > responsible for landing changes? If so, will they have to learn
> >> >> > how to debug 3rd party CI
> >> jobs?
> >> >> >
> >> >> > Generally, I'm scared of both overloading the teams and losing
> >> >> > the control over quality at the same time :) Probably the final
> >> >> > proposal will
> >> clarify it..
> >> >>
> >> >> The quality of backported fixes is expected to be a direct (and
> >> >> only?) interest of those new teams of new cores, coming from users
> >> >> and operators and vendors. The more parties to establish their 3rd
> >> >> party
> >> >
> >> > We have an unhealthy focus on "3rd party" jobs in this discussion.

Re: [openstack-dev] [Openstack-operators] [LTS] Upstream LTS Releases

2017-11-14 Thread Rochelle Grober
Well, moving this discussion is easy.  All that takes is everyone posting 
responses to the openstack-...@lists.openstack.org mailing list instead of dev 
and ops lists.  I've cc'ed all here.  I've also added [LTS] to the subject 
(sorry to break all the threaders) so that the sig list knows what the general 
topic is.  Yeah.  It's not really a topic, but everyone is used to parsing 
those things, even if the mailserver sw isn't.

But, are the two groups willing to move this discussion to the sigs list?  If 
they are, great.  If not,  hmmm.

Anyway, here's my attempt to serve

--Rocky

> -Original Message-
> From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
> Sent: Tuesday, November 14, 2017 4:25 PM
> To: Rochelle Grober <rochelle.gro...@huawei.com>
> Cc: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>; openstack-oper. <operat...@lists.openstack.org>
> Subject: Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases
> 
> On Tue, Nov 14, 2017 at 4:10 PM, Rochelle Grober
> <rochelle.gro...@huawei.com> wrote:
> > Folks,
> >
> > This discussion and the people interested in it seem like a perfect
> application of the SIG process.  By turning LTS into a SIG, everyone can
> discuss the issues on the SIG mailing list and the discussion shouldn't end up
> split.  If it turns into a project, great.  If a solution is found that 
> doesn't need a
> new project, great.  Even once  there is a decision on how to move forward,
> there will still be implementation issues and enhancements, so the SIG could
> very well be long-lived.  But the important aspect of this is:  keeping the
> discussion in a place where both devs and ops can follow the whole thing and
> act on recommendations.
> >
> > Food for thought.
> >
> > --Rocky
> >
> Just to add more legs to the spider that is this thread: I think the SIG idea 
> is a
> good one. It may evolve into a project team some day, but for now it's a
> free-for-all polluting 2 mailing lists, and multiple etherpads. How do we go
> about creating one?
> 
> -Erik
> 
> >> -Original Message-
> >> From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com]
> >> Sent: Tuesday, November 14, 2017 8:31 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> <openstack-dev@lists.openstack.org>; openstack-oper. <operat...@lists.openstack.org>
> >> Subject: Re: [openstack-dev] Upstream LTS Releases
> >>
> >> Hi all - please note this conversation has been split variously
> >> across -dev and - operators.
> >>
> >> One small observation from the discussion so far is that it seems as
> >> though there are two issues being discussed under the one banner:
> >> 1) maintain old releases for longer
> >> 2) do stable releases less frequently
> >>
> >> It would be interesting to understand if the people who want longer
> >> maintenance windows would be helped by #2.
> >>
> >> On 14 November 2017 at 09:25, Doug Hellmann
> <d...@doughellmann.com>
> >> wrote:
> >> > Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
> >> >> >> The concept, in general, is to create a new set of cores from
> >> >> >> these groups, and use 3rd party CI to validate patches. There
> >> >> >> are lots of details to be worked out yet, but our amazing UC
> >> >> >> (User
> >> >> Committee) will begin working out the details.
> >> >> >
> >> >> > What is the most worrying is the exact "take over" process. Does
> >> >> > it mean that the teams will give away the +2 power to a
> >> >> > different team? Or will our (small) stable teams still be
> >> >> > responsible for landing changes? If so, will they have to learn
> >> >> > how to debug 3rd party CI
> >> jobs?
> >> >> >
> >> >> > Generally, I'm scared of both overloading the teams and losing
> >> >> > the control over quality at the same time :) Probably the final
> >> >> > proposal will
> >> clarify it..
> >> >>
> >> >> The quality of backported fixes is expected to be a direct (and
> >> >> only?) interest of those new teams of new cores, coming from users
> >> >> and operators and vendors. The more parties to establish their 3rd
> >> >> party
> >> >
> >> > We have an unhealthy focus on "3rd party" jobs in this discussion.

Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Rochelle Grober
Folks,

This discussion and the people interested in it seem like a perfect application 
of the SIG process.  By turning LTS into a SIG, everyone can discuss the issues 
on the SIG mailing list and the discussion shouldn't end up split.  If it turns 
into a project, great.  If a solution is found that doesn't need a new project, 
great.  Even once  there is a decision on how to move forward, there will still 
be implementation issues and enhancements, so the SIG could very well be 
long-lived.  But the important aspect of this is:  keeping the discussion in a 
place where both devs and ops can follow the whole thing and act on 
recommendations.

Food for thought.

--Rocky

> -Original Message-
> From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com]
> Sent: Tuesday, November 14, 2017 8:31 AM
> To: OpenStack Development Mailing List (not for usage questions)
> ; openstack-oper. <operat...@lists.openstack.org>
> Subject: Re: [openstack-dev] Upstream LTS Releases
> 
> Hi all - please note this conversation has been split variously across -dev 
> and -
> operators.
> 
> One small observation from the discussion so far is that it seems as though
> there are two issues being discussed under the one banner:
> 1) maintain old releases for longer
> 2) do stable releases less frequently
> 
> It would be interesting to understand if the people who want longer
> maintenance windows would be helped by #2.
> 
> On 14 November 2017 at 09:25, Doug Hellmann 
> wrote:
> > Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
> >> >> The concept, in general, is to create a new set of cores from
> >> >> these groups, and use 3rd party CI to validate patches. There are
> >> >> lots of details to be worked out yet, but our amazing UC (User
> >> >> Committee) will begin working out the details.
> >> >
> >> > What is the most worrying is the exact "take over" process. Does it
> >> > mean that the teams will give away the +2 power to a different
> >> > team? Or will our (small) stable teams still be responsible for
> >> > landing changes? If so, will they have to learn how to debug 3rd party CI
> jobs?
> >> >
> >> > Generally, I'm scared of both overloading the teams and losing the
> >> > control over quality at the same time :) Probably the final proposal will
> clarify it..
> >>
> >> The quality of backported fixes is expected to be a direct (and
> >> only?) interest of those new teams of new cores, coming from users
> >> and operators and vendors. The more parties to establish their 3rd
> >> party
> >
> > We have an unhealthy focus on "3rd party" jobs in this discussion. We
> > should not assume that they are needed or will be present. They may
> > be, but we shouldn't build policy around the assumption that they
> > will. Why would we have third-party jobs on an old branch that we
> > don't have on master, for instance?
> >
> >> checking jobs, the better proposed changes communicated, which
> >> directly affects the quality in the end. I also suppose, contributors
> >> from ops world will likely be only struggling to see things getting
> >> fixed, and not new features adopted by legacy deployments they're used
> to maintain.
> >> So in theory, this works and as a mainstream developer and
> >> maintainer, you need not fear losing control over LTS code :)
> >>
> >> Another question is how to not block all on each other, and not push
> >> contributors away when things are getting awry, jobs failing and
> >> merging is blocked for a long time, or there is no consensus reached
> >> in a code review. I propose the LTS policy to enforce CI jobs be
> >> non-voting, as a first step on that way, and giving every LTS team
> >> member a core rights maybe? Not sure if that works though.
> >
> > I'm not sure what change you're proposing for CI jobs and their voting
> > status. Do you mean we should make the jobs non-voting as soon as the
> > branch passes out of the stable support period?
> >
> > Regarding the review team, anyone on the review team for a branch that
> > goes out of stable support will need to have +2 rights in that branch.
> > Otherwise there's no point in saying that they're maintaining the
> > branch.
> >
> > Doug
> >
> >
> __
> 
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> --
> Cheers,
> ~Blairo
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Re: [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Rochelle Grober
Folks,

This discussion and the people interested in it seem like a perfect application 
of the SIG process.  By turning LTS into a SIG, everyone can discuss the issues 
on the SIG mailing list and the discussion shouldn't end up split.  If it turns 
into a project, great.  If a solution is found that doesn't need a new project, 
great.  Even once  there is a decision on how to move forward, there will still 
be implementation issues and enhancements, so the SIG could very well be 
long-lived.  But the important aspect of this is:  keeping the discussion in a 
place where both devs and ops can follow the whole thing and act on 
recommendations.

Food for thought.

--Rocky

> -Original Message-
> From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com]
> Sent: Tuesday, November 14, 2017 8:31 AM
> To: OpenStack Development Mailing List (not for usage questions)
> ; openstack-oper. <operat...@lists.openstack.org>
> Subject: Re: [openstack-dev] Upstream LTS Releases
> 
> Hi all - please note this conversation has been split variously across -dev 
> and -
> operators.
> 
> One small observation from the discussion so far is that it seems as though
> there are two issues being discussed under the one banner:
> 1) maintain old releases for longer
> 2) do stable releases less frequently
> 
> It would be interesting to understand if the people who want longer
> maintenance windows would be helped by #2.
> 
> On 14 November 2017 at 09:25, Doug Hellmann 
> wrote:
> > Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
> >> >> The concept, in general, is to create a new set of cores from
> >> >> these groups, and use 3rd party CI to validate patches. There are
> >> >> lots of details to be worked out yet, but our amazing UC (User
> >> >> Committee) will begin working out the details.
> >> >
> >> > What is the most worrying is the exact "take over" process. Does it
> >> > mean that the teams will give away the +2 power to a different
> >> > team? Or will our (small) stable teams still be responsible for
> >> > landing changes? If so, will they have to learn how to debug 3rd party CI
> jobs?
> >> >
> >> > Generally, I'm scared of both overloading the teams and losing the
> >> > control over quality at the same time :) Probably the final proposal will
> clarify it..
> >>
> >> The quality of backported fixes is expected to be a direct (and
> >> only?) interest of those new teams of new cores, coming from users
> >> and operators and vendors. The more parties to establish their 3rd
> >> party
> >
> > We have an unhealthy focus on "3rd party" jobs in this discussion. We
> > should not assume that they are needed or will be present. They may
> > be, but we shouldn't build policy around the assumption that they
> > will. Why would we have third-party jobs on an old branch that we
> > don't have on master, for instance?
> >
> >> checking jobs, the better proposed changes communicated, which
> >> directly affects the quality in the end. I also suppose, contributors
> >> from ops world will likely be only struggling to see things getting
> >> fixed, and not new features adopted by legacy deployments they're used
> to maintain.
> >> So in theory, this works and as a mainstream developer and
> >> maintainer, you need not fear losing control over LTS code :)
> >>
> >> Another question is how to not block all on each other, and not push
> >> contributors away when things are getting awry, jobs failing and
> >> merging is blocked for a long time, or there is no consensus reached
> >> in a code review. I propose the LTS policy to enforce CI jobs be
> >> non-voting, as a first step on that way, and giving every LTS team
> >> member a core rights maybe? Not sure if that works though.
> >
> > I'm not sure what change you're proposing for CI jobs and their voting
> > status. Do you mean we should make the jobs non-voting as soon as the
> > branch passes out of the stable support period?
> >
> > Regarding the review team, anyone on the review team for a branch that
> > goes out of stable support will need to have +2 rights in that branch.
> > Otherwise there's no point in saying that they're maintaining the
> > branch.
> >
> > Doug
> >
> >
> __
> 
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> --
> Cheers,
> ~Blairo
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Rochelle Grober
Clint Byrum wrote:
> Excerpts from Jonathan Proulx's message of 2017-09-26 16:01:26 -0400:
> > On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
> >
> > :OpenStack is big. Big enough that a user will likely be fine with
> > learning :a new set of tools to manage it.
> >
> > New users in the startup sense of new, probably.
> >
> > People with entrenched environments, I doubt it.
> >
> 
> Sorry no, I mean everyone who doesn't have an OpenStack already.
> 
> It's nice and all, if you're a Puppet shop, to get to use the puppet modules.
> But it doesn't bring you any closer to the developers as a group. Maybe a few
> use Puppet, but most don't. And that means you are going to feel like
> OpenStack gets thrown over the wall at you once every
> 6 months.
> 
> > But OpenStack is big. Big enough I think all the major config systems
> > are fairly well represented, so whether I'm right or wrong this
> > doesn't seem like an issue to me :)
> >
> 
> They are. We've worked through it. But that doesn't mean potential users
> are getting our best solution or feeling well integrated into the community.
> 
> > Having common targets (constellations, reference architectures,
> > whatever) so all the config systems build the same things (or a subset
> > or superset of the same things) seems like it would have benefits all
> > around.
> >
> 
> It will. It's a good first step. But I'd like to see a world where developers 
> are
> all well versed in how operators actually use OpenStack.

Hear, hear!  +1000  Take a developer to work during peak operations.

For Walmart, that would be Black Friday/Cyber Monday.
For schools, usually a few days into the new session.
For others, each has a time when things break more.  Having a developer 
experience what operators do to predict/avoid/recover/work around the normal 
state of operations would help each to understand the macro work flows.  Those 
are important, too.  Full stack includes Ops.

< Snark off />

--Rocky

> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] [ptls] Install guide testing

2017-08-25 Thread Rochelle Grober
Might I suggest a PTG activity such as an evening docathon?

Drinks and Docs

Everyone is welcome to gather in some convenient location to review/test docs, 
converse and plan for the next dev cycle with Docs folks.  It might not be 
doable at this late date for Pike, but I suspect there may be some availability 
of folks during the PTG.  It's also a great way to tie up a release, having all 
the teams test the docs for the release and get a refresh of what the release 
is and where the next dev cycle needs to start from (yes, I know, it's actually 
already started, but the big plans will start from the PTG).

Just a suggestion on how to spur participation.  Hope it's helpful.

--Rocky

> From: Doug Hellmann 
> 
> Excerpts from Alexandra Settle's message of 2017-08-25 08:57:51 +:
> > Hi everyone,
> >
> > The documentation team is searching for volunteers to help test and
> > verify the OpenStack installation instructions here:
> > https://docs.openstack.org/install-guide/
> >
> > Previously this action has been undertaken by a series of volunteers,
> mostly from the documentation team. However due to the migration, and a
> significant drop in contributors, we are now seeking new individuals to help
> us complete this task.
> >
> > We will be tracking any work here:
> > https://wiki.openstack.org/wiki/Documentation/PikeDocTesting You can
> > see what we have previously done for testing here:
> > https://wiki.openstack.org/wiki/Documentation/OcataDocTesting
> >
> > PTLs of cinder/keystone/horizon/neutron/nova/glance – Previously the
> > documentation team performed the testing tasks for your respective
> > projects as they lived within the openstack-manuals repo. We
> > appreciate that you may or may not have the resources to continue this
> > effort, but it would be great if your teams are able. Please let me
> > know if you are able to so we can verify the instructions of these
> > projects :)
> >
> > Thanks,
> >
> > Alex
> 
> Thanks for starting this thread, Alex.  I want to add a little background for
> folks only now catching up with the state of the migration initiative and
> documentation for Pike.
> 
> At the start of the Pike cycle we lost most of our technical writers.
> By the time of the summit in Boston, it wasn't clear (at least to
> me) if we would have *any* writers left at the start of the Queens cycle.
> Luckily it seems we will, but without knowing that a few months ago I
> encouraged the docs team to make some decisions about prioritizing work
> that will leave us in a less than ideal, but recoverable, state for Pike.
> 
> We started by emphasizing the need to move all of the existing content to a
> new home so it could be maintained by more owners -- this was the aspect
> of the migration spec that most of you will be familiar with, and with over
> 1100 reviews tagged 'doc-migration' I think it's safe to say this was the 
> bulk of
> the work.
> 
> We did not completely address two important aspects of managing that
> content after the migration: translations and testing.
> 
> We looked into translations far enough to know that it would be possible to
> set up the jobs and build project docs in multiple languages. We're already
> doing this for some other sphinx-based docs, so we would have a pattern to
> work from.
> 
> I had hoped that if we had time after the import was done, we could develop
> a plan for end-of-cycle-testing. That didn't happen, so we will have to come
> up with a plan and execute it during Queens. No one thinks this is a good
> outcome, but it's where we are. If you're interested in helping with that
> work, I expect it will be a big part of what the docs team discusses at the 
> PTG.
> Spoiler alert: I will be advocating distributing the responsibility for 
> verifying
> instructions to the project teams.
> 
> Doug
> 
> PS - By the way, most projects are finished with the migration, but I see on
> the dashboard[1] that we still have quite a few open reviews and a few
> missing pages. At this point, the missing docs will need to be backported to
> the stable/pike branches for those projects.
> Let me know if you need help approving things in the stable branches.
> 
> [1] https://doughellmann.com/doc-migration/
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Fault Genes WG

2017-07-20 Thread Rochelle Grober
The meeting is an online video/voice/collaboration meeting.  By clicking the 
link: https://welink-meeting.zoom.us/j/317491860 you will go to a page that 
will download the zoom client installation package.  Install that, run it and 
put the meeting ID in where asked.  Zoom works all over the world.  Once you 
have the client, when you click on the link in the future, it will ask you 
whether you want the zoom client launched.

No IRC room for this one.  It seems that the user groups often are more 
comfortable and productive with interactive meetings.

Hope this helps.

--Rocky


From: randy.perry...@dell.com [mailto:randy.perry...@dell.com]
Sent: Thursday, July 20, 2017 9:06 AM
To: Nematollah Bidokhti ; 
user-commit...@lists.openstack.org; openstack-operators@lists.openstack.org
Subject: Re: [User-committee] [Openstack-operators] Fault Genes WG

Dell - Internal Use - Confidential
Hi,
Is this only on a weblink?  Is there a meeting room on IRC for this?

-Original Appointment-
From: Nematollah Bidokhti [mailto:nematollah.bidok...@huawei.com]
Sent: Wednesday, March 22, 2017 8:20 PM
To: Nematollah Bidokhti; 
user-commit...@lists.openstack.org; 
openstack-operators@lists.openstack.org
Subject: [Openstack-operators] Fault Genes WG
When: Thursday, July 20, 2017 9:00 AM-10:00 AM (UTC-08:00) Pacific Time (US & 
Canada).
Where: Using Zoom Conf. Service - Meeting ID is 317491860


When: Occurs every Thursday from 9:00 AM to 10:00 AM effective 3/23/2017. 
(UTC-08:00) Pacific Time (US & Canada)
Where: Using Zoom Conf. Service - Meeting ID is 317491860

*~*~*~*~*~*~*~*~*~*
Hi there,

nematollah.bidok...@huawei.com is 
inviting you to a scheduled Zoom meeting.

Topic: Fault Genes WG

Time: this is a recurring meeting Meet anytime

Join from PC, Mac, Linux, iOS or Android: 
https://welink-meeting.zoom.us/j/317491860

Or iPhone one-tap (US Toll):  +16465588656,317491860# or +14086380968,317491860#

Or Telephone:

Dial: +1 646 558 8656 (US Toll) or +1 408 638 0968 (US Toll)
Meeting ID: 317 491 860

International numbers available: 
https://welink-meeting.zoom.us/zoomconference?m=qqUZ1nX7Q2YCsoeZbbUf9Wf3EkBnmwWe



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [all][tc] Wiki

2017-07-05 Thread Rochelle Grober
And I'd just like to point out: when was the last time you tried to find info 
contained in some etherpad on our etherpad server without having the etherpad's 
exact name?  Either searching for a specific etherpad, or for info you knew was 
somewhere on the etherpad.openstack.org site?

If you have any tricks for that, I could really use them ;-)

--Rocky

Arkady Kanevsky wrote:
> Most of google searches will pickup wiki pages. So people will view wiki as
> the current state of projects.
> 
> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Monday, July 03, 2017 9:30 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][tc] Wiki
> 
> Flavio Percoco wrote:
> > On 03/07/17 13:58 +0200, Thierry Carrez wrote:
> >> Flavio Percoco wrote:
> >>> Sometimes I wonder if we still need to maintain a Wiki. I guess some
> >>> projects still use it but I wonder if the use they make of the Wiki
> >>> could be moved somewhere else.
> >>>
> >>> For example, in the TC we use it for the Agenda but I think that
> >>> could be moved to an etherpad. Things that should last forever
> >>> should be documented somewhere (project repos, governance repo in
> >>> the TC case) where we can actually monitor what goes in and easily
> >>> clean up.
> >>
> >> This is a complete tangent, but I'll bite :) We had a thorough
> >> discussion about that last year, summarized at:
> >>
> >> http://lists.openstack.org/pipermail/openstack-dev/2016-June/096481.h
> >> tml
> >>
> >> TL,DR; was that while most authoritative content should (and has been
> >> mostly) moved off the wiki, it's still useful as a cheap publication
> >> platform for teams and workgroups, somewhere between a git repository
> >> with a docs job and an etherpad.
> >>
> >> FWIW the job of migrating authoritative things off the wiki is still
> >> on-going. As an example, Thingee is spearheading the effort to move
> >> the "How to Contribute" page and other first pointers to a reference
> >> website (see recent thread about that).
> >
> > I guess the short answer is that we hope one day we won't need it. I
> > certainly do.
> >
> > What would happen if we make the wiki read-only? Would that break
> > people's workflow?
> >
> > Do we know what teams modify the wiki more often and what it is they
> > do there?
> 
> The data is publicly available (see recent changes on the wiki). Most ops
> workgroups heavily rely on the wiki, as well as a significant number of
> upstream project teams and workgroups. Developers are clearly not the
> main target.
> 
> You can dive back into the original analysis etherpad if you're interested:
> 
> https://etherpad.openstack.org/p/wiki-use-cases
> 
> Things that are stroked out are things we moved to reference websites since
> then.
> 
> --
> Thierry Carrez (ttx)
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-22 Thread Rochelle Grober
Thanks for the clarification.  Yeah.  No interop interaction to see here.  
These are not the APIs you are looking for ;-)

I think Chris Dent's response about extending the gabbi-based tests is great.  

I'm a firm believer in never discouraging anyone from writing more tests, 
especially when current coverage isn't complete.  But that doesn't mean I want 
them in the gate.  But also, since developers gain a lot of understanding 
through reading code, the test code itself could help devs better understand 
the extent and limitations of the placement APIs.  Or not.

I just can't keep from encouraging devs to write tests.

--Rocky
 
> From: Sean Dague [mailto:s...@dague.net]
> On 06/22/2017 01:22 PM, Matt Riedemann wrote:
> 
> > Rocky, we have tests, we just don't have API samples for documentation
> > purposes like in the compute API reference docs.
> >
> > This doesn't have anything to do with interop guidelines, and it
> > wouldn't, since the Placement APIs are all admin-only and interop is
> > strictly about non-admin APIs.
> 
> I think the other important thing to remember is that the consumers of the
> placement api are currently presumed to be other OpenStack projects.
> Definitely not end users. So the documentation priority of providing lots of
> examples so that people don't need to look at source code is not as high.
> 
> I'm firmly in the camp that sample request / response is a good thing, but
> from a priorities perspective it's way more important to get that on end user
> APIs than quasi internal ones.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-21 Thread Rochelle Grober


> From: Matt 
> On 6/21/2017 7:04 AM, Shewale, Bhagyashri wrote:
> > I  would like to write functional tests to check the exact req/resp
> > for each placement API for all supported versions similar
> >
> > to what is already done for other APIs under
> > nova/tests/functional/api_sample_tests/api_samples/*.
> >
> > These request/response json samples can be used by the
> > api.openstack.org and in the manuals.
> >
> > There are already functional tests written for placement APIs under
> > nova/tests/functional/api/openstack/placement,
> >
> > but these tests don’t check the entire HTTP response for each API
> > for all supported versions.
> >
> > I think adding such functional tests for checking response for each
> > placement API would be beneficial to the project.
> >
> > If there is an interest to create such functional tests, I can file a
> > new blueprint for this activity.
> >
> 
> This has come up before and we don't want to use the same functional API
> samples infrastructure for generating API samples for the placement API.
> The functional API samples tests are confusing and a steep learning curve for
> new contributors (and even long time old tooth contributors still get
> confused by them).

I second the suggestion that you talk with Chris Dent (mentioned below), but I 
also want to encourage you to write tests.  Write API tests that demonstrate 
*exactly* what is allowed and not allowed, and verify that, whether the API 
call is constructed correctly or not, the responses are appropriate and 
correct.  The Interop guidelines can then use these new/extra/improved tests to 
improve interop expectations across clouds.  Plus, operators will be able to 
more quickly identify what the problem is when the tests demonstrate the 
problem-response patterns.  And, like you said, knowing what to expect makes 
documenting expected behaviors, for both correct and incorrect uses, much more 
straightforward.  Details are very important when tracking down issues based on 
the responses logged.
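
To make the kind of check I mean concrete, here is a minimal sketch of a 
functional test that pins down an exact response.  The endpoint, port, 
microversion header, and expected field names are illustrative assumptions, 
not the actual placement contract, and auth is omitted for brevity.

    # Sketch of an exact request/response check; endpoint, headers, and
    # field names are illustrative assumptions (auth omitted for brevity).
    import unittest

    import requests

    class TestResourceProviders(unittest.TestCase):
        def test_list_exact_response(self):
            resp = requests.get(
                'http://127.0.0.1:8778/resource_providers',
                headers={'OpenStack-API-Version': 'placement 1.4'})
            self.assertEqual(200, resp.status_code)
            body = resp.json()
            # Pin the entire body shape, not just one or two fields.
            self.assertEqual(['resource_providers'], sorted(body))
            for rp in body['resource_providers']:
                self.assertEqual(
                    {'uuid', 'name', 'generation', 'links'}, set(rp))

    if __name__ == '__main__':
        unittest.main()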

I want to encourage you to work with Chris to help expand our tests and their 
specificity and their extent.

Thanks!

--Rocky (with my interop, QA and ops hats on)



> Talk with Chris Dent about ideas here for API samples with placement.
> He's talked about building something into the gabbi library for this, but I 
> don't
> know if that's being worked on or not.
> 
> Chris is also on vacation for a couple of weeks, just FYI.
> 
> --
> 
> Thanks,
> 
> Matt
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread Rochelle Grober
OK.  So, our naming is like branding.  We are techies -- not good at marketing. 
 But, gee, the foundation has a marketing team.  And they end up fielding a lot 
of the confusing questions from companies not deeply entrenched in the 
OpenStack Dev culture.  Perhaps it would be worth explaining what we are trying 
to do to the marketing team and let them suggest some branding words.

What we would need to do pretty much goes back to Chris and Gord's emails about 
answering the questions of what we mean by "Openstack Project" and  "projects 
we allow to use our infrastructure in pursuit of something that somehow works 
with OpenStack projects".  If we provide marketing with a solid definition of 
what is an OpenStack project (we have that one down fairly well, but they might 
ask about core or other things we haven't debated in a while), and provide them 
with what those other projects have in common besides being hosted by us, they 
might come up with something that works for them and is ok for us.  Remember, 
they get hit with it a lot more than we do at this point.

So what we need is:

* detailed definition of "OpenStack Project" (maybe based on answering those 
questions Chris proposed plus others)
* good definition of what the "others" hosted on our infrastructure are/are 
expected to be
* removal of "big tent" from everywhere (aside/non sequitur -- There was a doge 
of Venice that got deposed for treason and his visage and name were eradicated 
from all buildings, documents, etc.  He also happened to be the inventor of the 
chastity belt)
* introduce the marketing guys to the definitions and the branding issue
* call in OpenStack projects and Others until we have a reasonable brand for 
Others.

--Rocky 

Thierry Carrez Wrote:
> Jeremy Stanley wrote:
> > On 2017-06-15 11:15:36 +0200 (+0200), Thierry Carrez wrote:
> > [...]
> >> I'd like to propose that we introduce a new concept:
> >> "OpenStack-Hosted projects". There would be "OpenStack projects" on
> >> one side, and "Projects hosted on OpenStack infrastructure" on the
> >> other side (all still under the openstack/ git repo prefix).
> >
> > I'm still unconvinced a term is needed for this. Can't we just have
> > "OpenStack Projects" (those under TC governance) and "everything
> > else?" Why must the existence of any term require a term for its
> > opposite?
> 
> Well, we tried that for 2.5 years now, and people are still confused about
> which projects are an Openstack project and what are not. The confusion led
> to the perception that everything under openstack/ is an openstack project.
> It led to the perception that "big tent" means "anything goes in" or "flea
> market".
> 
> Whether we like it or not, giving a name to that category, a name that people
> can refer to (not "projects under openstack infrastructure that are not
> officially recognized by the TC"), is I think the only way out of this 
> confusion.
> 
> Obviously we are not the target audience for that term. I think we are deep
> enough in OpenStack and technically-focused enough to see through that.
> But reality is, the majority of the rest of the world is confused, and needs
> help figuring it out. Giving the category a name is a way to do that.
> 
> --
> Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Product] PTG attendance

2017-06-14 Thread Rochelle Grober
In many ways, having the PWG at the PTG is a great idea.  The only problem with 
that is that the PWG and the InteropWG would overlap in the current way the PTG 
is arranged.  Having both at the same place is great for synergy, but having 
them at the same times is not :(  We actually have a chance to grow both teams 
through cross-pollination if we could arrange them not to overlap.

--Rocky

From: UKASICK, ANDREW [mailto:au3...@att.com]
Sent: Wednesday, June 14, 2017 4:29 PM
To: OpenStack Development Mailing List (not for usage questions) 
; user-commit...@lists.openstack.org; 
'Arkady Kanevsky (arkady.kanev...@dell.com)' 
Subject: Re: [User-committee] [Product] PTG attendance

Hi Arkady/Leong/Shamail

I don't know yet if I'll be attending the PTG in Denver, but at present, I 
think that it's unlikely.  If the PWG mid-cycle was co-located with the PTG, 
then I expect that I'd be able to attend at least the PWG days.

That said, contrary to what I thought in our meeting on Monday, it would work 
out better for me to have our PWG mid-cycle collocated with the Ops Meetup.

Thanks,

-Andy

Andrew Ukasick
Principal Systems Engineer
AT&T Integrated Cloud (AIC), OpenStack Community Coordination

From: arkady.kanev...@dell.com 
[mailto:arkady.kanev...@dell.com]
Sent: Tuesday, June 13, 2017 9:05 PM
To: openstack-dev@lists.openstack.org
Cc: 
user-commit...@lists.openstack.org
Subject: Re: [openstack-dev] [Product] PTG attendance

Fellow Product WG members,
We are taking informal poll on how many of us plan to attend
PTG meeting in Denver?

Second question should we have mid-cycle meeting co-located with PTG or with 
operator summit in Mexico city?

Please, respond to this email so Shamail and Leong can tally the results.
Thanks,
Arkady

Arkady Kanevsky, Ph.D.
Director of SW Development
Dell EMC CPSD
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 723 5264

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [Enterprise][Product WG][LCOO][Scientific][Public Cloud][Massively Parallel][All][Interop]FW: [openstack-dev] [all] etcd3 as base service - update

2017-06-07 Thread Rochelle Grober
I wanted to make sure this got out to as much of the community as possible, as 
soon as possible.  This "new base service" will be part of Pike.  This means that etcd 
will be a requirement for Pike installations and beyond.

Most of you won't need to take any immediate actions, but knowing the plans for 
your future is always a good thing.  I'm trying to limit surprises here.

--Rocky

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com] 
Sent: Wednesday, June 07, 2017 3:48 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [all] etcd3 as base service - update

Team,

Here's the update to the base services resolution from the TC:
https://governance.openstack.org/tc/reference/base-services.html

First request is to Distros, Packagers, Deployers, anyone who 
installs/configures OpenStack:
Please make sure you have the latest etcd 3.x available in your environment for 
services to use.  Fedora already does; we need help in making sure all distros 
and architectures are covered.

Any project that wants to use the etcd v3 API via grpc, please use:
https://pypi.python.org/pypi/etcd3 (works only for non-eventlet services)

Those that depend on eventlet, please use the etcd3 v3alpha HTTP API using:
https://pypi.python.org/pypi/etcd3gw

If you use tooz, there are 2 driver choices for you:
https://github.com/openstack/tooz/blob/master/setup.cfg#L29
https://github.com/openstack/tooz/blob/master/setup.cfg#L30

If you use oslo.cache, there is a driver for you:
https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33

Devstack installs etcd3 by default and points cinder to it:
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/cinder#n356

Review in progress for keystone to use etcd3 for caching:
https://review.openstack.org/#/c/469621/

Doug is working on proposal(s) for oslo.config to store some configuration in 
etcd3:
https://review.openstack.org/#/c/454897/

So, feel free to turn on / test with etcd3 and report issues.
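
As a quick illustration of the tooz option above, here is a minimal sketch of 
a service grabbing a distributed lock through the etcd3 driver.  The endpoint 
URL, member id, and lock name are illustrative assumptions, not values taken 
from any project.

    # Minimal tooz + etcd3 sketch; endpoint, member id, and lock name
    # are illustrative assumptions.
    from tooz import coordination

    # 'etcd3://' selects the grpc-based driver; eventlet-based services
    # would use the etcd3gw driver ('etcd3+http://') instead.
    coordinator = coordination.get_coordinator(
        'etcd3://127.0.0.1:2379', b'my-service-worker-1')
    coordinator.start(start_heart=True)

    # Serialize a critical section across service workers.
    with coordinator.get_lock(b'my-resource-lock'):
        pass  # work that must not run concurrently

    coordinator.stop()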

Thanks,
Dims

--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-23 Thread Rochelle Grober
 From: Ildiko 
 > On 2017. May 23., at 15:43, Sean McGinnis 
> wrote:
> >
> > On Mon, May 22, 2017 at 05:50:50PM -0500, Anne Gentle wrote:
> >> On Mon, May 22, 2017 at 5:41 PM, Sean McGinnis
> >> 
> >> wrote:
> >>
> >>>
> >>> [snip]
> >>>
> >>
> >> Hey Sean, is the "right to merge" the top difficulty you envision
> >> with 1 or 2? Or is it finding people to do the writing and reviews?
> >> Curious about your thoughts and if you have some experience with
> >> specific day-to-day behavior here, I would love your insights.
> >>
> >> Anne
> >
> > I think it's more about finding people to do the writing and reviews,
> > though having incentives like having more say in that area of things
> > could be beneficial for finding those people.
> 
> I think it is important to note here that by having the documentation (in its
> easily identifiable, own folder) living together with the code in the same
> repository, you have the developer(s) of the feature as first-line candidates
> for adding documentation to their change.
> 
> I know that writing good technical documentation is its own profession, but
> having the initial data there which can be fixed by experienced writers if
> needed is a huge win compared to anything separated, where you might not
> have any documentation at all.
> 
> So having the ability to -1 a change because of the lack of documentation
> might on one hand be a process change for reviewers, but it gives you the
> docs contributors as well.

Possible side benefits here:  If a new, wannabe developer starts with the docs 
to figure out how to participate in the project, they may/will (if encouraged) 
file bugs against the docs where they are wrong or lacking.  Beyond that, if 
the newbie is reading the code s/he may just fix some low hanging fruit docs 
issues, or even go deeper.  I know, devs don't read docs, but I think they 
sneak looks when they think no one is looking.  And then get infuriated if the 
docs don't match the code.  Perusers of code have more time to address issues 
than firefighters (fixing high priority bugs), so it's possible that this new 
approach will encourage more complete documentation.  I can be optimistic, too.

--Rocky
 
> So to summarize, the changes that Alex described do not mean that the
> core team has to write the documentation themselves or find a team of
> technical writers before applying the changes, but that reviewers should be
> conscious about whether docs are added along with the code changes.
> 
> Thanks,
> Ildikó
> 
> 
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Summary of BOS Summit session: User API Improvements

2017-05-19 Thread Rochelle Grober
Hey folks,

I’ve summarized the User API Improvement forum session.  I seem to recall that 
Clark Boylan and I were “volunteered” to project manage the effort to get 
these tracked and scheduled (which I would assume also means either spec’ed, 
bp’ed or bugged), but I’m real fuzzy on that.  Any/all comments welcome.  The 
etherpad for this session is here:
https://etherpad.openstack.org/p/openstack-user-api-improvements

--Rocky

Summary:
This session’s focus was how to both improve the user experience in using 
OpenStack APIs through identification of issues and inconsistencies and raising 
their visibility in the developer community.  There were general observations, 
then observations specific to individual projects. The major themes of this 
session were:
Consistency:

· Use the same verbs for the same/similar actions across all of OpenStack

· Make states the same across all projects

· UTF8 everywhere for user provided info and metadata

· Make ports 80/443 defaults across OpenStack so that deployments don’t 
have to think about nonstandard port assignments

· Services other than core/base services should be designed to run *on* 
cloud, not *in* cloud

· All clouds should accept qcow2 uploads and convert to cloud native if 
necessary

· Cloud provided images should have immutable names

· Label ephemeral drives and swap disks consistently across clouds

· Enforce consistency in the service catalog across projects
Missing Functionality:

· Search function with wildcarding for entities within a cloud

· Automation for API self-description (sort of like APINAME --help)

· Ability to take a “show” or “get” response and pass the response to a 
“create” call (piping); see the sketch after this list

· support for image aliases

· Image annotations, both for cloud provider and user

· Provide information with net addresses as to whether internal to 
cloud only or internet accessible

· Clarify DHCP APIs and documentation

· Document config drive – important functionality that currently 
requires reading the code

· Nested virt

· Multi-attach read-only volumes

· Support for master+slave impactless backups
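
As a concrete sketch of the “piping” item above (written against a shade-style 
client; the calls shown exist in shade, but the generic show-to-create 
round-trip being requested does not, so the reuse of the fetched record here is 
hypothetical and the attribute handling approximate):

    import shade

    cloud = shade.openstack_cloud(cloud='mycloud')
    src = cloud.get_server('web-1')              # the "show"/"get" side
    # hypothetical round-trip: reuse the fetched record to make a sibling
    cloud.create_server(name='web-2',
                        image=src['image']['id'],
                        flavor=src['flavor']['id'],
                        wait=True)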
Improve Functionality:

· Better info/differentiation on custom attributes of images:  
read-only vs user-defined vs ??

· Create an internet-attached network with a single API call, based on 
user-expressible rules/options

· Move towards Neutron+IPv6 as default networking

· The defaults for security groups need improvement

· Improve clarity of which device each volume is attached to

· Make quota management simpler

· Horizon: move security groups to networking menus

· User facing docs on how to use all the varieties of auth and scopes

· Heat should be able to run on top of clouds (improves consistency and 
interop)

· Heat support for multi region and multi cloud





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] Follow up on Nova/Cinder summit sessions from an ops perspective

2017-05-18 Thread Rochelle Grober
 From: Duncan Thomas 
> On 18 May 2017 at 22:26, Rochelle Grober <rochelle.gro...@huawei.com>
> wrote:
> > If you're going to use --distance, then you should have specific values
> (standard definitions) rather than operator defined:
> > And for that matter, is there something better than distance?  Collocated
> maybe?
> >
> > colocated={local, rack, row, module, dc} Keep the standard definitions
> > that are already in use in/across data centers
> 
> There's at least 'chassis' that some people would want to add (blade-based
> stuff) and I'm not sure what standard 'module' is... The trouble with standard
> definitions is that your standards rarely match the next guy's standards, and
> since some of these are entirely irrelevant to many storage topologies,
> you're likely going to need an API to discover what is relevant to a specific
> system anyway.


Dang.  Missed the chassis.  Yeah.  So, module/pod/container is the fly-in/crane-in 
container, already built, that you add onto your larger DC.  But, I think the key 
is that if we came up with a reasonable list, based on what Ops know and use, 
then each operator can choose to use what is relevant to her and ignore the 
others.  More can be added by request.  But the key is that it is a limited set 
with a definition of each term.
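
To make that concrete, the sort of invocation discussed in the session would 
look roughly like this; every flag shown here is hypothetical and exists in no 
current client:

    nova boot --flavor m1.large --image centos7 \
         --near volume=<volume-uuid> --distance rack my-instance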

I also agree that storage doesn't neatly fit into a distance relationship.  It 
can be everywhere and slow, local and slow, some distance and fast, etc.  
Actually, the more I think about this, the more this may be part of the placement 
conundrum.  Does this map, and if so how, to terms and decisions made in the 
placement subproject?

--Rocky

> 
> --
> Duncan Thomas
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] Follow up on Nova/Cinder summit sessions from an ops perspective

2017-05-18 Thread Rochelle Grober


 From: Matt Riedemann 
> On 5/15/2017 2:28 PM, Edmund Rhudy (BLOOMBERG/ 120 PARK) wrote:
> > Hi all,
> >
> > I'd like to follow up on a few discussions that took place last week
> > in Boston, specifically in the Compute Instance/Volume Affinity for
> > HPC session
> > (https://etherpad.openstack.org/p/BOS-forum-compute-instance-
> volume-affinity-hpc).
> >
> > In this session, the discussions all trended towards adding more
> > complexity to the Nova UX, like adding --near and --distance flags to
> > the nova boot command to have the scheduler figure out how to place an
> > instance near some other resource, adding more fields to flavors or
> > flavor extra specs, etc.
> >
> > My question is: is it the right question to ask how to add more
> > fine-grained complications to the OpenStack user experience to support
> > what seemed like a pretty narrow use case?
> 
> I think we can all agree we don't want to complicate the user experience.
> 
> >
> > The only use case that I remember hearing was an operator not wanting
> > it to be possible for a user to launch an instance in a particular
> > Nova AZ and then not be able to attach a volume from a different
> > Cinder AZ, or they try to boot an instance from a volume in the wrong
> > place and get a failure to launch. This seems okay to me, though -
> > either the user has to rebuild their instance in the right place or
> > Nova will just return an error during instance build. Is it worth
> > adding all sorts of convolutions to Nova to avoid the possibility that
> > somebody might have to build instances a second time?
> 
> We might have gone down this path but it's not the intention or the use case
> as I thought I had presented it, and is in the etherpad. For what you're
> describing, we already have the CONF.cinder.cross_az_attach option in nova
> which prevents you from booting or attaching a volume to an instance in a
> different AZ from the instance. That's not what we're talking about though.
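
(For reference, the option Matt mentions is a real nova.conf setting; it is 
shown below with the restrictive value. With False, boot/attach requests that 
would cross an AZ boundary are refused.)

    [cinder]
    cross_az_attach = False    # default is True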
> 
> The use case, as I got from the mailing list discussion linked in the 
> etherpad, is
> a user wants their volume attached as close to local storage for the instance
> as possible for performance reasons. If this could be on the same physical
> server, great. But there is the case where the operator doesn't want to use
> any local disk on the compute and wants to send everything to Cinder, and
> the backing storage might not be on the same physical server, so that's
> where we started talking about --near or --distance (host, rack, row, data
> center, etc).
> 
> >
> > The feedback I get from my cloud-experienced users most frequently is
> > that they want to know why the OpenStack user experience in the
> > storage area is so radically different from AWS, which is what they
> > all have experience with. I don't really have a great answer for them,
> > except to admit that in our clouds they just have to know what
> > combination of flavors and Horizon options or BDM structure is going
> > to get them the right tradeoff between storage durability and speed. I
> > was pleased with how the session on expanding Cinder's role for Nova
> > ephemeral storage went because of the suggestion of reducing Nova
> > imagebackend's role to just the file driver and having Cinder take over for
> everything else.
> > That, to me, is the kind of simplification that's a win-win for both
> > devs and ops: devs get to radically simplify a thorny part of the Nova
> > codebase, storage driver development only has to happen in Cinder,
> > operators get a storage workflow that's easier to explain to users.
> >
> > Am I off base in the view of not wanting to add more options to nova
> > boot and more logic to the scheduler? I know the AWS comparison is a
> > little North America-centric (this came up at the summit a few times
> > that EMEA/APAC operators may have very different ideas of a normal
> > cloud workflow), but I am striving to give my users a private cloud
> > that I can define for them in terms of AWS workflows and vocabulary.
> > AWS by design restricts where your volumes can live (you can use
> > instance store volumes and that data is gone on reboot or terminate,
> > or you can put EBS volumes in a particular AZ and mount them on
> > instances in that AZ), and I don't think that's a bad thing, because
> > it makes it easy for the users to understand the contract they're
> > getting from the platform when it comes to where their data is stored
> > and what instances they can attach it to.
> >
> 
> Again, we don't want to make the UX more complicated, but as noted in the
> etherpad, the solution we have today is if you want the same instance and
> volume on the same host for performance reasons, then you need to have a
> 1:1 relationship for AZs and hosts since AZs are exposed to the user. In a
> public cloud where you've got hundreds of thousands of compute hosts, 1:1
> AZs aren't going to be realistic, for neither the admin or user. Plus, AZs are
> really supposed to 

Re: [openstack-dev] All Hail our Newest Release Name - OpenStack Rocky

2017-05-04 Thread Rochelle Grober
I arrive Sunday afternoon and leave Friday morning, so you can bracket and 
schedule your drink buying ;-)

My view of Rocky is that of a *solid* base to build on.  One that withstands 
the ravages of storms and squalls.  So, I look forward to helping to reinforce 
and expand the stability of OpenStack projects in the Rocky release.  And if 
you like, I've got a cute picture of our dog, Rocky (Secundus -- I existed 
first) as an informal mascot for the release.

--Rocky

> -Original Message-
> From: Tom Barron [mailto:t...@dyncloud.net]
> Sent: Friday, April 28, 2017 2:58 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] All Hail our Newest Release Name - OpenStack
> Rocky
> 
> 
> 
> On 04/28/2017 05:54 PM, Monty Taylor wrote:
> > Hey everybody!
> >
> > There isn't a ton more to say past the subject. The "R" release of
> > OpenStack shall henceforth be known as "Rocky".
> >
> > I believe it's the first time we've managed to name a release after a
> > community member - so please everyone buy RockyG a drink if you see
> > her in Boston.
> 
> Deal!
> 
> 
> >
> > For those of you who remember the actual election results, you may
> > recall that "Radium" was the top choice. Radium was judged to have
> > legal risk, so as per our name selection process, we moved to the next
> > name on the list.
> >
> > Monty
> >
> >
> __
> 
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Boston Summit Forum and working session Log messages (please comment and participate)

2017-05-01 Thread Rochelle Grober
Hey folks!

I just wanted to raise your awareness of a forum session and a working session 
on Log Messages that is happening during the Boston Summit.

Here is the link to the etherpad for these sessions:

https://etherpad.openstack.org/p/BOS-forum-log-messages

We’ve been going around in circles on some details of the log messages for 
years, and Doug Hellmann has graciously stepped up to try and wrestle this 
beast into submission.  So, besides giving him a warm round of applause, let’s 
give him (and the small cadre of folks working with him on this) our respectful 
interactions, comments, concerns, high fives, etc. and turn up to the sessions 
to get this spec implementable in the here and now.

Please add your comments, topics, pre forum discussions, etc. on the etherpad 
so that we remember to review and discuss them in the sessions.


Thanks and see you soon!
--Rocky


As reference, here is Doug’s email [1] advertising the spec:
I am looking for some feedback on two new proposals to add IDs to
log messages.

The tl;dr is that we’ve been talking about adding unique IDs to log
messages for 5 years. I myself am still not 100% convinced the idea
is useful, but I would like us to either do it or definitively say
we won't ever do it so that we can stop talking about it and consider
some other improvements to logging instead.

Based on early feedback from a small group who have been involved
in the conversations about this in the past, I have drafted two new
specs with different approaches that try to avoid the pitfalls that
blocked the earlier specs:

1. A cross-project spec to add logging message IDs in (what I hope
   is) a less onerous way than has been proposed before:
   https://review.openstack.org/460110

2. An Oslo spec to add some features to oslo.log to try to achieve the
   goals of the original proposal without having to assign message IDs:
   https://review.openstack.org/460112

To understand the full history and context, you’ll want to read the
blog post I wrote last week [1].  The reference lists of the specs
also point to some older specs with different proposals that have
failed to gain traction in the past.

I expect all three proposals to be up for discussion during the
logging working group session at the summit/forum, so if you have
any interest in the topic please plan to attend [2].

Thanks!
Doug

[1] 
https://doughellmann.com/blog/2017/04/20/lessons-learned-from-working-on-large-scale-cross-project-initiatives-in-openstack/
[2] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18507/logging-working-group-working-session


[1] http://lists.openstack.org/pipermail/openstack-dev/2017-April/115958.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scientific][nova][cyborg] Special Hardware Forum session

2017-04-25 Thread Rochelle Grober

I know that some cyborg folks and nova folks are planning to be there. Now we 
need to draw in some ops folks, too.


Sent from HUAWEI AnyOffice
From:Blair Bethwaite
To:openstack-dev@lists.openstack.org,openstack-oper.
Date:2017-04-25 08:24:34
Subject:[openstack-dev] [scientific][nova][cyborg] Special Hardware Forum 
session

Hi all,

A quick FYI that this Forum session exists:
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18803/special-hardware
(etherpad: https://etherpad.openstack.org/p/BOS-forum-special-hardware)

It would be great to see a good representation from both the Nova and
Cyborg dev teams, and also ops ready to share their experience and
use-cases.

--
Cheers,
~Blairo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [elections] Available time and top priority

2017-04-14 Thread Rochelle Grober


Matt Riedemann, Monday, April 10, 2017 1:41 PM
On 4/10/2017 2:55 PM, Dean Troyer wrote:
>
> The TC meetings are held in IRC and that may somewhat mitigate the 
> issue for non-native English speakers, but I've had problems myself 
> keeping up at times with the flurry of comments.  In any case, I think 
> it would be good to include language in the pile of concerns over 
> world-wide participation

I don't attend many TC meetings, and it's usually by accident, but yeah, when I do 
I always note the flurry of cross-talk chatter that just drowns everything out. 
I feel like there are usually at least 3 parallel conversations going on during 
a TC meeting and it's pretty frustrating to follow along, or get a thought in 
the mix. That has to be much worse for a non-native English speaker.

So yeah, slow down folks. :)

I'm not advocating splitting the meetings though. It's possible to have your 
cake and eat it too if done properly. For example, Alex Xu runs the Nova API 
subteam meeting and we have people from China, India, Japan, UK and USA and get 
through it fine, but it does involve slowing down to get an acknowledgement 
from people that they are OK with any decisions being made.

This might also tie back in with what cdent was mentioning, and if the flurry 
of conversation during a TC meeting throws people off, maybe the minutes should 
be digested after the meeting in the mailing list. I know the meeting is 
logged, but it can be hard to read through that without one's eyes glazing over 
due to the cross-talk and locker-room towel whipping going on.

-- 

Thanks,

Matt


I read through the thread (or at least a very large chunk of it) and realized 
something is still missing, plus something sort of related that I'd like to pass on.

Part 1: tldr; publish the resolutions to the key MLs at least when they are 
approved.

I like the idea of some sort of regularly scheduled summary of work in 
progress/completed/needing input.  And I realized that the TC generally works 
through resolutions, but I can't remember seeing the finished resolutions 
actually published to the dev, ops, or user-committee mailing lists.  Either 
you follow the TC meetings, or you subscribe to the governance review project, 
or you serendipitously discover the published resolutions when they come out or 
when you go looking for something else in the same vicinity.  It would be great 
if there were checkpoints on these resolutions that publish to the mailing 
lists.  Maybe, first draft for wider audience, final call for comments and 
published resolution, or some subset depending on the size/importance/etc. of 
the resolution.

Part 2: tldr; establish some kind of communications channel with other TCs/TSCs 
that have dependencies on OpenStack

I just happened to be in a meeting with an associate who is on the TSC of Open 
Daylight.  After the meeting he asked me about the release schedules of 
OpenStack and whether the recent change was a one-off or whether releases would 
now be offset like the last one.  It appears that ODL sets their releases to two 
weeks after OpenStack's, and then other open networking projects cascade behind 
Open Daylight.  These TSCs don't know where to go to get the info/new schedules, 
etc.  It might be nice to find out what they need from us and provide a 
channel for them to ask questions.  The TC might be a good contact point.  But the 
TC can certainly come up with possible solution(s).  This could help all these 
projects work towards more open coordination or cooperation with us.  I 
think that would be sooo cool.

Apologize for the extended verbiage, but, those who know me, know that's me.

And some really great discussions here.

Thanks,
--Rocky
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [shade] help wanted - tons to do, not enough people

2017-04-14 Thread Rochelle Grober
On  April 14, 2017 1:23 PM Jay Pipes wrote:

On 04/12/2017 02:15 PM, Sean Dague wrote:
> On 04/12/2017 01:38 PM, Monty Taylor wrote:
>> On 04/12/2017 11:21 AM, Joshua Harlow wrote:
>>> Just a question, not meant as anything bad against shade,
>>>
>>> But would effort be better spent on openstacksdk?
>>
>> tl;dr - great in practice, falls apart in the details
>>
>> I don't think so - but it was an original thought, so it's certainly 
>> a reasonable question.
>>
>> openstacksdk is an SDK exposing the OpenStack APIs. It does not hide 
>> differences between APIs, nor abstract into different concepts. shade 
>> does. So I think they have different audiences and different intends 
>> in mind.
>>
>>> Take the good parts of shade and just move it to openstacksdk, 
>>> perhaps as a 'higher level api' available in openstacksdk?
>>>
>>> Then ansible openstack components (which I believe use shade) could 
>>> then switch to openstacksdk and all will be merry...
>>
>> The thing is - for shade's needs, openstacksdk is both too much and 
>> not enough simultaneously. (this is not intended to be a dig against 
>> sdk - their goal in life is not to be a rest layer for shade, it's to 
>> be an SDK for the OpenStack APIs)
>>
>> To handle nodepool scale, shade needs to do some really specific 
>> things related to exactly when and how remote interactions happen. In 
>> services of its users, openstacksdk hides those interactions - which 
>> I think is a nice feature for its users, but unfortunately removes 
>> shade's ability to control those interactions in the way it needs to.
>>
>> At the same time, the object model wrapper with magic generators and 
>> whatnot doesn't add much value to shade past "get('/servers').json()" 
>> to be quite honest.
>>
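
(Aside: concretely, the bare "get('/servers').json()" style Monty mentions is 
roughly the following with keystoneauth1; the library calls are real, the 
credentials and endpoint are illustrative.)

    from keystoneauth1 import loading, session

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='https://keystone.example.com/v3',
        username='demo', password='secret', project_name='demo',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)
    # one authenticated REST call, no object-model wrapper:
    servers = sess.get('/servers',
                       endpoint_filter={'service_type': 'compute'}).json()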
>> So - I think handling our needs would be very annoying to the SDK 
>> folks, and it would just unnecessarily make things complex for both sides.
>>
>> In any case, like I said, it's a completely fair and legit question - 
>> but as of right now I don't think it would actually make anyone's 
>> lives better.
>
> Just to provide a different though related perspective.
>
> This is what success looks like. Lots of different people writing 
> different stuff, in different ways, talking to your API (which is the 
> REST API, not a library). Everyone implementing the slices that are 
> important for their consumers, and providing the fidelity that their 
> consumers need.
>
> We should never think this is a bad thing.

   Well, sure, I don't think it's a bad thing that there are multiple clients 
to our REST API.

   But I *do* think it's a bad thing that shade needs to exist to smooth out 
all the rough edges, inconsistencies, implementation leaks and flat-out 
silliness that our REST API has.

++
--Rocky


   Best,
   -jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New contributor

2017-04-05 Thread Rochelle Grober
Welcome!

And, of course, writing tests that demonstrate some of those more difficult 
open bugs would help make sure they don't resurface once fixed ;-)  so, if you 
can't get a handle yet on the intricacies of the code that contains the 
issue, you still might be able to demonstrate the breakage and get kudos for 
that.

At least, I'd give you kudos for that!

--Rocky

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com] 
Sent: Wednesday, April 05, 2017 4:42 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ironic] New contributor

On 03/30/2017 05:13 AM, Julian Edwards wrote:
> Hi all

Hi and welcome!

>
> I'm looking to start contributing to Ironic, and in fact I did a 
> couple of small patches already which are still waiting to be 
> landed/reviewed. [1]
>
> I'm finding it a little hard to find some more reasonable bugs to get 
> started with fixing, so if any of you guys can point me at a few I 
> would appreciate it, or indeed if someone is willing to do more 
> involved mentoring.  (I am in the +10 time zone so this may be 
> awkward, sadly)

Indeed, as Jay already mentioned, we have a competition for easier bugs right 
now :) Please keep in mind, though, that carefully reviewing others patches is 
often more valuable, than adding more patches to the queue. This may be a good 
way to both help the project and learn it better.

>
> Cheers
> J
>
> PS  Some of you may remember me as the original Ubuntu MAAS lead, so I 
> am pretty familiar with bare metal stuff generally.
>
> [1] https://review.openstack.org/#/c/449454/ and 
> https://review.openstack.org/#/c/450492/
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [i18n] [nova] understanding log domain change - https://review.openstack.org/#/c/439500

2017-03-16 Thread Rochelle Grober
Sorry for top posting, but this is likely the best place...

I wanted to provide an update from the Ops midcycle about related topics around 
this. 

The operators here are in general agreement that translation is not needed, but 
they are also very interested in the possibility of getting some of their 
long-time wish-list items for traceability and specificity into the log messages 
at the same time as the translation bits are removed.  There is an effort 
underway, with a team of devops folks defining more specifically what the 
operators want in the messages and providing person-power to do a good chunk of 
the work.

We hope to have some solid proposals written up for the forum so as to be able 
to move forward and partner with the project developers on this.
We expect two proposals: one for error codes (yes, that again, but operators 
really want/need this) and one for traceability around request-ids.

--Rocky
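
For anyone doing the mechanical cleanup, the removal under discussion looks 
roughly like this (a sketch; the i18n module path varies per project):

    from oslo_log import log as logging
    from nova import exception
    from nova.i18n import _, _LE   # _LE (and _LI, _LW) go away entirely

    LOG = logging.getLogger(__name__)
    vol_id = 'demo-volume-id'      # illustrative

    # before: log message wrapped in a translation marker
    LOG.error(_LE('Failed to attach volume %s'), vol_id)
    # after: marker dropped; log messages are not translated
    LOG.error('Failed to attach volume %s', vol_id)

    # user-facing API error messages keep _() either way
    raise exception.Invalid(_('Volume is not attachable'))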

-Original Message-
From: Ian Y. Choi [mailto:ianyrc...@gmail.com] 
Sent: Wednesday, March 15, 2017 11:36 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [i18n] [nova] understanding log domain change - 
https://review.openstack.org/#/c/439500

Doug Hellmann wrote on 3/10/2017 11:39 PM:
> Excerpts from Doug Hellmann's message of 2017-03-10 09:28:52 -0500:
>> Excerpts from Ian Y. Choi's message of 2017-03-10 01:22:40 +0900:
>>> Doug Hellmann wrote on 3/9/2017 9:24 PM:
 Excerpts from Sean McGinnis's message of 2017-03-07 07:17:09 -0600:
> On Mon, Mar 06, 2017 at 09:06:18AM -0500, Sean Dague wrote:
>> On 03/06/2017 08:43 AM, Andreas Jaeger wrote:
>>> On 2017-03-06 14:03, Sean Dague  wrote:
 I'm trying to understand the implications of 
 https://review.openstack.org/#/c/439500. And the comment in the 
 linked
 email:

 ">> Yes, we decided some time ago to not translate the log 
 files anymore and
>> thus our tools do not handle them anymore - and in general, 
>> we remove these kind of files."
 Does that mean that all the _LE, _LI, _LW stuff in projects 
 should be fully removed? Nova currently enforces those things 
 are there -
 https://github.com/openstack/nova/blob/e88dd0034b1b135d680dae34
 94597e295add9cfe/nova/hacking/checks.py#L314-L333
 and want to make sure our tools aren't making us do work that 
 the i18n team is ignoring and throwing away.
> So... just looking for a definitive statement on this since there 
> has been some back and forth discussion.
>
> Is it correct to say - all projects may (should?) now remove all 
> bits in place for using and enforcing the _Lx() translation 
> markers. Only _() should be used for user visible error messages.
>
> Sean (smcginnis)
>
 The situation is still not quite clear to me, and it would be 
 unfortunate to undo too much of the translation support work 
 because it will be hard to redo it.

 Is there documentation somewhere describing what the i18n team has 
 committed to trying to translate?
>>> I18n team describes translation plan and priority in Zanata - 
>>> translation platform
>>>: https://translate.openstack.org/ .
>>>
I think I heard that there was a shift in emphasis to "user 
 interfaces", but I'm not sure if that includes error messages in 
 services. Should we remove all use of oslo.i18n from services? Or 
 only when dealing with logs?
>>> When I18n team decided to removal of log translations in Barcelona 
>>> last October, there had been no discussion on the removal of 
>>> oslo.i18n translation support for log messages.
>>> (I have kept track of what I18n team discussed during Barcelona I18n 
>>> meetup on Etherpad - [1])
>>>
>>> Now I think that the final decision of oslo.i18n log translation 
>>> support needs more involvement with translators considering 
>>> oslo.i18n translation support, and also more people on community 
>>> wide including project working groups, user committee, and operators 
>>> as Matt suggested.
>>>
>>> If translating log messages is meaningful to some community members 
>>> and some translators show interests on translating log messages, 
>>> then I18n team can revert the policy with rolling back of 
>>> translations.
>>> Translated strings are still alive in not only previous stable 
>>> branches, but also in translation memory in Zanata - translation platform.
>>>
>>> I would like to find some ways to discuss this topic with more 
>>> community wide.
>> I would suggest that we discuss this at the Forum in Boston, but I 
>> think we need to gather some input before then because if there is a 
>> consensus that log translations are not useful we can allow the code 
>> cleanup to occur and not take up face-to-face time.
> I've started a thread on the operators mailing list [1].

Thanks a lot, Doug!

[Openstack-operators] MIL Ops meetup -- Rabbitmq Pitfalls with HA etherpad

2017-03-14 Thread Rochelle Grober
Please add your comments, suggestions, info, etc. to the RabbitMQ-with-HA 
pitfalls etherpad.  There are a few things out there already that will 
hopefully prompt more and better items to include.

See you tomorrow!

https://etherpad.openstack.org/p/MIL-ops-rabbitmq-pitfalls-ha

--Rocky



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] Refstack - final mascot

2017-02-07 Thread Rochelle Grober
Looks great to me.

--Rocky

From: Catherine Cuong Diep [mailto:cd...@us.ibm.com]
Sent: Monday, February 06, 2017 4:25 PM
To: OpenStack Dev Mailer 
Subject: [openstack-dev] Refstack - final mascot


Hello RefStack team,

Please see RefStack mascot in Heidi's note below.

Catherine Diep
- Forwarded by Catherine Cuong Diep/San Jose/IBM on 02/06/2017 04:18 PM 
-

From: Heidi Joy Tretheway 
>
To: Catherine Cuong Diep/San Jose/IBM@IBMUS
Date: 02/02/2017 11:42 AM
Subject: Refstack - final mascot





Hi Catherine,

I have a new revision from our illustration team for your team’s project 
mascot. We’re pushing hard to get all 60 of the mascots finalized by the PTG, 
so I’d love any feedback from your team as swiftly as possible. As a reminder, 
we can’t change the illustration style (since it’s consistent throughout the 
full mascot set) and so we’re just looking for problems with the creatures. 
Could you please let me know if your team has any final concerns?

Thank you!





Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: heidi.tretheway




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-02-03 Thread Rochelle Grober
Well, uh, how about jazz hands?  Open, waving hands are pretty universally 
friendly, and it would look more like a dancing bear, which brings the music 
aspect back a bit.

Sorry for butting in, but I couldn't resist...

--Rocky

-Original Message-
From: Miles Gould [mailto:mgo...@redhat.com] 
Sent: Friday, February 03, 2017 10:30 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic] New mascot design

On 02/02/17 16:55, Loo, Ruby wrote:
>  I guess a 'peace sign' wouldn't work?

That also has several meanings:

https://en.wikipedia.org/wiki/V_sign

On the other hand, the palm-forward version has no offensive meanings that I 
can see (the offensive version is palm-backwards).

I like the proposed logo v3.0, but I could get behind a version 3.1 in which 
the bear was flashing a peace/victory sign.

Miles

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa][glance] gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial failures

2017-01-16 Thread Rochelle Grober
There was a driver thread about snapshot management test failures.  It appears 
there is a config option that changed for devstack from false to true, causing 
the cinder drivers all sorts of issues.  Here is the email that discusses the 
change and its effects on cinder drivers:

http://lists.openstack.org/pipermail/openstack-dev/2017-January/110184.html

It might not be related, but...

A bit of a coincidence if not.

--Rocky

-Original Message-
From: Brian Rosmaita [mailto:rosmaita.foss...@gmail.com] 
Sent: Monday, January 16, 2017 6:18 PM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [infra][qa][glance] 
gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial failures

I need some help troubleshooting a glance_store gate failure that I think is 
due to a recent change in a tempest test and a configuration problem (or it 
could be something else entirely).  I'd appreciate some help solving this as it 
appears to be blocking all merges into glance_store, which, as a non-client 
library, is supposed to be frozen later this week.

Here's an example of the failure in a global requirements update patch:
https://review.openstack.org/#/c/420832/
(I should mention that the failure is occurring in a volume test in 
tempest.api.volume.admin.v2.test_snapshot_manage.SnapshotManageAdminV2Test,
not a glance_store test.)

The test is being run by this gate:
gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial

The test that's failing, test_unmanage_manage_snapshot was recently modified by 
Change-Id: I77be1cf85a946bf72e852f6378f0d7b43af8023a
To be more precise, the test itself wasn't changed, rather the criterion for 
skipping the test was changed (from a skipIf based on whether the backend was 
ceph, to a skipUnless based on a boolean config option).
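
The shape of that change is approximately the following; the option and 
attribute names here are from memory and may differ slightly from the actual 
patch:

    import testtools

    from tempest.api.volume import base
    from tempest import config

    CONF = config.CONF

    class SnapshotManageAdminV2Test(base.BaseVolumeAdminTest):
        # before: skip when the ceph backend was detected
        #   @testtools.skipIf(CONF.volume.storage_protocol == 'ceph',
        #                     'Snapshot manage not supported on ceph')
        # after: skip unless the deployment explicitly opts in via config
        @testtools.skipUnless(CONF.volume_feature_enabled.manage_snapshot,
                              'Manage snapshot tests are disabled')
        def test_unmanage_manage_snapshot(self):
            ...

    # corresponding tempest.conf knob (what the ceph job should set):
    #   [volume-feature-enabled]
    #   manage_snapshot = False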

From the comment in the old code on that patch, it seems like the test config 
value should be False when ceph is the backend (and that's its default).  But 
in the config dump of the failing test run, 
http://logs.openstack.org/32/420832/1/check/gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial/dab27eb/logs/tempest_conf.txt.gz
you can see that manage_snapshot is True.

That's why I think the problem is being caused by a flipped test config value, 
but I'm not sure where the configuration for this particular gate lives so I 
don't know what repo to propose a patch to.

Thanks in advance for any help,
brian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG? / Was (Consistent Versioned Endpoints)

2017-01-16 Thread Rochelle Grober
YEES!

-Original Message-
From: Tom Fifield [mailto:t...@openstack.org] 
Sent: Monday, January 16, 2017 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] PTG? / Was (Consistent Versioned Endpoints)

On 14/01/17 04:07, Joshua Harlow wrote:
>
> Sometimes I almost wish we just rented out a football stadium (or 
> equivalent, a soccer field?) and put all the contributors in the 'field'
> with bean bags and some tables and a bunch of white boards (and a lot 
> of wifi and power cords) and let everyone 'have at it' (ideally in a 
> stadium with a roof in the winter). Maybe put all the infra people in 
> a circle in the middle and make the foundation people all wear referee 
> outfits.
>
> It'd be an interesting social experiment at least :-P

I have been informed we have located at least 3 referee outfits across 
Foundation staff, along with a set of red/yellow cards.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] change in plan for releases repo data model updates

2016-11-09 Thread Rochelle Grober
Sorry for top posting, but Exchange...

Automation is our friend.  Define the structure/naming of the release repo 
patches such that, when they merge, they auto-generate the governance patch and 
submit it.  That gives you time to run a cycle and see how things work and what a 
more elegant solution might be.

my $.02

--Rocky

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Wednesday, November 09, 2016 5:13 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [release] change in plan for releases repo data 
model updates

Doug Hellmann wrote:
> At the summit we said we would move the data that tells us the type
> and release model for each deliverable out of the governance
> repository and into the releases repository where it is easier to
> change over time, but that we needed to think more about what might
> break by making that change. After giving it more consideration, I
> think we have to use the option 2 we discussed instead (allow local
> values to override the global values).

If we look back at the goals driving the change, this solves the
"temporarily bypass governance values" need. The main drawback (to me)
is that we continue to consider deliverable types and release model to
be a "governance" thing. Another drawback is the additional work needed
to sync changes back into the governance repo (see Tony's question).

> The list-repos command would not be able to filter on the type or
> model values early in a cycle because not enough deliverable files
> would even exist until the first milestone. That limitation would
> make the command essentially useless until close to the end of each
> cycle. Using option 2 means list-repos would continue to work all the
> time.

Devil's advocate, we could copy over the type/model values from the
previous cycle as part of the new cycle opening, and have list-repos
work all the time. That sounds like less work than tracking the
retrosyncing of each and every override back onto the governance repo...

> Using option 2 also means that instead of us having to do extra
> work to build and publish a single unified file for the project
> navigator team, they can continue to use the same input data without
> changes to their project at all.

That's a one-time work, so I don't think having to do that is unreasonable.

> I propose adding "type" and "model" fields, as we discussed, but
> making them optional. If they are not present, the values can be
> derived from the governance tags for the deliverable. Teams who
> want to change either value can then make the update in the releases
> repository with a separate patch to update the governance repo, and
> not have releases blocked by the governance change.
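
(For concreteness, a deliverable file with the proposed optional overrides 
might look something like the sketch below; the file is hypothetical and the 
exact schema may differ.)

    # deliverables/ocata/example.yaml  (hypothetical)
    launchpad: example
    team: example
    type: library                            # optional; overrides governance tags
    release-model: cycle-with-intermediary   # optional; overrides governance tags
    releases:
      - version: 1.0.0
        projects:
          - repo: openstack/example
            hash: 0123456789abcdef0123456789abcdef01234567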

It feels like extra work overall. More work for teams (having to file
two separate patches) and more work for us to make sure that governance
patch is merged, doesn't slip through the cracks and doesn't introduce a
drift between the two repos.

So... not convinced :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][tripleo][ansible][puppet][all] changing default token format

2016-11-03 Thread Rochelle Grober
A blog post on the OpenStack site might be good.  Superuser?  There are folks 
reading this who can help.

Sent from HUAWEI AnyOffice
From:Lance Bragstad
To:OpenStack Development Mailing List (not for usage 
questions),openstack-operat...@lists.openstack.org,
Date:2016-11-03 08:11:20
Subject:Re: [openstack-dev] [keystone][tripleo][ansible][puppet][all] changing 
default token format

I totally agree with communicating this the best we can. I'm adding the 
operator list to this thread to increase visibility.

If there are any other methods folks think of for getting the word out, outside 
of what we've already done (release notes, email threads, etc.), please let me 
know. I'd be happy to drive those communications.

On Thu, Nov 3, 2016 at 9:45 AM, Alex Schultz 
> wrote:
Hey Steve,

On Thu, Nov 3, 2016 at 8:29 AM, Steve Martinelli 
> wrote:
> Thanks Alex and Emilien for the quick answer. This was brought up at the
> summit by Adam, but I don't think we have to prevent keystone from changing
> the default. TripleO and Puppet can still specify UUID as their desired
> token format; it is not deprecated or slated for removal. Agreed?
>

My email was not to tell you to stop. I was just letting you know that
your change does not affect the puppet modules because we define our
default as UUID.  It was just as a heads up to others on this email
that this change should not affect anyone consuming the puppet modules
because our default is still UUID and will be even after keystone's
default changes.

Thanks,
-Alex
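
For operators reading along, pinning the token format explicitly is a one-line 
keystone.conf setting (the option is real; fernet additionally needs a key 
repository, created with the keystone-manage command shown):

    [token]
    provider = uuid      # today's default; set explicitly to be immune to the change
    # provider = fernet  # the upcoming default; first run:
    #   keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone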

> On Thu, Nov 3, 2016 at 10:23 AM, Alex Schultz 
> > wrote:
>>
>> Hey Steve,
>>
>> On Thu, Nov 3, 2016 at 8:11 AM, Steve Martinelli 
>> >
>> wrote:
>> > As a heads up to some of keystone's consuming projects, we will be
>> > changing
>> > the default token format from UUID to Fernet. Many patches have merged
>> > to
>> > make this possible [1]. The last 2 that you probably want to look at are
>> > [2]
>> > and [3]. The first flips a switch in devstack to make fernet the
>> > selected
>> > token format, the second makes it default in Keystone itself.
>> >
>> > [1] https://review.openstack.org/#/q/topic:make-fernet-default
>> > [2] DevStack patch: https://review.openstack.org/#/c/367052/
>> > [3] Keystone patch: https://review.openstack.org/#/c/345688/
>> >
>>
>> Thanks for the heads up. In puppet openstack we had already
>> anticipated this and attempted to do the same for the
>> puppet-keystone[0] module as well.  Unfortunately after merging it, we
>> found that tripleo wasn't yet prepared to handle the HA implementation
>> of fernet tokens so we had to revert it[1].  This shouldn't impact
>> anyone currently consuming puppet-keystone as we define uuid as the
>> default for now. Our goal is to do something similar this cycle but
>> there needs to be some further work in the downstream consumers to
>> either define their expected default (of uuid) or support fernet key
>> generation correctly.
>>
>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/#/c/389322/
>> [1] https://review.openstack.org/#/c/392332/
>>
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [Openstack-operators] [openstack-dev] [keystone][tripleo][ansible][puppet][all] changing default token format

2016-11-03 Thread Rochelle Grober
A blog post on the OpenStack site might be good.  Superuser?  There are folks 
reading this who can help.

Sent from HUAWEI AnyOffice
From:Lance Bragstad
To:OpenStack Development Mailing List (not for usage 
questions),openstack-operators@lists.openstack.org,
Date:2016-11-03 08:11:20
Subject:Re: [openstack-dev] [keystone][tripleo][ansible][puppet][all] changing 
default token format

I totally agree with communicating this the best we can. I'm adding the 
operator list to this thread to increase visibility.

If there are any other methods folks think of for getting the word out, outside 
of what we've already done (release notes, email threads, etc.), please let me 
know. I'd be happy to drive those communications.

On Thu, Nov 3, 2016 at 9:45 AM, Alex Schultz 
> wrote:
Hey Steve,

On Thu, Nov 3, 2016 at 8:29 AM, Steve Martinelli 
> wrote:
> Thanks Alex and Emilien for the quick answer. This was brought up at the
> summit by Adam, but I don't think we have to prevent keystone from changing
> the default. TripleO and Puppet can still specify UUID as their desired
> token format; it is not deprecated or slated for removal. Agreed?
>

My email was not to tell you to stop. I was just letting you know that
your change does not affect the puppet modules because we define our
default as UUID.  It was just as a heads up to others on this email
that this change should not affect anyone consuming the puppet modules
because our default is still UUID and will be even after keystone's
default changes.

Thanks,
-Alex

> On Thu, Nov 3, 2016 at 10:23 AM, Alex Schultz 
> > wrote:
>>
>> Hey Steve,
>>
>> On Thu, Nov 3, 2016 at 8:11 AM, Steve Martinelli 
>> >
>> wrote:
>> > As a heads up to some of keystone's consuming projects, we will be
>> > changing
>> > the default token format from UUID to Fernet. Many patches have merged
>> > to
>> > make this possible [1]. The last 2 that you probably want to look at are
>> > [2]
>> > and [3]. The first flips a switch in devstack to make fernet the
>> > selected
>> > token format, the second makes it default in Keystone itself.
>> >
>> > [1] https://review.openstack.org/#/q/topic:make-fernet-default
>> > [2] DevStack patch: https://review.openstack.org/#/c/367052/
>> > [3] Keystone patch: https://review.openstack.org/#/c/345688/
>> >
>>
>> Thanks for the heads up. In puppet openstack we had already
>> anticipated this and attempted to do the same for the
>> puppet-keystone[0] module as well.  Unfortunately after merging it, we
>> found that tripleo wasn't yet prepared to handle the HA implementation
>> of fernet tokens so we had to revert it[1].  This shouldn't impact
>> anyone currently consuming puppet-keystone as we define uuid as the
>> default for now. Our goal is to do something similar this cycle but
>> there needs to be some further work in the downstream consumers to
>> either define their expected default (of uuid) or support fernet key
>> generation correctly.
>>
>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/#/c/389322/
>> [1] https://review.openstack.org/#/c/392332/
>>
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] : Public cloud operators group in

2016-09-28 Thread Rochelle Grober
I've followed "cloud for at least as long as OpenStack has existed, but back 
then I followed whatever/whoever called themselves "cloud" or 
"cloud-{app|service|etc}" and at one point there was a heated discussion 
(mostly that the rest of the group agreed with) that you couldn't claim you ran 
in the/a cloud if you utilized your own equipment in your own data center.



So, yeah.  The rest of the world doesn't always see cloud the way we do.



--Rocky





From: Silence Dogood <m...@nycresistor.com>
I figure if you have entity Y's workloads running on entity X's hardware...
and that's 51% or greater portion of gross revenue... you are a public
cloud.

On Mon, Sep 26, 2016 at 11:35 AM, Kenny Johnston 
<ke...@kencjohnston.com<mailto:ke...@kencjohnston.com>> wrote:

> That seems like a strange definition. It doesn't incorporate the usual
> multi-tenancy requirement that traditionally separates private from public
> clouds. By that definition, Rackspace's Private Cloud offer, where we
> design, deploy and operate a single-tenant cloud on behalf of customers (in
> their data-center or ours) would be considered a "public" cloud.
>
> On Fri, Sep 23, 2016 at 3:54 PM, Rochelle Grober <
> rochelle.gro...@huawei.com<mailto:rochelle.gro...@huawei.com>> wrote:
>
>> Hi Matt,
>>
>> At considerable risk of heading down a rabbit hole... how are you
>> defining "public" cloud for these purposes?
>>
>> Cheers,
>> Blair
>>
>> Any cloud that provides a cloud to a third party in exchange for money.
>> So, rent a VM, rent a collection of VMs, lease a fully operational cloud
>> spec'ed to your requirements, lease a team and HW with your cloud on
>> them.
>>
>> So any cloud that provides offsite IAAS to lessees.
>>
>> --Rocky
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Public cloud operators group in

2016-09-23 Thread Rochelle Grober
Hi Matt,



At considerable risk of heading down a rabbit hole... how are you defining 
"public" cloud for these purposes?



Cheers,

Blair

Any cloud that provides a cloud to a third party in exchange for money.  So, 
rent a VM, rent a collection of VMs, lease a fully operational cloud spec'ed to 
your requirements, lease a team and HW with your cloud on them.

So any cloud that provides offsite IAAS to lessees.

--Rocky
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [ALL] Looking for participants in the Interop Challenge for Barcelona

2016-09-15 Thread Rochelle Grober
Hey folks!

Do you know about the Interop Challenge?  Well, here's a way to learn about it 
and participate if you like...

"The interop challenge was started in July 2016 to create a set of common 
workloads/tests to be executed across multiple OpenStack distributions and/or 
cloud deployment models. The participants in this challenge will work together 
to prove once and for all that OpenStack-Powered clouds are interoperable."

The WG (and lots of other interested folks) would like to see how many of our 
cloud deployers, vendors, distros, etc. can successfully deploy and run some 
very common apps on their clouds.  And we want to let the world know how easy 
it is, in Barcelona.

The WG started with a number of vendors to create some OpenStack apps that 
demonstrate the kind of workloads users want to run.  The team identified four 
or so and started implementing the apps.  Those apps are:

* LAMP Stack

* Docker Swarm

* NFV
The Ansible LAMP stack code can be found here:
http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/ansible/lampstack

And a Terraform version is here:
http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/terraform/lampstack

The Heat LAMP stack code is here:
http://git.openstack.org/cgit/openstack/osops-tools-contrib/tree/heat

The LAMP stack code is ready for Operators and other deployers and developers 
to take for a spin.  There are still some bugs and we are wringing out issues 
in the app as more clouds attempt to deploy it.  But, you can file bugs on it, 
or file fixes, etc.
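
If you want to try it out, the flow looks roughly like this (a sketch only; 
the playbook entry point and credential handling here are assumptions, so 
check the repo's README first):

```
#!/bin/bash
# Minimal sketch of trying the Ansible LAMP stack app against an existing cloud.
git clone https://git.openstack.org/openstack/osops-tools-contrib
cd osops-tools-contrib/ansible/lampstack
source ~/openrc              # your cloud credentials, however you keep them
ansible-playbook site.yml    # hypothetical entry point; see the README
```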

Mailing List discussions are happening on the defcore-committee mailing list.
IRC discussions are in #openstack-defcore


Here is a bunch more info on the challenge:
https://wiki.openstack.org/wiki/Interop_Challenge

And here is how to share your results:
https://wiki.openstack.org/wiki/Interop_Challenge#RefStack_Testing_and_Results_Upload

Come to the IRC meeting in #openstack-meeting-cp on Wednesday at 14:00 UTC to 
ask questions.

Show the world that the OpenStack community not only has lots of clouds out 
there, but that they play nice together!!
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [release] proposing adding Tony Breeds to "Release Managers" team

2016-09-06 Thread Rochelle Grober
I have no vote, but my nonvote is a hearty +1 
Tony is amazing.

--Rocky

-Original Message-
From: Amrith Kumar [mailto:amr...@tesora.com] 
Sent: Tuesday, September 06, 2016 10:56 AM
To: d...@doughellmann.com
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [release] proposing adding Tony Breeds to "Release 
Managers" team

Doug,

This is a great addition; Tony's reviews are very helpful, he asks great 
questions and provides insightful feedback. And when you are in a bind, he has 
great suggestions.

+1

Thanks,

-amrith

> -Original Message-
> From: Doug Hellmann [mailto:]
> Sent: Tuesday, September 06, 2016 11:36 AM
> To: openstack-dev 
> Subject: [openstack-dev] [release] proposing adding Tony Breeds to
> "Release Managers" team
> 
> Team,
> 
> I would like to add Tony Breeds to the "Release Managers" team in
> gerrit. This would give him +2 permissions on openstack-infra/release-
> tools
> and on openstack/releases. I feel his reviews on both of those repos
> have already demonstrated a good attention to detail, especially
> of the release schedule and processes.
> 
> Please respond below with +1 or -1.
> 
> Thanks,
> Doug
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Next steps for proxy API deprecation

2016-07-29 Thread Rochelle Grober
Thank you, Doug.  Yes, if the DefCore guidelines have any of these tests, the 
tests used by DefCore will need to be run beyond EOL of Newton, as the DefCore 
tests last longer than the EOL timeframe.  But, first we should check which 
tests need to be capped and whether they are part of a/some DefCore guidelines. 
If yes, a Tempest/DefCore summit session would be good.

Thanks!
--Rocky

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com] 
Sent: Tuesday, July 26, 2016 10:46 AM
To: openstack-dev
Subject: Re: [openstack-dev] [nova] Next steps for proxy API deprecation

Excerpts from Matt Riedemann's message of 2016-07-26 12:14:03 -0500:
> On 7/26/2016 11:59 AM, Matt Riedemann wrote:
> > Now that the 2.36 microversion change has merged [1], we can work on the
> > python-novaclient changes for this microversion.
> >
> > At the midcycle we agreed [2] to also return a 404 for network APIs,
> > including nova-network (which isn't a proxy), for consistency and
> > further signaling that nova-network is going away.
> >
> > In the client, we agreed to soften the impact for network CLIs by
> > determining if the latest microversion supported will fail (so will we
> > send >=2.36) and rather than fail, send 2.35 instead (if the user didn't
> > specifically specify a different version). However, we'd emit a warning
> > saying this is deprecated and will go away in the first major client
> > release (in Ocata? after nova-network is removed? after Ocata is
> > released?).
> >
> > We should probably just deprecate any CLIs/APIs in python-novaclient
> > today that are part of this server side API change, including network
> > CLIs/APIs in novaclient. The baremetal and image proxies in the client
> > are already deprecated, and the volume proxies were already removed.
> > That leaves the network proxies in the client.
> >
> > From my notes, Dan Smith was going to work on the novaclient changes for
> > 2.36 to not fail and use 2.35 - unless anyone else wants to volunteer to
> > do that work (please speak up).
> >
> > We can probably do the network CLI/API deprecations in the client in
> > parallel to the 2.36 support, but need someone to step up for that. I'll
> > try to get it started this week if no one else does.
> >
> > [1] https://review.openstack.org/#/c/337005/
> > [2] https://etherpad.openstack.org/p/nova-newton-midcycle
> >
> 
> I forgot to mention Tempest. We're going to have to probably put a 
> max_microversion cap in several tests in Tempest to cap at 2.35 (or 
> change those to use Neutron?). There are also going to be some response 
> schema changes like for quota usage/limits, I'm not sure if anyone is 
> looking at this yet. We could also get it done after feature freeze on 
> 9/2, but I still need to land the get-me-a-network API change which is 
> microversion 2.37 and has its own Tempest test, although that test 
> relies on Neutron so I might be OK for the most part.
> 

If these tests are being used by DefCore, it would be better to cap
the existing behavior and add new tests to use neutron instead of
changing the existing tests. That will make it easier for DefCore
to handle the transition from the old to new behavior by replacing
the old tests in their list with the new ones.

Doug
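
For what it's worth, tempest also has a run-level knob for this, which is the 
bluntest way to pin behavior while per-test caps land (a sketch; it assumes a 
tempest recent enough that the [compute] group exposes these options):

```
# Keep an entire tempest run below the proxy-removal microversion:
cat >> etc/tempest.conf <<'EOF'
[compute]
max_microversion = 2.35
EOF
```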

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] typical length of timeseries data

2016-07-29 Thread Rochelle Grober
Just an FYI that might be the reason for the 14400:

1440 is the number of minutes in a day.  14400 would be tenths of a minute in 
a day, i.e. the number of 6-second chunks in a day (huh???)

So, the number was picked to divide files into human-logical, not 
computer-logical chunks.

--Rocky
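
Quick back-of-envelope for the sizes in question (the ~9 bytes/point figure 
below is not from the thread; it is just backed out of the "14400 points 
roughly equals 128KB" number discussed below):

```
echo $(( 86400 / 6 ))           # 6-second chunks in a day -> 14400
echo $(( 14400 * 9 / 1024 ))    # ~126 KB, the current max chunk
echo $(( 7200  * 9 / 1024 ))    # ~63 KB
echo $(( 3600  * 9 / 1024 ))    # ~31 KB
echo $(( 10080 * 9 / 1024 ))    # ~88 KB, biggest default series (1min over a week)
```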

-Original Message-
From: gordon chung [mailto:g...@live.ca] 
Sent: Thursday, July 28, 2016 3:05 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [gnocchi] typical length of timeseries data

hi folks,

this is probably something to discuss on ops list as well eventually but 
what do you think about shrinking the max size of timeseries chunks from 
14400 to something smaller? i'm curious to understand what the length of 
the typical timeseries is. my main reason for bringing this up is that 
even our default 'high' policy doesn't reach the 14400 limit so it at most 
will only split into two, partially filled objects. as we look to make a 
more efficient storage format for v3(?) it seems like this may be an 
opportunity to change size as well (if necessary)

14400 points roughly equals a 128KB object which is cool but maybe we 
should target something smaller? 7200 points aka 64KB? 3600 points aka 
32KB? just for reference our biggest default series is 10080 points 
(1min granularity over a week).

that said 128KB (at most) might not be that bad from read/write pov and 
maybe it's ok to keep it at 14400? i know from the test i did earlier, 
the time requirement to read/write increases linearly (a 7200 point object 
takes roughly half the time of a 14400 point object)[1]. i think the main item 
is we don't want it so small that we're updating multiple objects at a 
time.

[1] http://www.slideshare.net/GordonChung/gnocchi-profiling-v2/25

cheers,

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [User-committee] Seeking feedback: Active User Contributor (AUC) eligibility requirements

2016-07-06 Thread Rochelle Grober
Umm, I see a major contribution area not included here:

OpenStack community meetup organizers.  I know some sink a large amount of time 
scheduling and organizing at least one a month, some two or more.  These 
organizers are critical for getting info out on OpenStack and familiarizing 
their local tech communities with the OpenStack project.  I hope they are 
included somewhere for their contributions.

--Rocky

-Original Message-
From: Jonathan D. Proulx [mailto:j...@csail.mit.edu] 
Sent: Thursday, June 30, 2016 12:00 PM
To: Shamail Tahir
Cc: openstack-operators; user-committee
Subject: Re: [User-committee] Seeking feedback: Active User Contributor (AUC) 
eligibility requirements


I'm surprised this hasn't generated more feedback, though I'd
generally take that as positive.

List seems good to me.

The self-nomination + confirm by UC is a good catch-all, especially in
the beginning where we're unlikely to have thought of everything.  We
can always expand criteria later if the 'misc' of UC confirmation
gets too big and we identify patterns.

Thanks all!
-Jon

On Wed, Jun 29, 2016 at 04:52:00PM -0400, Shamail Tahir wrote:
:Hi everyone,
:
:The AUC Recognition WG has been hard at work on milestone-4 of our plan
:which is to identify the eligibility criteria for each community
:contributor role that is covered by AUC.  We had a great mix of community
:people involved in defining these thresholds but we wanted to also open
:this up for broader community feedback before we propose them to the user
:committee.  AUC is a new concept and we hope to make iterative improvements
:going forward... you can consider the guidelines below as "version 1" and I
:am certain they will evolve as lessons are learned.  Thank you in advance
:for your feedback!
:
:*  Official User Group organizers
:
:o   Listed as an organizer or coordinator for an official OpenStack user
:group
:
:*  Active members of official UC Working Groups
:
:o   Attend 25% of the IRC meetings and have spoken more than 25 times OR
:have spoken more than 100 times regardless of attendance count over the
:last six months
:
:o   WG that do not use IRC for their meetings will depend on the meeting
:chair(s) to identify active participation from attendees
:
:*  Ops meetup moderators
:
:o   Moderate a session at the operators meetup over the last six
:months AND/OR
:
:o   Host the operators meetup (limit 2 people from the hosting
:organization) over the last six months
:
:*  Contributions to any repository under UC governance (ops
:repositories, user stories repository, etc.)
:
:o   Submitted two or more patches to a UC governed repository over the last
:six months
:
:*  Track chairs for OpenStack Summits
:
:o   Identified track chair for the upcoming OpenStack Summit (based on when
:data is gathered) [this is a forward-facing metric]
:
:*  Contributors to Superuser (articles, interviews, user stories, etc.)
:
:o   Listed as author in at least one publication at superuser.openstack.org
:over the last six months
:
:*  Submission for eligibility to AUC review panel
:
:o   No formal criteria, anyone can self-nominate, and nominations will be
:reviewed per guidance established in milestone-5
:
:*  Active moderators on ask.openstack
:
:o   Listed as moderator on Ask OpenStack and have over 500 karma
:
:There is additional information available in the etherpad[1] the AUC
:recognition WG has been using for this task, which includes Q&A (questions
:and answers) between team members.
:
:[1] https://etherpad.openstack.org/p/uc-recog-metrics
:
:-- 
:Thanks,
:Shamail Tahir
:t: @ShamailXD
:tz: Eastern Time

:___
:User-committee mailing list
:user-commit...@lists.openstack.org
:http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee


-- 

___
User-committee mailing list
user-commit...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)

2016-07-06 Thread Rochelle Grober
repository is:  http://git.openstack.org/cgit/openstack/osops-tools-contrib/

FYI, there are also:  osops-tools-generic, osops-tools-logging, 
osops-tools-monitoring, osops-example-configs and osops-coda

Wish I could help more,

--Rocky

-Original Message-
From: Joshua Harlow [mailto:harlo...@fastmail.com] 
Sent: Tuesday, July 05, 2016 10:44 AM
To: Matt Fischer
Cc: openstack-...@lists.openstack.org; OpenStack Operators
Subject: Re: [openstack-dev] [Openstack-operators] [nova] Rabbit-mq 3.4 
crashing (anyone else seen this?)

Ah, those sets of commands sound pretty nice to run periodically,

Sounds like a useful script that could be placed in the ops tools repo 
(I forget where this repo lives, but I'm pretty sure it does exist?).

Some other oddness though is that this issue seems to go away when we 
don't run cross-release; do you see that also?

Another hypothesis was that the following fix may be triggering part of 
this @ https://bugs.launchpad.net/oslo.messaging/+bug/1495568

So that if we have some queues being set up as auto-delete and some 
being set up with expiry, perhaps the combination of these causes 
more work (and therefore eventually it falls behind and falls over) for 
the management database.
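
As a starting point, the runbook steps quoted below could be wrapped up along 
these lines (a sketch only; the 8GB trip point and the status parsing are 
assumptions, not from Matt's notes):

```
#!/bin/bash
# Bounce the rabbitmq management plugin when its db grows too large.
THRESHOLD=$((8 * 1024 * 1024 * 1024))    # assumed 8GB trip point

used=$(/usr/sbin/rabbitmqctl status | grep mgmt_db | grep -oE '[0-9]+' | head -1)
if [ -n "$used" ] && [ "$used" -gt "$THRESHOLD" ]; then
    logger "rabbit_mgmt_db at ${used} bytes; restarting management plugin"
    rabbitmqctl eval 'application:stop(rabbitmq_management).'
    rabbitmqctl eval 'application:start(rabbitmq_management).'
fi
```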

Matt Fischer wrote:
> Yes! This happens often but I'd not call it a crash, just the mgmt db
> gets behind then eats all the memory. We've started monitoring it and
> have runbooks on how to bounce just the mgmt db. Here are my notes on that:
>
> restart rabbitmq mgmt server - this seems to clear the memory usage.
>
> rabbitmqctl eval 'application:stop(rabbitmq_management).'
> rabbitmqctl eval 'application:start(rabbitmq_management).'
>
> run GC on rabbit_mgmt_db:
> rabbitmqctl eval
> '(erlang:garbage_collect(global:whereis_name(rabbit_mgmt_db)))'
>
> status of rabbit_mgmt_db:
> rabbitmqctl eval 'sys:get_status(global:whereis_name(rabbit_mgmt_db)).'
>
> Rabbitmq mgmt DB how much memory is used:
> /usr/sbin/rabbitmqctl status | grep mgmt_db
>
> Unfortunately I didn't see that an upgrade would fix for sure and any
> settings changes to reduce the number of monitored events also require a
> restart of the cluster. The other issue with an upgrade for us is the
> ancient version of erlang shipped with trusty. When we upgrade to Xenial
> we'll upgrade erlang and rabbit and hope it goes away. I'll also
> probably tweak the settings on retention of events then too.
>
> Also for the record the GC doesn't seem to help at all.
>
> On Jul 5, 2016 11:05 AM, "Joshua Harlow"  > wrote:
>
> Hi ops and dev-folks,
>
> We over at godaddy (running rabbitmq with openstack) have been
> hitting an issue that has been causing the `rabbit_mgmt_db` to consume
> nearly all the process's memory (after a given amount of time),
>
> We've been thinking that this bug (or bugs?) may have existed for a
> while and our dual-version-path (where we upgrade the control plane
> and then slowly/eventually upgrade the compute nodes to the same
> version) has somehow triggered this memory leaking bug/issue since
> it has happened most prominently on our cloud which was running
> nova-compute at kilo and the other services at liberty (thus using
> the versioned objects code path more frequently due to needing
> translations of objects).
>
> The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511
> with kernel 3.10.0-327.4.4.el7.x86_64 (do note that upgrading to
> 3.6.2 seems to make the issue go away),
>
> # rpm -qa | grep rabbit
>
> rabbitmq-server-3.4.0-1.noarch
>
> The logs that seem relevant:
>
> ```
> **
> *** Publishers will be blocked until this alarm clears ***
> **
>
> =INFO REPORT 1-Jul-2016::16:37:46 ===
> accepting AMQP connection <0.23638.342> (127.0.0.1:51932
>  -> 127.0.0.1:5671 )
>
> =INFO REPORT 1-Jul-2016::16:37:47 ===
> vm_memory_high_watermark clear. Memory used:29910180640
> allowed:47126781542
> ```
>
> This happens quite often, the crashes have been affecting our cloud
> over the weekend (which made some dev/ops not so happy especially
> due to the july 4th mini-vacation),
>
> Looking to see if anyone else has seen anything similar?
>
> For those interested this is the upstream bug/mail that I'm also
> seeing about getting confirmation from the upstream users/devs
> (which also has erlang crash dumps attached/linked),
>
> https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg
>
> Thanks,
>
> -Josh
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> 
> 

Re: [openstack-dev] [Openstack-operators] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)

2016-07-06 Thread Rochelle Grober
repository is:  http://git.openstack.org/cgit/openstack/osops-tools-contrib/

FYI, there are also:  osops-tools-generic, osops-tools-logging, 
osops-tools-monitoring, osops-example-configs and osops-coda

Wish I could help more,

--Rocky

-Original Message-
From: Joshua Harlow [mailto:harlo...@fastmail.com] 
Sent: Tuesday, July 05, 2016 10:44 AM
To: Matt Fischer
Cc: openstack-dev@lists.openstack.org; OpenStack Operators
Subject: Re: [openstack-dev] [Openstack-operators] [nova] Rabbit-mq 3.4 
crashing (anyone else seen this?)

Ah, those sets of commands sound pretty nice to run periodically,

Sounds like a useful script that could be placed in the ops tools repo 
(I forget where this repo lives, but I'm pretty sure it does exist?).

Some other oddness though is that this issue seems to go away when we 
don't run cross-release; do you see that also?

Another hypothesis was that the following fix may be triggering part of 
this @ https://bugs.launchpad.net/oslo.messaging/+bug/1495568

So that if we have some queues being set up as auto-delete and some 
being set up with expiry, perhaps the combination of these causes 
more work (and therefore eventually it falls behind and falls over) for 
the management database.

Matt Fischer wrote:
> Yes! This happens often but I'd not call it a crash, just the mgmt db
> gets behind then eats all the memory. We've started monitoring it and
> have runbooks on how to bounce just the mgmt db. Here are my notes on that:
>
> restart rabbitmq mgmt server - this seems to clear the memory usage.
>
> rabbitmqctl eval 'application:stop(rabbitmq_management).'
> rabbitmqctl eval 'application:start(rabbitmq_management).'
>
> run GC on rabbit_mgmt_db:
> rabbitmqctl eval
> '(erlang:garbage_collect(global:whereis_name(rabbit_mgmt_db)))'
>
> status of rabbit_mgmt_db:
> rabbitmqctl eval 'sys:get_status(global:whereis_name(rabbit_mgmt_db)).'
>
> Rabbitmq mgmt DB how much memory is used:
> /usr/sbin/rabbitmqctl status | grep mgmt_db
>
> Unfortunately I didn't see that an upgrade would fix for sure and any
> settings changes to reduce the number of monitored events also require a
> restart of the cluster. The other issue with an upgrade for us is the
> ancient version of erlang shipped with trusty. When we upgrade to Xenial
> we'll upgrade erlang and rabbit and hope it goes away. I'll also
> probably tweak the settings on retention of events then too.
>
> Also for the record the GC doesn't seem to help at all.
>
> On Jul 5, 2016 11:05 AM, "Joshua Harlow"  > wrote:
>
> Hi ops and dev-folks,
>
> We over at godaddy (running rabbitmq with openstack) have been
> hitting an issue that has been causing the `rabbit_mgmt_db` to consume
> nearly all the process's memory (after a given amount of time),
>
> We've been thinking that this bug (or bugs?) may have existed for a
> while and our dual-version-path (where we upgrade the control plane
> and then slowly/eventually upgrade the compute nodes to the same
> version) has somehow triggered this memory leaking bug/issue since
> it has happened most prominently on our cloud which was running
> nova-compute at kilo and the other services at liberty (thus using
> the versioned objects code path more frequently due to needing
> translations of objects).
>
> The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511
> with kernel 3.10.0-327.4.4.el7.x86_64 (do note that upgrading to
> 3.6.2 seems to make the issue go away),
>
> # rpm -qa | grep rabbit
>
> rabbitmq-server-3.4.0-1.noarch
>
> The logs that seem relevant:
>
> ```
> **
> *** Publishers will be blocked until this alarm clears ***
> **
>
> =INFO REPORT 1-Jul-2016::16:37:46 ===
> accepting AMQP connection <0.23638.342> (127.0.0.1:51932
>  -> 127.0.0.1:5671 )
>
> =INFO REPORT 1-Jul-2016::16:37:47 ===
> vm_memory_high_watermark clear. Memory used:29910180640
> allowed:47126781542
> ```
>
> This happens quite often, the crashes have been affecting our cloud
> over the weekend (which made some dev/ops not so happy especially
> due to the july 4th mini-vacation),
>
> Looking to see if anyone else has seen anything similar?
>
> For those interested this is the upstream bug/mail that I'm also
> seeing about getting confirmation from the upstream users/devs
> (which also has erlang crash dumps attached/linked),
>
> https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg
>
> Thanks,
>
> -Josh
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> 
> 

Re: [openstack-dev] [release] Invitation to join Hangzhou Bug Smash

2016-06-14 Thread Rochelle Grober
Perhaps the right way to schedule these bug smashes is to do it at the same 
time as the release schedule is determined.  Decide on a fixed time within 
the release cycle (it's been just after M3/feature freeze a few times) and when 
the schedule is put together, the bugsmash is part of the schedule.

By having the release schedule determine the week of the bug smash, we have a 
long timeline to get the planning done and don't have to worry about 
development schedule conflicts.

--Rocky

-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Monday, June 13, 2016 2:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Zhuangzhen; Anni Lai; Liang, Maggie
Subject: Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

On Mon, Jun 13, 2016 at 08:06:50AM +, Wang, Shane wrote:
> Hi, OpenStackers,
> 
> As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack Bug 
> Smash at Hangzhou, China.
> The 1st China Bug Smash was at Shanghai, the 2nd was at Xi'an, and the 3rd 
> was at Chengdu.
> 
> We are constructing the etherpad page for registration, and the date will
> be around July 11 (probably July 6 - 8, but to be determined very soon).

The newton-2 milestone release date is July 15th, so you certainly *don't*
want the event during that week. IOW, the 8th July is the latest you should
schedule it - don't let it slip into the next week starting July 11th, as
during the week of the n-2 milestone focus of the teams will be almost
exclusively on prep for that release, to the detriment of any bug smash
event.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-06-01 Thread Rochelle Grober
Well, you could stick with the wine bottle analogy and go with a bigger size:

Jeroboam
Methuselah
Salmanazar
Balthazar
Nebuchadnezzar

--Rocky

-Original Message-
From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com] 
Sent: Wednesday, June 01, 2016 3:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Haruhiko Katou
Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?

Thanks Shu for providing suggestions.

I wanted the new name to be related to containers as Magnum is also a synonym 
for containers. So I have a few options here.

1. Casket
2. Canister
3. Cistern
4. Hutch

All above options are free to be taken on pypi and Launchpad.
Thoughts?

Regards
Madhuri

-Original Message-
From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com] 
Sent: Wednesday, June 1, 2016 11:11 AM
To: openstack-dev@lists.openstack.org
Cc: Haruhiko Katou 
Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?

I found container-related names and checked whether other projects use them.

https://en.wikipedia.org/wiki/Straddle_carrier
https://en.wikipedia.org/wiki/Suezmax
https://en.wikipedia.org/wiki/Twistlock

These words are not used by other projects on PYPI and Launchpad.

ex.)
https://pypi.python.org/pypi/straddle
https://launchpad.net/straddle


However, since the renaming chance for the N cycle will be handled by the 
Infra team this Friday, we would not meet the deadline. So:

1. use 'Higgins' ('python-higgins' for package name)
2. consider other name for next renaming chance (after a half year)

Thoughts?


Regards,
Shu


> -Original Message-
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Wednesday, June 01, 2016 11:37 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
> Shu,
> 
> According to the feedback from the last team meeting, Gatling doesn't 
> seem to be a suitable name. Are you able to find an alternative name?
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> > Sent: May-24-16 4:30 AM
> > To: openstack-dev@lists.openstack.org
> > Cc: Haruhiko Katou
> > Subject: [openstack-dev] [higgins] Should we rename "Higgins"?
> >
> > Hi all,
> >
> > Unfortunately "higgins" is used by media server project on Launchpad 
> > and CI software on PYPI. Now, we use "python-higgins" for our 
> > project on Launchpad.
> >
> > IMO, we should rename project to prevent increasing points to patch.
> >
> > How about "Gatling"? It's only association from Magnum. It's not 
> > used on both Launchpad and PYPI.
> > Is there any idea?
> >
> > Renaming opportunity will come (it seems only twice in a year) on 
> > Friday, June 3rd. Few projects will rename on this date.
> > http://markmail.org/thread/ia3o3vz7mzmjxmcx
> >
> > And if project name issue will be fixed, I'd like to propose UI 
> > subproject.
> >
> > Thanks,
> > Shu
> >
> >
> >
> __
> _
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] FW: [openstack-dev] Collecting our wiki use cases

2016-05-11 Thread Rochelle Grober
FYI.

If you don't comment, you can't complain when they don't address your needs.

--Rocky

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Wednesday, May 11, 2016 8:28 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Collecting our wiki use cases

(posting as separate thread upon request)

Hi everyone,

At the beginning of OpenStack we have been using wiki.openstack.org as 
our default community lightweight information publication platform. 
There were/are lots of things in there, reference information that a 
couple people struggled to keep up to date (and non-vandalized), old 
pages referencing processes or projects long gone, meeting agendas and 
minutes, etc. That created a mix of up-to-date reference information and 
completely outdated random information that is confusing to use 
(especially for newcomers to our community) and that Google search 
indiscriminately provides links to.

Over the last two years, various groups have worked to push reference 
information out of the wiki, to proper documentation guides (like the 
infra guide or the project team guide) or to peer-reviewed specific 
reference websites (like security.o.o or releases.o.o). These efforts to 
move reference content to other tools and websites where appropriate 
will continue.

There are still a lot of use cases for which the wiki is a good 
solution, though, and it is likely that we'll need a lightweight 
publication platform like a wiki to cover those use cases indefinitely. 
Since it's difficult to have a complete view of what the wiki is 
currently used for, I'd like to collect information from current wiki 
users, to make sure we have the complete picture of wiki use cases as we 
discuss adding new tools to our toolbelt.

So, if you use the wiki as part of your OpenStack work, make sure to 
check if your use case is mentioned on the etherpad at:

https://etherpad.openstack.org/p/wiki-use-cases

If not, please add your use case, together with the URL of such a wiki 
page to that etherpad.

Thanks in advance,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] Timeframe for naming the P release?

2016-05-02 Thread Rochelle Grober
But, the original spelling of the landing site is Plimoth Rock.  There were 
still highway signs up in the '70s directing folks to "Plimoth Rock".

--Rocky
Who should know about rocks ;-)

-Original Message-
From: Brian Haley [mailto:brian.ha...@hpe.com] 
Sent: Monday, May 02, 2016 3:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Timeframe for naming the P release?

On 05/02/2016 02:53 PM, Shamail Tahir wrote:
> Hi everyone,
>
> When will we name the P release of OpenStack?  We named two releases
> simultaneously (Newton and Ocata) during the Mitaka release cycle.  This gave 
> us
> the names for the N (Mitaka), N+1 (Newton), and N+2 (Ocata) releases.
>
> If we were to vote for the name of the P release soon (since the location is 
> now
> known) we would be able to have names associated with the current release 
> cycle
> (Newton), N+1 (Ocata), and N+2 (P).  This would also allow us to get back to
> only voting for one name per release cycle but consistently have names for N,
> N+1, and N+2.

Is there really going to be an option besides Plymouth?  I remember something 
important happened there in 1620 ;-)

https://en.wikipedia.org/wiki/Plymouth,_Massachusetts

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Austin Ops meetup session: Taxonomy of Failure -- please comment!

2016-04-20 Thread Rochelle Grober
The Ops session on the taxonomy of failure can use some input even before the 
session itself!

The etherpad is here:

https://etherpad.openstack.org/p/AUS-ops-Taxonomy-of-Failures

And has a brief outline on some of the info we'd like to gather.  There is also 
a google spreadsheet here:

https://docs.google.com/spreadsheets/d/1sekKLp7C8lsTh-niPHNa2QLk5kzEC_2w_UsG6ifC-Pw/edit?usp=sharing

that has a couple of examples already in it that would be a great way to 
collect data.

Please go out to the etherpad, read and think about the issue if you are 
interested, and comment ahead if possible.  It will provide more material for 
the discussion.  It also is a way to provide your input even if you can't be 
at the summit.

Thanks for your time and consideration and see some of you there!

--Rocky
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Congress] Issues with Tox testing

2016-04-15 Thread Rochelle Grober
Bryan,

Check out refstack.openstack.org and https://github.com/openstack/refstack

The refstack project provides a client which enables anyone to run tempest 
tests on their own clouds.  It is flexible, with options for selecting all or 
specific test sets, or specific tests and also has an option to specify the 
build SHA.  Although its first purpose has been to provide the community with a 
way to run and track interop tests on their clouds, it has been designed with 
flexibility and extensibility in mind.

If you want to ask specific questions about the project, the team can be found 
on IRC in #refstack and in their weekly meetings in #openstack-meeting-alt, 
on Mondays at 19:00 UTC.  And, 
of course, you can always ask questions on the dev list by including [refstack] 
in the subject.

I believe Rally also provides options for running tempest tests on clouds of 
your choosing.

--Rocky
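
For the curious, a typical run looks something like this (a sketch from memory 
of the client's README; treat the flags as assumptions and check 
./refstack-client --help):

```
git clone https://github.com/openstack/refstack-client
cd refstack-client && ./setup_env     # builds a venv with tempest in it
./refstack-client test -c ~/tempest.conf -v --test-list my-defcore-list.txt
```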

From: Bryan Sullivan [mailto:bls...@hotmail.com]
Sent: Monday, April 11, 2016 10:13 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Congress] Issues with Tox testing

Hi Anusha,

That helps. Just one more question: in Liberty (which I'm currently based upon) 
have the tempest tests been run outside of devstack deployments, i.e. in an 
actual OpenStack deployment? The guide you reference mentions devstack but it's 
not clear that the same process applies outside devstack:

e.g. "To list all Congress test cases, run command in /opt/stack/tempest:" 
references the "/opt/stack" folder which is not created outside of devstack 
environments. Thus to run them in a full OpenStack deployment, do I need to 
install tempest and create an "/opt/stack/tempest" folder to which the tests 
are copied, on the same server where Congress is installed?

I'll try Mitaka soon but I expect to have the same question there: basically, 
are the tempest tests expected to be usable outside a devstack deploy?

I guess I could just try it, but I don't want to waste time if this is not 
designed to be used outside devstack environments.

Thanks,
Bryan Sullivan

Date: Fri, 8 Apr 2016 09:01:29 +0530
From: anusha.ii...@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Congress] Issues with Tox testing
Hi Bryan,

tox -epy27 doesn't run tempest tests, that is, the tests mentioned in 
https://github.com/openstack/congress/tree/stable/liberty/contrib/tempest; 
it runs only unit tests, the tests present in 
https://github.com/openstack/congress/tree/stable/liberty/congress/tests .

To run tempest tests, you need to manually copy the files to tempest and run 
the tests as mentioned in following readme 
https://github.com/openstack/congress/blob/stable/liberty/contrib/tempest/README.rst
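
In rough terms the manual flow is (a sketch only; the README above is 
authoritative, the paths here are assumptions, and /opt/stack is just 
devstack's convention, not a requirement):

```
git clone -b stable/liberty https://git.openstack.org/openstack/tempest
git clone -b stable/liberty https://git.openstack.org/openstack/congress
cp -r congress/contrib/tempest/tempest/* tempest/tempest/   # mirror the tree
cd tempest && testr init
testr list-tests | grep -i congress    # confirm they show up, then run them
```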

Mitaka supports tempest plugin, so manually copying tests to tempest can be 
avoided if you are using mitaka.

Hope I clarified your question.


Best Regards,
Anusha

On 8 April 2016 at 08:51, Bryan Sullivan 
> wrote:
OK, somehow I did not pick up on that, or dropped it along the way of 
developing the script. Thanks for the clarification, also that Tempest is not 
required. I should have clarified that I'm using stable/liberty as the base. I 
will be moving to stable/mitaka soon, as part of the OPNFV Colorado release 
development.

One additional question then - are the tests run by "tox -epy27" the same as 
the tests in the folder 
https://github.com/openstack/congress/tree/stable/liberty/contrib/tempest? If 
not, how are those tests supposed to be run for a non-devstack deploy (I see 
reference to devstack in the readme)?

I see that the folders have been reorganized for mitaka. My question is per the 
goal to include as much of the Congress tests as possible in the OPNFV CI/CD 
process. Not that I expect any to fail, I just want OPNFV to leverage the full 
test suite. If for liberty that's best left as the tests run by the tox 
command, then that's OK.

Thanks,
Bryan Sullivan

Date: Thu, 7 Apr 2016 17:11:36 -0700
From: ekcs.openst...@gmail.com
To: openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [Congress] Issues with Tox testing
Thanks for the feedback, Bryan. Glad you got things working!

1. The instructions asking to install those packages are missing from kilo 
(we'll fix that), but they have been there since liberty. Was it perhaps 
unclear because the line is too long?
* Additionally:


$ sudo apt-get install git gcc python-dev libxml2 libxslt1-dev libzip-dev 
mysql-server python-mysqldb build-essential libssl-dev libffi-dev
2. Tempest should not be required by the tox tests.

Thanks!

From: Bryan Sullivan >
Reply-To: "OpenStack Development 

[Openstack-operators] [Config] Cross project spec that organizes/changes how configuration options are provided at deploy/upgrade

2016-04-07 Thread Rochelle Grober
There is currently a cross project specification under review that changes how 
and where the config files are written for an OpenStack installation.  If 
config options and how they are presented to you are important to you, I 
strongly suggest you go out, read the spec/review and comment on both the 
aspects that work for you and the aspects that don't.  Be clear in 
explaining why something doesn't work, as the developers believe what they are 
proposing is what you would want.  You also might want to read the comments on 
version 6 of this review.

--Rocky

The review is here:
https://review.openstack.org/#/c/295543/

The commit message is:
This spec looks at how to get a better configuration option experience
for operators and (to a lesser extent) developers.
The summary is:
Operators currently have to grapple with hundreds of configuration options
across tens of OpenStack services across multiple projects. In many cases
they also need to be aware of the relationships between the options.
In particular, this makes the initial deployment a very daunting task.

This spec proposes a way projects can use oslo.config in a way that leads
to better experience for operators trying to understand all the configuration
options, and helps developers better express the intent of each configuration
option.

This spec re-uses and builds on the ideas in this recent Nova specification:
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/centralize-config-options.html
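
For context, the operator-facing artifact this spec ultimately targets is the 
annotated sample config; most projects generate it with oslo-config-generator, 
roughly like so (a sketch; it assumes a nova checkout with the usual tox 
target wired up, and the sample path may differ per project):

```
git clone https://git.openstack.org/openstack/nova && cd nova
tox -egenconfig                   # wraps oslo-config-generator
less etc/nova/nova.conf.sample    # hundreds of documented options
```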

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [OpenStack-DefCore] [defcore][glance] Glare not defcore ready

2016-04-01 Thread Rochelle Grober
Hi folks.

I'm chiming in here from a systems engineering perspective.  I recently 
discovered that OpenStack-client is trying to build cross-project consistency 
into its design.  As such, it is opinionated, but this is good.  The Glance 
team might consider also consulting the OSC team to ensure API structure 
compatibility with the OpenStack client.

Yeah, this is a lot to roll up, but the more consistency across projects, the 
friendlier it is to deployers and users, so heading that way soonest, but 
when/where it makes sense would be a *good thing*.

--Rocky


-Original Message-
From: Nikhil Komawar [mailto:nik.koma...@gmail.com] 
Sent: Friday, April 01, 2016 10:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Mikhail Fedosin; defcore-commit...@lists.openstack.org; Flavio Percoco
Subject: Re: [OpenStack-DefCore] [openstack-dev] [defcore][glance] Glare not 
defcore ready

Thank you for your emails Flavio and Mike. It's really good to get
clarity out there.

Hence, yes, the intent of the DefCore meeting was to get more "clarity"
on the entire situation and to make sure that the project proceeds with
compliant standards. However, meetings can be informal and if anyone
perceived anything differently, I would like to apologize from my end.
I'm happy to clarify more things. Please feel free to ping me, send me
email or ask for a chat if you think that's necessary.

One important thing that I wanted to clarify: for Newton, our top
priorities are 1) working with the Nova team for adoption of the Glance
v2 API and 2) moving ahead, fast, on the import refactor work. Both of
these are strongly tied to API hardening and to ensuring we support
interoperability requirements.

Looking forward to more collaboration with the DefCore committee in the
future.

On 4/1/16 1:03 PM, Mikhail Fedosin wrote:
> Hi Flavio! Thank you for the clarification.
>
> I do realize that I missed both meetings and that logs from one of
> them are not
>
> complete. I apologize if I've misinterpreted the intentions here.
> I do think
>
> engaiging with DefCore as early in the process as possible is good
> but I'd also
>
> like to clarify the intentions here before this escalates (again)
> into more
>
> confusion about what Glance's future looks like.
>
>
I want to tell you that the intention of the DefCore meeting was not to
add more confusion to the work; rather, it was to get clarity on all the
constraints that we are stuck with. Currently we intend to keep our
focus on interoperability issues this cycle - API hardening being our
first priority, along with early adoption from Murano and the Community
App Catalog.
>
> And also I want to assure the community that Glare is being developed
> consistent with the API WG principles and in such a way that it could
> be included in DefCore at the appropriate time.
>
> Best regards,
> Mikhail Fedosin
>
> On Fri, Apr 1, 2016 at 6:02 PM, Flavio Percoco  > wrote:
>
> Greetings,
>
> I missed yday's Glance meeting but I went ahead and read the logs.
> While I was at it, I read a sentence from Erno (under the Glare
> updates topic) that caught my eye:
>
> 14:06:27  About that. I got couple of pings last night asking
> wtf is going on. Could we please stop selling Glare as replacement
> for Glance at least until we have a) stable API and b) some level of
> track record/testing that it actually is successfully working
>
> I went ahead and looked for the defcore meeting logs[0] (btw, seems
> like the bot died during the meeting) to get a better understanding
> of what Erno meant (I assumed the pings he mentioned came from the
> meeting and then confirmed it).
>
> From the small piece of conversation I could read, and based on the
> current status of development, priorities and support, I noticed a
> few "issues" that I believe are worth raising:
>
> 1. Glare's API is under discussion and it's a complementary service
> for Glance. [1]
> 2. Glare should not be a required API for every cloud, whereas Glance
> is and it should be kept that way for now.
> 3. Glare is not a drop-in replacement for Glance and it'll need way
> more discussions before that can happen.
>
> I do realize that I missed both meetings and that logs from one of
> them are not complete. I apologize if I've misinterpreted the
> intentions here. I do think engaging with DefCore as early in the
> process as possible is good but I'd also like to clarify the
> intentions here before this escalates (again) into more confusion
> about what Glance's future looks like.
>
> So, to summarize, I don't think Glare should be added in DefCore
> in the near

Re: [Openstack-operators] [openstack-dev] Floating IPs and Public IPs are not equivalent

2016-03-31 Thread Rochelle Grober
Cross posting to the Ops ML as one/some of them might have a test cloud like 
this.

Operators:

If you respond to this thread, please only respond to the openstack-dev list?

They could use your input ;-)

--Rocky

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Thursday, March 31, 2016 12:58 PM
To: openstack-...@lists.openstack.org
Subject: Re: [openstack-dev] Floating IPs and Public IPs are not equivalent

On 03/31/2016 01:23 PM, Monty Taylor wrote:
> Just a friendly reminder to everyone - floating IPs are not synonymous
> with Public IPs in OpenStack.
> 
> The most common (and growing, thank you to the beta of the new
> Dreamcompute cloud) configuration for Public Clouds is to directly assign
> public IPs to VMs without requiring a user to create a floating IP.
> 
> I have heard that the require-floating-ip model is very common for
> private clouds. While I find that even stranger, as the need to run NAT
> inside of another NAT is bizarre, it is what it is.
> 
> Both models are common enough that pretty much anything that wants to
> consume OpenStack VMs needs to account for both possibilities.
> 
> It would be really great if we could get the default config in devstack
> to be to have a shared direct-attached network that can also have a
> router attached to it and provider floating ips, since that scenario
> actually allows interacting with both models (and is actually the most
> common config across the OpenStack public clouds)

If someone has the pattern for what that config looks like,
especially if it could work on single interface machines, that would be
great.

The current defaults in devstack are mostly there for legacy reasons
(and because they work everywhere), and for activation energy to getting
a new robust work everywhere setup.

-Sean

-- 
Sean Dague
http://dague.net
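
As a strawman for that pattern, something along these lines in local.conf is 
roughly what is being asked for (a hypothetical, untested sketch; these are 
standard devstack variables, but the exact recipe for the dual model should be 
checked against the devstack networking docs):

```
[[local|localrc]]
PUBLIC_INTERFACE=eth0              # single-NIC hosts may need a veth/bridge
FLOATING_RANGE=203.0.113.0/24
PUBLIC_NETWORK_GATEWAY=203.0.113.1
Q_USE_PROVIDERNET_FOR_PUBLIC=True  # let VMs attach to the public net directly
```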

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] Floating IPs and Public IPs are not equivalent

2016-03-31 Thread Rochelle Grober
Cross posting to the Ops ML as one/some of them might have a test cloud like 
this.

Operators:

If you respond to this thread, please only respond to the openstack-dev list?

They could use your input ;-)

--Rocky

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Thursday, March 31, 2016 12:58 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Floating IPs and Public IPs are not equivalent

On 03/31/2016 01:23 PM, Monty Taylor wrote:
> Just a friendly reminder to everyone - floating IPs are not synonymous
> with Public IPs in OpenStack.
> 
> The most common (and growing, thank you to the beta of the new
> Dreamcompute cloud) configuration for Public Clouds is to directly assign
> public IPs to VMs without requiring a user to create a floating IP.
> 
> I have heard that the require-floating-ip model is very common for
> private clouds. While I find that even stranger, as the need to run NAT
> inside of another NAT is bizarre, it is what it is.
> 
> Both models are common enough that pretty much anything that wants to
> consume OpenStack VMs needs to account for both possibilities.
> 
> It would be really great if we could get the default config in devstack
> to be to have a shared direct-attached network that can also have a
> router attached to it and provider floating ips, since that scenario
> actually allows interacting with both models (and is actually the most
> common config across the OpenStack public clouds)

If someone has the pattern for what that config looks like,
especially if it could work on single interface machines, that would be
great.

The current defaults in devstack are mostly there for legacy reasons
(and because they work everywhere), and for activation energy to getting
a new robust work everywhere setup.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [User-committee] Announcing the Non-ATC Recognition Working Group

2016-03-30 Thread Rochelle Grober
Left a comment on the doodle.

Could you double-check the dates?  Most of the days and times listed are 
already past.

--Rocky

From: Shamail Tahir [mailto:itzsham...@gmail.com]
Sent: Wednesday, March 30, 2016 10:36 AM
To: openstack-operators; commun...@lists.openstack.org; 
user-commit...@lists.openstack.org
Subject: Re: [User-committee] Announcing the Non-ATC Recognition Working Group

Hi everyone,

Friendly reminder:  The doodle poll[1] for the Non-ATC recognition WG meeting 
time will be closing at March 31st, 2100 UTC (tomorrow).

Please vote if you plan to join us!

[1] http://doodle.com/poll/7r5khrefyx7wysi4

Thanks,
Shamail

On Mon, Mar 28, 2016 at 3:33 PM, Shamail Tahir 
> wrote:
Hi everyone,

We had some great discussion on the mailing lists on how to recognize operators 
and other contributors in the OpenStack community who don't qualify for ATC 
(Active Technical Contributor) status earlier this month[1].  We revisited this 
topic at the last User Committee meeting[2] and decided to start a short-term 
working group with one objective: to help define the User Committee 
constituency.  This will help us determine who should be included in the {not 
yet named} contributor program by the User Committee and how to qualify.

Maish and I volunteered to help start the conversations, document the 
decisions, and provide updates on the outcome to the User Committee.  We have 
created a wiki[3] for this new working group (which will be disbanded once we 
have reached our objective) and there is an active doodle poll[4] to determine 
the time for our weekly meeting, which will be held in an IRC channel.  The 
poll shows the start dates as this week but the please just vote for the 
day/time (not date) since we will not begin our meeting until next week (week 
of 4/4).

Please volunteer if you want to help us develop the membership definition for 
this new designation.  The user committee will have a lot more work ahead of 
them to fully build out a new charter and/or program but this activity will be 
extremely helpful as a foundational building block.

[1] http://lists.openstack.org/pipermail/community/2016-March/001422.html
[2] http://eavesdrop.openstack.org/meetings/uc/2016/uc.2016-03-14-19.00.log.html
[3] https://wiki.openstack.org/wiki/NonATCRecognition
[4] http://doodle.com/poll/7r5khrefyx7wysi4

--
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [nova] Wishlist bugs == (trivial) blueprint?

2016-03-19 Thread Rochelle Grober
(Inline because the mail formatted friendly this time)

From: Tim Bell March 17, 2016 11:26 AM:
On 17/03/16 18:29, "Sean Dague"  wrote:

>On 03/17/2016 11:57 AM, Markus Zoeller wrote:
>
>> Suggested action items:
>> 
>> 1. I close the open wish list items older than 6 months (=138 reports)
>>and explain in the closing comment that they are outdated and the 
>>ML should be used for future RFEs (as described above).
>> 2. I post on the openstack-ops ML to explain why we do this
>> 3. I change the Nova bug report template to explain this to avoid more
>>RFEs in the bug report list in the future.

Please take a look at how Neutron is doing this.  [1] is their list of RFEs. 
[2] is the ML post Kyle provided to document how Ops and other users can submit 
RFEs without needing to know how to submit specs or code to OpenStack Neutron. 
I'll let Kyle post on how successful the process is, if he wants to.

The point here is that Neutron uses wishlist combined with [RFE] in the title 
to identify Ops and user requests.  This identifies items as Ops/user asks that 
these communities consider important.  Also, the point is that, yes, you post 
the RFE on the ops list, but you open the RFE bug and allow comments and voting 
there.  The bug system does a much better job of keeping track of the request 
and Ops votes once it exists.  Plus, once Ops and others know about the 
lightweight process, they'll know where to go looking so they can vote/add 
comments.  Please don't restrict RFEs to the mailing list.  It's a great way 
to lose them.  So my suggestion here is:

1.  Close the wishlist (all of it???) and post in each that if it's a new 
feature the submitter thinks is useful to himself and others, resubmit with 
[RFE] in the title, priority wishlist, and a pointer to the Neutron docs.
2.  Post to openstack-ops and usercommittee why, and ask them to discuss on the 
ML and review all [RFE]s that they submit (before or after, but if the bug 
number is on ML, they can vote on it and add comments, etc.)
3. Change the template to highlight/require the information needed to move 
forward with *any* submitted bug by dev.

>> 4. In 6 months I double-check the rest of the open wishlist bugs
>>if they found developers, if not I'll close them too.
>> 5. Continously double-check if wishlist bug reports get created
>>
>> Doubts? Thoughts? Concerns? Agreements?
>
>This sounds like a very reasonable plan to me. Thanks for summarizing
>all the concerns and coming up with a pretty balanced plan here. +1.
>
>   -Sean

I’d recommend running it by the -ops* list along with the RFE proposal. I think 
many of the cases
had been raised since people did not have the skills/know how to proceed.

Engaging with the ops list would also bring in the product working group who 
could potentially
help out on the next step (i.e. identifying the best places to invest for RFEs) 
and the other
topical working groups (e.g. Telco, scientific) who could help with 
prioritisation/triage.

I don’t think that a launchpad account on its own is a big problem. Thus, I 
could also see an approach
where a blueprint was created in launchpad with some reasonably structured set 
of chapters. My
personal experience was that the challenges came later, in trying to get 
the review matched up with the right bp directories.

There is a big benefit to good visibility in the -ops community for RFEs 
though. Quite often, the
features are implemented but people did not know how to find them in the doc 
(or maybe it's a doc bug).
Equally, the OSops scripts repo can give people workarounds while the requested 
feature is in the
priority queue.

It would be a very interesting topic to kick off in the ops list and then have 
a further review in
Austin to agree how to proceed.

Tim 

You can review how the [RFE] experiment is going in six weeks or more.  We can 
also get an Ops session specifically for reviewing/commenting on RFEs and/or 
hot Nova bugs. I think you'd get good attendance.  I'd be happy to moderate, or 
be the secretary for that session.

I really think if we can get Ops to use the RFE system that Neutron already 
employs, you'll see fewer duplicates, more participation and better feedback 
across all bugs from Ops (and others).  The Ops folks will participate 
enthusiastically as long as they get feedback from devs and/or see progress in 
getting their needs addressed.  If you post the mail and the process (and an 
example of what a good RFE might look like) to the ops list soon, there can be 
a good list of RFEs by the summit.  Ops can then discuss and start the 
conversation on just what they need, and on what Nova can provide along those 
lines in Newton, taking Nova's other Newton priorities into account.  Plus, you 
will get a clear picture of the new features folks find they need as they roll 
out the newer releases.

--Rocky


[1] 

Re: [openstack-dev] [stable] Proposing Tony Breeds for stable-maint-core

2016-03-19 Thread Rochelle Grober
+1  Here's another non-counting vote and cheer!

--Rocky


  +1 for Tony! (my vote does not count, but still wanted to cheer!)

  -- Dims

On Fri, Mar 18, 2016 at 4:38 PM, Anita Kuno  wrote:
> On 03/18/2016 04:11 PM, Matt Riedemann wrote:
>> I'd like to propose tonyb for stable-maint-core. Tony is pretty much my
>> day to day guy on stable, he's generally in every stable team meeting
>> (which is not attended well so I appreciate it), and he's as proactive
>> as ever on staying on top of gate issues when they come up, so he's well
>> deserving of it in my mind.
>>
>> Here are review stats for stable for the last 90 days (as defined in the
>> reviewstats repo):
>>
>> http://paste.openstack.org/show/491155/
>>
>> Tony is also the latest nova-stable-maint core and he's done a great job
>> there (as expected) and is very active, which is again much appreciated.
>>
>> Please respond with ack/nack.
>>
> My vote probably doesn't count, but I can't pass up the opportunity to
> say it is nice to see Tony's hard work being acknowledged and appreciated.
>
> I appreciate it.
>
> Thanks Matt,
> Anita.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-19 Thread Rochelle Grober
(Sorry for the top post.  It was this or bottom post because of company choice 
of email systems)

Integration tests, corner cases, negative tests.  Lots of names that don't have 
clear definitions in this discussion.  But it seems like the collection of 
"negative tests" includes both functional and integrated tests.  Maybe we should 
analyze the tests to see where they belong.  So, perhaps we can ask an important 
question here:

* Does the test provoke a change of behavior outside the designated 
project, depending on the output(s) from the test action?  In other words, if 
the test is attempting to generate a 4xx response, does it cause another 
project or client to act differently than if it returns a 2xx or a different 
4xx response?  An example of this would be the project that receives the 
response attempting a retry.
This should qualify as a tempest test because it drives actions from multiple 
projects.  Yes, it is a "negative" api test, but run in an integrated system, 
it is also a normal integration test that demonstrates the integrated system 
behaves appropriately on expected error conditions from other projects.  Or it 
throws an exception or an error message and we know we have a problem with 
either the api or the misbehaving project.

Andrea aptly pointed out that these tests may very well be interoperability 
tests that should be in a gate somewhere.  After all, what are APIs, but 
defined interfaces between two components.  Sometimes it's between human and 
software, but most often in OpenStack, it's between two pieces of software 
written by different groups of developers.  Hence, the API *is* the integration 
point and ideally, all request/response options should be validated, whether 
happy path or not so happy.
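Here is a minimal sketch of such a test as plain Python (the endpoint, token 
and test name are made up for illustration; a real Tempest test would use its 
own service clients and credentials):

    # A "negative" API test that doubles as an interoperability check:
    # the exact error status is the contract other components key off of.
    import requests

    def test_show_missing_server_returns_404():
        resp = requests.get(
            "http://cloud.example.com:8774/v2.1/servers/no-such-id",  # hypothetical
            headers={"X-Auth-Token": "dummy-token"},                  # hypothetical
        )
        # A client that sees 404 may give up cleanly; one that sees a 5xx
        # may retry.  Pinning the status code pins the integration behavior.
        assert resp.status_code == 404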

And, oh by the way, "negative" responses are crucial interoperability points 
for DefCore.

--Rocky

From: Valeriy Ponomaryov March 16, 2016 11:25 PM
The main reason to split negative tests out, as I see it, is an attempt to move 
"corner, not integration" tests out of the Tempest tree. If so, then "negative" 
tests are not the proper choice, I think.
Tempest has lots of positive "corner" test cases. So, if we move something, then 
we need to move exactly the "corner" tests and leave only "integration" tests of 
any kind - whether negative or positive.
That is to say, doing so, all single-project-related tests will be in their own 
repos as plugins and only integration tests will remain in Tempest.

Valeriy

On Thu, Mar 17, 2016 at 7:31 AM, Qiming Teng 
> wrote:
>
> I'd love to see this idea explored further. What happens if Tempest
> ends up without tests, as a library for shared code as well as a
> centralized place to run tests from via plugins?
>

Also curious about this. It seems weird to separate the 'positive' and
the 'negative' ones, assuming those patches are mostly contributed by
the same group of developers.

Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] config options help text improvement: current status

2016-03-02 Thread Rochelle Grober
Don't quote me on this, but the tool that generates the dev docs is the one the 
docs team uses to generate the config ref.

And they have been looped in on the upcoming improvements.

--Rocky

-Original Message-
From: Matthew Treinish [mailto:mtrein...@kortar.org] 
Sent: Wednesday, March 02, 2016 3:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] config options help text improvement: 
current status

On Thu, Mar 03, 2016 at 10:24:28AM +1100, Tony Breeds wrote:
> On Wed, Mar 02, 2016 at 06:11:47PM +, Tim Bell wrote:
>  
> > Great. Does this additional improved text also get into the configuration 
> > guide documentation somehow ? 
> 
> It's certainly part of tox -egenconfig, I don't know about docs.o.o

The sample config file is generated (doing basically the same thing as the tox
job) for nova's devref:

http://docs.openstack.org/developer/nova/sample_config.html

-Matt Treinish

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [nova][neutron] What are your cells

2016-02-25 Thread Rochelle Grober
There is also a bit of info on cells from the Manchester meetup in the Large 
Deployment team etherpad:

https://etherpad.openstack.org/p/MAN-ops-Large-Deployment-Team

I've added the link to the cells etherpad.

--Rocky
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Draft Agenda for MAN Ops Meetup (Feb 15, 16)

2016-02-02 Thread Rochelle Grober
Hey, folks.  Two things about the agenda.

I can handle the "OSOps - what is it, where is it going, what you can do" 
presentation.  There might be one other from the group, but if not, it's mine.

Also, instead of the OSOps working session, I'd like to propose:

A session on config opts and what the Ops guys need/want with regard to 
consolidation/distribution.  Also, give a briefing on what the oslo project is 
pushing for them with regard to documentation and what the Nova opts working 
group is doing.  This is an important precursor for some of the notifications, 
logging and metrics work that still needs to be addressed.  And I can 
present/moderate this session.

--Rocky
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [oslo] nominating Ronald Bradford for oslo-core

2016-01-29 Thread Rochelle Grober
I'm not a voting member of the Oslo team, but a BIG

+1

From me.

--Rocky

-Original Message-
From: Joshua Harlow [mailto:harlo...@fastmail.com] 
Sent: Thursday, January 28, 2016 11:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo] nominating Ronald Bradford for oslo-core

Let's do it,

+1

-Josh

On 01/28/2016 05:41 PM, Davanum Srinivas wrote:
> +1 from me!
>
> -- Dims
>
> On Thu, Jan 28, 2016 at 4:00 PM, Doug Hellmann  wrote:
>> I am nominating Ronald Bradford (rbradfor) for oslo-core.
>>
>> Ronald has been working this cycle to learn about oslo.context,
>> oslo.log, and oslo.config. He anticipates picking up the much-needed
>> work on the app-agnostic-logging blueprint, and has already started
>> making incremental changes related to that work.  He has also
>> contributed to the documentation generation and sample generator
>> in oslo.config. His understanding of the code, our backwards-compatibility
>> requirements, and the operational needs related to configuration
>> and logging will make him a valuable addition to the team.
>>
>> Please indicate yea or nay with the usual +1/-1 vote here.
>>
>> Doug
>>
>> http://stackalytics.com/?release=all_type=all_id=ronaldbradford
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of x-openstack-request-id

2016-01-27 Thread Rochelle Grober
At the Tokyo summit, there was a working session that addressed how to log the 
request-id chain.  The etherpad for that is [0]

A spec needs to be written and implementation details need some hashing out, 
but the approach should provide a way to track the originating request through 
each logged transition, even through forking.

The solution is essentially a triplet of {original RID, previous RID, 
current RID}.  The first and last steps in the process would have only two 
fields.
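As a rough illustration only (the spec is still unwritten, so every name below 
is invented), a service forwarding a request could derive and log the triplet 
along these lines:

    import logging
    import uuid

    def next_hop_ids(inbound):
        """Build the {original, previous, current} RID triplet for the next hop."""
        return {
            "original_request_id": inbound.get("original_request_id",
                                               inbound["request_id"]),
            "previous_request_id": inbound["request_id"],
            "request_id": "req-%s" % uuid.uuid4(),
        }

    logging.basicConfig(
        level=logging.INFO,
        format="%(levelname)s [%(original_request_id)s %(previous_request_id)s "
               "%(request_id)s] %(message)s")
    LOG = logging.getLogger(__name__)

    # At the first hop, original == previous -- the "only two fields" case.
    inbound = {"request_id": "req-originating-call"}
    LOG.info("forwarding request to the next service", extra=next_hop_ids(inbound))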

So, it's a matter of getting the spec written and approved, then implementing 
it in Oslo and integrating with the new RID calls.

And, yeah, I should have done the spec long ago...

--Rocky

[0] https://etherpad.openstack.org/p/Mitaka_Cross_Project_Logging

-Original Message-
From: Andrew Laski [mailto:and...@lascii.com] 
Sent: Wednesday, January 27, 2016 3:21 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of 
x-openstack-request-id



On Wed, Jan 27, 2016, at 05:47 AM, Kuvaja, Erno wrote:
> > -Original Message-
> > From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> > Sent: Wednesday, January 27, 2016 9:56 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [nova][glance][cinder][neutron]How to make
> > use of x-openstack-request-id
> > 
> > 
> > 
> > On 1/27/2016 9:40 AM, Tan, Lin wrote:
> > > Thank you so much, Erno. This really helps me a lot!!
> > >
> > > Tan
> > >
> > > *From:*Kuvaja, Erno [mailto:kuv...@hpe.com]
> > > *Sent:* Tuesday, January 26, 2016 8:34 PM
> > > *To:* OpenStack Development Mailing List (not for usage questions)
> > > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > > make use of x-openstack-request-id
> > >
> > > Hi Tan,
> > >
> > > While the cross-project spec was discussed, Glance already had an
> > > implementation of request ids in place. At the time of the Glance
> > > implementation we assumed that one request id is desired through the
> > > chain of services and we implemented the req id to be accepted as part
> > > of the request. This was mainly driven to have same request id through
> > > the chain between glance-api and glance-registry but as the same code
> > > was used in both api and registry services we got this functionality
> > > across glance.
> > >
> > > The cross project discussion turned this approach down and decided
> > > that only new req id will be returned. We did not want to utilize 2
> > > different code bases to handle req ids in glance-api and
> > > glance-registry, nor we wanted to remove the functionality to allow
> > > the req ids being passed to the service as that was already merged to
> > > our API. Thus, if requests are passed to the services without a req id
> > > defined, they behave the same way (apart from nova having a different
> > > header name), but with glance the request maker has the liberty to
> > > specify the request id they want to use (within configured length limits).
> > >
> > > Hopefully that clarifies it for you.
> > >
> > > -Erno
> > >
> > > *From:*Tan, Lin [mailto:lin@intel.com]
> > > *Sent:* 26 January 2016 01:26
> > > *To:* OpenStack Development Mailing List (not for usage questions)
> > > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > > make use of x-openstack-request-id
> > >
> > > Thanks Kebane, I test glance/neutron/keystone with
> > > ``x-openstack-request-id`` and find something interesting.
> > >
> > > I am able to pass ``x-openstack-request-id``  to glance and it will
> > > use the UUID as its request-id. But it failed with neutron and keystone.
> > >
> > > Here is my test:
> > >
> > > http://paste.openstack.org/show/484644/
> > >
> > > It looks like because keystone and neutron are using
> > > oslo_middleware:RequestId.factory and in this part:
> > >
> > >
> > > https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/request_id.py#L35
> > >
> > > It will always generate a UUID and append it to the response as the
> > > ``x-openstack-request-id`` header.
> > >
> > > My question is: should we accept an externally passed request-id as the
> > > project's own request-id, or should each project have its own unique
> > > request-id?
> > >
> > > In other words, which one is the correct way, glance or neutron/keystone?
> > > There must be something wrong with one of them.
> > >
> > > Thanks
> > >
> > > B.R
> > >
> > > Tan
> > >
> > > *From:*Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
> > > *Sent:* Wednesday, December 2, 2015 2:24 PM
> > > *To:* OpenStack Development Mailing List
> > > (openstack-dev@lists.openstack.org
> > > )
> > > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > > make use of x-openstack-request-id
> > >
> > > Hi Tan,
> > >
> > > Most of the OpenStack RESTful API returns `X-Openstack-Request-Id` in
> > > the API response header but this request id is not available to the
> > > caller from the python client.
> > 
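For reference, the always-generate-a-new-UUID behavior discussed above amounts 
to something like this simplified sketch (this is not the actual 
oslo.middleware code, just a compressed illustration of it):

    import uuid

    import webob.dec

    HTTP_RESP_HEADER = "x-openstack-request-id"

    class RequestId(object):
        """Mint a fresh request id per request; ignore any id the caller sent."""

        def __init__(self, application):
            self.application = application

        @webob.dec.wsgify
        def __call__(self, req):
            req_id = "req-%s" % uuid.uuid4()   # always freshly generated
            req.environ["openstack.request_id"] = req_id
            response = req.get_response(self.application)
            # The new id is echoed back to the caller in the response.
            response.headers[HTTP_RESP_HEADER] = req_id
            return response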

Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Rochelle Grober


Devananda van der Veen, on  January 21, 2016 5:14 PM wrote:

On Wed, Jan 20, 2016 at 9:53 AM, Flavio Percoco 
> wrote:
Greetings,

At the Tokyo summit, we discussed OpenStack's development themes in a
cross-project session. In this session a group of folks started discussing what
topics the overall community could focus on as a shared effort. One of the
things that was raised during this session is the need for cycles to
stabilize projects. This was brought up by Robert Collins again in a meeting[0]
the TC had right after the summit, and not much has been done since.

Now, "stabilization Cycles" are easy to dream about but really hard to do and
enforce. Nonetheless, they are still worth a try or, at the very least, a
thought. I'll try to go through some of the issues and benefits a stabilization
cycle could bring but bear in mind that the lists below are not exhaustive. In
fact, I'd love for other folks to chime in and help building a case in favor or
against this.

Negative(?) effects
===================

- Project won't get new features for a period of time
- Economic impact on developers(?)
- It was mentioned that some folks receive bonuses for landed features
- Economic impact on companies/market because no new features were added (?)
- (?)

Positive effects
================

- Focus on bug fixing
- Reduce review backlog
- Refactor *existing* code/features with cleanups
- Focus on multi-cycle features (if any) and complete those
- (?)

A stabilization cycle, as it was also discussed in the aforementioned
meeting[0], doesn't need to be all or nothing. For instance, it should be
perfectly fine for a project to say that it will dedicate 50% of the
cycle to stabilization and the rest to completing some pending features. Moreover,
each project is free to choose when/if a stabilization cycle would be good for
it or not.

For example, the Glance team is currently working on refactoring the image
import workflow. This is a long term effort that will require at least 2 cycles
to be completed. Furthermore, it's very likely these changes will introduce bugs
and that will require further work. If the Glance team would decide (this is not
an actual proposal... yet :) to use Newton as a stabilization cycle, the team
would be able to focus all its forces on fixing those bugs, completing the
feature and tackling other, long-term, pending issues. In the case of Glance,
this would impact *only glance* and not other projects under the Glance team
umbrella like glanceclient and glance_store. In fact, this would be a perfect
time for the glance team to dedicate time to improving glanceclient and catching
up with the latest server-side changes.

So, the above still sounds quite vague, but that's the idea. This email is not a
formal proposal but a starting point to move this conversation forward. Is this
something other teams would be interested in? Is this something some teams would
be entirely against? Why?

From a governance perspective, projects are already empowered to do this and
they don't (and won't) need to be granted permission to have stabilization
cycles. However, the TC could work on formalizing this process so that teams
have a reference to follow when they want to have one. For example, we would
have to formalize how projects announce they want to have a stabilization cycle
(I believe it should be done before the mid-term of the ongoing cycle).

Thoughts? Feedback?
Flavio


Thanks for writing this up, Flavio.

The topic's come up in smaller discussion groups several times over the last 
few years, mostly with a nod to "that would be great, except the corporations 
won't let it happen".

To everyone who's replied with shock to this thread, the reality is that nearly 
all of the developer-hours which fuel OpenStack's progress are funded directly 
by corporations, whether big or small. Even those folks who have worked in open 
source for a long time, and are working on OpenStack by choice, are being paid 
by companies deeply invested in the success of this project. Some developers 
are adept at separating the demands of their employer from the best interests 
of the community. Some are not. I don't have hard data, but I suspect that most 
of the nearly-2000 developers who have contributed to OpenStack during the 
Mitaka cycle are working on whatever they're working on BECAUSE IT MATTERS TO 
THEIR EMPLOYER.

Every project experiences pressure from companies who are trying to land very 
specific features. Why? Because they're all chasing first-leader advantage in 
the market. Lots of features are in flight right now, and, even though it 
sounds unbelievable, many companies announce those features in their products 
BEFORE they actually land upstream. Crazy, right? Except... IT WORKS. Other 
companies buy their product because they are buying a PRODUCT from some 
company. It happens to contain OpenStack. And it has a bunch of unmerged 
features.

With my 

Re: [openstack-dev] [stable] meeting time proposal

2015-12-16 Thread Rochelle Grober
Any chance you could make the Monday meeting a few hours later?  Japan and 
China are still mostly in bed then, but two hours would allow both to 
participate.

--Rocky

> -Original Message-
> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: Wednesday, December 16, 2015 11:12 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [stable] meeting time proposal
> 
> I'm not entirely sure what the geo distribution is for everyone that
> works on stable, but I know we have people in Europe and some people in
> Australia.  So I was thinking alternating weekly meetings:
> 
> Mondays at 2100 UTC
> 
> Tuesdays at 1500 UTC
> 
> Does that at least sort of work for people that would be interested in
> attending a meeting about stable? I wouldn't expect a full hour
> discussion, my main interests are highlighting status, discussing any
> issues that come up in the ML or throughout the week, and whatever else
> people want to go over (work items, questions, process discussion,
> etc).
> 
> --
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Stable team PTL nominations are open

2015-12-09 Thread Rochelle Grober
Congratulations, Matt!

Condolences to Erno, but he's already said he's still part of the team.

I'm looking forward to the IRC meetings and also to what the team becomes.

--Rocky

> -Original Message-
> From: Anita Kuno [mailto:ante...@anteaya.info]
> Sent: Wednesday, December 09, 2015 6:11 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are
> open
> 
> On 12/09/2015 09:02 AM, Doug Hellmann wrote:
> > Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:
> >> Thierry Carrez wrote:
> >>> Thierry Carrez wrote:
>  The nomination deadline is passed, we have two candidates!
> 
>  I'll be setting up the election shortly (with Jeremy's help to
> generate
>  election rolls).
> >>>
> >>> OK, the election just started. Recent contributors to a stable
> branch
> >>> (over the past year) should have received an email with a link to
> vote.
> >>> If you haven't and think you should have, please contact me
> privately.
> >>>
> >>> The poll closes on Tuesday, December 8th at 23:59 UTC.
> >>> Happy voting!
> >>
> >> Election is over[1], let me congratulate Matt Riedemann for his
> election
> >> ! Thanks to everyone who participated to the vote.
> >>
> >> Now I'll submit the request for spinning off as a separate project
> team
> >> to the governance ASAP, and we should be up and running very soon.
> >>
> >> Cheers,
> >>
> >> [1] http://civs.cs.cornell.edu/cgi-
> bin/results.pl?id=E_2f5fd6c3837eae2a
> >>
> >
> > Congratulations, Matt!
> >
> > Doug
> >
> >
> ___
> ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> Thanks to both candidates for putting their name forward, it is nice to
> have an election.
> 
> Congratulations Matt,
> Anita.
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [Log] Tomorrow's Log WG meeting is cancelled

2015-11-24 Thread Rochelle Grober
Getting ready to eat lots of turkey

--Rocky
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [openstack][openstack-operators] IRC meeting(s)

2015-11-19 Thread Rochelle Grober
++

> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Thursday, November 19, 2015 1:52 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [openstack][openstack-operators] IRC
> meeting(s)
> 
> JJ Asghar wrote:
> > I've been seeing some interesting regressions in our community
> > recently. Our IRC meetings, or at least the ones I've been attending,
> > are extremely variable in quality. I believe this is a disservice to
> > our community as a whole, so I've offered to help.
> >
> > I've written up a blog post[1] on some general characteristics of IRC
> > meetings that seem to come out with the most fruitful conversations.
> >
> > I hope everyone finds something useful from this post, and hopefully
> > will help educate new comers to our community in some good working
> > practices.
> >
> > I'd love to talk about this more, so please don't hesitate to reach
> out.
> >
> > [1]:
> > http://jjasghar.github.io/blog/2015/11/18/characteristics-of-a-successful-chatroom-meeting/
> 
> Thanks for posting this. I could see that morph into a more complete
> chapter about IRC in the project team guide:
> 
> http://docs.openstack.org/project-team-guide/
> 
> --
> Thierry Carrez (ttx)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [all]Can we get some sanity in the Neutron logs please?

2015-11-19 Thread Rochelle Grober
Thanks both Armando and Matt!

I am cross posting this to the operators' list (as the main post -- operators, 
simple reply and no spam to dev).

The logging tag should be a tag in all projects.  I'm glad it's already there 
in Neutron.  And, this sort of feedback is exactly what we need operators and 
gate debuggers to provide when they hit stuff in the logs that is either 
classified at the wrong level (e.g., warning instead of info), is too verbose, 
generates a traceback rather than an error, is too chatty, or happens too often.

Please file these bugs to the appropriate projects and tag them "logging".  
Generally, these are easy fixes, or are "while you're in there" fixes while a 
bug is being addressed, so reporting them will most often get them fixed 
quickly.  I know that doesn't help the Ops folks as much as the developers, but 
it will help the developers when they upgrade to releases the logging has been 
fixed in.

These sorts of issues are major maintainability issues for operators, so a 
focus on this for the Mitaka release will help make it a release operators want 
to upgrade to :-)

Thanks for the ear and for the help with debugging OpenStack issues by 
improving the signal to noise ratio.

--Rocky

From: Armando M. [mailto:arma...@gmail.com]
Sent: Tuesday, November 10, 2015 11:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Can we get some sanity in the Neutron logs please?



On 10 November 2015 at 11:12, Matt Riedemann 
> wrote:


On 11/10/2015 1:10 PM, Matt Riedemann wrote:


On 11/10/2015 12:51 PM, Armando M. wrote:


On 10 November 2015 at 10:33, Matt Riedemann 

>> wrote:

Let me qualify by saying I'm not a Neutron person.

We know that gate-tempest-dsvm-neutron-full is failing hard as of
the last 24 hours [1].

An error that's been showing up in tempest runs with neutron a lot
is:

"AssertionError: 0 == 0 : No IPv4 addresses found in: []"

So checking logstash [2] it's hitting a lot. It's only recent
because that failure message is new to Tempest in the last day or
so, but it has a lot of hits, so whatever it is, it's failing a lot.

So the next step is usually digging into service logs looking for
errors. I check the q-svc logs first. Not many errors but a
bazillion warnings for things not found (networks and devices). [3]

For example:

2015-11-10 17:13:02.542 WARNING neutron.plugins.ml2.rpc
[req-15a73753-1512-4689-9404-9658a0cd0c09 None None] Device
aaa525be-14eb-44a5-beb0-ed722896be93 requested by agent
ovs-agent-devstack-trusty-rax-iad-5785199 not found in database

2015-11-10 17:14:17.754 WARNING neutron.api.rpc.handlers.dhcp_rpc
[req-3d7e9848-6151-4780-907f-43f11a2a8545 None None] Network
b07ad9b2-e63e-4459-879d-3721074704e5 could not be found, it might
have been deleted concurrently.

Are several hundred of these warnings useful to an operator trying
to debug a problem? The point of the CI gate testing is to try and
simulate a production cloud environment. When something goes wrong,
you check the logs. With the amount of warning/error level logging
that is in the neutron logs, finding a real problem is like looking
for a needle in a haystack. Since everything is async, 404s are
expected when racing to delete a resource and they should be handled
gracefully.

Anyway, the server log isn't useful so I go digging in the agent
logs and stacktraces there are aplenty. [4]

Particularly this:

"Exception: Port tapcea51630-e1 is not ready, resync needed"

That's due to a new change landing in the last 24 hours [5]. But the
trace shows up over 16K times since it landed [6].

Checking the code, it's basically a loop processing events and when
it hits an event it can't handle, it punts (breaking the loop so you
don't process the other events after it - which is a bug), and the
code that eventually handles it is just catching all Exception and
tracing them out assuming they are really bad.

At this point, as a non-neutron person, i.e. not well versed in the
operations of neutron or how to debug it in great detail, I assume
something is bad here but I don't really know - and the logs are so
full of noise that I can't distinguish real failures.

I don't mean to pick on this particular change, but it's a good
example of a recent thing.

I'd like to know if this is all known issue or WIP type stuff. I've
complained about excessively noisey neutron logs in channel before
and I'm usually told that they are either necessary (for whatever
reason) or that rather than complain about the verbosity, I should
fix the race that is causing it - which is not 

Re: [Openstack-operators] [Scale][Performance] / compute_nodes ratio experience

2015-11-19 Thread Rochelle Grober
Sorry this doesn't thread properly, but cut and pasted out of the digest...



> As providing OpenStack community with understandable recommendations
> and instructions on performant OpenStack cloud deployments is part of
> Performance Team mission, I'm kindly asking you to share your
> experience on safe cloud deployment ratio between various types of
> nodes you're having right now and the possible issues you observed (as
> an example: discussed GoDaddy's cloud is having 3 conductor boxes vs
> 250 computes in the cell, and there was an opinion that it's simply
> not enough and that's it).



That was my opinion, and it was based on an apparently incorrect assumption 
that they had a lot of things coming and going on their cloud. I think they've 
demonstrated at this point that (other issues aside) three is enough for them, 
given their environment, workload, and configuration.



This information is great for building rules of thumb, so to speak.  GoDaddy 
has an example configuration that is adequate for low-frequency 
construct/destruct (low number of vm create/destroy) cloud architectures.  This 
provides a lower bounds and might be representative of a lot of enterprise 
cloud deployments.



The problem with coming up with any sort of metric that will apply to everyone 
is that it's highly variable. If you have 250 compute nodes and never create or 
destroy any instances, you'll be able to get away with *many* fewer conductors 
than if you have a very active cloud. Similarly, during a live upgrade (or 
following any upgrade where we do some online migration of data), your 
conductor load will be higher than normal. Of course, 4-core and 96-core 
conductor nodes aren't equal either.



And here we have another rule of thumb, but no numbers put to it yet.  If you 
have a low frequency construct/destruct cloud model, you will need to 
temporarily increase your number of conductors by {x amount OR x%} when 
performing OpenStack live upgrades.



So, by all means, we should gather information on what people are doing 
successfully, but keep in mind that it depends *a lot* on what sort of 
workloads the cloud is supporting.



Right, but we can start applying fuzzy logic (the human kind, not machine) and 
get a better understanding of working configurations and *why* they work, then 
start examining where the transition states between configurations are.   You 
need data before you can create information ;-)



--Rocky


--Dan
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [release][stable] OpenStack 2014.2.4 (juno)

2015-11-19 Thread Rochelle Grober
Again, my plea: leave the Juno repository on git.openstack.org, but locked 
down, to enable at least grenade testing for Juno->Kilo upgrades.  For upgrade 
testing purposes, python2.6 is not needed as any cloud would have to upgrade 
python before upgrading to kilo.  The testing could/should be limited to only 
occurring when Kilo backports are proposed.  The nodepool requirements should 
be very small except for the pre-release periods remaining for Kilo, especially 
if the testing is restricted to grenade only.

Thanks for the ear. I'm expecting to participate in the stable releases team, 
and to bring a developer along with me ;-)

--Rocky

> -Original Message-
> From: Alan Pevec [mailto:ape...@gmail.com]
> Sent: Thursday, November 19, 2015 3:35 PM
> To: openstack-annou...@lists.openstack.org
> Subject: [openstack-announce] [release][stable] OpenStack 2014.2.4
> (juno)
> 
> Hello everyone,
> 
> The OpenStack Stable Maintenance team is eager to announce the release
> of the 2014.2.4 stable Juno.  We have been busy reviewing and
> accepting backported bugfixes to the stable/juno branches according
> to the criteria set at:
> 
> https://wiki.openstack.org/wiki/StableBranch
> 
> A total of 180 bugs have been fixed across all projects since 2014.2.3.
> These updates to Juno are intended to be low risk with no intentional
> regressions or API changes. The list of bugs, tarballs and
> other milestone information for each project may be found on Launchpad:
> 
> https://launchpad.net/ceilometer/juno/2014.2.4
> https://launchpad.net/cinder/juno/2014.2.4
> https://launchpad.net/glance/juno/2014.2.4
> https://launchpad.net/heat/juno/2014.2.4
> https://launchpad.net/horizon/juno/2014.2.4
> https://launchpad.net/keystone/juno/2014.2.4
> https://launchpad.net/neutron/juno/2014.2.4
> https://launchpad.net/nova/juno/2014.2.4
> https://launchpad.net/sahara/juno/2014.2.4
> https://launchpad.net/trove/juno/2014.2.4
> 
> Release notes may be found on the wiki:
> 
> https://wiki.openstack.org/wiki/ReleaseNotes/2014.2.4
> 
> No further upstream Juno releases are planned.
> 
> Thanks,
> Alan
> 
> ___
> OpenStack-announce mailing list
> openstack-annou...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-16 Thread Rochelle Grober
I would like to make a plea that while Juno is locked down so that no changes 
can be made against it, the branch remains on the git.openstack.org site.  
Please?  One area that could be better investigated with the branch in place is 
upgrade.  Kilo will continue to get patches, as will Liberty, so an occasional 
grenade run (once a week?  more often?  less often?) could help operators 
understand what is in store for them when they finally can upgrade from Juno.  
Yes, it 
will require occasional resources for the run, but I think this is one of the 
cheapest forms of insurance in support of the installed base of users, before a 
Stable Release team is put together.

My $.02

--Rocky

> -Original Message-
> From: Gary Kotton [mailto:gkot...@vmware.com]
> Sent: Friday, November 13, 2015 6:04 AM
> To: Flavio Percoco; OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re:
> [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
> 
> 
> 
> On 11/13/15, 3:23 PM, "Flavio Percoco"  wrote:
> 
> >On 10/11/15 16:11 +0100, Alan Pevec wrote:
> >>Hi,
> >>
> >>while we continue discussion about the future of stable branches in
> >>general and stable/juno in particular, I'd like to execute the
> current
> >>plan which was[1]
> >>
> >>2014.2.4 (eol) early November, 2015. release manager: apevec
> >>
> >>Iff there's enough folks interested (I'm not) in keep Juno alive
> 
> +1 I do not see any reason why we should still invest time and effort
> here. Let's focus on stable/kilo.
> 
> >>longer, they could resurrect it but until concrete plan is done let's
> >>be honest and stick to the agreed plan.
> >>
> >>This is a call to stable-maint teams for Nova, Keystone, Glance,
> >>Cinder, Neutron, Horizon, Heat, Ceilometer, Trove and Sahara to
> review
> >>open stable/juno changes[2] and approve/abandon them as appropriate.
> >>Proposed timeline is:
> >>* Thursday Nov 12 stable/juno freeze[3]
> >>* Thursday Nov 19 release 2014.2.4
> >>
> >
> >General ack from a stable-maint point of view! +1 on the above
> >
> >Flavio
> >
> >>Cheers,
> >>Alan
> >>
> >>[1]
> >>https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fjuno_releases_.2812_months.29
> >>
> >>[2]
> >>https://review.openstack.org/#/q/status:open+AND+branch:stable/juno+AND+%28project:openstack/nova+OR+project:openstack/keystone+OR+project:openstack/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+OR+project:openstack/horizon+OR+project:openstack/heat+OR+project:openstack/ceilometer+OR+project:openstack/trove+OR+project:openstack/sahara%29,n,z
> >>
> >>[3] documented  in
> >>https://wiki.openstack.org/wiki/StableBranch#Stable_release_managers
> >>TODO add in new location
> >>http://docs.openstack.org/project-team-guide/stable-branches.html
> >>
> >>__________________________________________________________________________
> >>OpenStack Development Mailing List (not for usage questions)
> >>Unsubscribe:
> >>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >--
> >@flaper87
> >Flavio Percoco
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-09 Thread Rochelle Grober
+1

I think that as OpenStack's customer base grows, this is going to become more 
and more important, and the pain of users not getting updates will increase 
enough that there *will* be a group of engineers willing to do not only 
backports and releases, but bugfixes in trunk and gate maintenance on the 
branches.  Providing a real "home" for these engineers will help empower them 
to act proactively to improve Stable branch releases and maybe even go beyond 
and develop a patch release process for stable releases that makes security and 
vulnerability fixes easier to install for the users.

As long as a critical mass can be reached for the team, it is likely that the 
team will exceed expectations.

--Rocky

> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Monday, November 09, 2015 8:42 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [stable] Making stable maintenance its own
> OpenStack project team
> 
> Hi everyone,
> 
> A few cycles ago we set up the Release Cycle Management team which was
> a
> bit of a frankenteam of the things I happened to be leading: release
> management, stable branch maintenance and vulnerability management.
> While you could argue that there was some overlap between those
> functions (as in, "all these things need to be released"), logic was not
> the primary reason they were put together.
> 
> When the Security Team was created, the VMT was spinned out of the
> Release Cycle Management team and joined there. Now I think we should
> spin out stable branch maintenance as well:
> 
> * A good chunk of the stable team work used to be stable point release
> management, but as of stable/liberty this is now done by the release
> management team and triggered by the project-specific stable
> maintenance
> teams, so there is no more overlap in tooling used there
> 
> * Following the kilo reform, the stable team is now focused on defining
> and enforcing a common stable branch policy[1], rather than approving
> every patch. Being more visible and having more dedicated members can
> only help in that very specific mission
> 
> * The release team is now headed by Doug Hellmann, who is focused on
> release management and does not have the history I had with stable
> branch policy. So it might be the right moment to refocus release
> management solely on release management and get the stable team its own
> leadership
> 
> * Empowering that team to make its own decisions, giving it more
> visibility and recognition will hopefully lead to more resources being
> dedicated to it
> 
> * If the team expands, it could finally own stable branch health and
> gate fixing. If that ends up all falling under the same roof, that team
> could make decisions on support timeframes as well, since it will be
> the
> primary resource to make that work
> 
> So.. good idea ? bad idea ? What do current stable-maint-core[2]
> members
> think of that ? Who thinks they could step up to lead that team ?
> 
> [1] http://docs.openstack.org/project-team-guide/stable-branches.html
> [2] https://review.openstack.org/#/admin/groups/530,members
> 
> --
> Thierry Carrez (ttx)
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Logging sessions at the Tokyo summit -- Let's get serious!

2015-10-21 Thread Rochelle Grober
Hey operators and other users (and devs),

We've got a number of logging sessions scheduled for the Summit.

I'm collecting up information on the summit logging etherpad:  
https://etherpad.openstack.org/p/TYO-ops-logging
and it will get fleshed out a lot more during the flight ;-)

Two goals of the sessions:

* Get the Error Codes spec into a condition the devs will accept.
* Define the Logging in OpenStack document and get commitments from Ops folks 
to write the various sections.

I also want to discuss

* the recent acceptance of the request-ID spec and how we might want to 
influence the implementation for logging purposes

o   
http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html

* A network security and FW logging spec

o   https://review.openstack.org/#/c/203509/25  that also has a session in the 
design summit

* App agnostic logging parameters -- suggestions/advice/details for 
devs to consider for this.

o   
http://specs.openstack.org/openstack/oslo-specs/specs/kilo/app-agnostic-logging-parameters.html

* Info level standardization  -- what is good, what is bad, let's file 
bugs?

* Notifications - rules on what should be logged and what shouldn't

So, there is much too much to cover during the sessions, but we can pick the 
ones (or others) that are important to the attendees.  Please add what you 
think is important to the etherpad and come to the sessions.

Here are the sessions related to logging:

http://sched.co/4QeM
http://sched.co/49xh
http://sched.co/49yK
http://sched.co/4Fjf
These two are at the same time.  They should have the same people in at least 
part of them.  Maybe we should ask Oslo to join us after they address rootwrap 
and security.
http://sched.co/4Nl5
http://sched.co/4QcR
http://sched.co/4S1C
http://sched.co/4Qbp (for writing the logging doc)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] FYI - Depends-On vs. backports

2015-10-13 Thread Rochelle Grober
Could you put this in the devref?  Or in a getting-started guide or something, 
so this info isn't lost and new devs will be educated?

Thanks!

--Rocky
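To illustrate the safe pattern described in Sean's mail below (the subject, 
Change-Ids and sha here are made up): when backporting, delete the Change-Id 
line from the commit message before running git commit --amend, and Gerrit's 
commit-msg hook will mint a fresh one, so master and the backport stop sharing 
an ID:

    On master:
        Fix oslo.foo lower bound

        Change-Id: I1111111111111111111111111111111111111111

    On stable/liberty (the backport):
        Fix oslo.foo lower bound

        Change-Id: I2222222222222222222222222222222222222222
        (cherry picked from commit abc1234)

With distinct IDs, a master change carrying "Depends-On: I1111..." can merge 
even if the stable/liberty backport is later abandoned.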

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Tuesday, October 13, 2015 2:44 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [all] FYI - Depends-On vs. backports
> 
> I was checking out where things were at this morning with fixing unit
> tests, and found that a bunch of things were all gummed up in "merge
> conflicts".
> 
> When you specify
> 
> Depends-On: ID
> 
> It requires everything with ID to be in a "merged" state to go forward.
> A change being abandoned doesn't count.
> 
> This runs smack into our tendency to make the UUID for backported
> changes the same as the id in master (there is no reason we really have
> to do that).
> 
> So the following sequence:
> 
> Fix A in requirements master
> Fix B in nova master (depends on A)
> Backport A to requirements stable/liberty
> Abandon A in stable/liberty
> 
> Just means that B is stuck and can never land.
> 
> 
> It's come up that we could let things through on Abandon; however, Depends-On
> is used to stack up changes across a bunch of different repos with
> different review teams, and who can Abandon and then let changes through
> that shouldn't land gets kind of confusing. Also, if we don't support
> depends on across branches, we can't use it for testing upgrade
> scenarios. So the current edge case is probably the least weird edge
> case that we can have.
> 
> 
> So, in future if you want to use Depends-On in a backport situation,
> you
> need to carefully do your backports so they don't share a UUID with
> master.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-07 Thread Rochelle Grober
> -Original Message-
> From: Anita Kuno [mailto:ante...@anteaya.info]
> Sent: Wednesday, October 07, 2015 3:48 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] naming N and O releases nowish
> 
> On 10/07/2015 06:22 PM, Monty Taylor wrote:
> > On 10/07/2015 09:24 AM, Sean Dague wrote:
> >> On 10/07/2015 08:57 AM, Thierry Carrez wrote:
> >>> Sean Dague wrote:
>  We're starting to make plans for the next cycle. Long term plans
> are
>  getting made for details that would happen in one or two cycles.
> 
>  As we already have the locations for the N and O summits I think
> we
>  should do the naming polls now and have names we can use for this
>  planning instead of letters. It's pretty minor but it doesn't seem
> like
>  there is any real reason to wait and have everyone come up with
> working
>  names that turn out to be confusing later.
> >>>
> >>> That sounds fair. However the release naming process currently
> >>> states[1]:
> >>>
> >>> """
> >>> The process to chose the name for a release begins once the
> location of
> >>> the design summit of the release to be named is announced and no
> sooner
> >>> than the opening of development of the previous release.
> >>> """
> >>>
> >>> ...which if I read it correctly means we could pick N now, but not
> O. We
> >>> might want to change that (again) first.
> >>>
> >>> [1] http://governance.openstack.org/reference/release-naming.html
> >>
> >> Right, it seems like we should change it so that we can do naming as
> >> soon as the location is announced.
> >>
> >> For projects like Nova that are trying to plan things more than one
> >> cycle out, having those names to hang those features on is massively
> >> useful (as danpb also stated). Delaying for bureaucratic reasons
> just
> >> seems silly. :)
> >
> > So, for what it's worth, I remember discussing this when we discussed
> > the current process, and the change you are proposing was one of the
> > options put forward when we talked about it.
> >
> > The reason for not doing all of them as soon as we know them was to
> keep
> > a sense of ownership by the people who are actually working on the
> > thing. Barcelona is a long way away and we'll all likely have rage
> quit
> > by then, leaving the electorate for the name largely disjoint from
> the
> > people working on the release.
> >
> > Now, I hear you - and I'm not arguing that position. (In fact, I
> believe
> > my original thought was in line with what you said here) BUT - I
> mostly
> > want to point out that we have had this discussion, the discussion
> was
> > not too long ago, it covered this point, and I sort of feel like if
> we
> > have another discussion on naming process people might kill us with
> > pitchforks.
> 
> You are assuming that not having this conversation might shield you
> from
> the pitchforks.
 
I, myself, favor war hammers (a very useful tool for separating plaster from 
lath), but if we all rage quit, the new guard can always change the name as a 
middle finger salute to the old guard.  Let's be daring!  Let's name O, too!

--Rocky

> Anita.
> 
> >
> > Monty


[openstack-dev] [election][TC] Candidacy

2015-09-30 Thread Rochelle Grober
Hello People!

I am tossing one of my hats into the ring to run for TC.  Yes, I believe you
could call me a "diversity candidate" as I'm not much of a developer any more,
but I think my skills would be a great addition to the excellent people who are
on the TC (past and present).

My background:  I am currently an architect with Huawei Technologies.  My role
is "OpenStack" and, as such, I am liaison to many areas and groups in the
OpenStack community, and liaison to Huawei engineers and management for the
OpenStack community.  I focus my energy on all the parts of software products
that aren't directly writing code.  I am an advocate for quality, for effective
and efficient process, and for the downstream stakeholders (Ops, Apps
developers, Users, Support Engineers, Docs, Training, etc.).  I am currently
active in:
* DefCore
* RefStack
* Product Working Group
* Logging Working Group (cofounder)
* Ops community
* Peripherally, Tailgaters
* Women of OpenStack
* Diversity Working Group

What I would like to help the TC and the community with:
* Interoperability across deployed clouds begins with cross project
communications and the realization that  each engineer and each project is
connected and influential in how the OpenStack ecosystem works, responds, and
grows. When OpenStack was young, there were two projects and everyone knew
each other, even if they didn't live in the same place.  Just as processes
become more formal when startups grow to be mid-sized companies, OpenStack
has formalized much as it has exploded in number of participants.  We need to
continue to transform Developer, Ops and other community lore into useful
documentation. We are at the point where we really need to focus our energies
and our intelligence on how to effectively span projects and communities via
better communications.  I'm already doing this to some extent.  I'd like to
help the TC do that to a greater extent.
* In the past two years, I've seen the number of "horizontal" 
projects grow
almost as significantly as the "vertical" projects.  These cross functional
projects, with libraries, release, configuration management, docs, QA, etc.,
have also grown in importance in maintaining the quality and velocity of
development.  Again, cross-functional needs are being addressed, and I want to
help the TC be more proactive in identifying needs and seeding the teams with
senior OpenStack developers (and user community advisors where useful).
* The TC is the conduit between, translator of, and champion for the
developers to the OpenStack Board.  They have a huge responsibility and not
enough time, energy, or resources to address all the challenges.  I am ready to
work on those challenges and help develop the strategic vision needed to keep
on top of the current and new opportunities that are always arising and always
in need of thoughtful analysis and action.

That said, I have my own challenges to address.  I know my company will support
me in my role as a TC member, but they will also demand more of my time,
opinions, presence and participation specifically because of the TC position.
I am also still struggling to make inroads on the logging issues I've been
attempting to wrangle into better shape.  I've gotten lots of support from the
community on this (thanks, folks, you know who you are ;-), but it still gives
me pause for thought that I, myself, need to keep working on my effectiveness.

Whether on the TC or not, I will continue to contribute as much as I can to the
community in the ways that I do best, and you will continue to see me at the
summits, the midcycles, the meetups, the mailing lists and IRC (hopefully more
there, as I'm trying to educate my company on how they can provide us the
access we need without compromising their corporate rules).

Thank you for reading this far and considering me for service on the TC.

--Rocky

ps.  I broke Gerrit on my laptop, so infra is helping me, but I've stumped them 
and wanted to get this out.  TLDR: this ain't in the elections repository yet


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Rochelle Grober


Doug Hellmann wrote:
Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +:
> On Sep 25, 2015, at 1:24 PM, Brian Rosmaita  wrote:
> > 
> > I'd like to clarify something.
> > 
> > On 9/25/15, 12:16 PM, "Mark Voelker"  wrote:
> > [big snip]
> >> Also worth pointing out here: when we talk about “doing the same thing”
> >> from a DefCore perspective, we’re essentially talking about what’s
> >> exposed to the end user, not how that’s implemented in OpenStack’s source
> >> code.  So from an end user’s perspective:
> >> 
> >> If I call nova image-create, I get an image in my cloud.  If I call the
> >> Glance v2 API to create an image, I also get an image in my cloud.  I
> >> neither see nor care that Nova is actually talking to Glance in the
> >> background, because if I’m writing code that uses the OpenStack API’s, I
> >> need to pick which one of those two API’s to make my code call upon to
> >> put an image in my cloud.  Or, in the worst case, I have to write a bunch
> >> of if/else loops into my code because some clouds I want to use only
> >> allow one way and some allow only the other.
> > 
> > The above is a bit inaccurate.
> > 
> > The nova image-create command does give you an image in your cloud.  The
> > image you get, however, is a snapshot of an instance that has been
> > previously created in Nova.  If you don't have an instance, you cannot
> > create an image via that command.  There is no provision in the Compute
> > (Nova) API to allow you to create an image out of bits that you supply.
> > 
> > The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
> > register them as an image which you can then use to boot instances from by
> > using the Compute API.  But note that if all you have available is the
> > Images API, you cannot create an image of one of your instances.
> > 
> >> So from that end-user perspective, the Nova image-create API indeed does
> >> “do the same thing” as the Glance API.
> > 
> > They don't "do the same thing".  Even if you have full access to the
> > Images v1 or v2 API, you will still have to use the Compute (Nova) API to
> > create an image of an instance, which is by far the largest use-case for
> > image creation.  You can't do it through Glance, because Glance doesn't
> > know anything about instances.  Nova has to know about Glance, because it
> > needs to fetch images for instance creation, and store images for
> > on-demand images of instances.
> 
> Yup, that’s fair: this was a bad example to pick (need moar coffee I guess).  
> Let’s use image-list instead. =)

From a "technical direction" perspective, I still think it's a bad
situation for us to be relying on any proxy APIs like this. Yes,
they are widely deployed, but we want to be using glance for image
features, neutron for networking, etc. Having the nova proxy is
fine, but while we have DefCore using tests to enforce the presence
of the proxy we can't deprecate those APIs.

What do we need to do to make that change happen over the next cycle
or so?

[Rocky]
This is likely the first case DefCore will have of deprecating a requirement 
;-)  The committee wasn't thrilled with the original requirement, but really, 
can you have OpenStack without some way of creating an instance?  And Glance V1 
had no user-facing APIs, so the committee was kind of stuck.

But, going forward, what needs to happen in Dev is for Glance V2 to become *the 
way* to create images, and for Glance V1 to be deprecated *and removed*.  Then 
we've got two more cycles before we can require V2 only.  Yes, DefCore is a 
trailing requirement.  We have to give our user community time to migrate to 
versions of OpenStack that don't have the "old" capability.

But now comes the tricky part... How do you allow both V1 and V2 capabilities 
and still be interoperable?  This will definitely be the first test for DefCore 
on migration from obsolete capabilities to current capabilities.  We could use 
some help figuring out how to make that work.

--Rocky
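
To make the distinction earlier in the thread concrete, here is a minimal 
sketch of the two different user-facing flows (credentials, names, and files 
are hypothetical; assumes python-novaclient, python-glanceclient, and 
keystoneauth1 are installed):

from keystoneauth1 import loading, session
from glanceclient import client as glance_client
from novaclient import client as nova_client

# Hypothetical credentials, for illustration only.
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://keystone.example.com:5000/v3',
    username='demo', password='secret', project_name='demo',
    user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

nova = nova_client.Client('2', session=sess)
glance = glance_client.Client('2', session=sess)

# 1) Snapshot an existing instance -- only the Compute API can do this.
server = nova.servers.find(name='my-server')
nova.servers.create_image(server, 'my-server-snapshot')

# 2) Register image bits you supply -- only the Images API can do this.
image = glance.images.create(name='my-image', disk_format='qcow2',
                             container_format='bare')
glance.images.upload(image.id, open('my-image.qcow2', 'rb'))

The point being that these are two different jobs, not two interchangeable 
paths to the same capability.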

Doug

> 
> At Your Service,
> 
> Mark T. Voelker
> 
> > 
> > 
> >> At Your Service,
> >> 
> >> Mark T. Voelker
> > 
> > Glad to be of service, too,
> > brian
> > 
> > 

Re: [openstack-dev] [searchlight] Liberty release planning video conference

2015-08-11 Thread Rochelle Grober
Just wanted to point out that if you dig a little into Postman's website, it 
looks like all the base code is on GitHub, and appears to be under the Apache 
license.  I didn't check the jetpacks, but I suspect those might be 
proprietary bits.

  Tripp, Travis S wrote on Monday, August 10, 2015 5:18 PM:
  Hello Anita,

  Thank you for the email and for checking into the Postman openness! 
There might be some miscommunication here. Postman is not an official part of 
the project and never will be. Neither will any non-open-source code base. As 
mentioned in the IRC logs, I personally like to perform additional manual tests 
of APIs when doing code reviews. I use both curl and also sometimes find the 
Postman browser plugin to be helpful, so I was sharing that with others who 
might use it. If you already have an open source browser plugin with similar 
functionality I would very much like to take advantage of it for my personal 
testing, because you are certainly right that we don’t want to seem as though 
we are endorsing non-open tool sets!

  Thank you!
  Travis



  On 8/10/15, 5:34 PM, Anita Kuno ante...@anteaya.info wrote:

  On 08/10/2015 06:28 PM, Tripp, Travis S wrote:
   At our last weekly IRC meeting we decided that we should have a short 
video conference meetup to talk about the Liberty 3 release. The purpose will 
be to walk through the Liberty Blueprints / Bugs and finalize the list for 
Liberty 3. I promised to create a doodle poll for choosing the time, so here it 
is:
  
   http://doodle.com/99kzigdbmed57g7s
  
   Please note, one time I set is the same time as the normal IRC 
meeting.  If that is what works best for everybody, we’ll start the IRC meeting 
normally, but share a video link in the meeting room for people to join.
   
  
  
  Hello:
  
  Part of being an OpenStack project listed in the governance repo, as you
  are [0], is agreeing to conduct your business in an open manner. Had you
  not done so prior to now I would not have known that you are using
  Postman for testing [1], as you linked in your meeting log [2]. If
  Postman is open source licensed I couldn't find it in their
  documentation [3].
  
  Now I'm certainly not going to waste my time telling you how you should
  operate your project. I am simply going to take the time to tell you
  that currently your tool choices aren't in keeping with the 4 opens [4].
  If this is by design, power to you, and please don't let me hold you 
back.
  
  If this is by accident and you really do want to ensure you stay an
  OpenStack project, do reach out to a member of the technical committee
  as I feel you could benefit from some tool choice and workflow guidance.
  
  Thank you,
  Anita.
  
  
  [0]
  
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n2011
  [1] https://www.getpostman.com/collections/8c0b1e05875c7c58e967
  [2]
  
http://eavesdrop.openstack.org/meetings/openstack_search/2015/openstack_search.2015-08-06-15.01.log.html
  [3] https://www.getpostman.com/docs
  [4]
  
http://git.openstack.org/cgit/openstack/governance/tree/reference/new-projects-requirements.rst#n17
  
  


Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-19 Thread Rochelle Grober
In line (at the bottom)

From: Devananda van der Veen [mailto:devananda@gmail.com]
Sent: Friday, June 19, 2015 12:40
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header


On Wed, Jun 17, 2015 at 7:31 AM Jay Pipes jaypi...@gmail.com wrote:
> On 06/17/2015 06:30 AM, Lucas Alvares Gomes wrote:
>>> overlap there rather than competition), how crazy does it sound if we say
>>> that for OpenStack Nova is the compute API and Ironic the Bare Metal API and
>>> so on? Would that be an unacceptable power grab?
>>
>> It's not that it's unacceptable, but I think that things weren't
>> projected that way. Jay started this thread with this sentence:
>>
>> "To be blunt, Nova is the *implementation* of the OpenStack Compute
>> API. Ironic is the *implementation* of the OpenStack BareMetal API."
>>
>> Which I don't think is totally correct, at least for Ironic. Ironic's
>> API has evolved and been shaped as we implemented Ironic; I think that
>> some decisions we made in the API make that clear, e.g.:
>>
>> * Resources have JSON attributes. If you look at some attributes of
>> the resources you will see that they are just a JSON blob. That's by
>> design, because we didn't know exactly how the API should look, and
>> having these JSON fields allows us to easily extend the resource
>> without changing its structure [1] (see driver_info, instance_info,
>> extra)
>
> OK. Nothing wrong with that.
>
>> * We have a vendor endpoint. This endpoint allows vendors to extend our
>> API to expose new hardware capabilities that aren't present in the
>> core API. Once multiple vendors start implementing the same feature
>> on this endpoint we then decide whether to promote it to the core API.
>
> This is a problem. The above means that there is no single OpenStack
> BareMetal API. This means developers that want to write against an
> OpenStack BareMetal API cannot rely on different deployments of Ironic
> exposing the same API. That is a recipe for a lack of interoperability
> and decreased developer ease of use.

Nope - not a problem. Actually it's been really helpful. We've found this to be 
a better way to implement driver extensions -- it's clearly *not* part of 
Ironic's API, and it's communicated as such in the API itself.

Any standard part of Ironic's functionality is exposed in the standard API, and 
hardware-specific extensions, which are not supported by enough vendors to be 
standardized yet, are only exposed through the vendor-specific API endpoint. 
It's very clear in the REST API what this is -- the endpoints are, for example,

  GET /v1/nodes/<node_uuid>/vendor_passthru/methods
  POST /v1/nodes/<node_uuid>/vendor_passthru?method=foo&param=bar

  GET /v1/drivers/<driver_name>/methods

... and so on. This provides a mechanism to discover what resources and methods 
said hardware vendor exposes in their hardware driver. We have always supported 
out-of-tree drivers, and it is possible to upgrade Ironic without upgrading the 
driver (or vice versa).

Also to note, our client library doesn't support any of the vendor-specific 
methods, and never will. It only supports Ironic's API's ability to *discover* 
what vendor-specific methods a driver exposes, and then to transparently 
call them. And none of that is relevant to other OpenStack projects.

So if an operator wants to write a custom app that uses foo-vendor's 
advanced-foo-making capabilities because they only buy Foo hardware -- that's 
great. They can do that. Presumably, they have a support contract with Foo 
vendor. Ironic is merely providing the transport between them.
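
To make that discovery flow concrete, a minimal sketch (hypothetical endpoint, 
token, node UUID, and method name -- none of these come from the thread), 
pinning a version with Ironic's X-OpenStack-Ironic-API-Version microversion 
header, which is the subject of this thread:

import requests

IRONIC = 'http://ironic.example.com:6385'         # hypothetical endpoint
HEADERS = {
    'X-Auth-Token': 'ADMIN_TOKEN',                # hypothetical token
    'X-OpenStack-Ironic-API-Version': '1.9',      # pin a microversion
}

# Ask a node's driver which vendor-specific methods it exposes...
methods = requests.get(
    IRONIC + '/v1/nodes/NODE_UUID/vendor_passthru/methods',
    headers=HEADERS).json()
print(methods)

# ...then invoke one of them by name, passing its parameters as JSON.
requests.post(
    IRONIC + '/v1/nodes/NODE_UUID/vendor_passthru?method=bake_foo',
    headers=HEADERS, json={'param': 'bar'})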



>> * There's a reservation attribute in the Node's resource [1] whose
>> value is the hostname of the conductor that is currently holding an
>> exclusive lock to act upon this node. This is because internally we
>> use a distributed hashing algorithm to be able to route the requests
>> from the API service to a conductor service that is able to manage
>> that Node. And having this field in the API
>
> That is just leaking implementation inappropriately out of the public
> REST API, and shouldn't be encouraged, IMHO. Nova has a number of these
> leaky parts of its API, too, of course. Just look at the os-guests API
> extension, which only works when you're using Xen under the hood,
> thereby leaking implementation details about the underlying
> infrastructure out through the public REST API.

Yes, this is leaky in the purest sense, but remember that ironic doesn't expose 
a *public* API. Only operators and other services should be talking directly to 
it -- and this field was requested by operators who find it helpful to know 
which service has locked a resource.


>> I don't think that any of those decisions were bad by the way; this
>> has helped us a lot to understand how a service to manage Bare Metal
>> machines should look, and we have made wrong decisions too (You
>> can get the same information by GET'ing different endpoints in the
>> API, the Chassis resources currently have 

Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-18 Thread Rochelle Grober
Adam pointed to this URL as a proposal for the namespaces:
https://wiki.openstack.org/wiki/URLs
How about this gets turned into a cross-project spec, or part of a larger one 
with the stuff in this ML thread?  Then we can get the projects aware of and 
buying into this little slice of sanity.

--Rocky

-----Original Message-----
From: Sean Dague [mailto:s...@dague.net] 
Sent: Wednesday, June 17, 2015 11:22
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [devstack] apache wsgi application support

On 06/16/2015 05:25 PM, Chris Dent wrote:
> On Tue, 16 Jun 2015, Sean Dague wrote:
>
>> I was just looking at the patches that put Nova under apache wsgi for
>> the API, and there are a few things that I think are going in the wrong
>> direction. Largely I think because they were copied from the
>> lib/keystone code, which we've learned is kind of the wrong direction.
>
> Yes, that's certainly what I've done the few times I've done it.
> devstack is deeply encouraging of cargo culting for reasons that are
> not entirely clear.

Yeh, hence why I decided to put the brakes on a little here and get this
on the list.

>> The first is the fact that a big reason for putting {SERVICES} under
>> apache wsgi is we aren't running on a ton of weird unregistered ports.
>> We're running on 80 and 443 (when appropriate). In order to do this we
>> really need to namespace the API urls. Which means that service catalog
>> needs to be updated appropriately.
>
> So:
>
> a) I'm very glad to hear of this. I've been bristling about the weird
>    ports thing for the last year.
>
> b) You make it sound like there's been a plan in place to not use
>    those ports for quite some time and we'd get to that when we all
>    had some spare time. Where do I go to keep abreast of such plans?

Unfortunately, this is one of those "in the ether" kinds of plans. It's
been talked about for so long, but it never really got written down.
Hopefully this can be driven into the service catalog standardization
spec (or tag along somewhere close).

Or if nothing else, we're documenting it now on the mailing list as
permanent storage.
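
To sketch what that namespacing could look like (hypothetical host and paths, 
not a settled proposal), catalog entries would move from per-service ports to 
path prefixes on 80/443, e.g.:

  http://cloud.example.com:8774/v2.1/...  ->  https://cloud.example.com/compute/v2.1/...
  http://cloud.example.com:9292/v2/...    ->  https://cloud.example.com/image/v2/...
  http://cloud.example.com:5000/v3/...    ->  https://cloud.example.com/identity/v3/...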

>> I also think this -
>> https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
>> is completely wrong.
>>
>> The Apache configs should instead specify access rules such that the
>> installed console entry point of nova-api can be used in place as the
>> WSGIScript.
>
> I'm not able to parse this paragraph in any actionable way. The lines
> you reference are one of several ways of telling mod wsgi where the
> virtualenv is, which has to happen in some fashion if you are using
> a virtualenv.
>
> This doesn't appear to have anything to do with locating the module
> that contains the WSGI app, so I'm missing the connection. Can you
> explain please?
>
> (Basically I'm keen on getting gnocchi and ceilometer wsgi servers
> in devstack aligned with whatever the end game is, so knowing the plan
> makes it a bit easier.)

Gah, the problem of linking to 'master' with line numbers. The three
lines I cared about were:

# copy proxy vhost and wsgi helper files
sudo cp $NOVA_DIR/nova/wsgi/nova-api.py $NOVA_WSGI_DIR/nova-api
sudo cp $NOVA_DIR/nova/wsgi/nova-ec2-api.py $NOVA_WSGI_DIR/nova-ec2-api

I don't think that we should be copying py files around to other
directories outside of the normal pip install process. We should just have
mod_wsgi reference a thing that is installed in /usr/{local}/bin or
/usr/share via the python install process.
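
For illustration, the pip-installed entry point mod_wsgi would reference could 
be as small as a module exposing the standard PEP 3333 callable (a hypothetical 
stub, not Nova's actual file):

# nova_api_wsgi.py -- hypothetical stand-in for an entry point that
# setuptools would install (e.g. to /usr/local/bin), rather than a file
# devstack copies into an Apache-owned directory.

def application(environ, start_response):
    # The real script would build the service's paste/deploy pipeline here;
    # this stub only demonstrates the contract mod_wsgi needs.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'nova-api placeholder\n']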

>> This should also make lines like -
>> https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
>> L274 unneeded. (The WSGI Script will be in a known place). It will also
>> make upgrades much more friendly.
>
> It sounds like maybe you are saying that the api console script and
> the module containing the wsgi 'application' variable ought to be the
> same thing. I don't reckon that's a great idea as the api console
> scripts will want to import a bunch of stuff that the wsgi application
> will not.
>
> Or I may be completely misreading you. It's been a long day, etc.

They don't need to be actually the same thing. They could be different
scripts, but they should be scripts that install via the normal pip
install process to a place, and we reference them by known name.

>> I think that we need to get these things sorted before any further
>> progression here. Volunteers welcomed to help get us there.
>
> Find me, happy to help. The sooner we can kill wacky port weirdness
> the better.

Agreed.

-Sean

-- 
Sean Dague
http://dague.net


Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-15 Thread Rochelle Grober
I'd also like to point out that if the state of the projects has encouraged 
*new* contributors to OpenStack, then their contributions will likely take a 
couple to a few months to become visible in a significant way in the 
statistics. Two to three months to get your first merge is extremely common 
among the subgroup of developers new to OpenStack.

--Rocky

Thierry Carrez wrote:

Joe Gordon wrote:
 [...]
 Below is a list of the first few projects to join OpenStack after
 the big tent, all of which have now been part of OpenStack for at least
 two months.[1]
 
 * Magnum - Tue Mar 24 20:17:36 2015
 * Murano - Tue Mar 24 20:48:25 2015
 * Congress - Tue Mar 31 20:24:04 2015
 * Rally - Tue Apr 7 21:25:53 2015 
 
 When looking at stackalytics [2] for each project, we don't see any
 noticeable change in number of reviews, contributors, or number of
 commits from before and after each project joined OpenStack.

Also note that release and summit months are traditionally less active
(some would say totally dead), so comparing April-May to anything else
is likely to not mean much. I'd wait for a complete cycle before
answering this question. Or at the very least compare it to
October-November from the previous cycle.

If we do so for the few projects that existed in October 2014, that
would point to a rather steep increase:

Look at Oct/Nov in:
http://stackalytics.com/?module=murano-group&metric=commits&release=kilo

And compare to April/May in:
http://stackalytics.com/?module=murano-group&metric=commits&release=liberty

Same for Rally:
http://stackalytics.com/?module=rally-group&metric=commits&release=kilo
http://stackalytics.com/?module=rally-group&metric=commits&release=liberty

Only Congress was slightly more active in the first months of Kilo than
in the first months of Liberty.

-- 
Thierry Carrez (ttx)



  1   2   >