[openstack-dev] [release] Release countdown for week R-21 and R-20, November 12-23

2018-11-08 Thread Sean McGinnis
Development Focus
-

Teams should now be focused on feature development and preparing for Berlin
Forum discussions.

General Information
---

The OpenStack Summit and Forum start on Tuesday. Hopefully many teams will have
an opportunity to work together as well as get input from folks they don't
normally interact with. This is a great opportunity to get feedback from
operators and other users to help shape the direction of the project.

Upcoming Deadlines & Dates
--

Forum at OpenStack Summit in Berlin: November 13-15
Start using openstack-discuss ML: November 19
Stein-2 Milestone: January 10

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FIPS Compliance

2018-11-05 Thread Sean McGinnis
I'm interested in some feedback from the community, particularly those running
OpenStack deployments, as to whether FIPS compliance [0][1] is something folks
are looking for.

I've been seeing small changes proposed here and there for things like MD5
usage, which is incompatible with FIPS mode. But looking across a wider stripe
of our repos, it appears it would take a broader effort to get all OpenStack
services compatible with FIPS mode.

This should be a fairly easy thing to test, but before we put much effort into
updating code and figuring out testing, I'd like to see some input on whether
something like this is needed.
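For reference, the kind of small change being proposed generally looks like
the sketch below. The usedforsecurity flag is an assumption about the target
interpreter: it exists in Python 3.9+ (and in some distro-patched builds), so
a fallback is needed elsewhere:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """MD5 used as a non-cryptographic checksum, FIPS-mode friendly."""
    try:
        # Tells a FIPS-enabled OpenSSL this is not a security use of MD5.
        return hashlib.md5(data, usedforsecurity=False).hexdigest()
    except TypeError:
        # Older interpreters do not accept the flag at all.
        return hashlib.md5(data).hexdigest()
```

Anywhere MD5 is used for integrity rather than security, a change along these
lines keeps the code working when FIPS mode is enforced.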

Thanks for any input on this.

Sean

[0] https://en.wikipedia.org/wiki/FIPS_140-2
[1] https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.140-2.pdf



Re: [openstack-dev] [Release-job-failures][release][infra] Tag of openstack/keystone failed

2018-11-01 Thread Sean McGinnis
On Thu, Nov 01, 2018 at 03:08:09PM +, Jeremy Stanley wrote:
> On 2018-11-01 09:27:03 -0400 (-0400), Doug Hellmann wrote:
> > Jeremy Stanley  writes:
> > 
> > > On 2018-11-01 08:52:05 -0400 (-0400), Doug Hellmann wrote:
> > > [...]
> > >> Did I miss any options or issues with this approach?
> > >
> > > https://review.openstack.org/578557
> > 
> > How high of a priority is rolling that feature out? It does seem to
> > eliminate this particular issue (even the edge cases described in the
> > commit message shouldn't affect us based on our typical practices), but
> > until we have one of the two changes in place we're going to have this
> > issue with every release we tag.
> 
> It was written as a potential solution to this problem back when we
> first discussed it in June, but at the time we thought it might be
> solvable via job configuration with minimal inconvenience so that
> feature was put on hold as a fallback option in case we ended up
> needing it. I expect since it's already seen some review and is
> passing tests it could probably be picked back up fairly quickly now
> that alternative solutions have proven more complex than originally
> envisioned.
> -- 
> Jeremy Stanley

Doug's option 3 made sense to me as a way to address this for now, but if this
feature is coming in the near future, we can wait and go with it as option 4.

Sean



[openstack-dev] [release] Release countdown for week R-22, November 5-9

2018-11-01 Thread Sean McGinnis
Development Focus
-

Teams should now be focused on feature development and completion of release
goals [0].

[0] https://governance.openstack.org/tc/goals/stein/index.html

General Information
---

We are now past the Stein-1 milestone. Following the changes described in [0],
we have released most of the cycle-with-intermediary libraries. It is a good
time to think about releases for any additional clients (or deliverables marked
type:"other"). All projects with these deliverables should try to have a
release done before Stein-2.

All cycle-with-milestone deliverables have been switched over to be
cycle-with-rc [1]. Just a reminder that we can still do milestone beta releases
if there is a need for them, but we want to avoid doing so just because that is
what we've historically done.

If you have any questions about these changes, or any release planning or
process questions in general, please reach out to us in the #openstack-release
channel or on the mailing list.

[0] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135689.html
[1] 
http://lists.openstack.org/pipermail/openstack-dev/2018-September/135088.html

Upcoming Deadlines & Dates
--

Forum at OpenStack Summit in Berlin: November 13-15
Start using openstack-discuss ML: November 19
Stein-2 Milestone: January 10

-- 
Sean McGinnis (smcginnis)



Re: [openstack-dev] Sharing upstream contribution mentoring result with Korea user group

2018-10-30 Thread Sean McGinnis
On Tue, Oct 30, 2018 at 11:10:42PM +0900, Ian Y. Choi wrote:
> Hello,
> 
> I have been organizing and mentoring Korean contributors to OpenStack
> upstream development for about the last two months,
> and would like to share the results with community members.
> 

Very cool! Thanks for organizing this Ian. And thank you to all that
contributed. Some really fun and useful stuff!

Sean


> A total of nine mentees started learning OpenStack, contributed, and stayed
> on as volunteers for
>  1) developing an OpenStack mobile app for better mobile user interfaces and
> experiences
>     (inspired by https://github.com/stackerz/app , which worked on the Juno
> release), and
>  2) translating OpenStack official project artifacts including documents,
>  and Container Whitepaper (
> https://www.openstack.org/containers/leveraging-containers-and-openstack/ ).
> 
> Korea user group organizers (Seongsoo Cho, Taehee Jang, Hocheol Shin,
> Sungjin Kang, and Andrew Yongjoon Kong)
> all helped organize a total of eight offline meetups plus one mini-hackathon
> and mentored attendees.
> 
> The following is a brief summary:
>  - The "OpenStack Controller" Android app is available on the Play Store:
>    https://play.google.com/store/apps/details?id=openstack.contributhon.com.openstackcontroller
>    (GitHub: https://github.com/kosslab-kr/openstack-controller )
> 
>  - Most high-priority projects and documents are 100% translated into
>    Korean (although it is not a string freeze period): Horizon,
>    OpenStack-Helm, the I18n Guide, and the Container Whitepaper.
> 
>  - A total of 18,695 words were translated into Korean by four contributors
>    (confirmed through the Zanata API:
>    https://translate.openstack.org/rest/stats/user/[Zanata ID]/2018-08-16..2018-10-25 ):
> 
> +------------+---------------+-----------------+
> | Zanata ID  | Name          | Number of words |
> +------------+---------------+-----------------+
> | ardentpark | Soonyeul Park | 12517           |
> | bnitech    | Dongbim Im    | 693             |
> | csucom     | Sungwook Choi | 4397            |
> | jaeho93    | Jaeho Cho     | 1088            |
> +------------+---------------+-----------------+
> 
>  - The projects translated into Korean are:
> 
> +-------------------------------------+-----------------+
> | Project                             | Number of words |
> +-------------------------------------+-----------------+
> | api-site                            | 20              |
> | cinder                              | 405             |
> | designate-dashboard                 | 4               |
> | horizon                             | 3226            |
> | i18n                                | 434             |
> | ironic                              | 4               |
> | Leveraging Containers and OpenStack | 5480            |
> | neutron-lbaas-dashboard             | 5               |
> | openstack-helm                      | 8835            |
> | trove-dashboard                     | 89              |
> | zun-ui                              | 193             |
> +-------------------------------------+-----------------+
> 
> I would really like to thank all co-mentors and participants in such a
> big event promoting OpenStack contribution.
> The venue and food were supported by Korea Open Source Software Development
> Center ( https://kosslab.kr/ ).
> 
> 
> With many thanks,
> 
> /Ian
> 


Re: [openstack-dev] [all] Update to flake8 and failures despite # flake8: noqa

2018-10-26 Thread Sean McGinnis
On Fri, Oct 26, 2018 at 12:53:17PM -0400, David Moreau Simard wrote:
> Hi openstack-dev,
> 
> I stumbled on odd and sudden pep8 failures with ARA recently and
> brought it up in #openstack-infra [1].
> 
> It was my understanding that appending "  # flake8: noqa" to a line of
> code would have flake8 ignore this line if it happened to violate any
> linting rules.
> It turns out that, at least according to the flake8 release notes [2],
> "flake8: noqa" is actually meant to ignore the linting on an entire
> file.
> 
> The correct way to ignore a specific line appears to be to append "  #
> noqa" to the line... without "flake8: ".
> Looking at codesearch [3], there are a lot of projects using the
> "flake8: noqa" approach with the intent of ignoring a specific line.
> 
> It is important to fix this to make sure we're only ignoring the specific
> lines we intend to ignore, and to prevent upcoming failures in the jobs.
> 
> [1]: 
> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-10-26.log.html#t2018-10-26T16:18:38
> [2]: http://flake8.pycqa.org/en/latest/release-notes/3.6.0.html
> [3]: http://codesearch.openstack.org/?q=flake8%3A%20noqa&i=nope&files=&repos=
> 
> David Moreau Simard
> dmsimard = [irc, github, twitter]
> 

Thanks for raising this. We had a few of these in python-cinderclient, and
after correcting the usage, it turned out the old form had indeed been
suppressing some valid errors by skipping the entire file.

For those looking at fixing this in your repos - this may help:

for file in $(grep -rl "flake8.*noqa" cinderclient/*); do
    # Rewrite file-wide "# flake8: noqa" into a line-level "# noqa"
    sed -i 's/flake8.*noqa/noqa/g' "$file"
done

Of course check the git diff and run any linting jobs before accepting the
changes from that.
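For anyone unsure which form they have, the two behaviors look like this
(a sketch; F401 is just an example error code):

```python
# Line-level suppression: only this line is exempt from linting, and
# narrowing it to a specific code (F401 = unused import) is safer
# still, since other problems on the same line are still reported.
from os import path  # noqa: F401

# File-level suppression is what "flake8: noqa" actually means under
# flake8 >= 3.6: a bare comment like the (escaped) one below, placed
# anywhere in a file, silences EVERY check for the whole file.
#
#     # flake8: noqa
```

So a line that was meant to carry a targeted exemption was, in fact,
turning off linting for everything around it.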

Sean




Re: [openstack-dev] [Release-job-failures] Release of openstack/python-apmecclient failed

2018-10-26 Thread Sean McGinnis
On Fri, Oct 26, 2018 at 4:42 AM  wrote:

> Build failed.
>
> - release-openstack-python3
> http://logs.openstack.org/d3/d39466cf752f2a20a3047b9ca537b2b6adccb154/release/release-openstack-python3/345a591/
> : POST_FAILURE in 3m 57s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
>
>
The release failed because the openstackci account was not properly configured
for the PyPI package upload.


Re: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed

2018-10-24 Thread Sean McGinnis
On Wed, Oct 24, 2018 at 12:08 AM Tony Breeds 
wrote:

> On Wed, Oct 24, 2018 at 03:23:53AM +, z...@openstack.org wrote:
> > Build failed.
> >
> > - release-openstack-python3
> http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python3/e84da68/
> : POST_FAILURE in 2m 18s
>
> So this failed because pypi thinks there was a name collision[1]:
>  HTTPError: 400 Client Error: File already exists. See
> https://pypi.org/help/#file-name-reuse for url:
> https://upload.pypi.org/legacy/
>
> AFAICT the upload was successful:
>
> shade-1.27.2-py2-none-any.whl  :
> 2018-10-24T03:20:00
> d30a230461ba276c8bc561a27e61dcfd6769ca00bb4c652a841f7148a0d74a5a
> shade-1.27.2-py2.py3-none-any.whl  :
> 2018-10-24T03:20:11
> 8942b56d7d02740fb9c799a57f0c4ff13d300680c89e6f04dadb5eaa854e1792
> shade-1.27.2.tar.gz:
> 2018-10-24T03:20:04
> ebf40040b892f3e9bd4229fd05fff7ea24a08c51e46b7f2d8b3901ce34f51cbf
>
> The strange thing is that the tar.gz was uploaded *before* the wheel
> even though our publish jobs explicitly do it in the other order, and the
> timestamp of the tar.gz doesn't match the error message.
>
> So I think we have a bug somewhere; more digging tomorrow.
>
> Yours Tony.
>

Looks like this was another case of conflicting jobs. This still has both the
release-openstack-python3 and release-openstack-python jobs running, so I
think it ended up being a race between the two over which one got to PyPI
first.

I think the "fix" is to remove the release-openstack-python job now that we
are able to run the Python 3 version.

On the plus side, all of the subsequent jobs passed, so the package is
published, the announcement went out, and the requirements update patch was
generated.


> [1]
> http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python3/e84da68/job-output.txt.gz#_2018-10-24_03_20_15_264676
> ___
> Release-job-failures mailing list
> release-job-failu...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures
>


Re: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update

2018-10-23 Thread Sean McGinnis
On Tue, Oct 23, 2018 at 10:30:23AM -0400, Ben Nemec wrote:
> 
> 
> On 10/23/18 9:58 AM, Matt Riedemann wrote:
> > On 10/23/2018 8:09 AM, Ben Nemec wrote:
> > > Can't we just add a noop command like we are for the services that
> > > don't currently need upgrade checks?
> > 
> > We could, but I was also hoping that for most projects we will actually
> > be able to replace the noop / placeholder check with *something* useful
> > in Stein.
> > 
> 
> Yeah, but part of the reason for placeholders was consistency across all of
> the services. I guess if there are never going to be upgrade checks in
> adjutant then I could see skipping it, but otherwise I would prefer to at
> least get the framework in place.
> 

+1

Even if there is nothing to check at this point, I think having the facility
there is a benefit for projects and scripts that are going to be consuming
these checks. Having nothing to check, but having the status command there, is
better than every consumer needing to keep a list of which projects to run
the checks on and which to skip.
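The placeholder pattern being discussed can be sketched in plain Python. The
real framework OpenStack projects use is oslo.upgradecheck; the class names,
codes, and command shape here are illustrative, not its actual API:

```python
import enum

class Code(enum.IntEnum):
    SUCCESS = 0   # all good, nothing to do
    WARNING = 1   # upgrade can proceed, but review the details
    FAILURE = 2   # blocking issue found, stop the upgrade

def check_placeholder():
    # Noop check: nothing to verify yet, but the command exists so
    # deployment tooling can call "<project>-status upgrade check"
    # uniformly across every project.
    return Code.SUCCESS, "No upgrade checks needed for this release."

def run_upgrade_checks(checks):
    # The overall exit code is the worst individual result.
    results = [check() for check in checks]
    return max(code for code, _ in results), results

exit_code, results = run_upgrade_checks([check_placeholder])
```

The value of the noop is exactly that uniform entry point: scripts get exit
code 0 everywhere, and real checks can replace the placeholder later.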




Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-22 Thread Sean McGinnis
On Mon, Oct 22, 2018 at 07:49:35AM -0700, Morgan Fainberg wrote:
> I should be able to do a write up for Keystone's removal of paste *and*
> move to flask soon.
> 
> I can easily extract the bit of code I wrote to load our external
> middleware (and add an external loader) for the transition away from paste.
> 
> I also think paste is terrible, and would be willing to help folks move off
> of it rather than maintain it.
> 
> --Morgan
> 

Do I detect a volunteer to champion a cycle goal? :)




Re: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed

2018-10-19 Thread Sean McGinnis
This appears to be another in-transit job conflict with the py3 work.
Things should be fine, but we will need to manually propose the constraint
update since it was skipped.

On Fri, Oct 19, 2018, 09:14  wrote:

> Build failed.
>
> - release-openstack-python3
> http://logs.openstack.org/33/33d839da8acb93a36a45186d158262f269e9bbd6/release/release-openstack-python3/1cb87ba/
> : SUCCESS in 2m 44s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
> - release-openstack-python
> http://logs.openstack.org/33/33d839da8acb93a36a45186d158262f269e9bbd6/release/release-openstack-python/3a9339d/
> : POST_FAILURE in 2m 40s
>
>


[openstack-dev] [release] Release countdown for week R-24 and R-23, October 22 - November 2

2018-10-18 Thread Sean McGinnis

Development Focus
-

Team focus should be on spec approval and implementation for priority features.

General Information
---

Projects that have been following the cycle-with-milestone release model will
be switched over to cycle-with-rc soon [0]. Just a reminder that although the
changes mean milestones are no longer required, projects are still free to
request one if they feel there is a need.

Teams with libraries following the cycle-with-intermediary release model have
hopefully seen the mailing list thread on changes there [1]. Libraries on this
release model that have unreleased commits are encouraged to request a release
for them. Under the new plan, the release team will propose releases for those
projects if none has been requested before the milestone.

PTLs and/or release liaisons - a reminder that we would love to have you around
during our weekly meeting [2]. It would also be very helpful if you would
linger in the #openstack-release channel during deadline weeks.

[0] 
http://lists.openstack.org/pipermail/openstack-dev/2018-September/135088.html
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135689.html
[2] http://eavesdrop.openstack.org/#Release_Team_Meeting

Upcoming Deadlines & Dates
--

Stein-1 milestone: October 25  (R-24 week)
Forum at OpenStack Summit in Berlin: November 13-15

--
Sean McGinnis (smcginnis)



Re: [openstack-dev] [Openstack-sigs] [horizon][nova][cinder][keystone][glance][neutron][swift] Horizon feature gaps

2018-10-18 Thread Sean McGinnis
On Wed, Oct 17, 2018 at 10:41:36AM -0500, Matt Riedemann wrote:
> On 10/17/2018 9:24 AM, Ivan Kolodyazhny wrote:
> > 
> > As you may know, unfortunately, Horizon doesn't support all features
> > provided by APIs. That's why we created feature gaps list [1].
> > 
> > I had a lot of great conversations with project teams during the PTG,
> > and we tried to figure out how to prioritize these tasks.
> > It's really helpful for Horizon to get feedback from other teams to
> > understand what features should be implemented next.
> > 
> > While I'm filling launchpad with new bugs and blueprints for [1], it
> > would be good to review this list again and find some volunteers to
> > decrease feature gaps.
> > 
> > [1] https://etherpad.openstack.org/p/horizon-feature-gap
> > 
> > Thanks everybody for any of your contributions to Horizon.
> 
> +openstack-sigs
> +openstack-operators
> 
> I've left some notes for nova. This looks very similar to the compute API
> OSC gap analysis I did [1]. Unfortunately it's hard to prioritize what to
> really work on without some user/operator feedback - maybe we can get the
> user work group involved in trying to help prioritize what people really
> want that is missing from horizon, at least for compute?
> 
> [1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc
> 
> -- 
> 
> Thanks,
> 
> Matt

I also have a cinderclient OSC gap analysis I've started working on. It might
be useful to add a Horizon column to this list too.

https://ethercalc.openstack.org/cinderclient-osc-gap

Sean



Re: [openstack-dev] [kolla] [requirements] Stepping down as core reviewer

2018-10-16 Thread Sean McGinnis
On Tue, Oct 16, 2018 at 06:03:54PM +0530, ʂʍɒρƞįł Ҟưȴķɒʁʉɨ wrote:
> Dear OpenStackers,
> 
> For a few months now, I have not been able to actively contribute code
> or reviews to Kolla and Requirements given my current responsibilities,
> so I would like to take a step back and give up my core reviewer rights
> for the Kolla and Requirements repositories.
> 
> I want to use this moment to thank everyone I have had a chance to
> work alongside and anyone I may have troubled. It has been both an honor
> and a privilege to serve this community, and I will continue to do so.
> 
> In the new cloudy world I am sure the paths will cross again. Till
> then, Sayo Nara, Take Care.
> 
> Best Regards,
> Swapnil (coolsvap)
> 

Thanks for all you've been able to do Swapnil!

Sean



Re: [openstack-dev] [Openstack-operators] Forum Schedule - Seeking Community Review

2018-10-16 Thread Sean McGinnis
On Mon, Oct 15, 2018 at 03:01:07PM -0500, Jimmy McArthur wrote:
> Hi -
> 
> The Forum schedule is now up
> (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262).
> If you see a glaring content conflict within the Forum itself, please let me
> know.
> 

I have updated the Forum wiki page in preparation for the topic etherpads:

https://wiki.openstack.org/wiki/Forum/Berlin2018

Please add your working session etherpad links once they are available so
everyone has one spot to go to find all relevant links.

Thanks!
Sean



Re: [openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables

2018-10-15 Thread Sean McGinnis
On Fri, Oct 12, 2018 at 11:44:11AM -0400, Doug Hellmann wrote:
> Sean McGinnis  writes:
> 
> > [snip]
> >
> > One of the big motivators in the past was also to have output that
> > downstream distros and users could pick up for testing and early
> > packaging. Based on our admittedly anecdotal small sample, it doesn't
> > appear this is actually a big need, so we propose to stop tagging
> > milestone releases for the cycle-with-milestone projects.
> 
> One of the issues that was raised from downstream consumers [1] is that
> this complicates upgrade testing using packages, since tools like yum
> will think that the stable branch (with a final version tag) has a
> higher version number than master (with a dev version computed off of
> the first release candidate where the stable branch was created).
> 
> [snip]
> 
> We need all projects to increment their version at least by one minor
> version at the start of each cycle to save space for patch releases on
> the stable branch, so we looked at a few options for triggering that
> update automatically.
> 
> [snip]
> 
> A similarly low impact solution is to use pbr's Sem-Ver calculation
> feature and inject patches into master to bump the version being
> computed by 1 feature level (which should move from x.y.z.0rc1 to
> something like x.y+1.0.devN). See [2] for details about how this works.
> 
> This is the approach I prefer, and I have a patch to the branching
> scripts to add the Sem-Ver instruction to the patches we already
> generate to update reno [3].
> 
> That change should take care of our transition from Stein->T, but we're
> left with versions in Stein that are lower than Rocky right now. So, as
> a one time operation, Sean is going to propose empty patches with the
> Sem-Ver instruction in the commit message to all of the repositories for
> Stein deliverables that have stable/rocky branches.
> 

The patch to automatically propose the Sem-Ver flag on branching stable/* has
landed, and I have tested it out with our release-test repo. This seems to
work well and is a much lower impact approach than the other options we had
considered.

I have a set of patches queued up now to do this one-time manual step for the
Rocky-to-Stein transition.

**Please watch for these "empty" patches from me and get them through quickly
if they look OK to you.**

I have checked the list of repos to make sure none of them have done any sort
of milestone release yet for Stein. If you are aware of any that I have
missed, please let me know.

One strong warning about using this pbr feature: it is not obvious that having
this metadata tag in a commit message has this effect on a repo, and it is
very, very disruptive to try to undo its use. So please do not copy, backport,
or otherwise reuse any of the commits that contain this flag.
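
To make the effect concrete, here is a rough sketch of the version arithmetic
the Sem-Ver footer drives. This is a simplification of pbr's actual behavior,
which also accounts for pre-release tags and other git metadata:

```python
def next_dev_version(last_tag: str, commits_since: int,
                     sem_ver: str = "bugfix") -> str:
    """Approximate the dev version pbr computes for an untagged commit.

    sem_ver mirrors the "Sem-Ver:" footer pbr reads from commit
    messages: "bugfix" (the default), "feature", or "api-break".
    """
    major, minor, patch = (int(p) for p in last_tag.split("."))
    if sem_ver == "api-break":
        major, minor, patch = major + 1, 0, 0
    elif sem_ver == "feature":
        minor, patch = minor + 1, 0
    else:
        patch += 1
    return "{}.{}.{}.dev{}".format(major, minor, patch, commits_since)

# Without the footer, master only advances the patch level...
print(next_dev_version("13.0.0", 4))             # 13.0.1.dev4
# ...while an (even empty) "Sem-Ver: feature" patch bumps the minor
# version, lifting master above the stable branch's 13.0.x releases.
print(next_dev_version("13.0.0", 4, "feature"))  # 13.1.0.dev4
```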

Any questions at all, please ask here or in the #openstack-release channel.

Sean



Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-15 Thread Sean McGinnis
On Mon, Oct 15, 2018 at 01:40:37PM +0100, Chris Dent wrote:
> 
> Back in August [1] there was an email thread about the Paste package
> being essentially unmaintained and several OpenStack projects still
> using it. At that time we reached the conclusion that we should
> investigate having OpenStack adopt Paste in some form as it would
> take some time or be not worth it to migrate services away from it.
> 
> [snip]
> 
> I'd like some input from the community on how we'd like this to go.
> Some options.
> 
> * Chris becomes the de-facto maintainer of paste and I do whatever I
>   like to get it healthy and released.
> 
> * Several volunteers from the community take over the existing
>   bitbucket setup [2] and keep it going there.
> 
> * Several volunteers from the community import the existing
>   bitbucket setup to OpenStack^wOpenDev infra and manage it.
> 
> What would people like? Who would like to volunteer?
> 

Maybe some combination of bullets one and three?

Moving it under the OpenDev umbrella seems like it would increase its
visibility in the community. We could also leverage our release automation
as/when needed, and it might attract some drive-by contributions.




Re: [openstack-dev] [Openstack-operators] [SIGS] Ops Tools SIG

2018-10-12 Thread Sean McGinnis
On Fri, Oct 12, 2018 at 11:25:20AM +0200, Martin Magr wrote:
> Greetings guys,
> 
> On Thu, Oct 11, 2018 at 4:19 PM, Miguel Angel Ajo Pelayo <
> majop...@redhat.com> wrote:
> 
> > Adding the mailing lists back to your reply, thank you :)
> >
> > I guess that +melvin.hills...@huawei.com  can
> > help us a little bit organizing the SIG,
> > but I guess the first thing would be collecting a list of tools which
> > could be published
> > under the umbrella of the SIG, starting by the ones already in Osops.
> >
> > Publishing documentation for those tools, and the catalog under
> > docs.openstack.org
> > is possibly the next step (or a parallel step).
> >
> >
> > On Wed, Oct 10, 2018 at 4:43 PM Rob McAllister 
> > wrote:
> >
> >> Hi Miguel,
> >>
> >> I would love to join this. What do I need to do?
> >>
> >> Sent from my iPhone
> >>
> >> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo 
> >> wrote:
> >>
> >> Hello
> >>
> >> Yesterday, during the Oslo meeting we discussed [6] the possibility
> >> of creating a new Special Interest Group [1][2] to provide home and release
> >> means for operator related tools [3] [4] [5]
> >>
> >>
> All of those tools have Python dependencies related to OpenStack, such as
> python-openstackclient or python-pbr, which is exactly why we moved
> osops-tools-monitoring-oschecks packaging away from the OpsTools SIG to the
> Cloud SIG. AFAIR we had some issues with the OpsTools SIG being dependent
> on the openstack SIG. I believe the Cloud SIG is the proper home for tools
> like [3][4][5], as they are related to OpenStack anyway. The OpsTools SIG
> contains general tools like fluentd, sensu, and collectd.
> 
> 
> Hope this helps,
> Martin
> 

Hey Martin,

I'm not sure I understand the issue with these tools having dependencies on
other packages, and how that relates to SIG ownership. Is your concern (or the
past concern you are pointing out) that the tools would have a more difficult
time getting dependency updates if they are owned by a different group?

Thanks!
Sean



[openstack-dev] [ptl][release] Proposed changes for library releases

2018-10-11 Thread Sean McGinnis
Libraries should be released early and often so their consumers can pick up
merged changes, and issues with those changes can be identified close to when
the change is made. To help with this, we are considering forcing at least one
library release per milestone (if there are unreleased merged changes).

Planned Changes
---

The proposed change would be that for each cycle-with-intermediary library
deliverable, if it was not released during that milestone timeframe, the
release team would automatically generate a release request early in the week
of the milestone deadline. For example, at Stein milestone 1, if the library
was not released at all in the Stein cycle yet, we would trigger a release the
week of the milestone. At Stein milestone 2, if the library was not released
since milestone 1, we would trigger another release, etc.

That autogenerated patch would be used as a base to communicate with the team:
if a team knows it is not a good time to do a release for that library, someone
from the team can -1 the patch to have it held, or update that patch with a
different commit SHA where they think it would be better to release from. If
there are no issues, ideally we would want a +1 from the PTL and/or release
liaison to indicate approval, but we would also consider no negative feedback
as an indicator that the automatically proposed patches without a -1 can all be
approved on the Thursday milestone deadline.
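
For context, a release request is a small patch to a deliverable file in the
openstack/releases repository; the autogenerated patch would look roughly like
this (the deliverable name, version, and hash below are purely illustrative):

```yaml
# deliverables/stein/example-lib.yaml (hypothetical deliverable)
launchpad: example-lib
release-model: cycle-with-intermediary
type: library
releases:
  - version: 1.3.0
    projects:
      - repo: openstack/example-lib
        hash: 1234567890abcdef1234567890abcdef12345678
```

Holding the patch, or retargeting it, is then just a normal Gerrit review
action on that change.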

Frequently Asked Questions (we're guessing)
---

Q: Our team likes to release libraries often. We don't want to wait for each
   milestone. Why are you ruining our lives?
A: Teams are encouraged to request library releases regularly, and at any point
   in time that makes sense. The automatic release patches only serve as a
   safeguard to guarantee that library changes are released and consumed early
   and often, in case no release is actively requested.

Q: Our library has no changes; that's why we are not requesting releases. Why
   are you forcing meaningless releases? You need a hobby.
A: If the library has not had any change merged since the previous tag, we
   would not generate a release patch for it.

Q: My team is responsible for this library. I don't feel comfortable having an
   autogenerated patch grab a random commit to release. Can we opt out of this?
A: The team can do their own releases when they are ready. If we generate a
   release patch and you don't think you are ready, just -1 the patch. Then
   when you are ready, you can update the patch with the new commit to use.


Please ask questions or raise concerns here and/or in the #openstack-release
channel.

Thanks!

The Release Management Team

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python3] Enabling py37 unit tests

2018-10-10 Thread Sean McGinnis
> >
> >
> > What I mean is that we too run into a situation where we have a large
> > backlog of CI jobs, since we have too many changes and jobs in flight.
> >
> > So, I'm asking whether there is a good way to avoid duplicating all jobs
> > to run on all three interpreters. Do we really need testing of all three
> > versions? Or is testing with a subset a manageable risk?
> >
> 
> Fair enough. I'm probably not the right person to answer so perhaps someone
> else can chime in. One thing worth pointing out is that it seems the jump
> from 3.5 to 3.6 wasn't nearly as painful as the jump from 3.6 to 3.7, at
> least in my experience.
> 
> Corey
> 

I share Andreas's concerns. I would rather see us testing 3.5 and 3.7 versus
3.5, 3.6, and 3.7. I would expect anything that passes on 3.7 to be fairly safe
when it comes to 3.6 runtimes.

Maybe a periodic job that exercises 3.6?

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] assigning new liaisons to projects

2018-10-08 Thread Sean McGinnis
On Mon, Oct 08, 2018 at 10:27:06AM -0400, Doug Hellmann wrote:
> TC members,
> 
> Since we are starting a new term, and have several new members, we need
> to decide how we want to rotate the liaisons attached to each of our
> project teams, SIGs, and working groups [1].
> 
> Last term we went through a period of volunteer sign-up and then I
> randomly assigned folks to slots to fill out the roster evenly. During
> the retrospective we talked a bit about how to ensure we had an
> objective perspective for each team by not having PTLs sign up for their
> own teams, but I don't think we settled on that as a hard rule.
> 
> I think the easiest and fairest (to new members) way to manage the list
> will be to wipe it and follow the same process we did last time. If you
> agree, I will update the page this week and we can start collecting
> volunteers over the next week or so.
> 
> Doug
> 

Seems fair and a good approach to me.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-08 Thread Sean McGinnis
> >
> > I think we can definitely manage the agenda to minimize the number of
> > complex discussions. If that proves to be too hard, I wouldn't mind
> > meeting more often, but there does seem to be a lot of support for
> > preferring other venues for those conversations.
> >
> >
> +1 I think there is a point where we need to recognize there is a time and
> place for everything, and some of those long running complex conversations
> might not be well suited for what would essentially be "review business
> status" meetings.  If we have any clue that something is going to be a very
> long and drawn out discussion, then I feel like we should make an effort to
> schedule individually.

We could also be very aggressive about ending the meeting early if all process
topics are covered to actively discourage this meeting from becoming a forum
for other non-process discussions.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-08 Thread Sean McGinnis
On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote:
> In Denver, we agreed to add a new "re-image" API in cinder to support
> volume-backed server rebuild with a new image.
> 
> An initial blueprint has been drafted in [3], welcome to review it, thanks.
> : )
> 
> [snip]
> 
> The "force" parameter idea comes from [4], means that
> 1. we can re-image an "available" volume directly.
> 2. we can't re-image "in-use"/"reserved" volume directly.
> 3. we can only re-image an "in-use"/"reserved" volume with "force"
> parameter.
> 
> And it means nova need to always call re-image API with an extra "force"
> parameter,
> because the volume status is "in-use" or "reserved" when we rebuild the
> server.
> 
> *So, what's your idea? Do we really want to add this "force" parameter?*
> 

I would prefer we have the "force" parameter, even if it is something that will
always be defaulted to True from Nova.

Having this exposed as a REST API means anyone could call it, not just Nova
code. So as protection against someone calling it without being clear on the
full implications, having a flag in there to guard volumes that are already
attached or reserved for shelved instances is worth the very minor extra
overhead.
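As a rough sketch of the guard being argued for here (all names are invented for illustration; the eventual Cinder implementation will certainly differ):

```python
# Illustrative only: invented names, not actual Cinder code. An
# "available" volume may be re-imaged directly; "in-use" or "reserved"
# volumes require an explicit force=True from the caller (e.g. Nova).

class VolumeStateError(Exception):
    pass

def check_reimage_allowed(volume_status, force=False):
    if volume_status == 'available':
        return
    if volume_status in ('in-use', 'reserved'):
        if not force:
            raise VolumeStateError(
                "Volume is %s; pass force=True to re-image it" % volume_status)
        return
    raise VolumeStateError(
        "Volume in status %s cannot be re-imaged" % volume_status)

check_reimage_allowed('available')           # allowed directly
check_reimage_allowed('in-use', force=True)  # Nova's rebuild call path
try:
    check_reimage_allowed('reserved')        # blocked without force
except VolumeStateError as exc:
    print(exc)
```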

> [1] https://etherpad.openstack.org/p/nova-ptg-stein L483
> [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12
> [3] https://review.openstack.org/#/c/605317
> [4]
> https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75
> 
> Regards,
> Yikun
> 
> Jiang Yikun(Kero)
> Mail: yikunk...@gmail.com

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-04 Thread Sean McGinnis
On Thu, Oct 04, 2018 at 05:47:53PM +, Jeremy Stanley wrote:
> On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > TC members, please reply to this thread and indicate if you would
> > find meeting at 1300 UTC on the first Thursday of every month
> > acceptable, and of course include any other comments you might
> > have (including alternate times).
> 
> This time is acceptable to me. As long as we ensure that community
> feedback continues more frequently in IRC and on the ML (for example
> by making it clear that this meeting is expressly *not* for that)
> then I'm fine with resuming formal meetings.
> -- 
> Jeremy Stanley

Same here. The time works for me, but I hope that bringing back an official
meeting time does not prevent productive conversations from happening at any
other time.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-26 and R-25, October 8-19

2018-10-04 Thread Sean McGinnis
Welcome to the (biweekly for now) release countdown email. Just a quick
reminder for this one.

Development Focus
-

Team focus should be on spec approval and implementation for priority features.

General Information
---

Teams should now be making progress towards the cycle goals [1]. Please
prioritize reviews for these appropriately. 

[1] https://governance.openstack.org/tc/goals/stein/index.html

Upcoming Deadlines & Dates
--

Stein-1 milestone: October 25  (R-24 week)
Forum at OpenStack Summit in Berlin: November 13-15

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposing Gorka Eguileor to Stable Core ...

2018-10-03 Thread Sean McGinnis
On Wed, Oct 03, 2018 at 09:45:25AM -0500, Jay S. Bryant wrote:
> Team,
> 
> We had discussed the possibility of adding Gorka to the stable core team
> during the PTG.  He does review a number of our backport patches and is
> active in that area.
> 
> If there are no objections in the next week I will add him to the list.
> 
> Thanks!
> 
> Jay (jungleboyj)
> 

+1 from me. Gorka has shown that he understands the stable policies, and
coming from a company with a vested interest in stable backports makes him a
good candidate for stable core.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Stable Core Team Update

2018-10-02 Thread Sean McGinnis
On Tue, Oct 02, 2018 at 01:45:38PM -0500, Matt Riedemann wrote:
> On 10/2/2018 10:41 AM, Miguel Lavalle wrote:
> > Hi Stable Team,
> > 
> > I want to nominate Bernard Cafarrelli as a stable core reviewer for
> > Neutron and related projects. Bernard has been increasing the number of
> > stable reviews he is doing for the project [1]. Besides that, he is a
> > stable maintainer downstream for his employer (Red Hat), so he can bring
> > that valuable experience to the Neutron stable team.
> > 
> > Thanks and regards
> > 
> > Miguel
> > 
> > [1] 
> > https://review.openstack.org/#/q/(project:openstack/neutron+OR+openstack/networking-sfc+OR+project:openstack/networking-ovn)++branch:%255Estable/.*+reviewedby:%22Bernard+Cafarelli+%253Cbcafarel%2540redhat.com%253E%22
> >  
> > 
> 
> +1 from me.
> 
> -- 
> 
> Thanks,
> 
> Matt

+1 from me as well.

Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Sean McGinnis
On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote:
> On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  wrote:
> 
> > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
> >  wrote:
> > >
> > > Ideally I would like to see it in the form of least specific to most
> > specific. But more importantly in a way that there is no additional
> > delimiters between the service type and the resource. Finally, I do not
> > like the change of plurality depending on action type.
> > >
> > > I propose we consider
> > >
> > > <service_type>:<resource>:<action>[:<subaction>]
> > >
> > > Example for keystone (note, action names below are strictly examples; I
> > > am fine with whatever form those actions take):
> > > identity:projects:create
> > > identity:projects:delete
> > > identity:projects:list
> > > identity:projects:get
> > >
> > > It keeps things simple and consistent when you're looking through
> > overrides / defaults.
> > > --Morgan
> > +1 -- I think the ordering, with `resource` coming before
> > `action|subaction`, will be cleaner.
> >
> 

Great idea. This is looking better and better.
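To illustrate the proposed shape, here is a small, self-contained checker for policy names of the form shown above. The exact grammar (allowed characters, optional subaction) is an assumption for this sketch, not something settled in the thread.

```python
import re

# Illustrative only: the character set and optional :<subaction> segment
# are assumptions; the thread has not finalized the grammar.
POLICY_NAME = re.compile(
    r'^(?P<service_type>[a-z][a-z0-9_-]*)'
    r':(?P<resource>[a-z][a-z0-9_-]*)'
    r':(?P<action>[a-z][a-z0-9_-]*)'
    r'(:(?P<subaction>[a-z][a-z0-9_-]*))?$'
)

def is_consistent_policy_name(name):
    """Return True if the name matches service:resource:action[:subaction]."""
    return POLICY_NAME.match(name) is not None

# The keystone examples from the proposal all pass:
for name in ('identity:projects:create', 'identity:projects:delete',
             'identity:projects:list', 'identity:projects:get'):
    assert is_consistent_policy_name(name)

# A legacy-style name with the action folded into the resource does not:
assert not is_consistent_policy_name('identity:create_project')
```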

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Sean McGinnis
> On Fri, Sep 28, 2018 at 8:48 AM Lance Bragstad  wrote:
> 
> > Bumping this thread again and proposing two conventions based on the
> > discussion here. I propose we decide on one of the two following
> > conventions:
> >
> > *<service-type>:<resource>:<action>*
> >
> > or
> >
> > *<service-type>:<resource>_<action>*
> >
> > Where <service-type> is the corresponding service type of the project [0],
> > and <action> is either create, get, list, update, or delete. I think
> > decoupling the method from the policy name should aid in consistency,
> > regardless of the underlying implementation. The HTTP method specifics can
> > still be relayed using oslo.policy's DocumentedRuleDefault object [1].
> >
> > I think the plurality of the resource should default to what makes sense
> > for the operation being carried out (e.g., list:foobars, create:foobar).
> >
> > I don't mind the first one because it's clear about what the delimiter is
> > and it doesn't look weird when projects have something like:
> >
> > <service-type>:<resource>:<sub-resource>:<action>
> >

My initial preference was the second format, but you make a good point here
about potential subactions. Either is fine with me - the main thing I would
love to see is consistency in format. But based on this point, I vote for
option 1.

> > If folks are ok with this, I can start working on some documentation that
> > explains the motivation for this. Afterward, we can figure out how we want
> > to track this work.
> >

+1 thanks for working on this!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][puppet][kolla][helm][ansible] Change in Cinder backup driver naming

2018-09-27 Thread Sean McGinnis
This probably applies to all deployment tools, so hopefully this reaches the
right folks.

In Havana, Cinder deprecated configuring backup drivers by specifying only the
module. Patch https://review.openstack.org/#/c/595372/ finally removed the
backwards-compatibility handling for configs that still used the old way.

From a quick search, it appears there may be some tools that are still
defaulting to setting the backup driver name using only the module path. If
your project does not specify the full driver class path, please update it to
do so now.
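For example, with the Swift backup driver (other drivers follow the same pattern; double-check the class path against the Cinder release you deploy):

```ini
[DEFAULT]
# Old style (no longer accepted): module path only
# backup_driver = cinder.backup.drivers.swift
# New style: full driver class path
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
```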

Any questions, please reach out here or in the #openstack-cinder channel.

Thanks!
Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables

2018-09-26 Thread Sean McGinnis
During the Stein PTG in Denver, the release management team talked about ways
we can make things simpler and reduce the "paper pushing" work that all teams
need to do right now. One topic that came up was the usefulness of pushing tags
around milestones during the cycle.

There were a couple of needs identified for doing such "milestone releases":
1) It tests the release automation machinery to identify problems before
   the RC and final release crunch time.
2) It creates a nice cadence throughout the cycle to help teams stay on
   track and focus on the right things for each phase of the cycle.
3) It gives us an indication that teams are healthy, active, and planning
   to include their components in the final release.

One of the big motivators in the past was also to have output that downstream
distros and users could pick up for testing and early packaging. Based on our
admittedly small, anecdotal sample, it doesn't appear this is actually a big
need, so we propose to stop tagging milestone releases for the
cycle-with-milestone projects.

We would still have "milestones" during the cycle to facilitate work
organization and create a cadence: teams should still be aware of them, and we
will continue to communicate those dates in the schedule and in the release
countdown emails. But you would no longer be required to request a release for 
each milestone.

Beta releases would be optional: if teams do want to have some beta version
tags before the final release they can still request them - whether on one of
the milestone dates, or whenever there is the need for the project.

Release candidates would still require a tag. To facilitate that step and
guarantee we have a release candidate for every deliverable, the release team
proposes to automatically generate a release request early in the week of the
RC deadline. That patch would be used as a base to communicate with the team:
if a team wants to wait for a specific patch to make it to the RC, someone from
the team can -1 the patch to have it held, or update that patch with a
different commit SHA. If there are no issues, ideally we would want a +1 from
the PTL and/or release liaison to indicate approval, but we would also consider
no negative feedback as an indicator that the automatically proposed patches
without a -1 can all be approved at the end of the RC deadline week.

To cover point (3) above, and clearly know that a project is healthy and should
be included in the coordinated release, we are thinking of requiring a person 
for each team to add their name to a "manifest" of sorts for the release cycle.
That "final release liaison" person would be the designated person to follow
through on finishing out the releases for that team, and would be designated
ahead of the final release phases.

With all these changes, we would rename the cycle-with-milestones release
model to something like cycle-with-rc.

FAQ:
Q: Does this mean I don't need to pay attention to releases any more and the
   release team will just take care of everything?
A: No. We still want teams engaged in the release cycle and would feel much
   more comfortable if we get an explicit +1 from the team on any proposed tags
   or releases.

Q: Who should sign up to be the final release liaison ?
A: Anyone in the team really. Could be the PTL, the standing release liaison,
   or someone else stepping up to cover that role.

--
Thanks!
The Release Team

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][upgrade-checkers] Week R-29 Update

2018-09-22 Thread Sean McGinnis
On Fri, Sep 21, 2018 at 04:19:35PM -0500, Ben Nemec wrote:
> 
> 
> On 09/21/2018 03:53 PM, Matt Riedemann wrote:
> > Updates for this week:
> > 
> > * As bnemec noted in the last update [1], he's making some progress with
> > the oslo.upgradecheck library. He's retrofitting the nova-status upgrade
> > check code to use the library and has a patch up for designate to use
> > it.
> > 
> > * The only two projects that I'm aware of with patches up at this point
> > are monasca [2] and designate [3]. The monasca one is tricky because as
> > I've found going through release notes for some projects, they don't
> > really have any major upgrade impacts so writing checks is not obvious.
> > I don't have a great solution here. What monasca has done is add the
> > framework with a noop check. If others are in the same situation, I'd
> > like to hear your thoughts on what you think makes sense here. The
> > alternative is these projects opt out of the goal for Stein and just add
> > the check code later when it makes sense (but people might forget or not
> > care to do that later if it's not a goal).
> 
> My inclination is for the command to exist with a noop check, the main
> reason being that if we create it for everyone this cycle then the
> deployment tools can implement calls to the status commands all at once. If
> we wait until checks are needed then someone has to not only implement it in
> the service but also remember to go update all of the deployment tools.
> Implementing a noop check should be pretty trivial with the library so it
> isn't a huge imposition.
> 

This was brought up at one point, and I think the preference for those involved
at the time was to still have the upgrade check available, even if it is just a
noop. The reason, as you state, is that it makes things consistent for
deployment tooling to be able to always run the check, regardless of which
project is being deployed.
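For anyone unfamiliar with the pattern under discussion, here is a self-contained sketch of a noop upgrade check. A real project would build on the oslo.upgradecheck library rather than these hand-rolled stand-ins; this only illustrates the shape.

```python
import enum

# Self-contained illustration of the "noop check" pattern; a real service
# would use the oslo.upgradecheck library instead of these stand-ins.

class Code(enum.IntEnum):
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2

def placeholder_check():
    # This release has no known upgrade impacts, so the check passes
    # unconditionally -- but the "status upgrade check" command now
    # exists, and deployment tooling can call it uniformly for every
    # service without special-casing.
    return Code.SUCCESS

UPGRADE_CHECKS = (("placeholder", placeholder_check),)

def run_upgrade_checks():
    # The command's exit status is the worst result observed.
    return max(check() for _name, check in UPGRADE_CHECKS)

assert run_upgrade_checks() == Code.SUCCESS
```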

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode?

2018-09-21 Thread Sean McGinnis
On Fri, Sep 21, 2018 at 08:26:41AM -0600, Doug Hellmann wrote:
> Excerpts from Elõd Illés's message of 2018-09-21 16:08:28 +0200:
> > Hi,
> > 
> > Here is an etherpad with the teams that have the stable:follows-policy tag
> > on their repos:
> > 
> > https://etherpad.openstack.org/p/ocata-final-release-before-em
> > 
> > On the links you can find reports about the open and unreleased changes, 
> > that could be a useful input for the before-EM/final release.
> > Please have a look at the report (and review the open patches if there 
> > are) so that a release can be made if necessary.
> > 
> > Thanks,
> > 
> > Előd
> 
> Thanks for pulling all of this information together!
> 
> Doug
> 

Really useful information Előd - thanks for getting that put together!

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposed Changes to the Core Team ...

2018-09-21 Thread Sean McGinnis
On Wed, Sep 19, 2018 at 08:43:24PM -0500, Jay S Bryant wrote:
> All,
> 
> In the last year we have had some changes to Core team participation.  This
> was a topic of discussion at the PTG in Denver last week.  Based on that
> discussion I have reached out to John Griffith and Winston D (Huang Zhiteng)
> and asked if they felt they could continue to be a part of the Core Team. 
> Both agreed that it was time to relinquish their titles.
> 
> So, I am proposing to remove John Griffith and Winston D from Cinder Core. 
> If I hear no concerns with this plan in the next week I will remove them.
> 
> It is hard to remove people who have been so instrumental to the early days
> of Cinder.  Your past contributions are greatly appreciated and the team
> would be happy to have you back if circumstances every change.
> 
> Sincerely,
> Jay Bryant
> 

Really sad to see Winston go, as he has been a long-time member, but over the
last several releases it has been obvious he has had other priorities
competing for his time. It would be great if that were to change some day.
He has made a lot of great contributions to Cinder over the years.

I'm a little reluctant to make any changes with John though. We've spoken
briefly. He definitely is off to other things now, but with how deeply he has
been involved up until recently with things like the multiattach
implementation, replication, and other significant things, I would much rather
have him around but less active than completely gone. Having a few good reviews
is worth a lot.

I would propose we hold off on changing John's status for at least a cycle. He
has indicated to me he would be willing to devote a little time to still doing
reviews as his time allows, and I would hate to lose out on his expertise on
changes to some things. Maybe we can give it a little more time and see if his
other demands keep him too busy to participate and reevaluate later?

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][collectd-openstack-plugins] Schedule new release?

2018-09-21 Thread Sean McGinnis
On Fri, Sep 21, 2018 at 09:43:52AM +0200, Matthias Runge wrote:
> Hello,
> 
> it has been some time, since collectd-ceilometer-plugin[1]
> has been renamed to collectd-openstack-plugins[2] and since there has been a
> release.
> 
> What is required to trigger a new release here?
> 
> Thank you,
> Matthias
> 
> 
> [1] https://github.com/openstack/collectd-ceilometer-plugin
> [2] https://github.com/openstack/collectd-openstack-plugins
> -- 
> Matthias Runge 
> 

Hi Matthias,

collectd-openstack-plugins does not appear to be an official repo under
governance [1]. For these types of projects to do a release, the team would
need to push a tag to the repo. That will trigger some post jobs to run that
will create and publish tarballs. Some basic information (though in a slightly
different context) can be found here [2].

[1] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
[2] 
https://docs.openstack.org/infra/manual/creators.html#prepare-an-initial-release
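For illustration, the tag-and-publish steps look roughly like the following. The version number and remote name are assumptions, and real OpenStack releases use GPG-signed tags (`git tag -s`); the demo below uses an unsigned annotated tag in a throwaway repo so it can run without a signing key.

```shell
# Demo in a throwaway repository; -a instead of -s only because signing
# requires a GPG key.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git tag -a 1.0.0 -m "collectd-openstack-plugins 1.0.0"
git describe --tags   # prints: 1.0.0
# For the real repository, pushing the (signed) tag to Gerrit triggers
# the post jobs that build and publish the tarballs:
# git push gerrit 1.0.0
```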

--
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Capturing Feedback/Input

2018-09-20 Thread Sean McGinnis
On Thu, Sep 20, 2018 at 05:30:32PM -0500, Melvin Hillsman wrote:
> Hey everyone,
> 
> During the TC meeting at the PTG we discussed the ideal way to capture
> user-centric feedback; particular from our various groups like SIGs, WGs,
> etc.
> 
> Options that were mentioned ranged from a wiki page to a standalone
> solution like discourse.
> 
> While there is no perfect solution it was determined that Storyboard could
> facilitate this. It would play out where there is a project group
> openstack-uc? and each of the SIGs, WGs, etc would have a project under
> this group; if I am wrong, someone else who was in the room can correct me.
> 
> The entire point is a first step (maybe final) in centralizing user-centric
> feedback that does not require any extra overhead be it cost, time, or
> otherwise. Just kicking off a discussion so others have a chance to chime
> in before anyone pulls the plug or pushes the button on anything and we
> settle as a community on what makes sense.
> 
> -- 
> Kind regards,
> 
> Melvin Hillsman

I think Storyboard would be a good place to manage SIG/WG feedback. It will
take some time before the majority of projects have moved over from Launchpad,
but once they do, this will make it much easier to track SIG initiatives all
the way through to code implementation.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...)

2018-09-20 Thread Sean McGinnis
On Thu, Sep 20, 2018 at 03:46:43PM -0600, Doug Hellmann wrote:
> Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +:
> > tl;dr: The openstack, openstack-dev, openstack-sigs and
> > openstack-operators mailing lists (to which this is being sent) will
> > be replaced by a new openstack-disc...@lists.openstack.org mailing
> > list.
> 
> Since last week there was some discussion of including the openstack-tc
> mailing list among these lists to eliminate confusion caused by the fact
> that the list is not configured to accept messages from all subscribers
> (it's meant to be used for us to make sure TC members see meeting
> announcements).
> 
> I'm inclined to include it and either use a direct mailing or the
> [tc] tag on the new discuss list to reach TC members, but I would
> like to hear feedback from TC members and other interested parties
> before calling that decision made. Please let me know what you think.
> 
> Doug
> 

This makes sense to me. I would rather have discussions happen where everyone
is likely to see them than continue with the current separation.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-28 and R-27, September 24 - October 5

2018-09-20 Thread Sean McGinnis
Welcome to the release countdown email. I am going to keep these biweekly for
now since there are not as many critical deadlines, but please let me know if
any more frequent or additional updates would be useful.

Development Focus
-

Team focus should be on spec approval and implementation of priority features.

Action Required
--

Matt Riedemann started a thread about the Ocata release entering the new
"extended maintenance" phase:


http://lists.openstack.org/pipermail/openstack-dev/2018-September/134810.html

We are actually past the published date for that transition, but this is the
first time we've done this, so I think we're all working through the process.
Matt raises the good point that any teams that have merged patches for
stable/ocata that would like to have those officially available should do a
final stable release off of stable/ocata.

After that, as part of extended maintenance, teams can choose how they handle
further stable backports, but there will no longer be official releases from
stable/ocata.

General Information
---

Please be aware of the project specific deadlines that vary slightly from the
overall release schedule [1].

Teams should now be making progress towards the cycle goals [2]. Please
prioritize reviews for these appropriately. 

[1] https://releases.openstack.org/stein/schedule.html
[2] https://governance.openstack.org/tc/goals/stein/index.html

If your project has a library that is still a 0.x release, start thinking about
when it will be appropriate to do a 1.0 version. The version number does signal
the state, real or perceived, of the library, so we strongly encourage going to
a full major version once things are in a good and usable state.

PTLs and/or release liaisons - we are still a little ways out from the first
milestone, but a reminder that we would love to have you around during our
weekly meeting [3]. It would also be very helpful if you would linger in the
#openstack-release channel during deadline weeks.

[3] http://eavesdrop.openstack.org/#Release_Team_Meeting

Upcoming Deadlines & Dates
--

Stein-1 milestone: October 25  (R-24 week)
Forum at OpenStack Summit in Berlin: November 13-15

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] Are we ready to put stable/ocata into extended maintenance mode?

2018-09-18 Thread Sean McGinnis
On Tue, Sep 18, 2018 at 02:27:03PM -0500, Matt Riedemann wrote:
> The release page says Ocata is planned to go into extended maintenance mode
> on Aug 27 [1]. There really isn't much to this except it means we don't do
> releases for Ocata anymore [2]. There is a caveat that project teams that do
> not wish to maintain stable/ocata after this point can immediately end of
> life the branch for their project [3]. We can still run CI using tags, e.g.
> if keystone goes ocata-eol, devstack on stable/ocata can still continue to
> install from stable/ocata for nova and the ocata-eol tag for keystone.
> Having said that, if there is no undue burden on the project team keeping
> the lights on for stable/ocata, I would recommend not tagging the
> stable/ocata branch end of life at this point.
> 
> So, questions that need answering are:
> 
> 1. Should we cut a final release for projects with stable/ocata branches
> before going into extended maintenance mode? I tend to think "yes" to flush
> the queue of backports. In fact, [3] doesn't mention it, but the resolution
> said we'd tag the branch [4] to indicate it has entered the EM phase.
> 
> 2. Are there any projects that would want to skip EM and go directly to EOL
> (yes this feels like a Monopoly question)?
> 
> [1] https://releases.openstack.org/
> [2] https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases
> [3] https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance
> [4] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life
> 
> -- 
> 
> Thanks,
> 
> Matt

I have a patch that's been pending for marking it as extended maintenance:

https://review.openstack.org/#/c/598164/

That's just the state for Ocata. You raise some other good points here that I
am curious to see input on.

Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch

2018-09-17 Thread Sean McGinnis
> > 
> > Plan
> > 
> > We would now like to have the driverfixes/ocata branch deleted so there is 
> > no
> > confusion about where backports should go and we don't accidentally get 
> > these
> > out of sync again.
> > 
> > Infra team, please delete this branch or let me know if there is a process
> > somewhere I should follow to have this removed.
> 
> The first step is to make sure that all changes on the branch are in a
> non-open state (merged or abandoned).
> https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open
> shows that there are no open changes.
> 
> Next you will want to make sure that the commits on this branch are preserved 
> somehow. Git garbage collection will delete and cleanup commits if they are 
> not discoverable when working backward from some ref. This is why our old 
> stable branch deletion process required we tag the stable branch as 
> $release-eol first. Looking at `git log origin/driverfixes/ocata 
> ^origin/stable/ocata --no-merges --oneline` there are quite a few commits on 
> the driverfixes branch that are not on the stable branch, but that appears to 
> be due to cherry pick writing new commits. You have indicated above that you 
> believe the two branches are in sync at this point. A quick sampling of 
> commits seems to confirm this as well.
> 
> If you can go ahead and confirm that you are ready to delete the 
> driverfixes/ocata branch I will go ahead and remove it.
> 
> Clark
> 

I did another spot check too to make sure I hadn't missed anything, but it does
appear to be as you stated that the cherry pick resulted in new commits and
they actually are in sync for our purposes.
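The comparison described above can be sketched with plain git. The following is a toy illustration in a throwaway repository (branch names are hypothetical stand-ins for stable/ocata and driverfixes/ocata): cherry-picked commits get new SHAs, so `git log A ^B` over-reports differences, while `git cherry` compares by patch content.

```shell
set -e
# Throwaway repo showing why cherry-picked backports look "missing" by
# SHA but not by patch content. Branch names are hypothetical stand-ins.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email demo@example.com
git config user.name demo
git checkout -qb stable_ocata
echo base > fix.txt
git add fix.txt
git commit -qm "base"

git checkout -qb driverfixes_ocata   # the driver fix lands here first
echo fix >> fix.txt
git commit -qam "driver fix"

git checkout -q stable_ocata         # later cherry-picked to stable
git cherry-pick -x driverfixes_ocata >/dev/null

# By SHA the branches still differ: cherry-pick created a new commit.
git log driverfixes_ocata ^stable_ocata --no-merges --oneline

# By patch-id they do not: a leading '-' means an equivalent commit
# already exists on the first branch, so nothing is actually missing.
git cherry stable_ocata driverfixes_ocata
```

This is why a plain `git log A ^B` listing needed the manual spot check: only the patch-content comparison confirms the branches are effectively in sync.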

I believe we are ready to proceed.

Thanks for your help.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][infra] Remove driverfixes/ocata branch

2018-09-17 Thread Sean McGinnis
Hello Cinder and Infra teams. Cinder needs some help from infra or some
pointers on how to proceed.

tl;dr - The openstack/cinder repo had a driverfixes/ocata branch created for
fixes that no longer met the more restrictive phase II stable policy criteria.
Extended maintenance has changed that and we want to delete driverfixes/ocata
to make sure patches are going to the right place.

Background
--
Before the extended maintenance changes, the Cinder team found a lot of vendors
were maintaining their own forks to keep backported driver fixes that we were
not allowing upstream due to the stable policy being more restrictive for older
(or deleted) branches. We created the driverfixes/* branches as a central place
for these to go so distros would have one place to grab these fixes, if they
chose to do so.

This has worked great IMO, and we do occasionally still have things that need
to go to driverfixes/mitaka and driverfixes/newton. We had also pushed a lot of
fixes to driverfixes/ocata, but with the changes to stable policy with extended
maintenance, that is no longer needed.

Extended Maintenance Changes

With things being somewhat relaxed with the extended maintenance changes, we
are now able to backport bug fixes to stable/ocata that we couldn't before and
we don't have to worry as much about that branch being deleted.

I had gone through and identified all patches backported to driverfixes/ocata
but not stable/ocata and cherry-picked them over to get the two branches in
sync. The stable/ocata branch should now be identical to, or ahead of, driverfixes/ocata
and we want to make sure nothing more gets accidentally merged to
driverfixes/ocata instead of the official stable branch.

Plan

We would now like to have the driverfixes/ocata branch deleted so there is no
confusion about where backports should go and we don't accidentally get these
out of sync again.

Infra team, please delete this branch or let me know if there is a process
somewhere I should follow to have this removed.

Thanks!
Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [election][tc]Question for candidates about global reachout

2018-09-17 Thread Sean McGinnis
> 
> > I also have technical questions about 'wechat' (like how do you
> > use it without a smartphone?) and the relevance of tools we
> > currently use, but this will open Pandora's box, and I'd rather
> > not spend my energy on closing that box right now :D
> 
> Not that I was planning on running it myself, but I did look into
> the logistics. Apparently there is at least one free/libre open
> source wechat client under active development but you still need to
> use a separate mobile device to authenticate your client's
> connection to wechat's central communication service. By design, it
> appears this is so that you can't avoid reporting your physical
> location (it's been suggested this is to comply with government
> requirements for tracking citizens participating in potentially
> illegal discussions).

This matches my experience. There are one or two desktop clients out there,
but the only way to log in to them is to log in to the phone app and then use
it to scan a QR-code-like image to authenticate. As far as I know, there is no
way to use WeChat without installing a smartphone app.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-30 and R-29, September 10-21

2018-09-08 Thread Sean McGinnis
On Sat, Sep 08, 2018 at 09:54:33AM +0900, Trinh Nguyen wrote:
> Hi,
> 
> Thanks for the summary.
> 
> I just added Searchlight to the Stein deliverable [1]. One concern is we
> moved our projects to Storyboard last week, do I have to change the project
> file to reflect that and how?
> 
> Thanks,
> 
> [1] https://review.openstack.org/#/c/600889/
> 

Hey Trinh,

The deliverable should be switched over to reflect the change to use
storyboard. Here is an example of a patch that did that for another
deliverable:

https://review.openstack.org/#/c/553900/1/deliverables/_independent/reno.yaml

So once you get the storyboard ID, you just swap out the "launchpad:" line for
a "storyboard:" one.
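As an illustration (the project name and numeric ID below are invented, not Searchlight's real values), the deliverable edit is just a one-line swap:

```yaml
# deliverables/stein/example-project.yaml -- illustrative fragment
# before:
#   launchpad: example-project
# after:
storyboard: 1095
```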

Jeremy's idea of supporting the names from the other thread seems like a good
idea to me, so I will have to take a look at that proposal.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-30 and R-29, September 10-21

2018-09-07 Thread Sean McGinnis
Here we go again! The Stein cycle will be slightly longer than past cycles. In
case you haven't seen it yet, please take a look over the schedule for this
release:

https://releases.openstack.org/stein/schedule.html

Development Focus
-

Focus should be on optimizing the time at the PTG and following up after the
event to jump start Stein development.

General Information
---

All teams should review their release liaison information and make sure it is
up to date [1].

[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons

While reviewing liaisons, this would also be a good time to make sure your
declared release model matches the project's plans for Stein (e.g. [2]). This
should be done prior to the first milestone and can be done by proposing a
change to the Stein deliverable file for the project(s) affected [3].

[2] https://github.com/openstack/releases/blob/e0a63f7e896abdf4d66fb3ebeaacf4e17f688c38/deliverables/queens/glance.yaml#L5
[3] http://git.openstack.org/cgit/openstack/releases/tree/deliverables/stein

Now would be a good time to start brainstorming Forum topics while some of the
PTG discussions are fresh. Just a couple months until the Summit and Forum in
Berlin.

Upcoming Deadlines & Dates
--

Stein-1 milestone: October 25  (R-24 week)
Forum at OpenStack Summit in Berlin: November 13-15

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][python3] week 4 update

2018-09-04 Thread Sean McGinnis
> 
> +---------+------+-------+----------------------+----------+
> | Team    | Open | Total | Status               | Champion |
> +---------+------+-------+----------------------+----------+
> | cinder  |    0 |     0 | not started, 6 repos |          |
> +---------+------+-------+----------------------+----------+
> 
> == Next Steps ==
> 
> If your team shows up in the above list as "not started" please let
> us know when you are ready to begin the transition. As you can see,
> it involves quite a few patches for some teams and it will be better
> to propose those early in the cycle during a relatively quiet period,
> rather than waiting until a time when having large batches of changes
> proposed together disrupts work.
> 

Jay has been out lately, so I will speak for Cinder for now. We should be at a
good point to proceed with the transition.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Rocky is officially released

2018-08-31 Thread Sean McGinnis
The following was sent out yesterday to the openstack-announce mailing list.

Thank you to everyone involved in the Rocky development cycle. Truly a lot has
happened since the snowpocalypse. On to Stein (and hopefully a less chaotic
PTG)!

--

Hello OpenStack community,

I'm excited to announce the final releases for the components of OpenStack
Rocky, which conclude the Rocky development cycle.

You will find a complete list of all components, their latest versions, and
links to individual project release notes documents listed on the new release
site.

  https://releases.openstack.org/rocky/

Congratulations to all of the teams who have contributed to this release!

Our next production cycle, Stein, has already started. We will meet in Denver,
Colorado, USA September 10-14 at the Project Team Gathering to plan the work
for the upcoming cycle. I hope to see you there!

Thanks,
Sean McGinnis and the whole Release Management team

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-30 Thread Sean McGinnis
> >
> > Yeah it's already on the PTG agenda [1][2]. I started the thread because I
> > wanted to get the ball rolling as early as possible, and with people that
> > won't attend the PTG and/or the Forum, to weigh in on not only the known
> > issues with cross-cell migration but also the things I'm not thinking about.
> >
> > [1] https://etherpad.openstack.org/p/nova-ptg-stein
> > [2] https://etherpad.openstack.org/p/nova-ptg-stein-cells
> >
> > --
> >
> > Thanks,
> >
> > Matt
> >
> 
> Should we also add the topic to the Thursday Cinder-Nova slot in case
> there are some questions where the Cinder team can assist?
> 
> Cheers,
> Gorka.
> 

Good idea. That will be a good time for the teams to circle back, so that if
any Cinder needs come up we still have time to talk them through and get work
started.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][nova] Nominating melwitt for nova stable core

2018-08-28 Thread Sean McGinnis
On Tue, Aug 28, 2018 at 03:26:02PM -0500, Matt Riedemann wrote:
> I hereby nominate Melanie Witt for nova stable core. Mel has shown that she
> knows the stable branch policy and is also an active reviewer of nova stable
> changes.
> 
> +1/-1 comes from the stable-maint-core team [1] and then after a week with
> no negative votes I think it's a done deal. Of course +1/-1 from existing
> nova-stable-maint [2] is also good feedback.
> 
> [1] https://review.openstack.org/#/admin/groups/530,members
> [2] https://review.openstack.org/#/admin/groups/540,members
> 

+1 from me.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-25 Thread Sean McGinnis
On Fri, Aug 24, 2018 at 04:20:21PM -0500, Matt Riedemann wrote:
> On 8/20/2018 10:29 AM, Matthew Booth wrote:
> > Secondly, is there any reason why we shouldn't just document then you
> > have to delete snapshots before doing a volume migration? Hopefully
> > some cinder folks or operators can chime in to let me know how to back
> > them up or somehow make them independent before doing this, at which
> > point the volume itself should be migratable?
> 
> Coincidentally the volume migration API never had API reference
> documentation. I have that here now [1]. It clearly states the preconditions
> to migrate a volume based on code in the volume API. However, volume
> migration is admin-only by default and retype (essentially like resize) is
> admin-or-owner so non-admins can do it and specify to migrate. In general I
> think it's best to have preconditions for *any* API documented, so anything
> needed to perform a retype should be documented in the API, like that the
> volume can't have snapshots.

That's where things get tricky though. There aren't really preconditions we
can state as a blanket rule for the retype API.

A retype can do a lot of different things, all dependent on what type you are
coming from and trying to go to. There are some retypes where all it does is
enable vendor flag ``foo`` on the volume with no change in any other state.
Then there are other retypes (using --migration-policy on-demand) that completely
move the volume from one backend to another one, copying every block along the
way from the original to the new volume. It really depends on what types you
are trying to retype to.

> 
> [1] https://review.openstack.org/#/c/595379/
> 
> -- 
> 
> Thanks,
> 
> Matt
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-08-24 Thread Sean McGinnis
> 
> After some prompting from gibi, that code has now been adjusted so
> that requirements.txt and tox.ini [1] make sure that the extract
> placement branch is installed into the test virtualenvs. So in the
> gate the unit and functional tests pass. Other jobs do not because
> of [1].
> 
> In the intervening time I've taken that code, built a devstack that
> uses a nova-placement-api wsgi script that uses nova.conf and the
> extracted placement code. It runs against the nova-api database.
> 
> Created a few servers. Worked.
> 

Excellent!

> Then I switched the devstack@placement-unit unit file to point to
> the placement-api wsgi script, and configured
> /etc/placement/placement.conf to have a
> [placement_database]/connection of the nova-api db.
> 
> Created a few servers. Worked.
> 
> Thanks.
> 
> [1] As far as I can tell a requirements.txt entry of
> 
> -e git+https://github.com/cdent/placement-1.git@cd/make-it-work#egg=placement
> 
> will install just fine with 'pip install -r requirements.txt', but
> if I do 'pip install nova' and that line is in requirements.txt it
> does not work. This means I had to change tox.ini to have a deps
> setting of:
> 
> deps = -r{toxinidir}/test-requirements.txt
>-r{toxinidir}/requirements.txt
> 
> to get the functional and unit tests to build working virtualenvs.
> That this is not happening in the dsvm-based zuul jobs means that the
> tests can't run or pass. What's going on here? Ideas?

Just conjecture on my part, but I know we have it documented somewhere that URL
paths to requirements are not allowed. Maybe we do something to actively
prevent that?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-jenkins][Release-job-failures] Release of openstack/python-jenkins failed

2018-08-24 Thread Sean McGinnis
See below for links to a release job failure for python-jenkins.

This was a ReadTheDocs publishing job. It appears to have failed due to the
necessary steps missing from this earlier post:

http://lists.openstack.org/pipermail/openstack-dev/2018-August/132836.html


- Forwarded message from z...@openstack.org -

Date: Fri, 24 Aug 2018 14:33:25 +
From: z...@openstack.org
To: release-job-failu...@lists.openstack.org
Subject: [Release-job-failures] Release of openstack/python-jenkins failed
Reply-To: openstack-dev@lists.openstack.org

Build failed.

- trigger-readthedocs-webhook http://logs.openstack.org/c4/c473b0af94342b269593dd24e5093d33a94b5046/release/trigger-readthedocs-webhook/cec87fd/ : FAILURE in 1m 49s
- release-openstack-python http://logs.openstack.org/c4/c473b0af94342b269593dd24e5093d33a94b5046/release/release-openstack-python/68b356f/ : SUCCESS in 4m 03s
- announce-release http://logs.openstack.org/c4/c473b0af94342b269593dd24e5093d33a94b5046/release/announce-release/04fd7c3/ : SUCCESS in 4m 10s
- propose-update-constraints http://logs.openstack.org/c4/c473b0af94342b269593dd24e5093d33a94b5046/release/propose-update-constraints/3eaf094/ : SUCCESS in 2m 08s

___
Release-job-failures mailing list
release-job-failu...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures

- End forwarded message -

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-0, August 27 - 31

2018-08-23 Thread Sean McGinnis
> >
> > We are still missing releases for the following tempest plugins. Some are
> > pending getting pypi and release jobs set up, but please try to prioritize
> > getting these done as soon as possible.
> >
> > barbican-tempest-plugin
> > blazar-tempest-plugin
> > cloudkitty-tempest-plugin
> > congress-tempest-plugin
> > ec2api-tempest-plugin
> > magnum-tempest-plugin
> > mistral-tempest-plugin
> > monasca-kibana-plugin
> > monasca-tempest-plugin
> > murano-tempest-plugin
> > networking-generic-switch-tempest-plugin
> > oswin-tempest-plugin
> > senlin-tempest-plugin
> > telemetry-tempest-plugin
> > tripleo-common-tempest-plugin
> 
> To speak for the tripleo-common-tempest-plugin, it's currently not
> used and there aren't any tests so I don't think it's in a spot for
> it's first release during Rocky. I'm not sure the current status of
> this effort so it'll be something we'll need to raise at the PTG.
> 

Thanks, Alex. Odd that a repo was created with no tests.

I think the goal was to split out in-repo tempest tests, not to ensure that
every project has one whether they need it or not. I wonder if we should
"retire" this repo until it is actually needed.

I will propose a patch to the releases repo to drop the deliverable file at
least. That will keep it from showing up in our list of unreleased repos.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-0, August 27 - 31

2018-08-23 Thread Sean McGinnis
> >
> > We are still missing releases for the following tempest plugins. Some are
> > pending getting pypi and release jobs set up, but please try to prioritize
> > getting these done as soon as possible.
> >
> > barbican-tempest-plugin
> > blazar-tempest-plugin
> > cloudkitty-tempest-plugin
> > congress-tempest-plugin
> > ec2api-tempest-plugin
> > magnum-tempest-plugin
> > mistral-tempest-plugin
> > monasca-kibana-plugin
> > monasca-tempest-plugin
> > murano-tempest-plugin
> > networking-generic-switch-tempest-plugin
> > oswin-tempest-plugin
> > senlin-tempest-plugin
> > telemetry-tempest-plugin
> > tripleo-common-tempest-plugin
> > trove-tempest-plugin
> > watcher-tempest-plugin
> > zaqar-tempest-plugin
> 
> tempest-horizon is missing from the list. horizon team needs to
> release tempest-horizon.
> It does not follow the naming convention so it seems to have been missed
> from the list.
> 
> Thanks,
> Akihiro Motoki (amotoki)
> 

Ah, good catch Akihiro, thanks!

If it can be done quickly, before the first release might be a good time to
update the package name to match the convention used elsewhere. But we are
running short on time, and there is probably more involved in doing that than
just updating the package name.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-0, August 27 - 31

2018-08-23 Thread Sean McGinnis
This is the final countdown email for the Rocky development cycle. Thanks to
everyone involved in the Rocky release! 

Development Focus
-

Teams attending the PTG should be preparing for those discussions and capturing
information in the etherpads:

https://wiki.openstack.org/wiki/PTG/Stein/Etherpads

General Information
---

The release team plans on doing the final Rocky release on 29 August. We will
re-tag the last commit used for the final RC using the final version number.
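Mechanically, the final release is just a second tag pointing at the commit the last RC already points at. A toy illustration in a throwaway repository (version numbers are made up; real OpenStack release tags are signed and created via the releases repo tooling, not by hand):

```shell
set -e
# Throwaway repo: the final release tag dereferences to the same commit
# as the last release candidate tag. Version numbers are illustrative.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo rocky > notes.txt
git add notes.txt
git commit -qm "last commit of the cycle"

git tag -a 13.0.0.0rc2 -m "final release candidate"
# "Re-tag the last commit used for the final RC" with the final version:
git tag -a 13.0.0 '13.0.0.0rc2^{}' -m "final release"

# Both tags dereference to the same commit:
git rev-parse '13.0.0.0rc2^{}' '13.0.0^{}'
```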

If you have not already done so, now would be a good time to take a look at the
Stein schedule and start planning team activities:

https://releases.openstack.org/stein/schedule.html

Actions
-

PTLs and release liaisons should watch for the final release patch from the
release team. While not required, we would appreciate having an ack from each
team before we approve it on the 29th.

We are still missing releases for the following tempest plugins. Some are
pending getting pypi and release jobs set up, but please try to prioritize
getting these done as soon as possible.

barbican-tempest-plugin
blazar-tempest-plugin
cloudkitty-tempest-plugin
congress-tempest-plugin
ec2api-tempest-plugin
magnum-tempest-plugin
mistral-tempest-plugin
monasca-kibana-plugin
monasca-tempest-plugin
murano-tempest-plugin
networking-generic-switch-tempest-plugin
oswin-tempest-plugin
senlin-tempest-plugin
telemetry-tempest-plugin
tripleo-common-tempest-plugin
trove-tempest-plugin
watcher-tempest-plugin
zaqar-tempest-plugin

Upcoming Deadlines & Dates
--

Final RC deadline: August 23
Rocky Release: August 29
Cycle trailing RC deadline: August 30
Stein PTG: September 10-14
Cycle trailing Rocky release: November 28

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][oslo][release][requirements] FFE request for castellan

2018-08-23 Thread Sean McGinnis
> > > 
> > > I've approved it for a UC only bump
> > > 
> 
> We are still waiting on https://review.openstack.org/594541 to merge,
> but I already voted and noted that it was FFE approved.
> 
> -- 
> Matthew Thode (prometheanfire)

And I have now approved the u-c update. We should be all set now.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

2018-08-23 Thread Sean McGinnis
On Wed, Aug 22, 2018 at 08:23:41PM -0500, Matt Riedemann wrote:
> Hi everyone,
> 
> I have started an etherpad for cells topics at the Stein PTG [1]. The main
> issue in there right now is dealing with cross-cell cold migration in nova.
> 
> At a high level, I am going off these requirements:
> 
> * Cells can shard across flavors (and hardware type) so operators would like
> to move users off the old flavors/hardware (old cell) to new flavors in a
> new cell.
> 
> * There is network isolation between compute hosts in different cells, so no
> ssh'ing the disk around like we do today. But the image service is global to
> all cells.
> 
> Based on this, for the initial support for cross-cell cold migration, I am
> proposing that we leverage something like shelve offload/unshelve
> masquerading as resize. We shelve offload from the source cell and unshelve
> in the target cell. This should work for both volume-backed and
> non-volume-backed servers (we use snapshots for shelved offloaded
> non-volume-backed servers).
> 
> There are, of course, some complications. The main ones that I need help
> with right now are what happens with volumes and ports attached to the
> server. Today we detach from the source and attach at the target, but that's
> assuming the storage backend and network are available to both hosts
> involved in the move of the server. Will that be the case across cells? I am
> assuming that depends on the network topology (are routed networks being
> used?) and storage backend (routed storage?). If the network and/or storage
> backend are not available across cells, how do we migrate volumes and ports?
> Cinder has a volume migrate API for admins but I do not know how nova would
> know the proper affinity per-cell to migrate the volume to the proper host
> (cinder does not have a routed storage concept like routed provider networks
> in neutron, correct?). And as far as I know, there is no such thing as port
> migration in Neutron.
> 

Just speaking to iSCSI storage, I know some deployments do not route their
storage traffic. If this is the case, then both cells would need to have access
to the same subnet to still access the volume.

I'm also referring to the case where the migration is from one compute host to
another compute host, and not from one storage backend to another storage
backend.

I haven't gone through the workflow, but I thought shelve/unshelve could detach
the volume on shelving and reattach it on unshelve. In that workflow, assuming
the networking is in place to provide the connectivity, the nova compute host
would be connecting to the volume just like any other attach and should work
fine. The unknown or tricky part is making sure that there is the network
connectivity or routing in place for the compute host to be able to log in to
the storage target.

If it's the other scenario mentioned where the volume needs to be migrated from
one storage backend to another storage backend, then that may require a little
more work. The volume would need to be retype'd or migrated (storage migration)
from the original backend to the new backend.

Again, in this scenario at some point there needs to be network connectivity
between cells to copy over that data.

There is no storage-offloaded migration in this situation, so Cinder can't
currently optimize how that data gets from the original volume backend to the
new one. It would require a host copy of all the data on the volume (an often
slow and expensive operation) and it would require that the host doing the data
copy has access to both the original backend and the new backend.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-22 Thread Sean McGinnis
> 
> The solution is conceptually simple.  We add a new API microversion in
> Cinder that adds and optional parameter called "generic_keep_source"
> (defaults to False) to both migrate and retype operations.
> 
> This means that if the driver optimized migration cannot do the
> migration and the generic migration code is the one doing the migration,
> then, instead of our final step being to swap the volume id's and
> deleting the source volume, what we would do is to swap the volume id's
> and move all the snapshots to reference the new volume.  Then we would
> create a user message with the new ID of the volume.
> 

How would you propose to "move all the snapshots to reference the new volume"?
Most storage does not allow a snapshot to be moved from one volume to another.
really the only way a migration of a snapshot can work across all storage types
would be to incrementally copy the data from a source to a destination up to
the point of the oldest snapshot, create a new snapshot on the new volume, then
proceed through until all snapshots have been rebuilt on the new volume.
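As a toy model of that generic path (the function and data structures are invented for illustration, not Cinder code): walk the snapshots oldest-first, copy the data each one captured, and re-create the snapshot on the destination before moving on.

```python
def migrate_volume_with_snapshots(snapshots, live_data):
    """Rebuild a volume and its snapshot chain on a new backend.

    snapshots: chronological list of (name, data) pairs, where data is
    the volume's content at the moment that snapshot was taken.
    live_data: the volume's current content.
    Returns (new_volume_data, rebuilt_snapshots).
    """
    new_volume = b""
    rebuilt = []
    for name, data in snapshots:
        new_volume = data                   # host copy up to this point
        rebuilt.append((name, new_volume))  # snapshot on the new volume
    new_volume = live_data                  # finally copy the live data
    return new_volume, rebuilt
```

Every byte flows through whichever host performs the copy, which is why this is slow and expensive, and why that host needs connectivity to both backends.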


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-17 Thread Sean McGinnis
On Fri, Aug 17, 2018 at 12:47:10PM -0500, Ed Leafe wrote:
> On Aug 17, 2018, at 12:30 PM, Dan Smith  wrote:
> > 
> > Splitting it out to another repository within the compute umbrella (what
> > do we call it these days?) satisfies the _technical_ concern of not
> > being able to use placement without installing the rest of the nova code
> > and dependency tree. Artificially creating more "perceived" distance
> > sounds really political to me, so let's be sure we're upfront about the
> > reasoning for doing that if so :)
> 
> Characterizing the proposed separation as “artificial” seems to be quite 
> political in itself.
> 

Other than currently having a common set of interested people, is there
something about placement that makes it something that should be under the
compute umbrella?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-17 Thread Sean McGinnis
> 
> Has there been a discussion on record of how use of placement by cinder
> would affect "standalone" cinder (or manila) initiatives where there is a
> desire to be able to run cinder by itself (with no-auth) or just with
> keystone (where OpenStack style multi-tenancy is desired)?
> 
> Tom Barron (tbarron)
> 

A little bit. That would be one of the pieces that needs to be done if we were
to adopt it.

Just high-level brainstorming, but I think we would need something like what we
have now with tooz: if it is configured for it, it uses etcd for distributed
locking, and for single-node installs it just defaults to file locks.
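
A rough stdlib-only sketch of that fallback behavior. A real implementation
would go through tooz's coordinator (something like
coordination.get_coordinator(backend_url, member_id) plus get_lock()) rather
than this hand-rolled FileLock; only the single-node file-lock path is
actually implemented here, and the backend URL is an invented example:

```python
import fcntl
import os
import tempfile


class FileLock:
    """Minimal file-based lock, standing in for tooz's file driver."""

    def __init__(self, path):
        self.path = path
        self._fd = None

    def acquire(self):
        self._fd = os.open(self.path, os.O_CREAT | os.O_RDWR, 0o600)
        fcntl.flock(self._fd, fcntl.LOCK_EX)
        return True

    def release(self):
        fcntl.flock(self._fd, fcntl.LOCK_UN)
        os.close(self._fd)
        self._fd = None


def get_lock(name, backend_url=None):
    """Return a distributed lock if a backend is configured, else a file lock."""
    if backend_url:  # e.g. "etcd3://controller:2379" in a multi-node deploy
        raise NotImplementedError("distributed backend needs tooz + etcd")
    return FileLock(os.path.join(tempfile.gettempdir(), "%s.lock" % name))
```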


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-17 Thread Sean McGinnis
On Fri, Aug 17, 2018 at 10:59:47AM -0500, Ed Leafe wrote:
> On Aug 17, 2018, at 10:51 AM, Chris Dent  wrote:
> > 
> > One of the questions that has come up on the etherpad is about how
> > placement should be positioned, as a project, after the extraction.
> > The options are:
> > 
> > * A repo within the compute project
> > * Its own project, either:
> >  * working towards being official and governed
> >  * official and governed from the start
> 
> I would like to hear from the Cinder and Neutron teams, especially those who 
> were around when those compute sub-projects were split off into their own 
> projects. Did you feel that being independent of compute helped or hindered 
> you? And to those who are in those projects now, is there any sense that 
> things would be better if you were still part of compute?
> 

I wasn't around at the beginning of the separation, but I don't think Cinder
would be anything like it is today (you can decide if that's a good thing or
not) if it had remained a component of Nova.

> My opinion has been that Placement should have been separate from the start. 
> The longer we keep Placement inside of Nova, the more painful it will be to 
> extract, and hence the likelihood of that ever happening is greatly 
> diminished.

I have to agree with this statement.

> 
> 
> -- Ed Leafe
> 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-17 Thread Sean McGinnis
On Fri, Aug 17, 2018 at 04:51:10PM +0100, Chris Dent wrote:
> 
> [snip]
>
> One of the questions that has come up on the etherpad is about how
> placement should be positioned, as a project, after the extraction.
> The options are:
> 
> * A repo within the compute project
> * Its own project, either:
>   * working towards being official and governed
>   * official and governed from the start
> 
> [snip]
> 
> The outcome I'd like to see happen is the one that makes sure
> placement becomes useful to the most people and is worked on by the
> most people, as quickly as possible. If how it is arranged as a
> project will impact that, now is a good time to figure that out.
> 
> If you have thoughts about this, please share them in response.
> 

I do think this is important if we want placement to get wider adoption.

The subject of using placement in Cinder has come up, and since then I've had a
few conversations with people in and outside of that team. I really think until
placement is its own project outside of the nova team, there will be resistance
from some to adopt it.

This reluctance on having it part of Nova may be real or just perceived, but
with it within Nova it will likely be an uphill battle for some time convincing
other projects that it is a nicely separated common service that they can use.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Release][PTL] cycle-with-intermediary reminder

2018-08-17 Thread Sean McGinnis
Just reminding folks with deliverables following the cycle-with-intermediary
release model that next Thursday is the final deadline to get those out.

There are a handful of deliverables that have not done a release yet in Rocky.
If we do not get a release request from these teams we will need to force a
release so we can have a good point to create a stable/rocky branch.

There are also a few that have done a release this cycle but appear to have
merged more changes since then. For these deliverables, if not requested before
the final deadline, we will need to force the creation of the stable/rocky
branch from the last release.

Finally, we have a larger list than I would like to see of tempest plugins that
have not done a release. As a reminder, we need those tagged (but not branched)
to have a record of which version of the plugin was part of which release
cycle. This ensures the right plugin version can be matched to the version of
tempest in use, so the plugin interface stays compatible.

These plugins do require some steps to get things set up before doing the
release, so please keep that in mind when planning the time you will need. They
need to be registered on pypi and a publish-to-pypi job added in the
project-config repo before we will be able to process a release for them.

Please raise any questions here or in the #openstack-release channel.

Thanks!
Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Personal tool patterns in .gitignore cookiecutter

2018-08-16 Thread Sean McGinnis
On Thu, Aug 16, 2018 at 06:24:22PM +, Jeremy Stanley wrote:
> In response to some recent but misguided proposals from well-meaning
> contributors in various projects, I've submitted a change[*] for the
> openstack-dev/cookiecutter .gitignore template inserting a comment
> which recommends against including patterns related to personal
> choices of tooling (arbitrary editors, IDEs, operating systems...).
> It includes one suggestion for a popular alternative (creating a
> personal excludesfile specific to the tools you use), but there are
> of course multiple ways it can be solved.
> 
> This is not an attempt to set policy, but merely provides a
> recommended default for new repositories in hopes that projects can
> over time reduce some of the noise related to unwanted .gitignore
> additions. If it merges, projects who disagree with this default can
> of course modify or remove the comment at the top of the file as
> they see fit when bootstrapping content for a new repository.
> Projects with existing repositories on which they'd like to apply
> this can also easily copy the comment text or port the patch.
> 
> If there seems to be some consensus that this change is appreciated,
> I'll remove the WIP flag and propose similar changes to our other
> cookiecutters for consistency.
> 
> [*] https://review.openstack.org/592520
> -- 
> Jeremy Stanley

The comments match my personal preference, and I do see it is just advisory, so
it is not mandating any policy that must be followed by all projects. I think
it is a good comment to include if for no other reason than to potentially
inform folks that there are other ways to address this than copying and pasting
the same change to every repo.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-1, August 20-24

2018-08-16 Thread Sean McGinnis
The end is near!

Development Focus
-

Teams should be working on release critical bugs in preparation of the final
release candidate deadline this Thursday the 23rd.

Teams attending the PTG should also be preparing for those discussions and
capturing information in the etherpads:

https://wiki.openstack.org/wiki/PTG/Stein/Etherpads

General Information
---

Thursday, August 23 is the deadline for final Rocky release candidates. We will
then enter a quiet period until we tag the final release on August 29.

Actions
-

Watch for any translation patches coming through and merge them quickly. If
your project has a stable/rocky branch created, please make sure those patches
are also getting merged there. (Do not backport the ones from master)

Liaisons for projects with independent deliverables should import the release
history by preparing patches to openstack/releases.

Projects following the cycle-trailing model should be getting ready for the
cycle-trailing RC deadline coming up on August 30.

Please drop by #openstack-release with any questions or concerns about the
upcoming release.


Upcoming Deadlines & Dates
--

Final RC deadline: August 23
Rocky Release: August 29
Cycle trailing RC deadline: August 30
Cycle trailing Rocky release: November 28

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [releases][requirements][cycle-with-intermediary][cycle-trailing] requirements is going to branch stable/rocky at ~08-15-2018 2100Z

2018-08-15 Thread Sean McGinnis
On Tue, Aug 14, 2018 at 11:13:13AM -0500, Matthew Thode wrote:
> This is to warn and call out all those projects that do not have a
> stable/rocky branch yet.
> 
> If you are in the folloing list your project will need to realize that
> your master is testing against the requirements/constraints from stein,
> not rocky.  Any branching / tests you do will need to keep that in mind.
> 

I have just processed the branching request for the openstack/requirements
repo. Projects that have branched and have had the bot-proposed patch to update
the stable/rocky tox settings to use the stable/rocky upper constraints can now
approve those patches.

At the moment, requirements are matching between rocky and stein, but there's
usually only a small window where that is the case. If you have not branched
yet, be aware that at some point your testing could be running against the
wrong constraints.
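
For anyone checking their repo by hand, the bot-proposed patch is essentially a
one-line constraints pin in tox.ini. It typically looks something like the
following (the URL shape is the pattern from this era; verify against the
actual proposed patch for your repo):

```ini
# tox.ini on stable/rocky -- pin tests to the rocky upper constraints
[testenv]
deps =
  -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/rocky}
  -r{toxinidir}/requirements.txt
  -r{toxinidir}/test-requirements.txt
```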

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [cinder] backup restore in api v2

2018-08-15 Thread Sean McGinnis
On Wed, Aug 15, 2018 at 07:52:47AM -0500, Sean McGinnis wrote:
> On Wed, Aug 15, 2018 at 09:32:45AM +0200, Artem Goncharov wrote:
> > Hi all,
> > 
> > I have recently faced an interesting question: is there no backup restore
> > functionality in block-storage api v2? There is possibility to create
> > backup, but not to restore it according to
> > https://developer.openstack.org/api-ref/block-storage/v2/index.html#backups-backups.
> > Version v3 ref contain restore function. What is also interesting, that
> > cinderclient contain the restore function for v2. Is this just a v2
> > documentation bug (what I assume) or was it an unsupported function in v2?
> > 
> > Thanks,
> > Artem
> 
> Thanks for pointing that out Artem. That does appear to be a documentation 
> bug.
> The backup API has not changed, so v2 and v3 should be identical in that
> regard. We will need to update our docs to reflect that.
> 
> Sean

Ah, we just did a really good job of hiding it:

https://developer.openstack.org/api-ref/block-storage/v2/index.html#restore-backup

I see the formatting for that document is off so that it actually appears as a
section under the delete documentation. I will get that fixed.
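
For reference, the v2 restore call is a plain POST to
/v2/{project_id}/backups/{backup_id}/restore. A small sketch of how a client
might build that request; the endpoint and IDs below are placeholders, and
both body fields are optional (omitting them lets Cinder create a new volume
for the restore):

```python
import json


def build_restore_request(endpoint, project_id, backup_id,
                          volume_id=None, name=None):
    """Build the URL and JSON body for the Block Storage 'restore backup' call.

    The body has the form {"restore": {"volume_id": ..., "name": ...}}.
    """
    url = "%s/v2/%s/backups/%s/restore" % (endpoint, project_id, backup_id)
    restore = {}
    if volume_id:
        restore["volume_id"] = volume_id
    if name:
        restore["name"] = name
    return url, json.dumps({"restore": restore})
```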

> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [cinder] backup restore in api v2

2018-08-15 Thread Sean McGinnis
On Wed, Aug 15, 2018 at 09:32:45AM +0200, Artem Goncharov wrote:
> Hi all,
> 
> I have recently faced an interesting question: is there no backup restore
> functionality in block-storage api v2? There is possibility to create
> backup, but not to restore it according to
> https://developer.openstack.org/api-ref/block-storage/v2/index.html#backups-backups.
> Version v3 ref contain restore function. What is also interesting, that
> cinderclient contain the restore function for v2. Is this just a v2
> documentation bug (what I assume) or was it an unsupported function in v2?
> 
> Thanks,
> Artem

Thanks for pointing that out Artem. That does appear to be a documentation bug.
The backup API has not changed, so v2 and v3 should be identical in that
regard. We will need to update our docs to reflect that.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [releases][requirements][cycle-with-intermediary][cycle-trailing] requirements is going to branch stable/rocky at ~08-15-2018 2100Z

2018-08-14 Thread Sean McGinnis
On Tue, Aug 14, 2018 at 11:13:13AM -0500, Matthew Thode wrote:
> This is to warn and call out all those projects that do not have a
> stable/rocky branch yet.
> 
> If you are in the following list your project will need to realize that
> your master is testing against the requirements/constraints from stein,
> not rocky.  Any branching / tests you do will need to keep that in mind.
> 
> ansible-role-container-registry
> ansible-role-redhat-subscription
> ansible-role-tripleo-modify-image
> barbican-tempest-plugin
> blazar-tempest-plugin
> cinder-tempest-plugin

Point of clarification - *-tempest-plugin repos do not get branched. Releases
should be done to track the version for rocky, but stable branches are not
created.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][oslo][release] FFE request for castellan

2018-08-14 Thread Sean McGinnis
> On 08/10/2018 10:15 AM, Ade Lee wrote:
> > Hi all,
> > 
> > I'd like to request a feature freeze exception to get the following
> > change in for castellan.
> > 
> > https://review.openstack.org/#/c/575800/
> > 
> > This extends the functionality of the vault backend to provide
> > previously unimplemented functionality, so it should not break anyone.
> > 
> > The castellan vault plugin is used behind barbican in the barbican-
> > vault plugin.  We'd like to get this change into Rocky so that we can
> > release Barbican with complete functionality on this backend (along
> > with a complete set of passing functional tests).
> 
> This does seem fairly low risk since it's just implementing a function that
> previously raised a NotImplemented exception.  However, with it being so
> late in the cycle I think we need the release team's input on whether this
> is possible.  Most of the release FFE's I've seen have been for critical
> bugs, not actual new features.  I've added that tag to this thread so
> hopefully they can weigh in.
> 

As far as releases go, this should be fine. If this doesn't affect any other
projects and would just be a late merging feature, as long as the castellan
team has considered the risk of adding code so late and is comfortable with
that, this is OK.

Castellan follows the cycle-with-intermediary release model, so the final Rocky
release just needs to be done by next Thursday. I do see the stable/rocky
branch has already been created for this repo, so it would need to merge to
master first (technically stein), then get cherry-picked to stable/rocky.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release

2018-08-14 Thread Sean McGinnis
On Sun, Aug 12, 2018 at 05:41:20PM +0900, Ghanshyam Mann wrote:
> Hi All,
> 
> Rocky release is few weeks away and we all agreed to release Tempest plugin 
> with cycle-with-intermediary. Detail discussion are in ML [1] in case you 
> missed.
> 
> This is reminder to tag your project tempest plugins for Rocky release. You 
> should be able to find your plugins deliverable file under rocky folder in 
> releases repo[3].  You can refer cinder-tempest-plugin release as example. 
> 
> Feel free to reach to release/QA team for any help/query. 
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html  
> [2] https://review.openstack.org/#/c/590025/   
> [3] https://github.com/openstack/releases/tree/master/deliverables/rocky
> 
> -gmann
> 

It should also be noted that these repos will need to be set up with a release
job if they do not already have one. They will need either publish-to-pypi or
publish-to-pypi-python3, similar to what was done here:

https://review.openstack.org/#/c/587623/

And before that is done, please make sure the package is registered and set up
correctly on pypi following these instructions:

https://docs.openstack.org/infra/manual/creators.html#pypi

Requests to create a tempest plugin release will fail validation if this has
not been set up ahead of time.
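
For reference, the project-config side of this is usually just a small template
entry along the following lines (the repo name here is invented; see the linked
review above for a real example):

```yaml
# zuul.d/projects.yaml in openstack-infra/project-config
- project:
    name: openstack/example-tempest-plugin
    templates:
      - publish-to-pypi-python3
```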

Thanks,
Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL][TC] Stein Cycle Goals

2018-08-13 Thread Sean McGinnis
On Mon, Aug 13, 2018 at 04:50:40PM +, Fox, Kevin M wrote:
> Since the upgrade checking has not been written yet, now would be a good time 
> to unify them, so you upgrade check your openstack upgrade, not status check 
> nova, status check neutron, status check glance, status check cinder . ad 
> nauseam.
> 
> Thanks,
> Kevin
> 

That would be a good outcome of this. I think before we can have an overall
upgrade check, each service in use needs to have that mechanism in place to
perform its specific checks.

As long as the pattern of $project-status upgrade check is followed, it should
be very feasible to write a higher level tool that iterates through all
services and runs the checks.
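
Such a wrapper could be little more than a loop. A rough sketch; the service
list is deployment-specific, the per-service "$project-status" commands are
assumed to follow the goal's naming, and services without the tool installed
are simply skipped:

```python
import shutil
import subprocess

# Deployment-specific; which services are present varies per cloud.
SERVICES = ("nova", "cinder", "glance", "neutron")


def run_all_upgrade_checks(services=SERVICES):
    """Run each service's checker; return the worst exit code seen.

    Exit codes follow the goal document: 0 success, 1 warning, 2 failure.
    """
    worst = 0
    for svc in services:
        cmd = "%s-status" % svc
        if shutil.which(cmd) is None:
            print("SKIP: %s not installed" % cmd)
            continue
        rc = subprocess.call([cmd, "upgrade", "check"])
        print("%s: exit %d" % (svc, rc))
        worst = max(worst, rc)
    return worst
```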


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [PTL][TC] Stein Cycle Goals

2018-08-13 Thread Sean McGinnis
We now have two cycle goals accepted for the Stein cycle. I think both are very
beneficial goals to work towards, so personally I am very happy with where we
landed on this.

The two goals, with links to their full descriptions and nitty gritty details,
can be found here:

https://governance.openstack.org/tc/goals/stein/index.html

Goals
=
Here are some high level details on the goals.

Run under Python 3 by default (python3-first)
-
In Pike we had a goal for all projects to support Python 3.5. As a continuation
of that effort, and in preparation for the EOL of Python 2, we now want to look
at all of the ancillary things around projects and make sure that we are using
Python 3 everywhere except those jobs explicitly intended for testing Python 2
support.

This means all docs, linters, and other tools and utility jobs we use should be
run using Python 3.

https://governance.openstack.org/tc/goals/stein/python3-first.html
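
For most repos this amounts to small tox.ini and .zuul.yaml changes. An
illustrative sketch, not taken from any particular project:

```ini
# tox.ini -- run docs and linter environments under Python 3
[testenv:docs]
basepython = python3
commands = sphinx-build -W -b html doc/source doc/build/html

[testenv:pep8]
basepython = python3
commands = flake8
```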

Thanks to Doug Hellmann, Nguyễn Trí Hải, Ma Lei, and Huang Zhiping for
championing this goal.

Support Pre Upgrade Checks (upgrade-checkers)
-
One of the hot topics we've been discussing for some time at Forum and PTG
events has been making upgrades better. To that end, we want to add tooling for
each service to provide an "upgrade checker" tool that can check for various
known issues so we can either give operators some assurance that they are ready
to upgrade, or to let them know if some step was overlooked that will need to
be done before attempting the upgrade.

This goal follows the Nova `nova-status upgrade check` command precedent to
make it a consistent capability for each service. The checks should look for
things like missing or changed configuration options, incompatible object
states, or other conditions that could lead to failures upgrading that project.
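
A per-service checker typically boils down to a set of check functions and a
worst-result exit code, in the style of nova-status. A minimal illustration;
the two checks and the config keys they inspect are invented for the example:

```python
import enum


class Code(enum.IntEnum):
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2


def check_old_option(conf):
    # Invented example: warn if a removed option is still set.
    if "old_option" in conf:
        return Code.WARNING, "old_option was removed; please unset it"
    return Code.SUCCESS, "config ok"


def check_object_versions(conf):
    # Invented example: fail if data migrations were not run before upgrade.
    if conf.get("unmigrated_objects", 0):
        return Code.FAILURE, "run online data migrations first"
    return Code.SUCCESS, "all objects migrated"


def upgrade_check(conf, checks=(check_old_option, check_object_versions)):
    """Run all checks; the overall result is the worst result seen."""
    results = [check(conf) for check in checks]
    for code, detail in results:
        print("%s: %s" % (code.name, detail))
    return max(code for code, _ in results)
```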

More details can be found in the goal:

https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html

Thanks to Matt Riedemann for championing this goal.

Schedule

We hope to have all projects complete these goals by the week of March 4, 2019:

https://releases.openstack.org/stein/schedule.html

This is the same week as the Stein-3 milestone, as well as Feature Freeze and
client lib freeze.

Future Goals

We welcome any ideas for future cycle goals. Ideally these should be things
that can actually be accomplished within one development cycle and would have a
positive, and hopefully visible, impact for users and operators.

Feel free to pitch any ideas here on the mailing list or drop by the
#openstack-tc channel at any point.

Thanks!

--
Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][tricircle] FFE for python-tricircleclient

2018-08-10 Thread Sean McGinnis
This is a requirements FFE to raise the upper-constraints for
python-tricircleclient.

This client only has some requirements and CI changes merged, but it has not
done any releases during the rocky cycle. It is well past client lib freeze,
but as stated in our policy, we will need to force a final release so there is
a rocky version and these requirements and CI changes are in the stable/rocky
branch of the repo.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][karbor] FFE for python-karborclient

2018-08-10 Thread Sean McGinnis
This is requesting a requirements FFE to raise u-c for python-karborclient.

This client only has some requirements and CI changes merged, but it has not
done any releases during the rocky cycle. It is well past client lib freeze,
but as stated in our policy, we will need to force a final release so there is
a rocky version and these requirements and CI changes are in the stable/rocky
branch of the repo.

There is one caveat with this release in that the karbor service has not done a
release for rocky yet. If one is not done by the final cycle-with-intermediary
deadline, karbor will need to be excluded from the Rocky coordinated release.
This would include service and clients.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][docs] ACTION REQUIRED for projects using readthedocs

2018-08-09 Thread Sean McGinnis
Resending the below since several projects using ReadTheDocs appear to have
missed this. If your project publishes docs to ReadTheDocs, please follow these
steps to avoid job failures.

On Fri, Aug 03, 2018 at 02:20:40PM +1000, Ian Wienand wrote:
> Hello,
> 
> tl;dr : any projects using the "docs-on-readthedocs" job template
> to trigger a build of their documentation in readthedocs needs to:
> 
>  1) add the "openstackci" user as a maintainer of the RTD project
>  2) generate a webhook integration URL for the project via RTD
>  3) provide the unique webhook ID value in the "rtd_webhook_id" project
> variable
> 
> See
> 
>  
> https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-docs-on-readthedocs
> 
> --
> 
> readthedocs has recently updated their API for triggering a
> documentation build.  In the old API, anyone could POST to a known URL
> for the project and it would trigger a build.  This end-point has
> stopped responding and we now need to use an authenticated webhook to
> trigger documentation builds.
> 
> Since this is only done in the post and release pipelines, projects
> probably haven't had great feedback that current methods are failing
> and this may be a surprise.  To check your publishing, you can go to
> the zuul builds page [1] and filter by your project and the "post"
> pipeline to find recent runs.
> 
> There is now some setup required which can only be undertaken by a
> current maintainer of the RTD project.
> 
> In short; add the "openstackci" user as a maintainer, add a "generic
> webhook" integration to the project, find the last bit of the URL from
> that and put it in the project variable "rtd_webhook_id".
> 
> Luckily OpenStack infra keeps a team of highly skilled digital artists
> on retainer and they have produced a handy visual guide available at
> 
>   https://imgur.com/a/Pp4LH31
> 
> Once the RTD project is setup, you must provide the webhook ID value
> in your project variables.  This will look something like:
> 
>  - project:
> templates:
>   - docs-on-readthedocs
>   - publish-to-pypi
> vars:
>   rtd_webhook_id: '12345'
> check:
>   jobs:
>   ...
> 
> For actual examples; see pbrx [2] which keeps its config in tree, or
> gerrit-dash-creator which has its configuration in project-config [3].
> 
> Happy to help if anyone is having issues, via mail or #openstack-infra
> 
> Thanks!
> 
> -i
> 
> p.s. You don't *have* to use the jobs from the docs-on-readthedocs
> templates and hence add infra as a maintainer; you can setup your own
> credentials with zuul secrets in tree and write your playbooks and
> jobs to use the generic role [4].  We're always happy to discuss any
> concerns.
> 
> [1] https://zuul.openstack.org/builds.html
> [2] https://git.openstack.org/cgit/openstack/pbrx/tree/.zuul.yaml#n17
> [3] 
> https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml
> [4] https://zuul-ci.org/docs/zuul-jobs/roles.html#role-trigger-readthedocs
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-2, August 13-17

2018-08-09 Thread Sean McGinnis
Related to below, here is the current list of deliverables waiting for an RC1
release (as of August 9, 15:30 UTC):

barbican
ceilometer-powervm
cinder
congress-dashboard
congress
cyborg
designate-dashboard
designate
glance
heat
masakari-monitors
masakari
networking-bagpipe
networking-bgpvpn
networking-midonet
networking-odl
networking-ovn
networking-powervm
networking-sfc
neutron-dynamic-routing
neutron-fwaas
neutron-vpnaas
neutron
nova-powervm
nova
release-test
sahara-dashboard
sahara-image-elements
sahara


Today is the deadline, so please make sure you get in the RC release requests
soon.

Thanks!
Sean

On Thu, Aug 09, 2018 at 09:58:06AM -0500, Sean McGinnis wrote:
> Development Focus
> -
> 
> Teams should be working on any release critical bugs that would require 
> another
> RC before the final release, and thinking about plans for Stein.
> 
> General Information
> ---
> 
> Any cycle-with-milestones projects that missed the RC1 deadline should prepare
> an RC1 release as soon as possible.
> 
> After all of the cycle-with-milestone projects have branched we will branch
> devstack, grenade, and the requirements repos. This will effectively open them
> back up for Stein development, though the focus should still be on finishing 
> up
> Rocky until the final release.
> 
> Actions
> -
> 
> Watch for any translation patches coming through and merge them quickly.
> 
> If your project has a stable/rocky branch created, please make sure those
> patches are also merged there. Keep in mind there will need to be a final
> release candidate cut to capture any merged translations and critical bug 
> fixes
> from this branch.
> 
> Please also check for completeness in release notes and add any relevant
> "prelude" content. These notes are targetted for the downstream consumers of
> your project, so it would be great to include any useful information for those
> that are going to pick up and use or deploy the Queens version of your 
> project.
> 
> We also have the cycle-highlights information in the project deliverable 
> files.
> This one is targeted at marketing and other consumers that have typically been
> pinging PTLs every release asking for "what's new" in this release. If you 
> have
> not done so already, please add a few highlights for your team that would be
> useful for this kind of consumer.
> 
> This would be a good time for any release:independent projects to add the
> history for any releases not yet listed in their deliverable file. These files
> are under the deliverable/_independent directory in the openstack/releases
> repo.
> 
> If you have a cycle-with-intermediary release that has not done an RC yet,
> please do so as soon as possible. If we do not receive release requests for
> these repos soon we will be forced to create a release from the latest commit
> to create a stable/rocky branch. The release team would rather not be the ones
> initiating this release.
> 
> 
> Upcoming Deadlines & Dates
> --
> 
> Final RC deadline: August 23
> Rocky Release: August 29
> Stein PTG: September 10-14
> 
> -- 
> Sean McGinnis (smcginnis)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-2, August 13-17

2018-08-09 Thread Sean McGinnis
Development Focus
-

Teams should be working on any release critical bugs that would require another
RC before the final release, and thinking about plans for Stein.

General Information
---

Any cycle-with-milestones projects that missed the RC1 deadline should prepare
an RC1 release as soon as possible.

After all of the cycle-with-milestone projects have branched we will branch
devstack, grenade, and the requirements repos. This will effectively open them
back up for Stein development, though the focus should still be on finishing up
Rocky until the final release.

Actions
-

Watch for any translation patches coming through and merge them quickly.

If your project has a stable/rocky branch created, please make sure those
patches are also merged there. Keep in mind there will need to be a final
release candidate cut to capture any merged translations and critical bug fixes
from this branch.

Please also check for completeness in release notes and add any relevant
"prelude" content. These notes are targetted for the downstream consumers of
your project, so it would be great to include any useful information for those
that are going to pick up and use or deploy the Queens version of your project.

We also have the cycle-highlights information in the project deliverable files.
This one is targeted at marketing and other consumers that have typically been
pinging PTLs every release asking for "what's new" in this release. If you have
not done so already, please add a few highlights for your team that would be
useful for this kind of consumer.

This would be a good time for any release:independent projects to add the
history for any releases not yet listed in their deliverable file. These files
are under the deliverable/_independent directory in the openstack/releases
repo.

If you have a cycle-with-intermediary release that has not done an RC yet,
please do so as soon as possible. If we do not receive release requests for
these repos soon we will be forced to create a release from the latest commit
to create a stable/rocky branch. The release team would rather not be the ones
initiating this release.


Upcoming Deadlines & Dates
--

Final RC deadline: August 23
Rocky Release: August 29
Stein PTG: September 10-14

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-08 Thread Sean McGinnis
On Wed, Aug 08, 2018 at 05:15:26PM +, Sean McGinnis wrote:
> On Tue, Aug 07, 2018 at 05:27:06PM -0500, Monty Taylor wrote:
> > On 08/07/2018 05:03 PM, Akihiro Motoki wrote:
> > >Hi Cinder and API-SIG folks,
> > >
> > >During reviewing a horizon bug [0], I noticed the behavior of Cinder API
> > >3.0 was changed.
> > >Cinder introduced more strict schema validation for creating/updating
> > >volume encryption type
> > >during Rocky and a new micro version 3.53 was introduced[1].
> > >
> > >Previously, Cinder API like 3.0 accepts unused fields in POST requests
> > >but after [1] landed unused fields are now rejected even when Cinder API
> > >3.0 is used.
> > >In my understanding on the microversioning, the existing behavior for
> > >older versions should be kept.
> > >Is it correct?
> > 
> > I agree with your assessment that 3.0 was used there - and also that I would
> > expect the api validation to only change if 3.53 microversion was used.
> > 
> 
> I filed a bug to track this:
> 
> https://bugs.launchpad.net/cinder/+bug/1786054
> 

Sorry, between lack of attention to detail (lack of coffee?) and an incorrect
link, I think I went down the wrong rabbit hole.

The change was actually introduced in [0]. I have submitted [1] to allow the
additional parameters in the volume type encryption API. This was definitely an
oversight when we allowed that one through.

Apologies for the hassle this has caused.

[0] https://review.openstack.org/#/c/561140/
[1] https://review.openstack.org/#/c/590014/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-08 Thread Sean McGinnis
On Tue, Aug 07, 2018 at 05:27:06PM -0500, Monty Taylor wrote:
> On 08/07/2018 05:03 PM, Akihiro Motoki wrote:
> >Hi Cinder and API-SIG folks,
> >
> >During reviewing a horizon bug [0], I noticed the behavior of Cinder API
> >3.0 was changed.
> >Cinder introduced more strict schema validation for creating/updating
> >volume encryption type
> >during Rocky and a new micro version 3.53 was introduced[1].
> >
> >Previously, Cinder API like 3.0 accepts unused fields in POST requests
> >but after [1] landed unused fields are now rejected even when Cinder API
> >3.0 is used.
> >In my understanding on the microversioning, the existing behavior for
> >older versions should be kept.
> >Is it correct?
> 
> I agree with your assessment that 3.0 was used there - and also that I would
> expect the api validation to only change if 3.53 microversion was used.
> 

I filed a bug to track this:

https://bugs.launchpad.net/cinder/+bug/1786054

But something doesn't seem right from what I've seen. I've put up a patch to
add some extra unit testing around this. I expected some of those unit tests to
fail, but everything seemed happy and working the way it is supposed to with
prior to 3.53 accepting anything and 3.53 or later rejecting extra parameters.
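The intended behavior can be sketched with a few lines of Python. This is a simplified illustration only, not Cinder's actual schema code (the function and field names here are made up): strict rejection of unknown fields should only apply at microversion 3.53 or later.

```python
def validate_volume_create(body, microversion):
    """Validate a volume-create request body.

    Simplified illustration of microversion-gated strict checking, not
    Cinder's real implementation. Unknown fields are only rejected when
    the request microversion is 3.53 or later.
    """
    allowed = {"name", "size", "description", "volume_type", "metadata"}
    extra = set(body["volume"]) - allowed
    if extra and microversion >= (3, 53):
        raise ValueError("additional properties not allowed: %s" % sorted(extra))
    return True

body = {"volume": {"name": "New", "size": 1, "junk": "garbage"}}

# Pre-3.53: extra fields are silently accepted.
assert validate_volume_create(body, (3, 52))

# 3.53+: extra fields are rejected.
try:
    validate_volume_create(body, (3, 53))
except ValueError as exc:
    print(exc)  # additional properties not allowed: ['junk']
```

The key point is that the version check belongs on the strictness toggle, so requests pinned to an older microversion keep the lenient behavior they always had.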

Since that didn't work, I tried reproducing this against a running system using
curl. With no version specified (defaulting to the base 3.0 microversion)
creation succeeded:

curl -g -i -X POST \
  http://192.168.1.234/volume/v3/95ae21ce92a34b3c92601f3304ea0a46/volumes \
  -H "Accept: application/json" -H "Content-Type: application/json" \
  -H "User-Agent: python-cinderclient" -H "X-Auth-Token: $OS_TOKEN" \
  -d '{"volume": {"backup_id": null, "description": null, "multiattach": false,
  "source_volid": null, "consistencygroup_id": null, "snapshot_id": null,
  "size": 1, "name": "New", "imageRef": null, "availability_zone": null,
  "volume_type": null, "metadata": {}, "project_id": "testing",
  "junk": "garbage"}}'

I then tried specifying the microversion that introduces the strict schema
checking to make sure I was able to get the appropriate failure, which worked
as expected:

curl -g -i -X POST \
  http://192.168.1.234/volume/v3/95ae21ce92a34b3c92601f3304ea0a46/volumes \
  -H "Accept: application/json" -H "Content-Type: application/json" \
  -H "User-Agent: python-cinderclient" -H "X-Auth-Token: $OS_TOKEN" \
  -d '{"volume": {"backup_id": null, "description": null, "multiattach": false,
  "source_volid": null, "consistencygroup_id": null, "snapshot_id": null,
  "size": 1, "name": "New-mv353", "imageRef": null, "availability_zone": null,
  "volume_type": null, "metadata": {}, "project_id": "testing",
  "junk": "garbage"}}' -H "OpenStack-API-Version: volume 3.53"
HTTP/1.1 400 Bad Request
...

And to test boundary conditions, I then specified the microversion just prior
to the one that enabled strict checking:

curl -g -i -X POST \
  http://192.168.1.234/volume/v3/95ae21ce92a34b3c92601f3304ea0a46/volumes \
  -H "Accept: application/json" -H "Content-Type: application/json" \
  -H "User-Agent: python-cinderclient" -H "X-Auth-Token: $OS_TOKEN" \
  -d '{"volume": {"backup_id": null, "description": null, "multiattach": false,
  "source_volid": null, "consistencygroup_id": null, "snapshot_id": null,
  "size": 1, "name": "New-mv352", "imageRef": null, "availability_zone": null,
  "volume_type": null, "metadata": {}, "project_id": "testing",
  "junk": "garbage"}}' -H "OpenStack-API-Version: volume 3.52"
HTTP/1.1 202 Accepted

In all cases except the strict checking one, the volume was created
successfully even though the junk extra parameters ("project_id": "testing",
"junk": "garbage") were provided.

So I'm missing something here. Is it possible horizon is requesting the latest
API version and not defaulting to 3.0?

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-08 Thread Sean McGinnis
>  > > 
>  > > Previously, Cinder API like 3.0 accepts unused fields in POST requests
>  > > but after [1] landed unused fields are now rejected even when Cinder API 
>  > > 3.0 is used.
>  > > In my understanding on the microversioning, the existing behavior for 
>  > > older versions should be kept.
>  > > Is it correct?
>  > 
>  > I agree with your assessment that 3.0 was used there - and also that I 
>  > would expect the api validation to only change if 3.53 microversion was 
>  > used.
> 
> +1. As you know, neutron also implemented strict validation in Rocky but with 
> discovery via config option and extensions mechanism. Same way Cinder should 
> make it with backward compatible way till 3.53 version. 
> 

I agree. I _thought_ that was the way it was implemented, but apparently
something was missed.

I will try to look at this soon and see what would need to be changed to get
this behaving correctly. Unless someone else has the time and can beat me to
it, which would be very much appreciated.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally] ACTION REQUIRED for projects using readthedocs

2018-08-07 Thread Sean McGinnis
The recent release of rally failed the docs publishing to readthedocs. This
appears to be related to the actions required below.

The failure from the job can be found here:

http://logs.openstack.org/0c/0cd69c70492f800e0835da4de006fc292e43a5f1/release/trigger-readthedocs-webhook/e9de48a/job-output.txt.gz#_2018-08-07_21_10_16_898906

On Fri, Aug 03, 2018 at 02:20:40PM +1000, Ian Wienand wrote:
> Hello,
> 
> tl;dr : any projects using the "docs-on-readthedocs" job template
> to trigger a build of their documentation in readthedocs needs to:
> 
>  1) add the "openstackci" user as a maintainer of the RTD project
>  2) generate a webhook integration URL for the project via RTD
>  3) provide the unique webhook ID value in the "rtd_webhook_id" project
> variable
> 
> See
> 
>  
> https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-docs-on-readthedocs
> 
> --
> 
> readthedocs has recently updated their API for triggering a
> documentation build.  In the old API, anyone could POST to a known URL
> for the project and it would trigger a build.  This end-point has
> stopped responding and we now need to use an authenticated webhook to
> trigger documentation builds.
> 
> Since this is only done in the post and release pipelines, projects
> probably haven't had great feedback that current methods are failing
> and this may be a surprise.  To check your publishing, you can go to
> the zuul builds page [1] and filter by your project and the "post"
> pipeline to find recent runs.
> 
> There is now some setup required which can only be undertaken by a
> current maintainer of the RTD project.
> 
> In short; add the "openstackci" user as a maintainer, add a "generic
> webhook" integration to the project, find the last bit of the URL from
> that and put it in the project variable "rtd_webhook_id".
> 
> Luckily OpenStack infra keeps a team of highly skilled digital artists
> on retainer and they have produced a handy visual guide available at
> 
>   https://imgur.com/a/Pp4LH31
> 
> Once the RTD project is setup, you must provide the webhook ID value
> in your project variables.  This will look something like:
> 
>  - project:
> templates:
>   - docs-on-readthedocs
>   - publish-to-pypi
> vars:
>   rtd_webhook_id: '12345'
> check:
>   jobs:
>   ...
> 
> For actual examples; see pbrx [2] which keeps its config in tree, or
> gerrit-dash-creator which has its configuration in project-config [3].
> 
> Happy to help if anyone is having issues, via mail or #openstack-infra
> 
> Thanks!
> 
> -i
> 
> p.s. You don't *have* to use the jobs from the docs-on-readthedocs
> templates and hence add infra as a maintainer; you can setup your own
> credentials with zuul secrets in tree and write your playbooks and
> jobs to use the generic role [4].  We're always happy to discuss any
> concerns.
> 
> [1] https://zuul.openstack.org/builds.html
> [2] https://git.openstack.org/cgit/openstack/pbrx/tree/.zuul.yaml#n17
> [3] 
> https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml
> [4] https://zuul-ci.org/docs/zuul-jobs/roles.html#role-trigger-readthedocs
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-senlinclient][release][requirements] FFE for python-senlinclient 1.8.0

2018-08-07 Thread Sean McGinnis
On Tue, Aug 07, 2018 at 03:25:39PM -0500, Sean McGinnis wrote:
> Added requirements tag to the subject since this is a requirements FFE.
> 
> On Tue, Aug 07, 2018 at 11:44:04PM +0800, liu.xuefe...@zte.com.cn wrote:
> > hi, all
> > 
> > 
> > I'd like to request an FFE to release 1.8.0(stable/rocky)
> >  for python-senlinclient.
> > 
> > The CURRENT_API_VERSION has been changed to "1.10", we need this release.
> > 

XueFeng, do you just need upper-constraints raised for this, or also the
minimum version? From that last sentence, I'm assuming you need to ensure only
1.8.0 is used for Rocky deployments.
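For reference, the two knobs involved would look something like this (illustrative lines only, not the actual review content):

```text
# upper-constraints.txt -- caps the version installed in CI and deployments:
python-senlinclient===1.8.0

# a consuming project's requirements.txt -- raises the floor so older
# releases are no longer acceptable:
python-senlinclient>=1.8.0
```

Raising only the upper constraint makes 1.8.0 available; raising the minimum as well is what forces Rocky deployments to actually use it.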


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-senlinclient][release][requirements] FFE for python-senlinclient 1.8.0

2018-08-07 Thread Sean McGinnis
Added requirements tag to the subject since this is a requirements FFE.

On Tue, Aug 07, 2018 at 11:44:04PM +0800, liu.xuefe...@zte.com.cn wrote:
> hi, all
> 
> 
> I'd like to request an FFE to release 1.8.0(stable/rocky)
>  for python-senlinclient.
> 
> The CURRENT_API_VERSION has been changed to "1.10", we need this release.
> 
> BestRegards,
> XueFeng


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Paste unmaintained

2018-08-06 Thread Sean McGinnis
On Mon, Aug 06, 2018 at 09:53:36AM -0500, Lance Bragstad wrote:
> 
> >
> 
> Morgan has been working through the migration since June, and it's been
> quite involved [0]. At one point he mentioned trying to write-up how he
> approached the migration for keystone. I understand that not every
> project structures their APIs the same way, but a high-level guide might
> be helpful for some if the long-term goal is to eventually move off of
> paste (e.g. how we approached it, things that tripped us up, how we
> prepared the code base for flask, et cetera).
> 
> I'd be happy to help coordinate a session or retrospective at the PTG if
> other groups find that helpful.
> 

I would find this very useful. I'm not sure the Cinder team has the resources
to tackle something like this immediately, but having a better understanding of
what would be involved would really help scope the work.

And if we have existing examples to follow and at least an outline of the steps
to do the work, it might be a good low-hanging-fruit type of thing for someone
to tackle if they are looking to get involved.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate

2018-08-06 Thread Sean McGinnis
> 
> I didn't have time to investigate these, but at least Glance was
> affected, and a patch was sent (as well as an async patch). None of them
> has been merged yet:
> 
> https://review.openstack.org/#/c/586050/
> https://review.openstack.org/#/c/586716/
> 
> That'd be ok if at least there was some reviews. It looks like nobody
> cares but Debian & Ubuntu people... :(
> 

Keep in mind that your priorities are different from everyone else's. There are
large parts of the community still working on Python 3.5 support (our
officially supported Python 3 version), as well as smaller teams overall
working on things like critical bugs.

Unless and until we declare Python 3.7 as our new target (which I don't think
we are ready to do yet), these kinds of patches will be on a best effort basis.

Making sure that duplicate patches are not pushed up will also help increase
the chances that they will eventually make it through as well.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][ptl] Missing and forced releases

2018-08-03 Thread Sean McGinnis
Today the release team reviewed the rocky deliverables and their releases done
so far this cycle. There are a few areas of concern right now.

Unreleased cycle-with-intermediary
==
There is a much longer list than we would like to see of
cycle-with-intermediary deliverables that have not done any releases so far in
Rocky. These deliverables should not wait until the very end of the cycle to
release so that pending changes can be made available earlier and there are no
last minute surprises.

For owners of cycle-with-intermediary deliverables, please take a look at what
you have merged that has not been released and consider doing a release ASAP.
We are not far from the final deadline for these projects, but it would still
be good to do a release ahead of that to be safe.

Deliverables that miss the final deadline will be at risk of being dropped from
the Rocky coordinated release.

Unreleased client libraries
==
The following client libraries have not done a release:

python-cloudkittyclient
python-designateclient
python-karborclient
python-magnumclient
python-searchlightclient*
python-senlinclient
python-tricircleclient

The deadline for client library releases was last Thursday, July 26. This
coming Monday the release team will force a release on HEAD for these clients.

* python-searchlightclient is currently planned on being dropped due to
  searchlight itself not having met the minimum of two milestone releases
  during the rocky cycle.

Missing milestone 3
===
The following projects missed tagging a milestone 3 release:

cinder
designate
freezer
mistral
searchlight

Following policy, a milestone 3 tag will be forced on HEAD for these
deliverables on Monday.

Freezer and searchlight missed previous milestone deadlines and will be dropped
from the Rocky coordinated release.

If there are any questions or concerns, please respond here or get ahold of
someone from the release management team in the #openstack-release channel.

--
Sean McGinnis (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-3, August 6-10

2018-08-03 Thread Sean McGinnis
On Fri, Aug 03, 2018 at 11:23:56AM -0500, Sean McGinnis wrote:
> -
> 

More information on deadlines since we appear to have some conflicting
information documented. According to the published release schedule:

https://releases.openstack.org/rocky/schedule.html#r-finalrc

we stated intermediary releases had to be done by the final RC date. So based
on that, cycle-with-intermediary projects have until August 20 to do their
final release.

Of course, doing before that deadline is highly encouraged to make sure there
are not any last minute problems to work through, if at all possible.

> 
> Upcoming Deadlines & Dates
> --
> 
> RC1 deadline: August 9
cycle-with-intermediary deadline: August 20

> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-3, August 6-10

2018-08-03 Thread Sean McGinnis
Development Focus
-

The Release Candidate (RC) deadline is this Thursday, the 9th. Work should be
focused on any release-critical bugs and wrapping up any remaining feature
work.

General Information
---

All cycle-with-milestones and cycle-with-intermediary projects should cut their
stable/rocky branch by the end of the week. This branch will track the Rocky
release.

Once stable/rocky has been created, master will be ready to switch to
Stein development. While master will no longer be frozen, please prioritize any
work necessary for completing Rocky plans. Please also keep in mind there will
be rocky patches competing with any new Stein work to make it through the gate.

Changes can be merged into stable/rocky as needed if deemed necessary for an
RC2. Once Rocky is released, stable/rocky will also be ready for any stable
point releases. Whether fixing something for another RC, or in preparation of a
future stable release, fixes must be merged to master first, then backported to
stable/rocky.

Actions
---

cycle-with-milestones deliverables should post an RC1 to openstack/releases
using the version format X.Y.Z.0rc1 along with branch creation from this point.
The deliverable changes should look something like:

  releases:
- projects:
- hash: 90f3ed251084952b43b89a172895a005182e6970
  repo: openstack/example
  version: 1.0.0.0rc1
branches:
  - name: stable/rocky
location: 1.0.0.0rc1

Other cycle deliverables (not cycle-with-milestones) will look the same, but
with your normal versioning.

And another reminder, please add what highlights you want for your project team
in the cycle highlights:

http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html


Upcoming Deadlines & Dates
--

RC1 deadline: August 9
Stein PTG: September 10-14

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation

2018-08-02 Thread Sean McGinnis
On Thu, Aug 02, 2018 at 05:56:23PM +, Jeremy Stanley wrote:
> On 2018-08-02 10:09:48 -0500 (-0500), Sean McGinnis wrote:
> [...]
> > I was able to find part of how that is implemented in jeepyb:
> > 
> > http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/notify_impact.py
> [...]
> 
> As for the nuts and bolts here, the script you found is executed
> from a Gerrit hook every time a change merges:
> 
> https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/files/gerrit/change-merged
> 

Thanks, that's at least a place I can start looking!

> Gerrit hooks are a bit fragile but also terribly opaque (the only
> way to troubleshoot a failure is a Gerrit admin pouring over a noisy
> log file on the server looking for a Java backtrace). If you decide
> to do something automated to open bugs/stories when changes merge, I
> recommend a Zuul job. We don't currently have a pipeline definition
> which generates a distinct build set for every merged change (the
> post and promote pipelines do supercedent queuing rather than
> independent queuing these days) but it would be easy to add one that
> does.
> 
> It _could_ also be a candidate for a Gerrit ITS plug-in (there's one
> for SB but not for LP as far as I know), but implementing this would
> mean spending more time in Java than most of us care to experience.

Interesting... I hadn't looked into Gerrit functionality enough to know about
these. Looks like this is probably what you are referring to?

https://gerrit.googlesource.com/plugins/its-storyboard/

It's been awhile since I did anything significant with Java, but that might be
an option. Maybe a fun weekend project at least to see what it would take to
create an its-launchpad plugin.

Thanks for the pointers!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation

2018-08-02 Thread Sean McGinnis
On Thu, Aug 02, 2018 at 05:59:20PM +0200, Radomir Dopieralski wrote:
> To be honest, I don't see much point in automatically creating bugs that
> nobody is going to look at. When you implement a new feature, it's up to
> you to make it available in Horizon and CLI and wherever else, since the
> people working there simply don't have the time to work on it. Creating a
> ticket will not magically make someone do that work for you. We are happy
> to assist with this, but that's it. Anything else is going to get added
> whenever someone has any free cycles, or it becomes necessary for some
> reason (like breaking compatibility). That's the current reality, and no
> automation is going to help with it.
> 

I don't think that's universally true with these projects. There are some on
these teams that are interested in implementing support for new features and
keeping existing things working right.

The reality for most of this then is new features won't be available and users
will move away from using something like Horizon for whatever else comes along
that will give them access to what they need. I know there are very few
developers focused on Cinder that also have the skillset to add functionality
to Horizon.

I agree ideally someone would work on things wherever they are needed, but I
think there is a barrier with skills and priorities to make that happen. And at
least in the case of Cinder, neither Horizon nor OpenStackClient is required.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][horizon][osc] ClientImpact tag automation

2018-08-02 Thread Sean McGinnis
I'm wondering if someone on the infra team can give me some pointers on how to
approach something, and looking for any general feedback as well.

Background
==
We've had things like the DocImpact tag that could be added to commit messages
that would tie into some automation to create a launchpad bug when that commit
merged. While we had a larger docs team and out-of-tree docs, I think this
really helped us make sure we didn't lose track of needed documentation
updates.

I was able to find part of how that is implemented in jeepyb:

http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/notify_impact.py

Current Challenge
=
Similar to the need to follow up with documentation, I've seen a lot of cases
where projects have added features or made other changes that impact downstream
consumers of that project. Most often, I've seen cases where something like
python-cinderclient adds some functionality, but it is on projects like Horizon
or python-openstackclient to proactively go out and discover those changes.

Not only just seeking out those changes, but also evaluating whether a given
change should have any impact on their project. So we've ended up in a lot of
cases where either new functionality isn't made available through these
interfaces until a cycle or two later, or probably worse, cases where something
is now broken with no one aware of it until an actual end user hits a problem
and files a bug.

ClientImpact Plan
=
I've run this by a few people and it seems to have some support. Of course I'm
open to any other suggestions.

What I would like to do is add a ClientImpact tag handling that could be added
very similarly to DocImpact. The way I see it working is it would work in much
the same way where projects can use this to add the tag to a commit message
when they know it is something that will require additional work in OSC or
Horizon (or others). Then when that commit merges, automation would create a
launchpad bug and/or Storyboard story, including a default set of client
projects. Perhaps we can find some way to make those impacted clients
configurable by source project, but that could be a follow-on optimization.
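As a rough sketch of what the commit-message handling could look like (loosely modeled on jeepyb's notify_impact.py DocImpact logic; the tag name, regex, and default project list here are assumptions, not an existing implementation):

```python
import re

# Hypothetical ClientImpact handling, loosely modeled on jeepyb's
# DocImpact logic in notify_impact.py. Names and defaults are made up.
CLIENTIMPACT_RE = re.compile(r'^ClientImpact:?\s*(.*)$',
                             re.IGNORECASE | re.MULTILINE)

# Projects that would get a bug/story filed by default; ideally this
# becomes configurable per source project later.
DEFAULT_IMPACTED_PROJECTS = ['horizon', 'python-openstackclient']

def find_client_impact(commit_message):
    """Return the ClientImpact note from a commit message, or None.

    Any text after the tag becomes the bug description, falling back to
    the commit's subject line when the tag has no trailing text.
    """
    match = CLIENTIMPACT_RE.search(commit_message)
    if not match:
        return None
    note = match.group(1).strip()
    return note or commit_message.splitlines()[0]

message = """Add volume revert-to-snapshot support

ClientImpact: New revert API needs CLI and Horizon support
"""

print(find_client_impact(message))
# New revert API needs CLI and Horizon support
```

The hook (or Zuul job) would then take that note and open a launchpad bug or Storyboard story against each project in the impacted list when the change merges.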

I am concerned that this could create some extra overhead for these projects.
But my hope is it would be a quick evaluation by a bug triager in those
projects where they can, hopefully, quickly determine if a change does not in
fact impact them and just close the ones they don't think require any follow on
work.

I do hope that this will save some time and speed things up overall for these
projects to be notified that there is something that needs their attention
without needing someone to take the time to actively go out and discover that.

Help Needed
===
From the bits I've found for the DocImpact handling, it looks like it should
not be too much effort to implement the logic to handle a ClientImpact flag.
But I have not been able to find all the moving parts that work together to
perform that automation.

If anyone has any background knowledge on how DocImpact is implemented and can
give me a few pointers, I think I should be able to take it from there to get
this implemented. Or if there is someone that knows this well and is interested
in working on some of the implementation, that would be very welcome too!

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][election] PTL nominations are now closed

2018-08-02 Thread Sean McGinnis
> > 
> > Packaging_Rpm has a late candidate (Dirk Mueller). We always have a few 
> > teams per cycle that miss the election call, that would fall under that.
> > 
+1 for appointing Dirk as PTL.

> > Trove had a volunteer (Dariusz Krol), but that person did not fill the 
> > requirements for candidates. Given that the previous PTL (Zhao Chao) 
> > plans to stay around to help onboarding the new contributors, I'd 
> > support appointing Dariusz.
> > 

I would be fine with this. But I also wonder if it might make sense to move
Trove out of governance while they go through this transition so they have more
leeway to evolve the project how they need to, with the expectation that if
things get to a good and healthy point we can quickly re-accept the project as
official.

> > I suspect Freezer falls in the same bucket as Packaging_Rpm and we 
> > should get a candidate there. I would reach out to caoyuan see if they 
> > would be interested in steeping up.
> > 
> > LOCI is also likely in the same bucket. However, given that it's a 
> > deployment project, if we can't get anyone to step up and guarantee some 
> > level of currentness, we should consider removing it from the "official" 
> > list.
> > 
> > Dragonflow is a bit in the LOCI case. It feels like a miss too, but if 
> > it's not, given that it's an add-on project that runs within Neutron, I 
> > would consider removing it from the "official" list if we can't find 
> > anyone to step up.
> > 

Omer has responded that the deadline was missed and he would like to continue
as PTL. I think that is acceptable (though it is unfortunate that it was missed).

> > For Winstackers and Searchlight, those are low-activity teams (18 and 13 
> > commits), which brings the question of PTL workload for feature-complete 
> > projects.
> 
> Even for feature-complete projects we need to know how to reach the
> maintainers, otherwise I feel like we would consider the project
> unmaintained, wouldn't we?
> 

I agree with Doug, I think there needs to be someone designated as the contact
point for issues with the project. We've seen other "stable" things suddenly go
unstable due to library updates or other external factors.

I don't think Thierry was suggesting there not be a PTL for these, just that
any potential PTL candidates can know the demands on their time to fill that
role _should_ be pretty light.

> > 
> > Finally, RefStack: I feel like this should be wrapped into an 
> > Interoperability SIG, since that project team is not producing 
> > "OpenStack", but helping fostering OpenStack interoperability. Having 
> > separate groups (Interop WG, RefStack) sounds overkill anyway, and with 
> > the introduction of SIGs we have been recentering project teams on 
> > upstream code production.
> > 
> 

I agree this has gotten to the point where it probably now makes more sense to
be owned by a SIG rather than being a full project team.




Re: [openstack-dev] [designate][stable] Stable Core Team Updates

2018-07-31 Thread Sean McGinnis
On Tue, Jul 31, 2018 at 04:15:16PM -0500, Matt Riedemann wrote:
> On 7/31/2018 12:39 PM, Graham Hayes wrote:
> > I would like to nominate 2 new stable core reviewers for Designate.
> > 
> > * Erik Olof Gunnar Andersson
> > * Jens Harbott (frickler)
> > 
> Looks OK to me on both.
> 
> -- 
> 
> Thanks,
> 
> Matt
> 

+1



[openstack-dev] [requirements][ffe] Critical bug found in python-cinderclient

2018-07-31 Thread Sean McGinnis
A critical bug has been found in python-cinderclient that is impacting both
horizon and python-openstackclient (at least).

https://bugs.launchpad.net/cinder/+bug/1784703

tl;dr: something new was added behind a microversion, but the support for it
was implemented incorrectly, such that nothing less than that new microversion
would be allowed. This patch addresses the issue:

https://review.openstack.org/587601

Once that lands we will need a new python-cinderclient release to unbreak
clients. We may want to blacklist python-cinderclient 4.0.0, but I think at
least just raising the upper-constraints should get things working again.
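The failure mode described above can be sketched in a few lines. This is a
hypothetical simplification, not the actual python-cinderclient code; the
microversion number 3.55 and the field names are made up for illustration:

```python
# Hypothetical sketch of the bug class described above -- not the real
# python-cinderclient code. A feature gated behind a new microversion
# must degrade gracefully on older servers, not reject them outright.

FEATURE_MIN = (3, 55)  # made-up microversion that introduced the feature


def build_request_broken(negotiated):
    # Buggy pattern: treats the feature's minimum version as the client's
    # minimum, so every server older than 3.55 is rejected outright.
    if negotiated < FEATURE_MIN:
        raise ValueError("microversion %s not supported" % (negotiated,))
    return {"microversion": negotiated, "new_field": "value"}


def build_request_fixed(negotiated):
    # Fixed pattern: always build the request; only include the new field
    # when the negotiated microversion actually supports it.
    request = {"microversion": negotiated}
    if negotiated >= FEATURE_MIN:
        request["new_field"] = "value"
    return request
```

With the broken pattern, any client talking to a server that negotiated a
microversion below the feature's minimum fails entirely, which matches the
breakage seen by horizon and python-openstackclient.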

Sean




Re: [openstack-dev] [release] Github release tarballs broken

2018-07-30 Thread Sean McGinnis
On Mon, Jul 30, 2018 at 12:02:50PM -0500, Ben Nemec wrote:
> According to https://bugs.launchpad.net/pbr/+bug/1742809 our github release
> tarballs don't actually work.  It seems to be a github-specific issue
> because I was unable to reproduce the problem with a tarball from
> releases.openstack.org.
> 
> My best guess is that github's release process differs from ours and doesn't
> work with our projects.  I see a couple of options for fixing that.  Either
> we figure out how to make Github's release process DTRT for our projects, or
> we figure out a way to override Github's release artifacts with our own.
> I'm not familiar enough with this to know which is a better (or even
> possible) option, so I'm sending this to solicit help.
> 
> Thanks.
> 
> -Ben
> 

From what I understand, GitHub will provide zip and tar.gz links for all source
whenever a tag is applied. It is a very basic operation and does not have any
kind of logic for correctly packaging whatever that deliverable is.

They even just label the links as "Source code".

I am not sure if there is any way to disable this behavior. One option is that
we could link from the tag notes to the official tarballs.openstack.org
location. We could also potentially use the GitHub API to upload a copy of
those artifacts to the GitHub release page. But there is always a mirroring
delay, and GitHub is just a mirror of our git repos, so using it as a
distribution point isn't really what we want.




Re: [openstack-dev] [release] Release countdown for week R-4, July 30 - August 3

2018-07-26 Thread Sean McGinnis
As the client deadline and milestone 3 day winds down here, I wanted to do a
quick check on where things stand before calling it a day.

This is according to script output, so I haven't looked into the details yet,
but the following cycle-with-intermediary deliverables have not had a release
done for Rocky yet:

aodh
bifrost
ceilometer
cloudkitty-dashboard
cloudkitty
ec2-api
ironic-python-agent
karbor-dashboard
karbor
kuryr-kubernetes
kuryr-libnetwork
magnum-ui
magnum
masakari-dashboard
monasca-kibana-plugin
monasca-log-api
monasca-notification
networking-hyperv
panko
python-cloudkittyclient
python-designateclient
python-karborclient
python-magnumclient
python-pankoclient
python-searchlightclient
python-senlinclient
python-tricircleclient
sahara-tests
senlin-dashboard
tacker-horizon
tacker
zun-ui
zun

Just a reminder that we will need to force a release on these in order to get a
final point to branch stable/rocky.

Taking a look at the ones that have done a release but have had more unreleased
commits since then, I'm also seeing several python-*client deliverables that
may be missing final releases.

Thanks,
Sean

On Thu, Jul 26, 2018 at 07:22:01AM -0500, Sean McGinnis wrote:
> 
> General Information
> ---
> 
> For deliverables following the cycle-with-milestones model, we are now (after
> the day I send this) past Feature Freeze. The focus should be on determining
> and fixing release-critical bugs. At this stage only bugfixes should be
> approved for merging in the master branches: feature work should only be
> considered if explicitly granted a Feature Freeze exception by the team PTL
> (after a public discussion on the mailing-list).
> 
> StringFreeze is now in effect, in order to let the I18N team do the
> translation work in good conditions. The StringFreeze is currently soft
> (allowing exceptions as long as they are discussed on the mailing-list and
> deemed worth the effort). It will become a hard StringFreeze on 9th of
> August along with the RC.
> 
> The requirements repository is also frozen, until all cycle-with-milestones
> deliverables have produced an RC1 and have their stable/rocky branches. If
> release critical library or client library releases are needed for Rocky past
> the freeze dates, you must request a Feature Freeze Exception (FFE) from the
> requirements team before we can do a new release to avoid having something
> released in Rocky that is not actually usable. This is done by posting to the
> openstack-dev mailing list with a subject line similar to:
> 
> [$PROJECT][requirements] FFE requested for $PROJECT_LIB
> 
> Include justification/reasoning for why a FFE is needed for this lib. If/when
> the requirements team OKs the post-freeze update, we can then process a new
> release. Including a link to the FFE in the release request is not required,
> but would be helpful in making sure we are clear to do a new release.
> 
> Note that deliverables that are not tagged for release by the appropriate
> deadline will be reviewed to see if they are still active enough to stay on
> the official project list.
> 



Re: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team

2018-07-26 Thread Sean McGinnis
On Thu, Jul 26, 2018 at 02:06:01PM -0500, Matt Riedemann wrote:
> On 7/26/2018 12:00 PM, Sean McGinnis wrote:
> > I wouldn't think so. Nothing is changing with the policy, so it is still of
> > interest to see which projects are following that. I don't believe the 
> > policy
> > was tied in any way with stable being an actual project team vs a SIG.
> 
> OK, then maybe as a separate issue, I would argue the tag is not maintained
> and therefore useless at best, or misleading at worst (for those projects
> that don't have it) and therefore should be removed.
> 

I'd be curious to hear more about why you don't think that tag is maintained.

For projects that assert they follow the stable policy, in the release process
we apply extra scrutiny to make sure nothing is being released on stable
branches that would appear to violate that policy.

Granted, we need to base most of that evaluation on the commit messages, so
it's certainly possible to phrase something in a misleading way that would not
raise any red flags for stable compliance, but if that happens, I would think
it would be unintentional and rare.



Re: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team

2018-07-26 Thread Sean McGinnis
> 
> Only question I have is will the stable:follows-policy governance tag [1]
> also be removed?
> 
> [1]
> https://governance.openstack.org/tc/reference/tags/stable_follows-policy.html
> 

I wouldn't think so. Nothing is changing with the policy, so it is still of
interest to see which projects are following that. I don't believe the policy
was tied in any way with stable being an actual project team vs a SIG.



Re: [openstack-dev] [trove] Considering the transfer of the project leadership

2018-07-26 Thread Sean McGinnis
> >
> > The good news is that recently a team from Samsung R&D Center in Krakow,
> > Poland joined us. They're building a product on OpenStack, have done
> > improvements on Trove (internally), and are now interested in contributing
> > to the community, starting by migrating the integration tests to the
> > tempest plugin. They're also willing and ready to take on the PTL role. The
> > only problem for their nomination may be that none of them have a patch
> > merged into the Trove projects. There are some in the trove-tempest-plugin
> > waiting for review, but given the activity level of the project, these
> > patches may need a long time to merge (and we're at Rocky milestone-3; I
> > think we could merge patches in the trove-tempest-plugin, as they're all
> > about testing).
> >
> > I also hope and welcome the other current active team members of Trove
> > could nominate themselves, in that way, we could get more discussions about
> > how we think about the direction of Trove.
> >

Great to see another group getting involved!

It's too bad there hasn't been enough time to build up some experience working
upstream and getting at least a few more commits under their belt, but this
sounds like things are heading in the right direction.

Since the new folks are still so new, if this works for you, I would recommend
continuing on as the official PTL for one more release, but with the
understanding that you would just be around to answer questions and give advice
to help the new team get up to speed. That should hopefully be a small time
commitment for you while still easing the transition.

Then hopefully by the T release it would not be an issue at all for someone
else to step up as the new PTL. Or even if things progress well, you could step
down as PTL at some point during the Stein cycle if someone is ready to take
over for you.

Just a suggestion to help ease the process.

Sean




[openstack-dev] [release] Release countdown for week R-4, July 30 - August 3

2018-07-26 Thread Sean McGinnis
Hey, I thought you guys might be interested in this release countdown info. ;)

Development Focus
-

The R-4 week is our one deadline free week between the lib freezes and Rocky-3
milestone and RC.

Work should be focused on fixing any requirements update issues, critical bugs,
and wrapping up feature work to prepare for the Release Candidate deadline (for
deliverables following the with-milestones model) or final Rocky releases (for
deliverables following the with-intermediary model) next Thursday, 9th of
August.

General Information
---

For deliverables following the cycle-with-milestones model, we are now (after
the day I send this) past Feature Freeze. The focus should be on determining
and fixing release-critical bugs. At this stage only bugfixes should be
approved for merging in the master branches: feature work should only be
considered if explicitly granted a Feature Freeze exception by the team PTL
(after a public discussion on the mailing-list).

StringFreeze is now in effect, in order to let the I18N team do the translation
work in good conditions. The StringFreeze is currently soft (allowing
exceptions as long as they are discussed on the mailing-list and deemed worth
the effort). It will become a hard StringFreeze on 9th of August along with the
RC.

The requirements repository is also frozen, until all cycle-with-milestones
deliverables have produced an RC1 and have their stable/rocky branches. If
release critical library or client library releases are needed for Rocky past
the freeze dates, you must request a Feature Freeze Exception (FFE) from the
requirements team before we can do a new release to avoid having something
released in Rocky that is not actually usable. This is done by posting to the
openstack-dev mailing list with a subject line similar to:

[$PROJECT][requirements] FFE requested for $PROJECT_LIB

Include justification/reasoning for why a FFE is needed for this lib. If/when
the requirements team OKs the post-freeze update, we can then process a new
release. Including a link to the FFE in the release request is not required,
but would be helpful in making sure we are clear to do a new release.

Note that deliverables that are not tagged for release by the appropriate
deadline will be reviewed to see if they are still active enough to stay on the
official project list.

Actions
-

stable/rocky branches should be created soon for all not-already-branched
libraries. You should expect 2-3 changes to be proposed for each: a .gitreview
update, a reno update (skipped for projects not using reno), and a tox.ini
constraints URL update*. Please review those in priority so that the branch can
be functional ASAP.

* The constraints update patches should not be approved until a stable/rocky
  branch has been created for openstack/requirements. Watch for an unfreeze
  announcement from the requirements team for this.
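As a rough illustration of that third change, the constraints URL update
typically switches the default upper-constraints reference from master to the
new stable branch. This is a hypothetical excerpt; the exact line and URL form
vary by project:

```ini
[testenv]
# Hypothetical excerpt -- exact form varies by project. After branching,
# the default upper-constraints file should point at stable/rocky rather
# than master:
install_command = pip install {opts} {packages} -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/rocky}
```

Remember these particular patches should be held until openstack/requirements
itself has a stable/rocky branch, as noted above.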

For cycle-with-intermediary deliverables, release liaisons should consider
releasing their latest version, and creating stable/rocky branches from it
ASAP.

For cycle-with-milestones deliverables, release liaisons should wait until R-3
week to create RC1 (to avoid having an RC2 created quickly after). Review
release notes for any missing information, and start preparing "prelude"
release notes as summaries of the content of the release so that those are
merged before the first release candidate.
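For reference, a prelude is just a regular reno note that uses the `prelude`
section. A minimal sketch, with a hypothetical filename and placeholder text,
might look like:

```yaml
# releasenotes/notes/rocky-prelude-0123456789abcdef.yaml (hypothetical name)
prelude: >
    The Rocky release of this project focused on stability improvements
    and on completing the community-wide release goals.
```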

*Release Cycle Highlights*
Along with the prelude work, it is also a good time to start planning what
highlights you want for your project team in the cycle highlights:

Background on cycle-highlights: 
http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html
Project Team Guide, Cycle-Highlights: 
https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights
anne [at] openstack.org/annabelleB on IRC is available if you need help
selecting or writing your highlights

For release-independent deliverables, release liaisons should check that their
deliverable file includes all the existing releases, so that they can be
properly accounted for in the releases.openstack.org website.

If your team has not done so, remember to file Rocky goal completion
information, as explained in:

https://governance.openstack.org/tc/goals/index.html#completing-goals


Upcoming Deadlines & Dates
--

PTL self-nomination ends: July 31
PTL election starts: August 1
RC1 deadline: August 9

--
Sean McGinnis (smcginnis)


