Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200:
> On 08/06/17 18:23 +0200, Flavio Percoco wrote:
> >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
> >>On 06.06.2017 18:08, Emilien Macchi wrote:
> >>>Another benefit is that confd will generate a configuration file when
> >>>the application starts. So if etcd is down *after* the app starts,
> >>>it shouldn't break a service restart, as long as we don't ask confd
> >>>to re-generate the config. That's good for operators who were concerned
> >>>that the infrastructure would rely on etcd. In that case, we
> >>>would only need etcd at the initial deployment (and during lifecycle
> >>>actions like upgrades, etc.).
> >>>
> >>>The downside is that in the case of containers, they would still have
> >>>a configuration file within the container, and the whole goal of this
> >>>feature was to externalize configuration data and stop having
> >>>configuration files.
> >>
> >>It doesn't look like a strict requirement. Those configs may (and should) be
> >>bind-mounted into containers, as hostpath volumes. Or am I missing
> >>something that *does* make embedded configs a strict requirement?
> >
> >mmh, one thing I liked about this effort was the possibility of no longer
> >bind-mounting config files into the containers. I'd rather find a way to
> >not need any bind mount and have the services fetch their configs themselves.
> 
> Probably sent too early!
> 
> If we're not talking about OpenStack containers running in a COE, I guess this
> is fine. For k8s-based deployments, I think I'd prefer having installers
> create configmaps directly and use those. The reason is that depending on
> files that live on the host is not ideal for these scenarios: it makes
> deployments inconsistent, and I don't want that.
> 
> Flavio
> 

I'm not sure I understand how a configmap is any different from what is
proposed with confd in terms of deployment-specific data being added to
a container before it launches. Can you elaborate on that?
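For reference, the configmap approach mentioned above would mean the
installer rendering the full config file and storing it in Kubernetes,
roughly like this (a hypothetical sketch; names and values are made up):

    # Hypothetical ConfigMap holding a rendered keystone.conf; the pod
    # mounts it at /etc/keystone instead of bind-mounting a host file.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: keystone-etc
    data:
      keystone.conf: |
        [DEFAULT]
        debug = true
        [database]
        connection = mysql+pymysql://keystone:secret@mariadb/keystone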

Doug



Re: [openstack-dev] [Openstack-operators] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-07 Thread Doug Hellmann
Excerpts from Emilien Macchi's message of 2017-06-07 16:42:13 +0200:
> On Wed, Jun 7, 2017 at 3:31 PM, Doug Hellmann  wrote:
>> 
>> On Jun 7, 2017, at 7:20 AM, Emilien Macchi  wrote:
>> 
>> On Tue, Jun 6, 2017 at 6:08 PM, Emilien Macchi  wrote:
>> 
>> Following-up the session that we had in Boston:
>> https://etherpad.openstack.org/p/BOS-forum-future-of-configuration-management
>> 
>> Here's an update on where we are and what is being done.
>> 
>> 
>> == Machine Readable Sample Config
>> 
>> Ben's spec has been merged: https://review.openstack.org/#/c/440835/
>> And also the code which implements it:
>> https://review.openstack.org/#/c/451081/
>> He's now working on the documentation on how to use the feature.
>> 
>> $ oslo-config-generator --namespace keystone --format yaml > keystone.yaml
>> 
>> Here's an example of the output for Keystone config: https://clbin.com/EAfFM
>> This feature was requested at the PTG, and it's already done!
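For readers without access to the paste, the machine-readable output
describes each option, per group, with its metadata. As a rough,
illustrative sketch only (not the exact schema produced by oslo.config):

    options:
      DEFAULT:
        opts:
        - name: debug
          type: boolean
          default: false
          help: If set to true, the logging level will be set to DEBUG.
      database:
        opts:
        - name: connection
          type: string
          default: null
          help: The SQLAlchemy connection string used to connect to the
            database.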
>> 
>> 
>> == Pluggable drivers for oslo.config
>> 
>> Doug's spec has been well written and the feedback from Summit and the
>> review was taken into account: https://review.openstack.org/#/c/454897/
>> It's now paused because we think we could use confd (with etcd driver)
>> to generate configuration files.
>> 
>> Building on the work Ben did for the Machine Readable Sample Config, we
>> could generate confd templates for all services (Keystone, Nova, etc.)
>> via a tool provided in oslo.config, covering all the options available
>> for a namespace.
>> 
>> 
>> I'm also wondering if we could use oslo-config-generator directly to
>> generate confd templates, with a new output format. So we would have ini,
>> yaml, json and confd.
>> The "confd" format would be useful when building the rpms that we ship in
>> containers.
>> The "yaml" format would be useful for installers to expose the options
>> directly to the user interface, so we know which parameters OpenStack
>> provides and can reuse the data to push it into etcd.
>> 
>> Would that make sense?
>> 
>> 
>> I did think about making oslo-config-generator also take the YAML file as
>> input instead of scanning plugins, and then including all the output formats
>> in a single command. I haven’t looked to see how much extra complexity
>> that would add.
> 
> Do you mean taking the YAML file that we generate with Ben's work
> (which would include the parameter values, added by some other
> tooling maybe)?
> 
> I see at least 2 options:
> 
> * Let installers feed etcd with the parameters by using the etcd
> namespace $CUSTOM_PREFIX + /project/section/parameter (for example
> /node1/keystone/DEFAULT/debug; sketched below).
>  And patch oslo.config to be able to generate confd templates with
> all the options (and ship the template in the package).
>  I like this option because it provides a way for operators to learn
> about all possible options in the configuration, with documentation
> and default values.
> 
> * Also let installers feed etcd, but use a standard template like
> you showed me last week (credits to you for the code):
> http://paste.openstack.org/show/2KZUQsWYpgrcG2K8TDcE/
>   I like this option because nothing has to be done in oslo.config,
> since we would use a standard template for all OpenStack configs (see
> the paste above).
> 
> Thoughts?
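A minimal sketch of the first option's etcd layout, assuming the installer
publishes values with the etcd v2 CLI that confd's etcd backend reads
(hypothetical prefix and values):

    # Hypothetical: per-node keystone options stored under
    # $CUSTOM_PREFIX/project/section/parameter
    etcdctl set /node1/keystone/DEFAULT/debug true
    etcdctl set /node1/keystone/database/connection \
        "mysql+pymysql://keystone:secret@mariadb/keystone"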

There are 2 problems with using the generic template.

1. In order for confd to work, you have to give it a list of all of the
  keys in etcd that it should monitor, and that list is
  application-specific.

2. Not all of our configuration values are simple strings or numbers.
  We have options for managing lists of values, and there is even
  an Opt class for loading a dictionary for some reason. So,
  rendering the value in the template will depend on the type of
  the option.

Given those constraints, it makes sense to generate a custom template
for each set of options. We need to generate the confd file anyway, and
the template can have the correct logic for rendering multi-valued
options.
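As a rough sketch of what such a generated, application-specific pair
could look like (hypothetical paths, keys, and option names; confd pairs
a TOML resource file with a Go template):

    # /etc/confd/conf.d/keystone.toml -- generated per application so
    # the key list matches that application's options (point 1 above).
    [template]
    src  = "keystone.conf.tmpl"
    dest = "/etc/keystone/keystone.conf"
    keys = [
        "/node1/keystone",
    ]

    # /etc/confd/templates/keystone.conf.tmpl -- fragment showing
    # type-aware rendering (point 2 above).
    [DEFAULT]
    debug = {{getv "/node1/keystone/DEFAULT/debug"}}
    {{/* hypothetical multi-valued option: one output line per value */}}
    {{range getvs "/node1/keystone/DEFAULT/some_multi_opt/*"}}
    some_multi_opt = {{.}}
    {{end}}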

One further problem I don't know how to address yet is applications
that use dynamic sections in configuration files. I think Cinder
is still the primary example of this, but other apps may use that
ability. I don't know how to tell confd that it needs to look at
the keys in those groups, since we don't know the section names in advance.
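For example, the dynamic-section pattern looks roughly like this in
cinder.conf, where the section names are chosen by the operator
(illustrative values):

    [DEFAULT]
    enabled_backends = lvm-1,ceph-1

    # The sections below are named by the operator, so a pre-built
    # confd template cannot know which keys to watch ahead of time.
    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_backend_name = lvm-1

    [ceph-1]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-1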

Doug



Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-07 Thread Doug Hellmann

> On Jun 7, 2017, at 7:20 AM, Emilien Macchi  wrote:
> 
> On Tue, Jun 6, 2017 at 6:08 PM, Emilien Macchi wrote:
>> Following-up the session that we had in Boston:
>> https://etherpad.openstack.org/p/BOS-forum-future-of-configuration-management
>> 
>> Here's an update on where we are and what is being done.
>> 
>> 
>> == Machine Readable Sample Config
>> 
>> Ben's spec has been merged: https://review.openstack.org/#/c/440835/
>> And also the code which implements it: 
>> https://review.openstack.org/#/c/451081/
>> He's now working on the documentation on how to use the feature.
>> 
>> $ oslo-config-generator --namespace keystone --format yaml > keystone.yaml
>> 
>> Here's an example of the output for Keystone config: https://clbin.com/EAfFM
>> This feature was requested at the PTG, and it's already done!
>> 
>> 
>> == Pluggable drivers for oslo.config
>> 
>> Doug's spec has been well written and the feedback from Summit and the
>> review was taken into account: https://review.openstack.org/#/c/454897/
>> It's now paused because we think we could use confd (with etcd driver)
>> to generate configuration files.
>> 
>> Building on the work Ben did for the Machine Readable Sample Config, we
>> could generate confd templates for all services (Keystone, Nova, etc.)
>> via a tool provided in oslo.config, covering all the options available
>> for a namespace.
> 
> I'm also wondering if we could use oslo-config-generator directly to
> generate confd templates, with a new output format. So we would have ini,
> yaml, json and confd.
> The "confd" format would be useful when building the rpms that we ship in containers.
> The "yaml" format would be useful for installers to expose the options
> directly to the user interface, so we know which parameters OpenStack
> provides and can reuse the data to push it into etcd.
> 
> Would that make sense?

I did think about making oslo-config-generator also take the YAML file as input
instead of scanning plugins, and then including all the output formats in a
single command. I haven’t looked to see how much extra complexity that would
add.

> 
>> We could have packaging builds (e.g. RDO distgit) use the tooling
>> when building packages, so we could ship confd templates in addition to
>> ini configuration files.
>> When services start (e.g. in containers), confd would generate
>> configuration files from the templates that are part of the container,
>> reading the values from etcd.
>> 
>> The benefit of doing this is that very little work is required in
>> oslo.config to make it happen (only a tool to generate confd
>> templates). It could be a first iteration.
>> Another benefit is that confd will generate a configuration file when
>> the application starts. So if etcd is down *after* the app starts,
>> it shouldn't break a service restart, as long as we don't ask confd
>> to re-generate the config. That's good for operators who were concerned
>> that the infrastructure would rely on etcd. In that case, we
>> would only need etcd at the initial deployment (and during lifecycle
>> actions like upgrades, etc.).
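A sketch of that container-startup flow, assuming a hypothetical
entrypoint script (the confd flags shown are its standard options):

    #!/bin/sh
    # Render the config once from etcd at startup. Because confd is not
    # left running in watch mode, a later service restart reuses the
    # already-generated file and does not need etcd to be reachable.
    confd -onetime -backend etcd -node http://etcd.example.com:2379
    # Hand off to whatever service command the container was given.
    exec "$@"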
>> 
>> The downside is that in the case of containers, they would still have
>> a configuration file within the container, and the whole goal of this
>> feature was to externalize configuration data and stop having
>> configuration files.
>> 
>> 
>> == What's next
>> 
>> I see 2 short-term actions that we can work on:
>> 
>> 1) Decide whether or not the confd solution would be acceptable as a
>> start. I'm asking the Kolla, TripleO, Helm and Ansible projects if they
>> would be willing to use this feature. I'm also asking operators to give
>> feedback on it.
>> 
>> 2) Investigate how to expose the parameters generated by Ben's work on
>> Machine Readable Sample Config to users (without having to
>> manually maintain all options) - I think this has to be solved on the
>> installers' side, but I might be wrong - and also investigate how to
>> populate the parameter data into etcd. That tool could probably be
>> provided by oslo.config.
>> 
>> 
>> 
>> Any feedback from folks working on installers or from operators would
>> be more than welcome!
>> 
>> Thanks,
>> --
>> Emilien Macchi
> 
> 
> 
> -- 
> Emilien Macchi
> 


[openstack-dev] [reno] we need help reviewing patches to reno

2017-06-06 Thread Doug Hellmann
I am looking for one or two people interested in learning about how reno
works to help with reviews. If you like graph traversal algorithms
and/or text processing, have a look at the code in the openstack/reno
repository and let me know if you're interested in helping out.

Doug



[openstack-dev] [release][infra][python3] how to handle release tools for python 3

2017-06-02 Thread Doug Hellmann
As we discussed in the team meeting today, I have filed reviews to add a
Python 3.5 unit test job to the release-tools repository:

https://review.openstack.org/470350  update semver module for python 3.5
https://review.openstack.org/470352  add python 3.5 unit test job for 
release-tools repository

There are 2 remaining tools that we use regularly that haven't been
ported.

openstack-infra/project-config/jenkins/scripts/release-tools/launchpad_add_comment.py
requires launchpadlib, which has at least one dependency that is not
available for Python 3. I propose that we continue to run this script
under Python 2, until all projects are migrated to storyboard and we can
drop it completely.

openstack-infra/release-tools/announce.sh uses some python programs in
the release-tools repository. Those work under python 3, but they are a
bit odd because they are the last remaining tools used by the automation
that live in that git repo. Everything else has either moved to
openstack/releases or openstack-infra/project-config. If we move these
tools, we will have all of our active scripts in a consistent place.

1. If we move the scripts to openstack/releases then we can easily
use the release note generation tool as part of the validation jobs,
and eliminate (or at least reduce) issues with release announcement
failures. The actual announcement job will have to clone the releases
repo to run the tool, but it already has to do that with the
release-tools repo.

2. The other option is to move the scripts to
openstack-infra/project-config.  I think this will end up being
more work, because that repository is not set up to encourage using
tox to configure virtualenvs where we can run console scripts, and
these tools rely on that technique right now. If we were starting
from scratch I think it would make sense to put them in project-config
with the other release tools, but they were designed in a way that
makes that more work right now.
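The tox technique mentioned above amounts to something like this
(a minimal sketch, not the actual release-tools configuration):

    [testenv:venv]
    deps = -r{toxinidir}/requirements.txt
    commands = {posargs}

A job then runs a console script inside the managed virtualenv with
something like "tox -e venv -- <script> <args>".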

Before I start working on option 1, I wanted to get some feedback from
the rest of the team.

Doug



Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-06-02 Thread Doug Hellmann
Excerpts from Matthew Treinish's message of 2017-06-01 20:51:24 -0400:
> On Thu, Jun 01, 2017 at 11:57:00AM -0400, Doug Hellmann wrote:
> > Excerpts from Thierry Carrez's message of 2017-06-01 11:51:50 +0200:
> > > Graham Hayes wrote:
> > > > On 01/06/17 01:30, Matthew Treinish wrote:
> > > >> TBH, it's a bit premature to have the discussion. These additional 
> > > >> programs do
> > > >> not exist yet, and there is a governance road block around this. Right 
> > > >> now the
> > > >> set of projects that can be used defcore/interopWG is limited to the 
> > > >> set of 
> > > >> projects in:
> > > >>
> > > >> https://governance.openstack.org/tc/reference/tags/tc_approved-release.html
> > > > 
> > > > Sure - but that is a solved problem, when the interop committee is
> > > > ready to propose them, they can add projects into that tag. Or am I
> > > > misunderstanding [1] (again)?
> > > 
> > > I think you understand it well. The Board/InteropWG should propose
> > > additions/removals of this tag, which will then be approved by the TC:
> > > 
> > > https://governance.openstack.org/tc/reference/tags/tc_approved-release.html#tag-application-process
> > > 
> > > > [...]
> > > >> We had a forum session on it (I can't find the etherpad for the 
> > > >> session) which
> > > >> was pretty speculative because it was about planning the new programs. 
> > > >> Part of
> > > >> that discussion was around the feasibility of using tests in plugins 
> > > >> and whether
> > > >> that would be desirable. Personally, I was in favor of doing that for 
> > > >> some of
> > > >> the proposed programs because of the way they were organized it was a 
> > > >> good fit.
> > > >> This is because the proposed new programs were extra additions on top 
> > > >> of the
> > > >> base existing interop program. But it was hardly a definitive 
> > > >> discussion.
> > > > 
> > > > Which will create 2 classes of testing for interop programs.
> > > 
> > > FWIW I would rather have a single way of doing "tests used in trademark
> > > programs" without differentiating between old and new trademark programs.
> > > 
> > > I fear that we are discussing solutions before defining the problem. We
> > > want:
> > > 
> > > 1- Decentralize test maintenance, through more tempest plugins, to
> > > account for limited QA resources
> > > 2- Additional codereview constraints and approval rules for tests that
> > > happen to be used in trademark programs
> > > 3- Discoverability/ease-of-install of the set of tests that happen to be
> > > used in trademark programs
> > > 4- A git repo layout that can be simply explained, for new teams to
> > > understand
> > > 
> > > It feels like the current git repo layout (result of that 2016-05-04
> > > resolution) optimizes for 2 and 3, which kind of works until you add
> > > more trademark programs, at which point it breaks 1 and 4.
> > > 
> > > I feel like you could get 2 and 3 without necessarily using git repo
> > > boundaries (using Gerrit approval rules and some tooling to install/run
> > > subset of tests across multiple git repos), which would allow you to
> > > optimize git repo layout to get 1 and 4...
> > > 
> > > Or am I missing something ?
> > > 
> > 
> > Right. The point of having the trademark tests "in tempest" was not
> > to have them "in the tempest repo", that was just an implementation
> > detail of the policy of "put them in a repository managed by people
> > who understand the expanded review rules".
> 
> There was more to it than this, a big part was duplication of effort as well.
> Tempest itself is almost a perfect fit for the scope of the testing defcore is
> doing. While tempest does additional testing that defcore doesn't use, a large
> subset is exactly what they want.

That does explain why Tempest was appealing to the DefCore folks.
I was trying to explain my motivation for writing the resolution
saying that we did not want DefCore using tests scattered throughout
a bunch of plugin repositories managed by different reviewer teams.

> > There were a lot of unexpected issues when we started treating the
> > test suite as a production tool for validating a cloud.

Re: [openstack-dev] [puppet][release][Release-job-failures] Tag of openstack/puppet-nova failed

2017-06-01 Thread Doug Hellmann
Excerpts from jenkins's message of 2017-05-31 20:26:33 +:
> Build failed.
> 
> - puppet-nova-releasenotes http://logs.openstack.org/d9/d913ccd1ea88f3661c32b0fcfdac58d749cd4eb2/tag/puppet-nova-releasenotes/cefa30a/ : FAILURE in 2m 13s
> 

This failure only prevented the release notes from being published, and
did not block the actual release.

The problem should be fixed by https://review.openstack.org/469872



Re: [openstack-dev] [swift][release][Release-job-failures] Tag of openstack/swift failed

2017-06-01 Thread Doug Hellmann
Excerpts from jenkins's message of 2017-05-31 22:46:21 +:
> Build failed.
> 
> - swift-releasenotes http://logs.openstack.org/e9/e9032fbea361df790901022740ac837a2a02daa0/tag/swift-releasenotes/687d120/ : FAILURE in 1m 47s
> 

This failure was just with publishing the release notes after the
tag was applied and did not actually block the release.

https://review.openstack.org/469881 should fix the problem



Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-06-01 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-06-01 11:09:56 +0100:
> On Wed, 31 May 2017, Doug Hellmann wrote:
> > Yeah, it sounds like the current organization of the repo is not
> > ideal in terms of equal playing field for all of our project teams.
> > I would be fine with all of the interop tests being in a plugin
> > together, or of saying that the tempest repo should only contain
> > those tests and that others should move to their own plugins. If we're
> > going to reorganize all of that, we should decide what new structure we
> > want and work it into the goal.
> 
> I feel like the discussion about the interop tests has strayed this
> conversation from the more general point about plugin "fairness" and
> allowed the vagueness in plans for interop to control our thinking
> and discussion about options in the bigger view.

I should have prefaced my initial response with a statement like
"For those of you who don't know or remember the history". It wasn't
meant to imply we shouldn't be making any changes, just that we
need to understand how we ended up where we are now so we don't
make a change that then no longer meets old requirements.

Doug



Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-06-01 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-06-01 11:51:50 +0200:
> Graham Hayes wrote:
> > On 01/06/17 01:30, Matthew Treinish wrote:
> >> TBH, it's a bit premature to have the discussion. These additional 
> >> programs do
> >> not exist yet, and there is a governance road block around this. Right now 
> >> the
> >> set of projects that can be used defcore/interopWG is limited to the set 
> >> of 
> >> projects in:
> >>
> >> https://governance.openstack.org/tc/reference/tags/tc_approved-release.html
> > 
> > Sure - but that is a solved problem, when the interop committee is
> > ready to propose them, they can add projects into that tag. Or am I
> > misunderstanding [1] (again)?
> 
> I think you understand it well. The Board/InteropWG should propose
> additions/removals of this tag, which will then be approved by the TC:
> 
> https://governance.openstack.org/tc/reference/tags/tc_approved-release.html#tag-application-process
> 
> > [...]
> >> We had a forum session on it (I can't find the etherpad for the session) 
> >> which
> >> was pretty speculative because it was about planning the new programs. 
> >> Part of
> >> that discussion was around the feasibility of using tests in plugins and 
> >> whether
> >> that would be desirable. Personally, I was in favor of doing that for some 
> >> of
> >> the proposed programs because of the way they were organized it was a good 
> >> fit.
> >> This is because the proposed new programs were extra additions on top of 
> >> the
> >> base existing interop program. But it was hardly a definitive discussion.
> > 
> > Which will create 2 classes of testing for interop programs.
> 
> FWIW I would rather have a single way of doing "tests used in trademark
> programs" without differentiating between old and new trademark programs.
> 
> I fear that we are discussing solutions before defining the problem. We
> want:
> 
> 1- Decentralize test maintenance, through more tempest plugins, to
> account for limited QA resources
> 2- Additional codereview constraints and approval rules for tests that
> happen to be used in trademark programs
> 3- Discoverability/ease-of-install of the set of tests that happen to be
> used in trademark programs
> 4- A git repo layout that can be simply explained, for new teams to
> understand
> 
> It feels like the current git repo layout (result of that 2016-05-04
> resolution) optimizes for 2 and 3, which kind of works until you add
> more trademark programs, at which point it breaks 1 and 4.
> 
> I feel like you could get 2 and 3 without necessarily using git repo
> boundaries (using Gerrit approval rules and some tooling to install/run
> subset of tests across multiple git repos), which would allow you to
> optimize git repo layout to get 1 and 4...
> 
> Or am I missing something ?
> 

Right. The point of having the trademark tests "in tempest" was not
to have them "in the tempest repo", that was just an implementation
detail of the policy of "put them in a repository managed by people
who understand the expanded review rules".

There were a lot of unexpected issues when we started treating the
test suite as a production tool for validating a cloud.  We have
to be careful about how we change the behavior of tests, for example,
even if the API responses are expected to be the same.  It's not
fair to vendors or operators who get trademark approval with one
release to have significant changes in behavior in the exact same
tests for the next release.

At the early stage, when the DefCore team was still figuring out
these issues, it made sense to put all of the tests in one place
with a review team that was actively participating in establishing
the process. If we better understand the "rules" for these tests
now, we can document them and distribute the work of maintaining the
test suites.

And yes, I agree with the argument that we should be fair and treat
all projects the same way. If we're going to move tests out of the
tempest repository, we should move all of them. The QA team can
still help maintain the test suites for whatever projects they want,
even if those tests are in plugins.

Doug



Re: [openstack-dev] [tc][ptl][all] Potential Queens Goal: Continuing Python 3.5+ Support

2017-06-01 Thread Doug Hellmann
Excerpts from Emilien Macchi's message of 2017-06-01 15:31:10 +0200:
> On Wed, May 31, 2017 at 10:38 PM, Mike  wrote:
> > Hello everyone,
> >
> > For this thread we will be discussing continuing Python 3.5+ support.
> > Emilien who has been helping with coordinating our efforts here with
> > Pike can probably add more here, but glancing at our goals document
> > [1] it looks like we have a lot of unanswered projects’ status, but
> > mostly we have python 3.5 unit test voting jobs done thanks to this
> > effort! I have no idea how to use the graphite dashboard, but here’s a
> > graph [2] showing success vs failure with python-35 jobs across all
> > projects.
> 
> Indeed, nice work from the community to make progress on this effort.
> 
> > Glancing at that I think it’s safe to say we can start discussions on
> > moving forward with having our functional tests support python 3.5.
> > Some projects are already ahead in this. Let the discussions begin so
> > we can aid the decision in the  TC deciding our community wide goals
> > for Queens [3].
> 
> +1 - making progress on functional tests looks like the next thing and
> Queens cycle could be used. I'm happy to keep helping on coordination
> if needed.

Unit tests were optional, according to the goal. The functional and
integration tests are much more important.

I know we have the integrated gate running on python 3, so that
covers cinder, glance, keystone, neutron, nova, as well as devstack
and tempest. How are other projects doing with getting their similar
jobs set up and running?

Doug

> 
> >
> > [1] - https://governance.openstack.org/tc/goals/pike/python35.html
> > [2] - 
> > http://graphite.openstack.org/render/?width=1273&height=554&_salt=1496261911.56&from=00%3A00_20170401&until=23%3A59_20170531&target=sumSeries(stats.zuul.pipeline.gate.job.gate-*-python35.SUCCESS)&target=sumSeries(stats.zuul.pipeline.gate.job.gate-*-python35.FAILURE)
> > [3] - https://governance.openstack.org/tc/goals/index.html
> >
> > —
> > Mike Perez
> 



Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Doug Hellmann
Excerpts from Monty Taylor's message of 2017-05-31 07:34:03 -0500:
> On 05/31/2017 06:39 AM, Sean McGinnis wrote:
> > On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
> >> We had a discussion a few months back around what to do for cryptography
> >> since pycrypto is basically dead [1]. After some discussion, at least on
> >> the Cinder project, we decided the best way forward was to use the
> >> cryptography package instead, and work has been done to completely remove
> >> pycrypto usage.
> >>
> >> It all seemed like a good plan at the time.
> >>
> >> I now notice that for the python-cinderclient jobs, there is a pypy job
> >> (non-voting!) that is failing because the cryptography package is not
> >> supported with pypy.
> >>
>> So this leaves us with two options, I guess: change the crypto library again,
> >> or drop support for pypy.
> >>
> >> I am not aware of anyone using pypy, and there are other valid working
> >> alternatives. I would much rather just drop support for it than redo our
> >> crypto functions again.
> >>
> >> Thoughts? I'm sure the Grand Champion of the Clients (Monty) probably has
> >> some input?
> 
> There was work a few years ago to get pypy support going - but it never 
> really seemed to catch on. The chance that we're going to start a new 
> push and be successful at this point seems low at best.
> 
> I'd argue that pypy is already not supported, so dropping the non-voting 
> job doesn't seem like losing very much to me. Reworking cryptography 
> libs again, otoh, seems like a lot of work.
> 
> Monty
> 

This question came up recently for the Oslo libraries, and I think we
also agreed that pypy support was not being actively maintained.

Doug



Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-05-31 11:22:50 +0100:
> On Wed, 31 May 2017, Graham Hayes wrote:
> > On 30/05/17 19:09, Doug Hellmann wrote:
> >> Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:
> >>> Note that this goal only applies to tempest _plugins_. Projects
> >>> which have their tests in the core of tempest have nothing to do. I
> >>> wonder if it wouldn't be more fair for all projects to use plugins
> >>> for their tempest tests?
> >>
> >> All projects may have plugins, but all projects with tests used by
> >> the Interop WG (formerly DefCore) for trademark certification must
> >> place at least those tests in the tempest repo, to be managed by
> >> the QA team [1]. As new projects are added to those trademark
> >> programs, the tests are supposed to move to the central repo to
> >> ensure the additional review criteria are applied properly.
> 
> Thanks for the clarification, Doug. I don't think it changes the
> main thrust of what I was trying to say (more below).
> 
> >> [1] 
> >> https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html
> >
> > In the InterOp discussions in Boston, it was indicated that some people
> > on the QA team were not comfortable with "non core" project (even in
> > the InterOp program) having tests in core tempest.
> >
> > I do think that may be a bigger discussion though.
> 
> I'm not suggesting we change everything (because that would take a
> lot of time and energy we probably don't have), but I had some
> thoughts in reaction to this and sharing is caring:
> 
> The way in which the tempest _repo_ is a combination of smoke,
> integration, validation and trademark enforcement testing is very
> confusing to me. If we then lay on top of that the concept of "core"
> and "not core" with regard to who is supposed to put their tests in
> a plugin and who isn't (except when it is trademark related!) it all
> gets quite bewildering.
> 
> The resolution above says: "the OpenStack community will benefit
> from having the interoperability tests used by DefCore in a central
> location". Findability is a good goal so this a reasonable
> assertion, but then the directive to lump those tests in with a
> bunch of other stuff seems off if the goal is to "easier to read and
> understand a set of tests".
> 
> If, instead, Tempest is a framework and all tests are in plugins
> that each have their own repo then it is much easier to look for a
> repo (if there is a common pattern) and know "these are the interop
> tests for openstack" and "these are the integration tests for nova"
> and even "these are the integration tests for the thing we are
> currently describing as 'core'[1]".
> 
> An area where this probably falls down is with validation. How do
> you know which plugins to assemble in order to validate this cloud
> you've just built? Except that we already have this problem now that
> we are requiring most projects to manage their tempest tests as
> plugins. Does it become worse by everything being a plugin?
> 
> [1] We really need a better name for this.

Yeah, it sounds like the current organization of the repo is not
ideal in terms of equal playing field for all of our project teams.
I would be fine with all of the interop tests being in a plugin
together, or of saying that the tempest repo should only contain
those tests and that others should move to their own plugins. If we're
going to reorganize all of that, we should decide what new structure we
want and work it into the goal.

The point of centralizing review of that specific set of tests was
to make it easier for interop folks to help ensure the tests continue
to follow the additionally stringent review criteria that comes
with being used as part of the trademark program. The QA team agreed
to do that, so it's news to me that they're considering reversing
course.  If the QA team isn't going to continue, we'll need to
figure out what that means and potentially find another group to
do it.

Doug



[openstack-dev] [tc][kolla][stable][security][infra][all] guidelines for managing releases of binary artifacts

2017-05-30 Thread Doug Hellmann
Based on two other recent threads [1][2] and some discussions on
IRC, I have written up some guidelines [3] that try to address the
concerns I have with us publishing binary artifacts while still
allowing the kolla team and others to move ahead with the work they
are trying to do.

I would appreciate feedback about whether these would complicate
builds or make them impossible, as well as whether folks think they
go far enough to mitigate the risks described in those email threads.

Doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116677.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117282.html
[3] https://review.openstack.org/#/c/469265/



Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-30 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2017-05-30 16:11:41 -0500:
> On 05/30/2017 04:08 PM, Emilien Macchi wrote:
> > On Tue, May 30, 2017 at 8:36 PM, Matthew Thode
> >  wrote:
> >> We have a problem in requirements that projects that don't have the
> >> cycle-with-intermediary release model (most of the cycle-with-milestones
> >> model) don't get integrated with requirements until the cycle is fully
> >> done.  This causes a few problems.
> >>
> >> * These projects don't produce a consumable release for requirements
> >> until end of cycle (which does not accept beta releases).
> >>
> >> * The former causes old requirements to be kept in place, meaning caps,
> >> exclusions, etc. are being kept, which can cause conflicts.
> >>
> >> * Keeping the old version in requirements means that cross dependencies
> >> are not tested with updated versions.
> >>
> >> This has hit us with the mistral and tripleo projects particularly
> >> (tagged in the title).  They disallow pbr-3.0.0 and in the case of
> >> mistral sqlalchemy updates.
> >>
> >> [mistral]
> >> mistral - blocking sqlalchemy - milestones
> >>
> >> [tripleo]
> >> os-refresh-config - blocking pbr - milestones
> >> os-apply-config - blocking pbr - milestones
> >> os-collect-config - blocking pbr - milestones
> > 
> > These are cycle-with-milestones., like os-net-config for example,
> > which wasn't mentioned in this email. It has the same releases as
> > os-net-config also, so I'm confused why these 3 cause issue, I
> > probably missed something.
> > 
> > Anyway, I'm happy to change os-*-config (from TripleO) to be
> > cycle-with-intermediary. Quick question though, which tag would you
> > like to see, regarding what we already did for pike-1?
> > 
> > Thanks,
> > 
> 
> Pike is fine as it's just master that has this issue.  The problem is
> that the latest release blocks the pbr from upper-constraints from being
> coinstallable.

The issue is that even with beta releases like we publish at
milestones, new versions of these projects won't be installed in
gate jobs because we have to give pip special instructions to allow
pre-releases and we, as a rule, do not give it those instructions.
The result is that we need anything that is going to be installed
via pip to be releasable at any point in the cycle, to address
dependency issues like the one we're dealing with here, and that means
changing the model back to cycle-with-intermediary.
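For context, pip skips pre-release versions unless it is explicitly told
to consider them, which is why milestone betas never satisfy a
requirement in the gate (hypothetical version numbers):

    # A beta such as 1.2.0.0b1 is ignored by default...
    pip install 'os-vif>=1.0.0'
    # ...and is only considered when pre-releases are allowed explicitly,
    # which our jobs do not do:
    pip install --pre 'os-vif>=1.0.0'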

This isn't something I foresaw when we talked about making all of
the TripleO components use a consistent model in the past. I'm sorry
for the oversight.

Doug



Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-30 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2017-05-30 13:36:02 -0500:
> We have a problem in requirements that projects that don't have the
> cycle-with-intermediary release model (most of the cycle-with-milestones
> model) don't get integrated with requirements until the cycle is fully
> done.  This causes a few problems.
> 
> * These projects don't produce a consumable release for requirements
> until end of cycle (which does not accept beta releases).
> 
> * The former causes old requirements to be kept in place, meaning caps,
> exclusions, etc. are being kept, which can cause conflicts.
> 
> * Keeping the old version in requirements means that cross dependencies
> are not tested with updated versions.
> 
> This has hit us with the mistral and tripleo projects particularly
> (tagged in the title).  They disallow pbr-3.0.0 and in the case of
> mistral sqlalchemy updates.
> 
> [mistral]
> mistral - blocking sqlalchemy - milestones
> 
> [tripleo]
> os-refresh-config - blocking pbr - milestones
> os-apply-config - blocking pbr - milestones
> os-collect-config - blocking pbr - milestones
> 
> [nova]
> os-vif - blocking pbr - intermediary
> 
> [horizon]
> django-openstack-auth - blocking django - intermediary
> 
> 
> So, here's what needs doing.
> 
> Those projects that are already using the cycle-with-intermediary model
> should just do a release.
> 
> For those that are using cycle-with-milestones, you will need to change
> to the cycle-with-intermediary model, and do a full release, both can be
> done at the same time.
> 
> If anyone has any questions or wants clarifications this thread is good,
> or I'm on irc as prometheanfire in the #openstack-requirements channel.
> 

We probably want to add a review criterion for the requirements list
stating that projects using the cycle-with-milestones model are not added
to the list, to avoid this issue in the future.

Doug



Re: [openstack-dev] [tc] [all] TC Report 22

2017-05-30 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:
> 
> There's no TC meeting this week. Thierry did a second weekly status
> report[^1]. There will be a TC meeting next week (Tuesday, 6th June
> at 20:00 UTC) with the intention of discussing the proposals about
> postgreSQL (of which more below). Here are my comments on pending TC
> activity that either seems relevant or needs additional input.
> 
> [^1]: 
> 
> 
> # Pending Stuff
> 
> ## Queens Community Goals
> 
> Proposals for community-wide goals[^2] for the Queens cycle have started
> coming in. These are changes which, if approved, all projects are
> expected to satisfy. In Pike the goals are:
> 
> * [all control plane APIs deployable as WSGI 
> apps](https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html)
> * [supporting Python 
> 3.5](https://governance.openstack.org/tc/goals/pike/python35.html)
> 
> The full suite of goals for Queens has not yet been decided.
> Identifying goals is a community-wide process. Your ideas are
> wanted.
> 
> ### Split Tempest Plugins into Separate Repos
> 
> This goal for Queens is already approved. Any project which manages
> its tempest tests as a plugin should move those tests into a
> separate repo. The goal is at[^3]. The review for it[^4] has further
> discussion on why it is a good idea.
> 
> The original goal did not provide instructions on how to do it.
> There is a proposal in progress[^5] to add a link to an etherpad[^6]
> with instructions.
> 
> Note that this goal only applies to tempest _plugins_. Projects
> which have their tests in the core of tempest have nothing to do. I
> wonder if it wouldn't be more fair for all projects to use plugins
> for their tempest tests?

All projects may have plugins, but all projects with tests used by
the Interop WG (formerly DefCore) for trademark certification must
place at least those tests in the tempest repo, to be managed by
the QA team [1]. As new projects are added to those trademark
programs, the tests are supposed to move to the central repo to
ensure the additional review criteria are applied properly.

[1] 
https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html

> 
> ### Two Proposals on Improving Version Discovery
> 
> Monty has been writing API-WG guidelines about how to properly use
> the service catalog and do version discovery[^7]. Building from that
> he's proposed two new goals:
> 
> * [Add Queens goal to add collection 
> links](https://review.openstack.org/#/c/468436/)
> * [Add Queens goal for full discovery 
> alignment](https://review.openstack.org/#/c/468437/)
> 
> The first is a small step in the direction of improving version
> discovery, the second is all the steps to getting all projects
> supporting proper version discovery, in case we are feeling extra
> capable.
> 
> Both of these need review from project contributors, first to see if there
> is agreement on the strategies, second to see if they are
> achievable.
> 
> [^2]: 
> [^3]: 
> 
> [^4]: 
> [^5]: 
> [^6]: 
> [^7]: 
> 
> ## etcd as a base service
> 
> etcd has been proposed as a base service[^8]. A "base" service is
> one that that can be expected to be present in any OpenStack
> deployment. The hope is that by declaring this we can finally
> bootstrap the distributed locking, group membership and service
> liveness functionality that we've been talking about for years. If
> you want this please say so on the review. You want this.
> 
> If for some reason you _don't_ want this, then you'll want to
> register your reasons as soon as possible. The review will merge
> soon.
> 
> [^8]: 
> 
> ## openstack-tc IRC channel
> 
> With the decrease in the number of TC meetings on IRC there's a plan
> to have [office hours](https://review.openstack.org/#/c/467256/)
> where some significant chunk of the TC will be available. Initially
> this was going to be in the `#openstack-dev` channel but in the
> hopes of making the logs readable after the fact, a [new channel is
> proposed](https://review.openstack.org/#/c/467386/).
> 
> This is likely to pass soon, unless objections are raised. If you
> have some, please raise them on the review.
> 
> ## postgreSQL
> 
> The discussions around postgreSQL have yet to resolve. See [last week's
> report](https://anticdent.org/tc-report-21.html) for additional
> information. Because things are blocked and there have been some
> expressions of review fatigue there will be, as mentioned above, a
> TC meeting next week on 6th June, 20:00 UTC. Show up if you have an
> opinion if or how 

Re: [openstack-dev] [oslo][all][logging] logging debugging improvement work status

2017-05-30 Thread Doug Hellmann
The oslo.log changes to include exception details are in version
3.27.0.

Doug

Excerpts from ChangBo Guo's message of 2017-05-27 16:22:27 +0800:
> Thanks Doug, I will release it on next Monday.
> 
> 2017-05-25 22:15 GMT+08:00 Doug Hellmann :
> 
> > One outcome from the forum session about improving logging debugging
> > was agreement on the proposal to add more details about exceptions
> > to the logs. The spec [1] was updated and has been approved, and
> > the patches to implement the work in oslo.log have also been approved
> > [2].
> >
> > The changes should be included in the Oslo releases next week. I
> > think it makes sense to hold off until then, given the holiday
> > weekend for many of the Oslo team members. As soon as the constraints
> > are updated to allow the new version of oslo.log, the log output
> > produced by devstack will change so that any log message emitted
> > in the context of handling an exception will include that exception
> > detail at the end of the log message (see the spec for details about
> > configuring that behavior).
> >
> > After we start seeing this run in the gate for a bit, we can evaluate
> > if we need to tweak the format or skip any other of Python's built-in
> > exception types.
> >
> > Thanks to Dims, Flavio, gcb, and Eric Fried for their help with
> > code reviews, and to the rest of the Oslo team and everyone who
> > participated in the discussion of the spec online and in Boston.
> >
> > Doug
> >
> > [1] http://specs.openstack.org/openstack/oslo-specs/specs/
> > pike/improving-logging-debugging.html
> > [2] https://review.openstack.org/#/q/topic:improve-logging-debugging
> >
> >
> 



Re: [openstack-dev] [mistral][freezer] adopting oslo.context for logging debugging and tracing

2017-05-26 Thread Doug Hellmann
Excerpts from Saad Zaher's message of 2017-05-26 12:03:24 +0100:
> Hi Doug,
> 
> Thanks for your review. Actually freezer has a separate repo for the api,
> it can be found here [1]. Freezer is using oslo.context since newton. If
> you have the time you can take a look at it and let us know if you have any
> comments.

Ah, that explains why I couldn't find it in the freezer repo. :-)

Doug

> 
> Thanks for your help
> 
> [1] https://github.com/openstack/freezer-api
> 
> Best Regards,
> Saad!
> 
> On Fri, May 26, 2017 at 5:45 AM, Renat Akhmerov wrote:
> 
> > Thanks Doug. We’ll look into this.
> >
> > @Tuan, is there any roadblocks with the patch you’re working on? [1]
> >
> > [1] https://review.openstack.org/#/c/455407/
> >
> >
> > Renat
> >
> > On 26 May 2017, 01:54 +0700, Doug Hellmann , wrote:
> >
> > The new work to add the exception information and request ID tracing
> > depends on using both oslo.context and oslo.log to have all of the
> > relevant pieces of information available as log messages are emitted.
> >
> > In the course of reviewing the "done" status for those initiatives,
> > I noticed that although mistral and freezer are using oslo.log,
> > neither uses oslo.context. That means neither project will get the
> > extra debugging information, and neither project will see the global
> > request ID in logs.
> >
> > I started looking at updating mistral's context to use oslo.context
> > as a base class, but ran into some issues because of some extensions
> > made to the existing class. I wasn't able to find where freezer is
> > doing anything at all with an API request context.
> >
> > I'm available to help, if someone else wants to pick up the work.
> >
> > Doug
> >
> >
> >
> 



[openstack-dev] [mistral][freezer] adopting oslo.context for logging debugging and tracing

2017-05-25 Thread Doug Hellmann
The new work to add the exception information and request ID tracing
depends on using both oslo.context and oslo.log to have all of the
relevant pieces of information available as log messages are emitted.

In the course of reviewing the "done" status for those initiatives,
I noticed that although mistral and freezer are using oslo.log,
neither uses oslo.context. That means neither project will get the
extra debugging information, and neither project will see the global
request ID in logs.

I started looking at updating mistral's context to use oslo.context
as a base class, but ran into some issues because of some extensions
made to the existing class. I wasn't able to find where freezer is
doing anything at all with an API request context.
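The base-class pattern being suggested is roughly the following (a
minimal sketch, not mistral's actual context class; the extra field is
hypothetical):

    from oslo_context import context


    class RequestContext(context.RequestContext):
        """Project request context built on oslo.context.

        Inheriting from oslo.context lets oslo.log pick up the standard
        fields (request_id, global_request_id, user, project, ...) when
        formatting log records.
        """

        def __init__(self, workflow_name=None, **kwargs):
            # 'workflow_name' is a hypothetical project-specific field;
            # everything oslo.context understands passes through kwargs.
            super(RequestContext, self).__init__(**kwargs)
            self.workflow_name = workflow_name

        def to_dict(self):
            d = super(RequestContext, self).to_dict()
            d['workflow_name'] = self.workflow_name
            return d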

I'm available to help, if someone else wants to pick up the work.

Doug



[openstack-dev] [oslo][all][logging] logging debugging improvement work status

2017-05-25 Thread Doug Hellmann
One outcome from the forum session about improving logging debugging
was agreement on the proposal to add more details about exceptions
to the logs. The spec [1] was updated and has been approved, and
the patches to implement the work in oslo.log have also been approved
[2].

The changes should be included in the Oslo releases next week. I
think it makes sense to hold off until then, given the holiday
weekend for many of the Oslo team members. As soon as the constraints
are updated to allow the new version of oslo.log, the log output
produced by devstack will change so that any log message emitted
in the context of handling an exception will include that exception
detail at the end of the log message (see the spec for details about
configuring that behavior).
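A minimal illustration of the new behavior (everything here other than
the oslo.log import and logger call is hypothetical):

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)


    def connect(url):
        try:
            open_connection(url)   # hypothetical helper that raises
        except Exception:
            # Because this log call happens while an exception is being
            # handled, oslo.log appends the exception detail to the end
            # of the emitted message.
            LOG.error('connection to %s failed', url)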

After we start seeing this run in the gate for a bit, we can evaluate
if we need to tweak the format or skip any other of Python's built-in
exception types.

Thanks to Dims, Flavio, gcb, and Eric Fried for their help with
code reviews, and to the rest of the Oslo team and everyone who
participated in the discussion of the spec online and in Boston.

Doug

[1] 
http://specs.openstack.org/openstack/oslo-specs/specs/pike/improving-logging-debugging.html
[2] https://review.openstack.org/#/q/topic:improve-logging-debugging



Re: [openstack-dev] [release] Issues with reno

2017-05-24 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2017-05-23 12:10:56 -0400:
> Excerpts from Matt Riedemann's message of 2017-05-22 21:48:37 -0500:
> > I think Doug and I have talked about this before, but it came up again 
> > tonight.
> > 
> > There seems to be an issue where release notes for the current series 
> > don't show up in the published release notes, but unreleased things do.
> > 
> > For example, the python-novaclient release notes:
> > 
> > https://docs.openstack.org/releasenotes/python-novaclient/
> > 
> > Contain Ocata series release notes and the currently unreleased set of 
> > changes for Pike, but doesn't include the 8.0.0 release notes, which is 
> > important for projects impacted by things we removed in the 8.0.0 
> > release (lots of deprecated proxy APIs and CLIs were removed).
> > 
> > I've noticed the same for things in Nova's release notes where 
> > everything between ocata and the p-1 tag is missing.
> > 
> > Is there already a bug for this?
> > 
> 
> I don't think there is a bug, but I have it in my notes to look
> into it this week based on our earlier conversation. Based purely
> on the description, the problem might be related to a similar issue
> the Ironic team reported in https://bugs.launchpad.net/reno/+bug/1682147
> 
> Doug
> 

I believe https://review.openstack.org/#/c/467733/ fixes this behavior.
I've tested it with python-novaclient and ironic. Please take a look at
the results and let me know if that's doing what you all expect.

Doug



Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-05-24 16:44:44 +0200:
> Doug Hellmann wrote:
> > Two questions about theme-based initiatives:
> > 
> > I would like to have a discussion about the work to stop syncing
> > requirements and to see if we can make progress on the implementation
> > while we have the right people together. That seems like a topic
> > for Monday or Tuesday. Would I reserve one of the "last-minute WG"
> > rooms or extra rooms (it doesn't feel very last-minute if I know
> > about it now)?
> 
> We have some flexibility there, depending on how much time we need. If
> it's an hour or two, I would go and book one of the "reservable rooms".
> If it's a day or two, I would assign one of the rooms reserved for
> upcoming workgroups/informal teams. "Last-minute" is confusing (just
> changed it on the spreadsheet), those rooms are more to cater for needs
> that are not represented in formal workgroups today.

Ideally I'd like some time to actually hack on the tools. If that's not
possible, an hour or two to work out a detailed plan would be enough.

> > I don't see a time on here for the Docs team to work together on
> > the major initiative they have going to change how publishing works.
> > I didn't have the impression that we could anticipate that work
> > being completed this cycle. Is the "helproom" going to be enough,
> > or do we need a separate session for that, too?
> 
> Again, we should be pretty flexible on that. We can reuse the doc
> helproom, or we can say that the doc helproom is only on one day and the
> other day is dedicated to making the doc publishing work. And if the doc
> refactoring ends up being a release goal, it could use a release goal
> room instead.

Let's see what Alex thinks.

> All in all, the idea is to show that on Mon-Tue we have a number of
> rooms set apart to cover for any inter-project need we may have (already
> identified or not). The room allocation is actually much tighter on
> Wed-Thu :)

OK.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-05-24 12:10:17 +0200:
> Hi everyone,
> 
> In a previous thread[1] I introduced the idea of moving the PTG from a
> purely horizontal/vertical week split to a more
> inter-project/intra-project activities split, and the initial comments
> were positive.
> 
> We need to solidify how the week will look like before we open up
> registration (first week of June), so that people can plan their
> attendance accordingly. Based on the currently-signed-up teams and
> projected room availability, I built a strawman proposal of how that
> could look:
> 
> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312&single=true
> 
> Let me know what you think. If you're scheduled on the Wed-Fri but would
> rather be scheduled on Mon-Tue to avoid conflicting with another team,
> let me know. If you're scheduled on Wed-Fri but plan to skip the Friday,
> let me know as well, I'll update the spreadsheet accordingly.
> 
> One of the things you might notice on Mon-Tue is the "helproom" concept.
> In the spirit of separating inter-project and intra-project activities,
> support teams (Infra, QA, Release Management, Stable branch maintenance,
> but also teams that are looking at decentralizing their work like Docs
> or Horizon) would have team members in helprooms available to provide
> guidance to vertical teams on their specific needs. Some of those teams
> (like Infra/QA) could still have a "team meeting" on Wed-Fri to get
> stuff done as a team, though.
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116971.html
> 

Two questions about theme-based initiatives:

I would like to have a discussion about the work to stop syncing
requirements and to see if we can make progress on the implementation
while we have the right people together. That seems like a topic
for Monday or Tuesday. Would I reserve one of the "last-minute WG"
rooms or extra rooms (it doesn't feel very last-minute if I know
about it now)?

I don't see a time on here for the Docs team to work together on
the major initiative they have going to change how publishing works.
I didn't have the impression that we could anticipate that work
being completed this cycle. Is the "helproom" going to be enough,
or do we need a separate session for that, too?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread Doug Hellmann
Excerpts from John Dickinson's message of 2017-05-23 09:12:29 -0700:
> 
> I can sympathize with "do it tomorrow" turning into 6 weeks later...
>
> Part of the issue for me, personally, is that a governance patch
> does *not* feel simple or lightweight. I assume (in part based on
> experience) that any governance patch I propose will be closely
> examined and I will be forced to justify every corner case and
> comment made. Frankly, writing the patch that will stand up to a
> critical eye will take a long time. I'll do it tomorrow...

Maybe this does point to a need to move that information somewhere else.
It would ultimately be the same people reviewing it, though. I feel
strongly that we need the review step, but if folks think a different
repository would make a difference I'd be happy to set that up.

> Let's take the py3 goal as an example. Note: I am *not* wanting
> to get into a discussion about particular py3 issues or whatever.
> This is a discussion on the goals process, and I'm only using one
> of the current goals as an example of why I haven't proposed a
> governance patch for it.

> Swift does not support Py3. So clearly, there's work to be done
> to meet the goal. I've talked with others in the community about
> some of the blockers and concerns about porting to Py3. Several of
> the concerns are not trivial and will take substantial work to
> overcome[1]. A governance patch will need to list these issues, but
> I don't know if this is a complete list. If I propose a list that's
> incomplete, I feel like I'll be judged on the list I first proposed
> ("you finished the list, why doesn't it work?") instead of being a
> more dynamic process. I need to spend more time understanding what
> the issues are to make sure I have a complete list. I'll propose
> that patch tomorrow...

The patch does not necessarily need to list every detail. The purpose
of having a list of artifacts in the goal document is so that anyone
who wants to understand the state of the implementation can go look
there.  So, for example, if you're using a wiki page or an etherpad
to keep track of the details within the team, the patch only needs
to include a link to that. Some teams have done more, linking to
specs or changes that are already under review. Exactly what type
of artifact counts for a team is really up to that team.

The point is to show that each team is aware of the goal, and that
they've put together information in a place that someone outside
of the team can find it to either help, or at least follow progress.

> The outstanding work to get Py3 support in Swift is very large.
> Yet there are more goals being discussed now, and there's no way I
> can get Py3 support in Swift in Pike. Or Queens. Or probably Rocky
> either. That's not to say it isn't an important goal, but the scope
> combined with the TC deadline means that my governance patch for
> this goal (the tl;dr version is "not gonna happen") has to address
> this in sufficient detail to stand up to review by TC members who
> are on the PSF! I guess I'll start writing that tomorrow...

Some teams have a bit of a head start, but we expect many teams to
find the Python 3 work more than can be completed in a cycle. That's
perfectly OK. At the end of the cycle, we'll see where things stand,
and determine what the next steps are. That retrospective process
will be up to the teams, but I would expect it to factor into the
TC's decisions about what goals are adopted for Queens.

We don't want to have a big pile of unmet goals that all teams are
struggling to make progress on. That's why we have been limiting
ourselves to 1-2 goals per cycle.

> While I know that Py3 support is important, I also have to
> prioritize it against other important things. My employer has
> prioritized certain features because that directly impacts our
> ability to add customers (which directly affects my ability to get
> paid). Other employers in the community are doing the same for their
> employees. In the broader community, as clusters have grown over

There is undoubtedly tension between upstream and downstream needs
in some of these areas. We see that tension a lot with cross-project
initiatives. I don't have a good generic answer to the problem of
balancing community and employer needs, so I think the conversation
will have to happen case-by-case.

If we're finding that all of the contributors to a team are discouraged
from working on technical debt issues or other community goals,
we'll need to address that. Uncovering that bit of information would
be an important outcome for the goals process, especially if it is
stated as directly as "no team member is being given time by their
employer to work on this community goal." If there is no response
from a team at all, though, we have no idea why that is the case.

If we know a team has issues tracking the goals due to a lack of
resources, then when the Board asks "how can we help," as they do
every time we have a joint meeting, we will have a concrete answer for them.

Re: [openstack-dev] [release] Issues with reno

2017-05-23 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2017-05-22 21:48:37 -0500:
> I think Doug and I have talked about this before, but it came up again 
> tonight.
> 
> There seems to be an issue where release notes for the current series 
> don't show up in the published release notes, but unreleased things do.
> 
> For example, the python-novaclient release notes:
> 
> https://docs.openstack.org/releasenotes/python-novaclient/
> 
> Contain Ocata series release notes and the currently unreleased set of 
> changes for Pike, but doesn't include the 8.0.0 release notes, which is 
> important for projects impacted by things we removed in the 8.0.0 
> release (lots of deprecated proxy APIs and CLIs were removed).
> 
> I've noticed the same for things in Nova's release notes where 
> everything between ocata and the p-1 tag is missing.
> 
> Is there already a bug for this?
> 

I don't think there is a bug, but I have it in my notes to look
into it this week based on our earlier conversation. Based purely
on the description, the problem might be related to a similar issue
the Ironic team reported in https://bugs.launchpad.net/reno/+bug/1682147

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][tc][infra][security][stable] Proposal for shipping binaries and containers

2017-05-23 Thread Doug Hellmann
Excerpts from Davanum Srinivas (dims)'s message of 2017-05-23 10:44:30 -0400:
> Team,
> 
> Background:
> For projects based on Go and Containers we need to ship binaries, for

Can you elaborate on the use of the term "need" here? Is that because
otherwise the projects can't be consumed? Is it the "norm" for
projects from those communities? Something else?

> example Kubernetes, etcd both ship binaries and maintain stable
> branches as well.
>   https://github.com/kubernetes/kubernetes/releases
>   https://github.com/coreos/etcd/releases/
> 
> Kubernetes, for example, ships container images to public registries as well:
>   
> https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/hyperkube?pli=1
>   
> https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube

What are the support lifetimes for those images? Who maintains them?

> So here's a proposal based on the really long thread:
> http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116677
> 
> The idea is to augment the existing processes for the new deliverables.
> 
> * Projects define CI jobs for generating binaries and containers (some
> already do!)
> * Release team automation will kick builds off when specific versions
> are released for the binaries and containers (Since Go based projects
> can do cross-builds, we won't need to run these jobs on multiple
> architectures which will keep the release process simple)

I see how this would work for Go builds, since we would be tagging the
thing being built. My understanding is that Kolla images are using the
Kolla version, not the version of the software inside the image, though.
How would that work? (Or maybe I misunderstood something from another
thread and that's not how the images are versioned?)

> * Just like we upload stuff to tarballs.openstack.org, we will upload
> binaries and containers there as well

I know there's an infra spec for doing some of this, so I assume we
anticipate having the storage capacity needed?

> * Just like we upload things to pypi, we will upload containers with
> specific versions to public repos.
> * Projects can choose from the existing release models to make this
> process as frequent as they need.
> 
> Please note that I am deliberately ruling out the following
> * Daily/Nightly releases that are accessible to end users, especially
> from stable branches.

The Kolla team did seem to want periodic builds for testing (to avoid
having to build images in the test pipeline, IIUC). Do we still want to
build those to tarballs.o.o? Does that even meet the needs of those test
jobs?

> * Project teams directly responsible for pushing stuff to end users
> 
> What do you think?
> 
> Thanks,
> Dims
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2017-05-23 08:58:08 -0500:
> > >
> > > - Is it that the reporting process is too heavy ? (requiring answers
> > > from projects that are obviously unaffected)
> > 
> > I've thought about this, OSC was unaffected by one of the goals but
> > not the other, so I can't really hide in this bucket.  It really is
> > not that hard to put up a review saying "not me".
> > 
> > > - Is it that people ignore the deadlines and missed the reminders ?
> > > (some unaffected project teams also do not do releases, and therefore
> > > ignore the release countdown emails)
> > 
> > In my case, not so much "ignore" but "put off until tomorrow" where
> > tomorrow turned in to 6 weeks.  I really don't have a hard reason
> > other than simply not prioritizing it because I knew one of the goals
> > was going to take some coordination work
> > 
> 
> +1 - this has been my case, unfortunately.
> 
> A patch submission has the feeling of a major thing that goes through
> a lot of process (at least still in my head). I wonder if we would be
> better off tracking some of this through a wiki page or even an
> etherpad, with just the completion of the goal being something
> submitted to the repo. Then it would be really easy to update at any
> point with notes like "WIP patch put up but still working on it" along
> the way.

The review process for this type of governance patch is pretty light
(they fall under the one-week-no-objections house rule), but I
decided to use a patch instead of the wiki specifically because it
allows for feedback. We've had several cases where teams didn't
provide enough detail or didn't think a goal applied to them when
it did (deploying with WSGI came up at least once).  Wiki changes
can be tracked, but if someone has a question they have to go track
down the author in some other venue to get it answered.

I also didn't want teams to have to keep anything up to date during
the cycle, because I didn't want this to be yet another "status
report". Each goal needs at most 2 patches: one at the start of the
cycle to acknowledge and point to whatever other artifacts are being
used for tracking the work already, and then one at the end of the
cycle to indicate how much of the work was completed and what the
next steps are. We tied the process deadlines to existing deadlines
when we thought teams would already be thinking of these sorts of
topics (most teams have spec deadlines around milestone 1 and then
everyone has the same release date at the end of the cycle).

> 
> > > - Is it that in periods of resource constriction, having release-wide
> > > goals is just too ambitious ? (although anecdotal data shows that most
> > > projects have already completed their goals)
> > 
> > While this may certainly be a possibility, I don't think we should
> > give in to the temptation to blame too much on losing people.  OSC was
> > hit by this too, yet the loss of core and contributors did not affect
> > the goals not getting done, that falls squarely on the PTL in this
> > case.
> > 
> > > - Is it that the goals should be more clearly owned by the community
> > > beyond just the TC? (and therefore the goals should be maintained in a
> > > repository with simpler approval rules and a larger approval group)
> > 
> > I do think that at least the perception of the goals being community
> > things should be increased if we can.  We fall in to the problem of
> > the TC proposing something and getting pushback about projects being
> > forced to do more work, yet we hear so much about how the TC needs to
> > take more leadership in technical direction (see TC vision feedback
> > for the latest round of this).
> > 
> > I'm not sure that the actual repo is the issue, are we having problems
> > getting reviews to approve these?  I don't see this but I'm also not
> > tracking the time to takes for them to get approved.
> > 
> > I believe it is just going to have to be a social thing that we need
> > to continue to push forward.
> > 
> 
> What if we also require +1 from the "core six" projects on goal proposals?
> If we at least have buy in from those projects, then we can know that we
> should be able to get them as a minimum, with other projects more than
> likely to then follow suit.

Because we do not want to structure our governance in such a way that
some projects are more equal than others.

Everyone in the community has an opportunity to respond to the goals
through the review process. If we don't trust the TC to take those
responses into account, then we might as well drop the whole idea of
community goals.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-05-23 13:57:53 +:
> On 2017-05-23 05:40:05 -0500 (-0500), Dean Troyer wrote:
> > On Tue, May 23, 2017 at 4:59 AM, Thierry Carrez  
> > wrote:
> [...]
> > > - Is it that the reporting process is too heavy ? (requiring answers
> > > from projects that are obviously unaffected)
> > 
> > I've thought about this, OSC was unaffected by one of the goals but
> > not the other, so I can't really hide in this bucket.  It really is
> > not that hard to put up a review saying "not me".
> 
> While not at all an excuse, that was entirely what I chalk my lapse
> up to this time. I had already commented on the governance reviews
> that I had discussed the proposed goals with the rest of the Infra
> team and we'd come to the conclusion that they were either
> inapplicable or already met for us. It just escaped my memory that I
> needed to go back and reassert that again once the goals were
> officially approved.
> 
> Also, I still agree that it's hard to figure out which teams
> actually are affected without asking them, and that's this step of
> the process: confirmation/denial on record.

Right. The goals process is not about anyone telling anyone else what to
do. It's about communicating with each other about a few central
priorities. Part of that communication requires going through some hoops
even when they seem trivial or unnecessary based on what you know,
because the rest of us are not inside your head and don't automatically
have that knowledge. :-)

Doug

> 
> > > - Is it that people ignore the deadlines and missed the reminders ?
> > > (some unaffected project teams also do not do releases, and therefore
> > > ignore the release countdown emails)
> [...]
> 
> Not so much ignore but because so little of the content is directly
> applicable to Infra I read them in the context of things we should
> be on the lookout for other teams working on, so I'm not in the
> mindset of expecting to actually find an action item in there. This
> is just a matter of retraining myself on what to look for in those
> announcements in the future.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

2017-05-22 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-05-22 19:16:34 +:
> On 2017-05-22 12:31:49 -0600 (-0600), Alex Schultz wrote:
> > On Mon, May 22, 2017 at 10:34 AM, Jeremy Stanley  wrote:
> > > On 2017-05-22 09:06:26 -0600 (-0600), Alex Schultz wrote:
> > > [...]
> > >> We ran into this for the puppet-module-build check job so I created a
> > >> puppet-agent-install builder.  Perhaps the job needs that added to it
> > > [...]
> > >
> > > Problem here being these repos share the common tarball jobs used
> > > for generating python sdists, with a little custom logic baked into
> > > run-tarball.sh[*] for detecting and adjusting when the repo is for a
> > > Puppet module. I think this highlights the need to create custom
> > > tarball jobs for Puppet modules, preferably by abstracting this
> > > custom logic into a new JJB builder.
> > 
> > I assume you mean a problem if we added this builder to the job
> > and it fails for some reason thus impacting the python jobs?
> 
> My concern is more that it increases complexity by further embedding
> package selection and installation choices into that already complex
> script. We'd (Infra team) like to get more of the logic out of that
> random pile of shell scripts and directly into job definitions
> instead. For one thing, those scripts are only updated when we
> regenerate our nodepool images (at best once a day), which leads to
> significant job inconsistencies if we have image upload failures in
> some providers but not others. In contrast, job configurations are
> updated nearly instantly (and can even be self-tested in many cases
> once we're on Zuul v3).
> 
> > As far as adding the builder to the job, that's not really a
> > problem and wouldn't change those jobs, as they don't reference the
> > installed puppet executable.
> 
> It does risk further destabilizing the generic tarball jobs by
> introducing more outside dependencies which will only be used by a
> scant handful of the projects running them.
> 
> > The problem I have with putting this in the .sh is that it becomes
> > yet another place where we're doing this package installation (we
> > already do it in puppet openstack in
> > puppet-openstack-integration). I originally proposed the builder
> > because it could be reused if a job requires puppet be available.
> > i.e. this case. I'd rather not do what we do in the builder in a
> > shell script in the job and it seems like this is making it more
> > complicated than it needs to be when we have to manage this in the
> > long term.
> 
> Agreed, I'm saying a builder which installs an unnecessary Puppet
> toolchain for the generic tarball jobs is not something we'd want,
> but it would be pretty trivial to make puppet-specific tarball jobs
> which do use that builder (and has the added benefit that
> Puppet-specific logic can be moved _out_ of run-tarballs.sh and into
> your job configuration instead at that point).

That approach makes sense.

When the new job template is set up, let me know so I can add it to the
release repo validation as a known way to release things.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Doug Hellmann
Excerpts from Anne Gentle's message of 2017-05-22 12:36:29 -0500:
> > On May 22, 2017, at 9:09 AM, Doug Hellmann  wrote:
> >
> > Excerpts from Dmitry Tantsur's message of 2017-05-22 12:26:25 +0200:
> >>> On 05/22/2017 11:39 AM, Alexandra Settle wrote:
> >>> Hi everyone,
> >>>
> >>> The documentation team are rapidly losing key contributors and core 
> >>> reviewers.
> >>> We are not alone, this is happening across the board. It is making things
> >>> harder, but not impossible.
> >>>
> >>> Since our inception in 2010, we’ve been climbing higher and higher trying 
> >>> to
> >>> achieve the best documentation we could, and uphold our high standards. 
> >>> This is
> >>> something to be incredibly proud of.
> >>>
> >>> However, we now need to take a step back and realise that the amount of 
> >>> work we
> >>> are attempting to maintain is now out of reach for the team size that we 
> >>> have.
> >>> At the moment we have 13 cores, of whom none are full time contributors or
> >>> reviewers. This includes myself.
> >>>
> >>> Until this point, the documentation team has owned several manuals that 
> >>> include
> >>> content related to multiple projects, including an installation guide, 
> >>> admin
> >>> guide, configuration guide, networking guide, and security guide. Because 
> >>> the
> >>> team no longer has the resources to own that content, we want to invert 
> >>> the
> >>> relationship between the doc team and project teams, so that we become 
> >>> liaisons
> >>> to help with maintenance instead of asking for project teams to provide 
> >>> liaisons
> >>> to help with content. As a part of that change, we plan to move the 
> >>> existing
> >>> content out of the central manuals repository, into repositories owned by 
> >>> the
> >>> appropriate project teams. Project teams will then own the content and the
> >>> documentation team will assist by managing the build tools, helping with 
> >>> writing
> >>> guidelines and style, but not writing the bulk of the text.
> >>>
> >>> We currently have the infrastructure set up to empower project teams to 
> >>> manage
> >>> their own documentation in their own tree, and many do. As part of this 
> >>> change,
> >>> the rest of the existing content from the install guide and admin guide 
> >>> will
> >>> also move into project-owned repositories. We have a few options for how 
> >>> to
> >>> implement the move, and that's where we need feedback now.
> >>>
> >>> 1. We could combine all of the documentation builds, so that each project 
> >>> has a
> >>> single doc/source directory that includes developer, contributor, and user
> >>> documentation. This option would reduce the number of build jobs we have 
> >>> to run,
> >>> and cut down on the number of separate sphinx configurations in each 
> >>> repository.
> >>> It would completely change the way we publish the results, though, and we 
> >>> would
> >>> need to set up redirects from all of the existing locations to the new
> >>> locations and move all of the existing documentation under the new 
> >>> structure.
> >>>
> >>> 2. We could retain the existing trees for developer and API docs, and add 
> >>> a new
> >>> one for "user" documentation. The installation guide, configuration 
> >>> guide, and
> >>> admin guide would move here for all projects. Neutron's user 
> >>> documentation would
> >>> include the current networking guide as well. This option would add 1 new 
> >>> build
> >>> to each repository, but would allow us to easily roll out the change with 
> >>> less
> >>> disruption in the way the site is organized and published, so there would 
> >>> be
> >>> less work in the short term.
> >>>
> >>> 3. We could do option 2, but use a separate repository for the new 
> >>> user-oriented
> >>> documentation. This would allow project teams to delegate management of 
> >>> the
> >>> documentation to a separate review project-sub-team, but would complicate 
> >

Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-22 10:42:44 -0700:
> [snip]
> 
> So from the Kolla perspective, since our dev guide is really also an
> operators guide (we are an operators' tool, so we're kinda "special" on
> that front), we'd love to handle the deployment guide, user manuals,
> and all that in our tree. If we could create infrastructure that would
> allow us to segregate our content and manage it ourselves, I think
> that would be useful. Tell us how to help :)
> 
> Cheers,
> Michal
> 

The first step is to choose one of the options Alex proposed. From
there, we'll work out more detailed steps for achieving that.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2017-05-22 16:54:30 +0200:
> On 05/22/2017 04:09 PM, Doug Hellmann wrote:
> > Excerpts from Dmitry Tantsur's message of 2017-05-22 12:26:25 +0200:
> >> On 05/22/2017 11:39 AM, Alexandra Settle wrote:
> >>> Hi everyone,
> >>>
> >>> The documentation team are rapidly losing key contributors and core 
> >>> reviewers.
> >>> We are not alone, this is happening across the board. It is making things
> >>> harder, but not impossible.
> >>>
> >>> Since our inception in 2010, we’ve been climbing higher and higher trying 
> >>> to
> >>> achieve the best documentation we could, and uphold our high standards. 
> >>> This is
> >>> something to be incredibly proud of.
> >>>
> >>> However, we now need to take a step back and realise that the amount of 
> >>> work we
> >>> are attempting to maintain is now out of reach for the team size that we 
> >>> have.
> >>> At the moment we have 13 cores, of whom none are full time contributors or
> >>> reviewers. This includes myself.
> >>>
> >>> Until this point, the documentation team has owned several manuals that 
> >>> include
> >>> content related to multiple projects, including an installation guide, 
> >>> admin
> >>> guide, configuration guide, networking guide, and security guide. Because 
> >>> the
> >>> team no longer has the resources to own that content, we want to invert 
> >>> the
> >>> relationship between the doc team and project teams, so that we become 
> >>> liaisons
> >>> to help with maintenance instead of asking for project teams to provide 
> >>> liaisons
> >>> to help with content. As a part of that change, we plan to move the 
> >>> existing
> >>> content out of the central manuals repository, into repositories owned by 
> >>> the
> >>> appropriate project teams. Project teams will then own the content and the
> >>> documentation team will assist by managing the build tools, helping with 
> >>> writing
> >>> guidelines and style, but not writing the bulk of the text.
> >>>
> >>> We currently have the infrastructure set up to empower project teams to 
> >>> manage
> >>> their own documentation in their own tree, and many do. As part of this 
> >>> change,
> >>> the rest of the existing content from the install guide and admin guide 
> >>> will
> >>> also move into project-owned repositories. We have a few options for how 
> >>> to
> >>> implement the move, and that's where we need feedback now.
> >>>
> >>> 1. We could combine all of the documentation builds, so that each project 
> >>> has a
> >>> single doc/source directory that includes developer, contributor, and user
> >>> documentation. This option would reduce the number of build jobs we have 
> >>> to run,
> >>> and cut down on the number of separate sphinx configurations in each 
> >>> repository.
> >>> It would completely change the way we publish the results, though, and we 
> >>> would
> >>> need to set up redirects from all of the existing locations to the new
> >>> locations and move all of the existing documentation under the new 
> >>> structure.
> >>>
> >>> 2. We could retain the existing trees for developer and API docs, and add 
> >>> a new
> >>> one for "user" documentation. The installation guide, configuration 
> >>> guide, and
> >>> admin guide would move here for all projects. Neutron's user 
> >>> documentation would
> >>> include the current networking guide as well. This option would add 1 new 
> >>> build
> >>> to each repository, but would allow us to easily roll out the change with 
> >>> less
> >>> disruption in the way the site is organized and published, so there would 
> >>> be
> >>> less work in the short term.
> >>>
> >>> 3. We could do option 2, but use a separate repository for the new 
> >>> user-oriented
> >>> documentation. This would allow project teams to delegate management of 
> >>> the
> >>> documentation to a separate review project-sub-team, but would complicate 
> >>> the
> 

[openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

2017-05-22 Thread Doug Hellmann
Excerpts from jenkins's message of 2017-05-22 10:49:09 +:
> Build failed.
> 
> - puppet-nova-tarball 
> http://logs.openstack.org/89/89c58e7958b448364cb0290c1879116f49749a68/release/puppet-nova-tarball/fe9daf7/
>  : FAILURE in 55s
> - puppet-nova-tarball-signing puppet-nova-tarball-signing : SKIPPED
> - puppet-nova-announce-release puppet-nova-announce-release : SKIPPED
> 

The most recent puppet-nova release (newton 9.5.1) failed because
puppet isn't installed on the tarball building node. I know that
node configurations just changed recently to drop puppet, but I
don't know what needs to be done to fix the issue for this particular
job. It does seem to be running bindep, so maybe we just need to
include puppet there?  I could use some advice & help.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Doug Hellmann
Excerpts from Anne Gentle's message of 2017-05-22 08:08:40 -0500:
> On Mon, May 22, 2017 at 4:39 AM, Alexandra Settle 
> wrote:
> 
> > Hi everyone,
> >
> >
> >
> > The documentation team are rapidly losing key contributors and core
> > reviewers. We are not alone, this is happening across the board. It is
> > making things harder, but not impossible.
> >
> > Since our inception in 2010, we’ve been climbing higher and higher trying
> > to achieve the best documentation we could, and uphold our high standards.
> > This is something to be incredibly proud of.
> >
> >
> >
> > However, we now need to take a step back and realise that the amount of
> > work we are attempting to maintain is now out of reach for the team size
> > that we have. At the moment we have 13 cores, of whom none are full time
> > contributors or reviewers. This includes myself.
> >
> 
> One point I'd like to emphasize with this proposal, any way we go, is that
> we would prefer that the writing tasks not always fall on the devs, but
> that there can be dedicated writers or ops or end-users attending to info
> needs, it's just that they'll do the work in the repos.

I'm not sure we can assume that will be the case. If we have writers,
obviously we want their help here. But if we have no dedicated writers,
we need project teams to take more responsibility for the docs for what
they produce.

> Also, I'm working on a patch to try to quantify the best practices using
> our current data: https://review.openstack.org/#/c/461280/ We may discover
> some ways to work that mean gaining efficiencies and ensuring quality.
> Project teams should consider changes to reviewers and so on to try to be
> inclusive of the varied types of work in their repo.
> 
> I'll emphasize that we need to be extremely protective of the user space
> with this sort of move. No one who reads the docs ultimately cares about
> how they are put together. They just want to find what they need and get on
> with their lives.

For me, this is another point in favor of option 2, which involves
the least amount of disruption to existing publishing jobs (affecting
contributors) and locations (affecting consumers).  Once we transfer
ownership and have the builds working, we can discuss more significant
changes.

> > Until this point, the documentation team has owned several manuals that
> > include content related to multiple projects, including an installation
> > guide, admin guide, configuration guide, networking guide, and security
> > guide. Because the team no longer has the resources to own that content, we
> > want to invert the relationship between the doc team and project teams, so
> > that we become liaisons to help with maintenance instead of asking for
> > project teams to provide liaisons to help with content. As a part of that
> > change, we plan to move the existing content out of the central manuals
> > repository, into repositories owned by the appropriate project teams.
> > Project teams will then own the content and the documentation team will
> > assist by managing the build tools, helping with writing guidelines
> > and style, but not writing the bulk of the text.
> >
> >
> >
> > We currently have the infrastructure set up to empower project teams to
> > manage their own documentation in their own tree, and many do. As part of
> > this change, the rest of the existing content from the install guide and
> > admin guide will also move into project-owned repositories. We have a few
> > options for how to implement the move, and that's where we need feedback
> > now.
> >
> >
> >
> > 1. We could combine all of the documentation builds, so that each project
> > has a single doc/source directory that includes developer, contributor, and
> > user documentation. This option would reduce the number of build jobs we
> > have to run, and cut down on the number of separate sphinx configurations
> > in each repository. It would completely change the way we publish the
> > results, though, and we would need to set up redirects from all of the
> > existing locations to the new locations and move all of the existing
> > documentation under the new structure.
> >
> 
> I'd love to try this one. I know this is what John Dickinson has tried for
> the swift project with https://review.openstack.org/#/c/386834/ but since
> it didn't match anyone else, and I haven't heard back yet about the user
> experience, we didn't pursue much.
> 
> I'll still be pretty adamant about the user experience, so that the project
> name does not spill over into the user space. Redirects will be crucial as
> someone pointed out in one of the recent etherpads. Also, it may require
> not publishing api-ref info to developer.openstack.org (in other words, one
> job means one target for publication right now).
> 
> >
> >
> > 2. We could retain the existing trees for developer and API docs, and add
> > a new one for "user" documentation. The installation guide, configuration
> > guide, and admin guide would mo

Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2017-05-22 12:26:25 +0200:
> On 05/22/2017 11:39 AM, Alexandra Settle wrote:
> > Hi everyone,
> > 
> > The documentation team are rapidly losing key contributors and core 
> > reviewers. 
> > We are not alone, this is happening across the board. It is making things 
> > harder, but not impossible.
> > 
> > Since our inception in 2010, we’ve been climbing higher and higher trying 
> > to 
> > achieve the best documentation we could, and uphold our high standards. 
> > This is 
> > something to be incredibly proud of.
> > 
> > However, we now need to take a step back and realise that the amount of 
> > work we 
> > are attempting to maintain is now out of reach for the team size that we 
> > have. 
> > At the moment we have 13 cores, of whom none are full time contributors or 
> > reviewers. This includes myself.
> > 
> > Until this point, the documentation team has owned several manuals that 
> > include 
> > content related to multiple projects, including an installation guide, 
> > admin 
> > guide, configuration guide, networking guide, and security guide. Because 
> > the 
> > team no longer has the resources to own that content, we want to invert the 
> > relationship between the doc team and project teams, so that we become 
> > liaisons 
> > to help with maintenance instead of asking for project teams to provide 
> > liaisons 
> > to help with content. As a part of that change, we plan to move the 
> > existing 
> > content out of the central manuals repository, into repositories owned by 
> > the 
> > appropriate project teams. Project teams will then own the content and the 
> > documentation team will assist by managing the build tools, helping with 
> > writing 
> > guidelines and style, but not writing the bulk of the text.
> > 
> > We currently have the infrastructure set up to empower project teams to 
> > manage 
> > their own documentation in their own tree, and many do. As part of this 
> > change, 
> > the rest of the existing content from the install guide and admin guide 
> > will 
> > also move into project-owned repositories. We have a few options for how to 
> > implement the move, and that's where we need feedback now.
> > 
> > 1. We could combine all of the documentation builds, so that each project 
> > has a 
> > single doc/source directory that includes developer, contributor, and user 
> > documentation. This option would reduce the number of build jobs we have to 
> > run, 
> > and cut down on the number of separate sphinx configurations in each 
> > repository. 
> > It would completely change the way we publish the results, though, and we 
> > would 
> > need to set up redirects from all of the existing locations to the new 
> > locations and move all of the existing documentation under the new 
> > structure.
> > 
> > 2. We could retain the existing trees for developer and API docs, and add a 
> > new 
> > one for "user" documentation. The installation guide, configuration guide, 
> > and 
> > admin guide would move here for all projects. Neutron's user documentation 
> > would 
> > include the current networking guide as well. This option would add 1 new 
> > build 
> > to each repository, but would allow us to easily roll out the change with 
> > less 
> > disruption in the way the site is organized and published, so there would 
> > be 
> > less work in the short term.
> > 
> > 3. We could do option 2, but use a separate repository for the new 
> > user-oriented 
> > documentation. This would allow project teams to delegate management of the 
> > documentation to a separate review project-sub-team, but would complicate 
> > the 
> > process of landing code and documentation updates together so that the docs 
> > are 
> > always up to date.
> > 
> > Personally, I think option 2 or 3 are more realistic, for now. It does mean 
> > that an extra build would have to be maintained, but it retains that key 
> > differentiator between what is user and developer documentation and 
> > involves 
> > fewer changes to existing published contents and build jobs. I definitely 
> > think 
> > option 1 is feasible, and would be happy to make it work if the community 
> > prefers this. We could also view option 1 as the longer-term goal, and 
> > option 2 
> > as an incremental step toward it (option 3 would make option 1 more 
> > complicated 
> > to achieve).
> > 
> > What does everyone think of the proposed options? Questions? Other thoughts?
> 
> We're already hosting install-guide and api-ref in our tree, and I'd prefer 
> we 
> don't change it, as it's going to be annoying (especially wrt backports). I'd 
> prefer we create a user-guide directory in projects, and move the user guide 
> there.

Handling backports with a merged guide is an issue that didn't come
up in our earlier discussions. How often do you backport doc
changes in practice? Do you foresee merge conflicts caused by issues
other than the files being renamed?

Doug

___

Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-19 Thread Doug Hellmann
Excerpts from Mehdi Abaakouk's message of 2017-05-19 10:23:09 +0200:
> On Thu, May 18, 2017 at 03:16:20PM -0400, Mike Bayer wrote:
> >
> >
> >On 05/18/2017 02:37 PM, Julien Danjou wrote:
> >>On Thu, May 18 2017, Mike Bayer wrote:
> >>
> >>>I'm not understanding this?  do you mean this?
> >>
> >>In the long run, yes. Unfortunately, we're not happy with the way Oslo
> >>libraries are managed and too OpenStack centric. I've tried for the last
> >>couple of years to move things on, but it's barely possible to deprecate
> >>anything and contribute, so I feel it's safer to start fresh and better
> >>alternative. Cotyledon by Mehdi is a good example of what can be
> >>achieved.
> >
> >
> >here's cotyledon:
> >
> >https://cotyledon.readthedocs.io/en/latest/
> >
> >
> >replaces oslo.service with a multiprocessing approach that doesn't use 
> >eventlet.  great!  any openstack service that rides on oslo.service 
> >would like to be able to transparently switch from eventlet to 
> >multiprocessing the same way they can more or less switch to mod_wsgi 
> >at the moment.   IMO this should be part of oslo.service itself.   
> 
> I briefly presented cotyledon a couple of summits ago; we said we would wait
> to see whether other projects want to get rid of eventlet before adopting
> such a new lib (or merging it with oslo.service).
> 
> But for now, the lib is still under the telemetry umbrella.
> 
> Keeping the current API and supporting both is (I think) impossible.
> The current API is too eventlet-centric, and some applications rely
> on implicit internal contracts/behaviors/assumptions.
> 
> Dealing with concurrency/thread/signal safety in a multithreading app or
> an eventlet app is already hard enough, so having a lib that deals with
> both is even harder. We already have oslo.messaging, which deals with
> 3 threading models, and that is just an unending story of race conditions.
> 
> Since a new API is needed, why not write a new lib? Anyway, when you
> get rid of eventlet you have so many things to change to ensure your
> performance will not drop; switching from oslo.service to cotyledon is
> really easy by comparison.
> 
> >Docs state: "oslo.service being impossible to fix and bringing an 
> >heavy dependency on eventlet, "  is there a discussion thread on that?
> 
> Not really, I just put some comments on reviews and discussed this on IRC,
> since nobody except Telemetry has expressed interest in (or tried) getting rid of eventlet.
> 
> For the record, we first got rid of eventlet in Telemetry and fixed a couple
> of performance issues due to using threading/processes instead of
> greenlets/greenthreads.
> 
> Then we ran into some weird issues due to the oslo.service internal
> implementation: processes not exiting properly, signals not received,
> deadlocks when signals are received, unkillable processes, the
> tooz/oslo.messaging heartbeat not scheduled correctly, workers not
> restarted when they die. Everything we expect from oslo.service
> stopped working correctly once we removed the line
> 'eventlet.monkey_patch()'.
> 
> For example, when oslo.service receives a signal, it can arrive on any
> thread; that thread is paused and the callback runs in that thread's
> context, but if the callback tries to talk to your code in that thread,
> the process locks up, because your code is paused. Python
> offers a tool to avoid that (signal.set_wakeup_fd), but oslo.service doesn't
> use it. I tried to run callbacks only on the main thread with
> set_wakeup_fd to avoid this kind of issue, but I failed. The whole
> oslo.service code is clearly not designed to be thread-safe/signal-safe.
> Well, it works for eventlet because you have only one real thread.
> 
> And this is just one example of a complicated thing I tried to fix
> before starting cotyledon.
>
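
For readers who have not hit this class of bug, here is a minimal, generic
sketch of the signal.set_wakeup_fd technique mentioned above. This is not the
oslo.service or cotyledon code, just an illustration of routing signals to a
main loop via a socket pair (the "received byte is the signal number" detail
assumes Python 3 behavior):

    import signal
    import socket

    # Only the write end must be non-blocking; the read end is consumed
    # by the main loop below.
    wakeup_r, wakeup_w = socket.socketpair()
    wakeup_w.setblocking(False)
    signal.set_wakeup_fd(wakeup_w.fileno())

    # Python-level handlers are still registered so the signals are not
    # ignored, but they do nothing; the real work happens in the main loop.
    for sig in (signal.SIGTERM, signal.SIGHUP):
        signal.signal(sig, lambda signum, frame: None)

    def handle_signal(signum):
        print("handling signal %d on the main thread" % signum)

    def main_loop():
        while True:
            # On Python 3 the bytes written to the wakeup fd are the
            # signal numbers, so iterating them yields ints.
            for signum in wakeup_r.recv(16):
                handle_signal(signum)

The point is that whichever thread the OS happens to pick for delivery, the
application only ever reacts to the signal from one well-defined place.
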
> >I'm finding it hard to believe that only a few years ago, everyone saw 
> >the wisdom of not re-implementing everything in their own projects and 
> >using a common layer like oslo, and already that whole situation is 
> >becoming forgotten - not just for consistency, but also when a bug is 
> >found, if fixed in oslo it gets fixed for everyone.
> 
> Because the internals of cotyledon and oslo.service are so different,
> having the code in oslo or not doesn't help with maintenance anymore.
> Cotyledon is a lib; code (and bugs :)) can already be shared between
> projects that don't want eventlet.

Yes, I remember discussing this some time ago and I agree that starting
a new library was the right approach. The changes needed to make
oslo.service work without eventlet are too big, and rather than have 2
separate implementations in the same library, a second library makes
sense.
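
For anyone who has not looked at it yet, here is a rough sketch of what a
cotyledon-based service looks like. The class and method names are taken from
my reading of the cotyledon documentation, so treat them as assumptions to
verify there rather than a definitive example:

    import time

    import cotyledon

    class Worker(cotyledon.Service):
        """One worker; cotyledon forks it as a plain OS process, no eventlet."""

        def __init__(self, worker_id):
            super(Worker, self).__init__(worker_id)
            self._shutdown = False

        def run(self):
            # The worker's main loop, running in its own process.
            while not self._shutdown:
                time.sleep(1)

        def terminate(self):
            # Called when the master asks the worker to stop (e.g. SIGTERM).
            self._shutdown = True

    if __name__ == '__main__':
        manager = cotyledon.ServiceManager()
        # Fork two workers; the manager restarts them if they die and
        # handles the process/signal lifecycle Mehdi describes above.
        manager.add(Worker, workers=2)
        manager.run()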

> >An increase in the scope of oslo is essential to dealing with the 
> >issue of "complexity" in openstack. 
> 
> Increasing the scope of oslo works only if the libs have maintainers, but
> most of them lack people today. Most oslo libs are in maintenance
> mode. But that's another subject.
> 
> > The state of openstack as dozens 
> >of individual software projects e

Re: [openstack-dev] [ptg] ptgbot: how to make "what's currently happening" emerge

2017-05-18 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-05-18 11:57:04 +0200:
> Hi again,
> 
> For the PTG events we have, by design, a pretty loose schedule. Each
> room is free to organize their agenda in whatever way they see fit, and
> take breaks whenever they need. This flexibility is key to keep our
> productivity at those events at a maximum. In Atlanta, most teams ended
> up dynamically building a loose agenda on a room etherpad.
> 
> This approach is optimized for team meetups and people who strongly
> identify with one team in particular. In Atlanta during the first two
> days, where a lot of vertical team contributors did not really know
> which room to go to, it was very difficult to get a feel of what is
> currently being discussed and where they could go. Looking into 20
> etherpads and trying to figure out what is currently being discussed is
> just not practical. In the feedback we received, the need to expose the
> schedule more visibly was the #1 request.
> 
> It is a thin line to walk on. We clearly don't want to publish a
> schedule in advance or be tied to pre-established timeboxes for every
> topic. We want it to be pretty fluid and natural, but we still need to
> somehow make "what's currently happening" (and "what will be discussed
> next") emerge globally.
> 
> One lightweight solution I've been working on is an IRC bot ("ptgbot")
> that would produce a static webpage. Room leaders would update it on
> #openstack-ptg using commands like:
> 
> #swift now discussing ring placement optimizations
> #swift next at 14:00 we plan to discuss better #keystone integration
> 
> and the bot would collect all those "now" and "next" items and publish a
> single (mobile-friendly) webpage, (which would also include
> ethercalc-scheduled things, if we keep any).
> 
> The IRC commands double as natural language announcements for those that
> are following activity on the IRC channel. Hashtags can be used to
> attract other teams attention. You can announce later discussions, but
> the commitment on exact timing is limited. Every "now" command would
> clear "next" entries, so that there wouldn't be any stale entries and
> the command interface would be kept dead simple (at the cost of a bit of
> repetition).
> 
> I have POC code for this bot already. Before I publish it (and start
> work to make infra support it), I just wanted to see if this is the
> right direction and if I should continue to work on it :) I feel like
> it's an incremental improvement that preserves the flexibility and
> self-scheduling while addressing the main visibility concern. If you
> have better ideas, please let me know !
> 

I would subscribe to that twitter feed, too.

Doug
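
To make the proposal a bit more concrete, here is a rough sketch of the
command handling Thierry describes. This is not the actual ptgbot code, and
the message format is assumed from his two examples:

    import re

    COMMAND = re.compile(r'^#(?P<room>\w+)\s+(?P<verb>now|next)\s+(?P<text>.+)$')

    # room -> {'now': current topic or None, 'next': list of announcements}
    state = {}

    def handle_message(line):
        match = COMMAND.match(line.strip())
        if not match:
            return
        entry = state.setdefault(match.group('room'), {'now': None, 'next': []})
        if match.group('verb') == 'now':
            # A new "now" replaces the topic and clears any stale "next" items.
            entry['now'] = match.group('text')
            entry['next'] = []
        else:
            entry['next'].append(match.group('text'))

    def render_page():
        # The static page the bot would publish for people to glance at.
        lines = []
        for room in sorted(state):
            entry = state[room]
            lines.append('== %s ==' % room)
            lines.append('now:  %s' % (entry['now'] or '(nothing announced)'))
            for item in entry['next']:
                lines.append('next: %s' % item)
        return '\n'.join(lines)

    handle_message('#swift now discussing ring placement optimizations')
    handle_message('#swift next at 14:00 we plan to discuss better #keystone integration')
    print(render_page())

The "now" branch clearing the "next" list mirrors the behavior described in
the proposal, so stale entries disappear as soon as a room moves on.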

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-18 Thread Doug Hellmann
Excerpts from Adrian Turjak's message of 2017-05-18 13:34:56 +1200:

> Fully agree that expecting users of a particular cloud to understand how
> the policy stuff works is pointless, but it does fall on the cloud
> provider to educate and document their roles and the permissions of
> those roles. I think step 1 plus some basic role permissions for the

Doesn't basing the API key permissions directly on roles also imply that
the cloud provider has to anticipate all of the possible ways API keys
might be used so they can then set up those roles?

> Keys with the expectation of operators to document their roles/policy is
> a safe enough place to start, and for us to document and set some
> sensible default roles and policy. I don't think we currently have good

This seems like an area where we want to encourage interoperability.
Policy doesn't do that today, because deployers can use arbitrary
names for roles and set permissions in those roles in any way they
want. That's fine for human users, but doesn't work for enabling
automation. If the sets of roles and permissions are different in
every cloud, how would anyone write a key allocation script that
could provision a key for their application on more than one cloud?

Doug
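
To illustrate the interoperability point, here is a purely hypothetical
sketch; the cloud names and role names are invented, which is exactly the
problem: a key-provisioning script would have to carry a per-cloud table like
this because nothing standard exists.

    # Hypothetical per-cloud role names needed by one and the same application.
    ROLES_FOR_APP = {
        'cloud-a': ['compute-read', 'object-store-write'],
        'cloud-b': ['member'],               # one coarse role, no granularity
        'cloud-c': ['nova:ro', 'swift:rw'],
    }

    def roles_for(cloud_name):
        """Pick the role names to request when provisioning an API key."""
        try:
            return ROLES_FOR_APP[cloud_name]
        except KeyError:
            # An unknown cloud: there is no way to guess which of its
            # arbitrarily named roles grants the permissions the app needs.
            raise RuntimeError('unknown cloud %r, cannot pick roles' % cloud_name)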

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-17 11:36:40 -0700:
> On 17 May 2017 at 11:04, Doug Hellmann  wrote:
>
> > You've presented some positive scenarios. Here's a worst case
> > situation that I'm worried about.
> >
> > Suppose in a few months the top several companies contributing to
> > kolla decide to pull out of or reduce their contributions to
> > OpenStack.  IBM, Intel, Oracle, and Cisco either lay folks off or
> > redirect their efforts to other projects.  Maybe they start
> > contributing directly to kubernetes. The kolla team is hit badly,
> > and all of the people from that team who know how the container
> > publishing jobs work are gone.
> 
> There are only two ways to defend against that: one is a diverse community,
> which we have. If Intel, Red Hat, Oracle, Cisco, and IBM back out of
> OpenStack, we'd still have almost 50% of contributors. I think we're
> much more likely to survive than most other Big Tent projects. In
> fact, I'd think that, with our current diversity, we'll survive for as
> long as OpenStack survives.
> 
> Also, all the more reason why *we shouldn't build images personally*;
> we should have an autonomous process to do it for us.
> 
> > The day after everyone says goodbye, the build breaks. Maybe a bad
> > patch lands, or maybe some upstream assumption changes. The issue
> > isn't with the infra jobs themselves. The break means no new container
> > images are being published. Since there's not much of a kolla team
> > any more, it looks like it will be a while before anyone has time
> > to figure out how to fix the problem.
> 
> > Later that same day, a new zero-day exploit is announced in a
> > component included in all or most of those images. Something that
> > isn't developed in the community, such as OpenSSL or glibc. The
> > exploit allows a complete breach of any app running with it. All
> > existing published containers include the bad bits and need to be
> > updated.
> 
> I guess this is a problem with all software ever written. If the community
> around it dies, the people who use it are in lots of trouble. One way to
> make sure that won't happen is to get involved yourself so that you
> can fix what is broken for you. This is how open source works. In
> Kolla, most of our contributors are actually operators who run these
> very containers in their own infrastructure. This is where our
> diversity comes from. We aren't a distro, and that makes us, and our
> users, more protected from this scenario.
> 
> If nova loses all of its community, and someone finds a critical bug in
> nova that allows hackers to gain access to VM data, there will be
> nobody to fix it; that's bad, right? The same argument can be made, yet we
> aren't discussing deleting Nova, are we?

I think there's a difference there, because of the way nova and the
other components currently have an intermediary doing the distribution.

> > Contrast that with a scenario in which consumers either take
> > responsibility for their systems by building their own images, by
> > collaborating directly with other consumers to share the resources
> > needed to build those images, or by paying a third-party a sustainable
> > amount of money to build images for them. In any of those cases,
> > there is an incentive for the responsible party to be ready and
> > able to produce new images in a timely manner. Consumers of the
> > images know exactly where to go for support when they have problems.
> > Issues in those images don't reflect on the community in any way,
> > because we were not involved in producing them.
> 
> Unless, as you said, the build system breaks; then they are equally screwed
> locally. Unless someone fixes it, and they can fix it for openstack
> infra too. The difference is, for OpenStack infra it's the whole community
> that can fix it, whereas locally it's just you. That's the strength of open
> source.

The difference is that it's definitely not the community's problem in
that case. I'm looking at this from a community perspective, and not the
deployer or operator.

> > As I said at the start of this thread, we've long avoided building
> > and supporting simple operating system style packages of the
> > components we produce. I am still struggling to understand how
> > building more complex artifacts, including bits over which we have
> > little or no control, is somehow more sustainable than those simple
> > packages.
> 
> Binaries are built as standalone projects. Nova-api has no
> dependencies built into the .rpm. If the issue you just described would happen
> 

Re: [openstack-dev] [tc][swg] Updates on the TC Vision for 2019

2017-05-17 Thread Doug Hellmann
Excerpts from Colette Alexander's message of 2017-05-17 14:29:07 -0400:
> Hi everyone!
> 
> Just wanted to send the community some updates on the vision front and also
> get a discussion with the members of the technical committee going here for
> next steps on what we're up to with the TC Vision for 2019 [0].
> 
> A couple things:
> 
> 1. We finished our feedback phase by closing the survey we had open, and
> posting all collected feedback in a document for TC (and community) review
> [1] and having a session at the Boston Forum where we presented the vision,
> and also took feedback from those present [2]. There is also some feedback
> in Gerrit for the review up of the first draft [0], so that's a few places
> we've covered for feedback.
> 
> 2. We're now entering the phase of work where we incorporate feedback into
> the next draft of the vision and finalize it for posterity/the governance
> repository/ourselves.
> 
> So! What should feedback incorporation look like? I think we need to have
> members of the TC pipe up a bit here to discuss timelines for this round and
> also who all will be involved. Some rough estimates we came up with were
> something over the course of the 2-3 weeks after the Forum that could then
> be +2d into governance by mid-June or so. I'm not sure if that's possible
> based on everyone's travel/work/vacation schedules for the summer, but it
> would be great. Thoughts on that?

The timeline depends on who signed up to do the next revision. Did
we get someone to do that, yet, or are we still looking for a
volunteer?  (Note that I am not volunteering here, just asking for
status.)

Doug

> 
> Also - I promised to attempt to come up with some general themes in
> feedback we've uncovered for suggestions for edits:
> 
>  - There is (understandably) a lot of feedback around what the nature of
> the vision is itself. This might best be cleared up with a quick prologue
> explaining why the vision reads the way it does, and what the intention was
> behind writing it this way.
>  - The writing about constellations, generally, got quite a bit of feedback
> (some folks wanted more explanation, others wanted it to be more succinct,
> so I think wading through the data to read everyone's input on this is
> valuable here)
>  - Ease of installation & upgrades are language that comes up multiple times
>  - Many people asked for a bullet-points version, and/or that it be edited
> & shortened a bit
>  - A few people said 2 years was not enough time and this read more like a
> 5 year vision
>  - A few people mentioned issues of OpenStack at scale and wondering
> whether that might be addressed in the vision
>  - There was some question of whether it was appropriate to vision for "x
> number of users with yz types of deployments" as a target number to hit in
> this vision, or whether it should be left for an OpenStack-wide community
> vision.
> 
> Some favorites:
>  - Lots of people loved the idea of constellations
>  - A lot of likes for better mentoring & the ladders program
>  - Some likes for diversity across the TC of the future
> 
> Okay - I think that's a pretty good summary. If anyone else has any
> feedback they'd like to make sure gets into the conversation, please feel
> free to reply in this thread.
> 
> Thanks!
> 
> -colette/gothicmindfood
> 
> 
> 
> [0] https://review.openstack.org/#/c/453262/
> [1]
> https://docs.google.com/spreadsheets/d/1YzHPP2EQh2DZWGTj_VbhwhtsDQebAgqldyi1MHm6QpE/edit?usp=sharing
> [2]
> https://www.openstack.org/videos/boston-2017/the-openstack-technical-committee-vision-for-2019-updates-stories-and-q-and-a

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-17 07:47:31 -0700:
> On 17 May 2017 at 04:14, Chris Dent  wrote:
> > On Wed, 17 May 2017, Thierry Carrez wrote:
> >
> >> Back to container image world, if we refresh those images daily and they
> >> are not versioned or archived (basically you can only use the latest and
> >> can't really access past dailies), I think we'd be in a similar situation
> >> ?
> >
> >
> > Yes, this.
> 
> I think it's not a bad idea to message "you are responsible for
> archiving your containers". Do that, combine it with a good toolset that
> helps users determine versions of packages and other metadata, and
> we'll end up with something that itself would be greatly appreciated.
> 
> Few potential user stories.
> 
> I have an OpenStack of <100 nodes and need every single one of them, hence
> no CI. At the same time I want to have fresh packages to avoid CVEs. I
> deploy kolla with tip-of-the-stable-branch and set up a cronjob that will
> upgrade it every week. Because my scenario is quite typical and the
> containers already ran through gates that test my scenario, I'm good.
> 
> Another one:
> 
> I have a 300+ node cloud, heavy CI, and a security team examining every
> container. While I could build containers locally, downloading them is
> just simpler and effectively the same (after all, it's the containers
> being tested, not the build process). On every download our security team
> scrutinizes the containers and uses the toolset Kolla provides to help them.
> An additional benefit is that on top of our CI these images went through
> Kolla CI, which is nice; more testing is always good.
> 
> And another one
> 
> We are the Kolla community. We want to provide testing for full release
> upgrades every day in the gates, to make sure OpenStack and Kolla are
> upgradable and to improve the general user experience of upgrades. Because
> infra is resource constrained, we cannot afford to build 2 sets of
> containers (stable and master) and do deploy->test->upgrade->test.
> However, because we have these cached containers, which are fresh and
> passed CI for deploy, we can just use them! Now effectively we're not
> only testing the correctness of Kolla's upgrade procedure but also all the
> other project teams' upgrades! Oh, it seems Nova merged something that
> negatively affects upgrades, let's make sure they are aware!
> 
> And last one, which cannot be underestimated
> 
> I am the CTO of some company and I've heard OpenStack is no longer hard to
> deploy, so I'll just download kolla-ansible and try it. I'll follow this
> guide that deploys a simple OpenStack with 2 commands and a few small
> configs, and it's done! Super simple! We're moving to OpenStack and
> will start contributing tomorrow!
> 
> Please, let's solve the messaging problems, put the burden of archiving on
> users, whatever it takes to protect our community from wrong
> expectations, but not kill this effort. There are very real and
> immediate benefits to OpenStack as a whole if we do this.
> 
> Cheers,
> Michal

You've presented some positive scenarios. Here's a worst case
situation that I'm worried about.

Suppose in a few months the top several companies contributing to
kolla decide to pull out of or reduce their contributions to
OpenStack.  IBM, Intel, Oracle, and Cisco either lay folks off or
redirect their efforts to other projects.  Maybe they start
contributing directly to kubernetes. The kolla team is hit badly,
and all of the people from that team who know how the container
publishing jobs work are gone.

The day after everyone says goodbye, the build breaks. Maybe a bad
patch lands, or maybe some upstream assumption changes. The issue
isn't with the infra jobs themselves. The break means no new container
images are being published. Since there's not much of a kolla team
any more, it looks like it will be a while before anyone has time
to figure out how to fix the problem.

Later that same day, a new zero-day exploit is announced in a
component included in all or most of those images. Something that
isn't developed in the community, such as OpenSSL or glibc. The
exploit allows a complete breach of any app running with it. All
existing published containers include the bad bits and need to be
updated.

We now have an unknown number of clouds running containers built
by the community with major security holes. The team responsible
for maintaining those images is a shambles, but even if they weren't
the automation isn't working, so no new images can be published.
The consumers of the existing containers haven't bothered to set
up build pipelines of their own, because why bother? Even though
we've clearly said the images "we" publish are for our own testing,
they have found it irresistibly convenient to use them and move on
with their lives.

When the exploit is announced, they start clamoring for new container
images, and become understandably irate when we say we didn't think
they would be using them in production and they *shouldn't have*
and their problems are not our problems because we told them so.

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-05-17 12:19:22 +0200:
> Sean Dague wrote:
> > On 05/16/2017 02:39 PM, Doug Hellmann wrote:
> >> Excerpts from Michał Jastrzębski's message of 2017-05-16 09:51:00 -0700:
> >>> One thing I struggle with is...well...how does *not having* built
> >>> containers help with that? If your company has a full time security
> >>> team, they can check our containers prior to deployment. If your
> >>> company doesn't, then building locally will be subject to the same risks
> >>> as downloading from dockerhub. The difference is, dockerhub containers
> >>> were tested in our CI to the extent that our CI allows. No matter whether
> >>> or not you have your own security team, local CI, or staging env, that
> >>> will be just a little bit of testing on top of what you get for
> >>> free, and I think that's value enough for users to push for this.
> >>
> >> The benefit of not building images ourselves is that we are clearly
> >> communicating that the responsibility for maintaining the images
> >> falls on whoever *does* build them. That way, no user is left thinking
> >> that the community somehow needs to maintain the content
> >> of the images for them just because we're publishing new images
> >> at some regular cadence.
> > 
> > +1. It is really easy to think that saying "don't use this in
> > production" prevents people from using it in production. See: User
> > Survey 2017 and the number of folks reporting DevStack as their
> > production deployment tool.
> > 
> > We need to not only manage artifacts, but expectations. And with all the
> > confusion of projects in the openstack git namespace being officially
> > blessed openstack projects over the past few years, I can't imagine
> > people not thinking that openstack infra generated content in dockerhub
> > is officially supported content.
> 
> I totally agree, although I think daily rebuilds / per-commit rebuilds,
> together with a properly named repository, might limit expectations
> enough to remove the "supported" part of your sentence.
> 
> As a parallel, we refresh per-commit a Nova master source code tarball
> (nova-master.tar.gz). If a vulnerability is introduced in master but was
> never "released" with a version number, we silently fix it in master (no
> OSSA advisory published). People tracking master are supposed to be
> continuously tracking master.
> 
> Back to container image world, if we refresh those images daily and they
> are not versioned or archived (basically you can only use the latest and
> can't really access past dailies), I think we'd be in a similar situation ?
> 

The source tarballs are not production deployment tools and only
contain code for one project at a time and it is all our code, so
we don't have to track issues in any other components. The same
differences apply to the artifacts we publish to PyPI and NPM. So
it's similar, but different.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-05-17 12:14:40 +0100:
> On Wed, 17 May 2017, Thierry Carrez wrote:
> 
> > Back to container image world, if we refresh those images daily and they
> > are not versioned or archived (basically you can only use the latest and
> > can't really access past dailies), I think we'd be in a similar situation ?
> 
> Yes, this.
> 

Is that how container publishing works? Can we overwrite an existing
archive, so that there is only ever 1 version of a published container
at any given time?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 11:38:19 -0700:
> On 16 May 2017 at 11:27, Doug Hellmann  wrote:
> > Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
> >> So another consideration. Do you think the whole rule of "not building
> >> binaries" should be reconsidered? We are kind of a new use case here. We
> >> aren't a distro but we are packagers (kind of). I don't think putting us
> >> on equal footing with Red Hat, Canonical or other companies is correct
> >> here.
> >>
> >> K8s is something we want to work with, and what we are discussing is
> >> central to how k8s is used. The K8s community creates this culture of
> >> "organic packages" built by anyone; most companies/projects already
> >> have semi-official container images and I think the expectations on
> >> quality of these are, well...none? You get what you're given and if you
> >> don't agree, there is always a way to reproduce this yourself.
> >>
> >> [Another huge snip]
> >>
> >
> > I wanted to have the discussion, but my position for now is that
> > we should continue as we have been and not change the policy.
> >
> > I don't have a problem with any individual or group of individuals
> > publishing their own organic packages. The issue I have is with
> > making sure it is clear those *are* "organic" and not officially
> > supported by the broader community. One way to do that is to say
> > they need to be built somewhere other than on our shared infrastructure.
> > There may be other ways, though, so I'm looking for input on that.
> 
> What I was trying to say here is, current discussion aside, maybe we
> should revise this "not supported by the broader community" rule. They may
> very well be supported to a certain point. Support is not just yes or
> no, it's all the levels in between. I think we can afford *some* level
> of official support, even if that level means best effort made by the
> community. If the Kolla community, not an individual like myself, would
> like to support these images to the best of our ability, why aren't we
> allowed? As long as we are crystal clear about the scope of our support,
> why can't we do it? I think we've already proven that it's going to be
> tremendously useful for a lot of people, even in the shape we discuss
> today, that is "best effort, you still need to validate it for
> yourself"...

Right, I understood that. So far I haven't heard anything to change
my mind, though.

I think you're underestimating the amount of risk you're taking on
for yourselves and by extension the rest of the community, and
introducing to potential consumers of the images, by promising to
support production deployments with a small team of people without
the economic structure in place to sustain the work.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 09:51:00 -0700:
> On 16 May 2017 at 09:40, Clint Byrum  wrote:
> >
> > What's at stake isn't so much "how do we get the bits to the users" but
> > "how do we only get bits to users that they need". If you build and push
> > daily, do you expect all of your users to also _pull_ daily? Redeploy
> > all their containers? How do you detect that there's new CVE-fixing
> > stuff in a daily build?
> >
> > This is really the realm of distributors that have full-time security
> > teams tracking issues and providing support to paying customers.
> >
> > So I think this is a fine idea, however, it needs to include a commitment
> > for a full-time paid security team who weighs in on every change to
> > the manifest. Otherwise we're just lobbing time bombs into our users'
> > data-centers.
> 
> One thing I struggle with is...well...how does *not having* built
> containers help with that? If your company has a full time security
> team, they can check our containers prior to deployment. If your
> company doesn't, then building locally will be subject to the same risks
> as downloading from dockerhub. The difference is, dockerhub containers
> were tested in our CI to the extent that our CI allows. No matter whether
> or not you have your own security team, local CI, or staging env, that
> will be just a little bit of testing on top of what you get for
> free, and I think that's value enough for users to push for this.

The benefit of not building images ourselves is that we are clearly
communicating that the responsibility for maintaining the images
falls on whoever *does* build them. That way, no user is left thinking
that the community somehow needs to maintain the content
of the images for them just because we're publishing new images
at some regular cadence.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 08:20:17 -0700:
> On 16 May 2017 at 08:12, Doug Hellmann  wrote:
> > Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
> >> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
> >> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
> >> >>
> >> >> Flavio Percoco wrote:
> >> >>>
> >> >>> From a release perspective, as Doug mentioned, we've avoided releasing
> >> >>> projects
> >> >>> in any kind of built form. This was also one of the concerns I raised
> >> >>> when
> >> >>> working on the proposal to support other programming languages. The
> >> >>> problem of
> >> >>> releasing built images goes beyond the infrastructure requirements. 
> >> >>> It's
> >> >>> the
> >> >>> message and the guarantees implied with the built product itself that 
> >> >>> are
> >> >>> the
> >> >>> concern here. And I tend to agree with Doug that this might be a 
> >> >>> problem
> >> >>> for us
> >> >>> as a community. Unfortunately, putting your name, Michal, as contact
> >> >>> point is
> >> >>> not enough. Kolla is not the only project producing container images 
> >> >>> and
> >> >>> we need
> >> >>> to be consistent in the way we release these images.
> >> >>>
> >> >>> Nothing prevents people for building their own images and uploading 
> >> >>> them
> >> >>> to
> >> >>> dockerhub. Having this as part of the OpenStack's pipeline is a 
> >> >>> problem.
> >> >>
> >> >>
> >> >> I totally subscribe to the concerns around publishing binaries (under
> >> >> any form), and the expectations in terms of security maintenance that it
> >> >> would set on the publisher. At the same time, we need to have images
> >> >> available, for convenience and testing. So what is the best way to
> >> >> achieve that without setting strong security maintenance expectations
> >> >> for the OpenStack community ? We have several options:
> >> >>
> >> >> 1/ Have third-parties publish images
> >> >> It is the current situation. The issue is that the Kolla team (and
> >> >> likely others) would rather automate the process and use OpenStack
> >> >> infrastructure for it.
> >> >>
> >> >> 2/ Have third-parties publish images, but through OpenStack infra
> >> >> This would allow to automate the process, but it would be a bit weird to
> >> >> use common infra resources to publish in a private repo.
> >> >>
> >> >> 3/ Publish transient (per-commit or daily) images
> >> >> A "daily build" (especially if you replace it every day) would set
> >> >> relatively-limited expectations in terms of maintenance. It would end up
> >> >> picking up security updates in upstream layers, even if not immediately.
> >> >>
> >> >> 4/ Publish images and own them
> >> >> Staff release / VMT / stable team in a way that lets us properly own
> >> >> those images and publish them officially.
> >> >>
> >> >> Personally I think (4) is not realistic. I think we could make (3) work,
> >> >> and I prefer it to (2). If all else fails, we should keep (1).
> >> >
> >> >
> >> > Agreed #4 is a bit unrealistic.
> >> >
> >> > Not sure I understand the difference between #2 and #3. Is it just the
> >> > cadence?
> >> >
> >> > I'd prefer for these builds to have a daily cadence because it sets the
> >> > expectations w.r.t maintenance right: "These images are daily builds and 
> >> > not
> >> > certified releases. For stable builds you're better off building it
> >> > yourself"
> >>
> >> And daily builds are exactly what I wanted in the first place:) We
> >> probably will keep publishing release packages too, but we can be so
> >> called 3rd party. I also agree [4] is completely unrealistic and I
> >> would be against putting such heavy burden of responsibility on any
> >> community, including Kolla.

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-05-16 17:41:28 +:
> On 2017-05-16 11:17:31 -0400 (-0400), Doug Hellmann wrote:
> > Excerpts from Sam Yaple's message of 2017-05-16 14:11:18 +:
> [...]
> > > If you build images properly in infra, then you will have an image that is
> > > not security checked (no gpg verification of packages) and completely
> > > unverifiable. These are absolutely not images we want to push to
> > > DockerHub/quay for obvious reasons. Security and verification being chief
> > > among them. They are absolutely not images that should ever be run in
> > > production and are only suited for testing. These are the only types of
> > > images that can come out of infra.
> > 
> > This sounds like an implementation detail of option 3? I think not
> > signing the images does help indicate that they're not meant to be used
> > in production environments.
> [...]
> 
> I'm pretty sure Sam wasn't talking about whether or not the images
> which get built are signed, but whether or not the package manager
> used when building the images vets the distro packages it retrieves
> (the Ubuntu package mirror we maintain in our CI doesn't have
> "secure APT" signatures available for its indices so we disable that
> security measure by default in the CI system to allow us to use
> those mirrors). Point being, if images are built in the upstream CI
> with packages from our Ubuntu package mirror then they are (at least
> at present) not suitable for production use from a security
> perspective for this particular reason even in absence of the other
> concerns expressed.

Thanks for clarifying; that makes more sense.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
> So another consideration. Do you think the whole rule of "not building
> binaries" should be reconsidered? We are kind of a new use case here. We
> aren't a distro but we are packagers (kind of). I don't think putting us
> on equal footing with Red Hat, Canonical or other companies is correct
> here.
> 
> K8s is something we want to work with, and what we are discussing is
> central to how k8s is used. The K8s community creates this culture of
> "organic packages" built by anyone; most companies/projects already
> have semi-official container images and I think the expectations on
> quality of these are, well...none? You get what you're given and if you
> don't agree, there is always a way to reproduce this yourself.
> 
> [Another huge snip]
> 

I wanted to have the discussion, but my position for now is that
we should continue as we have been and not change the policy.

I don't have a problem with any individual or group of individuals
publishing their own organic packages. The issue I have is with
making sure it is clear those *are* "organic" and not officially
supported by the broader community. One way to do that is to say
they need to be built somewhere other than on our shared infrastructure.
There may be other ways, though, so I'm looking for input on that.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][release] sphinx 1.6.1 behavior changes triggering job failures

2017-05-16 Thread Doug Hellmann
We now have 2 separate bugs related to changes in today's Sphinx 1.6.1
release causing our doc jobs to fail in different ways.

https://bugs.launchpad.net/pbr/+bug/1691129 describes a traceback
produced when building the developer documentation through pbr.

https://bugs.launchpad.net/reno/+bug/1691224 describes a change where
Sphinx now treats log messages at WARNING or ERROR level as reasons to
abort the build when strict mode is enabled.

I have a patch up to the global requirements list to block 1.6.1 for
builds following g-r and constraints:
https://review.openstack.org/#/c/465135/

Many of our doc builds do not use constraints, so if your doc build
fails you will want to apply the same change locally.
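
For example, a local pin might look something like the following
(hypothetical requirements line that mirrors the intent of the g-r
block, not necessarily its exact contents):

    sphinx!=1.6.1  # temporary exclusion until the 1.6.x issues are fixed

Once fixed pbr and reno releases are available, the local exclusion can
be dropped again.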

There's a patch in review for the reno issue. It would be great if
someone had time to look into a fix for pbr to make it work with
older and newer Sphinx.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2017-05-16 10:17:35 -0500:
> On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:
> > Folks,
> > 
> > See $TITLE :)
> > 
> > Thanks,
> > Dims
> > 
> 
> My preference would be to have an #openstack-tc channel.
> 
> One thing I liked about the dedicated meeting time was that if I was not able to
> attend, or when I was just a casual observer, it was easy to catch up on
> what was discussed because it was all in one place and did not have any
> non-TC conversations interlaced.
> 
> If we just use -dev, there is a high chance there will be a lot of cross-
> talk during discussions. There would also be a lot of effort to grep
> through the full day of activity to find things relevant to TC
> discussions. If we have a dedicated channel for this, it makes it very
> easy for anyone to know where to go to get a clean, easy to read capture
> of all relevant discussions. I think that will be important with the
> lack of a captured and summarized meeting to look at.
> 
> Sean
> 

I definitely understand this desire. I think, though, that any
significant conversations should be made discoverable via an email
thread summarizing them. That honors the spirit of moving our
"decision making" to asynchronous communication tools.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-05-16 15:16:08 +0100:
> On Tue, 16 May 2017, Monty Taylor wrote:
> 
> > FWIW - I'm un-crazy about the term API Key - but I'm gonna just roll with 
> > that until someone has a better idea. I'm uncrazy about it for two reasons:
> >
> > a) the word "key" implies things to people that may or may not be true 
> > here. 
> > If we do stick with it - we need some REALLY crisp language about what it 
> > is 
> > and what it isn't.
> >
> > b) Rackspace Public Cloud (and back in the day HP Public Cloud) have a 
> > thing 
> > called by this name. While what's written in the spec is quite similar in 
> > usage to that construct, I'm wary of re-using the name without the 
> > semantics 
> > actually being fully the same for risk of user confusion. "This uses 
> > api-key... which one?" Sean's email uses "APPKey" instead of "APIKey" - 
> > which 
> > may be a better term. Maybe just "ApplicationAuthorization"?
> 
> "api key" is a fairly common and generic term for "this magical
> thingie I can create to delegate my authority to some automation".
> It's also sometimes called "token", perhaps that's better (that's
> what GitHub uses, for example)? In either case the "api" bit is
> pretty important because it is the thing used to talk to the API.
> 
> I really hope we can avoid creating yet more special language for
> OpenStack. We've got an API. We want to send keys or tokens. Let's
> just call them that.
> 

+1

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Sam Yaple's message of 2017-05-16 14:11:18 +:
> I would like to bring up a subject that hasn't really been discussed in
> this thread yet, forgive me if I missed an email mentioning this.
> 
> What I personally would like to see is a publishing infrastructure to allow
> pushing built images to an internal infra mirror/repo/registry for
> consumption of internal infra jobs (deployment tools like kolla-ansible and
> openstack-ansible). The images built from infra mirrors with security
> turned off are perfect for testing internally to infra.
> 
> If you build images properly in infra, then you will have an image that is
> not security checked (no gpg verification of packages) and completely
> unverifiable. These are absolutely not images we want to push to
> DockerHub/quay for obvious reasons. Security and verification being chief
> among them. They are absolutely not images that should ever be run in
> production and are only suited for testing. These are the only types of
> images that can come out of infra.
> 
> Thanks,
> SamYaple

This sounds like an implementation detail of option 3? I think not
signing the images does help indicate that they're not meant to be used
in production environments.

Is some sort of self-hosted solution a reasonable compromise between
building images in test jobs (which I understand makes them take
extra time) and publishing images to public registries (which is
the thing I object to)?

If self-hosting is reasonable, then we can work out which tool to
use to do it as a second question.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
> >>
> >> Flavio Percoco wrote:
> >>>
> >>> From a release perspective, as Doug mentioned, we've avoided releasing
> >>> projects
> >>> in any kind of built form. This was also one of the concerns I raised
> >>> when
> >>> working on the proposal to support other programming languages. The
> >>> problem of
> >>> releasing built images goes beyond the infrastructure requirements. It's
> >>> the
> >>> message and the guarantees implied with the built product itself that are
> >>> the
> >>> concern here. And I tend to agree with Doug that this might be a problem
> >>> for us
> >>> as a community. Unfortunately, putting your name, Michal, as contact
> >>> point is
> >>> not enough. Kolla is not the only project producing container images and
> >>> we need
> >>> to be consistent in the way we release these images.
> >>>
> >>> Nothing prevents people for building their own images and uploading them
> >>> to
> >>> dockerhub. Having this as part of the OpenStack's pipeline is a problem.
> >>
> >>
> >> I totally subscribe to the concerns around publishing binaries (under
> >> any form), and the expectations in terms of security maintenance that it
> >> would set on the publisher. At the same time, we need to have images
> >> available, for convenience and testing. So what is the best way to
> >> achieve that without setting strong security maintenance expectations
> >> for the OpenStack community ? We have several options:
> >>
> >> 1/ Have third-parties publish images
> >> It is the current situation. The issue is that the Kolla team (and
> >> likely others) would rather automate the process and use OpenStack
> >> infrastructure for it.
> >>
> >> 2/ Have third-parties publish images, but through OpenStack infra
> >> This would allow to automate the process, but it would be a bit weird to
> >> use common infra resources to publish in a private repo.
> >>
> >> 3/ Publish transient (per-commit or daily) images
> >> A "daily build" (especially if you replace it every day) would set
> >> relatively-limited expectations in terms of maintenance. It would end up
> >> picking up security updates in upstream layers, even if not immediately.
> >>
> >> 4/ Publish images and own them
> >> Staff release / VMT / stable team in a way that lets us properly own
> >> those images and publish them officially.
> >>
> >> Personally I think (4) is not realistic. I think we could make (3) work,
> >> and I prefer it to (2). If all else fails, we should keep (1).
> >
> >
> > Agreed #4 is a bit unrealistic.
> >
> > Not sure I understand the difference between #2 and #3. Is it just the
> > cadence?
> >
> > I'd prefer for these builds to have a daily cadence because it sets the
> > expectations w.r.t maintenance right: "These images are daily builds and not
> > certified releases. For stable builds you're better off building it
> > yourself"
> 
> And daily builds are exactly what I wanted in the first place:) We
> probably will keep publishing release packages too, but we can be so
> called 3rd party. I also agree [4] is completely unrealistic and I
> would be against putting such heavy burden of responsibility on any
> community, including Kolla.
> 
> While a daily cadence will send the message that it's not stable, the truth
> will be that it will be more stable than what people would normally build
> locally (again, it passes more gates), but I'm totally fine with not
> saying that and letting people decide how they want to use it.
> 
> So, can we move on with implementation?

I don't want the images published to docker hub. Are they still useful
to you if they aren't published?

Doug

> 
> Thanks!
> Michal
> 
> >
> > Flavio
> >
> > --
> > @flaper87
> > Flavio Percoco
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2017-05-16 10:07:52 -0400:
> On 16/05/17 09:45 -0400, Doug Hellmann wrote:
> >Excerpts from Flavio Percoco's message of 2017-05-15 21:50:23 -0400:
> >> On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
> >> >On 15 May 2017 at 11:19, Davanum Srinivas  wrote:
> >> >> Sorry for the top post, Michal, Can you please clarify a couple of 
> >> >> things:
> >> >>
> >> >> 1) Can folks install just one or two services for their specific 
> >> >> scenario?
> >> >
> >> >Yes, that's more of a kolla-ansible feature and requires a little bit
> >> >of ansible know-how, but entirely possible. Kolla-k8s is built to
> >> >allow maximum flexibility in that space.
> >> >
> >> >> 2) Can the container images from kolla be run on bare docker daemon?
> >> >
> >> >Yes, but they need to either override our default CMD (kolla_start) or
> >> >provide ENVs required by it, not a huge deal
> >> >
> >> >> 3) Can someone take the kolla container images from say dockerhub and
> >> >> use it without the Kolla framework?
> >> >
> >> >Yes, there is no such thing as a kolla framework really. Our images
> >> >follow a stable ABI and they can be deployed by any deploy mechanism
> >> >that will follow it. We have several users who wrote their own deploy
> >> >mechanism from scratch.
> >> >
> >> >Containers are just blobs with binaries in them. Little things that we
> >> >add are the kolla_start script to allow our config file management and
> >> >some custom startup scripts for things like mariadb to help with
> >> >bootstrapping, both are entirely optional.
> >>
> >> Just as a bonus example, TripleO is currently using kolla images. They 
> >> used to
> >> be vanilla and they are not anymore but only because TripleO depends on 
> >> puppet
> >> being in the image, which has nothing to do with kolla.
> >>
> >> Flavio
> >>
> >
> >When you say "using kolla images," what do you mean? In upstream
> >CI tests? On contributors' dev/test systems? Production deployments?
> 
> All of them. Note that TripleO now builds its own "kolla images" (it uses the
> kolla Dockerfiles and kolla-build) because of the puppet dependency. When I
> said TripleO uses kolla images, it was intended to answer Dims' question on
> whether these images (or Dockerfiles) can be consumed by other projects.
> 
> Flavio
> 

Ah, OK. So TripleO is using the build instructions for kolla images, but
not the binary images being produced today?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-05-16 15:28:11 +0100:
> On Sun, 14 May 2017, Sean Dague wrote:
> 
> > So, the basic idea is, services will optionally take an inbound 
> > X-OpenStack-Request-ID which will be strongly validated to the format 
> > (req-$uuid). They will continue to always generate one as well. When the 
> > context is built (which is typically about 3 more steps down the paste 
> > pipeline), we'll check that the service user was involved, and if not, 
> > reset 
> > the request_id to the local generated one. We'll log both the global and 
> > local request ids. All of these changes happen in oslo.middleware, 
> > oslo.context, oslo.log, and most projects won't need anything to get this 
> > infrastructure.
> 
> I may not be understanding this paragraph, but this sounds like you
> are saying: accept a valid and authentic incoming request id, but
> only use it in ongoing requests if the service user was involved in
> those requests.
> 
> If that's correct, I'd suggest not doing that because it confuses
> traceability of a series of things. Instead, always use the request
> id if it is valid and authentic.
> 
> But maybe you mean "if the request id could not be proven authentic,
> don't use it"?
> 

The idea is that a regular user calling into a service should not
be able to set the request id, but outgoing calls from that service
to other services as part of the same request would.
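
As a rough illustration of that rule (hypothetical names only, not the
actual oslo.middleware/oslo.context code):

    import re
    import uuid

    # Strict req-$uuid format described in the proposal.
    _REQ_ID = re.compile(
        r'^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}'
        r'-[0-9a-f]{4}-[0-9a-f]{12}$')

    def _local_request_id():
        return 'req-%s' % uuid.uuid4()

    def choose_request_id(inbound_id, caller_is_service_user):
        # Honor the inbound header only when it matches the strict
        # format and the caller authenticated as a service user;
        # otherwise use the locally generated id for both values.
        local_id = _local_request_id()
        if (inbound_id and _REQ_ID.match(inbound_id)
                and caller_is_service_user):
            return inbound_id, local_id
        return local_id, local_id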

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-05-16 10:49:54 -0400:
> On 05/16/2017 09:38 AM, Davanum Srinivas wrote:
> > Folks,
> > 
> > See $TITLE :)
> > 
> > Thanks,
> > Dims
> 
> I'd rather avoid #openstack-tc and just use #openstack-dev.
> #openstack-dev is a pretty lightly used environment (compared to, say,
> #openstack-infra or #openstack-nova). I've personally been trying to
> make it my go-to way to hit up members of other teams instead
> of diving into project-specific channels, because typically it means we
> can get a broader conversation around the item in question.
> 
> Our fragmentation of shared understanding on many issues is definitely
> exacerbated by many project channels, and the assumption that people
> need to watch 20+ different channels, with different context, to stay up
> on things.
> 
> I would love us to have the problem that too many interesting topics are
> being discussed in #openstack-dev that we feel the need to parallelize
> them with a different channel. But I would say we should wait until
> that's actually a problem.
> 
> -Sean
> 

+1, let's start with just the -dev channel and see if volume becomes
an issue.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2017-05-15 21:50:23 -0400:
> On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
> >On 15 May 2017 at 11:19, Davanum Srinivas  wrote:
> >> Sorry for the top post, Michal, Can you please clarify a couple of things:
> >>
> >> 1) Can folks install just one or two services for their specific scenario?
> >
> >Yes, that's more of a kolla-ansible feature and requires a little bit
> >of ansible know-how, but entirely possible. Kolla-k8s is built to
> >allow maximum flexibility in that space.
> >
> >> 2) Can the container images from kolla be run on bare docker daemon?
> >
> >Yes, but they need to either override our default CMD (kolla_start) or
> >provide ENVs required by it, not a huge deal
> >
> >> 3) Can someone take the kolla container images from say dockerhub and
> >> use it without the Kolla framework?
> >
> >Yes, there is no such thing as a kolla framework really. Our images
> >follow a stable ABI and they can be deployed by any deploy mechanism
> >that will follow it. We have several users who wrote their own deploy
> >mechanism from scratch.
> >
> >Containers are just blobs with binaries in them. Little things that we
> >add are the kolla_start script to allow our config file management and
> >some custom startup scripts for things like mariadb to help with
> >bootstrapping, both are entirely optional.
> 
> Just as a bonus example, TripleO is currently using kolla images. They used to
> be vanilla and they are not anymore but only because TripleO depends on puppet
> being in the image, which has nothing to do with kolla.
> 
> Flavio
> 

When you say "using kolla images," what do you mean? In upstream
CI tests? On contributors' dev/test systems? Production deployments?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
This is one of those areas that was shared understanding for a long
time, and seems less "shared" now that we've grown and added new
projects to the community.  I intended to prepare a governance
resolution *after* having some public discussion, so that we can
restore that common understanding through documentation. I didn't
prepare the resolution as a first step, because if the consensus
is that we've changed our collective minds about whether publishing
binary artifacts is a good idea then the wording of the resolution
needs to reflect that.

Doug

Excerpts from Davanum Srinivas (dims)'s message of 2017-05-16 09:25:56 -0400:
> Steve,
> 
> We should not always ask "if this is a ruling from the TC", the
> default is that it's a discussion/exploration. If it is a "ruling", it
> won't be on a ML thread.
> 
> Thanks,
> Dims
> 
> On Tue, May 16, 2017 at 9:22 AM, Steven Dake (stdake)  
> wrote:
> > Dims,
> >
> > The [tc] was in the subject tag, and the message was represented as 
> > indicating some TC directive and has had several tc members comment on the 
> > thread.  I did nothing wrong.
> >
> > Regards
> > -steve
> >
> >
> > -Original Message-
> > From: Davanum Srinivas 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > Date: Tuesday, May 16, 2017 at 4:34 AM
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > Subject: Re: [openstack-dev] 
> > [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
> >  do we want to be publishing binary container images?
> >
> > Why drag TC into this discussion Steven? If the TC has something to
> > say, it will be in the form of a resolution with topic "formal-vote".
> > So please Stop!
> >
> > Thanks,
> > Dims
> >
> > On Tue, May 16, 2017 at 12:22 AM, Steven Dake (stdake) 
> >  wrote:
> > > Flavio,
> > >
> > > Forgive the top post – outlook ftw.
> > >
> > I understand the concerns raised in this thread.  It is unclear whether
> > this thread reflects the feeling of two TC members or whether enough TC members
> > care deeply about this issue to permanently limit OpenStack big tent projects'
> > ability to generate container images in various external artifact storage
> > systems.  The point of discussion I see effectively raised in this thread
> > is "OpenStack infra will not push images to dockerhub".
> > >
> > > I’d like clarification if this is a ruling from the TC, or simply an 
> > exploratory discussion.
> > >
> > > If it is exploratory, it is prudent that OpenStack projects not be 
> > blocked by debate on this issue until the TC has made such ruling as to 
> > banning the creation of container images via OpenStack infrastructure.
> > >
> > > Regards
> > > -steve
> > >
> > > -Original Message-
> > > From: Flavio Percoco 
> > > Reply-To: "OpenStack Development Mailing List (not for usage 
> > questions)" 
> > > Date: Monday, May 15, 2017 at 7:00 PM
> > > To: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > > Subject: Re: [openstack-dev] 
> > [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
> >  do we want to be publishing binary container images?
> > >
> > > On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:
> > > >On 15 May 2017 at 12:12, Doug Hellmann  
> > wrote:
> > >
> > > [huge snip]
> > >
> > > >>> > I'm raising the issue here to get some more input into how 
> > to
> > > >>> > proceed. Do other people think this concern is overblown? 
> > Can we
> > > >>> > mitigate the risk by communicating through metadata for the 
> > images?
> > > >>> > Should we stick to publishing build instructions 
> > (Dockerfiles, or
> > > >>> > whatever) instead of binary images? Are there other options 
> > I haven't
> > > >>> > mentioned?
> > > >>>
> > > >>> Today we do publish build instructions, that's what Kolla is. 
> > We also
> >>> publish built containers already, just we do it manually on release today.

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Luigi Toscano's message of 2017-05-16 11:50:53 +0200:
> On Monday, 15 May 2017 21:12:16 CEST Doug Hellmann wrote:
> > Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
> > 
> > > On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> > > > I'm raising the issue here to get some more input into how to
> > > > proceed. Do other people think this concern is overblown? Can we
> > > > mitigate the risk by communicating through metadata for the images?
> > > > Should we stick to publishing build instructions (Dockerfiles, or
> > > > whatever) instead of binary images? Are there other options I haven't
> > > > mentioned?
> > > 
> > > Today we do publish build instructions, that's what Kolla is. We also
> > > publish built containers already, just we do it manually on release
> > > today. If we decide to block it, I assume we should stop doing that
> > > too? That will hurt users who uses this piece of Kolla, and I'd hate
> > > to hurt our users:(
> > 
> > Well, that's the question. Today we have teams publishing those
> > images themselves, right? And the proposal is to have infra do it?
> > That change could be construed to imply that there is more of a
> > relationship with the images and the rest of the community (remember,
> > folks outside of the main community activities do not always make
> > the same distinctions we do about teams). So, before we go ahead
> > with that, I want to make sure that we all have a chance to discuss
> > the policy change and its implications.
> 
> Sorry for hijacking the thread, but we have a similar scenario for example in 
> Sahara. It is about full VM images containing Hadoop/Spark/other_big_data 
> stuff, and not containers, but it looks really the same.
> So far ready-made images have been published under 
> http://sahara-files.mirantis.com/images/upstream/, but we are looking to have 
> them hosted on 
> openstack.org, just like other artifacts. 
> 
> We asked about this few days ago on openstack-infra@, but no answer so far 
> (the Summit didn't help):
> 
> http://lists.openstack.org/pipermail/openstack-infra/2017-April/005312.html
> 
> I think that the answer to the question raised in this thread is definitely 
> going to be relevant for our use case.
> 
> Ciao

Thanks for raising this. I think the same concerns apply to VM images.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
> Flavio Percoco wrote:
> > From a release perspective, as Doug mentioned, we've avoided releasing 
> > projects
> > in any kind of built form. This was also one of the concerns I raised when
> > working on the proposal to support other programming languages. The problem 
> > of
> > releasing built images goes beyond the infrastructure requirements. It's the
> > message and the guarantees implied with the built product itself that are 
> > the
> > concern here. And I tend to agree with Doug that this might be a problem 
> > for us
> > as a community. Unfortunately, putting your name, Michal, as contact point 
> > is
> > not enough. Kolla is not the only project producing container images and we 
> > need
> > to be consistent in the way we release these images.
> > 
> > Nothing prevents people for building their own images and uploading them to
> > dockerhub. Having this as part of the OpenStack's pipeline is a problem.
> 
> I totally subscribe to the concerns around publishing binaries (under
> any form), and the expectations in terms of security maintenance that it
> would set on the publisher. At the same time, we need to have images
> available, for convenience and testing. So what is the best way to
> achieve that without setting strong security maintenance expectations
> for the OpenStack community ? We have several options:
> 
> 1/ Have third-parties publish images
> It is the current situation. The issue is that the Kolla team (and
> likely others) would rather automate the process and use OpenStack
> infrastructure for it.
> 
> 2/ Have third-parties publish images, but through OpenStack infra
> This would allow to automate the process, but it would be a bit weird to
> use common infra resources to publish in a private repo.
> 
> 3/ Publish transient (per-commit or daily) images
> A "daily build" (especially if you replace it every day) would set
> relatively-limited expectations in terms of maintenance. It would end up
> picking up security updates in upstream layers, even if not immediately.
> 
> 4/ Publish images and own them
> Staff release / VMT / stable team in a way that lets us properly own
> those images and publish them officially.
> 
> Personally I think (4) is not realistic. I think we could make (3) work,
> and I prefer it to (2). If all else fails, we should keep (1).
> 

At the forum we talked about putting test images on a "private"
repository hosted on openstack.org somewhere. I think that's option
3 from your list?

Paul may be able to shed more light on the details of the technology
(maybe it's just an Apache-served repo, rather than a full blown
instance of Docker's service, for example).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Doug Hellmann
Excerpts from Legacy, Allain's message of 2017-05-15 19:20:46 +:
> > -Original Message-
> > From: Doug Hellmann [mailto:d...@doughellmann.com]
> > Sent: Monday, May 15, 2017 2:55 PM
> <...>
> > 
> > Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +:
> > > import eventlet
> > > eventlet.monkey_patch
> > 
> > That's not calling monkey_patch -- there are no '()'. Is that a typo?
> 
> Yes, sorry, that was a typo when I put it into the email.  It did have () 
> at the end.
> 
> > 
> > lock() claims to work differently when monkey_patch() has been called.
> > Without doing the monkey patching, I would expect the thread to have to
> > explicitly yield control.
> > 
> > Did you see the problem you describe in production code, or just in this
> > sample program?
> 
> We see this in production code.   I included the example to boil this down to 
> a simple enough scenario to be understood in this forum without the 
> distraction of superfluous code. 
> 

OK. I think from the Oslo team's perspective, this is likely to be
considered a bug in the application. The concurrency library is not
aware that it is running in an eventlet thread, so it relies on the
application to call the monkey patching function to inject the right
sort of lock class.  If that was done in the wrong order, or not
at all, that would cause this issue.
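For illustration, the ordering the library expects looks roughly like the
sketch below. This is just a minimal example of mine (reusing the names from
the sample program), not code from the application or from oslo.concurrency:

import eventlet
# Patch the standard library *before* anything else imports threading
# primitives, so oslo.concurrency picks up eventlet-aware locks.
eventlet.monkey_patch()  # note the parentheses -- this actually runs it

from oslo_concurrency import lockutils

synchronized = lockutils.synchronized_with_prefix('foo')

@synchronized('bar')
def do_work():
    pass  # critical section runs under the injected lock class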

The next step is to look at which application had the problem, and under
what circumstances. Can you provide more detail there?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][logging] improvements to log debugging ready for review

2017-05-15 Thread Doug Hellmann
I have updated the Oslo spec for improving the logging debugging [1] and
the patch series that begins the implementation [2]. Please put these on
your review priority list.

Doug

[1] https://review.openstack.org/460112
[2] https://review.openstack.org/#/q/topic:improve-logging-debugging

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
> For starters, I want to emphasize that fresh set of dockerhub images
> was one of most requested features from Kolla on this summit and few
> other features more or less requires readily-available docker
> registry. Features like full release upgrade gates.
> 
> This will have numerous benefits for users that don't have the resources
> to put up a sophisticated CI/staging env, which, I'm willing to bet, is
> still quite a significant user base. If we do it correctly (and we will
> do it correctly), the images we're going to push will go through the
> series of gates which we have in Kolla (and will have more). So when you
> pull an image, you know that it was successfully deployed within the
> scenarios available in our gates; maybe we even add upgrade testing and
> increase scenario coverage later? That is a huge benefit for actual users.

I have no doubt that consumers of the images would like us to keep
creating them. We had lots of discussions last week about resource
constraints and sustainable practices, though, and this strikes me
as an area where we're deviating from our history in a way that
will require more maintenance work upstream.

> On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> > Last week at the Forum we had a couple of discussions about
> > collaboration between the various teams building or consuming
> > container images. One topic that came up was deciding how to publish
> > images from the various teams to docker hub or other container
> > registries. While the technical bits seem easy enough to work out,
> > there is still the question of precedence and whether it's a good
> > idea to do so at all.
> >
> > In the past, we have refrained from publishing binary packages in
> > other formats such as debs and RPMs. (We did publish debs way back
> > in the beginning, for testing IIRC, but switched away from them to
> > sdists to be more inclusive.) Since then, we have said it is the
> > responsibility of downstream consumers to build production packages,
> > either as distributors or as a deployer that is rolling their own.
> > We do package sdists for python libraries, push some JavaScript to
> > the NPM registries, and have tarballs of those and a bunch of other
> > artifacts that we build out of our release tools.  But none of those
> > is declared as "production ready," and so the community is not
> > sending the signal that we are responsible for maintaining them in
> > the context of production deployments, beyond continuing to produce
> > new releases when there are bugs.
> 
> So for us that would mean something really hacky and bad. We are a
> community-driven, not a company-driven, project. We don't have Red Hat or
> Canonical teams behind us (we have contributors, but that's
> different).

Although I work at Red Hat, I want to make sure it's clear that my
objection is purely related to community concerns. For this
conversation, I'm wearing my upstream TC and Release team hats.

> > Container images introduce some extra complexity, over the basic
> > operating system style packages mentioned above. Due to the way
> > they are constructed, they are likely to include content we don't
> > produce ourselves (either in the form of base layers or via including
> > build tools or other things needed when assembling the full image).
> > That extra content means there would need to be more tracking of
> > upstream issues (bugs, CVEs, etc.) to ensure the images are updated
> > as needed.
> 
> We can do this by building daily, which was the plan in fact. If we
> build every day you have at most 24-hour-old packages; CVEs and things
> like that in non-OpenStack packages are still handled by the distro
> maintainers.

A daily build job introduces new questions about how big the images
are and how many of them we keep, but let's focus on whether the
change in policy is something we want to adopt before we consider
those questions.

> > Given our security and stable team resources, I'm not entirely
> > comfortable with us publishing these images, and giving the appearance
> > that the community *as a whole* is committing to supporting them.
> > I don't have any objection to someone from the community publishing
> > them, as long as it is made clear who the actual owner is. I'm not
> > sure how easy it is to make that distinction if we publish them
> > through infra jobs, so that may mean some outside process. I also
> > don't think there would be any problem in building images on our
> > infrastructure for our own gate jobs, as long as they are just for
> > testing and we don't push those to any other registries.

Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-15 Thread Doug Hellmann
Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15 14:27:36 -0400:
> On Mon, May 15, 2017 at 2:08 PM, Ken Giusti  wrote:
> > Folks,
> >
> > It was decided at the oslo.messaging forum at summit that the pika
> > driver will be marked as deprecated [1] for removal.
> 
> [dims} +1 from me.

+1

> 
> >
> > The pika driver is another rabbitmq-based driver.  It was developed as
> > a replacement for the current rabbit driver (rabbit://).  The pika
> > driver is based on the 'pika' rabbitmq client library [2], rather than
> > the kombu library [3] of the current rabbitmq driver.  The pika
> > library was recommended by the rabbitmq community a couple of summits
> > ago as a better client than the kombu client.
> >
> > However, testing done against this driver did not show "appreciable
> > difference in performance or reliability" over the existing rabbitmq
> > driver.
> >
> > Given this, and the recent departure of some very talented
> > contributors, the consensus is to deprecate pika and recommend users
> > stay with the original rabbitmq driver.
> >
> > The plan is to mark the driver as deprecated in Pike, removal in Rocky.
> >
> > thanks,
> >
> >
> > [1] 
> > https://etherpad.openstack.org/p/BOS_Forum_Oslo.Messaging_driver_recommendations
> >   (~ line 80)
> > [2] https://github.com/pika/pika
> > [3] https://github.com/celery/kombu
> >
> > --
> > Ken Giusti  (kgiu...@gmail.com)
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Doug Hellmann
Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +:
> Can someone comment on whether the following scenario has been discussed
> before or whether this is viewed by the community as a bug?
> 
> While debugging a couple of different issues our investigation has led
> us down the path of needing to look at whether the oslo concurrency lock
> utilities are working properly or not.  What we found is that it is
> possible for a greenthread to continuously acquire a lock even though
> there are other threads queued up waiting for the lock.
> 
> For instance, a greenthread acquires a lock, does some work, releases
> the lock, and then needs to repeat this process over several iterations.
> While the first greenthread holds the lock other greenthreads come along and
> attempt to acquire the lock.  Those subsequent greenthreads are added to the
> waiters list and suspended.  The observed behavior is that as long as the
> first greenthread continues to run without ever yielding it will always
> re-acquire the lock even before any of the waiters.
> 
> To illustrate my point I have included a short program that shows the
> effect of multiple threads contending for a lock with and without
> voluntarily yielding.   The code follows, but the output from both
> sample runs are included here first.
> 
> In both examples the output is formatted as "worker=XXX: YYY" where XXX
> is the worker number, and YYY is the number of times the worker has been
> executed while holding the lock.
> 
> In the first example, notice that each worker gets to finish all of its
> tasks before any subsequent worker gets to run even once.
> 
> In the second example, notice that the workload is fair and each worker
> gets to hold the lock once before passing it on to the next in line.
> 
> Example1 (without voluntarily yielding):
> =
> worker=0: 1
> worker=0: 2
> worker=0: 3
> worker=0: 4
> worker=1: 1
> worker=1: 2
> worker=1: 3
> worker=1: 4
> worker=2: 1
> worker=2: 2
> worker=2: 3
> worker=2: 4
> worker=3: 1
> worker=3: 2
> worker=3: 3
> worker=3: 4
> 
> 
> 
> Example2 (with voluntarily yielding):
> =
> worker=0: 1
> worker=1: 1
> worker=2: 1
> worker=3: 1
> worker=0: 2
> worker=1: 2
> worker=2: 2
> worker=3: 2
> worker=0: 3
> worker=1: 3
> worker=2: 3
> worker=3: 3
> worker=0: 4
> worker=1: 4
> worker=2: 4
> worker=3: 4
> 
> 
> 
> Code:
> =
> import eventlet
> eventlet.monkey_patch

That's not calling monkey_patch -- there are no '()'. Is that a typo?

lock() claims to work differently when monkey_patch() has been
called. Without doing the monkey patching, I would expect the thread
to have to explicitly yield control.

Did you see the problem you describe in production code, or just in this
sample program?

Doug

> 
> from oslo_concurrency import lockutils
> 
> workers = {}
> 
> synchronized = lockutils.synchronized_with_prefix('foo')
> 
> @synchronized('bar')
> def do_work(index):
> global workers
> workers[index] = workers.get(index, 0) + 1
> print "worker=%s: %s" % (index, workers[index])
> 
> 
> def worker(index, nb_jobs, sleep):
> for x in xrange(0, nb_jobs):
> do_work(index)
> if sleep:
> eventlet.greenthread.sleep(0)  # yield
> return index
> 
> 
> # hold the lock before starting workers to make sure that all workers queue up 
> # on the lock before any of them actually get to run.
> @synchronized('bar')
> def start_work(pool, nb_workers=4, nb_jobs=4, sleep=False):
> for i in xrange(0, nb_workers):
> pool.spawn(worker, i, nb_jobs, sleep)
> 
> 
> print "Example1:  sleep=False"
> workers = {}
> pool = eventlet.greenpool.GreenPool()
> start_work(pool)
> pool.waitall()
> 
> 
> print "Example2:  sleep=True"
> workers = {}
> pool = eventlet.greenpool.GreenPool()
> start_work(pool, sleep=True)
> pool.waitall()
> 
> 
> 
> 
> Regards,
> Allain
> 
> 
> Allain Legacy, Software Developer, Wind River
> direct 613.270.2279  fax 613.492.7870 skype allain.legacy
> 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
> 
>  
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Doug Hellmann
Last week at the Forum we had a couple of discussions about
collaboration between the various teams building or consuming
container images. One topic that came up was deciding how to publish
images from the various teams to docker hub or other container
registries. While the technical bits seem easy enough to work out,
there is still the question of precedence and whether it's a good
idea to do so at all.

In the past, we have refrained from publishing binary packages in
other formats such as debs and RPMs. (We did publish debs way back
in the beginning, for testing IIRC, but switched away from them to
sdists to be more inclusive.) Since then, we have said it is the
responsibility of downstream consumers to build production packages,
either as distributors or as a deployer that is rolling their own.
We do package sdists for python libraries, push some JavaScript to
the NPM registries, and have tarballs of those and a bunch of other
artifacts that we build out of our release tools.  But none of those
is declared as "production ready," and so the community is not
sending the signal that we are responsible for maintaining them in
the context of production deployments, beyond continuing to produce
new releases when there are bugs.

Container images introduce some extra complexity, over the basic
operating system style packages mentioned above. Due to the way
they are constructed, they are likely to include content we don't
produce ourselves (either in the form of base layers or via including
build tools or other things needed when assembling the full image).
That extra content means there would need to be more tracking of
upstream issues (bugs, CVEs, etc.) to ensure the images are updated
as needed.

Given our security and stable team resources, I'm not entirely
comfortable with us publishing these images, and giving the appearance
that the community *as a whole* is committing to supporting them.
I don't have any objection to someone from the community publishing
them, as long as it is made clear who the actual owner is. I'm not
sure how easy it is to make that distinction if we publish them
through infra jobs, so that may mean some outside process. I also
don't think there would be any problem in building images on our
infrastructure for our own gate jobs, as long as they are just for
testing and we don't push those to any other registries.

I'm raising the issue here to get some more input into how to
proceed. Do other people think this concern is overblown? Can we
mitigate the risk by communicating through metadata for the images?
Should we stick to publishing build instructions (Dockerfiles, or
whatever) instead of binary images? Are there other options I haven't
mentioned?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron][nova][Openstack-operators][interop] Time for a bikeshed - help me name types of networking

2017-05-15 Thread Doug Hellmann
Excerpts from Jay Pipes's message of 2017-05-15 12:40:17 -0400:
> On 05/14/2017 01:02 PM, Monty Taylor wrote:
> > ** Bikeshed #1 **
> >
> > Are "internal" and "external" ok with folks as terms for those two ideas?
> 
> Yup, ++ from me on the above.
> 
> > ** Bikeshed #2 **
> >
> > Anybody have a problem with the key name "network-models"?
> 
> They're not network models. They're access/connectivity policies. :)
> 
> > (Incidentally, the idea from this is borrowed from GCE's
> > "compute#accessConfig" [0] - although they only have one model in their
> > enum: "ONE_TO_ONE_NAT")
> >
> > In a perfect future world where we have per-service capabilities
> > discovery I'd love for such information to be exposed directly by
> > neutron.
> 
> I actually don't see this as a Neutron thing. It's the *workload* 
> connectivity expectations that you're describing, not anything to do 
> with networks, subnets or ports.
> 
> So, I think actually Nova would be a better home for this capability 
> discovery, for similar reasons why get-me-a-network was mostly a Nova 
> user experience...
> 
> So, I suppose I'd prefer to call this thing an "access policy" or 
> "access model", optionally prefixing that with "network", i.e. "network 
> access policy".

We have enough things overloading the term "policy." Let's get out
a thesaurus for this one. ;-)
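Naming aside, the shape being proposed seems easy enough to sketch. Something
like the following, using the values from proposal #2 (the network names and
the exact nesting here are my own guesses, purely for illustration):

# Hypothetical per-cloud metadata describing what each network offers.
network_models = {
    'public': ['ipv4-external-nat', 'ipv6-direct'],
    'private': ['ipv4-internal-direct'],
}

def supports(models, network, wanted):
    # e.g. supports(network_models, 'public', 'ipv6-direct') -> True
    return wanted in models.get(network, [])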

Doug

> 
> > ** Bikeshed #3 **
> >
> > What do we call the general concepts represented by fixed and floating
> > ips? Do we use the words "fixed" and "floating"? Do we instead try
> > something else, such as "direct" and "nat"?
> >
> > I have two proposals for the values in our enum:
> >
> > #1 - using fixed / floating
> >
> > ipv4-external-fixed
> > ipv4-external-floating
> > ipv4-internal-fixed
> > ipv4-internal-floating
> > ipv6-fixed
> 
> Definitely -1 on using fixed/floating.
> 
> > #2 - using direct / nat
> >
> > ipv4-external-direct
> > ipv4-external-nat
> > ipv4-internal-direct
> > ipv4-internal-nat
> > ipv6-direct
> 
> I'm good with direct and nat. +1 from me.
> 
> > On the other hand, "direct" isn't exactly a commonly used word in this
> > context. I asked a ton of people at the Summit last week and nobody
> > could come up with a better term for "IP that is configured inside of
> > the server's network stack". "non-natted", "attached", "routed" and
> > "normal" were all suggested. I'm not sure any of those are super-great -
> > so I'm proposing "direct" - but please if you have a better suggestion
> > please make it.
> 
> The other problem with the term "direct" is that there is already a vNIC 
> type of the same name which refers to a guest's vNIC using a host 
> passthrough device.
> 
> So, maybe non-nat or no-nat would be better? Or hell, make it a boolean 
> is_nat or has_nat if we're really just referring to whether an IP is 
> NATted or not?
> 
> Best,
> -jay
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] [heat] [telemetry] - RFC cross project request id tracking

2017-05-15 Thread Doug Hellmann
Excerpts from Zane Bitter's message of 2017-05-15 11:43:07 -0400:
> On 15/05/17 10:35, Doug Hellmann wrote:
> > Excerpts from Sean Dague's message of 2017-05-15 10:01:20 -0400:
> >> On 05/15/2017 09:35 AM, Doug Hellmann wrote:
> >>> Excerpts from Sean Dague's message of 2017-05-14 07:04:03 -0400:
> >>>> One of the things that came up in a logging Forum session is how much
> >>>> effort operators are having to put into reconstructing flows for things
> >>>> like server boot when they go wrong, as every time we jump a service
> >>>> barrier the request-id is reset to something new. The back and forth
> >>>> between Nova / Neutron and Nova / Glance would be definitely well served
> >>>> by this. Especially if this is something that's easy to query in elastic
> >>>> search.
> >>>>
> >>>> The last time this came up, some people were concerned that trusting
> >>>> request-id on the wire was concerning to them because it's coming from
> >>>> random users. We're going to assume that's still a concern by some.
> >>>> However, since the last time that came up, we've introduced the concept
> >>>> of "service users", which are a set of higher priv services that we are
> >>>> using to wrap user requests between services so that long running
> >>>> request chains (like image snapshot) can complete. We trust these service users 
> >>>> enough to keep on trucking even after the user token has expired for 
> >>>> these long-running operations. We could use this same trust path for 
> >>>> request-id chaining.
> >>>>
> >>>> So, the basic idea is, services will optionally take an inbound
> >>>> X-OpenStack-Request-ID which will be strongly validated to the format
> >>>> (req-$uuid). They will continue to always generate one as well. When the
> >>>
> >>> Do all of our services use that format for request ID? I thought Heat
> >>> used something added on to a UUID, or at least longer than a UUID?
> 
> FWIW I don't recall ever hearing this.
> 
> - ZB

OK, maybe I'm mixing it up with some other field that we expected to be
a UUID and wasn't. Ignore me and proceed. :-)

Doug

> 
> >> Don't know, now is a good time to speak up.
> >> http://logs.openstack.org/85/464585/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2-ubuntu-xenial/e1bca9e/logs/screen-h-eng.txt.gz#_2017-05-15_10_08_10_617
> >> seems to indicate that it's using the format everyone else is using.
> >>
> >> Swift does things a bit differently with suffixes, but they aren't using
> >> the common middleware.
> >>
> >> I've done code look throughs on nova/glance/cinder/neutron/keystone, but
> >> beyond that folks will need to speak up as to where this might break
> >> down. At worst failing validation just means you end up in the old
> >> (current) behavior.
> >>
> >> -Sean
> >>
> >
> > OK. I vaguely remembered something from the early days of ceilometer
> > trying to collect those notifications, but maybe I'm confusing it with
> > something else. I've added [heat] to the subject line to get that team's
> > attention for input.
> >
> > Doug
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] [heat] [telemetry] - RFC cross project request id tracking

2017-05-15 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-05-15 10:01:20 -0400:
> On 05/15/2017 09:35 AM, Doug Hellmann wrote:
> > Excerpts from Sean Dague's message of 2017-05-14 07:04:03 -0400:
> >> One of the things that came up in a logging Forum session is how much 
> >> effort operators are having to put into reconstructing flows for things 
> >> like server boot when they go wrong, as every time we jump a service 
> >> barrier the request-id is reset to something new. The back and forth 
> >> between Nova / Neutron and Nova / Glance would be definitely well served 
> >> by this. Especially if this is something that's easy to query in elastic 
> >> search.
> >>
> >> The last time this came up, some people were concerned that trusting 
> >> request-id on the wire was concerning to them because it's coming from 
> >> random users. We're going to assume that's still a concern by some. 
> >> However, since the last time that came up, we've introduced the concept 
> >> of "service users", which are a set of higher priv services that we are 
> >> using to wrap user requests between services so that long running 
> >> request chains (like image snapshot) can complete. We trust these service users 
> >> enough to keep on trucking even after the user token has expired for 
> >> these long-running operations. We could use this same trust path for 
> >> request-id chaining.
> >>
> >> So, the basic idea is, services will optionally take an inbound 
> >> X-OpenStack-Request-ID which will be strongly validated to the format 
> >> (req-$uuid). They will continue to always generate one as well. When the 
> > 
> > Do all of our services use that format for request ID? I thought Heat
> > used something added on to a UUID, or at least longer than a UUID?
> 
> Don't know, now is a good time to speak up.
> http://logs.openstack.org/85/464585/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2-ubuntu-xenial/e1bca9e/logs/screen-h-eng.txt.gz#_2017-05-15_10_08_10_617
> seems to indicate that it's using the format everyone else is using.
> 
> Swift does things a bit differently with suffixes, but they aren't using
> the common middleware.
> 
> I've done code look throughs on nova/glance/cinder/neutron/keystone, but
> beyond that folks will need to speak up as to where this might break
> down. At worst failing validation just means you end up in the old
> (current) behavior.
> 
> -Sean
> 

OK. I vaguely remembered something from the early days of ceilometer
trying to collect those notifications, but maybe I'm confusing it with
something else. I've added [heat] to the subject line to get that team's
attention for input.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-15 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-05-14 07:04:03 -0400:
> One of the things that came up in a logging Forum session is how much 
> effort operators are having to put into reconstructing flows for things 
> like server boot when they go wrong, as every time we jump a service 
> barrier the request-id is reset to something new. The back and forth 
> between Nova / Neutron and Nova / Glance would be definitely well served 
> by this. Especially if this is something that's easy to query in elastic 
> search.
> 
> The last time this came up, some people were concerned that trusting 
> request-id on the wire was concerning to them because it's coming from 
> random users. We're going to assume that's still a concern by some. 
> However, since the last time that came up, we've introduced the concept 
> of "service users", which are a set of higher priv services that we are 
> using to wrap user requests between services so that long running 
> request chains (like image snapshot) can complete. We trust these service users 
> enough to keep on trucking even after the user token has expired for 
> these long-running operations. We could use this same trust path for 
> request-id chaining.
> 
> So, the basic idea is, services will optionally take an inbound 
> X-OpenStack-Request-ID which will be strongly validated to the format 
> (req-$uuid). They will continue to always generate one as well. When the 

Do all of our services use that format for request ID? I thought Heat
used something added on to a UUID, or at least longer than a UUID?
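For reference, my rough reading of "strongly validated to the format
(req-$uuid)" is a check along these lines -- just a sketch of the idea, not
the actual proposed implementation:

import re
import uuid

# "req-" followed by a canonical lowercase UUID.
_REQUEST_ID_RE = re.compile(
    r'^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')

def is_valid_global_request_id(value):
    return bool(_REQUEST_ID_RE.match(value))

# A locally generated id of the usual form passes:
assert is_valid_global_request_id('req-' + str(uuid.uuid4()))

Anything longer, like a UUID with extra data appended to it, would fail that
check and fall back to the locally generated id, which is why I'm asking.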

Doug

> context is built (which is typically about 3 more steps down the paste 
> pipeline), we'll check that the service user was involved, and if not, 
> reset the request_id to the local generated one. We'll log both the 
> global and local request ids. All of these changes happen in 
> oslo.middleware, oslo.context, oslo.log, and most projects won't need 
> anything to get this infrastructure.
> 
> The python clients, and callers, will then need to be augmented to pass 
> the request-id in on requests. Servers will effectively decide when they 
> want to opt into calling other services this way.
> 
> This only ends up logging the top line global request id as well as the 
> last leaf for each call. This does mean that full tree construction will 
> take more work if you are bouncing through 3 or more servers, but it's a 
> step which I think can be completed this cycle.
> 
> I've got some more detailed notes, but before going through the process 
> of putting this into an oslo spec I wanted more general feedback on it 
> so that any objections we didn't think about yet can be raised before 
> going through the detailed design.
> 
> -Sean
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][logging] oslo.log fluentd native logging

2017-05-10 Thread Doug Hellmann
Excerpts from Dan Prince's message of 2017-05-10 17:41:12 -0400:
> On Mon, 2017-04-24 at 07:47 -0400, Joe Talerico wrote:
> > Hey owls - I have been playing with oslo.log fluentd integration[1]
> > in
> > a poc commit here [2]. Enabling the native service logging is nice
> > and
> > tracebacks are no longer multiple inserts into elastic - there is a
> > "traceback" key which would contain the traceback if there was one.
> > 
> > The system-level / kernel level logging is still needed with the
> > fluent client on each Overcloud node.
> > 
> > I see Martin did the initial work [3] to integrate fluentd, is there
> > anyone looking at migrating the OpenStack services to using the
> > oslo.log facility?
> 
> Nobody is officially implementing this yet that I know of. But it does
> look promising.
> 
> The idea of using oslo.log's fluentd formatter could dovetail very
> nicely into our new containers (docker) servers for Pike in that it
> would allow us to log to stdout directly within the container... but
> still support the Fluentd logging interfaces that we have today.
> 
> The only downside would be that not all services in OpenStack support
> oslo.log (I don't think Swift does for example). Nor do some of the
> core services we deploy like Galera and RabbitMQ. So we'd have a mixed
> bag of host and stdout logging perhaps for some things or would need to
> integrate with Fluentd differently for services without oslo.log
> support.
> 
> Our current approach to containers logging in TripleO recently landed
> here and exposed the logs to a directory on the host specifically so
> that we could aim to support Fluentd integrations:
> 
> https://review.openstack.org/#/c/442603/
> 
> Perhaps we should revisit this in the (near) future to improve our
> containers deployments.

The Oslo team is also interested in talking to folks about making
it easier to enable some of the alternative output formatters such as
Fluentd and JSON. IIRC, right now to use them one must use a separate
logging configuration file, and we could add some config options to
avoid that.
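For anyone who hasn't touched that mechanism: the separate file is a stock
Python logging fileConfig that a service picks up via log_config_append. A
minimal sketch using the JSON formatter might look like the following (the
section and handler names are illustrative, not a recommended layout):

[loggers]
keys = root

[handlers]
keys = stdout

[formatters]
keys = json

[logger_root]
level = INFO
handlers = stdout

[handler_stdout]
class = StreamHandler
args = (sys.stdout,)
formatter = json

[formatter_json]
class = oslo_log.formatters.JSONFormatter

The Fluentd formatter would be wired up the same way, and the idea would be
to let a couple of plain config options replace that boilerplate.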

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-08 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-05-08 20:18:35 +:
> On 2017-05-08 11:24:00 -0600 (-0600), Octave J. Orgeron wrote:
> [...]
> > none of those products that those drivers are written for are open
> > sourced and they meet less resistance to committing code upstream.
> > So I have to call BS on your comment that the community can't work
> > with us because Solaris isn't open sourced.
> 
> Totally not what I said.
> 
> My point was that constantly reminding management of one of the
> primary sources of friction might help. Working with free software
> communities becomes easier when you don't outright reject their
> values by deciding to cancel your open version of the thing you want
> them to help you support.
> 
> > Now for Oracle, we definitely need more 3rd party CI to make it
> > easier to test our drivers, components, and patches against so
> > that it's easier for the community to validate things. However, it
> > takes time, resources, and money to make that happen. Hopefully
> > that will get sorted out over time.
> 
> And _this_ was entirely the rest of my point, yes. Your needs seem
> quite similar to those of VMWare, XenServer and HyperV, so I
> fully expect Nova's core reviewers will hold Solaris support patches
> to the same validation requirements. We can't run Solaris in our
> upstream testing for the same reasons we can't run those other
> examples (they're not free software), so the onus is on the vendor
> to satisfy this need for continuous testing and reporting instead.
> 
> > But even if we make all of the investments in setting that up, we
> > still need the upstream teams to come to the table and not shun us
> > away just because we are Oracle :)
> [...]
> 
> Smiley or no, the assertion that our quality assurance choices are
> based on personal preference for some particular company over
> another is still mildly offensive.

Yes, let's keep in mind that the answer to these questions about
stable branches is and has been the same no matter who asked them.
Early in this thread we pointed out that this topic comes up
regularly, from different sources, and the answer remains the same:
Start by contributing to the existing stable maintenance, and either
improve the processes and tools to make it easier to do more and/or
recruit more people to spread the work around.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-08 Thread Doug Hellmann
Excerpts from Davanum Srinivas (dims)'s message of 2017-05-08 06:12:51 -0400:
> On Mon, May 8, 2017 at 3:52 AM, Bogdan Dobrelya  wrote:
> > On 06.05.2017 23:06, Doug Hellmann wrote:
> >> Excerpts from Thierry Carrez's message of 2017-05-04 16:14:07 +0200:
> >>> Chris Dent wrote:
> >>>> On Wed, 3 May 2017, Drew Fisher wrote:
> >>>>> "Most large customers move slowly and thus are running older versions,
> >>>>> which are EOL upstream sometimes before they even deploy them."
> >>>>
> >>>> Can someone with more of the history give more detail on where the
> >>>> expectation arose that upstream ought to be responsible for things like
> >>>> long term support? I had always understood that such features were
> >>>> part of the way in which the corporately available products added
> >>>> value?
> >>>
> >>> We started with no stable branches, we were just producing releases and
> >>> ensuring that updates vaguely worked from N-1 to N. There were a lot of
> >>> distributions, and they all maintained their own stable branches,
> >>> handling backport of critical fixes. That is a pretty classic upstream /
> >>> downstream model.
> >>>
> >>> Some of us (including me) spotted the obvious duplication of effort
> >>> there, and encouraged distributions to share that stable branch
> >>> maintenance work rather than duplicate it. Here the stable branches were
> >>> born, mostly through a collaboration between Red Hat developers and
> >>> Canonical developers. All was well. Nobody was saying LTS back then
> >>> because OpenStack was barely usable so nobody wanted to stay on any
> >>> given version for too long.
> >>>
> >>> Maintaining stable branches has a cost. Keeping the infrastructure that
> >>> ensures that stable branches are actually working is a complex endeavor
> >>> that requires people to constantly pay attention. As time passed, we saw
> >>> the involvement of distro packagers become more limited. We therefore
> >>> limited the number of stable branches (and the length of time we
> >>> maintained them) to match the staffing of that team. Fast-forward to
> >>> today: the stable team is mostly one person, who is now out of his job
> >>> and seeking employment.
> >>>
> >>> In parallel, OpenStack became more stable, so the demand for longer-term
> >>> maintenance is stronger. People still expect "upstream" to provide it,
> >>> not realizing upstream is made of people employed by various
> >>> organizations, and that apparently their interest in funding work in
> >>> that area is pretty dead.
> >>>
> >>> I agree that our current stable branch model is inappropriate:
> >>> maintaining stable branches for one year only is a bit useless. But I
> >>> only see two outcomes:
> >>>
> >>> 1/ The OpenStack community still thinks there is a lot of value in doing
> >>> this work upstream, in which case organizations should invest resources
> >>> in making that happen (starting with giving the Stable branch
> >>> maintenance PTL a job), and then, yes, we should definitely consider
> >>> things like LTS or longer periods of support for stable branches, to
> >>> match the evolving usage of OpenStack.
> >>>
> >>> 2/ The OpenStack community thinks this is better handled downstream, and
> >>> we should just get rid of them completely. This is a valid approach, and
> >>> a lot of other open source communities just do that.
> >>
> >> Dropping stable branches completely would mean no upstream bugfix
> >> or security releases at all. I don't think we want that.
> >>
> >
> > I'd like to bring this up once again:
> >
> > option #3: Do not support or nurse gates for stable branches upstream.
> > Instead, only create and close them and attach 3rd party gating, if
> > asked by contributors willing to support LTS and nurse their gates.
> > Note, closing a branch should be an exceptional case, only if no one is
> > willing to support and gate it for a long time.
> 
> As I mentioned before, folks can join the Stable Team and make things
> like this happen. Won't happen by an email to the mailing list.
> 
> Thanks,
> Dims

Right. We need to change the tone of this thread from "you should do X"
to "I want to do X, where should I start?"

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Should the Technical Committee meetings be dropped?

2017-05-07 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2017-05-07 09:49:41 -0400:
> On 05/05/17 08:45 -0400, Sean Dague wrote:
> >On 05/04/2017 01:10 PM, Flavio Percoco wrote:
> >
> >> Some of the current TC activities depend on the meeting to some extent:
> >>
> >> * We use the meeting to give the final ack on some of the formal-vote reviews.
> >> * Some folks (tc members and not) use the meeting agenda to know what they
> >>  should be reviewing.
> >> * Some folks (tc members and not) use the meeting as a way to review or
> >>  participate in active discussions.
> >> * Some folks use the meeting logs to catch up on what's going on in the TC
> >>
> >> In the resolution that has been proposed[1], we've listed possible
> >> solutions for
> >> some of these issues and others:
> >>
> >> * Having office hours
> >> * Sending weekly updates (pulse) on the current reviews and TC discussions
> >>
> >> Regardless of whether we do this change in one shot or in multiple
> >> steps (or don't do it at all), I believe it requires changing the way
> >> TC activities are done:
> >>
> >> * It requires folks (especially TC members) to be more active on reviewing
> >>  governance patches
> >> * It requires folks to engage more on the mailing list and start more
> >>  discussions there.
> >>
> >> Sending this out to kick off a broader discussion on these topics.
> >> Thoughts?
> >> Opinions? Objections?
> >
> >To baseline: I am all in favor of an eventual world to get rid of the TC
> >IRC meeting (and honestly IRC meetings in general), for all the reasons
> >listed above.
> >
> >I shut down my IRC bouncer over a year ago specifically because I think
> >that the assumption of being on IRC all the time is an anti pattern that
> >we should be avoiding in our community.
> >
> >But, that being said, we have a working system right now, one where I
> >honestly can't remember the last time we had an IRC meeting get to every
> >topic we wanted to cover and not run into the time limit. That is data
> >that these needs are not being addressed in other places (yet).
> >
> >So the concrete steps I would go with are:
> >
> >1) We need to stop requiring IRC meetings as part of meeting the Open
> >definition.
> >
> >That has propagated this issue a lot -
> >https://review.openstack.org/#/c/462077
> >
> >2) We really need to stop putting items like the project adds on the meeting agenda.
> >
> >That's often forcing someone up in the middle of the night for 15
> >minutes for no particularly good reason.
> 
> We've been doing this because it is a requirement in our process but yeah, we
> can change this.
> 
> >3) Don't do interactive reviews in gerrit.
> >
> >Again, kind of a waste of time that is better in async. It's mostly
> >triggered by the fact that gerrit doesn't make a good discussion medium
> >in looking at broad strokes. It's really good about precision feedback,
> >but broad strokes, it's tough.
> >
> >One counter suggestion here is to have every governance patch that's not
> >trivial require that an email come to the list tagged [tc] [governance]
> >for people to comment more free form here.
> 
> I've mentioned this a gazillion times and I believe it just keeps going
> unheard. I think this should be the *default* and I don't think requiring a
> thread to be started is enough. I think we can be more proactive and start
> threads ourselves when one is needed. The reason is that in "heated" patches
> there can be different topics and we might need multiple threads for some
> patches. There's a lot that will have to be done to keep these emails on 
> track.
> 
> >4) See what the impact of the summary that Chris is sending out does to
> >make people feel like they understand what is going on in the meeting.
> >Because I also think that we make assumptions that the log of the
> >meeting describes what really happened. And I think that's often an
> >incorrect assumption. The same words used by Monty, Thierry, Jeremy mean
> >different things. Which you only know by knowing them all as people.
> >Having human interpretation of the meeting is good and puts together a
> >more ingestible narrative for people.
> 
> I disagree! I don't think we make those assumptions, which is why Anne and
> I worked on those blog posts summarizing what had been going on in the 
> TC.
> Those posts stopped but I think we should start working on them already. I've
> pinged cdent and I think he's up to work with me on this. cdent yay/nay ?
> 
> >
> >Then evaluate because we will know that we need the meeting less (or
> >less often) when we're regularly ending in 45 minutes, or 30 minutes,
> >instead of slamming up against the wall with people feeling they had
> >more to say.
> 
> TBH, I'm a bit frustrated. What you've written here looks a lot like what's in 
> the
> resolution and what I've been saying except that the suggestion is to not shut
> meetings down right away but evaluate what happens and then shut them down, or
> not, which is fine.
> 
> My problem with this is that we *need* everyone in the TC to

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-06 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-05-04 16:14:07 +0200:
> Chris Dent wrote:
> > On Wed, 3 May 2017, Drew Fisher wrote:
> >> "Most large customers move slowly and thus are running older versions,
> >> which are EOL upstream sometimes before they even deploy them."
> > 
> > Can someone with more of the history give more detail on where the
> > expectation arose that upstream ought to be responsible for things like
> > long term support? I had always understood that such features were
> > part of the way in which the corporately available products added
> > value?
> 
> We started with no stable branches, we were just producing releases and
> ensuring that updates vaguely worked from N-1 to N. There were a lot of
> distributions, and they all maintained their own stable branches,
> handling backport of critical fixes. That is a pretty classic upstream /
> downstream model.
> 
> Some of us (including me) spotted the obvious duplication of effort
> there, and encouraged distributions to share that stable branch
> maintenance work rather than duplicate it. Here the stable branches were
> born, mostly through a collaboration between Red Hat developers and
> Canonical developers. All was well. Nobody was saying LTS back then
> because OpenStack was barely usable so nobody wanted to stay on any
> given version for too long.
> 
> Maintaining stable branches has a cost. Keeping the infrastructure that
> ensures that stable branches are actually working is a complex endeavor
> that requires people to constantly pay attention. As time passed, we saw
> the involvement of distro packagers become more limited. We therefore
> limited the number of stable branches (and the length of time we
> maintained them) to match the staffing of that team. Fast-forward to
> today: the stable team is mostly one person, who is now out of his job
> and seeking employment.
> 
> In parallel, OpenStack became more stable, so the demand for longer-term
> maintenance is stronger. People still expect "upstream" to provide it,
> not realizing upstream is made of people employed by various
> organizations, and that apparently their interest in funding work in
> that area is pretty dead.
> 
> I agree that our current stable branch model is inappropriate:
> maintaining stable branches for one year only is a bit useless. But I
> only see two outcomes:
> 
> 1/ The OpenStack community still thinks there is a lot of value in doing
> this work upstream, in which case organizations should invest resources
> in making that happen (starting with giving the Stable branch
> maintenance PTL a job), and then, yes, we should definitely consider
> things like LTS or longer periods of support for stable branches, to
> match the evolving usage of OpenStack.
> 
> 2/ The OpenStack community thinks this is better handled downstream, and
> we should just get rid of them completely. This is a valid approach, and
> a lot of other open source communities just do that.

Dropping stable branches completely would mean no upstream bugfix
or security releases at all. I don't think we want that.

Doug

> 
> The current reality in terms of invested resources points to (2). I
> personally would prefer (1), because that lets us address security
> issues more efficiently and avoids duplicating effort downstream. But
> unfortunately I don't control where development resources are posted.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-06 Thread Doug Hellmann
Excerpts from Octave J. Orgeron's message of 2017-05-05 15:35:16 -0600:
> Hi Matt,
> 
> And this is actually part of the problem for vendors. Many Oracle 
> engineers, including myself, have tried to get features and fixes pushed 
> upstream. While that may sound easy, the reality is that it isn't! In 
> many cases, it takes months for us to get something in or we get shot 
> down altogether. Here are the big issues we run into:
> 
>   * If it's in support of Oracle specific technologies such as Solaris,
> ZFS, MySQL Cluster, etc. we are often shunned away because it's not
> Linux or "mainstream" enough. A great example is how our Nova
> drivers for Solaris Zones, Kernel Zones, and LDoms are turned away.
> So we have to spend extra cycles maintaining our patches because
> they are shunned away from getting into the gate.
>   * If we release an OpenStack distribution and a year later, a major
> CVE security bug comes along... we will patch it. But is there a way
> for us to push those changes back in? No, because the branch for
> that release is EOL'd and burned. So we have to maintain our own
> copy of the repos so we have something to work against.
>   * Taking a release and productizing it takes more than just pulling
> the git repo and building packages. It requires integrated testing
> on a given OS distribution, hardware, and infrastructure. We have to
> test it against our own products and handle upgrades from the
> previous product release. We have to make sure it works for
> customers. Then we have to spin up our distribution, documentation, etc.
> 
> Lastly, just throwing resources at this isn't going to solve the 
> cultural or logistics problems. Everyone has to work together and Oracle 

Can you expand on what you see as cultural and logistical problems?

Doug

> will continue to try and work with the community. If other vendors, 
> customers, and operators are willing to work together to build an LTS 
> branch and the governance around it, then Oracle will support that 
> effort. But to go it alone I think is risky for any single individual or 
> vendor. It's pretty obvious that over the past year, a lot of vendors 
> that were ponying up efforts have had to pull the plug on their 
> investments. A lot of the issues that I've outlined affect the 
> bottom-line for OpenStack vendors. This is not about which vendor does 
> more or less or who has the bigger budget to spend. It's about making it 
> easier for vendors to support and for customers to consume.
> 
> Octave
> 
> On 5/5/2017 2:40 PM, Matt Riedemann wrote:
> >
> > If you're spending exorbitant amounts of time patching in your forks 
> > to keep up with the upstream code, then you're doing the wrong thing. 
> > Upstream your changes, or work against the APIs, or try to get the 
> > APIs you need upstream to build on for your downstream features. 
> > Otherwise this is all just burden you've put on yourself and I can't 
> > justify an LTS support model because it might make someone's 
> > downstream fork strategy easier to manage. As noted earlier, I don't 
> > see Oracle developers leading the way upstream. If you want to see 
> > major changes, then contribute those resources, get involved and make 
> > a lasting effect.
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Doug Hellmann
Excerpts from Zane Bitter's message of 2017-05-04 20:09:35 -0400:
> On 04/05/17 10:14, Thierry Carrez wrote:
> > Chris Dent wrote:
> >> On Wed, 3 May 2017, Drew Fisher wrote:
> >>> "Most large customers move slowly and thus are running older versions,
> >>> which are EOL upstream sometimes before they even deploy them."
> >>
> >> Can someone with more of the history give more detail on where the
> >> expectation arose that upstream ought to be responsible for things like
> >> long term support? I had always understood that such features were
> >> part of the way in which the corporately available products added
> >> value?
> >
> > We started with no stable branches, we were just producing releases and
> > ensuring that updates vaguely worked from N-1 to N. There were a lot of
> > distributions, and they all maintained their own stable branches,
> > handling backport of critical fixes. That is a pretty classic upstream /
> > downstream model.
> >
> > Some of us (including me) spotted the obvious duplication of effort
> > there, and encouraged distributions to share that stable branch
> > maintenance work rather than duplicate it. Here the stable branches were
> > born, mostly through a collaboration between Red Hat developers and
> > Canonical developers. All was well. Nobody was saying LTS back then
> > because OpenStack was barely usable so nobody wanted to stay on any
> > given version for too long.
> 
> Heh, if you go back _that_ far then upgrades between versions basically 
> weren't feasible, so everybody stayed on a given version for too long. 
> It's true that nobody *wanted* to though :D
> 
> > Maintaining stable branches has a cost. Keeping the infrastructure that
> > ensures that stable branches are actually working is a complex endeavor
> > that requires people to constantly pay attention. As time passed, we saw
> > the involvement of distro packagers become more limited. We therefore
> > limited the number of stable branches (and the length of time we
> > maintained them) to match the staffing of that team.
> 
> I wonder if this is one that needs revisiting. There was certainly a 
> time when closing a branch came with a strong sense of relief that you 
> could stop nursing the gate. I personally haven't felt that way in a 
> couple of years, thanks to a lot of *very* hard work done by the folks 
> looking after the gate to systematically solve a lot of those recurring 
> issues (e.g. by introducing upper constraints). We're still assuming 
> that stable branches are expensive, but what if they aren't any more?
> 
> > Fast-forward to
> > today: the stable team is mostly one person, who is now out of his job
> > and seeking employment.
> >
> > In parallel, OpenStack became more stable, so the demand for longer-term
> > maintenance is stronger. People still expect "upstream" to provide it,
> > not realizing upstream is made of people employed by various
> > organizations, and that apparently their interest in funding work in
> > that area is pretty dead.
> >
> > I agree that our current stable branch model is inappropriate:
> > maintaining stable branches for one year only is a bit useless. But I
> > only see two outcomes:
> >
> > 1/ The OpenStack community still thinks there is a lot of value in doing
> > this work upstream, in which case organizations should invest resources
> > in making that happen (starting with giving the Stable branch
> > maintenance PTL a job), and then, yes, we should definitely consider
> > things like LTS or longer periods of support for stable branches, to
> > match the evolving usage of OpenStack.
> 
> Speaking as a downstream maintainer, it sucks that backports I'm still 
> doing to, say, Liberty don't benefit anybody but Red Hat customers, 
> because there's nowhere upstream that I can share them. I want everyone 
> in the community to benefit. Even if I could only upload patches to 
> Gerrit and not merge them, that would at least be something.
> 
> (In a related bugbear, why must we delete the branch at EOL? This is 
> pure evil for consumers of the code. It breaks existing git checkouts 
> and thousands of web links in bug reports, review comments, IRC logs...)

Among other things, closing the branch lets us avoid all of the
discussions about why no one is reviewing patches there and why
folks shouldn't bother submitting them.

I would support having the stable maintenance team review the state
of the gate and revise the policy, if it's warranted. But we've had
that conversation at least once a year for the last 5 years, and
we only came to a different conclusion one time that I remember.
Even if branches are cheaper to maintain now, they aren't free. We
need people to be around to do the work.

> > 2/ The OpenStack community thinks this is better handled downstream, and
> > we should just get rid of them completely. This is a valid approach, and
> > a lot of other open source communities just do that.
> 
> Maybe we need a 5th 'Open', because to me the idea tha

Re: [openstack-dev] OpenStack moving both too fast and too slow at the same time

2017-05-04 Thread Doug Hellmann
Excerpts from Drew Fisher's message of 2017-05-03 14:00:53 -0600:
> This email is meant to be the ML discussion of a question I brought up
> during the TC meeting on April 25th. [1]

Thanks for starting this thread, Drew. I'll try to respond, but I
know a lot of folks are preparing for the summit next week, so it
may be a little quiet around here until after everyone is home.

> 
> The TL;DR version is:
> 
> 
> Reading the user survey [2], I see the same issues time and time again.
> Pages 18-19 of the survey are especially common points.

I was also interested in those comments and noticed that, as you
say, some are recurring themes. That reinforces in my mind that we
haven't adequately communicated the background behind some decisions
we've made in the past, or what we would need to do to make progress
on stalled initiatives.  I've started trying to address some of
those issues [1], and I'll be continuing that work after the summit.

[1] 
https://doughellmann.com/blog/2017/04/20/lessons-learned-from-working-on-large-scale-cross-project-initiatives-in-openstack/

> Things move too fast,

I have to say, after so many years of hearing that we weren't moving
fast enough this one was a big surprise. :-) I'm not sure if that's
good or bad, or if it just means we now have a completely different
set of people responding to the user survey.

> no LTS release,

Over the past couple of years we have shifted the majority of the
backport review work off of a centralized team so that the individual
project teams are responsible for establishing their own stable
review groups. We've also changed the way we handle stable releases,
so that we now encourage projects to tag a release when they need
it instead of waiting and trying to tag all of the projects together
at the same time. As a result of these changes, we've been seeing
more stable releases for the branches we do maintain, giving users
more actual bug fix releases for those series.

That said, there are two main reasons we are unlikely to add more
stable releases or maintain any releases for longer: we need more
people to do the work, and we need to find a way to do that work
that doesn't hurt our ability to work on master.

We do still have a stable team responsible for ensuring that projects
are following the policies for stable releases, and that team needs
more participation. I'm sure the project teams would appreciate
having more help with backports and reviews on their stable branches,
too. Getting contributors to work on those tasks has been difficult
since the very beginning of the project.

It has been difficult to attract contributors to this area in part
due to the scope of work that is necessary to say that the community
supports those releases. We need the older versions of the deployment
platforms available in our CI systems to run the automated tests.
We need supported versions of the development tools (setuptools and
pip are especially problematic).  We need supported versions of
the various libraries and system-level dependencies like libvirt.
I'm sure the stable maintenance team could add to that list, but
the point is that it's not just a matter of saying we want to do
it, or even that we *will* do it.

> upgrades are terrifying for anything that isn't N-1 -> N.

The OpenStack community has a strong culture of testing.  We have
reasonable testing in place to balance our ability to ensure that
N-1 -> N upgrades work and as a result upgrades are easier than
ever. It seems quite a few users are still on the older versions
of the software that don't have some of those improvements.  It's
not the ideal answer, but their experience will continue to improve
as they move forward onto newer releases.

Meanwhile, adding more combinations of upgrades to handle N-M -> N
changes our ability to simplify the applications by removing technical
debt and by deprecating configuration options (reducing complexity
by cutting the number of configuration options has also been a
long-standing request from users). It also means more people are
needed to keep those older releases running in CI, so that the
upgrade jobs are reliable (see the discussion above about why that
is an issue).

> These come up time and time again
> How is the TC working with the dev teams to address these critical issues?
> 
> I asked this because on page 18 is this comment:
> 
> "Most large customers move slowly and thus are running older versions,
> which are EOL upstream sometimes before they even deploy them."
> 
> This is exactly what we're seeing with some of our customers and I
> wanted to ask the TC about it.

The contributors to OpenStack are not a free labor pool for the
consumers of the project. Just like with any other open source
project, the work is done by the people who show up, and we're all
motivated to work on different things.  Many (most?) of us are paid
by companies selling products or services based on OpenStack. Those
companies apply resources, in the form of contribu

Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
> Screen is going away in Queens.
> 
> Making the dev / test runtimes as similar as possible is really
> important. And there is so much weird debt around trying to make screen
> launch things reliably (like random sleeps) because screen has funny
> races in it.
> 
> It does mean some tricks people figured out in screen are going away.

It sounds like maybe we should start building a shared repository of new
tips & tricks for systemd/journald.

Doug

> 
> Journalctl provides some pretty serious improvements in querying logs
> https://www.freedesktop.org/software/systemd/man/journalctl.html - you
> can search in time ranges, filter by units (one or more of them), and if
> we get to the bottom of the eventlet interaction, we'll be able to
> search by things like REQUEST_ID as well.
> 
> Plus every modern Linux system uses systemd now, so skills learned
> around systemd and journalctl are transferable both from OpenStack to
> other systems, as well as for new people coming in that understand how
> this works outside of OpenStack. So it helps remove a difference from
> the way we do things from the rest of the world.
> 
> -Sean
> 
> On 05/03/2017 04:02 PM, Hongbin Lu wrote:
> > Hi Sean,
> > 
> > I tried the new systemd devstack and frankly I don't like it. There are 
> > several handy operations in screen that seem to be impossible after
> > switching to systemd. For example, freeze a process by "Ctrl + a + [". In
> > addition, navigating through the logs seems difficult (perhaps I am not
> > familiar with journalctl).
> > 
> > From my understanding, the plan is dropping screen entirely in devstack? I 
> > would argue that it is better to keep both screen and systemd, and let 
> > users choose one of them based on their preference.
> > 
> > Best regards,
> > Hongbin
> > 
> >> -Original Message-
> >> From: Sean Dague [mailto:s...@dague.net]
> >> Sent: May-03-17 6:10 AM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [devstack] [all] systemd in devstack by
> >> default
> >>
> >> On 05/02/2017 08:30 AM, Sean Dague wrote:
> >>> We started running systemd for devstack in the gate yesterday, so far
> >>> so good.
> >>>
> >>> The following patch (which will hopefully land soon), will convert
> >> the
> >>> default local use of devstack to systemd as well -
> >>> https://review.openstack.org/#/c/461716/. It also includes
> >>> substantially updated documentation.
> >>>
> >>> Once you take this patch, a "./clean.sh" is recommended. Flipping
> >>> modes can cause some cruft to build up, and ./clean.sh should be
> >>> pretty good at eliminating them.
> >>>
> >>> https://review.openstack.org/#/c/461716/2/doc/source/development.rst
> >>> is probably specifically interesting / useful for people to read, as
> >>> it shows how the standard development workflows will change (for the
> >>> better) with systemd.
> >>>
> >>> -Sean
> >>
> >> As a follow up, there are definitely a few edge conditions we've hit
> >> with some jobs, so the following is provided as information in case you
> >> have a job that seems to fail in one of these ways.
> >>
> >> Doing process stop / start
> >> ==
> >>
> >> The nova live migration job is special, it was restarting services
> >> manually, however it was doing so with some copy / pasted devstack code,
> >> which means it didn't evolve with the rest of devstack. So the stop
> >> code stopped working (and wasn't robust enough to make it clear that
> >> was the issue).
> >>
> >> https://review.openstack.org/#/c/461803/ is the fix (merged)
> >>
> >> run_process limitations
> >> ===
> >>
> >> When doing the systemd conversion I looked for a path forward which was
> >> going to make 90% of everything just work. The key trick here was that
> >> services start as the "stack" user, and aren't daemonizing away from
> >> the console. We can take the run_process command and make that the
> >> ExecStart in a unit file.
> >>
> >> *Except* that only works if the command is specified by an *absolute
> >> path*.
> >>
> >> So things like this in kuryr-libnetwork become an issue
> >> https://github.com/openstack/kuryr-
> >> libnetwork/blob/3e2891d6fc5d55b3712258c932a5a8b9b323f6c2/devstack/plugi
> >> n.sh#L148
> >>
> >> There is also a second issue there, which is calling sudo in the
> >> run_process line. If you need to run as a user/group different than the
> >> default, you need to specify that directly.
> >>
> >> The run_process command now supports that -
> >> https://github.com/openstack-
> >> dev/devstack/blob/803acffcf9254e328426ad67380a99f4f5b164ec/functions-
> >> common#L1531-L1535
> >>
> >> And lastly, run_process really always did expect that the thing you
> >> started remained attached to the console. These are run as "simple"
> >> services in systemd. If you are running a thing which already
> >> daemonizes systemd is going to assume (correctly in

Re: [openstack-dev] [tc] [all] TC Report 18

2017-05-03 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-05-03 13:23:11 -0400:
> On 05/03/2017 01:02 PM, Doug Hellmann wrote:
> > Excerpts from Thierry Carrez's message of 2017-05-03 18:16:29 +0200:
> >> Ed Leafe wrote:
> >>> On May 3, 2017, at 2:41 AM, Thierry Carrez  wrote:
> >>>
> >>>> In the current
> >>>> system, TC members (or really, anyone in the community) can add to the
> >>>> "Open discussion" section of the meeting agenda, but that happens
> >>>> extremely rarely. I suspect that the 5 minutes per week that we end up
> >>>> dedicating to open discussion in the meetings does not encourage people
> >>>> to load large topics of discussions in it.
> >>>
> >>> Simple: *start* the meeting with Open Discussion.
> >>
> >> I don't really see how that would solve the agenda problem.
> >>
> >> I think it's better for everyone if people post topics they want to
> >> discuss in advance. Europeans who want to attend need to stay up late,
> >> People in the APAC zone need to get up very early. Knowing up-front what
> >> will be discussed might (might) give people in those zones the extra
> >> incentive they need to attend.
> >>
> > 
> > Knowing what will be discussed in advance also helps everyone collect
> > their thoughts and be ready to contribute.
> 
> What about ensuring that every agenda topic is more than a line, but
> includes a full paragraph about what the agenda topic proposer expects
> it will cover. A lot of times the agenda items are cryptic enough unless
> you are knee deep in things.
> 
> That would help people collect their thoughts even more and break away
> from the few minutes of delay in introducing the subject (the
> introduction of the subject would be in the agenda).
> 
> -Sean
> 

If the goal is to move most of the discussion onto the mailing list, we
could link to the thread(s) there, too.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18

2017-05-03 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-05-03 18:16:29 +0200:
> Ed Leafe wrote:
> > On May 3, 2017, at 2:41 AM, Thierry Carrez  wrote:
> > 
> >> In the current
> >> system, TC members (or really, anyone in the community) can add to the
> >> "Open discussion" section of the meeting agenda, but that happens
> >> extremely rarely. I suspect that the 5 minutes per week that we end up
> >> dedicating to open discussion in the meetings does not encourage people
> >> to load large topics of discussions in it.
> > 
> > Simple: *start* the meeting with Open Discussion.
> 
> I don't really see how that would solve the agenda problem.
> 
> I think it's better for everyone if people post topics they want to
> discuss in advance. Europeans who want to attend need to stay up late,
> People in the APAC zone need to get up very early. Knowing up-front what
> will be discussed might (might) give people in those zones the extra
> incentive they need to attend.
> 

Knowing what will be discussed in advance also helps everyone collect
their thoughts and be ready to contribute.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [forum] extra session rooms at the forum

2017-05-02 Thread Doug Hellmann
We have a full forum schedule planned, but based on past experience
it's inevitable that something was missed, the scheduled slots won't
be quite long enough for some deeper discussions, and that new
topics will come up during the week. To accommodate those extra
discussions, we have 3 session rooms available on Thursday afternoon.
Use the wiki page [1] to sign up for a room, including a link to
an etherpad explaining what the session is about.

Doug

[1] 
https://wiki.openstack.org/wiki/Forum/Boston2017#Thursday_Afternoon_session_sign-up

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Deprecating Postgresql support

2017-04-27 Thread Doug Hellmann
Excerpts from Emilien Macchi's message of 2017-04-27 07:38:17 -0400:
> Greetings,
> 
> We didn't see anyone using Postgresql when deploying Puppet OpenStack:
> - no feedback on ML or IRC
> - no bug report in Launchpad
> 
> Postgresql support (or call it how you want) is also removed upstream
> in OpenStack.
> We will deprecate the class that used to deploy Postgresql in Pike and
> remove the code in Queens.
> 
> https://review.openstack.org/#/c/460249/
> 
> Any feedback is very welcome before we do it.
> 
> Thanks,

There's a Forum session scheduled to discuss general deprecation.

https://www.openstack.org/summit/boston-2017/summit-schedule/events/18730/deprecation-of-postgresql

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo.utils] Bug-1680130 Check validation of UUID length

2017-04-26 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-04-26 10:55:14 -0400:
> On 04/26/2017 10:47 AM, Doug Hellmann wrote:
> > Excerpts from Sean Dague's message of 2017-04-26 09:01:32 -0400:
> >> On 04/26/2017 08:36 AM, Doug Hellmann wrote:
> >>> Excerpts from Kekane, Abhishek's message of 2017-04-26 07:00:22 +:
> >>>> Hi All,
> >>>>
> >>>> As suggested by @jay_pipes:
> >>>> if val.count('-') not in (0, 4):
> >>>> raise TypeError
> >>>>
> >>>> It is not a sufficient solution because "is_uuid_like" returns only True
> >>>> or False.
> >>>> For example,
> >>>>
> >>>> If a user passes a uuid like "urn:----" or
> >>>> "urn:uuid:----" then the "is_uuid_like"
> >>>> method returns True as it is a valid uuid format, but when this uuid is
> >>>> inserted into the database table it gives a DBDataError, because the
> >>>> "block_device_mapping" table in the database has a "volume_id" field of
> >>>> only 36 characters, so inserting data into the table through the
> >>>> 'BlockDeviceMapping' object raises a DBDataError.
> >>>>
> >>>> Doug's solution of adding another method, format_canonical_uuid(), which
> >>>> would format it with the proper number of hyphens and return the actual
> >>>> UUID, will break backward compatibility IMO, because after adding this
> >>>> new method in oslo_utils we would have to make changes in all projects
> >>>> which are using is_uuid_like().
> >>>
> >>> I don't understand why adding a new function breaks backwards
> >>> compatibility. Can you elaborate on why you think so?
> >>
> >> I'm not sure why it's believed it would break compatibility, however
> >> format_canonical_uuid() isn't what Nova needs here.
> >>
> >> Nova actually wants to stop bad UUIDs ever getting past our API layer,
> >> and just spin back to the user that they handed us corrupt data. Because
> >> it will matter later if they try to use things in comparisons. Papering
> >> over bad format isn't what we want or intended.
> >>
> >> I think we will end up needing a "is_uuid" which accepts the standard
> >> dashed format only.
> >>
> >> -Sean
> >>
> > 
> > Sure, that's definitely another option, and again a new function
> > would be the way to do it and maintain backwards compatibility.
> > 
> > It sounds like there's a chance there's already bad data in the
> > database, though? For example a UUID presented without the dashes
> > would have passed the existing check and been able to be stored in
> > the field because it's shorter than the max length. What happens
> > to those records?
> 
> That is a good question, and one where we have to figure out what the
> cost of updating that data would be. I do wonder in what operations that
> round trips and becomes a good value later.
> 
> But, at a minimum, we want to prevent new bad data from landing.
> 
> -Sean
> 

Maybe preventing writes with bad data, but allowing queries with the
existing looser constraint, solves the problem? Presumably users
querying against this field already have to enter the UUID in exactly
the same way it was recorded, since it's not being converted to a
canonical form? Or maybe this is not a field used in queries?

Either way, I agree the bad data should be blocked with more strict
checks on input.

Doug
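
A minimal sketch of the stricter check Sean describes might look like the
following. This is purely illustrative (it is not the oslo.utils
implementation, and the function name is invented): it accepts only the
canonical dashed 8-4-4-4-12 form, so urn-prefixed, brace-wrapped, hyphen-less,
and double-hyphen values are all rejected.

    import uuid

    def is_canonical_uuid(val):
        # True only for the standard dashed representation, i.e. the form
        # str(uuid.uuid4()) produces; anything else is rejected up front.
        try:
            return str(uuid.UUID(val)) == val.lower()
        except (TypeError, ValueError, AttributeError):
            return False

A check like this at the API layer would turn the badly formatted values
discussed above into an immediate validation error instead of a DBDataError
much later in the request.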

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo.utils] Bug-1680130 Check validation of UUID length

2017-04-26 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-04-26 09:01:32 -0400:
> On 04/26/2017 08:36 AM, Doug Hellmann wrote:
> > Excerpts from Kekane, Abhishek's message of 2017-04-26 07:00:22 +:
> >> Hi All,
> >>
> >> As suggested by @jay_pipes:
> >> if val.count('-') not in (0, 4):
> >> raise TypeError
> >>
> >> It is not a sufficient solution because "is_uuid_like" returns only True or
> >> False.
> >> For example,
> >>
> >> If a user passes a uuid like "urn:----" or
> >> "urn:uuid:----" then the "is_uuid_like" method
> >> returns True as it is a valid uuid format, but when this uuid is inserted
> >> into the database table it gives a DBDataError, because the
> >> "block_device_mapping" table in the database has a "volume_id" field of
> >> only 36 characters, so inserting data into the table through the
> >> 'BlockDeviceMapping' object raises a DBDataError.
> >>
> >> Doug's solution of adding another method, format_canonical_uuid(), which
> >> would format it with the proper number of hyphens and return the actual
> >> UUID, will break backward compatibility IMO, because after adding this new
> >> method in oslo_utils we would have to make changes in all projects which
> >> are using is_uuid_like().
> > 
> > I don't understand why adding a new function breaks backwards
> > compatibility. Can you elaborate on why you think so?
> 
> I'm not sure why it's believed it would break compatibility, however
> format_canonical_uuid() isn't what Nova needs here.
> 
> Nova actually wants to stop bad UUIDs ever getting past our API layer,
> and just spin back to the user that they handed us corrupt data. Because
> it will matter later if they try to use things in comparisons. Papering
> over bad format isn't what we want or intended.
> 
> I think we will end up needing a "is_uuid" which accepts the standard
> dashed format only.
> 
> -Sean
> 

Sure, that's definitely another option, and again a new function
would be the way to do it and maintain backwards compatibility.

It sounds like there's a chance there's already bad data in the
database, though? For example a UUID presented without the dashes
would have passed the existing check and been able to be stored in
the field because it's shorter than the max length. What happens
to those records?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] looking for feedback on proposals to improve logging

2017-04-26 Thread Doug Hellmann
I am looking for some feedback on two new proposals to add IDs to
log messages.

The tl;dr is that we’ve been talking about adding unique IDs to log
messages for 5 years. I myself am still not 100% convinced the idea
is useful, but I would like us to either do it or definitively say
we won't ever do it so that we can stop talking about it and consider
some other improvements to logging instead.

Based on early feedback from a small group who have been involved
in the conversations about this in the past, I have drafted two new
specs with different approaches that try to avoid the pitfalls that
blocked the earlier specs:

1. A cross-project spec to add logging message IDs in (what I hope
   is) a less onerous way than has been proposed before:
   https://review.openstack.org/460110

2. An Oslo spec to add some features to oslo.log to try to achieve the
   goals of the original proposal without having to assign message IDs:
   https://review.openstack.org/460112

To understand the full history and context, you’ll want to read the
blog post I wrote last week [1].  The reference lists of the specs
also point to some older specs with different proposals that have
failed to gain traction in the past.

I expect all three proposals to be up for discussion during the
logging working group session at the summit/forum, so if you have
any interest in the topic please plan to attend [2].

Thanks!
Doug

[1] 
https://doughellmann.com/blog/2017/04/20/lessons-learned-from-working-on-large-scale-cross-project-initiatives-in-openstack/
[2] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18507/logging-working-group-working-session

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo.utils] Bug-1680130 Check validation of UUID length

2017-04-26 Thread Doug Hellmann
Excerpts from Kekane, Abhishek's message of 2017-04-26 07:00:22 +:
> Hi All,
> 
> As suggested by @jay_pipes:
> if val.count('-') not in (0, 4):
> raise TypeError
> 
> It is not a sufficient solution because "is_uuid_like" returns only True or
> False.
> For example,
> 
> If a user passes a uuid like "urn:----" or
> "urn:uuid:----" then the "is_uuid_like" method
> returns True as it is a valid uuid format, but when this uuid is inserted into
> the database table it gives a DBDataError, because the "block_device_mapping"
> table in the database has a "volume_id" field of only 36 characters, so
> inserting data into the table through the 'BlockDeviceMapping' object raises a
> DBDataError.
> 
> Doug's solution of adding another method, format_canonical_uuid(), which would
> format it with the proper number of hyphens and return the actual UUID, will
> break backward compatibility IMO, because after adding this new method in
> oslo_utils we would have to make changes in all projects which are using
> is_uuid_like().

I don't understand why adding a new function breaks backwards
compatibility. Can you elaborate on why you think so?

Doug

> 
> Please let me know if you have any suggestions on the same. IMO restricting
> this uuid size at the schema level is one solution, but not all projects
> support schema validation.
> 
> Thank you,
> 
> Abhishek
> 
> 
> From: Lance Bragstad [mailto:lbrags...@gmail.com]
> Sent: Monday, April 24, 2017 11:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][oslo.utils] Bug-1680130 Check validation 
> of UUID length
> 
> We had to do similar things in keystone in order to validate uuid-ish types 
> (just not as fancy) [0] [1]. If we didn't have to worry about being backwards 
> compatible with non-uuid formats, it would be awesome to have one 
> implementation for checking that.
> 
> [0] 
> https://github.com/openstack/keystone/blob/6c6589d2b0f308cb788b37b29ebde515304ee41e/keystone/identity/schema.py#L69
> [1] 
> https://github.com/openstack/keystone/blob/6c6589d2b0f308cb788b37b29ebde515304ee41e/keystone/common/validation/parameter_types.py#L38-L45
> 
> On Mon, Apr 24, 2017 at 1:05 PM, Matt Riedemann 
> mailto:mriede...@gmail.com>> wrote:
> On 4/24/2017 12:58 PM, Sean Dague wrote:
> 
> Which uses is_uuid_like to do the validation -
> https://github.com/openstack/nova/blob/1106477b78c80743e6443abc30911b24a9ab7b15/nova/api/validation/validators.py#L85-L87
> 
> We assumed (as did many others) that is_uuid_like was strict enough for
> param validation. It is apparently not.
> 
> Either it needs to be fixed to be so, or some other function needs to be
> created that is, that people can cut over to.
> 
> -Sean
> 
> Well kiss my grits. I had always assumed that was built into jsonschema.
> 
> --
> 
> Thanks,
> 
> Matt
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo.utils] Bug-1680130 Check validation of UUID length

2017-04-24 Thread Doug Hellmann
Excerpts from Jay Pipes's message of 2017-04-24 10:44:47 -0400:
> On 04/24/2017 09:45 AM, Jadhav, Pooja wrote:
> > Solution 3:
> >
> > We can check the UUID in a central place, i.e. in the "is_uuid_like" method
> > of oslo_utils [4].
> 
> This gets my vote. It's a bug in the is_uuid_like() function, IMHO, that 
> it returns True for badly-formatted UUID values (like having two 
> consecutive hyphens).
> 
> FTR, the fix would be pretty simple. Just change this [1] line from this:
> 
> return str(uuid.UUID(val)).replace('-', '') == _format_uuid_string(val)
> 
> to this:
> 
> # Disallow two consecutive hyphens
> if '--' in val:
>  raise TypeError
> return str(uuid.UUID(val)).replace('-', '') == _format_uuid_string(val)
> 
> Fix it there and you fix this issue for all projects that use it.
> 
> Best,
> -jay
> 
> [1] 
> https://github.com/openstack/oslo.utils/blob/master/oslo_utils/uuidutils.py#L56
> 

I think the point of that function was to be a little forgiving of
typos, since we use UUIDs so much in the command line interfaces.

Doug
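
To make the leniency being discussed concrete, all of the following return
True with the is_uuid_like() implementation quoted above (the sample value is
made up, and exact behaviour may vary between oslo.utils versions):

    from oslo_utils import uuidutils

    canonical = '12345678-1234-5678-1234-567812345678'
    uuidutils.is_uuid_like(canonical)                   # True
    uuidutils.is_uuid_like(canonical.replace('-', ''))  # True: no hyphens
    uuidutils.is_uuid_like('urn:uuid:' + canonical)     # True: urn prefix

That forgiveness is convenient on the command line, but it is also what lets
over-long values slip through to the database layer.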

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures][mistral] Release of openstack/python-mistralclient failed

2017-04-24 Thread Doug Hellmann
A recent patch in python-mistralclient added a release note that was
poorly formatted, so it broke the announcement job for the 3.1.0 release.

I've proposed a fix for the note file in
https://review.openstack.org/459341 and I've proposed the project-config
changes to add the jobs to avoid allowing similar failures in the future
in https://review.openstack.org/459343

I also regenerated the announcement email by hand.

Doug

Excerpts from jenkins's message of 2017-04-24 13:41:24 +:
> Build failed.
> 
> - python-mistralclient-tarball 
> http://logs.openstack.org/88/888ad722abbd8308da91b15360a2e8d2fb582d65/release/python-mistralclient-tarball/e2b9206/
>  : SUCCESS in 4m 08s
> - python-mistralclient-tarball-signing 
> http://logs.openstack.org/88/888ad722abbd8308da91b15360a2e8d2fb582d65/release/python-mistralclient-tarball-signing/2a5465a/
>  : SUCCESS in 52s
> - python-mistralclient-pypi-both-upload 
> http://logs.openstack.org/88/888ad722abbd8308da91b15360a2e8d2fb582d65/release/python-mistralclient-pypi-both-upload/551cc60/
>  : SUCCESS in 26s
> - python-mistralclient-announce-release 
> http://logs.openstack.org/88/888ad722abbd8308da91b15360a2e8d2fb582d65/release/python-mistralclient-announce-release/a578383/
>  : FAILURE in 3m 12s
> - propose-python-mistralclient-update-constraints 
> http://logs.openstack.org/88/888ad722abbd8308da91b15360a2e8d2fb582d65/release/propose-python-mistralclient-update-constraints/d356cb1/
>  : SUCCESS in 1m 01s
> - python-mistralclient-docs-tags-only 
> http://logs.openstack.org/88/888ad722abbd8308da91b15360a2e8d2fb582d65/release/python-mistralclient-docs-tags-only/141c4cb/
>  : SUCCESS in 4m 04s
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo.utils] Bug-1680130 Check validation of UUID length

2017-04-24 Thread Doug Hellmann
Excerpts from Jadhav, Pooja's message of 2017-04-24 13:45:07 +:
> Hi Devs,
> 
> I want your opinion about bug: https://bugs.launchpad.net/nova/+bug/1680130
> 
> When a user passes an incorrectly formatted volume UUID like
> ----- (please note the double hyphen) while
> attaching a volume to an instance using the "volume-attach" API, it results in
> a DBDataError with the following error message: "Data too long for column
> 'volume_id'". The reason is that the "block_device_mapping" table in the
> database has a "volume_id" field of only 36 characters, so inserting data into
> the table through the 'BlockDeviceMapping' object raises a DBDataError.
> 
> In the current code, volume_id is expected to be in 'UUID' format, so it is
> checked with the "is_uuid_like"[4] method of oslo_utils; this method removes
> all the hyphens, checks for a 32-character UUID, and returns true or false. As
> a result "-----" is treated as a valid UUID
> and is passed on to the database table for insertion, and because its size is
> more than 36 characters it gives a DBDataError.
> 
> There are various solutions we can apply to validate volume UUID in this case:
> 
> Solution 1:
> We can restrict the length of the volume UUID using the maxLength property in
> schema validation.
> 
> Advantage:
> This solution is better than solutions 2 and 3 because we can reject the
> invalid UUID at the schema [1] level itself by adding 'maxLength'[2] (a sketch
> of such a schema appears after this message).
> 
> Solution 2:
> Before creating a volume BDM object, we can check whether the provided volume
> actually exists.
> 
> Advantage:
> Volume BDM creation can be avoided if the volume does not exist.
> 
> Disadvantage:
> IMO this solution is not better because we would need to change the current
> code, which checks whether the volume exists only after creating the volume
> BDM. We would have to check volume existence before creating the volume BDM
> object, and for that we would need to modify the
> "_check_attach_and_reserve_volume" method [3]. But this method is used in 3
> places, so we would have to modify all the occurrences accordingly.
> 
> Solution 3:
> We can check the UUID in a central place, i.e. in the "is_uuid_like" method of
> oslo_utils [4].
> 
> Advantage:
> If we change the "is_uuid_like" method then the same issue might be solved for
> the rest of the APIs.
> 
> Disadvantage:
> IMO this is also not a better solution because if we change the "is_uuid_like"
> method then it will affect several different projects.

Another option would be to convert the input value to a canonical form.
So if is_uuid_like() returns true, then pass the value to a new function
format_canonical_uuid() which would format it with the proper number of
hyphens. That value could then be stored correctly.

Doug
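
For illustration, a rough sketch of that helper could be as small as this (no
such function exists in oslo.utils today; the name is only the one proposed in
this thread):

    import uuid

    def format_canonical_uuid(val):
        # Assumes val already passed is_uuid_like(); returns the canonical
        # dashed form, e.g. 32 bare hex digits come back as 8-4-4-4-12.
        return str(uuid.UUID(val))

Storing the canonical form keeps the value within the 36-character column
regardless of how the caller formatted it.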

> 
> Please let me know your opinion for the same.
> 
> [1] 
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/volumes.py#L65
> 
> [2] 
> https://github.com/openstack/nova/blob/master/nova/api/validation/parameter_types.py#L297
> 
> [3] https://github.com/openstack/nova/blob/master/nova/compute/api.py#L3721
> 
> [4] 
> https://github.com/openstack/oslo.utils/blob/master/oslo_utils/uuidutils.py#L45
> 
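
As referenced under Solution 1, the schema-level restriction amounts to
something like the following JSON-Schema fragment. This is a simplified,
hypothetical sketch; the names here are not the real nova schema definitions:

    volume_attachment = {
        'type': 'object',
        'properties': {
            'volumeId': {
                'type': 'string',
                'maxLength': 36,  # length of the canonical dashed UUID form
            },
        },
        'required': ['volumeId'],
    }

With maxLength in place, an over-long value is rejected by the API validation
layer and never reaches the database.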

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] do we still need non-voting tests for older releases?

2017-04-21 Thread Doug Hellmann
Excerpts from ChangBo Guo's message of 2017-04-21 16:04:35 +0800:
> The related thing I can remember is that we discussed the oslo libraries'
> compatibility in [1], which was abandoned.
> I made the stable compat ocata jobs non-voting in [2], hoping to revert that
> once the related bug was fixed.
> But now, if we decide to remove them, we don't need to revert anymore.

With our constraint system and with stable branches for libraries,
do we need new releases from master to be compatible with older
services on stable branches?

Doug

> 
> [1]  https://review.openstack.org/226157
> [2]  https://review.openstack.org/#/c/448431
> 
> 2017-04-20 0:42 GMT+08:00 Doug Hellmann :
> 
> > I noticed again today that we have some test jobs running for some
> > of the Oslo libraries against old versions of services (e.g.,
> > gate-tempest-dsvm-neutron-src-oslo.log-ubuntu-xenial-newton,
> > gate-tempest-dsvm-neutron-src-oslo.log-ubuntu-xenial-ocata, and
> > gate-oslo.log-src-grenade-dsvm-ubuntu-xenial-nv).
> >
> > I don't remember what those are for, but I imagine they have to do
> > with testing compatibility. They're all non-voting, though, so maybe
> > not?
> >
> > Now that we're constraining libraries in our test systems, I wonder
> > if we still need the jobs at all?
> >
> > Doug
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-21 Thread Doug Hellmann
Excerpts from Joshua Harlow's message of 2017-04-20 22:31:19 -0700:
> Doug Hellmann wrote:
> > Excerpts from gordon chung's message of 2017-04-20 17:12:26 +:
> >> On 20/04/17 01:32 AM, Joshua Harlow wrote:
> >>> Wasn't there also some decision made in austin (?) about how we as a
> >>> group stated something along the line of co-installability isn't as
> >>> important as it once was (and may not even be practical or what people
> >>> care about anymore anyway)?
> >
> > I don't remember that, but I may not have been in the room at the
> > time.  In the past when we've discussed that idea, we've continued
> > to maintain that co-installability is still needed for distributors
> > who have packaging constraints that require it and for use cases
> > like single-node deployments for POCs.
> 
> Ya, looking back I think it was:
> 
> https://etherpad.openstack.org/p/newton-global-requirements
> 
> I think that was robert that lead that session, but I might be incorrect 
> there.

That was me, though Robert was definitely present and vocal.

My memory of the outcome of that session was that we needed to maintain
co-installability; that we could continue to keep an eye on the
container space as an alternative; and that a new team of maintainers
would take over the requirements list (which was my secret agenda for
proposing that we stop doing it at all).

During the session in Barcelona (I previously said Austin, but
misremembered the location) we agreed that we could stop syncing,
as long as we maintained co-installability by ensuring that everyone's
requirements lists intersect with the upper-constraints.txt list. That
work has been started.

As far as I know, we have never said we could drop co-installability as
a requirement. We have wished we could, but have not said we can.

Doug
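
The "requirements lists intersect with upper-constraints.txt" rule mentioned
above boils down to a check along these lines. This is only a toy sketch, not
the actual openstack/requirements tooling, and it ignores markers and extras
for brevity:

    from packaging.requirements import Requirement

    def check_against_constraints(requirement_lines, constraint_lines):
        # upper-constraints.txt pins exact versions as "name===version".
        constraints = {}
        for line in constraint_lines:
            line = line.split('#')[0].split(';')[0].strip()
            if '===' in line:
                name, _, version = line.partition('===')
                constraints[name.strip().lower()] = version.strip()
        problems = []
        for line in requirement_lines:
            line = line.split('#')[0].strip()
            if not line:
                continue
            req = Requirement(line)
            pinned = constraints.get(req.name.lower())
            if pinned and not req.specifier.contains(pinned, prereleases=True):
                problems.append((req.name, pinned, str(req.specifier)))
        return problems

If the function returns an empty list, the project's requirements are
compatible with the co-installable set pinned in upper-constraints.txt.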

> 
> >
> >>> With kolla becoming more popular (tripleo I think is using it, and ...)
> >>> and the containers it creates making isolated per-application
> >>> environments it makes me wonder what of global-requirements is still
> >>> valid (as a concept) and what isn't.
> >
> > We still need to review dependencies for license compatibility, to
> > minimize redundancy, and to ensure that we're not adding things to
> > the list that are not being maintained upstream. Even if we stop syncing
> > versions, official projects need to do those reviews, and having the
> > global list is a way to ensure that the reviews are done.
> >
> >>> I do remember the days of free for all requirements (or requirements
> >>> sometimes just put/stashed in devstack vs elsewhere), which I don't
> >>> really want to go back to; but if we finally all agree that
> >>> co-installability isn't what people actually do and/or care about
> >>> (anymore?) then maybe we can re-think some things?
> >> agree with all of ^... but i imagine to make progress on this, we'd have
> >> to change/drop devstack usage in gate and that will take forever and a
> >> lifetime (is that a chick flick title?) given how embedded devstack is
> >> in everything. it seems like the solution starts with devstack.
> >>
> >> cheers,
> >>
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][monasca] Release of openstack/monasca-kibana-plugin failed

2017-04-21 Thread Doug Hellmann
Excerpts from witold.be...@est.fujitsu.com's message of 2017-04-21 09:13:03 
+:
> Hi,
> 
> I'm sorry, I missed that.
> 
> I've created the updates:
> 
> https://review.openstack.org/458773
> https://review.openstack.org/458776

Both of those look correct.

> 
> 
> Cheers
> Witek
> 
> > -Original Message-
> > From: Hochmuth, Roland M [mailto:roland.hochm...@hpe.com]
> > Sent: Freitag, 21. April 2017 06:19
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [release][monasca] Release of
> > openstack/monasca-kibana-plugin failed
> > 
> > Thanks Doug. My understanding of the issue is that we need to
> > 
> > 1. Update the version in package.json,
> > https://github.com/openstack/monasca-kibana-
> > plugin/blob/master/package.json, from "0.0.5" to "1.0.1".
> > 2. Cherry pick to Ocata.
> > 3. Then apply a new release tag for Ocata for 1.0.1 in,
> > https://review.openstack.org/#/c/433823/1/deliverables/ocata/monasca-
> > kibana-plugin.yaml.
> > 
> > Does that sound correct?
> > 
> > Regards --Roland
> > 
> > 
> > 
> > 
> > 
> > On 4/20/17, 1:41 PM, "Doug Hellmann"  wrote:
> > 
> > >Excerpts from Doug Hellmann's message of 2017-04-20 15:26:47 -0400:
> > >> The version of monasca-kibana-plugin in package.json does not match
> > >> the new tag, and that caused the publish job to fail. I'm available
> > >> to help debug or to quickly release an update after the problem is fixed.
> > >
> > >https://review.openstack.org/458629 should help us avoid this error in
> > >the future.
> > >
> > >>
> > >> Doug
> > >>
> > >> --- Begin forwarded message from jenkins ---
> > >> From: jenkins 
> > >> To: release-job-failures 
> > >> Date: Wed, 19 Apr 2017 10:01:01 +
> > >> Subject: [Release-job-failures] Release of
> > >> openstack/monasca-kibana-plugin failed
> > >>
> > >> Build failed.
> > >>
> > >> - monasca-kibana-plugin-nodejs4-npm-publish-tarball
> > >>
> > http://logs.openstack.org/20/206249d12cb76a103cb84a851916ce415f7d5cf8
> > >> /release/monasca-kibana-plugin-nodejs4-npm-publish-tarball/4852b10/ :
> > >> SUCCESS in 4m 48s
> > >> - monasca-kibana-plugin-tarball-signing
> > >>
> > http://logs.openstack.org/20/206249d12cb76a103cb84a851916ce415f7d5cf8
> > >> /release/monasca-kibana-plugin-tarball-signing/28e6145/ : SUCCESS in
> > >> 10s
> > >> - monasca-kibana-plugin-npm-upload
> > >>
> > http://logs.openstack.org/20/206249d12cb76a103cb84a851916ce415f7d5cf8
> > >> /release/monasca-kibana-plugin-npm-upload/f3dd81a/ : FAILURE in 9s
> > >> - monasca-kibana-plugin-announce-release
> > >> monasca-kibana-plugin-announce-release : SKIPPED
> > >>
> > >> --- End forwarded message ---
> > >>
> > >
> > >_
> > __
> > >___ OpenStack Development Mailing List (not for usage questions)
> > >Unsubscribe:
> > >openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > __
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][monasca] Release of openstack/monasca-kibana-plugin failed

2017-04-21 Thread Doug Hellmann
Excerpts from Hochmuth, Roland M's message of 2017-04-21 04:18:47 +:
> Thanks Doug. My understanding of the issue is that we need to
> 
> 1. Update the version in package.json, 
> https://github.com/openstack/monasca-kibana-plugin/blob/master/package.json, 
> from "0.0.5" to "1.0.1". 
> 2. Cherry pick to Ocata.
> 3. Then apply a new release tag for Ocata for 1.0.1 in, 
> https://review.openstack.org/#/c/433823/1/deliverables/ocata/monasca-kibana-plugin.yaml.
> 
> Does that sound correct?

The release in question was tagged 1.1.0 (see
https://review.openstack.org/#/c/457605/), so you'd want to make
it 1.1.1.  That release was part of Pike, not Ocata, so the package
file only needs to be fixed on master (at least for that release,
I don't know if there were other similar issues with ocata stable
releases).

In general you don't want the version numbers from one series to be part
of the other series, so I think you'll need to accept patches on your
stable branches to set the version without those being backports. This
is a reasonable exception to our stable policy because the changes are
packaging metadata and not production code.

Doug

> 
> Regards --Roland
> 
> On 4/20/17, 1:41 PM, "Doug Hellmann"  wrote:
> 
> >Excerpts from Doug Hellmann's message of 2017-04-20 15:26:47 -0400:
> >> The version of monasca-kibana-plugin in package.json does not match the
> >> new tag, and that caused the publish job to fail. I'm available to help
> >> debug or to quickly release an update after the problem is fixed.
> >
> >https://review.openstack.org/458629 should help us avoid this error in
> >the future.
> >
> >> 
> >> Doug
> >> 
> >> --- Begin forwarded message from jenkins ---
> >> From: jenkins 
> >> To: release-job-failures 
> >> Date: Wed, 19 Apr 2017 10:01:01 +
> >> Subject: [Release-job-failures] Release of openstack/monasca-kibana-plugin 
> >> failed
> >> 
> >> Build failed.
> >> 
> >> - monasca-kibana-plugin-nodejs4-npm-publish-tarball 
> >> http://logs.openstack.org/20/206249d12cb76a103cb84a851916ce415f7d5cf8/release/monasca-kibana-plugin-nodejs4-npm-publish-tarball/4852b10/
> >>  : SUCCESS in 4m 48s
> >> - monasca-kibana-plugin-tarball-signing 
> >> http://logs.openstack.org/20/206249d12cb76a103cb84a851916ce415f7d5cf8/release/monasca-kibana-plugin-tarball-signing/28e6145/
> >>  : SUCCESS in 10s
> >> - monasca-kibana-plugin-npm-upload 
> >> http://logs.openstack.org/20/206249d12cb76a103cb84a851916ce415f7d5cf8/release/monasca-kibana-plugin-npm-upload/f3dd81a/
> >>  : FAILURE in 9s
> >> - monasca-kibana-plugin-announce-release 
> >> monasca-kibana-plugin-announce-release : SKIPPED
> >> 
> >> --- End forwarded message ---
> >> 
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][monasca] Release of openstack/monasca-kibana-plugin failed

2017-04-20 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2017-04-20 15:26:47 -0400:
> The version of monasca-kibana-plugin in package.json does not match the
> new tag, and that caused the publish job to fail. I'm available to help
> debug or to quickly release an update after the problem is fixed.

https://review.openstack.org/458629 should help us avoid this error in
the future.

> 
> Doug
> 
> --- Begin forwarded message from jenkins ---
> From: jenkins 
> To: release-job-failures 
> Date: Wed, 19 Apr 2017 10:01:01 +
> Subject: [Release-job-failures] Release of openstack/monasca-kibana-plugin 
> failed
> 
> Build failed.
> 
> - monasca-kibana-plugin-nodejs4-npm-publish-tarball 
> http://logs.openstack.org/20/206249d12cb76a103cb84a851916ce415f7d5cf8/release/monasca-kibana-plugin-nodejs4-npm-publish-tarball/4852b10/
>  : SUCCESS in 4m 48s
> - monasca-kibana-plugin-tarball-signing 
> http://logs.openstack.org/20/206249d12cb76a103cb84a851916ce415f7d5cf8/release/monasca-kibana-plugin-tarball-signing/28e6145/
>  : SUCCESS in 10s
> - monasca-kibana-plugin-npm-upload 
> http://logs.openstack.org/20/206249d12cb76a103cb84a851916ce415f7d5cf8/release/monasca-kibana-plugin-npm-upload/f3dd81a/
>  : FAILURE in 9s
> - monasca-kibana-plugin-announce-release 
> monasca-kibana-plugin-announce-release : SKIPPED
> 
> --- End forwarded message ---
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][monasca] Release of openstack/monasca-kibana-plugin failed

2017-04-20 Thread Doug Hellmann
The version of monasca-kibana-plugin in package.json does not match the
new tag, and that caused the publish job to fail. I'm available to help
debug or to quickly release an update after the problem is fixed.

Doug

--- Begin forwarded message from jenkins ---
From: jenkins 
To: release-job-failures 
Date: Wed, 19 Apr 2017 10:01:01 +
Subject: [Release-job-failures] Release of openstack/monasca-kibana-plugin 
failed

Build failed.

- monasca-kibana-plugin-nodejs4-npm-publish-tarball 
http://logs.openstack.org/20/206249d12cb76a103cb84a851916ce415f7d5cf8/release/monasca-kibana-plugin-nodejs4-npm-publish-tarball/4852b10/
 : SUCCESS in 4m 48s
- monasca-kibana-plugin-tarball-signing 
http://logs.openstack.org/20/206249d12cb76a103cb84a851916ce415f7d5cf8/release/monasca-kibana-plugin-tarball-signing/28e6145/
 : SUCCESS in 10s
- monasca-kibana-plugin-npm-upload 
http://logs.openstack.org/20/206249d12cb76a103cb84a851916ce415f7d5cf8/release/monasca-kibana-plugin-npm-upload/f3dd81a/
 : FAILURE in 9s
- monasca-kibana-plugin-announce-release monasca-kibana-plugin-announce-release 
: SKIPPED

--- End forwarded message ---

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-20 Thread Doug Hellmann
Excerpts from gordon chung's message of 2017-04-20 17:12:26 +:
> 
> On 20/04/17 01:32 AM, Joshua Harlow wrote:
> > Wasn't there also some decision made in austin (?) about how we as a
> > group stated something along the line of co-installability isn't as
> > important as it once was (and may not even be practical or what people
> > care about anymore anyway)?

I don't remember that, but I may not have been in the room at the
time.  In the past when we've discussed that idea, we've continued
to maintain that co-installability is still needed for distributors
who have packaging constraints that require it and for use cases
like single-node deployments for POCs.

> > With kolla becoming more popular (tripleo I think is using it, and ...)
> > and the containers it creates making isolated per-application
> > environments it makes me wonder what of global-requirements is still
> > valid (as a concept) and what isn't.

We still need to review dependencies for license compatibility, to
minimize redundancy, and to ensure that we're not adding things to
the list that are not being maintained upstream. Even if we stop syncing
versions, official projects need to do those reviews, and having the
global list is a way to ensure that the reviews are done.

> > I do remember the days of free for all requirements (or requirements
> > sometimes just put/stashed in devstack vs elsewhere), which I don't
> > really want to go back to; but if we finally all agree that
> > co-installability isn't what people actually do and/or care about
> > (anymore?) then maybe we can re-think some things?
> 
> agree with all of ^... but i imagine to make progress on this, we'd have 
> to change/drop devstack usage in gate and that will take forever and a 
> lifetime (is that a chick flick title?) given how embedded devstack is 
> in everything. it seems like the solution starts with devstack.
> 
> cheers,
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-20 Thread Doug Hellmann
Excerpts from Matthew Oliver's message of 2017-04-20 14:41:38 +1000:
> We have started this work. I've been working on:
> https://review.openstack.org/#/c/444718/

Wonderful! I'm sorry I didn't realize you were working on it. Thank you!

> Which will do requirement checks, as specified in the Pike PTG etherpad for
> Tuesday morning:
> https://etherpad.openstack.org/p/relmgt-stable-requirements-ptg-pike (line
> 40+).
> 
> Once done, Tony and I were going to start testing it on the experimental
> pipeline for Swift and Nova.

That sounds like a good approach. I'll subscribe to the review and
follow along.

Doug

> 
> Regards,
> Matt
> 
> On Thu, Apr 20, 2017 at 2:34 AM, Doug Hellmann 
> wrote:
> 
> > Excerpts from Clark Boylan's message of 2017-04-19 08:10:43 -0700:
> > > On Wed, Apr 19, 2017, at 05:54 AM, Julien Danjou wrote:
> > > > Hoy,
> > > >
> > > > So Gnocchi gate is all broken (agan) because it depends on "pbr"
> > and
> > > > some new release of oslo.* depends on pbr!=2.1.0.
> > > >
> > > > Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
> > > > that got in banished by requirements Gods. It does not prevent it to be
> > > > used e.g. to install the software or get version information. But it
> > > > does break anything that is not in OpenStack because well, pip installs
> > > > the latest pbr (2.1.0) and then oslo.* is unhappy about it.
> > >
> > > It actually breaks everything, including OpenStack. Shade and others are
> > > affected by this as well. The specific problem here is that PBR is a
> > > setup_requires which means it gets installed by easy_install before
> > > anything else. This means that the requirements restrictions are not
> > > applied to it (neither are the constraints). So you get latest PBR from
> > > easy_install then later when something checks the requirements
> > > (pkg_resources console script entrypoints?) they break because latest
> > > PBR isn't allowed.
> > >
> > > We need to stop pinning PBR and more generally stop pinning any
> > > setup_requires (there are a few more now since setuptools itself is
> > > starting to use that to list its deps rather than bundling them).
> > >
> > > > So I understand the culprit is probably pip installation scheme, and we
> > > > can blame him until we fix it. I'm also trying to push pbr 2.2.0 to
> > > > avoid the entire issue.
> > >
> > > Yes, a new release of PBR undoing the "pin" is the current sane step
> > > forward for fixing this particular issue. Monty also suggested that we
> > > gate global requirements changes on requiring changes not pin any
> > > setup_requires.
> > >
> > > > But for the future, could we stop updating the requirements in oslo
> > libs
> > > > for no good reason? just because some random OpenStack project hit a
> > bug
> > > > somewhere?
> > > >
> > > > For example, I've removed requirements update on tooz¹ for more than a
> > > > year now, which did not break *anything* in the meantime, proving that
> > > > this process is giving more problem than solutions. Oslo libs doing
> > that
> > > > automatic update introduce more pain for all consumers than anything
> > (at
> > > > least not in OpenStack).
> > >
> > > You are likely largely shielded by the constraints list here which is
> > > derivative of the global requirements list. Basically by using
> > > constraints you get distilled global requirements and even without being
> > > part of the requirements updates you'd be shielded from breakages when
> > > installed via something like devstack or other deployment method using
> > > constraints.
> > >
> > > > So if we care about Oslo users outside OpenStack, I beg us to stop this
> > > > crazyness. If we don't, we'll just spend time getting rid of Oslo over
> > > > the long term…
> > >
> > > I think we know from experience that just stopping (eg reverting to the
> > > situation we had before requirements and constraints) would lead to
> > > sadness. Installations would frequently be impossible due to some
> > > unresolvable error in dependency resolution. Do you have some
> > > alternative in mind? Perhaps we loosen the in project requirements and
> > > explicitly state that constraints are know

Re: [openstack-dev] [release-announce] [oslo] pbr 3.0.0

2017-04-20 Thread Doug Hellmann
We've bumped pbr to a new major release. In the past this has triggered
issues for projects capping pbr, so please check your requirements list
and remove any caps if you have them. Projects following the
global-requirements process should already be clear of caps.

Doug
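
For reference, the uncapped form is just the standard pbr setup.py pattern,
with a floor but no upper bound (the minimum version shown here is only an
example; use whatever your project actually needs):

    import setuptools

    setuptools.setup(
        # Avoid an upper bound such as pbr<3.0.0: setup_requires is resolved
        # by easy_install before constraints apply, so a cap here can break
        # installation when a newer pbr is already present.
        setup_requires=['pbr>=2.0.0'],
        pbr=True,
    )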

Excerpts from no-reply's message of 2017-04-20 16:03:13 +:
> We are pumped to announce the release of:
> 
> pbr 3.0.0: Python Build Reasonableness
> 
> The source is available from:
> 
> http://git.openstack.org/cgit/openstack-dev/pbr
> 
> Download the package from:
> 
> https://pypi.python.org/pypi/pbr
> 
> Please report issues through launchpad:
> 
> http://bugs.launchpad.net/pbr
> 
> For more details, please see below.
> 
> Changes in pbr 2.1.0..3.0.0
> ---
> 
> 1ed8531 Remove 'build_sphinx_latex'
> d4e4efd Stop building man pages by default
> 54fb6e7 docs: Use definition lists
> 84a8599 add image.nonlocal_uri to the list of warnings ignored
> b9c9630 doc: Document Sphinx integration
> 16a0a98 add changelog to published documentation
> 3cc5af1 Add Changelog build handling for invalid chars
> 
> 
> Diffstat (except docs and test files)
> -
> 
> README.rst  |   1 +
> pbr/builddoc.py |  15 +---
> pbr/git.py  |  22 ++
> pbr/hooks/commands.py   |   1 -
> pbr/packaging.py|   2 -
> 9 files changed, 194 insertions(+), 74 deletions(-)
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

